13.2 Second stage of interpretation
The second stage of the interpretation of an EDL script is the test run of the
EXPERIMENT section. A test run is necessary for two reasons. First, only a very
rudimentary syntax check has been done for the EXPERIMENT section up to this
point. Second, and much more important, the script may contain logical errors,
and it would be rather annoying if these were only found after the experiment
had already been running for several hours, forcing it to be ended prematurely.
For example, without a "dry" run it might only be detected after a long time
that the field of the magnet is requested to be set to a value the magnet can't
produce. In such a case there usually are few alternatives, if any, to aborting
the experiment. Foreseeing and taking appropriate measures against all such
possibly fatal situations would complicate the writing of both modules and EDL
scripts enormously and probably would still not catch all of them.
On the other hand, during a test run the function for setting the magnet to a
new field, for example, is called with all the values to be expected during the
real experiment, so invalid field settings can be detected already in this
"dry" run. Doing a test run is much faster than running the experiment itself
because during the test run the devices are not accessed (which usually takes
at least 90% of the total time), calls of the wait() function do not make the
program sleep for the requested time, no graphics are drawn, etc.
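To illustrate the principle, here is a minimal sketch in C (the language the
modules are written in) of how a field-setting function might behave: the range
check is performed in both modes, but the hardware is only touched during the
real experiment. The function names, field limits and the mode flag are
invented for this example and are not fsc2's actual module API.

/* Minimal sketch (not fsc2's actual API): a magnet module's field-setting
 * routine validates its argument in both modes but only talks to the
 * hardware during the real experiment. */

#include <stdio.h>
#include <stdlib.h>

enum run_mode { MODE_TEST, MODE_EXPERIMENT };

static enum run_mode mode = MODE_TEST;     /* set by the main program    */

#define FIELD_MIN     0.0                  /* hypothetical limits in G   */
#define FIELD_MAX 23000.0

static void magnet_set_field( double field )
{
    /* The range check also runs during the test run, so an impossible
       field request is caught before any hardware has been touched. */
    if ( field < FIELD_MIN || field > FIELD_MAX )
    {
        fprintf( stderr, "Field of %.1f G is not within the magnet's "
                 "range.\n", field );
        exit( EXIT_FAILURE );              /* abort the (test) run       */
    }

    if ( mode == MODE_TEST )
        return;                            /* no device access, no wait  */

    /* ... real experiment: send the command to the magnet and wait
       for the sweep to finish ... */
}

int main( void )
{
    magnet_set_field(  3480.0 );           /* fine                       */
    magnet_set_field( 50000.0 );           /* caught already in test run */
    return 0;
}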
Of course, running a complete test of an experiment can be a bit time
consuming. Thus a test run is only done the first time a new EDL script is
analyzed. fsc2 keeps a kind of database of scripts it has already tested
successfully, so it can avoid redoing tests when that isn't necessary.
(Actually, fsc2 calculates the SHA1 hash of a script after it has been run
through the fsc2_clean utility and stores in the file
`/usr/local/lib/fsc2/Digests' the SHA1 keys of all scripts that successfully
made it through the test run. Scripts listed in there won't get tested fully
again.)
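The following sketch shows the general idea of such a digest database. It is
not fsc2's actual implementation (which, in particular, first pipes the script
through fsc2_clean and uses its own file format); it merely hashes a file with
OpenSSL's SHA1 routines and looks the hex digest up in a text file containing
one digest per line.

/* Illustrative sketch of a digest database, not fsc2's real code.
 * Compile with:  cc sketch.c -lcrypto
 * (The SHA1_* functions are deprecated in OpenSSL 3 but still work.) */

#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

static int hash_file( const char *name, char hex[ 2 * SHA_DIGEST_LENGTH + 1 ] )
{
    unsigned char buf[ 4096 ], md[ SHA_DIGEST_LENGTH ];
    SHA_CTX ctx;
    size_t n;
    FILE *fp = fopen( name, "rb" );

    if ( ! fp )
        return -1;

    SHA1_Init( &ctx );
    while ( ( n = fread( buf, 1, sizeof buf, fp ) ) > 0 )
        SHA1_Update( &ctx, buf, n );
    fclose( fp );
    SHA1_Final( md, &ctx );

    for ( int i = 0; i < SHA_DIGEST_LENGTH; i++ )
        sprintf( hex + 2 * i, "%02x", md[ i ] );
    return 0;
}

/* Returns 1 if the script's digest is already listed, 0 otherwise */
static int already_tested( const char *script, const char *digest_file )
{
    char hex[ 2 * SHA_DIGEST_LENGTH + 1 ], line[ 256 ];
    FILE *fp;

    if ( hash_file( script, hex ) )
        return 0;
    if ( ! ( fp = fopen( digest_file, "r" ) ) )
        return 0;
    while ( fgets( line, sizeof line, fp ) )
        if ( ! strncmp( line, hex, 2 * SHA_DIGEST_LENGTH ) )
        {
            fclose( fp );
            return 1;
        }
    fclose( fp );
    return 0;
}

int main( void )
{
    printf( "%s\n", already_tested( "my_script.edl",
                                    "/usr/local/lib/fsc2/Digests" )
                    ? "skip test run" : "needs test run" );
    return 0;
}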
The writers of modules have an important responsibility in making the test run possible. During the test run the devices can't be accessed, yet the modules still have to deal in a reasonable way with requests for data to be returned from the devices. Thus, during the test run, the modules must "make up" data in place of the real ones. This can be a bit tricky, and special care must be taken to ensure that these "invented" data are consistent. For example, if a module for a lock-in amplifier is first asked for the sensitivity setting and then for measured data, it must not return data that represent voltages larger than the sensitivity setting it "invented". There may even be situations where the module has no chance of finding out whether the arguments passed to one of its functions are acceptable without determining the real state of the device. If possible, such incidents should be stored by the module, which should then test at the time of device initialization whether these arguments were really acceptable and, if not, stop the experiment.
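A minimal sketch of the lock-in example, again with invented names and a
hypothetical mode flag: the module remembers the sensitivity it "made up" and
never returns test-run data exceeding it.

/* Sketch only (hypothetical names): during the test run a lock-in module
 * must never return "measured" voltages larger than the sensitivity it
 * reported earlier. */

#include <stdlib.h>

enum run_mode { MODE_TEST, MODE_EXPERIMENT };
static enum run_mode mode = MODE_TEST;

static double invented_sensitivity = 0.1;    /* value "made up" for the
                                                test run (in volts)       */

double lockin_get_sensitivity( void )
{
    if ( mode == MODE_TEST )
        return invented_sensitivity;         /* remember what was claimed */

    /* ... real experiment: query the device ... */
    return 0.0;
}

double lockin_get_data( void )
{
    if ( mode == MODE_TEST )
        /* Return a random voltage, but never one outside the range given
           by the sensitivity setting "invented" above. */
        return invented_sensitivity * ( 2.0 * rand( ) / RAND_MAX - 1.0 );

    /* ... real experiment: read a value from the device ... */
    return 0.0;
}

int main( void )
{
    double sens = lockin_get_sensitivity( );
    double data = lockin_get_data( );
    return data >= -sens && data <= sens ? 0 : 1;   /* always consistent */
}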
A typical example of the latter situation are the settings for a "window" of a
digitizer, defining the part of a curve that gets returned or that is
integrated over, etc. Because during the test run neither the timebase nor the
amount of pre-trigger the digitizer is set to are known (unless both have been
set explicitly from the EDL script), it can't be tested whether the window's
start and end positions are within the time range the digitizer measures. Thus
the module can only store these settings and report to fsc2 that they seem to
be reasonable. Only when the experiment starts and the module has its first
chance of finding out the timebase and pre-trigger setting can it do the
necessary checks on the window settings, and it should abort the experiment at
the earliest possible point if necessary.
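Sketched below, once more with hypothetical names and numbers, is how a
digitizer module might merely store the window settings during the test run and
postpone the real check to the moment the experiment starts and the actual
record length and pre-trigger are known.

/* Sketch (hypothetical names): the window is only stored during the test
 * run and validated at the start of the experiment. */

#include <stdio.h>
#include <stdlib.h>

struct window { double start; double width; };      /* times in seconds  */

static struct window win;                 /* as requested in the script  */
static double rec_len = -1.0;             /* record length and           */
static double pretrig = -1.0;             /* pre-trigger: unknown until
                                             the experiment starts       */

void digitizer_define_window( double start, double width )
{
    /* During the test run just keep the values and tell fsc2 they seem
       reasonable, there is nothing yet to check them against. */
    win.start = start;
    win.width = width;
}

void digitizer_exp_init( void )
{
    /* Now the real device can be asked for its settings, so the postponed
       check is done here and the experiment is stopped at the earliest
       possible point if it fails. */
    rec_len = 1.0e-3;                     /* placeholders for the values  */
    pretrig = 1.0e-4;                     /* queried from the device      */

    if (    win.start < -pretrig
         || win.start + win.width > rec_len - pretrig )
    {
        fprintf( stderr, "Window does not fit into the recorded curve.\n" );
        exit( EXIT_FAILURE );
    }
}

int main( void )
{
    digitizer_define_window( 0.0, 5.0e-4 );   /* accepted during the test */
    digitizer_exp_init( );                    /* checked when the real
                                                 experiment starts        */
    return 0;
}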
To make things a bit easier when writing modules, hook functions can be defined within a module that get called automatically at the start of the test run and after the test run has finished successfully.
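What such hooks could look like is sketched below; the naming scheme used here
(device_test_hook() and device_end_of_test_hook()) and the return values are
assumptions made for the example, so consult the chapter on writing modules for
the exact conventions fsc2 expects.

/* Sketch of test-run hooks in a module; names and return values are
 * assumptions made for this example. */

int my_device_test_hook( void )
{
    /* Called automatically when the test run starts: a good place to
       initialize the "invented" state the module uses during the test. */
    return 1;
}

int my_device_end_of_test_hook( void )
{
    /* Called after the test run has finished successfully: e.g. free
       temporary data or note settings that still must be verified
       against the real device when the experiment starts. */
    return 1;
}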