13.4 Reading the sources
The following tries to give you an introduction to where to look when
you are searching for something in the source code of fsc2. Of
course, the program has gotten too complex to be described easily (and
with less space than required for the program itself). Thus all I can
try is to trace a path through the jungle of code, from what happens when
fsc2 is started to how an EDL script gets loaded, tested and finally
executed. This is still far from complete and work in progress at best.
Let's start with what to do when you want to debug fsc2. It's
probably obvious that when you want to run the main (parent) process of
fsc2 under a debugger you just start it within the debugger. To
keep the debugger from getting stopped each time an internally used
signal is received you should probably start by telling the debugger
to ignore the two signals SIGUSR1 and SIGUSR2. Under
gdb you do this by entering
  (gdb) handle SIGUSR1 nostop noprint
  (gdb) handle SIGUSR2 nostop noprint
Debugging the child process that runs the experiment requires the
debugger to attach to the newly created child process. To be able to do
so without the child process already starting to run the experiment while
you're still in the process of attaching to it you have to set the
environment variable FSC2_CHILD_DEBUG
, e.g.
  jens@crowley:~/Lab/fsc2> export FSC2_CHILD_DEBUG=1
When this environment variable is defined (what you set it to doesn't
matter as long as it's not an empty string) the child process will
sleep(3) for about 10 hours or until it receives a signal,
e.g. due to the debugger attaching to it. Moreover, when
FSC2_CHILD_DEBUG is set a line telling you the PID of the child
process is printed out when the child process gets started. All you
have to do is start the debugger with that PID to attach to it. Here's
an example of a typical session where I start to debug the child
process using gdb:
  jens@crowley:~/Lab/fsc2 > export FSC2_CHILD_DEBUG=1
  jens@crowley:~/Lab/fsc2 > src/fsc2 &
  [2] 28801
  jens@crowley:~/Lab/fsc2 > Child process pid = 28805
  jens@crowley:~/Lab/fsc2 > gdb src/fsc2 28805
  GNU gdb 5.0
  Copyright 2000 Free Software Foundation, Inc.
  GDB is free software, covered by the GNU General Public License, and you are
  welcome to change it and/or distribute copies of it under certain conditions.
  Type "show copying" to see the conditions.
  There is absolutely no warranty for GDB.  Type "show warranty" for details.
  This GDB was configured as "i386-suse-linux"...
  /home/jens/Lab/fsc2/28805: No such file or directory.
  Attaching to program: /home/jens/Lab/fsc2/src/fsc2, Pid 28805
  Reading symbols from /usr/X11R6/lib/libforms.so.1...done.
  Loaded symbols for /usr/X11R6/lib/libforms.so.1
  Reading symbols from /usr/X11R6/lib/libX11.so.6...done.
  Loaded symbols for /usr/X11R6/lib/libX11.so.6
  Reading symbols from /lib/libm.so.6...done.
  Loaded symbols for /lib/libm.so.6
  Reading symbols from /lib/libdl.so.2...done.
  Loaded symbols for /lib/libdl.so.2
  Reading symbols from /usr/local/lib/libgpib.so...done.
  Loaded symbols for /usr/local/lib/libgpib.so
  Reading symbols from /lib/libc.so.6...done.
  Loaded symbols for /lib/libc.so.6
  Reading symbols from /usr/X11R6/lib/libXext.so.6...done.
  Loaded symbols for /usr/X11R6/lib/libXext.so.6
  Reading symbols from /usr/X11R6/lib/libXpm.so.4...done.
  Loaded symbols for /usr/X11R6/lib/libXpm.so.4
  Reading symbols from /lib/ld-linux.so.2...done.
  Loaded symbols for /lib/ld-linux.so.2
  Reading symbols from /lib/libnss_compat.so.2...done.
  Loaded symbols for /lib/libnss_compat.so.2
  Reading symbols from /lib/libnsl.so.1...done.
  Loaded symbols for /lib/libnsl.so.1
  Reading symbols from /usr/lib/gconv/ISO8859-1.so...done.
  Loaded symbols for /usr/lib/gconv/ISO8859-1.so
  Reading symbols from /usr/local/lib/fsc2/fsc2_rsc_lr.fsc2_so...done.
  Loaded symbols for /usr/local/lib/fsc2/fsc2_rsc_lr.fsc2_so
  Reading symbols from /usr/local/lib/fsc2/User_Functions.fsc2_so...done.
  Loaded symbols for /usr/local/lib/fsc2/User_Functions.fsc2_so
  0x40698951 in __libc_nanosleep () from /lib/libc.so.6
  (gdb) handle SIGUSR1 nostop noprint
  (gdb) handle SIGUSR2 nostop noprint
  (gdb)
(There may be even more lines starting with "Reading symbols from"
and "Loaded symbols for" if your EDL script lists some
modules in the DEVICES section.) Now the child process will be
waiting at the very start of its code in the function run_child()
in the file `run.c'.
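The waiting itself is nothing fancy. A minimal sketch of what happens near the top of the child's code when FSC2_CHILD_DEBUG is set could look like the following (this is just an illustration of the mechanism described above, not the literal code from `run.c'):

  /* Sketch only: announce the PID and block until a signal (e.g. from the
     debugger attaching) interrupts the sleep */
  if ( getenv( "FSC2_CHILD_DEBUG" ) != NULL
       && *getenv( "FSC2_CHILD_DEBUG" ) != '\0' )
  {
      fprintf( stderr, "Child process pid = %d\n", ( int ) getpid( ) );
      sleep( 36000 );          /* about 10 hours, or until a signal arrives */
  }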
Please note that because fsc2 is normally running as a setuid-ed
process you must not try to debug the already installed and setuid-ed
version (that's not allowed for security reasons) but only a version
that belongs to you and for which you have unlimited execution
permissions. This might require that you temporarily relax the
permissions on the device files (for the GPIB board, the serial ports
and, possibly, cards installed in the computer and used by fsc2)
of the devices that are controlled by the EDL script you use during
debugging, so that they can be accessed by all users (or at least by you).
Don't forget to reset the permissions when you're done.
With this point out of the way I'm now going to start a tour de force
through the sources. When fsc2 is invoked it obviously starts with
the main() function in the file `fsc2.c'. After setting up
lots of global variables and checking the command line options it tries to
connect to a kind of daemon process (or starts it if it's not already
running). This daemon takes care of situations where an instance
of fsc2 crashes, releasing resources (lock files, shared
memory segments etc.) that may have been left over.
When this hurdle has been taken the graphics get initialized. All the
code for doing so is in the file `xinit.c'. You will have to read a
bit about the Xforms library to understand what's going on there.
Mostly it consists of loading a shared library for creating the forms
used by the program (there are two shared libraries,
`fsc2_rsc_lr.fsc2_so' and `fsc2_rsc_hr.fsc2_so', and which of them gets
loaded depends on the screen resolution and the command line option
-size), evaluating the settings in the `.Xdefaults' and
`.Xresources' files, again setting up lots of global variables and
doing further checks on the command line arguments.
When this part was successful some further checks of the remaining
command line options are done and, if specified on the command line, an
EDL script is loaded. Now we're nearly ready to start the main
loop of the program. But before this loop is entered another new process
is spawned that opens a socket (of type AF_UNIX, i.e. a socket
to which only processes on the same machine can connect) to listen for
incoming connections from external programs that want to send EDL
scripts to fsc2 for execution. The code for spawning this child
process and the code for the child process itself can be found in
`conn.c'.
After this stage the main loop of the program is entered. It consists of just these two lines:
  while ( fl_do_forms( ) != GUI.main_form->quit )
      /* empty */ ;
Everything else is hidden behind these two lines. What they do is
wait for new events until the Quit button gets pressed. Possible
events are clicks on the buttons in the different forms, but they don't
need to be mentioned in this loop because all buttons trigger callback
functions when clicked on. The remaining stuff in the main()
function is just cleaning up when the program quits and a few bits for
dealing with special circumstances.
When you want to understand what's really going on you will have to
start by figuring out what happens in the callback functions for the
different buttons. The simplest way to find out which callback functions
are associated with which buttons is probably to use the
fdesign program that comes with the Xforms library and
start it on one of the files `fsc2_rsc_lr.fd' or
`fsc2_rsc_hr.fd'. From within it you can display all of the forms
used by the program and find out the names of the callback functions
associated with each element of the forms.
The callback functions for the buttons of the main form are mostly in
`fsc2.c'. I will restrict myself to the most important ones: The
Load button invokes the function load_file(), which is
quite straightforward - it asks the user to select a new file, checks
if it exists and can be read and, if these tests succeed, loads the file
and displays it in the main browser.
Once a file has been read in the Test button gets activated.
When it gets clicked on the function test_file() gets invoked and
that's where things get interesting. As you will find over and over again
in the program, it starts with lots of testing and adjusting of the
buttons of the main form. (Should you wonder what lines like
  notify_conn( BUSY_SIGNAL );
and
  notify_conn( UNBUSY_SIGNAL );
are about: they tell the child process listening for external
connections that fsc2
is at the moment too busy to accept new
EDL
scripts and then that it's again prepared to load such a
script.)
The real fun starts at the line
  state = scan_main( EDL.in_file, in_file_fp );
which calls the central subroutine to parse and test the EDL
script. A good deal of the following is going to be about what's
happening there.
The function scan_main() is located in the file
`split_lexer.l'. This obviously isn't a normal C file but
a file from which the flex utility creates a C file. If
you don't know about it, flex is a tool that generates programs
that perform pattern-matching on input text, typically returning a
different value for each token, possibly with some more information
about the value of the token attached (i.e. an integer number found
in the input would be a token of type "integer" and its associated
value the number itself). That means that the program created by
flex will dissect an input text into tokens according to the
rules given in the flex input file (in this case
`split_lexer.l') and execute some action for each token found.
And that's exactly what needs to be done with an EDL script
before it can later be digested by fsc2 (with the help of
another tool, bison).
Before scan_main() starts tokenizing the input it does some
initialization of things that may be needed later on. This consists of
first setting up an internal list of built-in EDL functions and
EDL functions that might be supplied by modules; this is done
by calling functions_init() in the file `func.c'. Built-in
functions are all listed at the top of `func.c' and the list
built from it contains information about the names of the functions,
the C functions that are to be called for the EDL
functions, the number of arguments, and in which sections of the
program the functions are allowed to be called. When fsc2 is
done with its built-in functions it also appends to the list the
functions supplied by modules. These are listed in the `Functions'
file in the `config' subdirectory. To do so another flex
tokenizer, generated from the code in `func_list_lexer.l', is invoked
on this file.
After assembling the list of functions fsc2 also creates a list
of the registered modules. This is done by invoking the tokenizer
created from the file `devices_list_lexer.l' on the list of
all devices, the `Devices' file, also in the `config' subdirectory.
After this succeeded fsc2 is ready to start interpreting the
input EDL file. But there's a twist: it does not work directly
with the EDL file, but with a somewhat cleaned up version, as
has already been mentioned above. This cleaning up is done by invoking
an external utility, fsc2_clean, again a flex generated
program, created from the file `fsc2_clean.l'. This is done in the
function filter_edl() in `util.c'. The fsc2_clean
utility is started with its stdin redirected to the EDL
input file and its stdout redirected to a pipe, from which
fsc2 then reads the cleaned up version of the EDL file.
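In outline the redirection follows the usual pipe-and-fork pattern. Here is a minimal sketch of the idea (not the literal code from `util.c'; the helper name and the error handling are made up, and it assumes fsc2_clean can be found via the PATH):

  #include <stdio.h>
  #include <unistd.h>
  #include <fcntl.h>

  /* Sketch: start fsc2_clean with its stdin connected to the EDL file and
     its stdout connected to a pipe the caller can read from */
  static FILE *start_fsc2_clean( const char *edl_file )
  {
      int fd[ 2 ];

      if ( pipe( fd ) == -1 )
          return NULL;

      if ( fork( ) == 0 )                   /* child: becomes fsc2_clean */
      {
          int in = open( edl_file, O_RDONLY );

          dup2( in, STDIN_FILENO );         /* EDL script arrives on stdin */
          dup2( fd[ 1 ], STDOUT_FILENO );   /* cleaned text goes into the pipe */
          close( fd[ 0 ] );
          execlp( "fsc2_clean", "fsc2_clean", ( char * ) NULL );
          _exit( 1 );                       /* only reached if execlp() failed */
      }

      close( fd[ 1 ] );                     /* parent reads the other pipe end */
      return fdopen( fd[ 0 ], "r" );
  }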
The tokenizer (or "lexer") created from `split_lexer.l' is rather
simple in that it just reads in the EDL code until it finds the
first section keyword (and this should be the first line the lexer
gets from fsc2_clean, which already removed all comments etc.).
On finding the first section keyword control is transferred immediately
to another lexer that is specifically written for dealing with the
syntax of this section. And that's why there are so many further
files for generating flex scanners, i.e. files with names ending
in `.l': for each section there's a different tokenizer. In the
sequence the resulting lexers usually get invoked these are:
  devices_lexer.l     DEVICES section
  vars_lexer.l        VARIABLES section
  assign_lexer.l      ASSIGNMENTS section
  phases_lexer.l      PHASES section
  preps_lexer.l       PREPARATIONS section
  exp_lexer.l         EXPERIMENT section
Each of these lexers returns to the one created from `split_lexer.l' when it finds a new section label (or when an error is detected).
But these lexers don't work alone. The lexer's main job is to split up
the source into reasonably sized chunks. These are e.g. keywords, variable
and function names, numbers, arithmetic operators, parentheses,
semicolons, commas etc. But that's not enough to be able to understand
what the EDL script means. We also need a parser that tries to
make sense of the stream of tokens created by the lexer by checking
if the sequences of tokens make up syntactically correct statements,
which then get executed by calling some appropriate C code.
These parsers are created by another tool, bison, from files
with names ending in `.y'. These are
  devices_parser.y      DEVICES section
  vars_parser.y         VARIABLES section
  assign_parser.y       ASSIGNMENTS section
  phases_parser.y       PHASES section
  preps_parser.y        PREPARATIONS section
  exp_test_parser.y     EXPERIMENT section
  exp_run_parser.y      EXPERIMENT section
  condition_parser.y    EXPERIMENT section
Since the EXPERIMENT section is somewhat special there's not
just one parser for it but three; which one is going to be used depends
on the circumstances.
If you don't know yet how lexer generators like flex and lex and
parser generators like bison and yacc work and how the resulting
lexers and parsers are combined to interpret input you should start trying
to find out: fsc2 strongly relies on them and you will probably have
problems understanding much of the sources without at least some basic
knowledge about them.
In a typical EDL
script the first lexer getting involved is the
one for the DEVICES
section, generated from
`devices_lexer.l'. This immediately calls the parser, generated
from `devices_parser.y'. The lexer and parser are very simple
because all the DEVICES
section may consist of is a list of
device names, separated by semicolons. The only thing of interest is
that when the end of the DEVICES section is reached the parser invokes
the function load_all_drivers() from the file `loader.c',
which is central to the plugin-like architecture of device handling in
fsc2.
The first part of load_all_drivers() consists of loading the
libraries for the devices listed in the DEVICES section (plus
another one called `User_Functions.fsc2_so') and then trying to
find the (non-builtin) functions in the libraries that are listed in the
`Functions' file in the `config' subdirectory, which already
has been read in. This is done in the load_functions()
subroutine. Here first a library gets loaded (using dlopen(3)), and if
this succeeds, the function tries to determine the addresses of the hook
functions (see the next chapter about writing modules for what the hook
functions are good for in detail; it should suffice to say that these
are (optional) functions in the modules that get executed at certain
points during the execution of the EDL script, i.e. after the
library has been loaded, before and after the test run, before and after
the start of the experiment and, finally, just before the module gets
unloaded). Then fsc2 runs through its list of non-builtin
functions and checks if some of them can be found in the library.
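Stripped of all the bookkeeping, the loading and symbol lookup boils down to the usual dlopen()/dlsym() pattern. The following is only a sketch with made-up names (the real code lives in load_functions() in `loader.c', and the hook name used here is just a placeholder):

  #include <dlfcn.h>
  #include <stdio.h>

  typedef int ( *Hook_T )( void );

  /* Sketch: load one module and look up one of its (optional) hook functions.
     fsc2 stores the resolved pointers and calls the hooks later on. */
  static Hook_T load_one_module( const char *lib_name )
  {
      void *handle = dlopen( lib_name, RTLD_NOW );

      if ( handle == NULL )
      {
          fprintf( stderr, "Failed to load %s: %s\n", lib_name, dlerror( ) );
          return NULL;
      }

      /* A missing hook isn't an error, hook functions are optional */
      return ( Hook_T ) dlsym( handle, "some_device_init_hook" );
  }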
This last step is complicated a bit by the fact that it is
possible to load two or more modules of the same type (e.g. two
modules for lock-in amplifiers), which both will supply functions with the
same names. fsc2 recognizes this from a global variable, a string
with the device type, that each module is supposed to define. When it finds
that there are two or more devices of the same type (according to this
global variable), it will accept functions of the same name more than
once and make the names unique by appending a hash ("#") and
a number for the device. So, if there are modules for two lock-in
amplifiers listed in the DEVICES section, both supplying a function
lockin_get_data(), it will create two entries in its internal
list of non-builtin functions, one named lockin_get_data#1() and
associated with the first lock-in amplifier in the DEVICES
section and one named lockin_get_data#2() for the second
lock-in. The first one, addressing the first lock-in, can then be called
either as lockin_get_data#1() or simply without the "#1",
while a call of lockin_get_data#2() invokes the function from the
library for the second lock-in amplifier.
After all device libraries have been loaded successfully the
init_hook() functions of all modules that have such a function are
invoked, always in the same sequence as the modules were listed in the
DEVICES section. The modules can use this hook function to
initialize themselves.
After this the work of load_all_drivers() and also of the
lexer for the DEVICES section is done and control returns to the
function section_parser() in the lexer generated from
`split_lexer.l'. The last thing the lexer for the DEVICES
section did was setting a variable that tells this function which
section comes next in the EDL code. All the function now does is
transfer control to the lexer for that section.
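Schematically this dispatching is nothing more than a switch over the section type; the following sketch illustrates the idea with hypothetical names (the real section_parser() in `split_lexer.l' uses different identifiers):

  /* Sketch: hand control to the lexer/parser responsible for the section
     that was announced by the previous lexer (names are made up) */
  static void section_parser( int next_section )
  {
      switch ( next_section )
      {
          case VARIABLES_SECTION :
              vars_parser( );       /* lexer/parser pair for VARIABLES */
              break;

          case ASSIGNMENTS_SECTION :
              assign_parser( );     /* lexer/parser pair for ASSIGNMENTS */
              break;

          /* ... one case for each of the remaining sections ... */
      }
  }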
Normally, the next section will be the VARIABLES section and the
lexer and parser generated from `vars_lexer.l' and
`vars_parser.y' take over. This one is a bit more interesting
because the syntax of the VARIABLES section is more complicated
than that of the DEVICES section. But the basic principle is
the same: the lexer splits up the EDL code into tokens and feeds them
to the parser to "digest" them.
fsc2 maintains a linked list of all variables and this list is
assembled from the code in the VARIABLES section. So this may be
a good place to give an introduction to what variables look like. All
variables are structures of type Var, which is declared (and
typedef-ed to Var_T) in the file `variables.h' (you may
prefer to look it up now). It contains a string pointer for the variable
name, a member for the type of the variable, and a union for the value of
the variable (since there are several types of variables they can have
values of quite a range of types). Further, there are some data to keep
track of array variables (1- or multi-dimensional) and a member for
certain flags. Finally, there are pointers that allow the variable structure
to be inserted into a (doubly) linked list.
Before going into more details here's a list of the possible variable types:
  UNDEF_VAR   STR_VAR     INT_VAR     FLOAT_VAR
  INT_ARR     FLOAT_ARR   INT_REF     FLOAT_REF
  INT_PTR     FLOAT_PTR   REF_PTR     FUNC
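To make the following descriptions easier to follow, here is a heavily abridged sketch of what the structure contains. It only lists the members mentioned in this chapter and may differ from the real declaration in `variables.h' in names and details:

  /* Abridged sketch of the variable structure - see variables.h for the
     real, complete declaration */
  typedef struct Var Var_T;

  struct Var {
      char *name;            /* name of the variable (none for stack variables) */
      int   type;            /* one of the types listed above */

      union {
          long     lval;     /* value of an INT_VAR */
          double   dval;     /* value of a FLOAT_VAR */
          long    *lpnt;     /* data of an INT_ARR */
          double  *dpnt;     /* data of a FLOAT_ARR */
          Var_T  **vptr;     /* sub-arrays of an INT_REF or FLOAT_REF */
          Var_T  * ( *fcnt )( Var_T * );   /* C function of a FUNC variable */
      } val;

      int    dim;            /* dimension of an array variable */
      long   len;            /* (current) length of an array */
      int    flags;          /* e.g. IS_DYNAMIC */
      Var_T *from;           /* what a REF_PTR variable refers to */
      Var_T *next;           /* links for the doubly linked lists */
      Var_T *prev;
  };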
Each variable begins its life with type UNDEF_VAR. But usually it
gets promoted to something more useful shortly afterwards, so
you will find this type only in rare cases (it's sometimes used for
temporary variables, which we're going to discuss a bit later). A STR_VAR
is a variable holding a string, and variables of this type are also only
found among the temporary variables. What an INT_VAR and FLOAT_VAR
are will probably be quite obvious: these types of variables hold a
single (long) integer or floating point (double) value, which is stored
in the lval or dval member of the val union of the
Var structure.
Variables of type INT_ARR
and FLOAT_ARR
are for holding
one-dimensional arrays of integer and floating point values. For
variables of these types the len
field of the Var
structure will contain the (current) length of the array and the
members lpnt
or dpnt
of the val
union are pointers
to an array with the data.
Variables of type INT_REF and FLOAT_REF are for
multidimensional arrays. These are a bit different because they don't
store any elements of the array directly but instead pointers to lower
dimensional arrays. These might again be multidimensional array variables
(but with one dimension less) or INT_ARR or FLOAT_ARR
variables, which then contain the data of a one-dimensional array. If
you have been programming in e.g. Perl this concept will probably not
be new to you - there you also have only one-dimensional arrays, but their
elements can in turn be references to other arrays. Otherwise, to make
clearer what I mean, let's assume that you define a 3-dimensional array
called A in the VARIABLES section:
  A[ 4, 2, 7 ];
This will result in the creation of 13 variables (1 + 4 + 4x2, as we will
see in a moment). The top-most one (and
only that one can be accessed directly from the EDL script
because it's the only one having a name) is of type INT_REF and
contains an array of 4 pointers to 2x7-dimensional arrays, stored in the
vptr member of the val union of the Var structure.
Its dim member is set to 3 since it's a 3-dimensional variable
and the len member gets set to 4 because the val.vptr
field is an array of 4 Var pointers. Each of the 4 Var
pointers stored in the val.vptr field points to a different
variable, each of which is again of type INT_REF. But these
variables pointed to will have a dimension of 2 only, so their dim
member is set to 2, and since each is of dimension [2, 7],
their len members are set to 2. And each of these lower-dimensional
variables in turn has a val.vptr array consisting of 2
pointers pointing to one-dimensional arrays, this time of type
INT_ARR. These INT_ARR variables, two levels below the
original variable named A, will each contain an array of 7 integer
values, pointed to by val.lpnt.
When you count the variables actually created according to the scheme
above you will find that there are 13: one for the variable named A
itself, which in turn points to 4 newly created variables, each of which
again points to 2 further variables (which finally contain all the data
as one-dimensional arrays).
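Assuming the abridged structure sketched earlier, fetching an element like A[ i, j, k ] then amounts to following two levels of vptr pointers before reaching the data; again this is only an illustration, not code from fsc2:

  /* Sketch: get A[ i, j, k ] by walking down the levels of INT_REF
     variables ('a' is the Var_T of the variable named A) */
  static long get_element( Var_T *a, long i, long j, long k )
  {
      Var_T *level2 = a->val.vptr[ i ];        /* 2-dimensional INT_REF */
      Var_T *level1 = level2->val.vptr[ j ];   /* INT_ARR holding the data */

      return level1->val.lpnt[ k ];
  }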
The remaining variable types INT_PTR
, FLOAT_PTR
,
REF_PTR
and FUNC
are again only used with temporary
variables and will be discussed later.
The variables declared in the VARIABLES section are all elements
of a doubly linked list. The pointer to the top element is a member of
the global EDL variable. This is a structure of type EDL_Stuff,
declared in `fsc2.h' and containing data relevant for the EDL
script currently under execution. To find the first element of the list of
variables see the EDL.Var_List member. Directly beneath it you will
find that there's also a second member named EDL.Var_Stack. This
is also a doubly linked list of variables, but in contrast this
list is for temporary variables that get created and deleted all the
time during the interpretation of an EDL script and is in the
following often referred to as the "stack". On this list also the types
of variables that were only mentioned en passant above can be found,
which I will briefly summarize here.
A variable of type STR_VAR gets created whenever a string is found
in the text of the EDL script or when an EDL function
returns a string. Since strings are always used shortly after their
creation (always within the statement they appear in) they are all
temporary variables. Variables of type INT_PTR and
FLOAT_PTR are variables in which the val.lpnt and
val.dpnt members point to arrays belonging to some other
variable, but never to the variable itself. Variables of type
REF_PTR are variables in which the from member (which
hasn't been mentioned yet) points to the variable they refer to.
Finally, variables of type FUNC have the val.fcnt member
pointing to the address of one of the C functions that get called
for EDL functions.
After this detour about variables let's go back to what happens in
the VARIABLES section. In the simplest case the
VARIABLES section isn't much more than a list of variable names,
which need to be created. When the lexer finds something that looks
like a variable name (i.e. a word starting with a letter, followed by
more letters, digits or underscore characters), it will first check if a
variable of this name already exists by calling the function
vars_get() from `variables.h' with the name it found. It
either receives a pointer to the variable or NULL if the variable
does not exist yet. In the latter case it will create a new variable by
calling vars_new() (which returns a pointer to the new
variable). It then passes the variable's address to the parser. Assuming
the variable has been newly created it will still be of type
UNDEF_VAR and it's not clear yet if it's a simple variable or
going to be an array. Thus the parser asks the lexer for the next
token. If this is a comma or a semicolon it can conclude that the
variable is a simple variable and can set its type to either
INT_VAR or FLOAT_VAR (depending on its name starting with
a lower or upper case character) and is done with it. But if the next
token is a "[" the parser knows that this is going to be an
array and must ask the lexer for more tokens, which should be a list of
numbers, separated by commas and ending in a "]" (but there are
even more complicated cases). When all these have been read in the
parser calls some C code that sets up the new array according to
the list of sizes the parser received. More complicated cases may
include that instead of a number an asterisk ("*") is found, in
which case the array has to be initialized in a way that indicates that
the array hasn't been fully specified yet (this is done by setting the
len field of the variable structure for the array to 0 and
setting the IS_DYNAMIC flag in the flags member).
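The lookup-or-create pattern used by the lexer is thus simply the following (a sketch; the two functions exist as described above, but the wrapper shown here is made up):

  /* Sketch: return the variable for a name the lexer just read, creating
     a new one (still of type UNDEF_VAR) if it doesn't exist yet */
  static Var_T *lookup_or_create( const char *name )
  {
      Var_T *v = vars_get( name );    /* NULL if no such variable exists */

      if ( v == NULL )
          v = vars_new( name );       /* fresh variable of type UNDEF_VAR */

      return v;                       /* this pointer gets handed to the parser */
  }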
Other complications may include that a size of an array isn't given as a
number but as an arithmetic expression, possibly involving already
defined (and initialized) variables, arithmetic operators or even
function calls. In the hope of not boring you to death by getting too
detailed I want to briefly describe how the parser evaluates such an
expression, because it's done in more or less the same way all over the
program and not restricted to the VARIABLES section. Let's discuss
things using the following example
  abs( R + 5 * ( 2 - 7 ) )
Here the lexer will first extract the "abs" token. Now I have to
admit to a white lie I told above: I said that for tokens like this the
lexer first checks if it's an already existing variable. Actually it
first checks if it's an EDL function name, and only if it
isn't does it check if it's a variable. And here it will find that
abs is a function by calling the function func_get() in
`func.c'. This function will return the address of a new temporary
variable on the stack (pointed to by EDL.Var_Stack) of type
FUNC, with the val.fcnt member holding the address of the function
to be executed for the abs() EDL function (which is
f_abs() in `func_basic.c'). The lexer now passes the address
of the variable on to the parser.
The parser knows that functions always have to be followed by an opening
parenthesis and thus will ask the lexer for the next token. If this
isn't a "(
" the parser will give up, complaining about a syntax
error. Otherwise the parser has to look out for the function
argument(s), asking the lexer for more tokens. The next one it gets is a
pointer to the (hopefully already defined and initialized) variable
"R
". But it doesn't know yet if this is already the end of the
(first) argument, so it requests another token, which is the "+
".
From this the parser concludes that it obviously hasn't seen the end of
it yet and gets itself another token, the "5
". A stupid parser
might now add the 5
to the value of R
, but since the
parser knows the precedence of operators it has to defer this operation
at least until it has seen the next token. If the next token were a
comma (indicating that a new function argument starts) or a closing
parenthesis it would now do the addition. But since the next token is a
"*" it has to wait and first evaluate the "(2 - 7)"
part and multiply the result by 5 before again checking if it's
prudent to add the result to the value of R. Since the next token
the parser receives from the lexer is the ")", indicating the end
of the function arguments, it can go on, adding the result of
"5 * (2 - 7)" to the value of R
. In this process
the temporary variable holding the pointer to the variable R
gets
popped from the stack and a new variable with the result of the
operation is pushed onto the stack (i.e. is added to the end of the
linked list of variables making up the stack). Now the stack still
contains two variables, the variable pointing to the f_abs()
function and the variable with the function argument. And since the
parser has seen from the ")
" that no more arguments are to be
expected for the function it will invoke the f_abs()
function
with a pointer to the variable with the function argument.
If you cared to look it up you would have found that the f_abs()
function is declared as
  Var *f_abs( Var *v );
This is typical for all functions that are invoked on behalf of
EDL
functions: they always expect a single argument, a pointer to
a Var
structure and always return a pointer to such a structure.
The pointer these functions receive is always pointing to the first
argument of the function. If the function requires more than one
argument it has to look for the next
member of the variable,
and if this isn't NULL
it points to the next argument. Of course,
this can be repeated until in the last argument the next field
is NULL. The function then has to check if the types of the
variables are what is required (it e.g. wouldn't make sense for the
f_abs() function if the argument were a variable of type
STR_VAR) and if there are enough arguments (at least if the
function allows a variable number of arguments; if the function is
declared to accept only a fixed number of arguments these cases will be
dealt with before the function is ever called, see below).
The function now has to do its work and, when it's done, creates
another temporary variable on the stack with the result (this is done
by a call of the function vars_push() in `variables.c').
In the process it may remove the function arguments from the stack
(using vars_pop(), also in `variables.c'); if it doesn't do
so it will be done automatically when the function returns. Note that
there's a restriction in that a function can never return more than a
pointer to a single variable, i.e. the variable pointed to must have
its next member set to NULL, being the last variable on
the stack. A function may also choose to return NULL, but it's
good practice to always return a value; if there isn't really anything
to be returned, i.e. the function only gets invoked for its side
effects, it should simply return an integer variable with a value
of 1 to indicate that it succeeded.
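Put together, a function following these conventions looks roughly like the sketch below. It is modeled on the description above rather than copied from `func_basic.c', and the exact argument list of vars_push() as well as the error handling details are assumptions here (print() and the THROW() macro are discussed further down):

  /* Sketch of an EDL function: check the argument handed in via 'v',
     then push a new variable with the result onto the stack */
  Var *f_abs_sketch( Var *v )
  {
      if ( v == NULL )
      {
          print( FATAL, "Missing argument.\n" );
          THROW( EXCEPTION );
      }

      if ( v->type == INT_VAR )
          return vars_push( INT_VAR, labs( v->val.lval ) );

      if ( v->type == FLOAT_VAR )
          return vars_push( FLOAT_VAR, fabs( v->val.dval ) );

      print( FATAL, "Argument isn't a number.\n" );
      THROW( EXCEPTION );
      return NULL;                    /* never reached */
  }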
Again I have to admit that I wasn't completely honest when I wrote above
that "the parser invokes the f_abs()
function". The parser does
not call the f_abs()
function directly, but instead calls func_call() in `func.c' with a pointer to the
variable of type FUNC
pointing to the f_abs()
function
(please remember that the function argument(s) are coming directly after
this variable on the stack). Before func_call()
really calls
f_abs()
it will first do several checks. The first one is to see
if the variable it got is really pointing to a function. Then it checks
how many arguments there are and compares it to the number of arguments
the function to be called is prepared to accept. If there are too many
it will strip off the superfluous ones (and print out a warning); if there
aren't enough it will print out an error message and stop the
interpretation of the EDL
script. If these tests show that the
function can be called without problems func_call()
still has to
create an entry on another stack (the "call stack") that keeps track of
situations where during the execution of a function another function is
called etc., which is e.g. needed for emitting reasonable error
messages. Only then f_abs()
is called. When f_abs()
returns, func_call()
first pops the last element from the
"call stack", automatically removes what's left of the function
arguments and the variable with the pointer to the f_abs()
function (always checking that the called function hasn't messed up
the stack in unrecoverable ways) before it returns the pointer with the
result of the call of f_abs()
to the parser.
Now the parser at last knows the result of
  abs( R + 5 * ( 2 - 7 ) )
and can use it e.g. as the length of a new array.
Of course, besides getting defined new variables can also be
initialized in the VARIABLES
section. The values used in the
initialization can, of course, also be the results of complicated
expressions. But these will be treated in exactly the same way as
already described above, the only new thing is the assignment part. The
parser knows that a variable is getting initialized when it sees the
"=
" operator after the definition of a variable. It then parses
and interprets the right hand side of the equation and finally assigns
the result to the newly defined variable on the left hand side. To do so
it calls the function vars_assign()
from `variables.c'
(if it's an initialization of an array also some other functions get
involved in the process).
The creation and initialization of one- and multi-dimensional arrays makes up a good deal of the code in `variables.c'. Unfortunately, so many things have to be taken care of that it can be quite a bit of work to understand what's going on, and I have to admit that it usually also takes me some time to figure out what (and why) I have written there, so don't worry in case you have problems understanding everything at first glance...
But now let's get back to the main theme, i.e. what happens during the
interpretation of an EDL
script. I guess most of what can be said
about the VARIABLES
section has been said and we can assume that
we reached the end of this section. The lexer generated from
`vars_lexer.l' will then return to section_parser()
in the
lexer created from `split_lexer.l' with a number indicating the
type of the next section.
If the EDL
script is for an experiment where pulses are used
chances are high that the next sections will be the ASSIGNMENTS
and PHASES sections. But I don't want to go into the details of the
handling of these sections. In principle, things work exactly like in the
interpretation of the VARIABLES
section, i.e. there's again a
lexer for each section (generated from `assign_lexer.l' and
`phases_lexer.l') and a parser (generated from `assign_parser.y'
and phases_parser.y
), which work together to digest the EDL
code. The interesting thing happening here is the interaction with the
module for the pulser, but this is in large parts already covered by the
second half of the next chapter about writing modules.
The next section is usually the PREPARATIONS
section. And again
nothing much different is going on here from what we already found in
the VARIABLES
section: the lexer and parser generated from
`preps_lexer.l' and `preps_parser.y' play their usual game,
one asking the other for tokens and then trying to make sense of them,
analyzing the sequence of tokens and executing the appropriate actions.
The only difference is that the syntax is a bit different from the one
of the VARIABLES
section, otherwise the same lexer and parser
could be used.
Where things again get interesting is with the start of the
EXPERIMENT
section. Here fsc2
does not immediately
interpret the EDL
code as it has been doing in all the other
sections up until now. You may already notice this from the files you find:
while there exists a file `exp_lexer.l' there are two parsers,
`exp_test_parser.y' and `exp_run_parser.y'. And at first
these parsers don't even get used. Instead, only the lexer is used to
split the EXPERIMENT
section into tokens and functions from
`exp.c' store the tokens in an array of structures (of type
Prg_Token
, see `fsc2.h').
There are several reasons for storing the tokens instead of executing
statements immediately. But the main point is that the EXPERIMENT
section isn't interpreted only once but at least twice (or even more
often if the same experiment is run repeatedly), and parts of the
EXPERIMENT
section may even be repeated hundreds or thousands of
times (the loops in the EXPERIMENT
section). Now, as I already
mentioned above, fsc2
isn't interpreting the EDL
script
itself, but a "predigested" version that has been run through the
fsc2_clean
utility.
Of course, the question not answered yet is why it's done this way. And
the answer is simplicity and robustness (and, of course, my laziness).
An EDL script can contain comments, may include further
EDL scripts via the #INCLUDE directive etc. If this
weren't dealt with by the fsc2_clean utility each and every
section lexer would have to contain code for removing comments and for
dealing with inclusion of other EDL
scripts (which isn't
trivial), making the whole design extremely complicated and thus
error-prone. By moving all of these tasks into a single external utility
a lot of potential problems simply disappear.
But one has to pay a price. And this is that we can't simply jump back
in a file to a certain statement in the EDL
script (because
there's no file to move around in, but just a stream of data that gets
read from an external utility). On the other hand, when you have to
repeatedly interpret parts of the script you have to jump back to be
able to interpret the same code over and over again. One solution would
be to store the "predigested" EDL code the program received from the
fsc2_clean utility in memory and then make the lexer split it
into tokens again and again when needed. But this would be a waste of CPU
time when you can store the tokens of the code instead, which then
can be fed to the parser again and again without the need for a tokenizer.
So, why repeat some or all code of the EXPERIMENT section at
all? First of all, before the experiment is run the code should be
checked carefully. It's much better to find potential problems at an
early stage instead of having an experiment stop after it has run for
a long time just because of an easy-to-correct error which could have
been detected much earlier. (Just imagine how happy you would be if you
had run an experiment for 24 hours on a difficult to prepare sample,
already seeing from the display that it's going to become an important
part of your PhD thesis, but then the program suddenly stops before it
finally stores the data to a file because in the code for storing the
data there's a syntax error...)
Thus, each EDL
script needs to be checked. And to do so, it must
have been read in completely before the experiment is started. And
another point is that an experiment may have to be repeated. Of course,
the whole EDL
script could be read in again when an experiment is
restarted. But this would also require testing it again (which might
take quite a bit of time), so it's faster to work with the already
tested code.
And that's why the tokens of the EXPERIMENT section are stored
in memory in an array. This is done in the function
store_exp() in `exp.c' (which is called from the C
code in `exp_lexer.l'). The function repeatedly calls the lexer
generated from `exp_lexer.l' for new tokens, storing each one in
a new structure, until the end of the EDL code is reached.
The array of structures is pointed to by EDL.prg_token. While
it does so it already runs some simple checks, e.g. for unbalanced
parentheses and braces.
When all tokens have been stored the function calls loop_setup().
This function initializes loops and IF-ELSE constructs.
Take as an example a FOR loop. To later be able to find out where
the body of the loop starts, a pointer to the first
token of the loop body is set in the structure for the FOR
token. And since, at the end of the FOR loop, control needs to be
transferred to the first statement after the loop body, a pointer
to the first token after the loop is also set. And for a
keyword like NEXT a pointer to the start of the loop it belongs
to needs to be set. Finding the start and the end of loops is simply
done by counting levels of curly braces, '{' and '}'
(that's why the statements of loops and also of IF constructs must
be enclosed in curly braces, even if there's only a single statement).
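The brace counting itself is straightforward; a sketch of finding the token that closes a block could look like this (the member name used for the token type is hypothetical, the real Prg_Token structure is declared in `fsc2.h'):

  /* Sketch: starting at the token holding the opening '{', return the
     token holding the matching closing '}' by counting nesting levels */
  static Prg_Token *find_block_end( Prg_Token *cur )
  {
      int level = 0;

      do
      {
          if ( cur->token == '{' )
              level++;
          else if ( cur->token == '}' )
              level--;
          cur++;
      } while ( level > 0 );

      return cur - 1;
  }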
With this task out of the way the first real kind of test can be done. This
is still not what in the rest of the manual is called the "test run" but
just a syntax check of the EXPERIMENT section. For this purpose
there exists a parser, generated from `exp_test_parser.y', that does
not execute any actions associated with the statements of the
EXPERIMENT section. It is only run to test if all statements are
syntactically correct. The parser itself needs something that feeds
it the tokens. Since there's now no lexer (all tokens have already been
read in and stored in the array of tokens), the function
exp_testlex() in `exp.c' plays the role the lexers had in
the other sections: each time it's called it passes the next token from
the array back to the parser, until it hits the end of the token array.
Only when this syntax check succeeds is the real test run started.
From the C
code in `exp_lexer.l' the function
exp_test_run()
, again from `exp.c', is called. But before
the test run can really start a bit of work has to be done. Some of the
variables in the EDL
script may have already been set during the
sections before the EXPERIMENT
section and when the real
experiment gets started, they must be in the same state as they were
before the test run was started. But since they will usually be
changed during the test run all EDL
variables (i.e. all
variables from the variable list pointed to by EDL.Var_List
) must
be saved, which is done in the function vars_save_restore()
in
`variables.c'.
Then in all modules a hook function has to be called (at least if the
module defines such a function). This gives the modules e.g. a chance
to also save the states of their internal variables. All of the test
hook functions are called from the function run_test_hooks()
from `loader.c'.
With these preparations successfully out of the way the test run can finally
start. To tell all parts of the program that get involved in the test
run that this is still the test run and not a real experiment the member
mode of the global structure Internals is set to a value
of TEST. When you look through the code of the C functions
called for EDL functions, both in fsc2 itself and in the
modules, you will find that Internals.mode is tested again and
again, either directly or via the macro FSC2_MODE (see
`fsc2_module.h' for its definition). They do so because some things
can or should only be done during the real experiment. E.g. all
modules must refrain from accessing the devices they are written to
control because at this stage they aren't initialized yet. And also
other functions like the ones for graphics aren't supposed to really
draw anything to the screen yet. So everything these functions are
supposed to do during the test run is to check if the arguments they
receive are reasonable and then return some equally reasonable values.
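A typical module function therefore contains a branch along the following lines (a schematic sketch, not taken from any particular module; the dummy value and the helper for talking to the device are made up):

  /* Sketch: during the test run hand back a plausible dummy value, talk
     to the real device only during the experiment */
  Var *lockin_get_data( Var *v )
  {
      /* ... argument checks via 'v' left out ... */

      if ( FSC2_MODE == TEST )
          return vars_push( FLOAT_VAR, 0.0 );     /* dummy value */

      return vars_push( FLOAT_VAR, query_the_device( ) );
  }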
Instead of the parser for the mere syntax check now the "real" parser,
generated from `exp_run_parser.y' gets involved. It's the real
thing, executing the code associated with the EDL
statements
instead of just testing syntactical correctness. And it also needs some
instance feeding it tokens. This is now the function
deal_with_tokens_in_test()
. When you compare it to
exp_testlex()
that was used during the syntax check, you will
find that it's a bit more complicated, resulting from the necessity to
execute flow control statements, which did not have to be done during the
syntax check and which the parser does not take care of.
So the function deal_with_tokens_in_test()
calls the parser
whenever a non-flow-control token is the current token. The parser
itself calls exp_runlex()
whenever it needs another token.
exp_runlex()
stops the parser when it hits a flow control token
by returning 0 (which a parser interprets as end of file). This brings
us back into deal_with_tokens_in_test()
which now does what's
required for flow control. This especially includes checking the
conditions of loops and IF
-ELSE
constructs. For testing
conditions the function test_condition()
is called, which invokes
a special parser generated from `condition_parser.y', made for
this purpose only, which requests new tokens by calling the function
conditionlex(), also to be found in `exp.c'. When the
condition has been checked test_condition()
returns a value
indicating that the condition is either satisfied or not and the code
in deal_with_tokens_in_test()
can decide from the return value
how to proceed. This ensures that all loops are repeated as often as
they should and that in IF
-ELSE
constructs the correct
path through the EDL
code is taken.
All the above will be done until we either reach the end of the array of
tokens or one of the EDL
functions called in the process signals
an unrecoverable error. I have spent so much space explaining all
this because the way the code in the EXPERIMENT
section is
executed during the experiment is basically identical to the way it is
done during the test run.
There are only a few things left that might be of interest when you try
to understand what's happening in `exp.c' and the related parsers.
First of all, there's one token that does not get stored in the array of
tokens. This is the ON_STOP:
label. When during the experiment
the Stop
button gets pressed by the user flow of control is
passed as soon as possible to the code directly following the label. As
a label, it isn't something that can be executed, so it isn't included
in the array of tokens. Instead, in the global variable
EDL.On_Stop_Pos
the position of the first token following the
ON_STOP:
label is stored, so that the parts of the program taking
care of flow control can calculate easily where to jump to when the user
hits the Stop
button.
The second point I have only mentioned en passant is error
handling. You will perhaps have already noticed that there seems to be
only a rather limited amount of error checking, but that on the other
hand there are some strange constructs in the C
code with
keywords like TRY
, TRY_SUCCESS
, CATCH()
,
OTHERWISE
or THROW()
. If you have some experience with
C++
some of the keywords will probably ring a bell, but for
C
programmers they look rather strange.
In C errors are usually handled by passing back a return value
from functions indicating either success or failure (and possibly also
the kind of problem). This requires that for most function calls it must
be tested if the function succeeded, and if not the function that called
the lower level function must either try to deal with the error or, when
it isn't able to do so, must escalate the problem by itself returning a
value that indicates the type of problem it ran into. This requires
lots of discipline by the programmer because she has to explicitly
write error checking code over and over again, and it also makes the
source often quite hard to read since what really gets done in a function
becomes drowned in error checking code. And when an error happens in a
very low level function that's hard to deal with there, it may happen that
control has to be transferred to a function several levels above that
finally takes care of the problem, which can make it hard to figure out
what will actually happen on such errors.
C++ has a concept of error handling which is very elegant when
compared to how it's usually done in C: the notion of
exceptions, which usually are seen as error conditions (but could also
be used for other unusual conditions). A function can declare itself
responsible for a certain type of exception by executing some block of
code, where this exception might be triggered, within a try-block
and, after the end of the try-block, catching the exception. That
means that when the exception happens (gets thrown) control is
transferred immediately to the code in the block following the
catch, without functions on the intermediate levels having to get
involved. So throwing an exception results in principle in a non-local
jump from the place where the exception got thrown to the code in the
catch block. The idea of non-local jumps is a bit alien to most
C programmers because, when one uses the infamous goto at
all, it can only be used to jump within the code of a function (and even
then only with some restrictions). But there's a pair of (rarely used)
"functions" in C that allow such non-local jumps, setjmp()
and longjmp(). And these functions can be used to cobble together
some poor man's equivalent of the try, throw and catch
functionality of C++ with a set of macros and functions. Of
course, it's not as polished as its big brother from C++, it's
more difficult to use, more error-prone and also much more restricted,
but it still can make life a bit simpler when compared to the usual way
error handling is done in C.
I don't want to go into details on how exactly it works, you will find
the code for it in `exceptions.h' and `exceptions.c' and I
also will refrain from telling you here how it is used because that's
already documented in the chapter on writing device modules. I just want
to give an example of how it's used in fsc2. When you again look up
the function section_parser() in the primary lexer,
`split_lexer.l', you will find that the whole code in this function
is enclosed in a block starting with the macro TRY
, thus making
it the final place where every problem not handled by lower level
functions will end up. Now one type of error checking you will find all
over the place in (well-written) C
code is checking the return
value of functions for memory allocation. But this isn't done in
fsc2
(except when the function doing the allocation is willing to
deal with problems). Throughout the whole program instead of e.g.
malloc()
the function T_malloc()
is used. And this
function, which is just a wrapper around the malloc()
call, throws an OUT_OF_MEMORY_EXCEPTION
if its call of malloc()
fails. Unless the function calling T_malloc()
catches the
exception it gets escalated to the section_parser()
function,
thereby effectively stopping further interpretation of the EDL
script. The same happens for other types of exceptions, for example most
of the functions associated with EDL
functions (both the built-in
functions and the functions in modules) usually print an error message
and then throw an EXCEPTION
to indicate that they got a problem
that requires a premature end of the interpretation of the EDL
script.
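Put together, the pattern described in the last two paragraphs looks schematically like the snippet below. Only the macro and function names are real; exactly how the macros have to be combined is documented in the chapter on writing modules, so treat this merely as a sketch:

  /* Schematic use of the exception macros from exceptions.h */
  TRY
  {
      char *buf = T_malloc( 1024 );   /* may throw OUT_OF_MEMORY_EXCEPTION */

      do_something_risky( buf );      /* made-up function, may THROW( EXCEPTION ) */
      TRY_SUCCESS;
  }
  CATCH( OUT_OF_MEMORY_EXCEPTION )
  {
      print( FATAL, "Running out of memory.\n" );
      THROW( EXCEPTION );             /* escalate to the next enclosing TRY block */
  }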
But stopping the interpretation of the script isn't always necessary,
sometimes there are only potential problems the user should be made
aware of, or things that are rather likely to be errors but may only
deserve a warning. In these cases the function that should be
used to print out warnings and error messages, print()
from
`util.c', will help keeping track of the number of times this
happened. print()
is, if seen from the user perspective, more
or less like the standard printf()
function, only with an
additional argument, preceding the arguments one would pass to
printf()
. This additional argument is an integer indicating
the severity of the problem and can be either NO_ERROR
,
WARN
, SEVERE
or FATAL
(with the obvious
meanings). print()
will now do a few additional things: it will
first increment a counter for the different types of warnings (these
counters are in EDL.compilation
) and then prepend the message
to be printed out with information about the name and the line number
in the EDL
script that led to the problem and also, if
appropriate, the EDL
function the problem was detected in.
Finally it writes the message into the error browser in fsc2's main form.
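As an illustration, a typical use in an EDL function looks about like this (a schematic fragment; the message texts and the limit variable are made up):

  /* Schematic use of print(): the severity comes first, the rest is used
     like the arguments of printf() */
  if ( v->val.dval < 0.0 )
  {
      print( FATAL, "Invalid negative argument.\n" );
      THROW( EXCEPTION );             /* unrecoverable, stop the interpretation */
  }

  if ( v->val.dval > max_allowed )
      print( SEVERE, "Value too large, using maximum of %f instead.\n",
             max_allowed );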
After all this talk about error handling let's get back to the bright
side of life: perhaps without you noticing we have nearly reached the
end of the test run. All that remains to be done under normal conditions
is to restore the values of all of the EDL
variables in
EDL.Var_List
(which, as you will remember, got stashed away
before the test run got started) and call hook functions via
run_end_of_test_hooks()
in `loader.c' for all modules that
contain a function to be run at the end of a test run. Afterwards
control will be transferred back to the main()
function in
`fsc2.c', which will now wait for the user to start the experiment
(of course unless the user initiated the test run by pushing the
Start
button, in which case the experiment will be started
immediately).
When the experiment is to be started the function run_file()
in
`fsc2.c' is invoked. Its main purpose is to ask the user if she is
serious about starting the experiment in case the test run
found some things that required a warning or even a
severe warning. If there weren't any, or the user doesn't care about the
warnings, the function run() in `run.c' is called, which
is where the interesting stuff happens.
The first thing the function has to do is to test the value of the
global variable EDL.prg_length
, which normally holds the number
of tokens stored during the analysis of the EXPERIMENT
section in
the array of tokens, EDL.prg_token
. But when it's set to a
negative value there's no EXPERIMENT
section at all in the
EDL
script, so no experiment can be done. The next step is to
initialize the GPIB
bus, at least if one or more of the modules
indicated that the devices they are controlling are accessed via this
interface. Afterwards we again have to check the value of
EDL.prg_length
. If it is 0 this means that there was an
EXPERIMENT
section label but no code following it. This is
usually used when people want to get the devices into the state they
would be at the start of the experiment (e.g. for setting a certain
pulse pattern in a pulser or going to a certain field position), but
don't want to run a "real" experiment yet. So we should honor this
request and call no_prog_to_run()
in this case.
In no_prog_to_run()
the hook functions in the modules to be
executed at the start of an experiment (via run_exp_hooks()
in
`loader.c') are called. These are responsible for bringing the
devices into their initial states. Then we're already nearly done and
call via run_end_of_exp_hooks()
another set of hook functions,
the ones that are to be executed at the end of an experiment. Now the
GPIB
bus can be released and device files for serial ports that
got opened are closed in case the module that opened them should have
forgotten to do so. And that's already the end of this minimal kind of
experiment.
For a real experiment more exciting things happen, started by calling
init_devs_and_graphics()
. Of course, also here the hook functions
to be run at the start of an experiment in all modules get called. Then
the new window for displaying the results of the experiment is created,
involving the initialization of all kinds of variables for the graphics.
This is done by a call of the function start_graphics()
, which
you will find in `graphics.c'.
Then we must prepare for the program splitting itself into two separate
processes, one for running the experiment and one for dealing with the
interaction with the user. This requires setting up channels of
communication between the two processes by calling setup_comm()
from `comm.c'. Since the communication between the processes is
quite important I would like to spend some time on this topic.
All communication between parent and child is controlled via a shared
memory segment. It is a structure of type MESSAGE_QUEUE
, declared
in `comm.h'. This structure consists of an array of structures of
type SLOT
and two marker variables, low
and high
.
When the child needs to send data to the parent (there's no sending of
data from the parent to the child that isn't initiated by the child)
it sets the type
field of the SLOT
structure indexed by
the high
marker (which it increments when the message has been
assembled) to the values DATA_1D
or DATA_2D
. It then
creates a new shared memory segment for the data and then puts the key
of this shared memory segment into the shm_id
field of the
slot. Whenever it has time the parent checks the values of the
high
and the low
marker and, if they are different, deals
with the new data, afterwards incrementing the low
marker. Both
markers wrap around when they reach the number of available
SLOT
structures.
To keep the child process from sending more messages than there are free
slots in the shared array of SLOT
structures there's a semaphore
that gets initialized to the number of available slots and that the child
process has to wait on (thereby decrementing it) before using a new
slot. The parent, on the other hand, will post (i.e. increment) the
semaphore each time it has accepted a message from the slot indexed by the
low
marker, thus freeing the slot.
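Putting the last two paragraphs together, the hand-shake could be sketched as below, reusing the illustrative types from above and System V semaphore calls. Again this is only a sketch, the helper names are invented and the real code lives in `comm.c':

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

extern void handle_message( SLOT * );         /* hypothetical dispatcher */

/* Child side: claim a free slot (blocking if none is available), fill
   it and make it visible by advancing the 'high' marker. */

static void child_send( MESSAGE_QUEUE * mq, int sem_id, int type, int shm_id )
{
    struct sembuf wait_op = { 0, -1, 0 };     /* "wait": decrement by one */

    semop( sem_id, &wait_op, 1 );             /* blocks while all slots are used */
    mq->slot[ mq->high ].type   = type;
    mq->slot[ mq->high ].shm_id = shm_id;
    mq->high = ( mq->high + 1 ) % NUM_SLOTS;
}

/* Parent side: as long as 'low' and 'high' differ there are new messages;
   deal with each of them and free its slot by posting the semaphore. */

static void parent_poll( MESSAGE_QUEUE * mq, int sem_id )
{
    struct sembuf post_op = { 0, 1, 0 };      /* "post": increment by one */

    while ( mq->low != mq->high )
    {
        handle_message( mq->slot + mq->low );
        mq->low = ( mq->low + 1 ) % NUM_SLOTS;
        semop( sem_id, &post_op, 1 );
    }
}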
Besides data the child may also need to send what's called "requests" in
the following. These requests always require an answer by the parent.
In this case the type field of the SLOT
structure is set by the
child to the value REQUEST
, indicating that this is a request. The
data exchange between parent and child for requests is not done via
shared memory segments but by using a simple set of pipes, both for the
data making up the request from the child as well as the reply by the
parent. A request will induce the parent to listen on the pipe and,
depending on the type of the request, to execute some action on behalf
of the child. It then either returns data collected in the process or
just an acknowledgment, telling the child process that it's done and
can continue with its work. Because the child always has to wait for a
reply to its request there can never be more than a single request in
the message queue.
After having run the start-of-experiment handlers in all modules,
initialized the graphics and successfully set up the communication
channels the parent still has to set up a few signal handlers. One
signal (SIGUSR2
) will be sent by the child to the parent when
it's about to exit and must be handled. And also for the SIGCHLD
signal a special handler is installed during the experiment. With this also
out of the way, the parent finally forks to create the child process
responsible for running the experiment. From now on we will have to
distinguish carefully which process we're talking about.
If the call of fork()
succeeded, the parent process just has to
continue to wait for new events triggered by the user (e.g. by
clicking on one of the buttons) and to regularly check if new data from
the child have arrived. The latter is done from within an idle handler,
a function invoked whenever the parent process isn't busy. The function
is called new_data_callback()
and can be found in `comm.c'.
But most of the actual work, i.e. accepting and displaying the data,
is done in the function accept_new_data()
in `accept.c'.
The child process will in the meantime have initialized itself in the
function run_child()
in `run.c'. It closes the ends of pipes
it doesn't need anymore and sets up its own signal handlers. Then its
main work starts by invoking the function do_measurement()
where,
just as already described above for the test run, the stored tokens from
the EXPERIMENT
section of the EDL
script get interpreted.
Again for tokens not involved in flow control the parser created from
`exp_parser.y' is used. This parser gets passed tokens delivered
from the array of stored tokens by the function exp_runlex()
in
`exp.c' (the same one as used in the test run), executing the code
associated with the sequences of tokens. And again tokens for flow control
are not dealt with by the parser but by the code in
deal_with_program_tokens()
.
The most notable difference from the test run is that the member
mode
of the global structure Internals
is now set to a
value of EXPERIMENT
, which is tested all over the program and the
modules, either directly or via the macro FSC2_MODE
. The other
difference is that now it is always tested if the child process got a
signal from the parent process, telling it to quit (this happens if the
member do_quit
of the global structure EDL
is set). In
this case the flow of control has to be transferred immediately (or, to
be precise, immediately after the parser has interpreted the current
statement) to the code following the ON_STOP
label.
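The check for a stop request could be pictured roughly like this; except for EDL.do_quit all names and the control flow details are simplified or made up:

/* Sketch of the child's interpreter loop with the test for a stop request --
   simplified, the real logic is spread over `run.c' and `exp.c'. */

extern struct { volatile int do_quit; } EDL;     /* illustrative stand-in */
extern int  tokens_left( void );                 /* hypothetical */
extern void interpret_next_statement( void );    /* hypothetical */
extern void jump_to_on_stop( void );             /* hypothetical */

void interpreter_loop( void )
{
    while ( tokens_left( ) )
    {
        if ( EDL.do_quit )            /* the parent signalled a stop request */
        {
            EDL.do_quit = 0;
            jump_to_on_stop( );       /* continue with the code after ON_STOP */
        }

        interpret_next_statement( );  /* hand the next statement's tokens to
                                         the parser from `exp_parser.y' */
    }
}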
When all the code from the EDL
script has been executed the
function do_measurement()
returns to run_child()
. Here
the child exit hook functions in all modules are executed. These
functions are run within the context of the child process and shouldn't
be confused with the exit hook functions that are executed in the
context of the parent process when the module gets unloaded. Then the
child process sends the parent a signal to inform it that it's going to
exit and does so after waiting for the parent to send it another signal.
After this tour de force through the child's code let's take a closer
look at the interactions between the child and the parent. Most important
is, of course, the exchange of data between the child and the parent. We
already mentioned above how this is done, i.e. via a shared memory segment
or a set of pipes. Now let's investigate the way data and requests are
formatted a bit further.
As already has been mentioned, every data exchange is triggered by the
child process, which puts a SLOT
structure into the message
queue, residing in shared memory, and increments the high
marker of the message queue. The type
member of the SLOT
structure is set to either DATA_1D
, DATA_2D
or
REQUEST
. Messages of type DATA_1D
and DATA_2D
are
messages related to drawing new data on the screen, either in the window
for one- or two-dimensional data. Messages of type REQUEST
are
messages that ask the parent to do something on behalf of the child
process (e.g. asking the user to enter a file name, click on the
button in an alert message, create, modify or delete an element in the
tool box etc.) and always require a reply by the parent.
In the accept_new_data()
function new messages of the types
DATA_1D
and DATA_2D
are taken from the message queue and
the corresponding functions get called. A message can consist of
more than one set of data (e.g. when in the EDL
function
display_1d()
new data are to be drawn for more than one curve
there will be one set for each curve). Thus the first bit of
information in the data set in shared memory and indexed by the key
(member shm_id
) in the SLOT
structure is the number of
data sets. Should this number be negative it means that the message
isn't meant for the EDL
functions display_1d()
or
display_2d()
(i.e. the functions for drawing new data on the
screen) but for one of the functions clear_curve()
,
change_scale()
, change_label()
, rescale()
,
draw_marker()
, clear_marker()
or display_mode()
(either the 1D or 2D version of the function, depending on whether the data type
was DATA_1D
or DATA_2D
). The messages are dealt with in
the function other_data_request()
in `accept.c', which then
calls the appropriate functions in the parts of the program responsible
for graphics (which are `graphics.c', `graph_handler_1d.c',
`graph_handler_2d.c' and `graph_cut.c').
In contrast, for data packages with a positive number of sets, the
functions accept_1d_data()
or accept_2d_data()
(also in
`accept.c') get invoked for each of the data sets. It is the
responsibility of these functions to insert the new data into the
internal structures maintained by the program for the data currently
displayed. When the functions are done with a data set these internal
structures must be in a state such that the next redraw of the canvas for
displaying the data will show the new data
correctly. Explaining this in detail would also require explaining the
whole concept of the graphics in fsc2
, which I am not going to do
here. If you want to know more about it you will find the source code
associated with graphics in the already mentioned files
`graphics.c', `graph_handler_1d.c', `graph_handler_2d.c'
and `graph_cut.c'. Keep in mind that this code will never be
executed by the child process but only by the parent.
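Condensing the last two paragraphs into code, the dispatch on the number of data sets might look roughly like this. Only other_data_request(), accept_1d_data() and accept_2d_data() are real function names; their argument lists and everything else here are made up for the illustration:

#include <string.h>

/* Rough sketch of how a DATA_1D/DATA_2D message could be dispatched on the
   data-set count stored at the start of its shared memory segment --
   the real code is accept_new_data() and friends in `accept.c'. */

extern void other_data_request( int dim, int count, const char * buf );
                                       /* real name, made-up argument list */
extern const char * accept_1d_data( const char * buf );   /* real names,   */
extern const char * accept_2d_data( const char * buf );   /* made-up lists */

static void handle_data_message( int dim, const char * buf )
{
    int nsets;

    memcpy( &nsets, buf, sizeof nsets );   /* first item: number of data sets */
    buf += sizeof nsets;

    if ( nsets < 0 )                       /* not display data but one of the
                                              "other" graphics requests */
        other_data_request( dim, nsets, buf );
    else
        while ( nsets-- > 0 )              /* one call per data set, each call
                                              consuming its part of the buffer */
            buf = dim == 1 ? accept_1d_data( buf ) : accept_2d_data( buf );
}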
Now about the handling of requests: when in accept_new_data()
a
message of type REQUEST
is found control is passed back to the
calling function, new_data_handler()
in `comm.c', which
then invokes reader()
. Within this function the parent process
reads on its side of the pipe to the child process. The child process
will write a structure of type CommStruct
(see `comm.h') to
the pipe. This structure contains a type
field for the kind of
request and a union, which in some cases might already contain all
information associated with the request. Otherwise, the child will also
have to send further data via the pipe to the parent. The layout of
these additional data depends strongly on the type of the request and
you will have to look up the functions that initialize the request to
find out more about it.
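In outline, and only to show the idea of a type field followed by a union, the structure could look like the following. The union members and constants shown here are invented; the real CommStruct in `comm.h' looks different:

/* Heavily simplified picture of the structure the child writes to the pipe
   for a request -- the members below are invented, see `comm.h'. */

typedef struct {
    int type;                       /* kind of request (illustrative only) */
    union {
        struct {                    /* e.g. "ask the user for a file name" */
            long len;               /* length of a string that follows
                                       separately on the pipe */
        } str;
        struct {                    /* e.g. "create a tool box object" */
            int  obj_type;
            long id;
        } obj;
        /* ... one member per kind of request ... */
    } data;
} CommStruct;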
Writing of the data to the pipe is done by the child via the
writer()
function in `comm.c'. When the child process has
successfully written its data to the pipe it must wait for the parent to
reply by listening on its read side of the pipe by calling
reader()
. In the meantime the parent will execute whatever
actions are associated with a request and then call writer()
to send
back either just an acknowledgment or a set of data to the child process
waiting in reader()
for the reply.
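The resulting round trip for a single request might then, in very condensed and hypothetical form, look like this. Only writer() and reader() are real function names, and even their argument lists are made up here; the CommStruct used is the illustrative one from above:

/* Condensed sketch of a request round trip -- all helper names, argument
   lists and the use of CommStruct are illustrative only. */

extern void writer( const CommStruct * cs );   /* real name, made-up signature */
extern void reader( CommStruct * reply );      /* real name, made-up signature */
extern void announce_request( void );          /* hypothetical: put a REQUEST
                                                  slot into the message queue */
extern void do_requested_action( CommStruct * cs );   /* hypothetical */

/* Child side: */
static void child_request( CommStruct * cs, CommStruct * reply )
{
    announce_request( );          /* make the parent listen on the pipe */
    writer( cs );                 /* send the request (and possibly more data) */
    reader( reply );              /* block until the parent has answered */
}

/* Parent side, entered when a REQUEST message has been found: */
static void parent_serve_request( void )
{
    CommStruct cs;

    reader( &cs );                /* read the request from the pipe */
    do_requested_action( &cs );   /* e.g. pop up the file selector */
    writer( &cs );                /* send back data or just an acknowledgment */
}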
Another thing that might get done while the parent process is running
its idle handler, provided there aren't any more data from the child process
to be dealt with, is checking whether the HTTP server, which might have been
switched on by the user, is asking for either information about the
current state of fsc2
or a file with a copy of what's currently
displayed on the screen. This is done in the function http_check()
in `http.c'. Please note that never more than a single request by
the HTTP server is serviced to keep fsc2
from being slowed down
too much during the experiment by a large number of such requests.
When the child exits the parent again has to do a bit of work. It deletes
the channels of communication with the child process, i.e. closes its
remaining ends of the pipes and removes the shared memory segments and the
semaphore used to protect the message queue (after having worked its way
through all remaining messages). Then it deletes the tool box (if it was
used at all) and runs the end-of-experiment hook functions of the modules.
If the GPIB bus or serial ports were used the appropriate function in the
GPIB library is called to close the connection to the bus and the device
files for the serial ports are closed.
If the user now closes the window(s) for displaying the data the program
is in a state that allows a new experiment to be started. This can be
done by repeating the same experiment, in which case the already stored
tokens would be interpreted again, i.e. the EDL
script wouldn't
have to be read in and tested again but the program would immediately go
to the start of the EXPERIMENT
section. But, of course, a
completely new experiment can be done by loading a new EDL
script.