3. Architecture

BCIoverview.gif

This figure shows how BrainStream is incorporated in a general BCI setup. BrainStream provides the means for getting data from the equipment involved, knows by definition which actions need to be executed for every possible marker, guarantees proper execution order, signals when data is available, and includes options for parallel processing of events.
The user-friendly approach is that an entire experiment is defined by a set of text files. Modular design principles make it possible to compose new experiments from fragments of previously defined ones, including the parts of the framework that are specific to the BCI hardware setup. This allows for an easy and flexible setup and for sharing (new) experiments.

Introduction

BrainStream's architecture is built around the smallest possible definition, i.e., a single processing step. Each processing step defines a logical time to inform BrainStream when it should be processed, which global variables are needed during this processing step, which functions should be applied, which global variables should be updated, and where it should execute.
Slide_processing_step.png

Related processing steps can be combined into a processing pipeline. For each pipeline, it can be specified how much data is requested from each connected data source. Processing pipelines have unique names, and BrainStream definitions use this name to refer to the processing steps involved in the pipeline. Within a pipeline, information is preserved between the different processing steps. This property is very useful if there is a time lag between the moment information is set and the moment it actually needs to be processed, especially if other pipelines modify the same information in between. A typical example is the labeling of applied stimuli versus the moment the associated data is actually processed. During runtime, processing pipelines provide BrainStream with the information necessary to instantiate an event. This event tells BrainStream what to execute and stores all information requested and produced during the execution of all processing steps involved.

Slide_processing_pipeline.png

Experiment definition tables contain a collection of processing pipelines that together define specific functionality. This encourages building experiments in a modular way, since any table can add references to other plugin tables, whose definitions and functionality are then added in their entirety to the current design. For instance, a single table could be dedicated to all processing steps and pipelines related to stimulus presentation.
Slide_table.png

A block is a collection of plugin tables. The processing steps/pipelines for each of the functional parts (hardware initialization, stimulus presentation, and data processing, i.e., feature extraction and classification) are combined in a single table. These tables are in turn combined in a block.
Slide_block.png

An experiment, or BrainStream project, is a collection of blocks. A typical BCI experiment involves cap fitting, offline training, training a classifier, and the actual feedback. Each part can be defined in a separate block, completing the hierarchical structure of BrainStream project definitions.
Slide_experiment.png

With all processing steps defined, BrainStream still needs to know when exactly processing pipelines should execute. This is handled by markers, which are associated with processing pipelines. The next section explains the details.
Slide_markers_mapped_timeline.png

Markers

In general, a BCI application involves measuring different kinds of signals from different sources, i.e., brain signals, stimulus information, and user responses. Among others, these stimuli and responses are represented as "markers". These signals cannot always be measured in a synchronized way. If the acquisition system allows some sort of injection of markers into the data stream, synchronized acquisition is possible and the sample index number can be used as timestamp (the current version of BrainStream follows this approach; in other cases, all events should in some way refer to the same timing source, an issue that future versions of BrainStream will try to solve).
There can be different types of markers, e.g., stimulus, response, and device-control markers. As numbers are easy to confuse, the BrainStream platform works with names everywhere. Names are strings, can contain characters (a..z) and numbers (0..9), and must start with a character. Marker name, number, and type combinations (e.g., marker='tone', type='stimulus', number=1) are stored in dictionary tables. The set of dictionaries must be unique, i.e., for a certain marker type the same number cannot be specified more than once, and overall marker names must be unique. When a marker is received by the platform, it becomes an event, based on the definitions of the corresponding processing pipeline.
Markers by themselves do not represent any kind of processing. Meaning is given to them by the actions defined in the processing pipeline associated with the marker. Actions can be freely defined in order to accomplish a particular task, such as numerical data processing, feedback to the subject, or commands to control a data acquisition system. Incoming markers can initiate an unlimited sequence of actions at an unlimited number of logical time points. This mechanism, together with scheduled markers and markers inserted by user functions, offers many options for achieving the desired processing.

Events

Markers are received in synchrony with the data stream (they have a sample-accurate time resolution). They always have a specified order and translate their associated processing pipelines into events when received. Events have a rich structure and can be regarded as objects with data (optional), variables, and actions executed at different moments relative to the onset of the event. Every subsequent occurrence of the same marker in the data stream instantiates a new event.

Actions associated with an event

Events, instantiated by markers, can execute different actions. Examples of actions are data processing (preprocessing, analyses) by means of one or a list of Matlab functions, which could be a complete BCI processing chain of artifact rejection, spectral analysis, classification, and output feedback. If no data is associated with the event, the action may be, for example, to display a message on the subject's screen. Actions also include getting or putting a variable's content from/to the global variables (user-defined variables to exchange information between events), loading or saving a global variable's content from/to disk, or changing its content (explained in the next section). All these actions are specified in (an) experiment definition table(s).

Markers instantiate events (or event objects) that can trigger actions at various logical times after they are received: immediately (time=EVENT), after some predefined time delay (time=..s), when requested data becomes available (time=DATA), or even wait until another marker appears in the input (time=marker). Event objects are kept 'alive' until all their actions for the different time points have been handled. They can also be discarded prematurely by calling an explicit 'cancel' function from one of the user functions in the list. Conceptually a processing time is atomic: its processing is dealt with before new actions may be triggered on that marker.

Reserved marker names serve to process actions at experiment startup (BS_INIT), at exit to finalize the experiment (BS_EXIT), and to quit (BS_QUIT). BrainStream inserts the BS_INIT marker at startup and the BS_EXIT marker at shutdown. Pressing the quit button in BrainStream's graphical user interface inserts a BS_QUIT marker, for which the user can define an alternative exiting route. Users can connect actions to these markers in order to define whatever is needed at startup, shutdown, or quitting.

Multiple time-action specs may be listed below each other, one per row, without duplicating the marker name. This is syntactic sugar that makes the table easier to read; in Table 2 (for overview), color bands group the actions for a specific marker. Same-time rows for the same marker are executed in row order; in fact, this is the same as listing a sequence of actions separated by commas. Functions in the table will be passed the event, but they may have extra arguments, all passed as optional parameters. Actions may enrich the event with new fields or update existing values in the fields before it is passed to the next action.

Time can be one of the following:

EVENT         executed at marker onset
DATA          executed as data becomes available
a number      executed that number of seconds after marker onset
another mrk   executed at onset of marker mrk
MRKSEQ        only for marker sequences
TIMEOUT       only for marker sequences
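
As an illustration of these logical times (the marker and function names below are made up, not part of BrainStream), a single marker could have actions defined at several of them:

marker      time      function
show_stim   EVENT     PresentStimulus
            DATA      ClassifyTrial
            2         RemoveStimulus
            response  EvaluateResponse

Here PresentStimulus would run at the onset of show_stim, ClassifyTrial once the requested data has arrived, RemoveStimulus 2 seconds after onset, and EvaluateResponse when a marker named response is received.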

To completely understand the rich structure of an event, it is important to realize that it is an 'object' with data and functionality. To some extent, it can be compared to the OOP (object-oriented programming) concept: an object in OOP has data and functionality attached to it, the data can be partially public and partially private, and the functions operate on the data of the object and can take additional data as input arguments (e.g., put(new_value)). Differences from this OOP concept are: 1) not all functionality is included, i.e., the event object only refers to Matlab functions and does not actually include their code, although for simple variable modifications Matlab inline functions are included within the object; 2) during the execution of an event, new variables can be added, but they only exist during the 'lifetime' of the event; 3) it contains a predefined time scheme of when which actions should be executed, i.e., its 'lifetime' is known beforehand (it is destroyed as soon as all actions have been executed).

An event consists of the following:

event
   name (names can be freely chosen and spaces are allowed)
   time points (logical time referring to the time of marker arrival, data becoming available, or another marker)
      EVENT
      DATA
      time delay in seconds
      another marker
      MRKSEQ (only for marker sequences, see here)
      TIMEOUT (only for marker sequences, see here)
   actions (function(s), modify variable(s), get or load, put or save)
      modify variable
      get variable(s)
      load variable(s)
      execute function(s)
      put variable(s)
      save variable(s)
      Note - for multiple actions defined at equal time points, the processing order is determined by the type of action, in the order listed above
   client
      option to dedicate execution of a processing step (time point) to another Matlab session
   data selection
      any table can include a separate DataSelection sheet to define, in a very flexible way, the requested data for any marker (see next paragraph)

This implies that an event is completely defined by the specification of the actions for the time points and the amount of data that should come with it. Let tp be one of the possible time points and ac one of the possible actions; any combination (tp,ac) can only be defined once.
In addition, for every time point or processing step it can be specified where it should execute. Since BrainStream supports a design with multiple Matlab sessions running in parallel, every processing step can optionally dedicate its execution to another Matlab session, i.e., a client.

Data specified for an event

Each marker can optionally specify a segment of data that should come along with the event. Data may start before or after the onset of the marker and may also end before or after it. Data begin and end times are specified in a data selection table, like the table in the figure below.

slide_dataselection.gif

Positive numbers refer to times after the onset of the marker, negative numbers to times before it. Events associated with marker mrk1 will receive data from 0.2 seconds before its arrival to 1.0 second after. Marker mrk2 receives all its data from a period before its arrival. Another marker may also terminate data reception. Marker A shows an example of how to compose flexibly sized data epochs, where each new occurrence of marker A ends the data collection of its own previous one; this sequence ends at the occurrence of another marker B, which stops data collection for the last marker A in the row. Data associated with marker C will extend until another marker D is received (marker D could, for example, be associated with a button press event). Data collection for both markers A and C may time out (if marker B, respectively D, does not arrive) after timeout seconds, as specified by the user. Note that if ending markers never set the end of the data selection and no timeout is specified, the corresponding processing steps will never execute. The times t1, t2, and t3 can all be positive or negative numbers (or zero) and specify data selection end points relative to the onset of the corresponding end marker. Multiple markers can end data collection for a specific marker; for marker A the end time could also be something like: B+t1, C+t4, D+t5, etc.
Instead of specifying timing offsets in seconds, they can also be specified in numbers of samples. To indicate that such a number should be interpreted as a sample count instead of a time, put a '#' symbol behind the number (like 1024#). For marker mrk1, at a sample rate of 256 Hz, a specification in sample numbers would make the table entry: mrk1 -51# 256#. Note that in the time-based specification, -0.2 seconds is likewise rounded off to the nearest integer number of samples (-51) by BrainStream's internal processing.

Multiple end time specifications (separated by commas) constitute an implicit minimum. Row 4 specifies that MarkerA's data collection stops t2 seconds after receiving MarkerB, or after timeout seconds, whichever happens first. The table can have a duration column (calculated from the start) and/or an endtime column, and the start column may be headed offset (in FieldTrip style). (not yet implemented)

Later versions of BrainStream also support using a marker as the logical begin time of the data selection. In that case, it is not allowed to also specify a marker for the end time.
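
Putting the cases described above together, an illustrative DataSelection table (with hypothetical offsets; the notation for the timeout is not shown here) could look like:

marker  begintime  endtime
mrk1    -0.2       1.0
mrk2    -1.5       -0.5
A       0          A+0, B+0.2
C       0          D+0.5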

Global variables (user state)

Different events (instances of the same or different processing pipelines) can only exchange information via a set of (globally maintained) variables. This communication is explicit (it cannot be hidden in action programs). The variables are copied in and out of the corresponding fields of an event; actions on that event can only access the event fields. The variable names are given as headers in the action table, i.e., the fourth and subsequent columns in Table 2 and Table 3. These variables may be copied into an event field of the same name ('get' from user state; 'load' from disk), updated to another value (a Matlab expression, possibly containing other variable names or its own name, or $self as abbreviation of the variable itself), or, vice versa, the value may be copied back from the corresponding field in the event ('put' to user state; 'save' to disk). With 'save', the content of a user variable is copied to the runfolder and the session folder (click here for more information). If 'save' is executed multiple times for a specific variable during the execution of one block (make sure save actions do not interfere with the timing of your experiment design), backup copies will be saved in the runfolder; the session folder only stores the last updated value. Load and save are especially useful if information from one block needs to be transferred to another: a subsequent block simply defines a load for the specific variable at the BS_INIT marker to get the content from a previously executed block.

This mechanism easily supports all kinds of counts and conditions, preventing the need for many different markers.
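
For example (the markers, the function, and the variable name below are illustrative), a counter kept in the user state can be incremented at one marker and read at another, without introducing extra markers:

marker       time   function    TrialCount
start_trial  EVENT              $self+1,put
show_score   EVENT  ShowScore   get

At start_trial the counter is incremented and written back to the user state; at show_score its current value is copied into the event, where ShowScore can read it as event.TrialCount.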

Experiment definition

Table structure

The first columns in a table are reserved to specify marker, time, and function; further optional columns are feval, client, and looptick, which will be discussed later. Although the order of these columns in the table is arbitrary, it is best to keep it as described here. The first column (marker) specifies the marker for which actions are defined. The second column (time) specifies at which logical time points (relative to the time of the incoming marker) the actions should be executed. In the third column (function), none, one, or more functions can be specified, which will be executed in the given order. These functions always get the event as first argument, plus possible extra arguments (constants or any global variable). A client column is optional and can be used to direct execution of functions to another remote Matlab session. A feval column is optional and allows specification of functions that do not need to process any of the global variables (no event as first input argument) and do not need output arguments returned, but which can take global variables as input arguments. A looptick column is optional and serves to define a special function that always executes on another client and will be put into a loop by BrainStream; click here for more information on how to use this. All subsequent columns are free to use for an arbitrary number of user-specified variables (stored internally in BrainStream's user state, i.e., the global variables).

marker     time    function    feval   looptick   client   var1      ........   varN
mymarker   EVENT   fnc1,fnc2                               get,put   ........   $self+1

Formatting rules for the marker column:
Entries contain a marker name, a list of marker names separated by commas, or a reference to another table. The effect of multiple markers in a single entry is that all defined actions will be associated with each of those markers. As an example, the following four tables are identical in their processing behaviour.

marker time function var1
mrk1 EVENT fnc1 $self+1
mrk1 EVENT fnc2  
mrk1 EVENT fnc3  
mrk2 EVENT fnc1 $self+1
mrk2 EVENT fnc2  
mrk2 EVENT fnc3  
If the next line defines an additional processing step with actions for the same marker, the marker name column can be left empty.
marker time function var1
mrk1 EVENT fnc1 $self+1
  EVENT fnc2  
  EVENT fnc3  
mrk2 EVENT fnc1 $self+1
  EVENT fnc2  
  EVENT fnc3  
If markers define equal processing steps, they can be specified in the same entry, separated by commas.
marker time function var1
mrk1, mrk2 EVENT fnc1 $self+1
  EVENT fnc2  
  EVENT fnc3  
If multiple functions are defined for a particular processing step (or time point), they can be specified together in one entry.
marker time function var1
mrk1,mrk2 EVENT fnc1, fnc2, fnc3 $self+1

The processing order of functions is determined by the order in which they are specified in the table (first the order within a cell entry, then the order of rows). For instance:

marker time function
mrk1 EVENT fnc1, fncB, fnc4
mrk1 EVENT fnc2, fncA, fnc3
would execute: fnc1, fncB, fnc4, fnc2, fncA, fnc3 for marker mrk1
whereas,
marker time function
mrk1 EVENT fnc1, fncA, fnc3
mrk1 EVENT fnc2, fncB, fnc4

would execute: fnc1, fncA, fnc3, fnc2, fncB, fnc4

Marker names specified at different locations in the table add their actions, just as shown in the previous example.

If markers require data to be passed along with the event structure, this must be specified in DataSelection tables (either the file dataselection.txt or a subsheet named DataSelection in an edt-file). The structure is:

marker begintime endtime
mrk1 0 1

More information can be found in the paragraph about [[BrainStreamDocs.DocsSectionsArchitecture#DataSelection][data selection]].

Dictionaries are required to translate incoming markers to their associated processing pipelines and marker names. This must be specified in Dictionary tables (either the file dictionary.txt or a subsheet named Dictionary in an edt-file). The structure is:

marker type value substitute
mrk1 stimulus 100  

The substitute column is optional; more information about it can be found in the plugins section.

Modular composition of experiment definition

Building applications mostly involves putting together parts that are commonly used and a specific part responsible for the 'new' approach being investigated. To simplify reuse of previously developed parts, BrainStream supports plugins. The common parts can then be added to the new application as plugins and do not need to be redefined. This saves the researcher a lot of development time and encourages sharing parts of experiments between researchers within or among institutes. Although a plugin can also be a function inserted in the function pipeline, the focus here is on plugin tables. Compared to function-only plugins, table plugins define a lot more, i.e., the specification of actions for multiple processing steps and multiple processing pipelines or markers.

Plugin tables can be used in two different ways:
1) independent: the plugin table entirely defines a complete implementation of some useful feature and has all required information defined.
2) dependent: the plugin table's actions must be merged with the actions of (an)other marker(s); therefore, a substitute always needs to be defined. Information from the other merged marker(s) is required: they become ONE event and consequently information can be shared via the event structure.

The principle of referencing tables is briefly described here; a more detailed explanation can be found in the plugins section. By using referenced tables, experiments can be set up in a modular way. The referenced table can, for example, be some well-defined functionality made available by a plugin, or just a part of the experiment that is put in a separate table for readability. Instead of a marker name in the marker column, a reference to another file (notation: @file) may be given, in which case all processing pipelines defined for @file are copied as defaults to all markers specified in the action table that @file refers to. The defaults are used unless @file redefines the actions of certain processing steps ((tp,ac) combinations). In @file new actions can be defined (no default for a (tp,ac) combination means adding to the event), which will be added to the event of the corresponding marker(s). The exact notation in a table is as follows:

@anothertable.edt SheetName
this refers to another BrainStream table with sheet SheetName, or

@SheetName
this refers to another sheet enclosed in the same BrainStream table (note: if you use Excel tables, the @-symbol indicates a formula, so make sure to put a quote in front of the @)

The number of recursive calls is (theoretically) unlimited. This functionality is especially useful if a large number of markers execute the same actions (see Table 4) or if one single event uses quite a lot of variables not used by other events. It also offers the opportunity to isolate groups of actions with a specific meaning (for example for the purpose of artifact detection), which can be referred to by other experiments. If a new strategy for artifact detection is to be followed, only this table needs to be adapted and all experiments referring to it will automatically use the new strategy. The same principle can be used (and is very useful) for the definition of the initialization and closing parts of an experiment.

--fig Table 4

In the example above (Table 4), all the markers that display instructions are listed in one referenced file (@Instruction), which is included as an additional sheet in the main action table. Using referenced tables gives a better overview of all defined actions.
There is always one table that needs to be specified as the starting point for expanding all referenced tables into one big table (sheet Actions). A file cannot refer to any of the files referring to it; this is tested for during the recursive expansion, so the expansion is guaranteed to be non-cyclic. In a separate sheet (DataSelection) it can be specified for which markers data is required.

Table variables

UNDER CONSTRUCTION

Miscellaneous

Marker sequence

It is possible to temporarily treat marker information as data, which will then be added as an additional field to the event structure. Specifying a marker name enclosed by brackets indicates to BrainStream that it should be treated as a sequence marker. All subsequent markers from that specific marker source are from then on processed differently: it can be specified how many markers should be collected, whose corresponding value information will be put together in a Matlab cell-array. The cell-array will then be added as a field to the event structure. The marker sequence can either be defined for a fixed number of markers, or defined to continue until another marker stops the sequence; the latter case enables flexibly sized marker sequences. For sequence markers a special logical time MRKSEQ can be used (in the time column) to specify actions that should execute the moment the marker sequence is completed. In case the sequence does not complete in time, the logical time point TIMEOUT can be used to specify the actions needed to work around the missing information.
The syntax for specifying marker sequences is:
[<marker name> <number>?<name of parameter>;<timeout>], where
   marker name: the name of the marker starting the sequence
   number: the number of markers to be collected for this parameter
   ? or *: indicates a fixed (?) or unknown (*) number of markers to be collected; in the case of *, the next item specifies the name of the marker that ends the sequence
   name of parameter: this name will be added as a field to the event structure and contains a string cell-array with the data (NOTE: always strings!)
   ;: separates the parameters from the timeout
   timeout: the maximum time to treat incoming markers as part of the sequence

Some examples:

[mrk1 ?parm1 1?parm2 ;0.05]
An incoming marker mrk1 starts the marker sequence. The next two markers will be processed as data for parm1 (one marker) and parm2 (one marker). The fields parm1 and parm2 will be added to the event structure of marker mrk1. The specified names (i.e., parm1 and parm2) should not conflict with other global variables. The last item (after the semicolon) specifies a timeout value. In case not all markers arrive within this time interval, the alternative actions specified for the TIMEOUT time point will be executed instead of the ones specified for time point MRKSEQ.

[mrk2 5?parm3 ?parm4 7?parm5 ;0.099]
An incoming marker mrk2 starts the marker sequence. Here, the next thirteen markers will be processed as data for parm3 (5 markers), parm4 (1 marker), and parm5 (7 markers), respectively. The timeout value is set to 0.099 seconds.

[mrk3 *parm6 mrk4 ;0.11]
An incoming marker mrk3 starts the marker sequence. An unknown number of markers will be processed as data for parm6 until marker mrk4 ends the sequence. The timeout value is set to 0.11 seconds in this case.
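
As a sketch of how such a sequence could be consumed in a user function executed at time MRKSEQ (the function name and the derived field are made up; only the fact that parm fields arrive as string cell-arrays is taken from the description above):

function event = ProcessSequence(event)
% illustrative MRKSEQ handler for the sequence [mrk2 5?parm3 ?parm4 7?parm5 ;0.099]
% parm3 holds the 5 collected marker values as strings; convert before use
vals = str2double(event.parm3);
event.Parm3Mean = mean(vals);   % temporary field, exists only during this event
end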

Predefined Actions

Next to put and get, there are also functions for changing runtime behaviour; click here for more information.

Toolbox of processing

We will provide wrapper functions to allow easy calling of functions from toolboxes such as FieldTrip or EEGLAB (specs will follow).

Programmer's interface

Functions

Functions specified in the tables should take the form

event = function(event, arg1, ..., argN)

where arg1, ..., argN are optional parameters (constants or global variables).
(Note: event is always the first argument and does not need to be specified in the table.)

Example:
Rereference(event, {'mastoid1','mastoid2'})
This call passes one additional parameter to the Rereference function, i.e., a cell array of channel names to be used as reference.
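
A minimal sketch of what such a user function might look like (the body is illustrative; only the calling convention and the event fields listed further below are taken from BrainStream):

function event = Rereference(event, refchannels)
% re-reference the raw data to the mean of the given reference channels (illustrative)
idx = find(ismember(event.hdr.label, refchannels));   % indices of the reference channels
ref = mean(event.data.raw(idx,:), 1);                 % average reference signal
event.data.raw = event.data.raw - repmat(ref, size(event.data.raw,1), 1);
end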

User-defined variables show up as fields in the event structure. For the event another_tone in Table 3, the event would grow as follows:

time        event field                action
EVENT       event.ToneCount            (get)
            event.SeqCount             (get)
DATA        event.data.raw             (added by BrainStream)
            event.trial.samp           :
            event.trial.offset         :
            event.trial.duration       :
            :                          (other possible fields added by functions)
slice_now   event.CorrectAnswer        (get)
            event.Answer               (get)
            event.CorrectAnswered      (get)
            event.NumCorrect           (get)
            event.NumWrong             (get)

After all actions for an event have been executed, the event structure is lost. Changes to variables are preserved within an event; across events, however, they are only preserved if a put action was explicitly defined.

Information provided to you by BrainStream

Functions get the event structure delivered by BrainStream. The fields in this structure represent copies of all requested (user) global variables. Some additional information is also put in this structure: always name, time, and brainstreamversion, and, in case a DATA time point is involved, also data, hdr, and trial.

event.name                   : name of the marker
event.time                   : time [s] referred to number of acquired samples
event.brainstreamversion     : current version of BrainStream
-- if timepoint DATA, also --
event.data.raw               : raw data [channels x data]
          .eeg               : eeg data (only if information about eeg channels is available)
          .trial.time        : start of data segment [s]
          .trial.samp        : always set to 1
          .trial.offset      : start of data [#samples] 
          .trial.duration    : number of samples
      hdr.Fs                 : sample frequency      
         .channels.*         : see remarks below 
         .labels.*           : see remarks below
         .Cal                : raw data calibration matrix
         .nChans             : number of channels
         .label              : all channel labels (compatible with fieldtrip)
         .orig               : always original header information
         .usedcap            : name of cap used in current block (user should take care of further handling) 
event.trial                  : same as event.data.trial 

Depending on the data source used, additional fields might be present inside the hdr structure.

*: channel and label information can be grouped together based on settings specified for the corresponding
hardware device topic in the blocksettings. BrainStream uses FieldTrip's ft_channelselection function to match
channels that belong to certain groups, like 'EEG' or 'EOG'. See the FieldTrip website (http://fieldtrip.fcdonders.nl/reference/ft_channelselection) for more options.

In the blocksettings, groups should be defined for the corresponding hardware device's topic, as follows:

[my_hardware_device]
BrainStream.GroupedChannels = ...
{   {'raw'},   {'all'};      ...
   {'eeg'},   {'EEG'};      ...
   {'eog'},   {'EOG'};      ...
   {'nonbrain'},   {'all','-EEG'} 
}
This will result in:
      hdr.channels.raw       : indices of all channels (same as 1:hdr.nChans)
      hdr.channels.eeg       : indices of all eeg channels
      hdr.channels.eog       : indices of all eog channels
      hdr.channels.nonbrain  : indices of all non-eeg channels  
      hdr.labels.raw         : labels of all channels (same as hdr.label)
      hdr.labels.eeg         : labels of all eeg channels 
      hdr.labels.eog         : labels of all eog channels  
      hdr.labels.nonbrain    : labels of all non-eeg channels 
Grouping channels is useful in case data processing is meant for specific types of channels only, or if experiments run in different labs with different signals. For example, if an experiment is ported from an EEG lab environment to an MEG lab environment, all you need to do is make sure you have a group 'brain' defined in the GroupedChannels definitions of both hardware devices involved. If your code always uses event.data.raw(event.hdr.channels.brain,:) to access the data, it will in both situations process the correct portion of channels, and no changes to the data processing code are required.
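
A brief sketch of that idea (assuming a group 'brain' has been added to GroupedChannels for both devices):

% inside any user function; works unchanged in the EEG and the MEG lab
brainidx  = event.hdr.channels.brain;      % channel indices provided by BrainStream
braindata = event.data.raw(brainidx, :);   % [brain channels x samples]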

#SecModVar

Modifying variables

A variable's content can be modified in different ways: by changing it during the execution of functions, or by specifying expressions in the table. Such an expression can be simple, such as 'add one to the current value', written as $self+1 or using the name of the variable itself, like ToneCount+1, or it can just set the variable to some value. Specifying a number or string alone means the variable will be set to that value. Expressions can also use other global variables, for instance a Boolean expression using two of your global variables: Answer==CorrectAnswer. Furthermore, functions can be incorporated in these expressions as long as the entire expression results in a single output; this output then becomes the new content of the variable. Note that the arguments of these functions can also use other global variables, but they will be passed the values stored in the global variables space at the time the mod-action for this processing step is started.

Examples are:

  • Set the variable to the content specified:
    -1, 0, [], {}
    'a string'
  • Modify the content by a simple expression:
     $self+1
     Answer == CorrectAnswer
     NumTrials - NumCorrect
  • Modify the content using a function:
    InitInstructionFigure()
    This function initializes the properties of a figure. Its return value is the handle to this figure and is put into InstructFig. Through a get operation on this variable, the application receives the handle to the figure used for showing instructions to the subject (see the sketch after the note below).

Note: variables can also be of type structure. In this case a function is required to initialize the fields of the structure.
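
A minimal sketch of such an initialization function (the figure properties are illustrative; only the idea that its return value becomes the content of InstructFig is taken from the example above):

function fig = InitInstructionFigure()
% create and configure a figure for showing instructions to the subject (illustrative)
fig = figure('MenuBar','none', 'Color','k', 'Name','Instructions');
end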

Getting variable content

BrainStream maintains a central storage space (user state) for all user-defined global variables. From this space, content is copied to the event structure (get) and, vice versa, updated back to it (put). Users should be aware of these copying operations and avoid storing large amounts of unused data.

Updating variable content

Users can choose the moments at which changes made to the variables need to be updated to BrainStream's central storage space. They must explicitly request this by specifying a put action at the proper processing step or time point in the experiment definition table.

Notes

Note 1

During the processing of an event, across time points, extra variables (as many as you like) can be added as fields to the event structure. They exist up to the point that all actions of the event have been executed and the event disappears.
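
For instance (the function and field names below are made up), a function executed at time EVENT can store an intermediate result for a later time point of the same event:

function event = PrepareTrial(event)
% this field is visible to actions at later time points (e.g. DATA) of the same event,
% but is discarded once the event has executed all its actions; use a put on a global
% variable if the value must survive beyond this event
event.StimulusOnsetGuess = event.time;
end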
