Timing Theory

Timing is a complex issue that enters into the design of a BCI system at multiple places, in differing contexts and perspectives. Focusing on a single one of these aspects, rather than on the full picture, may lead to unrealistic goals and/or significant waste of effort.

Timing limitations of the brain

First, human perception and processing in the brain itself are subject to fundamental timing/bandwidth restrictions, against which timing issues in a BCI system have to be compared in order to avoid wasting effort on improvements beyond limits that could reasonably enter into the result of any experiment performed with such a system. It seems no coincidence that readily available video and audio playback hardware works within those limits: the visual system's temporal resolution lies at around 1/25s, and the auditory system's transition from the perception of individual stimuli to the perception of continuous sound occurs in roughly the same range, between 1/25 and 1/15s. In electrophysiology, the shortest time scale of evoked responses time-locked to a stimulus found so far is that of the N100 wave, corresponding to a temporal resolution of 1/10s. From these observations, it appears that the brain itself cannot make use of a temporal resolution finer than 1/25s=40ms.

Causality

Second, it is important to observe that timely processing of brain signals, and timely presentation of stimuli, are only relevant for interactions that are truly causal, i.e. stimulus delivery that depends on the analysis of immediately preceding brain signals. In contrast, noncausal (i.e. predictable) interaction, as well as later analysis of experimental data, is not affected by how fast a BCI system is able to process data; rather, it depends only on how accurately any time delays are recorded together with the brain data, such that they may be accounted for in later data analysis.
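
As a purely illustrative sketch of this point (in Python; the function, sampling rate, and timestamps below are hypothetical, and this is not BCI2000's actual recording mechanism): if the physically measured stimulus onset time is stored together with the data, epochs can be aligned to it in later analysis, regardless of how quickly the system happened to respond.

  # Convert a logged stimulus onset timestamp into a sample index, so that
  # epochs are aligned to the physical stimulus rather than to the moment
  # the software requested it. All figures are made up for illustration.
  sampling_rate_hz = 256.0

  def onset_sample(stimulus_time_s: float, acquisition_start_s: float) -> int:
      return round((stimulus_time_s - acquisition_start_s) * sampling_rate_hz)

  # e.g. a stimulus physically presented 0.512s after acquisition start:
  print(onset_sample(0.512, 0.0))   # sample index 131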

Nyquist's theorem

Third, bandwidth limitations, both in data acquisition technology and inherent in the kind of brain signals available to transfer information from or to the brain, impose fundamental limits on the adverse effect that timing issues may have on experimental outcomes. Nyquist's theorem implies that, e.g., a brain rhythm with a frequency of 20Hz cannot be modulated so as to transport information about a temporal separation finer than 1/10s. Moreover, standard EEG itself is bandwidth-limited to 70Hz, and even with more sophisticated techniques such as ECoG, which theoretically provide much more bandwidth, the amount of bandwidth available for bi-directional causal interaction between brain and BCI system is limited by the amount of unrelated background activity, which acts as a noise floor, masking the typically smaller amplitudes available at higher frequencies.
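
The arithmetic behind this can be made explicit with a small sketch (Python; the assumption that a rhythm yields at most one independent amplitude estimate per oscillation period is an illustrative simplification, not a statement taken from this wiki):

  # One independent amplitude estimate per carrier period means the rhythm's
  # envelope is effectively sampled at the carrier frequency; by Nyquist, it
  # can then represent modulation frequencies up to half that rate, i.e.
  # temporal separations no finer than 2 / f_carrier.
  def finest_temporal_separation(f_carrier_hz: float) -> float:
      envelope_sampling_rate = f_carrier_hz        # one estimate per period
      nyquist_modulation_bandwidth = envelope_sampling_rate / 2.0
      return 1.0 / nyquist_modulation_bandwidth

  print(finest_temporal_separation(20.0))   # 0.1s, the 1/10s figure quoted above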

Processing delay

Fourth, the impact of processing delay depends on the kind of computation performed: how it scales in terms of computational effort, and how well it is parallelizable. Where possible, BCI2000 will already exploit parallel computation in order to minimize processing delay. However, such gains are generally modest because they are limited by the number of full processor cores available in a machine. On a typical desktop processor, 4 to 8 full cores are available, resulting in a factor-of-8 speedup at best. For a full-matrix spatial filter, parallelization is straightforward, and computational effort scales as O(n^2) in the number of channels and O(n) in the sampling rate. Technological advances in the realm of data acquisition easily double both sampling rate and number of channels within a few years, resulting in an eightfold increase in computational effort (a factor of 4 from channels, and 2 from sampling rate), which consumes all of the gain available from parallelism. Essentially, this means that for a well-parallelizable problem requiring polynomial time, significant processing delay will prevail and tend to get worse over time; this simply has to be accepted, as there is no technology that can mitigate this kind of problem. Rather, choosing algorithms economically, plus accepting and managing the processing delays that exist, is the primary way to cope.
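
To make the scaling argument concrete, here is a minimal sketch of a full-matrix spatial filter applied to one data block (Python/NumPy; this is not BCI2000's actual SpatialFilter implementation, and the channel count and block size are made up):

  import numpy as np

  channels, block_samples = 64, 32                   # hypothetical geometry
  weights = np.random.randn(channels, channels)      # output-by-input filter matrix
  block = np.random.randn(channels, block_samples)   # one block of raw data

  # Each output sample is a weighted sum over all input channels, so the cost
  # is on the order of channels^2 * block_samples multiply-adds. Doubling both
  # the channel count and the sampling rate multiplies this by 2^2 * 2 = 8.
  filtered = weights @ block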

Blockwise IO

Fifth, unavoidable and significant delays exist due to the block-wise nature of data input and output in available computer systems. Data protocol overhead and physical limitations make it necessary to read EEG data in sample blocks, to provide visual stimulation data in terms of full video frames, and to fill audio output buffers in blocks. A CPU that serves physical input/output channels, rather than just memory transfers, is typically unable to cope with the effort required to do so on a per-sample basis. Transferring data in blocks greatly reduces processing overhead, but inevitably introduces considerable delays, corresponding to the temporal extent of a data block.
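
As a rough numerical illustration (Python; the amplifier figures are hypothetical, not tied to any particular hardware), the delay contributed by block-wise transfer follows directly from the block duration:

  # With a 256Hz amplifier delivering blocks of 8 samples, a sample waits up to
  # one full block duration before it becomes available, and half of it on average.
  def block_delay_s(sampling_rate_hz: float, samples_per_block: int):
      block_duration = samples_per_block / sampling_rate_hz
      return {"worst case": block_duration, "average": block_duration / 2.0}

  print(block_delay_s(256.0, 8))
  # {'worst case': 0.03125, 'average': 0.015625}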

Realistic example

As an illustration, consider a BCI that provides visual stimulation on a flat-panel screen, responding to EEG data in real time. In such a system, typical delays will be:

  • acquisition buffer delay: the time between physical EEG amplitude measurement and availability of the EEG data for processing; half the acquisition buffer duration on average, the full acquisition buffer duration at most (typically 25ms)
  • processing delay: strongly dependent on data rate and algorithmic complexity
  • frame rendering delay: the time required to compute and store frame content in video memory
  • frame swap delay: the time between delivery of one video frame and delivery of the next, modified frame (e.g. at a buffer swap); one video frame duration (typically 16ms)
  • frame transmission delay: the time between modification of frame data in the video buffer and reception of the modified frame by the display hardware; one video frame duration (typically 16ms)
  • display response time: the time between the frame change and the physical luminance/color change on the display surface (typically 5ms)

In this example, the video output delay will be 37ms and the EEG input delay 25ms, for a total of 62ms. If the sum of processing delay and frame rendering delay is sufficiently small, both will be masked by the frame swap delay and will not enter into the total delay between input and output; still, the irreducible delay of 62ms is large enough to warrant efforts towards _managing_ delays rather than trying to _avoid_ them.
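
The totals quoted above follow from a few lines of arithmetic (Python, using the typical figures from the list; processing and frame rendering delays are omitted here, as in the text, under the assumption that they are masked by the frame swap delay):

  eeg_input_delay_ms    = 25   # acquisition buffer, worst case
  frame_swap_ms         = 16   # one video frame
  frame_transmission_ms = 16   # one video frame
  display_response_ms   = 5

  video_output_delay_ms = frame_swap_ms + frame_transmission_ms + display_response_ms
  total_ms = eeg_input_delay_ms + video_output_delay_ms
  print(video_output_delay_ms, total_ms)   # 37 62, as stated above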