Normalizer offsets and gains

Forum for discussion on different signal processing algorithms
emily
Posts: 40
Joined: 24 Mar 2008, 06:13

Normalizer offsets and gains

Post by emily » 08 Apr 2008, 02:47

Hello,
Are the normalizer offsets and gains which adapt during each session saved anywhere at the end of a session to be used in the subsequent session?

thanks

mellinger
Posts: 1208
Joined: 12 Feb 2003, 11:06

Post by mellinger » 08 Apr 2008, 14:06

Emily,

the adaptation is reflected in the NormalizerOffsets and NormalizerGains parameters. At the end of a run, these have their updated values.
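In case it helps later readers: the normalizer applies these parameters as a linear transformation of the control signal. A minimal sketch in Python (illustrative only, not BCI2000's actual C++ implementation; the function name is made up):

```python
# Hedged sketch of how a normalizer applies an offset and a gain to a
# raw control-signal value: normalized = (raw - offset) * gain.
def normalize(raw, offset, gain):
    return (raw - offset) * gain

# Example: with offset 5.0 and gain 0.5, a raw value of 9.0 maps to 2.0.
print(normalize(9.0, 5.0, 0.5))  # -> 2.0
```

Saving the adapted NormalizerOffsets and NormalizerGains at the end of a session simply preserves these two parameters for the next session's starting point.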

Using the "Save" button from the operator module's parameter configuration dialog, you may save the updated values for the next session, as suggested in the Mu Rhythm Tutorial at http://www.bci2000.org/wiki/index.php/U ... n#Finished

HTH,
Juergen

emily

Adaptation

Post by emily » 13 May 2008, 09:15

Hello,

Could you tell me whether there is a significant difference in training time when adaptation is turned off compared to when it is turned on?

I recall reading that in one study using BCI2000 the majority of users reached over 80% accuracy within 3-6hrs of training. Would this have been with adaptation on or off?

The training time I am particularly interested in is for the 1D, two-target task training the mu rhythm.

Thanks for all of your help.
Emily

mellinger

Post by mellinger » 13 May 2008, 09:24

Emily,

the reported performance is almost certainly with adaptation turned on.

There is large variation in mu rhythm amplitude even between sessions of the same subject, due to differences in electrode position, electrode impedances, or day-to-day variations in EEG amplitude.

For this reason, good performance cannot be expected without calibration, i.e. automatic adaptation to mu rhythm amplitude within a session. When you want to avoid continuous online adaptation for methodological reasons, I suggest that you separate each session into a calibration period of at least 20 trials during which adaptation is switched on; then, you may switch off adaptation, using the normalizer gain and offset values chosen by the adaptation algorithm from that initial calibration period.
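The suggested procedure can be sketched as follows (an illustrative Python sketch, not BCI2000 code; it assumes the adaptation targets a zero-mean, unit-variance control signal, so the offset is the mean and the gain is the reciprocal standard deviation of the calibration trials):

```python
import statistics

# Hedged sketch: estimate offset and gain from a calibration period of
# trial amplitudes, then hold them fixed for the rest of the session.
def calibrate(trial_amplitudes):
    offset = statistics.mean(trial_amplitudes)
    gain = 1.0 / statistics.stdev(trial_amplitudes)
    return offset, gain

# Hypothetical amplitudes from an initial calibration period.
calibration_trials = [8.0, 10.0, 12.0, 9.0, 11.0]
offset, gain = calibrate(calibration_trials)
# After calibration, switch adaptation off and keep these values fixed.
```

With at least 20 calibration trials, the estimates are averaged over enough data that a single noisy trial does not dominate them.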

HTH,
Juergen

aloplop
Posts: 41
Joined: 03 Sep 2008, 07:20

Post by aloplop » 18 Nov 2008, 11:37

Hello,

as was suggested in the last post, for my online sessions I perform 2 or 3 runs of about 20 trials each in order to calibrate the normalizer before I stop calibrating and perform the real session (which is 10 runs of 20 trials each).

However, I have noticed that if I stop calibration at the end of a run, a kind of bias appears, so that the cursor in the Cursor Task is more sensitive to up than to down, or vice versa. So I wonder whether I should instead stop calibrating the normalizer only once I hit the upper and the lower target correctly in two consecutive trials (after having performed some more trials).

Also, I have tried to perform the sessions with calibration, but I feel it doesn't help me as much as it should, so I decided to perform them without calibration. After some sessions, the best result WITHOUT calibration was about 80% accuracy.

Do you have any recommendations?

Thanks.

mellinger

Post by mellinger » 19 Nov 2008, 07:51

Hi,

due to all kinds of noise, the normalizer's computed offset and gain may differ from the actual values, which will be perceived as a "bias". Also, some additional form of bias is inherent in the fact that amplitudes do not follow a symmetric distribution (they can't go below zero, see http://en.wikipedia.org/wiki/Rayleigh_distribution for more details). Thus, noise will affect "up" trials differently from "down" trials, especially if the amplitude approaches zero for one of the conditions.
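The asymmetry can be illustrated with a small simulation (hedged, synthetic data only): a Rayleigh-distributed amplitude is the magnitude of two independent Gaussians, and its mean exceeds its median, i.e. the distribution is skewed away from zero rather than symmetric.

```python
import math
import random

random.seed(1)

# Hedged illustration: band-power amplitudes are nonnegative and skewed
# (Rayleigh-like). Generate Rayleigh samples as the magnitude of two
# independent standard Gaussians.
samples = [math.hypot(random.gauss(0, 1), random.gauss(0, 1))
           for _ in range(10000)]

mean = sum(samples) / len(samples)
median = sorted(samples)[len(samples) // 2]
# The distribution is skewed right: the mean lies above the median,
# so noise pushes "up" and "down" trials asymmetrically.
print(mean > median)
```

For sigma = 1, the theoretical mean is sqrt(pi/2) ≈ 1.25 while the median is sqrt(2 ln 2) ≈ 1.18, so the gap is systematic, not a sampling artifact.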

When you set the normalizer's buffer length to a low value (corresponding to a small number of past trials), then each single trial will have a comparably large effect on the computed offset and gain, resulting in more noise there.
On the other hand, using large normalizer buffers will result in less noisy parameter estimation but will require that the user produces consistent brain signals over a longer period of time.
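This tradeoff can be sketched numerically (a hedged simulation with synthetic data and made-up function names, not the Normalizer's actual buffering code): estimating the offset as the mean of the last N trials, a short buffer produces visibly noisier estimates than a long one.

```python
import random
import statistics

random.seed(0)

# Synthetic trial amplitudes with a true mean of 10.0.
trials = [random.gauss(10.0, 2.0) for _ in range(200)]

# Hedged sketch of a normalizer buffer: the offset estimate at each point
# is the mean of the last buffer_len trials.
def rolling_offsets(data, buffer_len):
    return [statistics.mean(data[i - buffer_len:i])
            for i in range(buffer_len, len(data) + 1)]

short_buffer = rolling_offsets(trials, 5)    # reacts fast, but noisy
long_buffer = rolling_offsets(trials, 50)    # smooth, but adapts slowly

# The short buffer's offset estimates fluctuate more around the true mean.
print(statistics.stdev(short_buffer), statistics.stdev(long_buffer))
```

The same reasoning applies to the gain estimate; the buffer length is therefore a compromise between tracking day-to-day (or within-session) changes and keeping the estimates stable.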

If you don't want to use calibration at all, you will need to obtain appropriate offset and gain values from an analysis of your initial session. Still, these values might change on a day-to-day basis, so I think it makes sense to let the normalizer do this during a calibration period.

--Juergen
