three mental tasks with mu-rhythm

Forum for discussion on different brain signals
engicri
Posts: 6
Joined: 24 Nov 2009, 07:58

three mental tasks with mu-rhythm

Post by engicri » 04 Dec 2009, 10:36

Hi to all! :D

I am new to BCI technology. I got a g.MOBIlab+ a week ago and I am trying to use the BCI2000 software. I would like to work with the mu rhythm, but I need to discriminate three mental tasks instead of two. :(
I would like to know if there is any way to discriminate three states instead of two.

Thanks in advance,
Cristina

jawilson
Posts: 109
Joined: 28 Feb 2005, 16:31

Post by jawilson » 07 Dec 2009, 03:40

Hello Cristina, and welcome to the BCI2000 community :-). To directly answer your question, yes, it is certainly possible to do something like this using BCI2000. However, it can be a complicated procedure to use even ONE mental state for BCI control, much less two or more. Fortunately, we provide a comprehensive tutorial on setting up BCI2000 for Mu Rhythm experiments on the BCI2000 wiki at:

http://www.bci2000.org/wiki/index.php/U ... I_Tutorial

This tutorial explains how to configure BCI2000 to control a cursor in one dimension, i.e., to use real or imagined movements to move the cursor up or down. Once you are confident that you understand this process, then you can move on to more complicated configurations using multiple mental states.

Just so that I am sure that I understand exactly what you are trying to accomplish, would you mind describing in more detail what you have in mind, including the different mental states you are trying to discriminate?

Please let me know if anything is unclear, or if you have other questions. Good luck!
Adam

engicri
Posts: 6
Joined: 24 Nov 2009, 07:58

three mental tasks with mu-rhythm

Post by engicri » 07 Dec 2009, 17:42

Dear Adam,

thank you for your welcome message and reply to my post.

Actually, I have read the tutorial on Mu Rhythm but I haven't found what I needed. :(

More precisely, I am working on the control of external devices by BCI. To this end, I need to be able to discriminate three mental tasks. Since I am not an expert in signal processing, I have no idea which mental tasks would be the most appropriate to discriminate.
In the state of the art, I have found a protocol implemented by Millán et al. for the guidance of a powered wheelchair which takes into account 5 mental tasks: relax, imagination of right and left hand (or arm) movements, "cube rotation", subtraction, and word association [J. del R. Millán, F. Renkens, J. Mouriño, W. Gerstner (2004): "Noninvasive Brain-Actuated Control of a Mobile Robot by Human EEG". IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, June 2004].
According to this work, during a training period the user tries to execute each of those mental tasks and, at the end of this period, the three tasks that he/she can execute most easily are selected and associated with the guidance commands of the wheelchair. The classifier is a statistical one, but I can't tell you more details about the signal processing that was developed. I was thinking of taking their work as a starting point for my research.

I assume that having three well-recognizable mental tasks is already a tough problem on its own, so I wonder if anybody is working on it.

Thanks again,
Cristina

jawilson
Posts: 109
Joined: 28 Feb 2005, 16:31

Re: three mental tasks with mu-rhythm

Post by jawilson » 10 Dec 2009, 12:28

Cristina,
Sorry for the delay in getting back to you; I have been in China, and just finished traveling. Incidentally, I met with Dr. Millan at this conference, and saw much of his work on the wheelchair control project. I will make a suggestion as to how to proceed here, and you can let me know if it sounds like what you want to accomplish.

First, some background on how the mu rhythm task works in BCI2000. As you likely know, there are changes in particular frequency bands associated with movements; for example, the power in the mu and/or beta bands decreases on electrode C3 during right hand/arm movements. BCI2000 uses a filter called the "linear classifier" to construct the output signal from combinations of different frequency bands and channels. A few lines in the LinearClassifier parameter in BCI2000 might look something like this:

Input Channel | Bin | Output Channel | Weight
---------------------------------------------------------
6 | 10Hz | 1 | -1
7 | 14Hz | 1 | 1
10 | 20Hz | 2 | 1
14 | 24Hz | 3 | 1
---------------------------------------------------------

This indicates that 3 output signals will be generated (the "Output Channel" column). Output channel 1 is the sum of the power in the 10 Hz bin on channel 6 and the 14 Hz bin on channel 7; also notice that channel 6 is given a weight of -1. If channels 6 and 7 corresponded to channels C3 and C4, then the first two lines could be used to control horizontal cursor movement, i.e., real or imagined movement with the right hand would move the cursor right, and with the left hand would move it left. Similarly, output channel 2 comprises only channel 10, using the 20 Hz bin. This might be channel Cz, recording changes in power during foot movement. Finally, output 3 is channel 14 using the 24 Hz bin, which might be some other imagined movement or activity, depending on the electrode location.
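To make the arithmetic concrete, here is a small Python sketch (not BCI2000 code; the band-power values are made up) of how each output channel is formed as a weighted sum of the selected channel/bin features:

```python
import numpy as np

# Hypothetical band-power feature matrix: rows are input channels 1-16,
# columns are 1 Hz frequency bins (bin index = frequency in Hz).
power = np.zeros((16, 30))
power[6 - 1, 10 - 1] = 2.0   # channel 6, 10 Hz bin
power[7 - 1, 14 - 1] = 5.0   # channel 7, 14 Hz bin
power[10 - 1, 20 - 1] = 1.5  # channel 10, 20 Hz bin
power[14 - 1, 24 - 1] = 0.5  # channel 14, 24 Hz bin

# Each tuple mirrors one row of the LinearClassifier table above:
# (input channel, frequency bin in Hz, output channel, weight)
classifier = [
    (6, 10, 1, -1.0),
    (7, 14, 1, +1.0),
    (10, 20, 2, +1.0),
    (14, 24, 3, +1.0),
]

# Each output channel is the weighted sum of its contributing features.
control = np.zeros(3)
for ch, freq, out, w in classifier:
    control[out - 1] += w * power[ch - 1, freq - 1]

# control -> [3.0, 1.5, 0.5]: three independent control signals
```

Output 1 combines two electrodes with opposite signs (a C3/C4 difference), while outputs 2 and 3 each track a single feature.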

Therefore, with this in mind, it is theoretically possible to configure any number of outputs, based on any type of input. It is your job to determine which signal features (i.e., channel(s) and frequency bins) are correlated with these tasks. We typically do this using a training period, similar to the one you mentioned in the article. This is done using the BCI2000 stimulus presentation task. This application presents different captions, pictures, or sounds to the subject for a short period of time, during which they perform the task indicated, and relax in between. For example, in a typical test session, we will present "LEFT HAND", "RIGHT HAND", "BOTH HANDS", and "BOTH FEET". Then, using the BCI2000 offline analysis tool, it is possible to determine which electrodes and frequency bins were correlated with the actions. It would be very simple to modify this task to perform your new experiment. For example, you could add stimuli in which the subjects performed different mental tasks such as subtraction or "cube rotation", and so on. Then the offline analysis tool will tell you the channels and frequencies that changed with those activities. Once you have that information, it is possible to configure the BCI control application so that each mental state changes a different output signal.
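The screening step described above boils down to ranking channel/frequency-bin features by how strongly they separate the conditions. The OfflineAnalysis tool reports this as an r-squared value; a minimal sketch of that statistic, with invented per-trial power values, might look like:

```python
import numpy as np

def r_squared(feature, labels):
    """Squared Pearson correlation between a per-trial feature value
    (e.g. mu-band power on one channel/bin) and the binary condition label."""
    r = np.corrcoef(np.asarray(feature, float), np.asarray(labels, float))[0, 1]
    return r * r

# Made-up per-trial mu power on C3: lower during imagined right-hand movement
rest = [8.1, 7.9, 8.4, 8.0, 7.8]
task = [5.2, 5.6, 4.9, 5.4, 5.1]
labels = [0] * len(rest) + [1] * len(task)

rsq = r_squared(rest + task, labels)  # close to 1: a strong control feature
```

Features with r-squared near 1 discriminate the two conditions well; features near 0 carry no task-related information and should not be entered into the linear classifier.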

I realize that this is a lot of information, particularly if you are just getting started. Therefore, if you like, I can create some parameter files for you that can help you get started, and work out the details of your experiment. To do so, I would need to know exactly what it is that you want the subjects to do during the different mental states.

Please let me know if you have other questions, and once you get this information to me, I can start working on your parameter files.

Thank you,
Adam Wilson


engicri
Posts: 6
Joined: 24 Nov 2009, 07:58

Post by engicri » 12 Dec 2009, 19:18

Adam,

I do appreciate your help. Actually, your post is really interesting and has been very useful to me.

You wrote "...This indicates that there are 3 output signals that will be generated (the "Output Channel" column)...". So, if I have understood correctly, three outputs are available from the BCI2000 classifier and I can obtain three different commands directly from it. In addition, while for the first two commands the parameters have already been identified, I would need a training period to identify the parameters associated with the third mental task, namely the channel(s) and frequency bins correlated with that task. This could be achieved using the BCI2000 stimulus presentation task and the BCI2000 offline analysis tool.
Now, the question is: WHICH task should be chosen, and HOW can it be identified?
Obviously, it would be convenient to choose a mental task that is easy to execute and easy to discriminate from the other two (which I suppose to be the imagery of right hand/arm movement and of left hand/arm movement).

In order to achieve that I could follow two different directions.

The first one is to follow Millán's approach. According to his work, I could choose the third mental task from among the additional tasks he proposed, e.g., imagery of cube rotation, mental subtraction, or word association. I wonder how complicated it would be for the user to think about a rotating cube in order to turn left, but this is not an issue... at least for now :-)
The tricky part is that even if I decide to use one of Millán's mental tasks, I don't know which part(s) of the brain would be involved in these "non-motor" activities, which means I don't know which electrode(s) to associate with the input channel of the classifier.

Alternatively, I could use a third motor task. Do you think this would be easy to execute? If so, I could choose:
1) imagery of movement of the left arm [foot]
2) imagery of movement of the right arm [foot]
3) imagery of movement of one foot [arm]
Could there be any problem concerning the spatial proximity of the two brain areas associated with hand and foot movements?

To summarize, I think that the imagery of foot or arm movements is easier than that of a cube rotation, a mental subtraction, or word association, from the user's point of view. Nevertheless, the fact that Millán decided to choose a non-motor task suggests that perhaps my opinion is not completely correct :-)

What do you think about that?

Thank you again for your time!

Cristina

jawilson
Posts: 109
Joined: 28 Feb 2005, 16:31

Post by jawilson » 13 Dec 2009, 11:43

First, you could have any number of output channels from the linear classifier. For example, if you developed some method to record individual finger movements from individual electrodes, you could configure the classifier to have 10 output channels, one per finger. It all depends on your application.

As for the question of what the tasks should actually be, that is completely up to you. The arm/foot imagery tasks are relatively straightforward to get working, with results visible within a few minutes. I imagine that the more complex the task, the more trials and time will be required to see results, and probably a denser EEG montage as well. However, if you believe that the task produces a change in the EEG signals in the frequency domain (such as the sensorimotor rhythms), you should be able to see this change using the stimulus presentation task. Just to reiterate the workflow:

1. Have subject perform the tasks using the StimulusPresentation application.
2. Use the OfflineAnalysis tool to determine the channels/frequency bins that change independently with each task.
3. Start the application (such as the 3-dimensional cursor movement task), and configure the linear classifier using the OfflineAnalysis results so that each output channel corresponds to a particular mental task.

In your case, you could discriminate left hand vs. right hand vs. feet to get 3 different states right away, without the other more complex mental tasks. If you have a higher-density EEG montage, particularly with channels C5 and C6, you might try facial movements as well, such as sticking out the tongue or "kissing" with the lips. The problem with facial movements is that you will likely see a lot of EMG noise in the EEG, so you may have to use imagined facial movements from the start.

So, does all of this make sense? Like I said, if you think of specific tasks you want the subjects to perform, let me know, and I can create some BCI2000 parameter files for the stimulus presentation task for you.

Adam

engicri
Posts: 6
Joined: 24 Nov 2009, 07:58

Post by engicri » 14 Dec 2009, 07:58

Adam,

following your advice, and considering that I could have more than 3 output channels from the linear classifier, I have decided to consider 4 different states: imagery of movement of the left hand vs. right hand vs. left foot vs. right foot.

So, could you create the appropriate parameter files for the stimulus presentation task for me?

Two other questions, please.

What are the necessary parameters for the stimulus presentation task? Are they perhaps related to electrode location(s), input channel, frequency bin, and weight?

Finally, do the subject who performs the stimulus presentation task and the one who performs the defined 4 mental tasks for the control of an external device have to be the same person? How much can the parameters change from one person to another?

I really thank you again,

Cristina

jawilson
Posts: 109
Joined: 28 Feb 2005, 16:31

Post by jawilson » 17 Dec 2009, 11:05

Cristina,

First, it is likely not possible to discriminate left foot vs. right foot. Electrode Cz will record changes in activity for both actions, and you will not be able to tell which foot was moving. This is why we typically have subjects move both feet at the same time. However, it is certainly feasible to have 3 output channels based on left hand, right hand, and feet, recorded from C4, C3, and Cz. There is actually a sample parameter file that contains the appropriate StimulusPresentation task settings for this experiment. To load it, start BCI2000 with the StimulusPresentation task, the DummySignalProcessing module, and the g.MOBIlab+ amplifier (e.g., in the BCI2000 folder, start batch/StimulusPresentation_gMOBIlabPlus.bat), and load the parameter files:

parms/fragments/amplifiers/gMOBIlab.prm
parms/fragments/mu_tutorial/InitialMuSession.prm

This tells the StimulusPresentation app to display left, right, up, and down arrows, corresponding to the left hand, right hand, both hands, and both feet. If you want it to display text instead of the arrow icons, go to the Application tab in the Config window, check the box that says "Present Captions", and UNCHECK the box that says "Present Icon Files." Then, if you want to change the text, you can edit the Stimuli matrix in the Config window to present different or additional captions.

For your additional questions:
"What are the necessary parameters for the stimulus presentation task? Are they perhaps related to electrode location(s), input channel, frequency bin, and weight?"
The stimulus presentation task does not require any of these parameters. It only displays time-locked stimuli, which allows you to perform an offline analysis in Matlab to determine the electrodes and features that should be used for the cursor movement task.
"Finally, do the subject who performs the stimulus presentation task and the one who performs the defined 4 mental tasks for the control of an external device have to be the same person? How much can the parameters change from one person to another?"
You will want to perform the StimulusPresentation screening for every subject. Generally, we have a pretty good idea of the channels and frequencies that we can expect to see change for a given task, e.g., C3, C4, Cz and the mu/beta rhythms. However, the exact frequency bins that change with the task are likely different, and can sometimes vary from day to day with the same person. Therefore, you should always start with a screening task to get the best settings for each individual.

Let me know if you have more questions!
Adam

EliGC
Posts: 26
Joined: 29 Feb 2008, 10:33

Trying to find changes in mu and beta rhythms in a sensorimotor task

Post by EliGC » 06 Apr 2010, 15:46

Hello.


First of all, sorry about my long post. :?

Two weeks ago, I started with the mu tutorial, implementing the same 4-task stimulation that is shown there. :D (We had already experimented for two years with P300, but sensorimotor rhythms are new to us :oops: ).

I have a couple of questions about the results that I found:

1. As the tutorial says, it is impossible to achieve the quality of the results shown there, but I should see something similar when I analyze the signal with the offline analysis GUI. I can't determine whether my results are similar to yours, because sometimes I see patterns that show desynchronization on the contralateral side, but sometimes not :cry: (for example, left hand movement (which is terrible), both hands movement, and both feet movement just show frequencies between 3-6 Hz or above 50 Hz). I will describe my experiment, and please, if you see that I am doing something wrong that could prevent me from obtaining that kind of desynchronization, let me know.

I am using 16 electrodes (around/on C3, Cz, C4) with a g.EEG cap and a USBamp. I tested 5 subjects. The subject has to move the right hand, left hand, both hands, or both feet according to the stimulus shown on the screen. I use the basic stimulus presentation parameters (InitialMuSession); I only modified the number of repetitions per task, setting it equal to 8. In this way, the subject had to do each task 8 times in random order, and the number of sequences is 2.

2. When I try to see what is going on in each channel where the r^2 is maximum, sometimes I find synchronization instead of desynchronization between the rest and movement conditions. For example: on channel C4 at frequency = 10 Hz, the left hand movement condition has a larger amplitude than the rest condition.

3. I would like to know which time segments of the signal are being analyzed and compared, for example when condition 1 = 0 (rest) and condition 2 = 1 (left hand movement). I suppose that you compare the mean of the period when the StimulusCode is 0 (rest) with the mean of the period when the StimulusCode is 1.

I mean, is the offline analysis GUI comparing the average of the 2 seconds of movement stimulation with the average of the 1-1.5 seconds of rest, over all trials? Or is it different?

Do you think I could improve this analysis by considering more time before and after the movement? I ask because many papers say that desynchronization and synchronization occur at specific times during, and even before and after, the movement.

4. I tried to do more repetitions of all the tasks (20 20 20 20, x4 times) and I didn't find good results :( (no desynchronization on the contralateral side). Is it better to increase the number of repetitions to average, or could the tiredness factor come into play here?

Thank you so much for your help, I really appreciate it!!!

Eli

jawilson
Posts: 109
Joined: 28 Feb 2005, 16:31

Post by jawilson » 08 Apr 2010, 10:27

Hello,
In response to your long email, here is a short one :-):
Can you somehow send me one or more of your data files, so that I can take a quick look to see if I catch anything? Do you have an FTP site you can give me access to, where you could put them? Something like
http://www.yousendit.com/ would work as well. Let me know, and I can take a look at the files.
Adam

engicri
Posts: 6
Joined: 24 Nov 2009, 07:58

Real Time in Matlab

Post by engicri » 16 Jun 2010, 08:27

Hi to all (in particular to Adam :) who helped me in the past and who I hope will do it again :oops: )!

I have a g.MOBIlab+ / BCI2000 system in my lab.
Following Adam's previous suggestions, I performed the initial mu session to enable the classification of 4 mental tasks (left hand vs. right hand vs. both hands vs. both feet). All seems to be good (several tests are still under analysis).

Now, my aim is to translate these recognized mental tasks into commands to drive a simulated wheelchair running in Matlab.
More precisely, the Matlab simulator takes as input a variable assuming the values 00, 01, 10, 11, each associated with a driving command (turn left, turn right, go forward, stop). Until now, I have been able to drive the simulated wheelchair in real time by pressing the associated arrow keys on the keyboard (left, right, up, down respectively). Now, I would like to substitute these keys with the mental tasks executed by the user (coming from BCI2000).
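Once each mental task drives its own classifier output, the glue can be as simple as taking the strongest output and emitting its 2-bit code. A minimal sketch in Python (the task names, ordering, and mapping are purely illustrative; the actual bridge would live in Matlab and the real interface may differ):

```python
# Hypothetical mapping from mental task to the simulator's 2-bit command
# codes described above; names and order are illustrative only.
TASK_TO_COMMAND = {
    "left hand": 0b00,   # turn left
    "right hand": 0b01,  # turn right
    "both hands": 0b10,  # go forward
    "both feet": 0b11,   # stop
}
TASKS = list(TASK_TO_COMMAND)  # order matches the classifier output channels

def decode_command(control_signals):
    """Pick the task whose classifier output is largest, return its code."""
    winner = max(range(len(TASKS)), key=lambda i: control_signals[i])
    return TASK_TO_COMMAND[TASKS[winner]]

decode_command([0.2, 0.1, 0.9, 0.0])  # strongest output is "both hands" -> 0b10
```

In practice one would also want a threshold or dwell time before issuing a command, so that a momentary fluctuation in the control signals does not steer the wheelchair.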

So, I have the discriminated mental tasks from BCI2000, and I have the simulated wheelchair ready to receive and execute 4 commands in Matlab, but I don't have the bridge between these two agents.

In the BCI2000 tutorial I read that the only way to interface BCI2000 with Matlab in real time is through FieldTrip. Did I understand correctly?

In this case, can I perform the signal filtering and classification in BCI2000 and send the discriminated task to Matlab as the coded value of a variable? Or do I have to send the raw signals from BCI2000 to Matlab and process them there?

Thank you in advance for your help.

Cristina
