fear detection from eeg sensor

tcbsi
Posts: 1
Joined: 02 Aug 2014, 19:30

fear detection from eeg sensor

Post by tcbsi » 02 Aug 2014, 19:31

I'm a newbie in the neurofeedback research field. I use a NeuroSky MindWave Mobile sensor to collect all my data. I want to ask you: what are the basic steps one must follow to detect the emotion of fear (detect fear values every second)? First I import the CSV file into Matlab. What are the next steps to extract this emotion?

PS.I have attached a sample csv file here https_nospam_www.wetransfer.com/downloa ... 258/3aa7e3

Thank you in advance!

boulay
Posts: 382
Joined: 25 Dec 2011, 21:14

Re: fear detection from eeg sensor

Post by boulay » 02 Aug 2014, 23:42

I don't think the Mindwave is the appropriate toy/tool for the job. The only fear-related signal that scientists in the field will believe you can get out of a NeuroSky MindWave is changes in muscle activation, e.g. changes in facial expression. You may as well use EMG electrodes.

Anyway, the answer to your question is the same as the answer to any "How do I detect brain state X?"

Assuming you had adequate signals to extract a real state-related brain signal then you'd need a large corpus of data containing segments where you know the subject was in state X and segments where you know they were not in state X, but their experiences were otherwise similar. How do you know someone is experiencing X? How can you have near-identical stimuli where you know one is generating X with high probability and the other is not? With "motor imagery", it is quite simple. You tell them to perform motor imagery and you trust that they are performing the task as demanded. With fear, it is not so simple.
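Once you have condition markers you trust, the bookkeeping is just cutting the continuous recording into labeled segments. A minimal sketch in Python (the function name, the fake 512 Hz signal, and the event positions are all made up for illustration; your actual labels have to come from the experimental design, as described above):

```python
import numpy as np

def make_epochs(signal, fs, event_samples, labels, epoch_sec=1.0):
    """Cut a continuous 1-D signal into fixed-length epochs starting at
    each event sample, paired with the experimenter-assigned labels
    (e.g. 1 = "state X" condition, 0 = matched control condition)."""
    n = int(epoch_sec * fs)
    epochs, kept = [], []
    for start, lab in zip(event_samples, labels):
        if start + n <= len(signal):  # drop epochs running past the end
            epochs.append(signal[start:start + n])
            kept.append(lab)
    return np.array(epochs), np.array(kept)

# toy example: 10 s of fake EEG at 512 Hz with 4 marked events
fs = 512
sig = np.random.randn(10 * fs)
X, y = make_epochs(sig, fs,
                   event_samples=[0, 1024, 2048, 3072],
                   labels=[1, 0, 1, 0])
```

The hard part is not this code; it is making sure the two label values really differ only in the state you care about.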

Assuming you have good data, then you need to do feature extraction. Exactly how you do this depends entirely on the state you are trying to detect and if you know anything about it. Typically, you either look for signals that are time-locked to the presentation of a stimulus, or you look for signals that are not time-locked. For the latter, it is likely you will need to do a time-frequency transform of your data. There are many other features you can extract like connectivity metrics, laterality metrics, phase-amplitude-coupling metrics... etc. It helps if there is some scientific background for expecting certain signals with state X.
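For the non-time-locked case, the most common starting point is band power from a time-frequency estimate. A sketch using Welch's method from SciPy (the band choices and the fake one-second epoch are placeholders; whether these bands are informative for your state is exactly the scientific-background question above):

```python
import numpy as np
from scipy.signal import welch

def band_power(epoch, fs, band):
    """Average power of one epoch inside a frequency band (Hz),
    estimated with Welch's method."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), fs))
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# one feature vector per epoch: power in classic EEG bands
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
fs = 512
epoch = np.random.randn(fs)  # 1 s of fake data standing in for one segment
features = [band_power(epoch, fs, b) for b in bands.values()]
```

Connectivity, laterality, or phase-amplitude-coupling features would each replace `band_power` with their own computation over the same epochs.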

Once you have the features for each segment, and you have the labels for each segment, then you use your favourite machine-learning tools to find features that are different between labels.
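With a feature matrix and a label vector, the classification step is a few lines in any ML toolkit. A sketch with scikit-learn (the random feature matrix is fabricated so the example runs; substitute your real per-epoch features and labels):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# fake data: 40 epochs x 3 band-power features, binary labels
X = rng.normal(size=(40, 3))
y = np.array([0, 1] * 20)
X[y == 1, 0] += 1.0  # make one feature mildly state-dependent

clf = LogisticRegression()
# cross-validated accuracy; chance level for balanced binary labels is 0.5
scores = cross_val_score(clf, X, y, cv=5)
```

Cross-validation matters here: with few epochs and many candidate features it is very easy to "find" differences that do not generalize.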

Then you can proceed to feedback.
With knowledge of how these features can be aggregated to distinguish between state X and non-X, you then build a chain of filters that can process your incoming signals online in real-time to generate a number (typically called a "control signal") that gives you an idea of whether the subject is in state X or not. You then transform this control signal into some sort of feedback (tone frequency, visual stimulus speed, electrical stimulation amplitude, etc) that the subject must then learn to try to control. Even if they can acquire control, whether or not this has any implication on state X is unknown.
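The filter chain above can be sketched as a causal band-pass filter plus smoothing, applied chunk by chunk as samples arrive (the band, filter order, chunk size, and smoothing constant are all assumptions for illustration; in practice they come from the offline analysis):

```python
import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

fs = 512
# causal band-pass filter for a band assumed to carry the state signal
b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
zi = lfilter_zi(b, a) * 0.0       # filter state carried across chunks
smooth, alpha = 0.0, 0.05         # exponential smoothing of band power

def process_chunk(chunk):
    """Filter one incoming chunk, update the smoothed band power,
    and return it as the control signal driving the feedback."""
    global zi, smooth
    filtered, zi = lfilter(b, a, chunk, zi=zi)
    power = float(np.mean(filtered ** 2))
    smooth = (1 - alpha) * smooth + alpha * power
    return smooth  # map this to tone frequency, stimulus speed, etc.

# simulate 10 chunks of 64 samples arriving in real time
control = [process_chunk(np.random.randn(64)) for _ in range(10)]
```

Keeping the filter state (`zi`) between chunks is what makes this usable online; filtering each chunk independently would add edge artifacts at every chunk boundary.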
