Hi everyone! How are you?
I made my own stimulus presentation using BCPy2000. Up to now, everything works "almost" fine. However, I need to know where the user is looking in order to determine the target that he/she wants to spell.
My system consists of g.MOBIlab as my signal source, P3SignalProcessing as my processing module, and BciApplication.py as my application module.
I know that the processing module consists of several processes, and according to THIS topic, the OutputStates include, among others, StimulusCodeRes. Are these the variables that let me know where the user is looking?
I don't know whether I should process the incoming signal from the processing module in my application module, or whether there are state variables from the processing module that already carry this information (from the LinearClassifier, for example).
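To make it concrete, this is roughly what I imagine doing in my application module, assuming the processing module ends up handing me one accumulated classifier score per stimulus code (the helper name and the one-score-per-StimulusCode layout are my own assumptions, not something I found in the P3SignalProcessing docs):

```python
# Hypothetical helper: given one accumulated classifier score per
# stimulus code, pick the code the user was presumably attending to.
def pick_target(scores):
    """Return the 1-based StimulusCode with the highest score."""
    best_index = max(range(len(scores)), key=lambda i: scores[i])
    return best_index + 1

# e.g. pick_target([0.1, 2.3, 0.4]) -> 2
```

Is this the right idea, or does the processing module already expose the winning code in a state?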
So far, to test the system, I have used the SignalGenerator module; the results are in one of the attached files.
I mean, I obtain the correct StimulusCode from P3SignalProcessing, but is this what I should be working with?
On the other hand, I said my application works "almost" fine because sometimes I get a message like this:
Collision in state EventOffset: Old Value: 512 New Python Value: 509 New BCI Value: 0
(You can see the timeline in the attached file.)
debug warning in self.db:
{'MultipleTransition': 5, 'state collision in EventOffset': 43}
Also, using BCI2000 v3.05 (the latest), I get these messages:
Execution required more than a sample block duration. (twice).
Execution required more than a sample block duration. (9 times).
I'm using a 256 Hz SamplingRate with SampleBlockSize = 8.
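If I've understood the warning correctly, it means a Process() call took longer than one sample block's duration, which with my parameters would be (just the timing arithmetic, nothing module-specific):

```python
# Each Process() call should finish within one sample block's duration.
sampling_rate_hz = 256
sample_block_size = 8
block_duration_ms = sample_block_size / float(sampling_rate_hz) * 1000.0
print(block_duration_ms)  # 31.25 ms per block
```

So my Python code would have a budget of about 31 ms per block; is that the correct reading of the warning?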
Perhaps this won't affect my system, haha!
Please guys, I need help.
Thank you so much, really.