Yes, that is clearer. I actually answered that question previously, but I wasn't clear the first time.
First, I should say that you are not the target audience for the P300Classifier. The P300Classifier is designed for people who do not stray from the standard protocol. If you write your own custom artifact rejection filter and want to use it in your offline analysis, then your offline analysis will necessarily involve a lot of customization, and the P300Classifier will not fit into it.
The P300Classifier expects raw (unprocessed) data. I didn't write the P300Classifier, but a brief look at the code suggests that the first thing it does is run the data through a common average reference. So if you really want to use the P300Classifier, and you really want to run your artifact rejection filter on the data first, you have to take the raw data, process it offline up to and including your artifact rejection step, undo the other processing steps, and save the result to a .dat file. This is complicated.
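For reference, a common average reference just subtracts the across-channel mean from every channel at each sample. Here is a minimal sketch (the function name and toy data are mine, not taken from the P300Classifier source):

```python
import numpy as np

def common_average_reference(eeg):
    """Re-reference EEG by subtracting the mean across channels.

    eeg: array of shape (channels, samples).
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

# Toy data: 3 channels, 4 samples.
x = np.array([[1.0, 2.0, 3.0,  4.0],
              [2.0, 4.0, 6.0,  8.0],
              [3.0, 6.0, 9.0, 12.0]])
y = common_average_reference(x)
# After CAR, the mean across channels is zero at every sample.
```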
1. Build a command-line version of your filter, along with any other filters you want applied to the data before it reaches the P300Classifier.
2. Run these filters offline to take your data from the raw .dat format into an intermediate processed format.
3. Undo the actions of every filter other than your artifact rejection filter.
4. Write the processed data back to a .dat file.

Once that is all done, you can load the data in the P300Classifier.
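To make the "undo" step concrete: a linear spatial filter (a matrix multiply across channels) can be undone by applying the inverse matrix, provided that matrix is invertible. This is a sketch under that assumption; the matrix and function names are illustrative, not BCI2000 code:

```python
import numpy as np

# Hypothetical invertible spatial filter (channels x channels).
spatial = np.array([[ 1.0, -0.5,  0.0],
                    [-0.5,  1.0, -0.5],
                    [ 0.0, -0.5,  1.0]])

def apply_spatial(eeg, w):
    # eeg has shape (channels, samples).
    return w @ eeg

def undo_spatial(eeg, w):
    # Recover the pre-filter signal by applying the inverse matrix.
    return np.linalg.inv(w) @ eeg

x = np.random.default_rng(0).standard_normal((3, 100))
roundtrip = undo_spatial(apply_spatial(x, spatial), spatial)
# roundtrip matches the original signal up to floating-point error.
```

Note that not every step can be undone this way: the common average reference itself is not invertible (the across-channel mean is discarded), which is one more reason the P300Classifier insists on applying it to raw data itself.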
Before you go and do all of that, please reply here with a general idea of how your artifact rejection filter works. If it uses ICA, then the approach above is not a good fit for you.
Also, I recommend not doing the above anyway. If you want to use your filter in the offline analysis, that's fine; use the techniques I described. But instead of undoing the actions of the other filters, writing a .dat file, and then using the P300Classifier, you would be better off leaving the data in its processed state and using custom signal analysis techniques to build your classifier.
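As a sketch of that last suggestion, here is a minimal linear classifier trained directly on processed epochs, skipping the .dat round trip entirely. All names, shapes, and the synthetic data are mine; a plain least-squares discriminant stands in for the stepwise LDA the P300Classifier uses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical processed epochs: each row is a flattened
# (channels x samples) window that already passed your artifact
# rejection; labels are 1 = target flash, 0 = non-target.
n, d = 200, 48
X = rng.standard_normal((n, d))
y = (rng.random(n) < 0.2).astype(float)   # targets are the minority class
X[y == 1] += 0.8                          # inject a separable P300-like effect

# Least-squares linear discriminant: regress the labels on the
# features (plus a bias column) and threshold the fitted scores.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(n)], y, rcond=None)
scores = np.c_[X, np.ones(n)] @ w
acc = np.mean((scores > 0.5) == (y == 1))
```

The point is that once the data lives as labeled epochs in memory, any classifier you like can replace the P300Classifier, and nothing needs to be undone or resaved.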