Contributions:AudioExtension

Synopsis

An environment extension which manages multichannel, low-latency audio I/O.

Location

http://www.bci2000.org/svn/trunk/src/contrib/Extensions/AudioExtension

Versioning

Authors

Griffin Milsap (griffin.milsap@gmail.com)

Jordan Powell (jpow7@outlook.com)

Version History

  • 2012/06/11: Initial public release.

Source Code Revisions

  • Initial development: 4095
  • Tested under: 5896
  • Known to compile under: 5896
  • Broken since: --

Todo

  • Fix Known Issues
  • Add per-sample resolution to envelopes

Known Issues

  • When using DirectSound, suspending and resuming states can cause the recorded file to drop samples; this can be fixed by suspending and resuming until the audio clears up. Luckily, the AudioExtension plays back what has been recorded, so it's easy to detect when this issue happens; just restart the trial to fix it, or use ASIO, which has no known issues.
  • When compiling in Debug mode, the audio clips and some data may be lost; this DOES NOT occur in Release mode.

Functional Description

Experiments which require audio input or real-time audio synthesis based on system state are now possible with the AudioExtension. This extension is capable of recording multiple channels of audio input, synthesizing tones or noise, and reading encoded audio files. These channels are fed into a mixing matrix which mixes them to multiple channels of audio output. Both input and output are run through a simple filterbank, then their envelopes are extracted and logged into states via the bcievent interface. Audio input and output channels can be recorded losslessly into audio files and can be resynchronized offline. The mixing matrix is a matrix of expressions which can be used to dynamically change audio mixing based on the system state.

Integration into BCI2000

Compile the extension into your source module by enabling contributed extensions in your CMake configuration. You can do this by going into your root build folder and deleting CMakeCache.txt and re-running the project batch file, or by running cmake -i and enabling BUILD_AUDIOEXTENSION. Once the extension is built into the source module, enable it by starting the source module with the --EnableAudioExtension=1 command line argument. (Note that, as explained below, the numeric value matters: it selects the audio host API to be used, and a value of 1 means DirectSound.)
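
As a hypothetical illustration, assuming the SignalGenerator source module is used and BCI2000 was built with BUILD_AUDIOEXTENSION enabled, the module could be launched as:

    SignalGenerator.exe --EnableAudioExtension=1

which starts the extension with the DirectSound host API. The exact module name will differ depending on which source module your experiment uses.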

Building with ASIO support

ASIO is a driver that allows for recording from devices with up to four input channels. It can also provide lower latency than other audio drivers. To compile with ASIO support, visit https://www.steinberg.net/en/company/developers.html and download the ASIO SDK. Extract the downloaded SDK zip file to src/extlib/portaudio and rename it asio. Enable the AudioExtension in CMake and click "Configure". Make sure the "Advanced" option is checked in the CMake GUI and enable PORTAUDIO_ENABLE_ASIO. Click "Generate" and recompile BCI2000. ASIO will now appear as an option under the EnableAudioExtension parameter when BCI2000 is run with the AudioExtension enabled.
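
If you prefer the command line to the CMake GUI, the same two options can in principle be set non-interactively; a rough sketch (the cache entry names are the ones mentioned above, but the exact invocation may differ for your generator and build directory) might look like:

    cmake -DBUILD_AUDIOEXTENSION=ON -DPORTAUDIO_ENABLE_ASIO=ON <path-to-build-directory>

Then regenerate and recompile BCI2000 as described above.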

Block Diagram

(Block diagram image: AudioExtensionBlockDiagram.png)

Parameters

The AudioExtension is configured in the Source tab within the AudioExtension section. The configurable parameters are:

EnableAudioExtension

Enables/Disables the AudioExtension. This parameter performs double-duty as an audio host API selector. The following values of this parameter are valid. NOTE: Not all audio APIs are available on all platforms.

    • [0] - Disabled
    • [1] - DirectSound
    • [2] - MME
    • [3] - ASIO
    • [4] - SoundManager
    • [5] - CoreAudio
    • [6] - Disabled
    • [7] - OSS
    • [8] - ALSA
    • [9] - AL
    • [10] - BeOs
    • [11] - WDMKS
    • [12] - JACK
    • [13] - WASAPI
    • [14] - AudioScienceHPI

AudioMixer

The AudioMixer is represented as an N x N matrix, where N is the number of output channels on the selected device.

If the device has 2 inputs and 2 outputs, the user must open the AudioMixer and set the matrix size to 2 x 2. To specify which input will be mapped to a specific output, place a 1 at the intersection of the corresponding row (input) and column (output).

For the simplest configuration, set the number of inputs and outputs and place a 1 along the diagonal from the top-left corner to the bottom-right corner:

   row:1, column:1; row:2, column:2; row:3, column:3; ... , row:(N-1), column:(N-1); row:N, column:N;


By default, the matrix will have numeric labels for all rows. To specify a different label, double-click on the label and type the desired input type.

Below is a list of valid input labels (a hypothetical example matrix follows the list):

  • X - This is automatically interpreted as INPUT[X], where X is an input channel on the device.
  • INPUT[X] - This input will come from channel X on the sound capturing device.
  • FILE[X] - This input will come from channel X in the specified AudioInputFile listed in the Source Tab of BCI2000 Config.
  • TONE[X] - This input will be a synthesized sine wave with the frequency of X Hz.
  • NOISE[X] - This input will be generated white noise at X Hz. NOTE: NOISE[] is white noise at the audio sampling rate (which defaults to 44100)
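
As a purely hypothetical example, a 2 x 2 AudioMixer that routes the first sound-card input to output 1 and a 440 Hz test tone to output 2 could be labeled and filled in as follows (row labels on the left, one column per output channel):

                Out 1   Out 2
    INPUT[1]      1       0
    TONE[440]     0       1

Because each cell is an expression, entries other than 0 and 1 (including expressions involving state variables) can be used to scale inputs or change the routing dynamically while the system runs.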

AudioInputDevice

Requires a number which corresponds to an input device ID. Each audio recording device connected to the computer has an associated number. To select a specific device, enter its number into the corresponding box. To view a list of detected audio input devices in BCI2000, click Set Config; the devices will be listed below 'Audio Extension Enabled' in the operator log.


Format:

             Audio Input Device ID [ i ] : [Name of Audio Device] supports  N Input Channels


Where i is a number that corresponds to the Name of the Audio Device. A value of -1 selects the default input device on this host API.

Where N is the number of input channels that can capture audio. This is also used as the number to set up the AudioMixer during configuration.

AudioOutputDevice

Requires a number which corresponds to an output device ID. Each audio playback device connected to the computer has an associated number. To select a specific device, enter its number into the corresponding box. To view a list of detected audio output devices in BCI2000, click Set Config; the devices will be listed below 'Audio Extension Enabled' in the operator log.


Format:

             Audio Output Device ID [ i ] : [Name of Audio Device] supports N Output Channels


Where i is a number that corresponds to the Name of the Audio Device. A value of -1 selects the default output device on this host API.

Where N is the number of output channels to which audio can be sent. This is also used as the number to set up the AudioMixer during configuration.

AudioInputFile

Audio file to use as audio input to AudioMixer. The selected file can have any non-zero number of channels and be encoded in almost any format (except MP3), but MUST be encoded at 44100 Hz.

AudioRecordInput

Enables/Disables recording of the audio input channels to a file in the DataDirectory.

AudioRecordOutput

Enables/Disables recording of the audio output channels to a file in the DataDirectory.

AudioRecordingFormat

Changes the file format and encoding options of the recorded output files. This parameter has the following three options:

  • Raw - Records to 16-bit Microsoft-formatted WAV files with no compression. These files open directly in MATLAB, if that's interesting to you.
  • Lossless - Records to FLAC formatted files. These files are slightly smaller than RAW files, but have no quality loss.
  • Lossy - Records to Ogg Vorbis files. These files are similar to MP3 but do not have the associated licensing issues. They are compressed using a lossy algorithm, so the resulting files are very small but sound slightly worse than lossless encoding. This format is good for long recordings where perfect quality is not necessary.

AudioInputFilterbank, AudioOutputFilterbank

A filterbank which filters the audio input and output before rectification/smoothing for envelope extraction. These Butterworth filters are not applied to the audible signal. The format of the filterbank is as follows:

  • Type - The characteristic of the filter. The following values are valid.
    • Lowpass - Creates a low pass filter
    • Highpass - Creates a high pass filter
    • Bandpass - Creates a band pass filter *See Known Issues*
    • Bandstop - Creates a band stop, or notch filter
  • Order - The order of the filter model. Higher order filters are more accurate but more expensive computationally.
  • Cutoff1 - The cutoff frequency for Lowpass and Highpass filters, and the cut-on frequency for Bandpass and Bandstop filters.
  • Cutoff2 - The cut-off frequency for Bandpass and Bandstop filters.

The matrix can have as many rows as necessary to filter the signal. Filters can be applied in any order and their transfer functions are multiplied before filtering occurs.
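
As a hypothetical illustration, a two-row filterbank that removes drift below 1 Hz and notches out 60 Hz line noise could be entered as:

    Type       Order   Cutoff1   Cutoff2
    Highpass   4       1
    Bandstop   2       58        62

Here Cutoff2 is left empty for the Highpass row, since it only applies to Bandpass and Bandstop filters.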

AudioEnvelopeSmoothing

The cutoff frequency for the low pass filter which is applied to the filtered and full-wave rectified audio data. This should be set to the highest frequency you want to see in the resulting audio envelope.

State Variables

The AudioExtension outputs the following state variables:

Audio[In/Out]Envelope[0-3]

These are the envelope values of each channel (up to four channels) of the audio inputs and outputs (in the AudioMixer matrix). These 16-bit unsigned values correspond to the resulting envelope after audio envelope extraction. For architectural reasons, it is not possible to publish states after system startup, so you are limited to four channels of input and output. The AudioExtension can easily be modified to change the number of channels by editing the #define NUM_INPUT_ENVELOPES 4 and #define NUM_OUTPUT_ENVELOPES lines in AudioExtension.cpp and recompiling your source module.
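
For reference, the relevant lines in AudioExtension.cpp look roughly like the sketch below (the value of NUM_OUTPUT_ENVELOPES is assumed here to be 4, matching the four published output envelope states); raise the values and recompile your source module to publish more envelope states:

    // AudioExtension.cpp (sketch; surrounding code omitted)
    // Number of Audio*Envelope state variables published at startup
    #define NUM_INPUT_ENVELOPES  4   // AudioInEnvelope0 .. AudioInEnvelope3
    #define NUM_OUTPUT_ENVELOPES 4   // AudioOutEnvelope0 .. AudioOutEnvelope3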

AudioFrame

This 32-bit unsigned number corresponds to the current frame of audio data in the recorded output files. It can be used to resynchronize the lossless audio to the resulting .dat file offline. Audio is sampled internally at 44100 Hz, so this number rolls over roughly once every 27 hours.
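
For reference, the rollover estimate follows directly from the width of the counter and the internal sampling rate:

    2^32 samples / 44100 samples per second ≈ 97,391 seconds ≈ 27.05 hours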

See also

User Reference:Logging Input, Contributions:Extensions