Data Insights
Sensor and Physiological Outputs
The sensor and physiological data are written directly under the system information section in the .csv file.
This contains ‘Time’ (time elapsed since the start of the recording); the EMG contact (impedance), raw, filtered, and amplitude (RMS) values for each sensor (from ‘0’, the first EMG sensor, to ‘6’, the last); followed by the raw PPG sensor data, the PPG average heart rate (BPM), the HRV metrics, and the IMU sensor data for each axis.
**Important** The row on which the headers appear can vary between hardware and firmware versions.
The table below shows a detailed view of the measures provided.
Header | Output |
---|---|
Frame# | Row index. |
Time | Time elapsed since the start of the recording. |
Faceplate/FaceState | Discrete OFF>ON state indicating when the device is detected as being worn. |
Faceplate/FitState | Abstract continuous measurement of mask ‘fit’, with higher values representing the ideal state of system performance/quality. |
Faceplate/FitState.any# | Supplementary data counting the number of electrode pairs where either electrode of the pair has any contact. |
Faceplate/FitState.both# | Supplementary data counting the number of electrode pairs where both electrodes are in contact. |
Faceplate/FitState.settled# | Supplementary data counting the number of electrode pairs with settled contact. |
Emg/ContactState[SENSOR_NAMES] | Discrete (OFF>ON>STABLE>SETTLED) contact information (8-bit value). |
Emg/Contact[SENSOR_NAMES] | Impedance measurement of electrode-to-skin contact when #Emg/Properties.contactMode is AC mode. |
Emg/Raw[SENSOR_NAMES] | Raw analog signal from each EMG measurement sensor. |
Emg/RawLift[SENSOR_NAMES] | Supplementary AC contact mode signal (may be removed in future versions). |
Emg/Filtered[SENSOR_NAMES] | EMG data filtered to contain only in-band EMG measurements. |
Emg/Amplitude[SENSOR_NAMES] | Amplitude of the muscle EMG in-band signal acquired by a moving-window RMS over Emg/Filtered. |
HeartRate/Average | Average beats-per-minute (BPM) of the cardiac cycle as measured from the Photoplethysmographic (PPG) sensor on the user's forehead. |
Ppg/Raw.ppg | Raw Photoplethysmographic (PPG) sensor reading which detects variation in blood volume within the skin of the user's forehead. |
Ppg/Raw.proximity | Raw proximity sensor reading. |
Accelerometer/Raw.(x y z) | IMU sensor reading of linear acceleration for the X, Y, and Z axes. |
Magnetometer/Raw.(x y z) | IMU sensor reading of magnetic field strength on the X, Y, and Z axes, which can be used to derive absolute orientation or to compensate for gyroscopic drift. Not available on some models. |
Gyroscope/Raw.(x y z) | IMU sensor reading of angular velocity on the X, Y, and Z axes. |
```
EMG sensor names
SENSOR_NAMES: RightOrbicularis, RightZygomaticus, RightFrontalis, CenterCorrugator, LeftFrontalis, LeftZygomaticus, LeftOrbicularis
```

```warning
**Important** The raw data values can be output directly in volts if you selected the Dab2CSV ‘Normalised’ function.
```
For more information on each feature output in the .csv file, please refer to the CSV Specification.
We provide sample data extraction and analysis scripts in the ‘Data Processing’ section.
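As a minimal sketch of loading the sensor data in Python, assuming a pandas workflow, a hypothetical file name, and that the commented system-information lines start with ‘#’ (check the CSV Specification and the note above about the header row position):

```python
import pandas as pd

# Skip the commented system-information lines; the header row position can
# vary between hardware/firmware versions, so verify against your own files.
df = pd.read_csv("recording.csv", comment="#")

print(df[["Time", "Emg/Amplitude[RightZygomaticus]"]].head())
```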
How can I find the exact time I started the recording?
The Seconds.referenceOffset value in the system information can be found in the last commented lines before the sensor data in the .csv file. It defines the absolute date-time at which the recording started, expressed as a J2000 epoch timestamp (seconds since 01-01-2000).
Converting the epoch timestamp to human date & time
Using this value you can calculate the Unix time. There is an offset of 946684800 seconds between 01-01-1970 (the Unix epoch) and 01-01-2000 (the J2000 epoch), so:

unixTime = Seconds.referenceOffset + 946684800

For example:
- Starting_timestamp = 656682759
- Unix_startingtime = 656682759 + 946684800 = 1603367559

You can then convert Unix_startingtime to a UTC datetime using existing tools, e.g., https://www.epochconverter.com/ or the ‘datetime’ library in Python:
- 1603367559 = Thursday, 22 October 2020 11:52:39

If you want to do the same for each sample (or observation), simply add Unix_startingtime to the ‘Time’ column values.
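As a minimal sketch, the same conversion in Python (using the example value above) would be:

```python
from datetime import datetime, timezone

J2000_TO_UNIX = 946684800  # seconds between 01-01-1970 and 01-01-2000

starting_timestamp = 656682759  # example Seconds.referenceOffset value
unix_starting_time = starting_timestamp + J2000_TO_UNIX

print(datetime.fromtimestamp(unix_starting_time, tz=timezone.utc))
# 2020-10-22 11:52:39+00:00
```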
Synchronising data using the ‘Time' data column
The ‘Time’ column in the sensor data output (usually the second column from the left) gives the time in seconds at which the data was captured, with ‘0.0’ being the beginning of the recording. The interval between records in the .csv file is regular and is governed by the fastest configured measurement: if all measurements are set to 50Hz the .csv file will have 50 records per second, but if a single measurement is set to 1000Hz the .csv file will have 1000 records per second.
‘Time’ may not always be regular, however, as clocks may drift over very long recordings. In addition, on rare occasions an ‘AdsMiss’ event will produce a larger step. This shouldn't occur, but any consumer of the data should be aware of potential gaps caused by PC or device performance issues. Refer to the CSV Specification for the specific AdsMiss message information.
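A quick way to check a recording for such gaps is to look at the step sizes in the ‘Time’ column. A minimal sketch, assuming a 50Hz recording and the hypothetical file name used earlier:

```python
import numpy as np
import pandas as pd

# Load the sensor data as before, skipping the commented header lines.
df = pd.read_csv("recording.csv", comment="#")

expected_step = 1.0 / 50.0  # for a 50Hz recording; adjust to your configuration
steps = np.diff(df["Time"].to_numpy())
gap_count = int(np.sum(steps > 1.5 * expected_step))
print(f"{gap_count} potential gaps found")
```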
Data Synchronisation and Processing
The .csv sensor data and .json event data files saved during a data recording session can be imported and analysed in data analysis tools such as MATLAB, Python, and R. Event data and sensor data can be synchronised using the timestamps/‘Time’ values (for more information see the CSV Specification).
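As a minimal sketch of such synchronisation in Python, assuming a pandas workflow, hypothetical file names, and that each event in the .json file carries a ‘Time’ field on the same time base as the sensor data (check the CSV Specification for the exact event format):

```python
import json

import pandas as pd

# Sensor data, skipping the commented system-information lines.
sensors = pd.read_csv("recording.csv", comment="#").sort_values("Time")

# Event data; assumed here to be a list of objects with a 'Time' field.
with open("recording.json") as f:
    events = pd.DataFrame(json.load(f)).sort_values("Time")

# Attach to each sensor row the most recent event at or before its timestamp.
synced = pd.merge_asof(sensors, events, on="Time")
```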
To help you get started with data processing, we provide sample Python analysis scripts along with some sample data.
- emteqlabs_oneuser_AnalysisScript1 - shows how to import raw sensor and event data from one user and run some basic analysis on the EMG signals (corrugator vs zygomaticus) and heart rate for positive, negative, and neutral videos.
- emteqlabs_multipleusers_AnalysisScript2 - shows how to import sensor data from multiple users, how to extract affective insights (valence and arousal), and how to compare a cohort based on a demographic split (age).
The associated data for both scripts can be downloaded from the Downloads section of this support site.
Please contact Support if you require any additional information.
Physiological data analysis references:
Fridlund, A. J., & Cacioppo, J. T. (1986). Guidelines for human electromyographic research. Psychophysiology, 23(5), 567-589.
Calibration and Experience
When recording data, you will need to perform a short calibration session for each participant.
The proposed calibration steps (provided by our SDK) ensure that insights can be generated (via our SuperVision software) and that they are tailored specifically to that individual. Run the calibration step before collecting any other data or running through the main experience of your content.
We recommend using the existing, out-of-the-box calibration step provided by our SDK and demos, including the baseline and expressions calibration explained below. Note that calibration can be customised depending on the type of data you are collecting, with more time spent on the calibration portion as required. To gather the minimum amount of data required, you must capture a 30-second period containing a selection of facial expressions, including neutral (no expression).
For best results we recommend first taking 2 minutes of relaxation to gather a baseline heart rate, followed by several expressions, as in our example SDK demo apps. These currently include smiling with maximum intensity, frowning (squeezing the brows together) with maximum intensity, and raising the eyebrows with maximum intensity. Further expressions may be included when developing your own app, but they are not required for the generation of our insights. Generally, each expression should be held for at least 3 seconds, with 3 seconds of neutral between expressions.
What follows the calibration (the "experience") is fairly unrestricted. You can include as many, or as few, event markers as you wish, so long as the data captured after calibration is at least 2 minutes long. This allows enough time for the affective insights listed below to process quality data.
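As a rough, illustrative sketch of the timing budget these recommendations imply (the numbers simply restate the guidance above):

```python
# Illustrative timing budget based on the recommendations above.
baseline = 120                       # 2 minutes of relaxation for baseline heart rate
expressions = ["smile", "frown", "eyebrow raise"]
per_expression = 3 + 3               # >=3s expression + 3s of neutral in between
calibration = baseline + len(expressions) * per_expression
experience_min = 120                 # at least 2 minutes of data after calibration

print(f"Minimum session length: {calibration + experience_min} seconds")
```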
Obtaining affective insights
**Information** When running the available Unity demos, recording will start immediately once the application has loaded. Running through sections such as ‘Affective Videos’ will produce additional event markers which can be used to identify expressions and emotions during the recording.
An integral part of the emteqPRO system is the Emteq Emotion AI Engine - a proprietary Artificial Intelligence (AI) used to analyse multimodal sensor data using advanced data-fusion and machine learning algorithms, and in doing so, to recognize the user's affective and emotional state.
In particular, the Emteq Emotion AI Engine consists of several modules:
- Heart-Rate Variability (HRV)
- Breathing Rate
- Expressions
- Arousal
- Valence
- Facial Activation
- Facial Valence
During calibration you must collect several facial expressions, such as smiling, frowning (with the brow), and eyebrow raising, as well as an extended neutral expression lasting at least 30 seconds, for many of the affective insights to function.
To obtain the affective insights for a particular recording, use the ‘Data Insights’ tab in the SuperVision application, which requires an active internet connection to function. There you should provide two files. Depending on the tool used for data collection and its version, you will have either:
- One .json file that contains the event data and one .dab file with the calibration and experience data. This is preferred, as it contains more information and is easier to manage.
- Two .dab files: one that contains the calibration data and one that contains the experience data.
Please ensure the version of the data insights you are using corresponds to the version of the system you are recording data with. The file requirements and the data outputs may be updated in future versions.
What Insights are currently offered?
Once the processing is done, you can download the resulting .csv file in the ‘Your Insights’ section. The generated .zip file will contain a separate .csv file with the output from each module that you have previously selected, as well as the original input data file (converted to .csv format).
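A minimal sketch for unpacking and listing these outputs in Python (the archive name is hypothetical):

```python
import pathlib
import zipfile

# The archive contains one .csv per selected module plus the converted input file.
with zipfile.ZipFile("your_insights.zip") as archive:
    archive.extractall("insights")

for csv_path in sorted(pathlib.Path("insights").glob("*.csv")):
    print(csv_path.name)
```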
Heart-Rate Variability (HRV):
Calibration requirements: None
Header | Output |
---|---|
Frame# | Row index. |
Time | Time elapsed since the start of the recording. |
HRV/mean_hr | Mean heart rate (beats per minute). |
HRV/rr | Mean value of RR intervals (in milliseconds). |
HRV/sdnn | Standard deviation of RR intervals. |
HRV/sdsd | Standard deviation of successive differences of RR intervals. |
HRV/rmssd | Root mean square of successive differences of RR intervals. |
Imu/MotionIntensity | An estimate of the motion intensity in the segment used for generating the output. It ranges from 0 (no motion) to 1 (high level of motion). |
Ppg/QualityIndex | Signal Quality Index (SQI) for PPG signal, which provides assessment of the suitability of the PPG signal for deriving reliable heart rate variability parameters. It ranges from 0 (low quality - ‘bad' signal) to 1 (great quality - ‘good' signal). |
This insight updates every 10 seconds of data. Data insights are logged at 1000Hz, or 1000 rows per second of data. This means you will see a repetition of the calculated value(s) within the .csv file.
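For reference, a minimal sketch of how these metrics relate to the RR intervals, using the standard definitions (this is not the Emotion AI Engine's implementation, and the RR values are illustrative):

```python
import numpy as np

rr = np.array([812.0, 798.0, 805.0, 821.0, 809.0])  # RR intervals in milliseconds
diff = np.diff(rr)                                  # successive differences

mean_hr = 60000.0 / rr.mean()        # mean heart rate in beats per minute
sdnn = rr.std(ddof=1)                # standard deviation of RR intervals
sdsd = diff.std(ddof=1)              # standard deviation of successive differences
rmssd = np.sqrt(np.mean(diff ** 2))  # root mean square of successive differences
```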
Breathing Rate:
Calibration requirements: None
Header | Output |
---|---|
Frame# | Row index. |
Time | Time elapsed since the start of the recording. |
BreathingRate | Estimated breathing rate (breaths per minute). |
Imu/MotionIntensity | An estimate of the motion intensity in the segment used for generating the output. It ranges from 0 (no motion) to 1 (high level of motion). |
Ppg/QualityIndex | Signal Quality Index (SQI) for PPG signal, which provides assessment of the suitability of the PPG signal for deriving reliable heart rate variability parameters. It ranges from 0 (low quality - ‘bad' signal) to 1 (great quality - ‘good' signal). |
This insight updates every 1 second of data. Data insights are logged at 1000Hz, or 1000 rows per second of data. This means you will see a repetition of the calculated value(s) within the .csv file.
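Because the calculated value repeats between updates, you may want to keep only the rows where it changes. A minimal sketch (the file name is hypothetical):

```python
import pandas as pd

df = pd.read_csv("BreathingRate.csv")

# Keep only the rows where the estimate changes, collapsing the 1000Hz
# repetition down to the module's update rate.
changed = df["BreathingRate"].ne(df["BreathingRate"].shift())
updates = df.loc[changed, ["Time", "BreathingRate"]]
print(updates.head())
```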
See our paper on breathing rate estimation: Stankoski, S.; Kiprijanovska, I.; Mavridou, I.; Nduka, C.; Gjoreski, H.; Gjoreski, M. Breathing Rate Estimation from Head-Worn Photoplethysmography Sensor Data Using Machine Learning. Sensors 2022.
Expressions
Calibration requirements: EMG data - including maximum expressions
Header | Output |
---|---|
Frame# | Row index. |
Time | Time elapsed since the start of the recording. |
Expression/Type | Recognized facial expressions - smile, frown, or surprise (if no expression is recognized, the expression type is neutral). |
Expression/Intensity | Intensity of the recognized expression, expressed as a percentage of the maximum expression (recorded in the calibration file). |
This insight updates every 100 milliseconds, or ten times per second of data. Data insights are logged at 1000Hz, or 1000 rows per second of data. This means you will see a repetition of the calculated value(s) within the .csv file.
Arousal:
Calibration requirements: good PPG and EMG data - including maximum expressions
Header | Output |
---|---|
Frame# | Row index. |
Time | Time elapsed since the start of the recording. |
Arousal/class | Model's prediction for the arousal level, which can be -1 (low), 0 (medium), or 1 (high). |
Arousal/probability | Arousal class probability, representing the model's certainty in the outputted prediction, where 1 is the highest value. |
Imu/MotionIntensity | An estimate of the motion intensity in the segment used for generating the output. It ranges from 0 (no motion) to 1 (high level of motion). |
Ppg/QualityIndex | Signal Quality Index (SQI) for PPG signal, which provides assessment of the suitability of the PPG signal for deriving reliable heart rate variability parameters. It ranges from 0 (low quality - ‘bad' signal) to 1 (great quality - ‘good' signal). |
This insight updates every 10 seconds of data. Data insights are logged at 1000Hz, or 1000 rows per second of data. This means you will see a repetition of the calculated value(s) within the .csv file.
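When consuming these predictions you may want to keep only high-confidence rows with good signal quality. A minimal sketch (the file name and thresholds are illustrative):

```python
import pandas as pd

df = pd.read_csv("Arousal.csv")

# Keep predictions the model is reasonably confident about and where the
# PPG signal quality is good; both thresholds are illustrative.
confident = df[(df["Arousal/probability"] > 0.6) & (df["Ppg/QualityIndex"] > 0.5)]
print(confident[["Time", "Arousal/class"]].head())
```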
Valence:
Calibration requirements: good PPG and EMG data - including maximum expressions
Header | Output |
---|---|
Frame# | Row index. |
Time | Time elapsed since the start of the recording. |
Valence/class | Model's prediction for the valence level, which can be -1 (negative), 0 (neutral), or 1 (positive). |
Valence/probability | Valence class probability, representing the model's certainty in the outputted prediction, where 1 is the highest value. |
Imu/MotionIntensity | An estimate of the motion intensity in the segment used for generating the output. It ranges from 0 (no motion) to 1 (high level of motion). |
Ppg/QualityIndex | Signal Quality Index (SQI) for PPG signal, which provides assessment of the suitability of the PPG signal for deriving reliable heart rate variability parameters. It ranges from 0 (low quality - ‘bad' signal) to 1 (great quality - ‘good' signal). |
This insight updates every 10 seconds of data. Data insights are logged at 1000Hz, or 1000 rows per second of data. This means you will see a repetition of the calculated value(s) within the .csv file.
Facial Activation:
The facial activation algorithm calculates the amount of activation of the user's facial muscles, based on the data provided in the calibration session. The output of the algorithm ranges from 0 (no activation of the facial muscles) to 1 (maximum activation of the facial muscles).
Calibration requirements: EMG data - including maximum expressions
Header | Output |
---|---|
Frame# | Row index. |
Time | Time elapsed since the start of the recording. |
FacialActivation | Model's prediction for the facial activation level, ranging from 0 (no activation of the facial muscles) to 1 (maximum activation of the facial muscles). |
This insight updates every 500 milliseconds, or twice per second of data. Data insights are logged at 1000Hz, or 1000 rows per second of data. This means you will see a repetition of the calculated value(s) within the .csv file.
Facial Valence:
The facial valence algorithm recognizes whether the user is experiencing positive or negative emotions based on their facial muscle activations. In contrast to the standard valence algorithm, for which we use a multi-modal approach to detect different levels of emotional valence, the facial valence algorithm is based only on EMG data and facial expressions, and it can capture more subtle changes in the user's valence (the facial valence output is updated twice per second, as opposed to the standard valence output, which is updated every 10 seconds).
Calibration requirements: EMG data - including maximum expressions
Header | Output |
---|---|
Frame# | Row index. |
Time | Time elapsed since the start of the recording. |
FacialValence | Model's prediction for the facial valence level, ranging from -1 (negative facial valence) to 1 (positive facial valence). |
This insight updates every 500 milliseconds, or twice per second of data. Data insights are logged at 1000Hz, or 1000 rows per second of data. This means you will see a repetition of the calculated value(s) within the .csv file.
EMG Activation:
Calibration requirements: EMG data - including maximum expressions
Header | Output |
---|---|
Frame# | Row index. |
Time | Time elapsed since the start of the recording. |
Emg/Amplitude/zygo/weighted | The activation of the zygomaticus muscles detected on the respective sensors. The activation is expressed as a percentage of the maximum activation of that muscle from the calibration session. |
Emg/Amplitude/orbi/weighted | The activation of the orbicularis muscles detected on the respective sensors. The activation is expressed as a percentage of the maximum activation of that muscle from the calibration session. |
Emg/Amplitude/front/weighted | The activation of the frontalis muscles detected on the respective sensors. The activation is expressed as a percentage of the maximum activation of that muscle from the calibration session. |
Emg/Amplitude/corr/weighted | The activation of the corrugator muscle detected on the respective sensor. The activation is expressed as a percentage of the maximum activation of that muscle from the calibration session. |
These insights are logged at 1000Hz.
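As an illustration of the percentage-of-calibration-maximum idea (a sketch with made-up values, not the engine's exact algorithm):

```python
# Hypothetical values: the maximum zygomaticus amplitude observed during the
# calibration smile, and one amplitude sample from the experience.
calibration_max = 0.85
sample_amplitude = 0.34

activation_pct = 100.0 * sample_amplitude / calibration_max
print(f"Zygomaticus activation: {activation_pct:.1f}% of calibration maximum")
```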
Head Motion:
Calibration requirements: None
Header | Output |
---|---|
headMotion | A metric that shows the percentage of the recording session during which head movement is detected. |