Emteq creates wearable devices that measure our biological responses to stimuli – the measurable symptoms of our emotional response. We all know those symptoms: we flush when we are embarrassed, our hearts race when we are excited, and we make expressions (e.g. smile or frown) when we experience something pleasurable or nasty. We tend not to notice these physical responses when they are at the milder end of the spectrum – for example, when we see a product on a shelf that catches our eye, or when we are deep in conversation – but they still happen.

In our society we applaud people who have high 'Emotional Intelligence', which broadly means those of us who can read the more subtle physical and verbal cues to how someone feels – and this is the function of the emteqPRO. Our multi-sensor, multi-modal mask and connected machine learning have been built to have a high Emotional IQ, and thus to read subtle cues to our emotional state and responses: changes in our muscle tone, differences in the volume and intensity of our movement, features of our gaze and eyes, the pattern of our heartbeats and more.
The emteqPRO platform consists of four primary components:
The emteqPRO has been created to work with Virtual Reality (VR) headsets for two primary reasons:
Data collected from the human participant wearing the emteqPRO mask (responses), and from the connected experience (stimuli) are combined to create data insights. emteqPRO data insights are delivered at three levels:
Questionnaires are a well-established means of gaining insight into personality traits, preferences and likes, and provide highly detailed information on what an individual says they experienced when, for example, participating in consumer research. However, it is widely known and accepted that what people say and what they feel do not always correlate – hence the importance of objective measures.
Implicit testing, aimed at eliciting an individual's automatic response to something, has been used effectively to uncover hidden or subconscious feelings and biases - but these tests are generally obtrusive and interruptive. Thus, the need for objective techniques and measures of response is paramount and has driven the development of the Emotion Analytics market in the last decade.
The most common approach to understanding emotional response used in the Emotion Analytics market today is the analysis of images (stills or video) to classify emotions from facial expressions. The use of images to evaluate facial expressions was kick-started by the pioneering work of Paul Ekman in the 1960s. He developed a coding system that translated facial muscle movements into "action units" that correspond with certain changes in expression. This facial action coding system (FACS) was initially assessed manually by trained coders, which was a laborious process. As computer vision technologies developed in the 1990s and machine learning advanced, researchers were able to train these systems using expert coders to provide the "ground truth" label for each image. Underpinning this technology were the assumptions that:
Whilst these assumptions may seem correct under specific laboratory conditions, more recent work has shown that they do not always hold true in the real world.
In 2019, world-renowned emotions pioneer Professor Lisa Feldman Barrett rocked the research community with her review of the methods used to infer emotions from facial images. The findings of her research conclusively demonstrated that current models for inferring emotions from facial coding and classifying them into six categories need to be reconsidered.
"There are three common procedures for measuring facial movements in a scientific experiment. The most sensitive, objective measure of the facial movements, called facial electromyography (fEMG) detects the electrical activity from actual muscular contractions… This is a perceiver-independent way of assessing facial movements that detects muscle contractions that are not necessarily visible to the naked eye." Professor Lisa Feldman Barrett, Professor of Psychology at Northeastern University.
The main issue with the validity of scientific facial coding is the lack of context provided by just viewing the face. For example, furrowing the brow may be indicative of anger, which would typically lead the individual to increase engagement to remove the source of irritation. By contrast, the same expression may be seen in frustration, which would typically result in the opposite behavior, with the individual moving away from the source of irritation. The comprehensive analysis of prior facial coding research by Professor Lisa Feldman Barrett and colleagues found that the scientific models underpinning facial coding methods are seriously flawed.
It is for this reason that at emteq labs we have developed our platform to:
The Emteq Labs approach to classifying emotional response (Affect) is through use of the Dimensional Model.
Psychologists and researchers seeking to understand human responses to stimuli are principally interested in measuring whether those viewing media content are experiencing positive or negative emotions, known as valence. They also want to understand whether there is evidence of engagement or excitement as determined from measures of physiological activation, termed arousal.
Valence and arousal are often plotted on a two-dimensional graph called the Dimensional Model. The activation axis, plotted vertically, ranges from deactivation (low arousal) to activation (high arousal). The valence axis, plotted horizontally, ranges from negative to positive.
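As a rough illustration of how these two dimensions combine, the sketch below maps a (valence, arousal) pair onto a quadrant of the Dimensional Model. The function name, the [-1, 1] normalisation, and the threshold at zero are illustrative assumptions, not part of the emteqPRO API.

```python
def affect_quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair onto a quadrant of the Dimensional Model.

    Both inputs are assumed to be normalised to [-1, 1], with 0 as the
    neutral midpoint of each axis (an illustrative convention).
    """
    v = "positive" if valence >= 0 else "negative"
    a = "high arousal" if arousal >= 0 else "low arousal"
    return f"{v} valence / {a}"

# Excitement-like response: positive valence, high activation
print(affect_quadrant(0.7, 0.8))   # positive valence / high arousal
# Calm, content response: positive valence, low activation
print(affect_quadrant(0.5, -0.6))  # positive valence / low arousal
```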
In this section we describe the data readings provided by the system and the associated methods used for the detection of the following physiological insights:
Method: Electromyography (EMG)
EMG (electromyography) records the movement of our muscles by capturing the electrical activity generated by muscle contraction. The muscles receive signals from the spinal cord via motor neurons, which innervate the muscle directly at the neuromuscular junction. This innervation causes the release of calcium ions within the muscle, ultimately creating a mechanical change in the tension of the muscle. As this process involves depolarization (a change in the electrochemical gradient), the difference in current can be detected by EMG.
EMG activity (usually measured in µV) is correlated to the amount of muscle activation, thus the stronger the muscle activation, the higher the recorded voltage amplitude will be.
The amplitude of the EMG signal is calculated from the root mean square (RMS) envelope of the filtered signal, computed over rolling (moving) windows. The RMS output is commonly used in EMG analysis, as it provides direct insight into the power of the EMG activation at a given time, in a simple form, as shown in the graph below.
In this example, a user was recorded performing three expressions twice: a smile (top graph), a frown (second graph) and a surprised expression (bottom graph). For each expression, some muscles were activated more than others. For example, during smiling, the zygomaticus sensors (right and left, shown here in orange and brown) and the orbicularis sensors (right and left, shown here in green and purple) are activated intensely, above the activation of the remaining sensors.
Example of EMG amplitude coming from the activation of the zygomaticus muscles (top), corrugator muscle (middle) and frontalis muscles (bottom).
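The RMS-envelope computation described above can be sketched in a few lines. This is a minimal illustration, assuming a pre-filtered signal held in a NumPy array; the window length and the convolution-based moving average are illustrative choices, not the exact emteqPRO implementation.

```python
import numpy as np

def rms_envelope(signal: np.ndarray, window: int) -> np.ndarray:
    """RMS envelope of a (pre-filtered) EMG signal over a rolling
    window of `window` samples."""
    squared = np.square(signal.astype(float))
    # Moving average of the squared signal via convolution
    kernel = np.ones(window) / window
    mean_sq = np.convolve(squared, kernel, mode="same")
    return np.sqrt(mean_sq)

# Simulated recording: a burst of oscillating "muscle activity" mid-signal
sig = np.zeros(1000)
sig[400:600] = np.sin(np.linspace(0, 50 * np.pi, 200))
env = rms_envelope(sig, window=50)
# The envelope stays near zero at rest and rises during the burst
```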
The signal measured by the EMG sensors provides insight into the muscle contractions and configurations made by the facial muscles during a VR experience. These can be voluntary or spontaneous (e.g., as a response to a stimulus). However, spontaneous and naturalistic expressions differ from posed voluntary expressions (see Duchenne versus non-Duchenne smile, citation) in terms of intensity, duration and configuration.
As the face is the richest source of valence information (Ekman, 2009), facial EMG provides a window to tracking valence changes.
What is provided for each sensor:
See more information on data outputs in ‘Data Acquisition' section.
Reference: Ekman P. (2009). Darwin's contributions to our understanding of emotional expressions. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 364(1535), 3449–3451. https://doi.org/10.1098/rstb.2009.0189
Method: Photoplethysmogram (PPG)
The PPG (photoplethysmography) sensor, embedded within the emteqPRO mask, uses light-based technology to detect systolic peaks (and the rate of blood flow) as controlled by the heart's pumping action.
Throughout the cardiac cycle, blood pressure increases and decreases periodically – even in the outer layers and small vessels of the skin. Peripheral blood flow can be measured using optical sensors attached to the forehead, fingertip, the ear lobe or other capillary tissue.
The PPG sensor provides a signal output comparable to the electrocardiogram (ECG) gold-standard method; see the ECG/PPG graph below. The ECG records the electrical activity generated by heart muscle depolarization, which propagates in pulsating electrical waves towards the skin. Although the amount of electricity is in fact very small, it can be measured with ECG electrodes attached to the skin. However, this typically requires the attachment of multiple wet sensors in the chest area, which can be cumbersome and obtrusive for the VR user.
How does it work?
A typical blood-flow measuring device, such as the PPG sensor, has an LED that sends light into the tissue and a photodiode (a light-sensitive sensor) that records how much of that light is absorbed or reflected.
Unlike ECG devices, PPG sensors are dry and do not require skin preparation.
Graph showing ECG and PPG sensor signals.
In the graph above, the peaks of the signals (referred to as R for ECG and P for PPG) are outlined per sensor modality. These peaks correspond to the cardiac cycle, and from the distances between them we can extract other useful metrics, including beats per minute (BPM) and heart-rate (or pulse-rate) variability (HRV) measures.
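As a sketch of how BPM follows from the inter-peak distances, the snippet below converts peak timestamps into a mean heart rate. It assumes peak times (in seconds) have already been detected; the function name is hypothetical.

```python
import numpy as np

def heart_rate_bpm(peak_times_s: np.ndarray) -> float:
    """Mean heart rate from the timestamps (in seconds) of detected
    systolic peaks: 60 divided by the mean inter-peak interval."""
    pp_intervals = np.diff(peak_times_s)  # distances between successive peaks
    return 60.0 / pp_intervals.mean()

# Peaks detected roughly every 0.8 s correspond to about 75 beats per minute
peaks = np.array([0.0, 0.8, 1.6, 2.4, 3.2])
bpm = heart_rate_bpm(peaks)
```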
What is provided:
Heart Rate Variability
Heart rate variability (HRV) is used to measure changes in physiological arousal and stress. HRV features are traditionally calculated from RR interval time series, which are extracted from ECG sensor data. More specifically, the RR interval time series represents the distances between successive heartbeats (RR intervals).
Alternatively, the distances between successive heartbeats can be calculated from photoplethysmography (PPG) sensor data. In our devices, a PPG sensor is placed on the forehead to measure the user's blood flow. From the blood flow, pulse-to-pulse intervals (PP intervals) can be extracted. Like the RR intervals, the PP intervals represent the distances between successive heartbeats, but they are calculated from the PPG signal.
Technically, the PPG signal lags behind the ECG signal by the time required for the blood pulse to travel, so there is a small difference between RR intervals and PP intervals. However, there is a high correlation (median = 0.97) between the RR intervals and the PP intervals, and no statistically significant differences (at the 0.05 level) have been found between HRV parameters computed from RR intervals and those computed from PP intervals (also known as PRV when derived from PPG). Thus, HRV can be reliably estimated from the PP intervals.
In summary, our HRV features are calculated from the PPG sensor data. For simplicity, we refer to them as HRV features rather than PRV features.
What is provided: Heart Rate Variability (HRV) features provided by the emteqPRO system.
| Feature | Description | Mean (SD) |
| --- | --- | --- |
| mean RR-interval | Mean value of the RR intervals (milliseconds) | 787.7 (79.2) |
| mean heartrate | Mean heart rate (beats per minute) | 76.17 (7.7) |
| sdnn | Standard deviation of the RR intervals | 136.5 (33.4) |
| rmssd | Root mean square of successive differences | 27.9 (12.3) |
| sdsd | Standard deviation of successive differences | 136.5 (33.4) |
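The features listed above can be computed directly from an interval series. This is a minimal sketch, assuming RR (or PP) intervals in milliseconds held in a NumPy array; the exact estimators used by the emteqPRO system (e.g. sample versus population standard deviation) may differ.

```python
import numpy as np

def hrv_features(rr_ms: np.ndarray) -> dict:
    """Common HRV features from a series of RR (or PP) intervals
    given in milliseconds."""
    diffs = np.diff(rr_ms)  # successive differences between intervals
    return {
        "mean_rr": rr_ms.mean(),                # mean RR interval (ms)
        "mean_hr": 60000.0 / rr_ms.mean(),      # mean heart rate (bpm)
        "sdnn": rr_ms.std(ddof=1),              # SD of the intervals
        "rmssd": np.sqrt(np.mean(diffs ** 2)),  # RMS of successive diffs
        "sdsd": diffs.std(ddof=1),              # SD of successive diffs
    }

# Five intervals averaging 800 ms -> mean heart rate of 75 bpm
rr = np.array([800.0, 810.0, 790.0, 805.0, 795.0])
feats = hrv_features(rr)
```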
See more information on data outputs in Data Acquisition section.
Method: Inertial Measurement unit (IMU)
Head movement tracking can be attained via the inertial measurement unit (IMU) integrated within the emteqPRO system. The IMU contains a gyroscope, an accelerometer and a magnetometer, each of which outputs data along three axes: x, y and z. Such sensors are non-invasive and can be easily integrated into wearable solutions. A wealth of research utilises inertial sensing for activity recognition in active experimental protocols and for inferring the underlying emotional state of the user.
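As a simple illustration of how raw three-axis sensor data can be reduced to a movement measure, the sketch below computes the per-sample magnitude of the accelerometer vector. This is one common choice of summary feature, not necessarily the feature the emteqPRO system reports.

```python
import numpy as np

def movement_intensity(accel_xyz: np.ndarray) -> np.ndarray:
    """Per-sample magnitude of a 3-axis accelerometer signal, a simple
    proxy for overall movement intensity.

    accel_xyz: array of shape (n_samples, 3) holding x, y, z readings
    in m/s^2.
    """
    return np.linalg.norm(accel_xyz, axis=1)

# Two samples: at rest (gravity only, on z) and during a head movement
samples = np.array([[0.0, 0.0, 9.81],
                    [3.0, 4.0, 9.81]])
mag = movement_intensity(samples)
# At rest the magnitude stays near 9.81; it rises during movement
```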
What is provided:
See more information on data outputs in Data Acquisition section.
Method: Eye tracking (with VR headset only)
Eye tracking is a method that enables continuous monitoring of eye movements. This in turn allows us to track where the eyes are pointed (and therefore what they are looking at). Eye trackers are sensors that measure eye motion relative to the head, as well as pupil changes. Some of the most popular eye trackers use computer-vision techniques on a video feed, with infrared illumination, to track the pupil. Such sensors are embedded within commercial VR headsets, such as the HTC VIVE Pro Eye and the Pico Neo 2.
What is provided:
Note: These features are only provided via the emteqVR Unity3D SDK
See more information on data outputs in the Unity SDK Data Points & Data Sections and Recording Data section.