Introduction

What is the emteqPRO?

Emteq creates wearable devices that measure our biological responses to stimuli – the measurable symptoms of our emotional response. We all know those symptoms: we flush when we are embarrassed, our hearts race when we are excited, and we make expressions (e.g. smile or frown) when we experience something pleasurable or nasty. We tend not to notice these physical responses when they are at the milder end of the spectrum, for example when a product on a shelf catches our eye or when we are deep in conversation, but they still happen. In our society we applaud people who have high ‘Emotional Intelligence', which broadly means those of us who can read the subtler physical and verbal cues to how someone feels – and this is the function of the emteqPRO. Our multi-sensor, multi-modal mask and connected machine learning have been built to have a high Emotional IQ, and thus to read subtle cues to our emotional state and responses: changes in our muscle tone, differences in the volume and intensity of our movement, features of our gaze and eyes, the pattern of our heartbeats, and more.

The Emteq Technology Stack

The emteqPRO platform consists of four primary components:

  1. A medical-grade wearable mask that detects the tell-tale symptoms of emotional state through its integrated physiological sensors. The emteqPRO contains photoplethysmographic (PPG), electromyographic (EMG) and Inertial Measurement Unit (IMU) sensors, together with an absolute barometric pressure sensor (altimeter), providing data on facial muscle activation at 7 locations on the user's face, pulse features, and head and upper body motion. The mask may be integrated with the HTC VIVE Pro / VIVE Pro Eye, or may be used stand-alone in "Open Face" mode. There is also an Android variant that integrates with the Pico Neo 3 Pro Eye; however, this variant does not support "Open Face" mode.
  2. An SDK for the Unity 3D environment, providing integration between the immersive experience and the mask data.
  3. SuperVision, an application for monitoring the data recording and the immersive experience in real time, providing in-flight supervision of emotional experiences and helping to ensure the highest quality of data recording.
  4. The "Emteq Emotion AI Engine", our proprietary AI for generating actionable emotion insights from the collected data.
emteqPRO Mask for HTC VIVE Pro Eye
emteqPRO Mask for Pico Neo 3

Why build a system for VR?

The emteqPRO has been created to work with Virtual Reality (VR) headsets for two primary reasons:

  1. VR provides the ultimate experimental environment for researchers of all types, enabling the creation of cost-effective simulations of locations and experiences that would otherwise be prohibitive in terms of cost, safety or accessibility. The emteqPRO is effectively a ‘lab-in-a-box', allowing infinite replay of situations, experiences and inputs for experimentation and learning.
  2. Context matters. The emteqPRO Mask collects data on physiological and emotional responses, but a response is only truly informative if you know what the context is – i.e. what triggered that response. Our Emteq SDK allows the programmatic collection of the precise object interaction, skills exercise or event that was the stimulus for any given response, allowing researchers and trainers to gather actionable information.
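
To illustrate the kind of pairing this enables, the sketch below shows one way timestamped stimulus events could be aligned with the physiological samples recorded shortly after them. The event labels, data structures and the pair_responses helper are hypothetical and purely illustrative; they are not the Emteq SDK API, which is provided for the Unity environment.

```python
# Illustrative only: hypothetical structures showing how timestamped stimulus
# events might be paired with physiological samples recorded during a session.
# This is not the Emteq SDK API.
from dataclasses import dataclass

@dataclass
class StimulusEvent:
    timestamp: float   # seconds since session start
    label: str         # e.g. "product_shelf_viewed" (made-up label)

@dataclass
class Sample:
    timestamp: float   # seconds since session start
    heart_rate_bpm: float

def pair_responses(events, samples, window_s=5.0):
    """For each stimulus event, collect the samples recorded in the
    window_s seconds that follow it."""
    pairs = []
    for event in events:
        window = [s for s in samples
                  if event.timestamp <= s.timestamp < event.timestamp + window_s]
        pairs.append((event, window))
    return pairs

events = [StimulusEvent(12.0, "product_shelf_viewed")]
samples = [Sample(float(t), 70.0 + t % 3) for t in range(0, 30)]
for event, window in pair_responses(events, samples):
    print(event.label, [s.heart_rate_bpm for s in window])
```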

What Insights do you collect?

Data collected from the human participant wearing the emteqPRO Mask (responses) and from the connected experience (stimuli) are combined to create data insights. emteqPRO data insights are delivered at four levels:

  1. Sensor Insights. The emteqPRO provides high-quality raw data for experimental analysis, as well as filtered sensor data that has been cleansed and is ready for ingestion.
  2. Physiological Insights. Sensor data is translated into an understanding of physiological responses within the emteqPRO firmware, providing a real-time feed of physiological data – for example, heart rate (see the sketch after this list).
  3. Affective Insights. The Emteq Emotion AI Engine is a cloud-based machine learning tool for translating our sensor insight data into an understanding of Affect – the measurable symptoms of emotion. For more detail on our approach to Affect, please see Obtaining Affective Insights.
  4. Contextual Insights. At their highest level, and combined with information on the stimulus that evoked the measured response, our affective insights can provide an understanding of stress response, cognitive load, pain response and more.
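
As a concrete example of the step from sensor insights to physiological insights, the sketch below estimates heart rate from a PPG waveform by detecting pulse peaks and averaging the inter-beat intervals. It is a minimal illustration of the kind of computation involved, not the algorithm implemented in the emteqPRO firmware; the sampling rate and the synthetic signal are assumptions made for the example.

```python
# Minimal sketch: estimating heart rate from a PPG waveform via peak detection.
# Not the emteqPRO firmware algorithm; sampling rate and signal are illustrative.
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ppg: np.ndarray, fs: float) -> float:
    """Estimate heart rate (beats per minute) from a PPG signal sampled at fs Hz."""
    # Require pulse peaks to be at least ~0.4 s apart (i.e. below 150 bpm here).
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
    if len(peaks) < 2:
        raise ValueError("Not enough pulse peaks to estimate heart rate")
    ibi_s = np.diff(peaks) / fs          # inter-beat intervals in seconds
    return 60.0 / float(np.mean(ibi_s))  # convert mean interval to bpm

# Synthetic 10 s signal at 50 Hz with a ~1.2 Hz (72 bpm) pulse component.
fs = 50.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)
print(round(heart_rate_bpm(ppg, fs)))  # ~72
```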

emteq's Hierarchy of Insights

The Science of Measuring Emotions

Why not just ask people what they feel?

Questionnaires are a well-established means of gaining insight into personality traits, preferences and likes, and they provide highly detailed information on what an individual says they experienced when, for example, participating in consumer research. However, it is widely known and accepted that what people say and what they feel do not always correlate - hence the importance of objective measures.

Implicit testing, aimed at eliciting an individual's automatic response to something, has been used effectively to uncover hidden or subconscious feelings and biases - but these tests are generally obtrusive and interruptive. Thus, the need for objective, unobtrusive techniques and measures of response is paramount, and it has driven the development of the Emotion Analytics market over the last decade.

Why not just use Facial Expressions from Images?

The most common approach to understanding emotional response in the Emotion Analytics market today is the analysis of images (stills or video) to classify emotions from facial expressions. The use of images to evaluate facial expressions was kick-started by the pioneering work of Paul Ekman in the 1960s. He developed a coding system that translated facial muscle movements into "action units" that correspond with certain changes in expression. This Facial Action Coding System (FACS) was initially assessed manually by trained coders, which was a laborious process. As computer vision technologies developed in the 1990s and machine learning advanced, researchers were able to train these systems using expert coders to provide the "ground truth" label for each image. Underpinning this technology were the assumptions that:

  • There is only a small subset of emotional expressions (happy, sad, fear, anger, disgust, surprise, contempt)
  • All people express emotion in the same way in response to the same stimulus
  • Expressions can be inferred regardless of context

Whilst these assumptions may appear to hold under specific laboratory conditions, more recent work has shown that they do not always hold true in the real world.

The Categorical (classical) model assumes discrete categories without reference to context

In 2019, world-renowned emotions pioneer Professor Lisa Feldman Barrett rocked the research community with her review of the methods used to infer emotions from facial images. The findings of her research conclusively demonstrated that the current models for inferring emotions from facial coding, and classifying them into six categories, need to be reconsidered.

"There are three common procedures for measuring facial movements in a scientific experiment. The most sensitive, objective measure of the facial movements, called facial electromyography (fEMG) detects the electrical activity from actual muscular contractions… This is a perceiver-independent way of assessing facial movements that detects muscle contractions that are not necessarily visible to the naked eye." Professor Lisa Feldman Barrett, Professor of Psychology at Northeastern University.

The main issue with the validity of scientific facial coding is the lack of context provided by just viewing the face. For example, furrowing the brow may be indicative of anger, which would typically lead the individual to increase engagement to remove the source of irritation. By contrast, the same expression may be seen in frustration, which would typically result in the opposite behaviour, with the individual moving away from the source of irritation. The comprehensive analysis of prior facial coding research by Professor Lisa Feldman Barrett and colleagues found that the scientific models underpinning facial coding methods are seriously flawed.

It is for this reason that at emteq labs we have developed our platform to:

  1. Use a multi-modal approach to objective measurement of emotional cues, including facial EMG.
  2. Classify our data using the Dimensional Model of Emotion, rather than the Categorical model with its inherently unsatisfying "boxes" of emotion.

The Dimensional Model

The Emteq Labs approach to classifying emotional response (Affect) is through use of the Dimensional Model.

Psychologists and researchers seeking to understand human responses to stimuli are principally interested in measuring whether those viewing media content are experiencing positive or negative emotions, known as valence. They also want to understand whether there is evidence of engagement or excitement as determined from measures of physiological activation, termed arousal.

The Dimensional Model
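
As a rough illustration of how the dimensional model represents affect, the sketch below treats an affective state as a point on continuous valence and arousal axes and names the quadrant it falls in. The value ranges, quadrant labels and example emotions are assumptions chosen for the illustration; they are not emteq's classification scheme.

```python
# Illustrative only: representing affect as a point in valence-arousal space.
# The [-1, 1] ranges and quadrant labels are assumptions, not emteq's scheme.
from dataclasses import dataclass

@dataclass
class Affect:
    valence: float  # negative (-1) to positive (+1) emotion
    arousal: float  # low (-1) to high (+1) physiological activation

def quadrant(a: Affect) -> str:
    """Name the valence-arousal quadrant an affective state falls in."""
    if a.valence >= 0 and a.arousal >= 0:
        return "high-arousal positive (e.g. excitement)"
    if a.valence >= 0:
        return "low-arousal positive (e.g. calm contentment)"
    if a.arousal >= 0:
        return "high-arousal negative (e.g. stress)"
    return "low-arousal negative (e.g. boredom)"

print(quadrant(Affect(valence=0.6, arousal=0.8)))  # high-arousal positive
```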