Emteq creates wearable devices that measure our biological responses to stimuli – the measurable symptoms of our emotional response. We all know these symptoms: we flush when we are embarrassed, our hearts race when we are excited, and we make expressions (e.g. a smile or a frown) when we experience something pleasant or unpleasant. We tend not to notice these physical responses when they are at the milder end of the spectrum (for example, when a product on a shelf catches our eye, or when we are deep in conversation), but they still happen. In our society we applaud people who have high 'Emotional Intelligence', which broadly means those of us who can read these subtler physical and verbal cues to how someone feels. This is the function of the emteqPRO: our multi-sensor, multi-modal mask and connected machine learning have been built with a high 'Emotional IQ' of their own, able to read subtle cues to our emotional state and responses – changes in our muscle tone, differences in the volume and intensity of our movement, features of our gaze and eyes, the patterns of our heartbeats, and more.
The emteqPRO platform consists of four primary components:
The emteqPRO has been created to work with Virtual Reality (VR) headsets for two primary reasons:
Data collected from the human participant wearing the emteqPRO Mask (responses) and from the connected experience (stimuli) are combined to create data insights. emteqPRO data insights are delivered at three levels:
Questionnaires are a well-established means of gaining insight into personality traits, preferences and likes, and they provide highly detailed information on what an individual says they experienced when, for example, participating in consumer research. However, it is widely known and accepted that what people say and what they feel do not always correlate – hence the importance of objective measures.
Implicit testing, which aims to elicit an individual's automatic response to a stimulus, has been used effectively to uncover hidden or subconscious feelings and biases – but these tests are generally obtrusive and disruptive. The need for objective, unobtrusive measures of response is therefore paramount, and it has driven the development of the Emotion Analytics market over the last decade.
The most common approach to understanding emotional response in the Emotion Analytics market today is the analysis of images (stills or video) to classify emotions from facial expressions. The use of images to evaluate facial expressions was kick-started by the pioneering work of Paul Ekman in the 1960s. He developed a coding system that translated facial muscle movements into "action units" that correspond to particular changes in expression. This Facial Action Coding System (FACS) initially required manual assessment by trained coders, a laborious process. As computer vision technologies developed in the 1990s and machine learning advanced, researchers were able to train automated systems, using expert coders to provide the "ground truth" label for each image. Underpinning this technology was the assumption that:
Whilst these assumptions may seem to hold under controlled laboratory conditions, more recent work has shown that they do not always hold true in the real world.
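To make the conventional pipeline concrete, the sketch below (in Python, using scikit-learn) trains a classifier to map expert-coded action-unit intensities to one of six discrete emotion labels. It is a toy illustration of the approach described above, not any vendor's actual system: the simulated data set, the 0–5 intensity scale, and the choice of classifier are all illustrative assumptions.

```python
# Hypothetical sketch of the conventional facial-coding pipeline:
# expert coders label images with FACS action units (AUs), and a
# classifier is trained to map AU intensities to discrete emotion
# categories. All data below is simulated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_AUS = 17          # action units coded per image (illustrative)
N_IMAGES = 1000     # size of the expert-coded training set

# Each row is one image: AU intensities on a 0-5 scale, as assigned
# by a trained human coder (simulated here with random values).
au_intensities = rng.integers(0, 6, size=(N_IMAGES, N_AUS))

# "Ground truth" label per image, supplied by the expert coder.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
labels = rng.choice(EMOTIONS, size=N_IMAGES)

X_train, X_test, y_train, y_test = train_test_split(
    au_intensities, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# With labels unrelated to the features, held-out accuracy hovers near
# chance (~1/6) - exactly the failure mode that arises when AU patterns
# do not map reliably onto discrete emotion categories.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")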
In 2019, world-renowned emotions pioneer Professor Lisa Feldman Barrett rocked the research community with her review of the methods used to infer emotions from facial images. The findings of her research conclusively demonstrated that current models for inferring emotions from facial coding, and classifying them into six categories, need to be reconsidered.
"There are three common procedures for measuring facial movements in a scientific experiment. The most sensitive, objective measure of the facial movements, called facial electromyography (fEMG), detects the electrical activity from actual muscular contractions… This is a perceiver-independent way of assessing facial movements that detects muscle contractions that are not necessarily visible to the naked eye." – Professor Lisa Feldman Barrett, Professor of Psychology at Northeastern University.
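To illustrate what fEMG measures in practice, here is a minimal sketch of a standard EMG processing chain: band-pass filter the raw signal, rectify it, then low-pass filter to obtain an amplitude envelope that tracks contraction intensity. The sampling rate and cut-off frequencies are common textbook values assumed for illustration; they are not the emteqPRO's actual parameters.

```python
# Minimal sketch of standard fEMG preprocessing: band-pass filter the
# raw signal, full-wave rectify it, then low-pass filter to obtain the
# amplitude envelope that tracks muscle contraction intensity.
# All cut-off frequencies are textbook choices, not emteqPRO values.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # sampling rate in Hz (assumed)

def emg_envelope(raw, fs=FS):
    # 1) Band-pass 20-450 Hz to isolate the EMG band and remove drift.
    b, a = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, raw)
    # 2) Full-wave rectification.
    rectified = np.abs(filtered)
    # 3) Low-pass at 6 Hz to smooth into a contraction envelope.
    b, a = butter(4, 6.0, btype="lowpass", fs=fs)
    return filtfilt(b, a, rectified)

# Synthetic one-second demo: a burst of "muscle activity" in noise.
t = np.arange(0, 1.0, 1.0 / FS)
burst = (t > 0.4) & (t < 0.6)
raw = 0.05 * np.random.randn(t.size) + burst * np.sin(2 * np.pi * 120 * t)

envelope = emg_envelope(raw)
print(f"envelope peak inside burst:  {envelope[burst].max():.3f}")
print(f"envelope peak outside burst: {envelope[~burst].max():.3f}")
```

The envelope rises only during the simulated contraction, showing how fEMG can register activity even when the underlying movement would be too subtle to see.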
The main issue with the validity of scientific facial coding is the lack of context provided by viewing the face alone. For example, a furrowed brow may be indicative of anger, which would typically lead the individual to increase engagement in order to remove the source of irritation. By contrast, the same expression may be seen in frustration, which would typically result in the opposite behaviour, with the individual moving away from the source of irritation. The comprehensive analysis of prior facial coding research by Professor Barrett and colleagues found that the scientific models underpinning facial coding methods are seriously flawed.
It is for this reason that at Emteq Labs we have developed our platform to:
The Emteq Labs approach to classifying emotional response (Affect) is through the use of the Dimensional Model.
Psychologists and researchers seeking to understand human responses to stimuli are principally interested in measuring whether those viewing media content are experiencing positive or negative emotions, a dimension known as valence. They also want to understand whether there is evidence of engagement or excitement, as determined from measures of physiological activation, a dimension termed arousal.
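As a toy illustration of the Dimensional Model, the sketch below represents an affective state as a point in the valence-arousal plane and names the quadrant it falls in. The quadrant labels are illustrative assumptions, not Emteq Labs terminology.

```python
# Toy illustration of the dimensional model of affect: an emotional
# state is a point in a two-dimensional plane, with valence on one
# axis (negative to positive) and arousal on the other (calm to
# excited). Quadrant names below are illustrative, not Emteq's labels.
from dataclasses import dataclass

@dataclass
class AffectState:
    valence: float  # -1.0 (very negative) .. +1.0 (very positive)
    arousal: float  # -1.0 (very calm)     .. +1.0 (very excited)

    def quadrant(self) -> str:
        if self.valence >= 0:
            return "excited / elated" if self.arousal >= 0 else "content / relaxed"
        return "angry / stressed" if self.arousal >= 0 else "sad / bored"

# A furrowed brow alone is ambiguous, but paired with an arousal
# measure the two negative states described earlier separate cleanly.
print(AffectState(valence=-0.6, arousal=0.7).quadrant())   # angry / stressed
print(AffectState(valence=-0.6, arousal=-0.5).quadrant())  # sad / bored
```

Representing affect as continuous coordinates rather than six fixed categories is what allows the same facial movement to be disambiguated by other physiological signals.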