Data & Insights
1. Structure of the data
The data in storage is always structured the same way:
- Project
  - Participant_1
    - Task_1
      - sensor_data.txt
      - video_data.mp4
    - Task_2
    - Task_n
  - Participant_2
  - Participant_n
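For orientation, here is a minimal sketch of how such a tree could be traversed, assuming the `Project/Participant/Task` layout above (the folder and file names are taken from the diagram; nothing else about the storage is assumed):

```python
from pathlib import Path

def iter_task_files(project_dir: str):
    """Yield (participant, task, sensor_file, video_file) per task folder.

    Assumes the Project/Participant/Task layout sketched above; file
    names follow the diagram (sensor_data.txt, video_data.mp4).
    """
    for participant in sorted(Path(project_dir).iterdir()):
        if not participant.is_dir():
            continue
        for task in sorted(participant.iterdir()):
            if not task.is_dir():
                continue
            sensor = task / "sensor_data.txt"
            video = task / "video_data.mp4"
            yield (participant.name, task.name,
                   sensor if sensor.exists() else None,
                   video if video.exists() else None)

for participant, task, sensor, video in iter_task_files("Project"):
    print(participant, task, sensor, video)
```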
2. Sensor Data
Sensor data is saved in `.txt` format and includes a header and detailed data points:
Header Information
The header contains metadata about the recording: file information, device specifics, sensor details with sampling rates (50 Hz), and calibration status. The table below describes each header field.
| Header Field | Description |
|---|---|
| **File Information** | |
| `File/DateTime` | The creation date and time of the data file |
| `File/FileFormat` | The internal format identifier of the file |
| `File/User` | The username assigned to this data |
| `File/Tags` | User-defined tags associated with the file |
| `File/Category` | The task category related to the data |
| `File/Exercise` | Details of the task, such as the video name for video-related tasks |
| **Device Information** | |
| `ConnectDevice/Name` | The name of the device (e.g., iPad) used for data collection |
| `Software/Build.buildTag` | The build tag of the software version |
| `Firmware/Build.buildTag` | The build tag of the firmware version |
| `Device/Version.serialId` | The serial ID of the device |
| `Device/Version.hardware` | The hardware version of the device |
| **IMU Sensors (Accelerometer, Gyroscope, Magnetometer, Euler) + Pressure Sensor** | |
| `sensor/Version.model` | The model of the IMU sensors |
| `sensor/Properties.rawDivisor` | The divisor used to convert raw sensor data into standardized units (m/s², deg/s, µT, deg, hPa) |
| `sensor/Config.hertz` | The sampling rate of the IMU sensors |
| **Navigation (OCO™) and Proximity Sensors** | |
| `Nav/Properties.rawDivisor` | The divisor used to convert raw navigation data into standardized units [mm] |
| `Nav/Properties.sensorCount` | The total number of navigation sensors |
| `Nav/Properties.ids[]` | The IDs of the navigation sensors included in the setup |
| `Nav/Config/Raw.hertz` | The sampling rate of the navigation sensors |
| `Prox/Properties.formula` | The formula used to convert raw proximity data into standardized units [mm] |
| `Prox/Properties.sensorCount` | The total number of proximity sensors |
| `Prox/Properties.ids[]` | The IDs of the proximity sensors included in the setup |
| `Prox/Config/Raw.hertz` | The sampling rate of the proximity sensors |
| **Calibration (if calibrated, both left and right sides are considered calibrated)** | |
| `Calibration/SmileL` | Indicates whether the smile expression is calibrated on the left side (true/false) |
| `Calibration/SmileR` | Indicates whether the smile expression is calibrated on the right side (true/false) |
| `Calibration/BrowraiseL` | Indicates whether the eyebrow raise expression is calibrated on the left side (true/false) |
| `Calibration/BrowraiseR` | Indicates whether the eyebrow raise expression is calibrated on the right side (true/false) |
| `Calibration/FrownL` | Indicates whether the frown expression is calibrated on the left side (true/false) |
| `Calibration/FrownR` | Indicates whether the frown expression is calibrated on the right side (true/false) |
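The exact on-disk layout of the header is not specified here; as a minimal sketch, assuming each header line is a `key: value` pair preceding the data points (an assumption about the layout, not a documented format), the fields could be collected like this:

```python
def parse_header(path: str) -> dict[str, str]:
    """Collect header fields such as sensor/Properties.rawDivisor.

    Assumes each header line looks like 'key: value' and that the
    header precedes the data points; both are assumptions about the
    file layout, not part of the documented format.
    """
    header = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if ":" not in line:  # first non-header line ends the header
                break
            key, _, value = line.partition(":")
            header[key.strip()] = value.strip()
    return header
```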
Data Points
Key data points included in each frame:
- Timestamp: Unix timestamp for each data frame
- Accelerometer/Raw: Raw data from 3 accelerometer channels
- Gyroscope/Raw: Raw data from 3 gyroscope channels
- Magnetometer/Raw: Raw data from 3 magnetometer channels
- Euler/Raw: Raw data from 3 Euler angles
- Nav/Raw: Data from 6 navigation sensors
- Proximity/Raw: Data from 6 proximity sensors
- Pressure/Raw: Data from pressure sensor
- Battery: Remaining battery percentage on OCOsense glasses
- RSSI: Bluetooth signal strength on the tablet
- DebugBits/Flags: Navigation and proximity sensor data quality
- MicBits: Status of front/rear microphones detecting audio
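As an illustration, one frame can be modeled as a record with these fields; the field names are our own, but the channel counts follow the list above:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One data frame, mirroring the fields listed above.

    Field names are illustrative; channel counts follow the list
    (3-axis IMU channels, 6 navigation and 6 proximity sensors).
    """
    timestamp: float                        # Unix timestamp
    accel_raw: tuple[float, float, float]   # Accelerometer/Raw
    gyro_raw: tuple[float, float, float]    # Gyroscope/Raw
    mag_raw: tuple[float, float, float]     # Magnetometer/Raw
    euler_raw: tuple[float, float, float]   # Euler/Raw
    nav_raw: tuple[float, ...]              # Nav/Raw, 6 sensors
    prox_raw: tuple[float, ...]             # Proximity/Raw, 6 sensors
    pressure_raw: float                     # Pressure/Raw
    battery: int                            # remaining battery, percent
    rssi: int                               # Bluetooth signal strength
    debug_flags: int                        # Nav/Prox data-quality bits
    mic_bits: int                           # front/rear microphone status
```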
The data is provided in raw format. To convert the values into standardized units, apply the divisors specified in the header file (a minimal conversion sketch follows the table below).
The table below lists the standardized units associated with each sensor:
| Sensor | Standardized Unit |
|---|---|
| Accelerometer | m/s² |
| Gyroscope | deg/s |
| Magnetometer | µT |
| Euler | deg |
| Navigation (OCO™) | mm |
| Proximity | mm |
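A minimal conversion sketch: a "divisor" field implies `standardized = raw / divisor`. The header key below is the documented IMU divisor field; the divisor value and raw sample are illustrative only:

```python
# Suppose the header has been parsed into a dict (see the sketch in
# the Header Information subsection above).
header = {"sensor/Properties.rawDivisor": "1000"}  # illustrative value

def to_standard_units(raw: float, divisor: float) -> float:
    """A 'divisor' field implies standardized = raw / divisor."""
    return raw / divisor

imu_divisor = float(header["sensor/Properties.rawDivisor"])
accel_ms2 = to_standard_units(12345.0, imu_divisor)  # raw counts -> m/s²
print(accel_ms2)
```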
3. OCO Supervision Insights
When optional analytics is enabled in the OCO Supervision app, insights are included in the output `.txt` file:
- OCO/State: Glasses' state (0 = not worn, 1 = calibrating, 2 = ready)
- Expression Intensity: Size of recognized expressions
- Expression Certainty: Confidence level of recognition
- High-Level Metrics (a minimal sketch of the attention computation follows this list):
  - Attention: The percentage of frames in which the user's gaze stays within a 12.5-degree "attention circle" centered on the iPad. The radius comes from the fact that an iPad typically subtends a horizontal field of view of about 25 degrees at a normal viewing distance. Range: 0% to 100%
  - Expressivity: The share of total time spent in recognized expressions, regardless of their intensity. Range: 0% to 100%
  - Interaction: The percentage of time the user interacts with the hosting device (iPad/iPhone) through screen taps. Range: 0% to 100%
  - Valence: Measures emotional tone based on both the presence and intensity of expressions. Range: -100% (completely negative, frown) to 100% (completely positive, smile)
  - Engagement: Combines attention and expressivity to indicate the level of user involvement; engagement is high when the user is both attentive (within the attention circle) and expressive. Range: 0% to 100%
- Annotations: Custom labels defined by the user in the app
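A minimal sketch of the attention metric as defined above, assuming per-frame gaze angles relative to the iPad center are already available (how gaze is derived from the raw sensor data is not specified here):

```python
def attention_pct(gaze_angles_deg: list[float], radius_deg: float = 12.5) -> float:
    """Percentage of frames whose gaze falls inside the attention circle.

    gaze_angles_deg: per-frame angular offset of the gaze from the iPad
    center; obtaining these angles from the raw data is outside this
    sketch. The 12.5-degree radius follows the definition above.
    """
    if not gaze_angles_deg:
        return 0.0
    inside = sum(1 for a in gaze_angles_deg if abs(a) <= radius_deg)
    return 100.0 * inside / len(gaze_angles_deg)

print(attention_pct([3.0, 10.0, 20.0, 5.0]))  # 75.0
```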
4. Video Data
Video recordings are saved in `.mp4` format with consistent specifications:
- Resolution: 640x480
- Codec: H.265 (HEVC)
- Player Recommendation: VLC Player for playback
5. Cloud AI Insights
Advanced insights can be generated through the OCO Data Lab portal (see detailed instructions) using our most advanced models. Once processing completes, the insights appear in the "Processed Files" section.
For each file, the following outputs are generated:
- Class Predictions (`filename_output.csv`): Contains predictions for all available algorithms, with each column representing a different algorithm.
- Class Probabilities (`filename_probabilities_output.csv`): Contains probability scores for each class prediction across all algorithms. Each column follows the format `{algorithm_name}_pred_proba_{class_name}`.
- Video Output (`filename_output.mp4`): If a video was recorded with the iPad camera, an output video is generated that includes both the recording and overlaid prediction probabilities.
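A minimal sketch for loading the probabilities file and selecting one algorithm's columns. Only the `{algorithm_name}_pred_proba_{class_name}` column pattern comes from the description above; the algorithm name `har` is a hypothetical placeholder:

```python
import pandas as pd

# Column names follow {algorithm_name}_pred_proba_{class_name};
# 'har' is a hypothetical algorithm name used for illustration.
probs = pd.read_csv("filename_probabilities_output.csv")
har_cols = [c for c in probs.columns if c.startswith("har_pred_proba_")]

# Per-frame predicted class: the column with the highest probability.
predicted = probs[har_cols].idxmax(axis=1).str.removeprefix("har_pred_proba_")
print(predicted.head())
```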
Our models currently support the following analyses:
Facial Expression Recognition
The system classifies various facial expressions, which can reveal emotional responses and engagement levels. Possible expressions (with their numeric mappings) include:
- Neutral (0)
- Smile (1)
- Frown (2)
- Eyebrow Raise (3)
- Squeezed Eyes (4)
- Other Facial Movements (5)
Human Activity Recognition (HAR)
Recognized physical activities can provide context for emotional data and overall engagement. The activities detected (with their numeric mappings) are:
- Jogging (0)
- Jumping (1)
- Laying (2)
- Stationary (3)
- Walking (4)
Note: The numeric mappings for each class are used in the output file for class probabilities, where each class is represented by its corresponding number.
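The mappings listed above translate directly into lookup tables; a minimal sketch for decoding the numeric labels back into class names:

```python
# Numeric class mappings, exactly as listed above.
FER_CLASSES = {
    0: "Neutral",
    1: "Smile",
    2: "Frown",
    3: "Eyebrow Raise",
    4: "Squeezed Eyes",
    5: "Other Facial Movements",
}

HAR_CLASSES = {
    0: "Jogging",
    1: "Jumping",
    2: "Laying",
    3: "Stationary",
    4: "Walking",
}

print(FER_CLASSES[1], HAR_CLASSES[4])  # Smile Walking
```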