The AI toolkit to understand emotional expression and align technology with human well-being
Voice
Discover over 25 patterns of tune, rhythm, and timbre that imbue everyday speech with complex, blended meanings
Voice
Explore vocal utterances by inferring probabilities of 67 descriptors, like 'laugh', 'sigh', 'shriek', 'oh', 'ahh', 'mhm', and more
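To make the idea of "probabilities of 67 descriptors" concrete: a model like this would return a score per descriptor, which a consumer can rank. This is a hypothetical sketch for illustration only; the function name, the dict shape, and the scores are assumptions, not the product's actual API.

```python
# Hypothetical sketch: assume the vocal-utterance model returns a probability
# for each descriptor (e.g., 'laugh', 'sigh'). Names, shapes, and scores here
# are illustrative assumptions, not a real API.
def top_descriptors(probs: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Return the k most probable utterance descriptors, highest first."""
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Made-up example scores for four of the 67 descriptors:
scores = {"laugh": 0.62, "sigh": 0.21, "shriek": 0.03, "mhm": 0.14}
print(top_descriptors(scores, k=2))  # [('laugh', 0.62), ('sigh', 0.21)]
```

In practice the full distribution (all 67 scores) carries the blended-meaning information; taking only the top entry would discard the blends the copy above emphasizes.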
Face + Body
Differentiate 37 kinds of facial movement that are recognized as conveying distinct meanings, and the many ways they are blended together
Voice
Differentiate 28 kinds of vocal expression recognized as conveying distinct meanings, and the many ways they are blended together
Face + Body
Measure dynamic patterns in facial expression over time that are correlated with over 20 distinct reported emotions
Face + Body
An improved, automated facial action coding system (FACS): measure 26 facial action units (AUs) and 29 other features with less bias than traditional FACS
Voice
Recordings of sentences being spoken in dozens of emotional intonations around the world, with self-report and demographics
Voice
Diverse vocal expressions (e.g., laughs, sighs, screams) worldwide, including 28+ call types with distinct self-reported meanings
Face + Body
Hundreds of thousands of diverse facial expressions worldwide, capturing 37+ expressions with distinct self-reported meanings
Multimodal
Reactions to thousands of evocative experiences across 4 continents, capturing 27+ distinct patterns of emotional response
Multimodal
Hundreds of thousands of short emotional conversations between friends or strangers, with detailed self-report ratings of expression
Empirical Evidence
Our platform is developed in tandem with scientific innovations that reveal how people experience and express over 30 distinct emotions
The Science
No one can read minds, so we start by using statistics to ask what expressions mean to the people making them
Our data-driven science represents real emotional behavior with 3x more precision than traditional approaches
We use scientific control and globally diverse data to remove biases that are entrenched in most AI models
Our Values
Supported Uses
Expressive understanding and communication are critical to the future of voice assistants, health tech, social networks, and much more
Voice assistants: understand expressive queries, engage conversationally, and learn from expressive feedback
Social networks: moderate content with multimodal toxicity detection and well-being impact measures
Health tech: improve the screening, diagnosis, and treatment of mood, mental health, and pain
Accessibility: restore emotional intonation to speech aids, transcribe emotional signals, and guide empathic behavior