Teaching AI to make people happy

The AI toolkit to understand emotional expression and align technology with human well-being

The leading empathy toolkit

our models
Speech Prosody Model (Voice): Discover over 25 patterns of tune, rhythm, and timbre that imbue everyday speech with complex, blended meanings

Vocal Call Types Model (Voice): Explore vocal utterances by inferring probabilities of 67 descriptors, like 'laugh', 'sigh', 'shriek', 'oh', 'ahh', 'mhm', and more (a brief sketch of handling this kind of output follows this list)

Expressive Language Model (Voice): Measure 53 emotions reliably expressed by the subtleties of language

Facial Expression Model (Face + Body): Differentiate 37 kinds of facial movement that are recognized as conveying distinct meanings, and the many ways they are blended together

Vocal Expression Model (Voice): Differentiate 28 kinds of vocal expression recognized as conveying distinct meanings, and the many ways they are blended together

Dynamic Reaction Model (Face + Body): Measure dynamic patterns in facial expression over time that are correlated with over 20 distinct reported emotions

FACS 2.0 Model (Face + Body): An improved, automated facial action coding system that measures 26 facial action units (AUs) and 29 other features with even less bias than traditional FACS

Sentiment Model (Voice): Measure the distribution of possible sentiments expressed in a sentence, from negative to positive, including neutral and ambiguous
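
Several of the models above return graded scores rather than a single label: per-descriptor probabilities from the Vocal Call Types model, blended expression measures, or a distribution over sentiments. As a minimal sketch of how a client might consume that kind of output, the Python snippet below ranks the highest-scoring descriptors. The dictionary shape, the example scores, and the top_descriptors helper are assumptions made for illustration, not the platform's documented API.

```python
from typing import Dict, List, Tuple

def top_descriptors(scores: Dict[str, float], k: int = 5) -> List[Tuple[str, float]]:
    """Return the k highest-scoring descriptors from a model's output.

    Assumes the model yields one probability or confidence per descriptor,
    e.g. {"laugh": 0.72, "sigh": 0.07, ...}; the real output schema may differ.
    """
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical scores for a single vocal utterance (illustrative values only).
example_scores = {"laugh": 0.72, "mhm": 0.09, "sigh": 0.07, "oh": 0.06, "shriek": 0.01}

for name, score in top_descriptors(example_scores, k=3):
    print(f"{name}: {score:.2f}")
```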

our datasets
Emotional Speech Dataset (Voice): Recordings of sentences being spoken in dozens of emotional intonations around the world, with self-report and demographics

Vocal Utterances Dataset (Voice): Diverse vocal expressions (e.g., laughs, sighs, screams) worldwide, including 28+ call types with distinct self-reported meanings

Facial Expressions Dataset (Face + Body): Hundreds of thousands of diverse facial expressions worldwide, capturing 37+ expressions with distinct self-reported meanings

Multimodal Reactions Dataset (Multimodal): Reactions to thousands of evocative experiences across 4 continents, capturing 27+ distinct patterns of emotional response

Dyadic Conversations Dataset (Multimodal): Hundreds of thousands of short emotional conversations between friends or strangers, with detailed self-report ratings of expression

Reinventing the science of human expression

Nature, Trends in Cognitive Sciences, PNAS, Nature Human Behavior, Science Advances

Empirical Evidence

Datasets and models built with emotion science

Our platform is developed in tandem with scientific innovations that reveal how people experience and express over 30 distinct emotions

The Science

No one can read minds, so we start by using statistics to ask what expressions mean to the people making them

Our data-driven science represents real emotional behavior with 3x more precision than traditional approaches

We use scientific control and globally diverse data to remove biases that are entrenched in most AI models


Our Values

Understanding expressive behavior is essential to aligning AI with human well-being. Learn more about the ethical principles that guide our work


Supported Uses

Learning from expression to serve human values

Expressive understanding and communication are critical to the future of voice assistants, health tech, social networks, and much more


Voice assistants: understand expressive queries, engage conversationally, and learn from expressive feedback

Social networks: moderate content with multimodal toxicity detection and well-being impact measures

Health tech: improve the screening, diagnosis, and treatment of mood, mental health, and pain

Accessibility: restore emotional intonation to speech aids, transcribe emotional signals, and guide empathic behavior


