
Capture human expression in text, audio, video, or images

Human values lie beyond words: in tones of sarcasm, subtle facial movements, cringes of empathic pain, laughter tinged with awkwardness, sighs of relief, and more. We can help you read between the lines.


The New Science of Expression

A unified platform for expressive communication

Grounded in scientific research, we offer the world’s most accurate and comprehensive tools for understanding nonverbal behavior.

Many models, one API

Language, speech prosody, facial expression, and more

Insights from any input

Analyze text, audio, video, or images with one line of code

Pay as you go

With usage-based pricing, start building immediately

State-of-the-art

Updated in tandem with the most advanced published research

Interactive playground

Discover the power of expressive behavior in seconds

Open ethical guidelines

Read the ethical guidelines that define our supported uses
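To make the "analyze any input with one line of code" idea above concrete, here is a minimal sketch of what a single multimodal entry point could look like. This is an illustrative assumption, not Hume's actual API: the endpoint URL, function name, and payload fields are hypothetical; only the idea of routing text, audio, video, or images through one call reflects the text above.

```python
import mimetypes

# Hypothetical endpoint -- an illustrative placeholder, not Hume's real API URL.
API_URL = "https://api.example.com/v1/analyze"


def build_request(path: str) -> dict:
    """Build one request body for a file of any supported modality.

    The modality (text, audio, video, or image) is inferred from the
    file's MIME type, so a caller uses the same single call regardless
    of input type.
    """
    mime, _ = mimetypes.guess_type(path)
    if mime is None:
        raise ValueError(f"cannot infer modality for {path!r}")
    modality = mime.split("/")[0]  # e.g. "video/mp4" -> "video"
    if modality not in {"text", "audio", "video", "image"}:
        raise ValueError(f"unsupported modality {modality!r}")
    return {"file": path, "modality": modality}


# One call per input, whatever its type:
print(build_request("interview_clip.mp4"))  # modality inferred as "video"
print(build_request("notes.txt"))           # modality inferred as "text"
```

In a real client, the returned payload would be POSTed to the analysis endpoint; the point of the sketch is only that modality detection can be hidden behind a single uniform call.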

our models

Advancing empathic technology with the most accurate and nuanced models to date

Capture nuances in expressive behavior with newfound precision: subtle facial movements that express love or admiration, cringes of empathic pain, laughter tinged with awkwardness, sighs of relief.

Expressive Language
Model

Voice

Measure 53 emotions reliably expressed by the subtleties of language

Vocal Call Types
Model

Voice

Explore vocal utterances by inferring probabilities of 67 descriptors, like 'laugh', 'sigh', 'shriek', 'oh', 'ahh', 'mhm', and more

Valence & Arousal
Model

Multimodal

Predict perceived valence and arousal in facial expression, vocal bursts, speech, or language

FACS 2.0
Model

Face + Body

An improved, automated facial action coding system (FACS): measure 26 facial action units (AUs) and 29 other features with even less bias than traditional FACS

Facial Expression
Model

Face + Body

Differentiate 37 kinds of facial movement that are recognized as conveying distinct meanings, and the many ways they are blended together

Vocal Expression
Model

Voice

Differentiate 28 kinds of vocal expression recognized as conveying distinct meanings, and the many ways they are blended together

Speech Prosody
Model

Voice

Discover over 25 patterns of tune, rhythm, and timbre that imbue everyday speech with complex, blended meanings

Dynamic Reaction
Model

Face + Body

Measure dynamic patterns in facial expression over time that are correlated with over 20 distinct reported emotions

Sentiment
Model

Voice

Measure the distribution of possible sentiments expressed in a sentence, from negative to positive, including neutral or ambiguous

our datasets

Millions of human experiences and expressions from diverse people around the world

With hundreds of thousands of fully consented samples, our datasets are emotionally rich, naturalistic, culturally diverse, and equitable. They are the tools needed to train and evaluate unbiased empathic technologies.

Emotional Speech
Dataset

Voice

Recordings of sentences being spoken in dozens of emotional intonations around the world, with self-report and demographics

Vocal Utterances
Dataset

Voice

Diverse vocal expressions (e.g., laughs, sighs, screams) worldwide, including 28+ call types with distinct self-reported meanings

Facial Expressions
Dataset

Face + Body

Hundreds of thousands of diverse facial expressions worldwide, capturing 37+ expressions with distinct self-reported meanings

Multimodal Reactions
Dataset

Multimodal

Reactions to thousands of evocative experiences across 4 continents, capturing 27+ distinct patterns of emotional response

Dyadic Conversations
Dataset

Multimodal

Hundreds of thousands of short emotional conversations between friends or strangers, with detailed self-report ratings of expression


Hume