Empathic AI to serve human well-being

With a single API call, interpret emotional expressions and generate empathic responses. Meet the first AI with emotional intelligence.

Trusted By

LG
Coty
UCSF
Lawyer.com
Synthesia
The University of Chicago
Empathic Voice Interface (EVI)
Give your application empathy and a voice

EVI is a conversational voice API powered by empathic AI. It is the only API that measures nuanced vocal modulations, guiding language and speech generation. Trained on millions of human interactions, our empathic large language model (eLLM) unites language modeling and text-to-speech with better EQ, prosody, end-of-turn detection, interruptibility, and alignment.
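The capabilities named above (prosody measurement, end-of-turn detection, interruptibility) can be pictured as options an application sets when opening a voice session. The sketch below is purely illustrative: the function name `build_evi_session` and every config field are assumptions for explanation, not the actual EVI API surface.

```python
# Hypothetical sketch of configuring an empathic voice session.
# All names and fields here are illustrative assumptions, not the real EVI API.

def build_evi_session(api_key: str, *, allow_interruptions: bool = True) -> dict:
    """Assemble a session config covering the capabilities the text names:
    prosody-aware generation, end-of-turn detection, and interruptibility."""
    return {
        "auth": {"api_key": api_key},
        "audio": {"encoding": "linear16", "sample_rate_hz": 16000},
        "conversation": {
            "detect_end_of_turn": True,       # model decides when the user stops
            "allow_interruptions": allow_interruptions,  # user may cut speech off
            "measure_prosody": True,          # vocal modulations guide the reply
        },
    }

config = build_evi_session("my-key")
```

In a real integration, a config like this would accompany a streaming audio connection; consult the official documentation for the actual parameters.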

Expression Measurement API
Interpret vocal and facial expression

Built on 10+ years of research, our models instantly capture nuance in expressions in audio, video, and images: laughter tinged with awkwardness, sighs of relief, nostalgic glances, and more.
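Blended expressions like "laughter tinged with awkwardness" can be thought of as several labels scoring high at once. Here is a minimal sketch of post-processing such scores; the score dictionary is fabricated example data and the labels are assumptions, not actual API output.

```python
# Sketch of summarizing expression scores like those described above.
# The labels and numbers are fabricated for illustration only.

def top_expressions(scores: dict[str, float], k: int = 2) -> list[str]:
    """Return the k highest-scoring expression labels, which together
    describe a blend such as amused-yet-awkward laughter."""
    return [name for name, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

frame = {"Amusement": 0.81, "Awkwardness": 0.63, "Relief": 0.12, "Nostalgia": 0.05}
blend = top_expressions(frame)  # the two dominant labels for this frame
```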

Custom Model API
Predict well-being better than any other AI

Build customizable insights into your application with our low-code custom model solution. Developed using transfer learning from our state-of-the-art expression measurement models and eLLMs, our Custom Model API can predict almost any outcome more accurately than language alone.
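The transfer-learning idea above can be sketched as follows: scores from a pretrained expression model become input features for a small downstream predictor of a custom outcome. This is a conceptual illustration only; the feature names and weights are invented, and a real custom model would be fit to labeled data rather than hand-weighted.

```python
# Illustrative sketch of transfer learning for a custom outcome:
# pretrained expression scores serve as features for a downstream predictor.
# Feature names and weights are made up for illustration.
import math

FEATURES = ["Calmness", "Distress", "Tiredness"]
WEIGHTS = {"Calmness": 0.9, "Distress": -1.2, "Tiredness": -0.4}

def predict_wellbeing(scores: dict[str, float]) -> float:
    """Weighted sum of expression features, squashed to a 0-1 score."""
    z = sum(WEIGHTS[f] * scores.get(f, 0.0) for f in FEATURES)
    return 1.0 / (1.0 + math.exp(-z))

score = predict_wellbeing({"Calmness": 0.8, "Distress": 0.1, "Tiredness": 0.2})
```

The design point is that the upstream expression features carry signal that raw language alone lacks, which is what lets the downstream model stay small and low-code.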

We research foundation models and how to align them with human well-being


“I get to solve problems no one imagined five years ago . . . I get to experience technologies no one will be able to live without in five years.”

Moses Oh | Senior Research Engineer

Join us in building the future of empathic technology

Careers at Hume