
Hume AI Raises $12.7M in Series A Funding

By Alan Cowen on Feb 16, 2023

Did you hear?

Hume AI, a small New York startup, has raised $12.7 million on the premise that it's not enough for AI systems to understand the world's information — they also need to understand human reactions. - Ina Fried, Axios


Thank you to our partners and investors! Now seems like a good time to answer some questions you might have about Hume AI. We’ll let our friends in the media answer them for you:

Why raise money now?

The company, which provides its datasets and models through a unified API platform, intends to use the funds to meet the demand for its technology, which is based on the analysis of human expressive behavior in images, audio, video, or text. - Finsmes

How does our technology work?

The company aims to do for AI technology what Bob Dylan did for music: endow it with EQ and concern for human well-being. Dr. Alan Cowen, who leads Hume AI’s fantastic team of engineers, AI scientists, and psychologists, developed a novel approach to emotion science called semantic space theory. This theory is what’s behind the data-driven methods that Hume AI uses to capture and understand complex, subtle nuances of human expression and communication—tones of language, facial and bodily expressions, the tune, rhythm, and timbre of speech, “umms” and “ahhs.”

Hume AI has productized this research into an expressive communication toolkit for software developers to build their applications with the guidance of human emotional expression. The toolkit contains a comprehensive set of AI tools for understanding vocal and nonverbal communication – models that capture and integrate hundreds of expressive signals in the face and body, language and voice. The company also provides transfer learning tools for adapting these models of expression to drive specific metrics in any application. Its technology is being explored for applications including healthcare, education, and robotics. - Andy Weissman, Union Square Ventures

Who is using Hume AI’s technology?

In September, the company began rolling out a beta version of its technology to its waitlist of over two thousand companies and research organizations, with an early focus on healthcare applications. The company has research partnerships with labs at Mt. Sinai, Boston University Medical Center, and Harvard Medical School examining how the analysis of patients’ nuanced vocal and facial expressions with Hume AI’s tools can improve healthcare outcomes for patients. Applications include standardized patient screening and triaging, more targeted diagnosis and treatment of mental health conditions, and patient monitoring and crisis prediction. - Citybiz

Why is this technology important?

As AI technology begins to shape every aspect of our lives, can we ensure that it cultivates our emotional well-being as a fundamental, overriding objective? Can we optimize algorithms for our happiness and satisfaction as opposed to the engagement-driven methods that drive many applications today? Hume AI believes we can. - Andy Weissman, Union Square Ventures

It seems obvious that emotional well-being is the most important objective for the advanced AI systems at the core of our increasingly connected society.

But can society trust AI that understands emotions?

Empathic AI such as this could pose risks; for example, interpreting emotional behaviors in ways that are not conducive to well-being. It could surface and reinforce unhealthy temptations when we are most vulnerable to them, help create more convincing deepfakes, or exacerbate harmful stereotypes. Hume AI has established the Hume Initiative with a set of six guiding principles: beneficence, emotional primacy, scientific legitimacy, inclusivity, transparency and consent. As part of those principles and guidelines developed by a panel of experts (including AI ethicists, cyberlaw experts, and social scientists), Hume AI has committed to specific use cases that they will never support. - Andy Weissman, Union Square Ventures

We think it’s worth the effort to carefully mitigate the risks of this technology, because the future depends on AI having the capacity for empathy and using it for beneficial purposes.
