Episode 22 Listener Questions + Emotion Science News | The Feelings Lab
Published on May 31, 2022
Join Hume AI CEO Dr. Alan Cowen and podcast host Matt Forte as they venture through the best pod-listener questions we've received so far this season: a veritable emotion science "mailbag." Can people who understand the emotions of others better interpret emotions conveyed through music? How should we responsibly address the ethics around emotion AI data collection and usage? Is there a healthy level of emotional expressivity conducive to emotional well-being? Are video calls bad for brainstorming? Do lobsters or hermit crabs have feelings? Tune in to hear the answers to these questions and more.
We begin with Dr. Alan Cowen, Hume AI CEO, and Matt Forte discussing recent scientific findings regarding video calls and how they change the way we think and communicate.
Hume AI CEO Dr. Alan Cowen and Matt Forte discuss how scientists grapple with emotional experience in animals, going back to Darwin's observations of "purposeless behaviors" among animals which he attributed to emotional expression.
Dr. Alan Cowen, CEO of Hume AI, describes The Hume Initiative, a not-for-profit developing concrete guidelines for the use of empathic AI.
Hume AI CEO Dr. Alan Cowen and Matt Forte discuss how expressing emotion may be critical to our mental health and well-being and to healthy relationships.
Does the use of English-language terms to describe emotions contribute to a Western bias in emotion science and empathic AI? Dr. Alan Cowen shares the importance of cross-cultural data.