Episode 25: Myths About Emotion Science | The Feelings Lab
Published on Aug 30, 2022
Join Dr. Alan Cowen, founder of Hume AI; Dr. Dacher Keltner, founding director of the Greater Good Science Center and Professor of Psychology at the University of California, Berkeley; and host Matt Forte as they discuss Myths About Emotion Science. Does the face "reveal our emotions"? What does science really say about how people express their feelings? We discuss the real complexity and nuance of facial and vocal expressions, and revisit what Darwin meant when he said that expressions are "purposeless."
To kick us off, Hume AI CEO Dr. Alan Cowen discusses the meaning of expressions, and explains why the question of whether they "reveal our emotions" is a bit of a misdirection.
Dr. Dacher Keltner discusses how debates surrounding emotion science often proceed independently of the data, and how non-peer-reviewed articles and public statements have popularized the notion that expressions have no meaningful correlation with emotion.
Later in the episode, Dr. Alan Cowen discusses the misconception that AI is being built to "measure our emotions," and how this framing derails attempts to build transparency, control, and empathy into technology that is already teaching itself about human expressive behaviors.
Dr. Dacher Keltner explains the importance of Darwin's "The Expression of the Emotions in Man and Animals" and how it anticipated more recent discoveries revealing how our expressions coordinate social interaction.
Next, Dr. Alan Cowen discusses the three sources of evidence that humans evolved to form expressive behaviors: studies across species, developmental stages, and cultures.
Dr. Dacher Keltner and Dr. Alan Cowen discuss why popular debates seem to center on the face to the exclusion of other modalities of emotional expression, such as the voice.
All this and more can be found in our full episode, available on Apple Podcasts and Spotify.
Subscribe, and tell a friend to subscribe!