Hume Powers Projects at UC Berkeley LLM Hackathon
Published on Aug 11, 2023
UC Berkeley hosted the world's largest AI hackathon on June 17-18, with over 1,200 students developing diverse applications using large language models and open-source APIs. Hume's APIs were used alongside LLMs in 57 of the 240 projects, including three of the 12 finalists. The projects ranged from healthcare devices to education tools, demonstrating the APIs' wide applicability and versatility.
Read on to learn about some highlighted projects:
mila - Recognize signs of postpartum depression
The team behind mila, winner of the Best Use of Hume award, designed a device to better support mothers with postpartum depression (PPD), a healthcare gap that affects four out of five new moms in the United States. Motivated by their own medical and motherhood experiences, the developers emphasized the need for emotional awareness in AI and technology. “There’s a lot of people using [AI] in ways that are fast and fun, but empathy and human connection is very important and meaningful,” one member stated, noting Hume’s mission of elevating human well-being. By using Hume’s APIs to translate real-time audio from daily conversations into expressions associated with certain emotions, mila and its companion app detect early indicators of PPD. The tool connects mothers with healthcare professionals to better pinpoint actions, tailor follow-ups, and support mental well-being.
“As a multidisciplinary team, passionate about women’s health, our project is driven by the core belief of amplifying the voices of women and supporting their mental well-being during the transformative journey of motherhood.”
In addition to the $1,000 prize, the winners enjoyed a conversation with our CEO, Dr. Alan Cowen.
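For developers curious how the audio-to-expressions step described above might look in practice, here is a minimal sketch that streams a short audio clip to Hume's Expression Measurement API for speech prosody analysis. It is based on the `hume` Python SDK as it existed around the time of the hackathon (`HumeStreamClient`, `ProsodyConfig`); the file name, API key placeholder, and response handling are illustrative assumptions, not mila's actual implementation.

```python
# A minimal sketch (not mila's code): send a short audio clip to Hume's
# streaming Expression Measurement API and print the top prosody-based
# expression scores. Assumes the 2023-era `hume` Python SDK; class names
# and the response layout may differ in newer releases.
import asyncio

from hume import HumeStreamClient
from hume.models.config import ProsodyConfig


async def analyze_clip(path: str) -> None:
    client = HumeStreamClient("<YOUR_HUME_API_KEY>")  # placeholder key
    config = ProsodyConfig()  # measure vocal expression (speech prosody)

    async with client.connect([config]) as socket:
        result = await socket.send_file(path)

    # The layout below is an assumption based on Hume's documented JSON shape:
    # each prediction carries per-segment scores for named expressions.
    for pred in result["prosody"]["predictions"]:
        top = sorted(pred["emotions"], key=lambda e: e["score"], reverse=True)[:5]
        print([(e["name"], round(e["score"], 3)) for e in top])


if __name__ == "__main__":
    asyncio.run(analyze_clip("daily_conversation.wav"))  # hypothetical file
```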
EdGauge - Improve student focus and retention
Educators can take advantage of EdGauge to evaluate student comprehension of academic material through real-time feedback and advice. The hackathon team, inspired by their time as teaching assistants in college, wanted to provide educators with a tool for receiving feedback on student performance. EdGauge reads expressions in the classroom and uses Hume’s platform to tag those related to confusion, boredom, and concentration, so teachers can personalize and adjust their strategies to maximize learning.
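As a rough illustration of the kind of post-processing such a tool might do, the sketch below aggregates per-student scores for a few learning-relevant expressions into a classroom-level signal. The input shape and label names are assumptions for illustration; this is not EdGauge's actual code.

```python
# A hypothetical post-processing step: given per-student expression scores
# like those returned by a facial expression model, average a classroom-level
# signal for a few learning-relevant expressions.
from statistics import mean

TRACKED = ("Confusion", "Boredom", "Concentration")  # assumed label names


def classroom_signal(student_scores: list[dict[str, float]]) -> dict[str, float]:
    """student_scores: one {expression_name: score} dict per detected face."""
    return {
        label: round(mean(s.get(label, 0.0) for s in student_scores), 3)
        for label in TRACKED
    }


# Example with made-up numbers for two detected faces in one frame:
frame = [
    {"Confusion": 0.62, "Boredom": 0.10, "Concentration": 0.35},
    {"Confusion": 0.15, "Boredom": 0.48, "Concentration": 0.22},
]
print(classroom_signal(frame))
# {'Confusion': 0.385, 'Boredom': 0.29, 'Concentration': 0.285}
```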
Polysphere - Elevate your music listening experience
Polysphere tracks users’ real-time reactions to songs, providing tailored song recommendations and connecting compatible users. Hume’s facial expression measurement capabilities capture the range of emotions that music can evoke, reflecting the complexity of the listening experience. The developers hope to improve the quality of content consumed, foster friendships, and revolutionize how music is discovered and shared.
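One simple way to picture the “connect compatible users” idea is to represent each listener by a vector of averaged expression scores and compare listeners by cosine similarity. The sketch below is a hypothetical illustration of that matching step, not Polysphere's implementation.

```python
# A hypothetical matching step: each listener is an averaged vector of
# expression scores across songs they reacted to; compatibility is the
# cosine similarity between two such profiles.
import numpy as np


def compatibility(profile_a: np.ndarray, profile_b: np.ndarray) -> float:
    """Cosine similarity between two expression-score profiles."""
    denom = np.linalg.norm(profile_a) * np.linalg.norm(profile_b)
    return float(profile_a @ profile_b / denom) if denom else 0.0


# Toy profiles over three illustrative expression dimensions
# (e.g. Joy, Nostalgia, Awe):
alice = np.array([0.8, 0.3, 0.5])
bob = np.array([0.7, 0.4, 0.6])
print(round(compatibility(alice, bob), 3))
```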
Violet - Access intelligent, empathic AI therapists
Violet, a voice-enabled AI therapist, engages users in genuine, organic conversations to assess their mental well-being. The team combined Hume’s APIs with OpenAI’s GPT-4 to build an emotionally intelligent virtual counselor. Hume’s technology allows Violet to go beyond mere audio transcription, reading users’ facial expressions and speech patterns to identify topics of interest and select effective therapeutic approaches.
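As a rough sketch of how expression measurements can be combined with an LLM, the example below passes a user's top detected expressions into a GPT-4 prompt so the reply can acknowledge how the user seems to feel. It uses OpenAI's 2023-era `ChatCompletion` interface; the prompt wording, variable names, and expression labels are illustrative assumptions, not Violet's implementation.

```python
# A rough sketch (not Violet's implementation): feed the user's top detected
# expressions, e.g. from a speech prosody model, into a GPT-4 prompt so the
# reply can acknowledge how the user seems to feel. Uses the pre-1.0
# openai.ChatCompletion interface; newer SDK versions use a different client.
import openai

openai.api_key = "<YOUR_OPENAI_API_KEY>"  # placeholder


def empathic_reply(user_text: str, top_expressions: list[str]) -> str:
    system = (
        "You are a supportive, empathic counselor. The user's voice and face "
        f"suggest these expressions: {', '.join(top_expressions)}. "
        "Acknowledge them gently and respond helpfully."
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user_text},
        ],
    )
    return response["choices"][0]["message"]["content"]


# Example call with hypothetical expression labels:
print(empathic_reply("I haven't been sleeping well lately.", ["Tiredness", "Anxiety"]))
```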
All projects can be viewed on the hackathon’s DevPost page.