Connecting Language and Emotion in Large Language Models for Human-AI Collaboration
Shadab Choudhury - MS Thesis Defense
Friday, April 18, 2025 · 11 AM - 12:30 PM
Abstract - Large Language Models demonstrate linguistic abilities on par with humans, generating short texts, stories, instructions, and even code that is often indistinguishable from human-written output. This allows humans to use large language models (LLMs) collaboratively, as communication aides or writing assistants.
However, humans cannot always assume an LLM will behave the same way another person would. This is particularly evident in subjective scenarios, such as when emotion is involved. In this work, I explore to what depth LLMs perceive and understand human emotions, and examine ways of describing an emotion to an LLM for collaborative work. First, I study the problem of classifying emotions and show that LLMs perform well on their own and can also improve smaller models at the same task. Second, I focus on generating emotions, using the problem space of keyword-constrained generation and a human participant study to see where human expectations and LLM outputs diverge and how we can minimize any misalignment. Here, I find that using English words and lexical expressions of valence-arousal-dominance (VAD) scales leads to good alignment and generation quality, while numeric VAD dimensions or emojis fare worse.
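To make the comparison concrete, here is a minimal sketch of how the four emotion representations mentioned above might be rendered as prompts for keyword-constrained generation. The function name, the VAD values, and the exact wording are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical VAD coordinates (valence, arousal, dominance in [0, 1]);
# real work would draw these from an emotion lexicon.
VAD = {"joy": (0.95, 0.70, 0.60)}

def emotion_prompt(emotion: str, style: str, keyword: str = "rain") -> str:
    """Build a keyword-constrained generation prompt using one of four
    ways of describing the target emotion to an LLM."""
    v, a, d = VAD[emotion]
    if style == "word":            # plain English emotion word
        desc = f"the emotion '{emotion}'"
    elif style == "lexical_vad":   # VAD dimensions phrased in words
        desc = "a highly pleasant, fairly energetic, somewhat dominant feeling"
    elif style == "numeric_vad":   # raw numeric VAD dimensions
        desc = f"an emotion with valence={v}, arousal={a}, dominance={d}"
    elif style == "emoji":         # emoji standing in for the emotion
        desc = "the emotion \U0001F60A"
    else:
        raise ValueError(f"unknown style: {style}")
    return f"Write a sentence containing the word '{keyword}' that conveys {desc}."
```

Per the abstract's findings, the "word" and "lexical_vad" variants would be expected to align best with human judgments, while "numeric_vad" and "emoji" fare worse.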
Committee -
Dr. Lara Martin (Chair/Advisor)
Dr. Cynthia Matuszek
Dr. Frank Ferraro