This paper introduces a Generator-Interpreter framework to evaluate LLMs' ability to attribute emotions across 15 countries, considering both the cultural context in which an emotion is expressed (generator) and the cultural context in which it is interpreted (interpreter). The authors find that LLM performance varies significantly with emotion type and cultural context, and that the generator's country of origin has the stronger impact. The results highlight the need for culturally sensitive emotion modeling in LLMs.
LLMs struggle to attribute emotions across cultures, and where an emotion *originates* matters more than where it's *interpreted*.
Large language models (LLMs) are increasingly used in cross-cultural systems to understand and adapt to human emotions, which are shaped by cultural norms of expression and interpretation. However, prior work on emotion attribution has focused mainly on interpretation, overlooking the cultural background of emotion generators. This implicit assumption of universality neglects variation in how emotions are expressed and perceived across nations. To address this gap, we propose a Generator-Interpreter framework that captures dual perspectives of emotion attribution by considering both expression and interpretation. We systematically evaluate six LLMs on an emotion attribution task using data from 15 countries. Our analysis reveals that performance varies with both the emotion type and the cultural context. While generator-interpreter alignment effects are present, the generator's country of origin has a stronger impact on performance. We call for culturally sensitive emotion modeling in LLM-based systems to improve robustness and fairness in emotion understanding across diverse cultural contexts.
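The dual-perspective analysis described above can be sketched in a few lines: score each model prediction once, then aggregate accuracy separately along the generator axis and the interpreter axis to compare which cultural context drives performance. This is a minimal illustrative sketch, not the paper's actual evaluation code; the record fields, country names, and emotion labels below are all hypothetical.

```python
from collections import defaultdict

# Hypothetical records: each expression has a generator country (where the
# emotion was produced) and an interpreter country (whose norms the model
# is asked to interpret it under), plus a gold label and a model prediction.
records = [
    {"generator": "Japan", "interpreter": "USA",   "gold": "shame", "pred": "guilt"},
    {"generator": "Japan", "interpreter": "Japan", "gold": "shame", "pred": "shame"},
    {"generator": "USA",   "interpreter": "Japan", "gold": "pride", "pred": "pride"},
    {"generator": "USA",   "interpreter": "USA",   "gold": "pride", "pred": "pride"},
]

def accuracy_by(records, axis):
    """Aggregate attribution accuracy along one cultural axis
    ('generator' or 'interpreter')."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[axis]] += 1
        hits[r[axis]] += int(r["pred"] == r["gold"])
    return {country: hits[country] / totals[country] for country in totals}

# Comparing the two aggregations shows which axis explains more of the
# variation in accuracy.
by_generator = accuracy_by(records, "generator")
by_interpreter = accuracy_by(records, "interpreter")
```

On this toy data, accuracy splits differently by generator country than by interpreter country, which is the kind of asymmetry the framework is designed to surface.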