The paper evaluates the ability of four leading LLMs (GPT-4, Claude, Gemini, and Llama 3) to understand and moderate Gen Alpha's unique digital language, focusing on detecting masked harassment and manipulation. It uses a novel dataset of 100 Gen Alpha expressions collected from gaming platforms, social media, and video content to assess the models' comprehension capabilities. The study reveals significant gaps in the LLMs' understanding, highlighting vulnerabilities in current AI safety systems for protecting young users.
LLMs struggle to understand Gen Alpha's unique digital language, leaving young users vulnerable to online harassment and manipulation that goes undetected by current AI safety systems.
This research provides a unique assessment of how AI systems interpret Generation Alpha (Gen Alpha, born 2010-2024) digital communication patterns. As the first generation to grow up with AI as part of daily life, Gen Alpha faces unprecedented online vulnerability due to their immersive digital engagement and the growing disconnect between their communication patterns and traditional safety mechanisms. Their distinctive ways of communicating, blending gaming references, memes, and AI-influenced expressions, often obscure concerning interactions from both human moderators and AI safety systems. The study evaluates four leading AI systems' (GPT-4, Claude, Gemini, and Llama 3) ability to understand and moderate this communication, with particular focus on detecting masked harassment and manipulation that exploit Gen Alpha's unique linguistic patterns. Through analysis of 100 contemporary Gen Alpha expressions collected from gaming platforms, social media, and video content, significant gaps in AI systems' comprehension capabilities were found, highlighting critical safety implications. This paper makes four key contributions: (1) a first-of-its-kind dataset of Gen Alpha expressions, (2) a framework for improving AI content moderation systems to better protect young users in digital spaces, (3) a systematic evaluation of how well AI systems, human moderators, and parents understand Gen Alpha communication, incorporating Gen Alpha's direct participation in the research process, and (4) the identification of specific vulnerabilities created by the growing linguistic gap between Gen Alpha users and their protectors (both human and AI). The findings highlight an urgent need for improved AI safety systems to better protect young users, especially given Gen Alpha's tendency to avoid seeking help due to perceived adult incomprehension of their digital world.
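The comprehension evaluation described above can be pictured as a simple scoring loop: each model is asked to classify an expression as benign or concerning, and its answer is checked against a human-annotated label. The sketch below is purely illustrative, not the paper's actual protocol; the expressions, labels, and the `naive_model` stub are invented for this example, and a real evaluation would call each LLM's API in place of the stub.

```python
from typing import Callable

# Tiny invented sample for illustration; the paper's dataset
# contains 100 annotated Gen Alpha expressions.
DATASET = [
    {"expression": "you're so skibidi", "label": "benign"},
    {"expression": "ratio + L + fell off", "label": "concerning"},
]

def evaluate(model: Callable[[str], str]) -> float:
    """Return the fraction of expressions the model labels correctly."""
    correct = sum(
        model(item["expression"]) == item["label"] for item in DATASET
    )
    return correct / len(DATASET)

# Stand-in for a real LLM call; a model that never flags anything
# illustrates how masked harassment slips through undetected.
def naive_model(expression: str) -> str:
    return "benign"

accuracy = evaluate(naive_model)
print(f"accuracy: {accuracy:.0%}")  # prints "accuracy: 50%"
```

Scoring per expression in this way also makes it straightforward to compare AI systems against human moderators and parents on the same items, which is how the paper frames its cross-group comparison.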
This research uniquely combines the perspective of a Gen Alpha researcher with rigorous academic analysis to address critical challenges in online safety.