The authors introduce HateMirage, a new dataset of 4,530 YouTube comments tied to debunked misinformation claims, designed to study subtle forms of online hate speech. Each comment is annotated along three dimensions: Target, Intent, and Implication, forming a multi-dimensional explanation framework. Benchmarking experiments with open-source language models suggest that explanation quality depends more on pretraining diversity and reasoning-oriented data than on model scale.
HateMirage reveals that detecting subtle hate speech emerging from misinformation requires more than just scaling up models; pretraining diversity and reasoning-oriented data are key.
Subtle and indirect hate speech remains an underexplored challenge in online safety research, particularly when harmful intent is embedded within misleading or manipulative narratives. Existing hate speech datasets primarily capture overt toxicity, underrepresenting the nuanced ways misinformation can incite or normalize hate. To address this gap, we present HateMirage, a novel dataset of Faux Hate comments designed to advance reasoning and explainability research on hate emerging from fake or distorted narratives. The dataset was constructed by identifying widely debunked misinformation claims from fact-checking sources and tracing related YouTube discussions, resulting in 4,530 user comments. Each comment is annotated along three interpretable dimensions: Target (who is affected), Intent (the underlying motivation or goal behind the comment), and Implication (its potential social impact). Unlike prior explainability datasets such as HateXplain and HARE, which offer token-level or single-dimensional reasoning, HateMirage introduces a multi-dimensional explanation framework that captures the interplay between misinformation, harm, and social consequence. We benchmark multiple open-source language models on HateMirage using ROUGE-L F1 and Sentence-BERT similarity to assess explanation coherence. Results suggest that explanation quality may depend more on pretraining diversity and reasoning-oriented data than on model scale alone. By coupling misinformation reasoning with harm attribution, HateMirage establishes a new benchmark for interpretable hate detection and responsible AI research.
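To make the evaluation protocol concrete, the sketch below is a minimal illustration, not the authors' released code: it assumes a record shaped after the paper's three annotation dimensions and computes the two reported metrics with the `rouge-score` and `sentence-transformers` libraries. The record values, the reference construction, and the SBERT checkpoint (`all-MiniLM-L6-v2`) are all illustrative assumptions; the paper does not specify them.

```python
# Hedged sketch: scoring a model-generated explanation against a gold
# annotation for one hypothetical HateMirage-style record. Field values
# and library choices are assumptions for illustration only.
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

# Hypothetical comment annotated along the Target / Intent / Implication schema.
record = {
    "comment": "Example YouTube comment tied to a debunked claim.",
    "target": "The group blamed by the misinformation narrative.",
    "intent": "To shift responsibility for the event onto that group.",
    "implication": "Normalizes hostility toward the targeted group.",
}

# Gold reference explanation (here, the three dimensions concatenated)
# versus a model-generated explanation.
reference = " ".join([record["target"], record["intent"], record["implication"]])
prediction = "The comment blames the group for the event, encouraging hostility."

# Lexical overlap: ROUGE-L F1 between reference and generated explanation.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l_f1 = scorer.score(reference, prediction)["rougeL"].fmeasure

# Semantic agreement: cosine similarity of Sentence-BERT embeddings.
sbert = SentenceTransformer("all-MiniLM-L6-v2")  # checkpoint is an assumption
emb = sbert.encode([reference, prediction], convert_to_tensor=True)
sbert_sim = util.cos_sim(emb[0], emb[1]).item()

print(f"ROUGE-L F1: {rouge_l_f1:.3f}  SBERT similarity: {sbert_sim:.3f}")
```

Pairing a lexical score with an embedding-based one is a common design choice for explanation benchmarks: ROUGE-L penalizes explanations that drift from the reference wording, while SBERT similarity credits paraphrases that preserve the reference's meaning.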