This paper introduces a benchmark for evaluating LLMs' sensitivity to linguistic shibboleths in simulated hiring evaluations. The benchmark uses 100 question-response pairs with controlled linguistic variations to isolate and measure how LLMs penalize specific linguistic patterns, such as hedging. Results show that hedged responses receive significantly lower ratings (25.6% lower on average), demonstrating how linguistic shibboleths can introduce demographic bias into automated evaluation systems.
LLMs evaluating job candidates exhibit significant bias against hedging language, rating hedged responses 25.6% lower on average, even when the content is equivalent.
This paper introduces a comprehensive benchmark for evaluating how Large Language Models (LLMs) respond to linguistic shibboleths: subtle linguistic markers that can inadvertently reveal demographic attributes such as gender, social class, or regional background. Through carefully constructed interview simulations using 100 validated question-response pairs, we demonstrate how LLMs systematically penalize certain linguistic patterns, particularly hedging language, despite equivalent content quality. Our benchmark generates controlled linguistic variations that isolate specific phenomena while maintaining semantic equivalence, which enables the precise measurement of demographic bias in automated evaluation systems. We validate our approach along multiple linguistic dimensions, showing that hedged responses receive 25.6% lower ratings on average, and demonstrate the benchmark's effectiveness in identifying model-specific biases. This work establishes a foundational framework for detecting and measuring linguistic discrimination in AI systems, with broad applications to fairness in automated decision-making contexts.
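The core measurement the abstract describes, the average relative drop in rating between a direct response and its hedged, content-equivalent variant, can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the `hedging_penalty` helper and the sample ratings are hypothetical.

```python
# Hypothetical sketch of the paper's hedging-penalty metric: compare
# paired LLM ratings for content-equivalent direct vs. hedged responses.

def hedging_penalty(direct_ratings, hedged_ratings):
    """Mean relative drop in rating when hedging is added, as a percentage."""
    drops = [
        (d - h) / d * 100.0
        for d, h in zip(direct_ratings, hedged_ratings)
        if d > 0  # skip zero-rated direct responses to avoid division by zero
    ]
    return sum(drops) / len(drops)

# Each pair rates the same answer content, once phrased directly and once
# hedged (e.g. "I led the project" vs. "I suppose I sort of led the project").
# These ratings are made-up example values on a 1-10 scale.
direct = [8.0, 7.5, 9.0, 6.0]
hedged = [6.0, 5.5, 6.5, 4.5]

print(f"{hedging_penalty(direct, hedged):.1f}% average penalty")
```

In the paper's benchmark, the same computation over 100 validated question-response pairs yields the reported 25.6% average penalty.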