The paper identifies Rubric-Induced Preference Drift (RIPD), a vulnerability in LLM-based judges where seemingly benign rubric edits cause systematic shifts in preferences on target domains, even when the edited rubrics pass benchmark validation. The authors demonstrate that RIPD can be exploited through rubric-based preference attacks, in which benchmark-compliant rubric edits steer judgments away from trusted references and reduce target-domain accuracy. Furthermore, the induced bias propagates through alignment pipelines, leading to persistent drift in model behavior when these judgments are used for downstream post-training.
LLM judges are surprisingly susceptible to subtle rubric manipulations that can induce significant preference drift, even while maintaining benchmark performance, creating a stealthy attack surface for biasing model alignment.
Evaluation and alignment pipelines for large language models increasingly rely on LLM-based judges, whose behavior is guided by natural-language rubrics and validated on benchmarks. We identify a previously under-recognized vulnerability in this workflow, which we term Rubric-Induced Preference Drift (RIPD). Even when rubric edits pass benchmark validation, they can still produce systematic and directional shifts in a judge's preferences on target domains. Because rubrics serve as a high-level decision interface, such drift can emerge from seemingly natural, criterion-preserving edits and remain difficult to detect through aggregate benchmark metrics or limited spot-checking. We further show that this vulnerability can be exploited through rubric-based preference attacks, in which benchmark-compliant rubric edits steer judgments away from a fixed human or trusted reference on target domains, systematically inducing RIPD and reducing target-domain accuracy by up to 9.5% (helpfulness) and 27.9% (harmlessness). When these judgments are used to generate preference labels for downstream post-training, the induced bias propagates through alignment pipelines and becomes internalized in trained policies, leading to persistent and systematic drift in model behavior. Overall, our findings highlight evaluation rubrics as a sensitive and manipulable control interface, revealing a system-level alignment risk that extends beyond evaluator reliability alone. The code is available at: https://github.com/ZDCSlab/Rubrics-as-an-Attack-Surface. Warning: Certain sections may contain potentially harmful content that may not be appropriate for all readers.
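To make the measurement setup concrete, the sketch below shows one way such drift could be quantified: compare a pairwise judge's agreement with trusted reference labels under the original and the edited rubric, on both a validation benchmark and a target domain. This is a minimal illustration, not the authors' released implementation; the names (`Pair`, `JudgeFn`, `measure_drift`) and data layout are assumptions for this example.

```python
# Hypothetical harness for quantifying Rubric-Induced Preference Drift (RIPD).
# A "compliant" rubric edit should leave benchmark accuracy roughly flat while
# target-domain accuracy drops and preferences flip, if the edit induces drift.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pair:
    prompt: str
    response_a: str
    response_b: str
    reference: str  # trusted label: "A" or "B"

# A judge takes a rubric and a pair, and returns its preferred response.
JudgeFn = Callable[[str, Pair], str]

def accuracy(judge: JudgeFn, rubric: str, pairs: List[Pair]) -> float:
    """Fraction of pairs where the judge agrees with the trusted reference."""
    return sum(judge(rubric, p) == p.reference for p in pairs) / len(pairs)

def measure_drift(judge: JudgeFn, original: str, edited: str,
                  benchmark: List[Pair], target: List[Pair]) -> dict:
    """Compare accuracy changes on the benchmark vs. the target domain,
    and the rate at which the edit flips the judge's preferences."""
    return {
        "benchmark_delta": accuracy(judge, edited, benchmark)
                           - accuracy(judge, original, benchmark),
        "target_delta": accuracy(judge, edited, target)
                        - accuracy(judge, original, target),
        "flip_rate": sum(judge(original, p) != judge(edited, p)
                         for p in target) / len(target),
    }
```

In this framing, a small `benchmark_delta` alongside a large negative `target_delta` and a high `flip_rate` is the signature of a benchmark-compliant rubric edit that nonetheless induces RIPD on the target domain.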