The paper introduces ConsistencyChecker, a tree-based evaluation framework, to assess the consistency of LLMs across reversible transformations in tasks like machine translation and AI-assisted programming. This framework constructs a tree where nodes are text states and edges are inverse operations, enabling the quantification of consistency based on similarity across different depths. Experiments on eight models demonstrate that ConsistencyChecker effectively differentiates model performance and correlates strongly with WMT 2024 auto-ranking without relying on WMT paired data.
ConsistencyChecker reveals that LLM consistency, measured via reversible transformations, strongly correlates with translation quality, all without relying on traditional paired training data.
Evaluating consistency in large language models (LLMs) is crucial for ensuring reliability, particularly in complex, multi-step interactions between humans and LLMs. Traditional self-consistency methods often miss subtle semantic changes in natural language and functional shifts in code or equations, which can accumulate over multiple transformations. To address this, we propose ConsistencyChecker, a tree-based evaluation framework designed to measure consistency through sequences of reversible transformations, including machine translation and AI-assisted programming tasks. In our framework, nodes represent distinct text states, while edges correspond to pairs of inverse operations. Dynamic, LLM-generated benchmarks ensure a fair assessment of the model's generalization ability and eliminate benchmark leakage. Consistency is quantified based on similarity across different depths of the transformation tree. Experiments on eight models from various families and sizes show that ConsistencyChecker can distinguish the performance of different models. Notably, our consistency scores, computed entirely without using WMT paired data, correlate strongly (r > 0.7) with the WMT 2024 auto-ranking, demonstrating the validity of our benchmark-free approach. Our implementation is available at: https://github.com/ulab-uiuc/consistencychecker.
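The core mechanism, repeatedly applying a pair of inverse operations and scoring how well the original text is recovered at each depth, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `forward`/`backward` stand in for LLM-driven transformations (e.g. translating to another language and back), and `SequenceMatcher` is a crude stand-in for the similarity metric the framework would use.

```python
import codecs
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude surface similarity in [0, 1]; a real setup would use
    a semantic (e.g. embedding-based) score instead."""
    return SequenceMatcher(None, a, b).ratio()

def consistency_scores(root: str, forward, backward, depth: int) -> list[float]:
    """Walk one root-to-leaf path of the transformation tree.

    forward/backward are an (assumed) inverse pair of operations.
    Returns the similarity between the root text and the recovered
    text after each round trip, i.e. at depths 1..depth.
    """
    state, scores = root, []
    for _ in range(depth):
        state = backward(forward(state))  # one reversible round trip
        scores.append(similarity(root, state))
    return scores

# Toy stand-in for an LLM call: ROT13 is exactly self-inverse, so a
# perfectly consistent "model" recovers the source at every depth.
rot13 = lambda s: codecs.encode(s, "rot13")
print(consistency_scores("hello world", rot13, rot13, depth=3))
# → [1.0, 1.0, 1.0]
```

A lossy transformation pair would yield scores that decay with depth, which is exactly the accumulation of drift the framework is designed to expose.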