This paper introduces a decomposition-based framework for cross-lingual LLM evaluation, using a Universal Criteria Set (UCS) to represent language-agnostic evaluation dimensions. By decomposing the evaluation process into these shared criteria, the framework enables effective transfer learning from English to other languages, reducing the need for costly target-language annotations. Experiments across multiple faithfulness tasks and languages demonstrate consistent performance gains over existing methods.
Forget expensive multilingual annotations: this framework lets you evaluate LLMs in new languages by transferring knowledge from English, with surprisingly strong results.
As large language models are increasingly deployed across diverse real-world applications, extending automated evaluation beyond English has become a critical challenge. Existing evaluation approaches are predominantly English-focused, and adapting them to other languages is hindered by the scarcity and cost of human-annotated judgments in most languages. We introduce a decomposition-based evaluation framework built around a Universal Criteria Set (UCS): a shared, language-agnostic set of evaluation dimensions that yields an interpretable intermediate representation and supports cross-lingual transfer with minimal supervision. Experiments on multiple faithfulness tasks across languages and model backbones demonstrate consistent improvements over strong baselines without requiring target-language annotations.
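To make the decomposition idea concrete, here is a minimal sketch of evaluating an output against a set of language-agnostic criteria and aggregating the per-criterion scores. The criteria names, the `judge` heuristic, and the mean aggregation are all illustrative assumptions, not the paper's actual UCS or judging method (which would typically use an LLM judge per criterion).

```python
# Illustrative sketch (not the paper's implementation): score each criterion
# of a hypothetical Universal Criteria Set independently, then aggregate.

# Hypothetical language-agnostic criteria (names are assumptions).
UCS = ["factual_consistency", "completeness", "no_hallucination"]

def judge(criterion: str, source: str, output: str) -> float:
    """Stub for a per-criterion judge; a real system would call an LLM.

    Uses a trivial token-overlap heuristic purely for illustration.
    """
    src_tokens = set(source.lower().split())
    out_tokens = set(output.lower().split())
    if not out_tokens:
        return 0.0
    overlap = len(src_tokens & out_tokens) / len(out_tokens)
    # Pretend each criterion weights the heuristic differently (assumed weights).
    weights = {"factual_consistency": 1.0,
               "completeness": 0.8,
               "no_hallucination": 0.9}
    return overlap * weights.get(criterion, 1.0)

def evaluate(source: str, output: str) -> dict:
    """Score every UCS dimension separately, then aggregate by the mean."""
    scores = {c: judge(c, source, output) for c in UCS}
    scores["overall"] = sum(scores[c] for c in UCS) / len(UCS)
    return scores

result = evaluate("the cat sat on the mat", "the cat sat on the mat")
```

The per-criterion scores form the interpretable intermediate representation: because each dimension is language-agnostic, a judge trained or prompted on English data can, in principle, be reused for other languages without new annotations.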