The GaoYao benchmark is introduced to address limitations in evaluating multilingual and multicultural capabilities of LLMs by providing a unified framework across three cultural layers and nine cognitive sub-layers. The benchmark expands language coverage with native-quality localization by experts into 19 languages and synthesizes cross-cultural test sets for 34 cultures. Evaluation of 20+ LLMs using GaoYao reveals geographical performance disparities and task-specific gaps, providing a detailed diagnostic resource.
LLMs exhibit significant geographical performance disparities and task-specific gaps when evaluated on the new GaoYao benchmark, highlighting the need for more nuanced multilingual and multicultural training.
Evaluating the multilingual and multicultural capabilities of Large Language Models (LLMs) is essential for their global utility. However, current benchmarks face three critical limitations: (1) fragmented evaluation dimensions that often neglect deep cultural nuances; (2) insufficient language coverage in subjective tasks, which rely on low-quality machine translation; and (3) shallow analysis that lacks diagnostic depth beyond simple rankings. To address these limitations, we introduce GaoYao, a comprehensive benchmark with 182.3k samples spanning 26 languages and 51 nations/areas. First, GaoYao proposes a unified framework categorizing evaluation tasks into three cultural layers (General Multilingual, Cross-cultural, Monocultural) and nine cognitive sub-layers. Second, we achieve native-quality expansion by leveraging experts to rigorously localize subjective benchmarks into 19 languages and synthesizing cross-cultural test sets for 34 cultures, surpassing prior coverage by up to 111%. Third, we conduct an in-depth diagnostic analysis of 20+ flagship and compact LLMs. Our findings reveal significant geographical performance disparities and distinct gaps between tasks, offering a reliable map for future work. We release the benchmark at https://github.com/lunyiliu/GaoYao.
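The three-layer/nine-sub-layer taxonomy described above can be sketched as a simple score-aggregation scheme. This is a hedged illustration, not the authors' code: the three cultural layer names come from the abstract, but the sub-layer names (`sublayer_1` … `sublayer_9`) and the per-sample result format are placeholders I am assuming for the example.

```python
from collections import defaultdict

# Hypothetical taxonomy: the three cultural layers are named in the abstract;
# the nine cognitive sub-layer names are NOT given there, so placeholders are used.
TAXONOMY = {
    "General Multilingual": ["sublayer_1", "sublayer_2", "sublayer_3"],
    "Cross-cultural": ["sublayer_4", "sublayer_5", "sublayer_6"],
    "Monocultural": ["sublayer_7", "sublayer_8", "sublayer_9"],
}

def aggregate(results):
    """Roll per-sample correctness up to sub-layer, then cultural-layer accuracy.

    `results` is an assumed format: a list of dicts like
    {"sublayer": "sublayer_1", "correct": True}.
    Returns a dict mapping each cultural layer to its mean sub-layer accuracy
    (None if no samples fell into that layer).
    """
    sub_scores = defaultdict(list)
    for sample in results:
        sub_scores[sample["sublayer"]].append(sample["correct"])
    layer_scores = {}
    for layer, subs in TAXONOMY.items():
        # Average each populated sub-layer first, then average across sub-layers,
        # so a large sub-layer cannot dominate its cultural layer.
        per_sub = [sum(sub_scores[s]) / len(sub_scores[s])
                   for s in subs if sub_scores[s]]
        layer_scores[layer] = sum(per_sub) / len(per_sub) if per_sub else None
    return layer_scores
```

Averaging per sub-layer before averaging per layer is one plausible way to surface the task-specific gaps the abstract reports; a sample-weighted mean would instead let the largest sub-layer dominate.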