SemBench is a framework for automatically generating synthetic benchmarks that evaluate LLM semantic understanding from dictionary sense definitions and a sentence encoder, removing the need for curated examples. Because the approach is language-independent, it supports evaluation across languages with varying levels of resource availability. Experiments in English, Spanish, and Basque show that SemBench rankings correlate strongly with those from standard Word-in-Context (WiC) datasets while requiring only a small number of examples to stabilize.
LLM semantic understanding can now be evaluated cheaply across languages, thanks to a new framework that synthesizes benchmarks from dictionary sense definitions and sentence encoders.
Recent progress in Natural Language Processing (NLP) has been driven by the emergence of Large Language Models (LLMs), which exhibit remarkable generative and reasoning capabilities. However, despite their success, evaluating the true semantic understanding of these models remains a persistent challenge. Traditional benchmarks such as Word-in-Context (WiC) effectively probe this capability, but they are resource-intensive to create and often limited to high-resource languages. In this paper, we introduce SemBench, a framework for automatically generating synthetic benchmarks that assess the semantic competence of LLMs using only dictionary sense definitions and a sentence encoder. This approach eliminates the need for curated example sentences, making it both scalable and language-independent. We evaluate SemBench in three languages (English, Spanish, and Basque) spanning different levels of resource availability, and across a wide range of LLMs. Our results show that rankings derived from SemBench strongly correlate with those obtained from standard WiC datasets. Furthermore, our analysis demonstrates that only a small number of examples are required to achieve stable and meaningful rankings. Overall, SemBench provides a lightweight, adaptable, and data-efficient framework for cross-lingual evaluation of semantic understanding in LLMs.
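To make the described pipeline concrete, the Python sketch below shows one way a WiC-style benchmark could be assembled from sense definitions and a sentence encoder. It is an illustrative sketch, not the paper's actual implementation: the sense definitions, candidate sentences, the all-MiniLM-L6-v2 encoder, and the closest-definition filtering rule are all assumptions made here for demonstration.

```python
# Minimal sketch of a SemBench-style benchmark generator (assumed design).
# Requires: pip install sentence-transformers
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder could be used

# Dictionary sense definitions for one polysemous target word (illustrative).
senses = {
    "bank/1": "a financial institution that accepts deposits and makes loans",
    "bank/2": "the sloping land alongside a river or lake",
}

# Candidate usage sentences, e.g. produced by an LLM prompted with each
# definition (hypothetical examples; no hand-curated sentences are assumed).
candidates = {
    "bank/1": [
        "She opened a savings account at the bank downtown.",
        "The bank approved his loan application this morning.",
    ],
    "bank/2": [
        "They had a picnic on the grassy bank of the river.",
        "Reeds grew thickly along the muddy bank.",
    ],
}

def keep(sentence: str, intended_sense: str) -> bool:
    """Keep a sentence only if the encoder places it closest to its
    intended sense definition (an assumed filtering rule)."""
    sent_emb = encoder.encode(sentence, convert_to_tensor=True)
    def_embs = encoder.encode(list(senses.values()), convert_to_tensor=True)
    sims = util.cos_sim(sent_emb, def_embs)[0]
    return list(senses)[int(sims.argmax())] == intended_sense

# Filter candidates, then build WiC-style items: pairs of sentences sharing
# a sense are labeled True, cross-sense pairs False.
verified = {s: [c for c in cands if keep(c, s)] for s, cands in candidates.items()}
items = []
for sents in verified.values():
    items += [(a, b, True) for a, b in combinations(sents, 2)]
for (_, a_list), (_, b_list) in combinations(verified.items(), 2):
    items += [(a, b, False) for a in a_list for b in b_list]

# Each item can then be posed to an LLM as a binary WiC question, e.g.
# "Does 'bank' have the same meaning in both sentences?"
for a, b, label in items:
    print(label, "|", a, "|", b)
```

Under these assumptions, benchmark quality rests entirely on the encoder-based filter, which is what lets the pipeline scale to any language with a dictionary and a sentence encoder.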