This paper presents the first comprehensive evaluation of open-source machine translation (MT) systems for Esperanto, comparing rule-based systems, encoder-decoder models, and large language models (LLMs). The authors evaluate translation quality across six language directions involving English, Spanish, Catalan, and Esperanto, using automatic metrics and human evaluation. The NLLB family achieved the best performance, closely followed by fine-tuned compact models and a general-purpose LLM, although human evaluation revealed remaining errors.
Despite its simple grammar, Esperanto translation still poses challenges for LLMs, with NLLB models preferred in only about half of human evaluations.
Esperanto is a widespread constructed language, known for its regular grammar and productive word formation. Although substantial resources are available thanks to its active online community, it remains relatively underexplored in the context of modern machine translation (MT) approaches. In this work, we present the first comprehensive evaluation of open-source MT systems for Esperanto, comparing rule-based systems, encoder-decoder models, and LLMs across model sizes. We evaluate translation quality across six language directions involving English, Spanish, Catalan, and Esperanto, using multiple automatic metrics as well as human evaluation. Our results show that the NLLB family achieves the best performance in all language pairs, followed closely by our trained compact models and a fine-tuned general-purpose LLM. Human evaluation confirms this trend, with NLLB translations preferred in approximately half of the comparisons, although noticeable errors remain. In line with Esperanto's tradition of openness and international collaboration, we release our code and best-performing models publicly.