The paper introduces FORMOSANBENCH, a new benchmark for evaluating LLMs on three endangered, low-resource Formosan languages (Atayal, Amis, and Paiwan) across machine translation, automatic speech recognition (ASR), and text summarization. Evaluated in zero-shot, 10-shot, and fine-tuned settings, existing LLMs underperform substantially relative to high-resource languages, with few-shot learning and fine-tuning yielding only limited gains, underscoring the need for NLP technologies that better support endangered languages.
LLMs struggle markedly on endangered Formosan languages, even with few-shot learning or fine-tuning, revealing a stark performance gap relative to high-resource languages.
While large language models (LLMs) have demonstrated impressive performance across a wide range of natural language processing (NLP) tasks in high-resource languages, their capabilities in low-resource and minority languages remain significantly underexplored. Formosan languages -- a subgroup of Austronesian languages spoken in Taiwan -- are both linguistically rich and endangered, largely due to the sociolinguistic dominance of Mandarin. In this work, we introduce FORMOSANBENCH, the first benchmark for evaluating LLMs on low-resource Austronesian languages. It covers three endangered Formosan languages: Atayal, Amis, and Paiwan, across three core NLP tasks: machine translation, automatic speech recognition (ASR), and text summarization. We assess model performance in zero-shot, 10-shot, and fine-tuned settings using FORMOSANBENCH. Our results reveal a substantial performance gap between high-resource and Formosan languages. Existing LLMs consistently underperform across all tasks, with 10-shot learning and fine-tuning offering only limited improvements. These findings underscore the urgent need for more inclusive NLP technologies that can effectively support endangered and underrepresented languages. We release our datasets and code to facilitate future research in this direction.
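As a rough illustration of how ASR systems are typically scored in benchmarks like this one (the paper's exact metric implementation is not given here, so this is an assumption), character error rate (CER) divides the edit distance between hypothesis and reference transcripts by the reference length. A minimal sketch:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings, single-row DP."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # prev holds dp[i-1][j-1]; dp[j] still holds dp[i-1][j]
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def cer(hypothesis: str, reference: str) -> float:
    """Character error rate: edits needed to turn hypothesis into reference,
    normalized by reference length. Lower is better; 0.0 is a perfect match."""
    if not reference:
        raise ValueError("reference must be non-empty")
    return edit_distance(hypothesis, reference) / len(reference)
```

Lower-resource languages with rich morphology often favor character-level metrics like CER over word error rate, since a single wrong affix otherwise counts as a whole-word error.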