The paper introduces YoNER, a new multi-domain Named Entity Recognition (NER) dataset for the Yorùbá language, comprising 5,000 sentences and 100,000 tokens across five domains. The authors manually annotated the dataset with three entity types (PER, ORG, LOC), achieving an inter-annotator agreement of over 0.70. Benchmarking experiments reveal that African-centric models outperform multilingual models, but cross-domain performance suffers, and a new Yorùbá-specific language model (OyoBERT) shows improved in-domain performance.
Yorùbá NLP gets a boost: a new multi-domain NER dataset and language model reveal the limitations of cross-domain transfer and the power of language-specific pretraining.
Named Entity Recognition (NER) is a foundational NLP task, yet research on Yorùbá has been constrained by limited and domain-specific resources. Existing resources, such as MasakhaNER (a manually annotated news-domain corpus) and WikiAnn (automatically created from Wikipedia), are valuable but restricted in domain coverage. To address this gap, we present YoNER, a new multi-domain Yorùbá NER dataset that extends entity coverage beyond news and Wikipedia. The dataset comprises about 5,000 sentences and 100,000 tokens collected from five domains (Bible, blogs, movies, radio broadcasts, and Wikipedia) and annotated with three entity types, Person (PER), Organization (ORG), and Location (LOC), following CoNLL-style guidelines. Annotation was conducted manually by three native Yorùbá speakers, with an inter-annotator agreement of over 0.70, ensuring high quality and consistency. We benchmark several transformer encoder models in cross-domain experiments with MasakhaNER 2.0, and we also assess the effect of few-shot in-domain data from YoNER and of cross-lingual setups with English datasets. Our results show that African-centric models outperform general multilingual models on Yorùbá, but that cross-domain performance drops substantially, particularly for the blog and movie domains. Closely related formal domains, such as news and Wikipedia, transfer more effectively. In addition, we introduce a new Yorùbá-specific language model (OyoBERT) that outperforms multilingual models in in-domain evaluation. We publicly release the YoNER dataset and pretrained OyoBERT models to support future research on Yorùbá natural language processing.
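The inter-annotator agreement reported above can be illustrated with a small sketch. This is not the authors' code: the token labels below are hypothetical BIO-scheme annotations (the usual encoding for CoNLL-style PER/ORG/LOC tags), and Cohen's kappa is one common agreement measure; the paper does not specify which statistic it used.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa between two equal-length label sequences."""
    assert len(a) == len(b) and a
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each annotator's marginal label distribution
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical token-level labels from two annotators (BIO scheme,
# PER/ORG/LOC entity types, as in CoNLL-style NER annotation)
ann1 = ["B-PER", "I-PER", "O", "B-LOC", "O", "O",     "B-ORG", "O"]
ann2 = ["B-PER", "I-PER", "O", "B-LOC", "O", "B-ORG", "B-ORG", "O"]

print(f"kappa = {cohen_kappa(ann1, ann2):.2f}")  # 0.83, above the 0.70 threshold
```

In practice, kappa would be computed over the full dataset (or per domain) rather than a single sentence; values above 0.70 are conventionally read as substantial agreement.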