This paper investigates the effectiveness of LLM-based generation (Gemini 2.5 Flash) and back-translation (NLLB-200) for data augmentation in low-resource NLP, specifically for Hausa and Fongbe NER and POS tagging. The study finds that augmentation success hinges on task type rather than on language or LLM generation quality alone: NER consistently degrades under augmentation, while POS tagging shows mixed results. The key finding is that task structure plays a larger role than synthetic data quality in determining augmentation outcomes.
Data augmentation with LLMs can tank your NER performance even when it boosts POS tagging, suggesting task structure matters more than synthetic data quality.
Data scarcity limits NLP development for low-resource African languages. We evaluate two data augmentation methods -- LLM-based generation (Gemini 2.5 Flash) and back-translation (NLLB-200) -- for Hausa and Fongbe, two West African languages that differ substantially in LLM generation quality. We assess augmentation on named entity recognition (NER) and part-of-speech (POS) tagging using MasakhaNER 2.0 and MasakhaPOS benchmarks. Our results reveal that augmentation effectiveness depends on task type rather than language or LLM quality alone. For NER, neither method improves over baseline for either language; LLM augmentation reduces Hausa NER by 0.24% F1 and Fongbe NER by 1.81% F1. For POS tagging, LLM augmentation improves Fongbe by 0.33% accuracy, while back-translation improves Hausa by 0.17%; back-translation reduces Fongbe POS by 0.35% and has negligible effect on Hausa POS. The same LLM-generated synthetic data produces opposite effects across tasks for Fongbe -- hurting NER while helping POS -- suggesting task structure governs augmentation outcomes more than synthetic data quality. These findings challenge the assumption that LLM generation quality predicts augmentation success, and provide actionable guidance: data augmentation should be treated as a task-specific intervention rather than a universally beneficial preprocessing step.
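To make the back-translation method concrete, below is a minimal sketch of the round-trip augmentation idea: each source sentence is translated into a pivot language and back, yielding a paraphrase to add to the training set. The translator callables here are hypothetical stubs standing in for NLLB-200 calls (which in practice would run through a model API such as the transformers library); none of the function names come from the paper.

```python
from typing import Callable, List

def back_translate(sentences: List[str],
                   to_pivot: Callable[[str], str],
                   from_pivot: Callable[[str], str]) -> List[str]:
    """Round-trip each sentence through a pivot language to produce paraphrases."""
    return [from_pivot(to_pivot(s)) for s in sentences]

# Hypothetical stand-ins for NLLB-200 translation calls
# (e.g. hau_Latn -> eng_Latn and back); real code would invoke the model.
def stub_to_english(s: str) -> str:
    return s.upper()      # placeholder translation to the pivot language

def stub_from_english(s: str) -> str:
    return s.lower()      # placeholder translation back to the source language

augmented = back_translate(["Ina kwana?"], stub_to_english, stub_from_english)
print(augmented)
```

For token-level tasks like NER and POS tagging, the round trip does not preserve token boundaries or order, so labels must be re-projected onto the paraphrase; that alignment step is one plausible reason span-based NER is more fragile under this kind of augmentation than per-token POS tagging.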