This paper introduces resources to advance LLMs for Macedonian, a low-resource language. The authors collected a 40GB Macedonian corpus, a 106k-instance instruction dataset, and a seven-benchmark evaluation suite. They then trained an 8B-parameter model, domestic-yak, which outperformed existing models in the 8B parameter range and achieved comparable performance to models up to 10x larger, while also demonstrating superior grammatical correctness and cultural appropriateness in qualitative evaluations.
A new 8B-parameter Macedonian LLM, trained on a custom corpus, rivals models 10x its size and is preferred by native speakers for its grammatical correctness and cultural relevance.
Growing technology adoption worldwide creates demand for novel tools usable by the general population. Large Language Models (LLMs) offer a great opportunity in this respect, but their capabilities remain limited for low-resource languages, restricting applications in countries where such languages are spoken. We create several resources to facilitate the adoption of LLMs and to support research advancements for Macedonian. We collect the largest Macedonian corpus to date, consisting of 40GB of textual data and totaling 3.5B words. To support conversational applications, we collect a 106k-instance instruction dataset, carefully built to be culturally grounded. For evaluation, we construct a Macedonian evaluation suite covering seven benchmarks. Finally, we train domestic-yak, a state-of-the-art 8B-parameter model, on our curated datasets and evaluate it against eight baseline models using the newly constructed benchmark suite. Our model outperforms all existing models in the 8B parameter range across all benchmarks, and achieves performance comparable to models up to 10x larger. Furthermore, a qualitative analysis with native speakers reveals that our model is preferred over larger counterparts, receiving higher ratings for grammatical correctness and cultural appropriateness. All datasets, code, and model weights are openly released, setting a foundation for advancing LLMs in similarly underrepresented languages. These resources are publicly available at github.com/LVSTCK for source code, and at huggingface.co/LVSTCK for pretrained model weights and data.