This paper introduces Diffutron, a masked diffusion language model tailored for Turkish, addressing the gap in MDLMs for morphologically rich languages. The authors use LoRA-based continual pre-training of a multilingual encoder, followed by progressive instruction tuning on general and task-specific datasets. Experiments show that Diffutron achieves competitive performance against much larger models on Turkish benchmarks, demonstrating the efficacy of the approach.
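The first stage of the pipeline, LoRA-based continual pre-training, freezes the multilingual encoder and trains only low-rank adapter matrices. Below is a minimal sketch using the Hugging Face peft library; the base checkpoint, rank, and target modules are illustrative assumptions, not the paper's reported configuration.

```python
# Hypothetical LoRA continual pre-training setup; the checkpoint and
# hyperparameters below are illustrative, not the paper's configuration.
from transformers import AutoModelForMaskedLM
from peft import LoraConfig, get_peft_model

base = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")  # stand-in multilingual encoder
lora = LoraConfig(
    r=16,                               # low-rank adapter dimension
    lora_alpha=32,                      # scaling factor for the adapter update
    target_modules=["query", "value"],  # attention projections in RoBERTa-style encoders
    lora_dropout=0.05,
)
model = get_peft_model(base, lora)      # wraps the frozen base with trainable adapters
model.print_trainable_parameters()      # only a small fraction of weights are updated
```

Continual pre-training then proceeds as an ordinary masked-LM loop over the Turkish corpus, with gradients flowing only into the adapters.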
A compact masked diffusion model can rival multi-billion-parameter models in a morphologically rich language like Turkish, challenging the assumption that bigger is always better.
Masked Diffusion Language Models (MDLMs) have emerged as a compelling non-autoregressive alternative to standard large language models; however, their application to morphologically rich languages remains limited. In this paper, we introduce $\textit{Diffutron}$, a masked diffusion language model specifically designed for Turkish. Our approach leverages a resource-efficient training pipeline, starting with LoRA-based continual pre-training of a multilingual encoder on a large-scale corpus. To enable generative capabilities, we employ a progressive instruction-tuning strategy, sequentially adapting the model on general and task-specific instruction sets. Experimental results across comprehensive benchmarks demonstrate that, despite its compact size, our model achieves competitive performance compared to existing multi-billion-parameter baselines. These findings validate the effectiveness of masked diffusion modeling combined with multi-stage tuning for non-autoregressive text generation in Turkish.
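The abstract does not spell out the training objective, but standard masked diffusion LM training reduces to randomized masked-token prediction: sample a masking level t, corrupt that fraction of tokens, and reconstruct them with a weighted cross-entropy. The sketch below assumes a Hugging Face-style encoder with a token-level LM head and a linear masking schedule; it is a minimal illustration of the general technique, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def masked_diffusion_step(model, input_ids, mask_token_id):
    """One hypothetical MDLM training step: mask a random fraction t of
    tokens and score their reconstruction with 1/t-weighted cross-entropy
    (the ELBO weighting under a linear masking schedule)."""
    batch, seq_len = input_ids.shape
    t = torch.rand(batch, 1, device=input_ids.device).clamp(min=1e-3)  # per-sample masking level
    mask = torch.rand(batch, seq_len, device=input_ids.device) < t     # positions to corrupt
    noisy = input_ids.masked_fill(mask, mask_token_id)

    logits = model(input_ids=noisy).logits                             # (batch, seq, vocab)
    ce = F.cross_entropy(
        logits.view(-1, logits.size(-1)), input_ids.view(-1), reduction="none"
    ).view(batch, seq_len)
    # Only masked positions contribute; dividing by t upweights lightly
    # corrupted samples, matching the continuous-time MDLM bound.
    return (ce * mask / t).sum() / mask.sum().clamp(min=1)
```

At inference time this process runs in reverse: generation starts from an all-mask sequence and iteratively unmasks predicted tokens over several steps, which is what gives MDLMs their non-autoregressive, parallel decoding.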