This paper introduces an ensemble self-training method for unsupervised neural machine translation (UNMT) that leverages structured diversity by training multiple models with different auxiliary languages. Token-level ensemble decoding is used to generate pseudo-translations for the primary language pair, which are then used to fine-tune individual models. The approach achieves statistically significant improvements over single-model UNMT baselines, demonstrating the effectiveness of shared supervision in unsupervised translation.
Ensemble self-training with diverse auxiliary languages boosts unsupervised machine translation by a mean of up to 1.7 chrF, showing that shared supervision can mitigate the limitations of single-model approaches.
We present an ensemble-driven self-training framework for unsupervised neural machine translation (UNMT). Starting from a primary language pair, we train multiple UNMT models that share the same translation task but differ in an auxiliary language, inducing structured diversity across models. We then generate pseudo-translations for the primary pair using token-level ensemble decoding, averaging model predictions in both directions. These ensemble outputs are used as synthetic parallel data to further train each model, allowing the models to improve via shared supervision. At deployment time, we select a single model by validation performance, preserving single-model inference cost. Experiments show statistically significant improvements over single-model UNMT baselines, with mean gains of 1.7 chrF when translating from English and 0.67 chrF when translating into English.
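The token-level ensemble decoding step — averaging the models' next-token distributions at each position — can be sketched as follows. This is a minimal illustrative greedy-search version; the model interface, identifiers, and the use of greedy rather than beam search are assumptions for exposition, not the paper's implementation:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a logit vector."""
    x = np.asarray(logits, dtype=float)
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def ensemble_greedy_decode(models, src_tokens, bos_id, eos_id, max_len=64):
    """Token-level ensemble decoding (sketch).

    At each step, every model maps (source, prefix) to a logit vector over
    the shared vocabulary; the per-model probability distributions are
    averaged, and the argmax token extends the hypothesis.
    """
    out = [bos_id]
    for _ in range(max_len):
        # Average the next-token distributions across all ensemble members.
        avg_probs = np.mean(
            [softmax(m(src_tokens, out)) for m in models], axis=0
        )
        next_tok = int(np.argmax(avg_probs))
        out.append(next_tok)
        if next_tok == eos_id:
            break
    return out
```

In the framework described above, hypotheses decoded this way from monolingual text would serve as the synthetic parallel data used to fine-tune each individual model.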