This paper introduces a semi-synthetic parallel dataset for English-to-Hebrew Quality Estimation (QE) designed to address the challenges of under-resourced language pairs. The dataset was created by generating English sentences based on typical linguistic patterns, translating them to Hebrew using multiple MT engines, filtering outputs via BLEU, and then manually evaluating and scoring the translations. Neural QE models, including BERT and XLM-R, were trained on this dataset, and the study analyzes the impact of dataset characteristics like size, balance, and error distribution on QE performance.
A novel semi-synthetic dataset reveals the critical impact of dataset size, balance, and error distribution on the performance of neural quality estimation models for under-resourced languages.
Quality estimation (QE) plays a crucial role in machine translation (MT) workflows, as it evaluates generated outputs that have no reference translations and determines whether human post-editing or full retranslation is necessary. Yet developing highly accurate, adaptable, and reliable QE systems for under-resourced language pairs remains largely unsolved, due mainly to limited parallel corpora and to diverse language-dependent factors such as complex morphosyntax. This study presents a semi-synthetic parallel dataset for English-to-Hebrew QE, generated by creating English sentences based on usage examples that illustrate typical linguistic patterns, translating them into Hebrew using multiple MT engines, and filtering the outputs via BLEU-based selection. Each translated segment was manually evaluated and scored by a linguist, and we also incorporated professionally translated English-Hebrew segments from our own resources, which were assigned the highest quality score. Controlled translation errors were introduced to address linguistic challenges, particularly gender and number agreement, and we trained neural QE models, including BERT and XLM-R, on this dataset to assess sentence-level MT quality. Our findings highlight the impact of dataset size, label balance, and error distribution on model performance. We describe the challenges, methodology, and results of our experiments and outline future directions aimed at improving QE performance. This research contributes to advancing QE models for under-resourced language pairs, including morphologically rich languages.
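Two of the automatic steps described above, BLEU-based selection among candidate MT outputs and sentence-level QE scoring with a pretrained multilingual encoder, can be illustrated with a minimal sketch. It assumes sacrebleu and Hugging Face transformers; the BLEU threshold, the choice of xlm-roberta-base, and the helper names are illustrative placeholders rather than the authors' implementation.

```python
# Illustrative sketch (not the paper's code): BLEU-based candidate
# filtering and sentence-level QE scoring with an XLM-R regression head.
import sacrebleu
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def select_by_bleu(candidates, reference, threshold=30.0):
    """Keep MT candidates whose sentence-level BLEU against a reference
    translation exceeds a hypothetical threshold, highest-scoring first."""
    kept = []
    for hyp in candidates:
        score = sacrebleu.sentence_bleu(hyp, [reference]).score
        if score >= threshold:
            kept.append((hyp, score))
    return sorted(kept, key=lambda pair: pair[1], reverse=True)


# Sentence-level QE framed as regression: the English source and its Hebrew
# MT output are encoded as a text pair, and a single-logit head predicts
# the quality score (fine-tuning on the scored dataset is omitted here).
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=1  # one output unit -> regression
)


def qe_score(source, translation):
    """Return a raw quality estimate for a (source, MT output) pair."""
    inputs = tokenizer(source, translation, truncation=True,
                       padding=True, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits.squeeze(-1).item()
```

A BERT-based variant would differ only in the checkpoint name passed to the same classes; the text-pair encoding and single-logit regression head stay the same.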