This paper explores using GPT-4o to generate MQM-style annotations for training COMET models for Machine Translation Quality Estimation (MTQE). The authors introduce a simplified MQM scheme and a GPT-4o prompt called PPbMQM to produce segment-level annotations. Training COMET on these LLM-generated annotations achieves competitive performance on segment-level QE for Chinese-English and English-German translation tasks.
Forget expensive LLM inference for MTQE: train a COMET model on GPT-4o-generated annotations and get competitive performance.
Large Language Models (LLMs) have demonstrated excellent performance on Machine Translation Quality Estimation (MTQE), yet their high inference costs make them impractical for direct application. In this work, we propose applying LLMs to generate MQM-style annotations for training a COMET model: following Fernandes et al. (2023), we argue that span-level error annotations provide a strong rationale for LLMs and are key to good segment-level QE. We propose a simplified MQM scheme, mostly restricted to top-level categories, to guide the LLM's error selection. We present a systematic approach to developing a GPT-4o-based prompt, called PPbMQM (Prompt-Pattern-based-MQM). We show that the resulting annotations correlate well with human annotations and that training COMET on them leads to competitive performance on segment-level QE for Chinese-English and English-German.
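To make the pipeline concrete, MQM-style annotation lists can be collapsed into a single segment-level score by summing severity-weighted penalties, which is the kind of supervision signal a COMET regressor is then trained on. The sketch below is illustrative only: the severity weights shown follow a common MQM convention (minor = 1, major = 5), and the field names and exact weighting are assumptions, not the paper's specification.

```python
# Illustrative sketch: convert a list of MQM-style error annotations
# (e.g., as produced by an LLM prompt) into one segment-level penalty.
# Weights are a common MQM convention, assumed here for demonstration.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_segment_score(errors):
    """Sum severity-weighted penalties for one segment; higher = worse."""
    return sum(SEVERITY_WEIGHTS[e["severity"]] for e in errors)

# Hypothetical annotations for a single translated segment.
annotations = [
    {"category": "accuracy/mistranslation", "severity": "major"},
    {"category": "fluency/grammar", "severity": "minor"},
]

print(mqm_segment_score(annotations))  # 5 + 1 = 6
```

A regression model such as COMET would then be trained to predict these (possibly normalized) scores directly from the source and hypothesis text, avoiding LLM inference at test time.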