This paper investigates two-step fine-tuning of the Transformer-T5 model for abstractive summarization of Indonesian, a low-resource language. The approach leverages transfer learning by first fine-tuning T5 on a machine translation task and then fine-tuning the resulting model on text summarization. Experiments demonstrate that this two-step fine-tuning (T5-MT-SUM) achieves superior ROUGE scores compared to zero-shot and single-step fine-tuned baselines, highlighting the benefits of task transferability.
Two-step fine-tuning (translation then summarization) unlocks surprisingly strong abstractive summarization in low-resource languages.
This study explores the potential of two-step fine-tuning for abstractive summarization in a low-resource language, focusing on Indonesian. Leveraging the Transformer-T5 model, the research investigates the impact of transfer learning across two tasks: machine translation and text summarization. Four configurations were evaluated, ranging from zero-shot to two-step fine-tuned models. The evaluation, conducted using the ROUGE metric, shows that the two-step fine-tuned model (T5-MT-SUM) achieved the best performance, with ROUGE-1: 0.7126, ROUGE-2: 0.6416, and ROUGE-L: 0.6816, outperforming all baselines. These findings demonstrate the effectiveness of task transferability in improving abstractive summarization performance for low-resource languages like Indonesian. This study provides a pathway for advancing natural language processing (NLP) in low-resource languages through two-step transfer learning.
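The core idea, warm-starting the target task from weights already adapted on a related task rather than from a generic initialization, can be illustrated with a deliberately tiny sketch. The example below is not the paper's T5 pipeline; it uses a toy linear model in NumPy, where "task A" stands in for machine translation and the related "task B" for summarization, to show why two-step fine-tuning can beat single-step training under the same fine-tuning budget. All names and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w, lr=0.1, steps=200):
    # Plain gradient descent on mean-squared error.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

d = 20
X = rng.normal(size=(200, d))

# Toy "task A" (stands in for machine translation) and a related
# "task B" (stands in for summarization): their optimal weights are
# close, mimicking transferable representations between the tasks.
w_a_true = rng.normal(size=d)
w_b_true = w_a_true + 0.1 * rng.normal(size=d)
y_a = X @ w_a_true
y_b = X @ w_b_true

# Step 1: fit task A starting from a random init (the first fine-tuning).
w_after_a = train(X, y_a, rng.normal(size=d))

# Step 2 (two-step, analogous to T5-MT-SUM): warm-start task B from the
# task-A weights, with a small fine-tuning budget.
w_two_step = train(X, y_b, w_after_a, steps=20)

# Baseline (single-step): task B from a fresh random init, same budget.
w_single = train(X, y_b, rng.normal(size=d), steps=20)

print(mse(X, y_b, w_two_step), mse(X, y_b, w_single))
```

Because the two tasks share most of their structure, the warm-started model begins near the task-B optimum and reaches a lower error within the same number of steps, which is the intuition behind the T5-MT-SUM configuration's gains.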