This paper introduces DRCR, a framework for multi-party dialogue generation that uses context rewriting guided by discourse coherence and response quality. The framework leverages preference data constructed from these two feedback signals to train both a context rewriter and a response generator. A dynamic self-evolution learning method iteratively improves the rewriter and responder through mutual interaction, leading to improved performance on four multi-party dialogue datasets.
Rewriting dialogue context based on both discourse coherence *and* response quality substantially improves multi-party dialogue generation, outperforming methods relying solely on dialogue structure.
Previous research on multi-party dialogue generation has predominantly leveraged structural information inherent in dialogues to directly inform the generation process. However, the colloquial expressions and incomplete utterances prevalent in dialogues often impede comprehension and weaken the fidelity of dialogue structure representations, a problem that is particularly pronounced in the multi-party setting. In this work, we propose DRCR (Discourse coherence and Response-guided Context Rewriting), a novel framework that improves multi-party dialogue generation through dialogue context rewriting. Specifically, DRCR employs two complementary feedback signals, discourse coherence and response quality, to construct preference data for both context rewriting and response generation. Moreover, we propose a dynamic self-evolution learning method that allows the rewriter and responder to continuously enhance their capabilities through mutual interaction in an iterative training loop. Comprehensive experiments on four multi-party dialogue datasets substantiate the effectiveness of DRCR.
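To make the training loop concrete, the sketch below illustrates one round of the preference-based interaction the abstract describes: candidate context rewrites are ranked by a discourse-coherence signal, and candidate responses by a response-quality signal, yielding (preferred, rejected) pairs that would drive preference training. All function names and both scoring heuristics here are illustrative stand-ins, not the paper's actual models or metrics.

```python
# Minimal sketch of a DRCR-style self-evolution round. The scorers below are
# toy proxies: the paper uses learned feedback signals, not these heuristics.

def discourse_coherence(context: str) -> float:
    # Toy proxy: fraction of utterances that explicitly address a speaker.
    turns = context.split(" | ")
    return sum("@" in t for t in turns) / max(len(turns), 1)

def response_quality(response: str) -> float:
    # Toy proxy: reward longer, non-trivial responses (capped at 1.0).
    return min(len(response.split()) / 10.0, 1.0)

def build_preference_pair(candidates, score):
    """Rank candidates by a feedback signal; return (preferred, rejected)."""
    ranked = sorted(candidates, key=score, reverse=True)
    return ranked[0], ranked[-1]

def self_evolution_round(context_candidates, respond):
    """One iteration: select a rewrite by coherence, then a response by quality.

    In the full method, the preference pairs produced here would be used to
    further train the rewriter and responder, and the loop would repeat.
    """
    best_context, _ = build_preference_pair(context_candidates, discourse_coherence)
    response_candidates = respond(best_context)
    best_response, _ = build_preference_pair(response_candidates, response_quality)
    return best_context, best_response
```

For example, given two candidate rewrites and a stub responder, the round keeps the rewrite whose turns are explicitly addressed and the more substantive response, mirroring how the two signals jointly guide rewriting and generation.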