This paper explores methods for automatically evaluating open-ended dialogue systems, focusing on predicting dimension-specific scores at the dialogue level within the DSTC-12 Track 1 challenge. The authors investigated both language model prompting and training smaller encoder-based classification/regression models (under 13B parameters) to predict dialogue quality. While LM prompting achieved only moderate correlation with human judgments, it still ranked second on the test set; the smaller regression/classification models showed high correlation on the validation set for some dimensions, though their performance decreased on the test set due to distributional shift.
Even under a sub-13B-parameter constraint, language model prompting surprisingly rivals traditional methods for evaluating open-ended conversations, highlighting its potential as a lightweight dialogue evaluation technique.
The growing number of generative AI-based dialogue systems has made their evaluation a crucial challenge. This paper presents our contribution to this problem through the Dialogue System Technology Challenge (DSTC-12, Track 1), where we developed models to predict dialogue-level, dimension-specific scores. Given the constraint of using relatively small models (i.e., fewer than 13 billion parameters), our work follows two main strategies: employing Language Models (LMs) as evaluators through prompting, and training encoder-based classification and regression models. Our results show that while LM prompting achieves only modest correlations with human judgments, it still ranks second on the test set, outperformed only by the baseline. The regression and classification models, with significantly fewer parameters, demonstrate high correlation for some dimensions on the validation set. Although their performance decreases on the test set, this drop is partly explained by the test set containing annotations with significantly different score ranges for some dimensions relative to the train and validation sets.