This paper introduces a multi-domain dialogue system that dynamically integrates information from DBpedia to improve response generation. The system addresses the limitations of static knowledge sources by retrieving relevant information from DBpedia when needed, updating its knowledge base, and incorporating the new data into the response. The system employs transformer-based LLMs for response generation and demonstrates improved performance on standard benchmark datasets and a custom DBpedia dataset.
LLM-powered dialogue systems can overcome static knowledge limitations by dynamically querying and integrating information from knowledge graphs like DBpedia, leading to improved response generation.
Recently, with the rapid advancement of Large Language Models (LLMs), applications of Natural Language Generation (NLG) have garnered significant attention. NLG is used across multiple domains, including machine translation, text summarization, and dialogue systems. Dialogue systems facilitate communication between humans and machines, and they are typically classified into task-oriented and open-domain systems. Enabling dialogue systems to interact in a human-like manner requires large volumes of training data, yet these systems often lack the reasoning capabilities needed for more intelligent interactions. To address this limitation, researchers have explored integrating external knowledge into dialogue systems, which has led to notable improvements in performance. Nevertheless, external knowledge sources are frequently static: when relevant information is missing, the system may fail to generate an appropriate response. This paper proposes a multi-domain dialogue system that leverages publicly available knowledge graphs, such as DBpedia, to enhance response generation within predefined domains. When the system encounters a query for which it lacks the necessary knowledge, it dynamically retrieves the relevant information from DBpedia, updates its external knowledge base, and incorporates the new data into the response generation process. The system utilizes transformer-based LLMs for generating responses and has been evaluated on two standard benchmark datasets as well as a custom dataset derived from DBpedia. Experimental results demonstrate notable improvements in performance.
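The retrieve-update-generate loop described above can be sketched minimally: query DBpedia's public SPARQL endpoint for a missing fact, then fold the retrieved text into the generation prompt. This is an illustrative sketch, not the paper's implementation — the function names, the use of `dbo:abstract` as the retrieved property, and the prompt format are all assumptions.

```python
# Hedged sketch of dynamic DBpedia retrieval for a dialogue system.
# Assumptions (not from the paper): DBpedia's public SPARQL endpoint,
# the dbo:abstract property, and an illustrative prompt layout.
import json
import urllib.parse
import urllib.request

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"  # public SPARQL endpoint


def build_abstract_query(entity: str, lang: str = "en") -> str:
    """SPARQL query fetching the abstract of a DBpedia resource."""
    return (
        "SELECT ?abstract WHERE { "
        f"<http://dbpedia.org/resource/{entity}> "
        "<http://dbpedia.org/ontology/abstract> ?abstract . "
        f'FILTER (lang(?abstract) = "{lang}") }}'
    )


def parse_abstracts(results_json: dict) -> list:
    """Extract abstract strings from a SPARQL JSON results document."""
    return [b["abstract"]["value"]
            for b in results_json["results"]["bindings"]]


def fetch_abstracts(entity: str) -> list:
    """Query the live endpoint (network required; used at dialogue time
    when the local knowledge base lacks the entity)."""
    url = DBPEDIA_ENDPOINT + "?" + urllib.parse.urlencode({
        "query": build_abstract_query(entity),
        "format": "application/sparql-results+json",
    })
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_abstracts(json.load(resp))


def knowledge_grounded_prompt(question: str, facts: list) -> str:
    """Fold retrieved facts into the LLM prompt (format is an assumption)."""
    context = "\n".join(f"- {f}" for f in facts)
    return f"Known facts:\n{context}\n\nUser: {question}\nAssistant:"
```

In this sketch the retrieved abstracts would also be written back to the system's external knowledge base, so subsequent turns can reuse them without re-querying the endpoint.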