The authors introduce MedDialogRubrics, a new benchmark for evaluating multi-turn diagnostic capabilities of LLMs in medical consultations, consisting of 5,200 synthetic patient cases and 60,000 fine-grained evaluation rubrics. They use a multi-agent system with a Patient Agent augmented with a dynamic guidance mechanism to generate realistic patient records while mitigating privacy concerns. Evaluation of state-of-the-art models on MedDialogRubrics reveals significant challenges, suggesting that improvements in medical dialogue require advances in dialogue management architectures beyond simple model tuning.
Current LLMs still struggle with multi-turn medical diagnosis, requiring advances in dialogue management rather than just incremental tuning of the base model.
Medical conversational AI plays a pivotal role in the development of safer and more effective medical dialogue systems. However, existing benchmarks and evaluation frameworks for assessing the information-gathering and diagnostic reasoning abilities of medical large language models (LLMs) remain insufficiently rigorous. To address these gaps, we present MedDialogRubrics, a novel benchmark comprising 5,200 synthetically constructed patient cases and over 60,000 fine-grained evaluation rubrics generated by LLMs and subsequently refined by clinical experts, specifically designed to assess the multi-turn diagnostic capabilities of LLMs. Our framework employs a multi-agent system to synthesize realistic patient records and chief complaints from underlying disease knowledge without accessing real-world electronic health records, thereby mitigating privacy and data-governance concerns. We design a robust Patient Agent that is limited to a set of atomic medical facts and augmented with a dynamic guidance mechanism that continuously detects and corrects hallucinations throughout the dialogue, ensuring internal coherence and clinical plausibility of the simulated cases. Furthermore, we propose a structured, LLM-based, expert-annotated rubric-generation pipeline that retrieves Evidence-Based Medicine (EBM) guidelines and uses rejection sampling to derive a prioritized set of rubric items ("must-ask" items) for each case. We perform a comprehensive evaluation of state-of-the-art models and demonstrate that, across multiple assessment dimensions, current models face substantial challenges. Our results indicate that improving medical dialogue will require advances in dialogue management architectures, not just incremental tuning of the base model.
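The idea of a Patient Agent restricted to atomic medical facts can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's actual system: the class name, fact representation, and refusal behavior are all illustrative assumptions. The key property it demonstrates is that every answer is grounded in a fixed fact set, so a question outside that set yields a grounded refusal rather than a hallucinated detail.

```python
# Hypothetical sketch of a fact-bounded Patient Agent (names and structure
# are illustrative, not the paper's implementation).
from dataclasses import dataclass, field

@dataclass
class PatientAgent:
    # The atomic medical facts this simulated patient is allowed to reveal.
    facts: dict
    # Which facts the doctor model has elicited so far.
    revealed: set = field(default_factory=set)

    def answer(self, asked_symptom: str) -> str:
        """Answer only from the stored atomic facts; never invent new ones."""
        if asked_symptom in self.facts:
            self.revealed.add(asked_symptom)
            return self.facts[asked_symptom]
        # Guidance step: questions outside the fact set get a fixed,
        # grounded denial instead of a fabricated symptom description.
        return "No, I have not experienced that."

agent = PatientAgent(facts={
    "fever": "Yes, for 3 days, up to 39C.",
    "cough": "Yes, a dry cough.",
})
print(agent.answer("fever"))       # grounded answer from the fact set
print(agent.answer("chest pain"))  # grounded refusal, no hallucination
```

In the full framework, the dynamic guidance mechanism described in the abstract would additionally check the agent's free-form responses against the fact set during the dialogue; the sketch collapses that check into a simple dictionary lookup.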