The paper introduces MedImageEdu, a new benchmark for multi-turn, evidence-grounded radiology patient education involving interactions between a DoctorAgent and a PatientAgent with varying profiles. The DoctorAgent uses a drawing tool to provide visual support based on radiology reports and images, generating multimodal responses. Experiments with various vision-language models on MedImageEdu reveal gaps in visual grounding, safety, and handling emotionally tense interactions, highlighting the challenge of teaching from evidence.
Current vision-language models can generate fluent medical explanations but struggle to ground them in the relevant visual evidence, especially in emotionally charged interactions.
Most medical multimodal benchmarks focus on static tasks such as image question answering, report generation, and plain-language rewriting. Patient education is more demanding: systems must identify relevant evidence across images, show patients where to look, explain findings in accessible language, and handle confusion or distress. Yet most patient education work remains text-only, even though combined image-and-text explanations may better support understanding. We introduce MedImageEdu, a benchmark for multi-turn, evidence-grounded radiology patient education. Each case provides a radiology report and the associated case images. A DoctorAgent interacts with a PatientAgent that is conditioned on a hidden profile capturing factors such as education level, health literacy, and personality. When a patient question would benefit from visual support, the DoctorAgent can issue drawing instructions, grounded in the report, the case images, and the current question, to a benchmark-provided drawing tool. The tool returns one or more images, after which the DoctorAgent produces a final multimodal response consisting of those images and a grounded plain-language explanation. MedImageEdu contains 150 cases from three sources and evaluates both the consultation process and the final multimodal response along five dimensions: Consultation, Safety and Scope, Language Quality, Drawing Quality, and Image-Text Response Quality. Across representative open- and closed-source vision-language model agents, we find three consistent gaps: fluent language often outpaces faithful visual grounding, safety is the weakest dimension across disease categories, and emotionally tense interactions are harder than low education or low health literacy. MedImageEdu provides a controlled testbed for assessing whether multimodal agents can teach from evidence rather than merely answer from text.
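The interaction loop described above (patient question, optional drawing-tool call, grounded multimodal reply) can be sketched schematically. This is a minimal illustration, not the benchmark's actual interface: every class, function, and string below (`Case`, `PatientProfile`, `patient_agent`, `drawing_tool`, `doctor_agent`, `run_consultation`) is a hypothetical stand-in, and the "needs visual support" heuristic is a placeholder for a model's own decision.

```python
from dataclasses import dataclass

# Illustrative sketch only: all names and interfaces are assumptions,
# not the benchmark's real API.

@dataclass
class Case:
    report_text: str
    image_ids: list          # identifiers for the case images

@dataclass
class PatientProfile:        # hidden from the DoctorAgent
    education: str           # e.g. "low", "high"
    health_literacy: str
    personality: str         # e.g. "anxious", "calm"

def patient_agent(profile, turn):
    # Hypothetical patient: asks questions shaped by its hidden profile.
    if profile.personality == "anxious" and turn == 0:
        return "Is this finding on my scan something serious?"
    return "Can you show me where the finding is?"

def drawing_tool(instructions, image_ids):
    # Stand-in for the benchmark-provided tool: returns annotated images.
    return [f"{img}+annotated" for img in image_ids]

def doctor_agent(case, question):
    # Decide whether visual support would help, then ground the reply
    # in the report and case images (a real agent would reason here).
    needs_visual = "show" in question or "where" in question
    images = (drawing_tool(f"mark the finding in: {case.report_text}",
                           case.image_ids)
              if needs_visual else [])
    text = f"In plain terms: {case.report_text}"
    if images:
        text += " I have marked it on the image for you."
    return {"text": text, "images": images}

def run_consultation(case, profile, turns=2):
    transcript = []
    for t in range(turns):
        question = patient_agent(profile, t)
        answer = doctor_agent(case, question)
        transcript.append((question, answer))
    return transcript
```

Under this sketch, a consultation over a toy case produces a transcript of (question, multimodal answer) pairs, where only turns that ask for visual support trigger the drawing tool.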