This paper evaluates the ability of large language models (LLMs) to detect errors in complex clinical documentation in oncology, comparing their performance with that of human experts. The study used synthetic clinical vignettes and discharge summaries containing controlled errors to benchmark LLMs (GPT o4-mini, Gemini 2.5 Pro, and Gemma 3 27B) against human clinicians on error detection and localization. Frontier LLMs, particularly Gemini 2.5 Pro, significantly outperformed human specialists at identifying errors, suggesting their potential to enhance patient safety and reduce clinician workload.
LLMs spot roughly twice as many critical errors in oncology documentation as human specialists, hinting at a future where AI acts as a failsafe in high-stakes medical settings.
PURPOSE
In high-risk specialties such as oncology, errors in clinical documentation can have severe consequences, highlighting the need for enhanced safety checks. We therefore aimed to evaluate the capability of frontier large language models (LLMs) to identify and correct errors in complex oncology clinical documentation.

METHODS
We conducted a two-phase evaluation. First, we assessed LLMs (GPT o4-mini and Gemini 2.5 Pro) on 1,000 synthetic hematology/oncology clinical vignettes containing controlled errors, benchmarking against human expert data on error-flag detection and sentence localization. Second, we evaluated the advanced LLMs and a local LLM (Gemma 3 27B) against six clinicians in detecting single, predefined, clinically relevant errors, such as a wrong risk classification or the omission of a critical medication, within 90 synthetic discharge summaries of oncologic patients.

RESULTS
LLMs outperformed the human benchmark on both the error-flag and sentence-localization tasks, with Gemini 2.5 Pro achieving top accuracies of 0.928 and 0.915, respectively. Results were robust across subgroups and scaled well, with simultaneous processing of up to 50 vignettes. Within the complex discharge summaries, Gemini 2.5 Pro and GPT o4-mini-high identified 97.8% and 87.8% of injected errors, respectively, substantially exceeding the 47.8% average detection rate of human specialists. Gemma 3 27B detected 35.6% of errors. Analysis of error-detection overlap revealed synergistic potential for hybrid human-artificial intelligence (AI) systems.

CONCLUSION
Frontier LLMs exhibit superior error-detection capability and speed compared with both local models and inherently time-constrained human specialists. Although synthetic data provide a controlled testbed, real-world evaluation across diverse error types and documentation styles remains critical.
Advanced LLMs can serve as powerful assistants for clinical documentation review, substantially reducing the risk of oversight and clinician workload. Integrating LLM-driven error flagging into electronic health record workflows offers a promising strategy for enhancing documentation accuracy, treatment quality, and patient safety in oncology.
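To make the two headline metrics concrete, the following is a minimal sketch of how per-vignette error-flag accuracy and sentence-localization accuracy could be scored against ground truth. The function names, data layout, and toy values are illustrative assumptions, not taken from the paper's actual evaluation code.

```python
def flag_accuracy(predictions, labels):
    """Fraction of vignettes where the model's has-error flag matches
    the ground-truth label (True = vignette contains an injected error)."""
    return sum(p == t for p, t in zip(predictions, labels)) / len(labels)


def localization_accuracy(pred_sentences, true_sentences):
    """Fraction of vignettes where the model points at the correct
    sentence index (None = model predicts no erroneous sentence)."""
    return sum(p == t for p, t in zip(pred_sentences, true_sentences)) / len(true_sentences)


# Toy example with 5 vignettes, 3 of which contain an injected error.
labels = [True, True, False, True, False]
preds = [True, False, False, True, False]
print(flag_accuracy(preds, labels))  # 0.8
```

A reported accuracy of 0.928 on 1,000 vignettes would correspond to 928 correctly flagged cases under this kind of per-item scoring.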