This paper introduces HIVMedQA, a new benchmark dataset for evaluating large language models (LLMs) in HIV medical decision support, comprising clinically relevant questions developed with physician input. The authors evaluated ten LLMs (seven general-purpose, three medically specialized) using prompt engineering and a novel evaluation framework that combines lexical similarity with an LLM-as-a-judge approach tailored to clinical relevance. Gemini 2.5 Pro performed best, but performance declined as question complexity increased, and medically fine-tuned models did not consistently outperform general-purpose ones, highlighting persistent challenges in reasoning, comprehension, and cognitive bias.
Despite the hype, medically fine-tuned LLMs don't always beat general-purpose models on HIV medical decision support, and bigger isn't always better.
Large language models (LLMs) are emerging as valuable tools to support clinicians in routine decision-making. HIV management is a compelling use case due to its complexity, including diverse treatment options, comorbidities, and adherence challenges. However, integrating LLMs into clinical practice raises concerns about accuracy, potential harm, and clinician acceptance. Despite their promise, AI applications in HIV care remain underexplored, and LLM benchmarking studies are scarce. This study evaluates the current capabilities of LLMs in HIV management, highlighting their strengths and limitations. We introduce HIVMedQA, a benchmark designed to assess open-ended medical question answering in HIV care. The dataset consists of curated, clinically relevant questions developed with input from an infectious disease physician. We evaluated seven general-purpose and three medically specialized LLMs, applying prompt engineering to enhance performance. Our evaluation framework incorporates both lexical similarity and an LLM-as-a-judge approach, extended to better reflect clinical relevance. We assessed performance across key dimensions: question comprehension, reasoning, knowledge recall, bias, potential harm, and factual accuracy. Results show that Gemini 2.5 Pro consistently outperformed other models across most dimensions. Notably, two of the top three models were proprietary. Performance declined as question complexity increased. Medically fine-tuned models did not always outperform general-purpose ones, and larger model size was not a reliable predictor of performance. Reasoning and comprehension were more challenging than factual recall, and cognitive biases such as recency and status quo were observed. These findings underscore the need for targeted development and evaluation to ensure safe, effective LLM integration in clinical care.
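The two-part evaluation the abstract describes, a lexical similarity score combined with an LLM-as-a-judge rating, could be sketched as below. This is a minimal illustration, not the paper's actual implementation: the token-level F1 metric is one common choice of lexical similarity for open-ended QA, the `judge_score` function is a stub standing in for a real model-graded call, and the 50/50 weighting is an assumption.

```python
def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, a common lexical similarity metric for open-ended QA."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    # Count overlapping tokens, respecting multiplicity in the reference.
    ref_counts: dict[str, int] = {}
    for t in ref_tokens:
        ref_counts[t] = ref_counts.get(t, 0) + 1
    common = 0
    for t in pred_tokens:
        if ref_counts.get(t, 0) > 0:
            common += 1
            ref_counts[t] -= 1
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


def judge_score(prediction: str, reference: str) -> float:
    """Stub for an LLM-as-a-judge call that would grade dimensions such as
    comprehension, reasoning, knowledge recall, and potential harm.
    A real implementation would prompt a judge model and parse its rating."""
    return 0.8  # placeholder rating on a 0-1 scale


def combined_score(prediction: str, reference: str, w_lex: float = 0.5) -> float:
    """Weighted blend of lexical similarity and judge rating (weights assumed)."""
    return w_lex * token_f1(prediction, reference) + (1 - w_lex) * judge_score(
        prediction, reference
    )
```

A design point worth noting: lexical metrics alone reward surface overlap, while a judge model can credit clinically equivalent phrasing, which is why the paper pairs the two.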