This cross-sectional study analyzed over 4 million anonymized user conversations with Anthropic's Claude to quantify the real-world frequency and scope of healthcare-related tasks, finding that healthcare tasks represented only 2.58% of total conversations, far less than computing-related tasks. The study identified specific healthcare occupations with high interaction volumes, such as dietitians/nutritionists and nurse practitioners, and assessed the breadth of task adoption within roles using a "digital adoption rate," which averaged 16.92% across healthcare roles. The authors conclude that GenAI is being adopted for a measurable subset of healthcare tasks, but the inability to differentiate between healthcare professionals and the general public limits definitive conclusions about the nature of this adoption.
Despite the hype, healthcare tasks represent a surprisingly small fraction (2.58%) of real-world GenAI usage, raising questions about actual clinical impact versus perceived potential.
Abstract

Background: Generative artificial intelligence (GenAI) systems like Anthropic’s Claude and OpenAI’s ChatGPT are rapidly being adopted in various sectors, including health care, offering potential benefits for clinical support, administrative efficiency, and patient information access. However, real-world adoption patterns and the extent to which GenAI is used for health care–related tasks remain poorly understood and distinct from performance benchmarks in controlled settings. Understanding these organic usage patterns is key for assessing GenAI’s impact on health care delivery and patient-provider dynamics.

Objective: This study aimed to quantify the real-world frequency and scope of health care–related tasks performed using Anthropic’s Claude GenAI. We sought to (1) measure the proportion of Claude interactions related to health care tasks versus other domains; (2) identify specific health care occupations (as per O*NET classifications) with high associated interaction volumes; (3) assess the breadth of task adoption within roles using a “digital adoption rate”; and (4) interpret these findings considering the inherent ambiguity regarding user identity (ie, professionals vs public) in the dataset.

Methods: We performed a cross-sectional analysis of more than 4 million anonymized user conversations with Claude (ie, including both free and pro subscribers) from December 2024 to January 2025, using a publicly available dataset from Anthropic’s Economic Index research. Interactions were preclassified by Anthropic’s proprietary Clio model into standardized occupational tasks mapped to the US Department of Labor’s O*NET database. The dataset did not allow differentiation between health care professionals and the general public as users. We focused on interactions mapped to O*NET Healthcare Practitioners and Technical Occupations.
Main outcomes included the proportion of interactions per health care occupation, the proportion of overall health care interactions versus other categories, and the digital adoption rate (ie, distinct tasks performed via GenAI divided by the total possible tasks per occupation).

Results: Health care–related tasks accounted for 2.58% of total analyzed GenAI conversations, significantly lower than domains such as computing (37.22%). Within health care, interaction frequency varied notably by role. Occupations emphasizing patient education and guidance exhibited the highest proportions, including dietitians and nutritionists (6.61% of health care conversations), nurse practitioners (5.63%), music therapists (4.54%), and clinical nurse specialists (4.53%). Digital adoption rates (task breadth) ranged widely across top health care roles (13.33%–65%), averaging 16.92%, below the global average (21.13%). Tasks associated with medical records and health information technicians had the highest adoption rate (65.0%).

Conclusions: GenAI tools are being adopted for a measurable subset of health care–related tasks, with usage concentrated in specific, often patient-facing roles. The critical limitation of user anonymity prevents definitive conclusions regarding whether usage primarily reflects patient information–seeking behavior (potentially driven by access needs) or professional workflow assistance. This ambiguity necessitates caution when interpreting current GenAI adoption. Our findings emphasize the urgent need for strategies addressing potential impacts on clinical workflows, patient decision-making, information quality, and health equity. Future research must aim to differentiate user types, while stakeholders should develop targeted guidance for both safe patient use and responsible professional integration.
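The digital adoption rate defined above is a simple ratio of distinct O*NET tasks observed in GenAI conversations to an occupation's total cataloged tasks. A minimal sketch of the computation, using hypothetical task counts (the study's underlying per-occupation counts are not published here):

```python
def digital_adoption_rate(tasks_observed: int, tasks_total: int) -> float:
    """Fraction of an occupation's O*NET tasks seen in GenAI interactions.

    tasks_observed: number of distinct tasks performed via GenAI.
    tasks_total: total possible O*NET tasks for the occupation.
    """
    if tasks_total <= 0:
        raise ValueError("occupation must have at least one O*NET task")
    return tasks_observed / tasks_total


# Hypothetical illustration: an occupation with 20 cataloged tasks, 13 of
# which appear in conversation data, yields the paper's top rate of 65%.
rate = digital_adoption_rate(13, 20)
print(f"{rate:.2%}")  # 65.00%
```

Because the ratio measures breadth (how many kinds of tasks are touched) rather than volume (how often), an occupation can have a high adoption rate from relatively few conversations, which is consistent with the gap the paper reports between frequency rankings and adoption-rate rankings.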