Abstract

Healthcare systems face constant change driven by demographic shifts (a rapidly aging population), technological developments, global pandemics, and changing social paradigms. These changes are increasingly analysed through the lens of patients’ rights, which are central to ethical and legal discussions in healthcare. A significant change in healthcare today is the growing use of generative artificial intelligence (AI) in clinical practice. This research analyses the potential risks that generative AI systems pose to fundamental patients’ rights. Using a mixed methodology combining a literature review with semi-structured interviews with experts and stakeholders, the study identifies three main areas of risk, each associated with a fundamental value: the right to medical data protection (privacy), the right to equal access to healthcare (justice), and the right to informed consent (autonomy). The report concludes with a discussion of the findings and presents legal and ethical recommendations to promote the benefits of generative AI in healthcare.

1. Introduction

The increasing digitalization of healthcare is reshaping how healthcare professionals handle clinical tasks and patient interactions. This technological shift is accelerated by the systemic pressures healthcare faces today from a double-aging population (a growing share of older people who are themselves living longer) and workforce shortages. Generative artificial intelligence (GenAI) can help healthcare providers with clinical documentation, decision-making, and patient communication through automated processes. At the same time, the rapid integration of GenAI models into healthcare raises ethical and legal concerns. For example, general-purpose AI models are already being used in clinical practice without being subject to high-risk regulatory requirements. This creates regulatory gaps that challenge the protection of fundamental patients’ rights in real-world clinical settings.

This report focuses on three main patients’ rights: the right to privacy, the right to equitable access, and the right to informed consent. These rights are enshrined in the bioethical and legal frameworks that protect patients. The question guiding this study is the following: How does the use of generative AI in healthcare impact patients’ rights, particularly regarding privacy, justice, and autonomy? While the analysis is framed within the EU context, the concepts and findings remain relevant for broader global discussions. By identifying key risks, such as unauthorized access to health data, limitations of anonymization techniques, algorithmic bias, and digital informed consent, this study contributes to the growing body of research on AI in healthcare and the protection of patients’ rights.

2. Context

2.1. What is Generative AI?

Generative artificial intelligence (GenAI) is a broad category of AI that, in addition to recognizing and predicting patterns, can also generate new content, such as text, images, and sound, based on input and training data.[1] GenAI differs from traditional AI in two key ways: dynamic context and scale of use. While traditional AI is typically designed for specific contexts and predefined tasks, GenAI has a degree of “flexibility” and “creativity” that allows a model to exhibit capabilities it was never explicitly trained for and to adapt to different contexts and uses.[2] In this sense, GenAI is a single tool with multiple uses and applications.[3] This high adaptability also makes GenAI’s complex learning algorithms harder to interpret, which reduces the transparency of the system. Moreover, GenAI is probabilistic: asked the same question twice, a model can produce two different outputs.
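To make this last point concrete, the following is a minimal, self-contained Python sketch (not taken from the study) of temperature-based token sampling, the mechanism most generative language models use to produce text; the toy vocabulary and probabilities are invented purely for illustration.

```python
import random

# Toy next-token distribution a language model might assign after a
# prompt such as "The patient was discharged in ..." (values invented).
next_token_probs = {
    "stable": 0.55,
    "good": 0.25,
    "critical": 0.12,
    "unknown": 0.08,
}

def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token; higher temperature flattens the distribution."""
    # Temperature scaling: raise each probability to 1/T, renormalise, sample.
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    r = random.uniform(0.0, sum(scaled.values()))
    cumulative = 0.0
    for tok, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return tok
    return tok  # floating-point edge case: fall back to the last token

# The same "prompt" sampled twice can yield different continuations.
print([sample_next_token(next_token_probs) for _ in range(5)])
print([sample_next_token(next_token_probs) for _ in range(5)])
```

Running the two print statements repeatedly produces different token sequences from identical inputs; this is exactly the inconsistency a clinician encounters when the same question is put to a chatbot twice. Only at very low temperatures does the sampler become near-deterministic.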
A specific category of GenAI is large language models (LLMs), which are designed to generate human-like text. These models belong to the field of natural language processing (NLP), the technology that allows computers to understand and process human language (Google Translate is one example). LLMs are trained on enormous text datasets, from which the model learns to generate text on its own.[4] GenAI has gained significant attention since the release of ChatGPT, a chatbot made publicly available by the American organization OpenAI in 2022. Its ease of use and free accessibility led to widespread adoption,[5] including in healthcare settings.[6]

2.2. Generative AI in Healthcare

In healthcare, traditional AI systems are used in several areas. For example, in radiology, they automate the detection and classification of medical images.[7] In emergency departments and intensive care units (ICUs), AI is used as a decision support system: the Pacmed Critical model at Leiden University Medical Centre (UMC) (Netherlands) is a machine learning model that predicts readmission or death after ICU discharge.[8] AI is also used in patient monitoring to track physiological changes and provide predictive analytics: MS Sherpa is an application for multiple sclerosis that uses digital biomarkers to monitor symptom progression and disease activity.[9]

GenAI offers new possibilities, mainly aimed at reducing administrative burdens, for instance by automatically creating clinical documents such as discharge letters, referral letters, and clinical notes.[10] For example, UMC Utrecht (Netherlands) has developed an application that uses a Generative Pre-trained Transformer (GPT) to generate draft discharge letters[11] (a sketch of such a drafting pipeline appears at the end of this subsection). GenAI is also being used to transcribe and summarize conversations between doctors and patients: “Autoscriber,” at the Leiden UMC research department (Netherlands), is a digital scribe system that automatically records, transcribes, and summarizes the clinical encounter.[12] Beyond administrative tasks, GenAI can assist with clinical decision-making by generating diagnostic and treatment recommendations based on patient data.[13] It also supports medical research activities, such as assisting in systematic reviews.[14] Finally, GenAI is used to automatically answer patients’ questions about their care: at the Elisabeth-TweeSteden Hospital (Netherlands), a chatbot called “Eliza” answers patients’ medical questions.[15]
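As promised above, here is a sketch of how such a GPT-based discharge-letter pipeline is typically wired together. It is illustrative only: the study does not publish UMC Utrecht’s implementation, the model name is a placeholder, and the encounter fields are hypothetical. A real deployment would require de-identified data, a lawful basis for processing, and clinician review of every draft.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical, already de-identified encounter summary; a production
# system would pull (and scrub) these fields from the patient record.
encounter = {
    "admission_reason": "community-acquired pneumonia",
    "treatment": "IV amoxicillin, switched to oral on day 3",
    "discharge_condition": "afebrile, saturating 97% on room air",
    "follow_up": "GP review in one week",
}

prompt = (
    "Draft a concise discharge letter for the referring GP based on "
    "the following structured data:\n"
    + "\n".join(f"- {key}: {value}" for key, value in encounter.items())
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; the hospital's actual model is not public
    messages=[
        {
            "role": "system",
            "content": "You are a clinical documentation assistant. "
                       "Produce a draft that a physician must review and sign.",
        },
        {"role": "user", "content": prompt},
    ],
    temperature=0.2,  # low temperature to reduce output variability
)
print(response.choices[0].message.content)
```

Note how the design choices anticipate the risks this report analyses: only de-identified fields enter the prompt (privacy), the temperature is kept low to limit variability, and the system prompt frames the output as a draft requiring human sign-off (accountability).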
2.3. Current Use of Generative AI in Healthcare

The use of GenAI in healthcare is rapidly increasing, changing how healthcare providers manage clinical tasks and patient interactions. Recent empirical studies reveal that more than half of healthcare providers use ChatGPT, or similar general-purpose LLMs, to assist with clinical documentation, patient communication, clinical decision-making, research, and more.[16] These studies also show that, despite this widespread use, most healthcare providers lack the knowledge and awareness needed to judge the risks of using such tools in general, and for clinical tasks in particular.[17] This gap likely reflects how recently GenAI became popular and widespread, which makes it difficult to fully understand and assess the risks, and the scale, of these technologies for society.

This gap in understanding GenAI’s risks is reflected in healthcare institutions. For example, a survey on AI use in Dutch hospitals found that GenAI was used in 57 percent of hospitals, with applications such as automatic transcription, document summarisation, and text generation.[18] The same study revealed critical issues: in only 29 percent of hospitals was it clear how frequently AI models are retested, retrained, and calibrated against errors such as hallucinations[19] and data drift.[20] In more than half of the hospitals (52 percent), it is unknown whether, and if so how often, such practices occur at all, and in 11 percent, AI models are never retrained. Moreover, only 30 percent of hospitals reported having an AI policy describing the frameworks, standards, and guidelines for the use of AI.[21]

Another survey found that 76 percent of physicians reported using general-purpose LLMs, like ChatGPT, for clinical decision-making.[22] More than 60 percent of primary care doctors reported using them to check drug interactions, more than half for diagnostic support, nearly half for clinical documentation, and more than 40 percent for treatment planning. Additionally, 70 percent use general-purpose LLMs for patient education and literature search.

These findings show a mismatch between the growing use of GenAI in clinical practice and the governance needed to ensure its responsible use. While GenAI has the potential to enhance efficiency and accuracy in clinical tasks, integrating it without the necessary knowledge and without governance, legal, and ethical oversight can lead to harmful consequences for patients, such as data protection violations, automation bias, unclear accountability, healthcare inequality, incorrect clinical decisions, and the spread of misinformation.[23]

2.4. Regulatory Landscape

At the European Union (EU) level, efforts to regulate the safe use of AI in healthcare are currently fragmented: there is no single regulatory framework dedicated solely to governing the use of AI in healthcare. Instead, different laws cover different parts of the issue, including the European Union AI Act,[24] the General Data Protection Regulation,[25] and the Medical Devices Regulation.[26]

2.4.1. The European Union AI Act

In August 2024, the Artificial Intelligence (AI) Act entered into force. The AI Act is an EU regulation that sets rules for the development, placing on the market, and use of AI systems. It adopts a risk-based approach: depending on a system’s application and use, it falls into one of four risk categories (minimal, limited, high, or unacceptable). The higher the risk, the stricter the regulatory requirements (e.g., risk management, data governance, human oversight).[27] Medical devices like AI diagnostic