This paper analyzes the risks of using Generative Legal AI (GLAI) systems, which are AI models adapted for the legal domain, focusing on hallucination and overreliance. It argues that GLAI models, built on statistical token prediction, prioritize fluency over factual accuracy, leading to confabulations. This phenomenon, coupled with automation bias, undermines explainability and poses risks to judicial independence and fundamental rights, particularly within the context of European AI governance.
Generative legal AI's fluency masks factual inaccuracies, creating a dangerous illusion of reliability that threatens judicial independence and fundamental rights.
This article argues that the deployment of generative AI systems in the legal profession requires strong restraint due to the critical risks of hallucination and overreliance. Central to this analysis is the definition of Generative Legal AI (GLAI), an umbrella term for systems specifically adapted for the legal domain, ranging from document drafting to decision support in criminal justice. Unlike traditional AI, GLAI models are built on architectures designed for statistical token prediction rather than legal reasoning, often leading to confabulations in which the system prioritizes linguistic fluency over factual accuracy. These hallucinations obscure the reasoning process, while the persuasive, human-like nature of the output encourages professional overreliance. The paper situates these dynamics within the framework of European AI governance, arguing that the interaction between fabricated data and automation bias fundamentally weakens the principle of explainability. The article concludes that without effective mechanisms for meaningful human scrutiny, the routine adoption of GLAI poses significant challenges to judicial independence and the protection of fundamental rights.