This paper analyzes the potential risks of using Large Language Models (LLMs) in 15 different policing tasks within the England and Wales legal system. It identifies 17 specific risks associated with LLM deployment and provides over 40 examples of how these risks could impact case progression. The work highlights the need for proactive risk mitigation and a comprehensive understanding of the system-wide impacts of LLMs in criminal justice.
LLMs in policing: a seemingly efficient tool that could introduce 17 distinct risks, illustrated by more than 40 examples of how case progression could be derailed.
There is growing interest in the use of Large Language Models (LLMs) in policing, but their deployment carries potential risks. We have developed a practical approach to identifying these risks, grounded in the policing and legal system of England and Wales. We identify 15 policing tasks that could be implemented using LLMs and 17 risks arising from their use, then illustrate these with over 40 examples of their impact on case progression. As good practice is agreed, many of these risks could be reduced. But this requires effort: we need to address the risks in a timely manner and to define system-wide impacts and benefits.