Hallucinations in legal RAG systems are often retrieval failures in disguise, suggesting retrieval quality sets the upper bound for performance.
We introduce Legal RAG Bench, a benchmark and evaluation methodology for assessing the end-to-end performance of legal RAG systems. As a benchmark, Legal RAG Bench consists of 4,876 passages from the Victorian Criminal Charge Book alongside 100 complex, hand-crafted questions demanding expert knowledge of criminal law and procedure. Both long-form answers and supporting passages are provided. As an evaluation methodology, Legal RAG Bench leverages a full factorial design and a novel hierarchical error decomposition framework, enabling apples-to-apples comparisons of the contributions of retrieval and reasoning models in RAG. We evaluate three state-of-the-art embedding models (Isaacus' Kanon 2 Embedder, Google's Gemini Embedding 001, and OpenAI's Text Embedding 3 Large) and two frontier LLMs (Gemini 3.1 Pro and GPT-5.2), finding that information retrieval is the primary driver of legal RAG performance, with LLMs exerting a more moderate effect on correctness and groundedness. Kanon 2 Embedder, in particular, had the largest positive impact on performance, improving average correctness by 17.5 points, groundedness by 4.5 points, and retrieval accuracy by 34 points. We observe that many errors attributed to hallucinations in legal RAG systems are in fact triggered by retrieval failures, concluding that retrieval sets the ceiling for the performance of many modern legal RAG systems. We document why and how we built Legal RAG Bench alongside the results of our evaluations. We also openly release our code and data to assist with reproduction of our findings.
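The full factorial design described above can be sketched in a few lines: every embedder is paired with every LLM, so the marginal effect of each factor can be compared apples-to-apples by averaging over the other. This is an illustrative sketch only; the scores are placeholders, not the paper's results, and the helper names are hypothetical.

```python
from itertools import product
from statistics import mean

# The two factors of the factorial design (names taken from the abstract).
EMBEDDERS = ["kanon-2", "gemini-embedding-001", "text-embedding-3-large"]
LLMS = ["gemini-3.1-pro", "gpt-5.2"]

# In a real run, correctness[(embedder, llm)] would come from judging each
# system configuration's answers to the 100 benchmark questions.
# Placeholder scores only -- NOT the paper's reported numbers.
correctness = {pair: 50.0 for pair in product(EMBEDDERS, LLMS)}

def marginal_effect(levels, index):
    """Average score per level of one factor, marginalizing over the other."""
    return {
        level: mean(score for pair, score in correctness.items()
                    if pair[index] == level)
        for level in levels
    }

embedder_effects = marginal_effect(EMBEDDERS, 0)  # one mean per embedder
llm_effects = marginal_effect(LLMS, 1)            # one mean per LLM
```

Because every embedder meets every LLM, differences between rows of `embedder_effects` isolate retrieval's contribution from the reasoning model's, which is what lets the authors attribute apparent hallucinations back to retrieval failures.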