This paper introduces an LLM-assisted causal inference framework to improve Legal Judgment Prediction (LJP) by addressing the limitations of statistical correlation-based methods. The framework uses a hybrid extraction mechanism combining statistical sampling and LLM semantic reasoning for accurate legal factor extraction, and an LLM-assisted causal structure disambiguation mechanism to resolve structural uncertainty in causal discovery. Experiments on benchmark datasets show the proposed method outperforms state-of-the-art baselines in accuracy and robustness, especially in distinguishing confusing charges.
LLMs can resolve causal ambiguity in legal judgment prediction, leading to more accurate and robust models that outperform purely statistical approaches.
Mainstream methods for Legal Judgment Prediction (LJP) based on Pre-trained Language Models (PLMs) heavily rely on the statistical correlation between case facts and judgment results. This paradigm lacks explicit modeling of legal constituent elements and underlying causal logic, making models prone to learning spurious correlations and suffering from poor robustness. While introducing causal inference can mitigate this issue, existing causal LJP methods face two critical bottlenecks in real-world legal texts: inaccurate legal factor extraction with severe noise, and significant uncertainty in causal structure discovery due to Markov equivalence under sparse features. To address these challenges, we propose an enhanced causal inference framework that integrates Large Language Model (LLM) priors with statistical causal discovery. First, we design a coarse-to-fine hybrid extraction mechanism combining statistical sampling and LLM semantic reasoning to accurately identify and purify standard legal constituent elements. Second, to resolve structural uncertainty, we introduce an LLM-assisted causal structure disambiguation mechanism. By utilizing the LLM as a constrained prior knowledge base, we conduct probabilistic evaluation and pruning on ambiguous causal directions to generate legally compliant candidate causal graphs. Finally, a causal-aware judgment prediction model is constructed by explicitly constraining text attention intensity via the generated causal graphs. Extensive experiments on multiple benchmark datasets, including LEVEN, QA, and CAIL, demonstrate that our proposed method significantly outperforms state-of-the-art baselines in both predictive accuracy and robustness, particularly in distinguishing confusing charges.
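To make the disambiguation and attention-constraint steps concrete, here is a minimal sketch, under assumptions not stated in the abstract: the LLM prior is mocked by a fixed probability table (in the paper's framework these scores would come from constrained LLM prompting), and the factor names, `disambiguate` function, and threshold are all illustrative, not the authors' actual implementation.

```python
# Illustrative sketch of (a) LLM-assisted orientation of Markov-equivalent
# edges and (b) a causal-graph-derived attention mask. All names and values
# here are hypothetical stand-ins for the mechanisms the abstract describes.

FACTORS = ["violence", "intent_to_rob", "property_taken", "charge_robbery"]

# Mocked LLM prior: P(cause -> effect) for each ambiguous (undirected) edge.
LLM_PRIOR = {
    ("violence", "charge_robbery"): 0.9,
    ("charge_robbery", "violence"): 0.1,
    ("intent_to_rob", "property_taken"): 0.8,
    ("property_taken", "intent_to_rob"): 0.2,
}

def disambiguate(ambiguous_edges, prior, threshold=0.5):
    """Orient each undirected edge from a Markov equivalence class by keeping
    only the direction whose prior probability clears the threshold; edges
    with no confident direction are pruned."""
    oriented = []
    for a, b in ambiguous_edges:
        p_ab = prior.get((a, b), 0.0)
        p_ba = prior.get((b, a), 0.0)
        if p_ab >= threshold and p_ab > p_ba:
            oriented.append((a, b))
        elif p_ba >= threshold:
            oriented.append((b, a))
    return oriented

def attention_mask(factors, graph):
    """Binary mask allowing attention from factor a to factor b only when
    the candidate causal graph contains the edge a -> b (self-attention on
    the diagonal is always allowed)."""
    idx = {f: i for i, f in enumerate(factors)}
    n = len(factors)
    mask = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for a, b in graph:
        mask[idx[a]][idx[b]] = 1
    return mask

edges = [("violence", "charge_robbery"), ("intent_to_rob", "property_taken")]
graph = disambiguate(edges, LLM_PRIOR)
mask = attention_mask(FACTORS, graph)
```

In this toy run, both ambiguous edges are oriented toward the legally plausible direction, and the resulting mask only opens the attention entries corresponding to those causal edges, mirroring how the framework constrains attention intensity with the generated graph.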