This paper introduces a framework for stabilising human-AI reasoning, addressing the problem that LLMs produce fluent but potentially unreliable outputs that can mislead users. The proposed solution is a two-layer approach: human-side mechanisms (uncertainty cues, conflict surfacing, reasoning traces) and a model-side Epistemic Control Loop (ECL) that detects instability. The framework aims to increase the signal-to-noise ratio in AI interactions, making uncertainty and drift visible for better governance and compliance.
LLMs' fluent outputs mask unreliable reasoning, creating a "drift" effect that demands a two-layer solution for stable human-AI collaboration.
Large language models are increasingly integrated into decision-making in areas such as healthcare, law, finance, engineering, and government. Yet they share a critical limitation: they produce fluent outputs even when their internal reasoning has drifted. A confident answer can conceal uncertainty, speculation, or inconsistency, and small changes in phrasing can lead to different conclusions. This makes LLMs useful assistants but unreliable partners in high-stakes contexts. Humans exhibit a similar weakness, often mistaking fluency for reliability. When a model responds smoothly, users tend to trust it, even when both model and user are drifting together.

This paper is the first in a five-paper research series on stabilising human-AI reasoning. The series proposes a two-layer approach: Parts II-IV introduce human-side mechanisms such as uncertainty cues, conflict surfacing, and auditable reasoning traces, while Part V develops a model-side Epistemic Control Loop (ECL) that detects instability and modulates generation accordingly. Together, these layers form a missing operational substrate for governance by increasing the signal-to-noise ratio at the point of use.

Stabilising interaction makes uncertainty and drift visible before enforcement is applied, enabling more precise capability governance. This aligns with emerging compliance expectations, including the EU AI Act and ISO/IEC 42001, by making reasoning processes traceable under real conditions of use. The central claim is that fluency is not reliability. Without structures that stabilise both human and model reasoning, AI cannot be trusted or governed where it matters most.
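To make the two-layer idea concrete, the sketch below shows one possible shape of a model-side control loop: a draft answer is checked against instability signals, and if any fire, explicit uncertainty cues are surfaced to the human reader instead of a bare fluent answer. This is a minimal illustration, not the paper's implementation; the signal names (self-consistency, calibrated confidence), thresholds, and helper functions are assumptions introduced here for clarity.

```python
# Illustrative sketch of an epistemic-control-loop pattern (hypothetical; not the
# ECL described in the paper). Instability signals and thresholds are assumed.

from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    self_consistency: float  # agreement across resampled answers, 0..1 (assumed signal)
    confidence: float        # calibrated confidence estimate, 0..1 (assumed signal)


def detect_instability(draft: Draft,
                       consistency_floor: float = 0.7,
                       confidence_floor: float = 0.6) -> list[str]:
    """Return instability flags for this draft (illustrative heuristics only)."""
    flags = []
    if draft.self_consistency < consistency_floor:
        flags.append("low self-consistency across resampled generations")
    if draft.confidence < confidence_floor:
        flags.append("low calibrated confidence")
    return flags


def epistemic_control_loop(draft: Draft) -> str:
    """If instability is detected, surface uncertainty cues instead of a bare answer."""
    flags = detect_instability(draft)
    if not flags:
        return draft.text
    # Modulate the output: make the drift visible to the human reader.
    cues = "; ".join(flags)
    return f"[UNCERTAIN: {cues}]\n{draft.text}"


if __name__ == "__main__":
    answer = Draft(text="The contract clause is enforceable.",
                   self_consistency=0.55, confidence=0.8)
    print(epistemic_control_loop(answer))
```

The design point the sketch is meant to convey is that uncertainty is surfaced at the point of use, before any downstream enforcement or governance step, which is where the paper locates the missing operational substrate.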