This paper introduces a decision-centric framework for LLM systems that explicitly separates decision-relevant signal estimation from the policy that maps these signals to actions. By disentangling assessment and action, the framework enables modular improvement and targeted debugging of LLM control. Experiments demonstrate that this approach reduces futile actions, improves task success, and provides interpretable failure modes compared to implicit decision-making within generation.
Untangling LLM control into explicit decision-making layers slashes futile actions and boosts task success, revealing failure modes you can actually debug.
LLM systems must make control decisions in addition to generating outputs: whether to answer, clarify, retrieve, call tools, repair, or escalate. In many current architectures, these decisions remain implicit within generation, entangling assessment and action in a single model call and making failures hard to inspect, constrain, or repair. We propose a decision-centric framework that separates decision-relevant signals from the policy that maps them to actions, turning control into an explicit and inspectable layer of the system. This separation supports attribution of failures to signal estimation, decision policy, or execution, and enables modular improvement of each component. It unifies familiar single-step settings such as routing and adaptive inference, and extends naturally to sequential settings in which actions alter the information available before acting. Across three controlled experiments, the framework reduces futile actions, improves task success, and reveals interpretable failure modes. More broadly, it offers a general architectural principle for building more reliable, controllable, and diagnosable LLM systems.
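The separation the abstract describes — estimating decision-relevant signals, then mapping them to control actions through an explicit policy — can be illustrated with a minimal sketch. The signal names (`confidence`, `ambiguity`, `evidence_coverage`), thresholds, and action set here are hypothetical stand-ins, not the paper's actual design:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ANSWER = "answer"
    CLARIFY = "clarify"
    RETRIEVE = "retrieve"
    ESCALATE = "escalate"

@dataclass
class Signals:
    # Hypothetical decision-relevant signals; names are illustrative only.
    confidence: float         # estimated confidence in answering directly
    ambiguity: float          # estimated ambiguity of the user request
    evidence_coverage: float  # how well retrieved context grounds the query

def estimate_signals(query: str) -> Signals:
    # Placeholder estimator; a real system would derive these from
    # calibrated model outputs or auxiliary classifiers.
    return Signals(confidence=0.9, ambiguity=0.1, evidence_coverage=0.8)

def decide(sig: Signals) -> Action:
    # Explicit, inspectable policy mapping signals to control actions.
    # Because it is separate from generation, each branch can be
    # logged, audited, and tuned independently.
    if sig.ambiguity > 0.5:
        return Action.CLARIFY
    if sig.evidence_coverage < 0.4:
        return Action.RETRIEVE
    if sig.confidence < 0.3:
        return Action.ESCALATE
    return Action.ANSWER
```

Keeping `estimate_signals` and `decide` as distinct components mirrors the paper's attribution story: a bad outcome can be traced to a miscalibrated signal, a poorly chosen threshold, or faulty execution, rather than to one opaque model call.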