This paper introduces the Test-Time Control (TTC) layer, an adapter that integrates optimal control into LLMs to enhance reasoning. TTC formulates reasoning as finite-horizon Linear Quadratic Regulator (LQR) planning over latent states and represents a value function inside the architecture, enabling planning before prediction. By implementing a hardware-efficient LQR solver as a fused CUDA kernel, the authors achieve scalability and demonstrate significant improvements in mathematical reasoning when applying TTC to pretrained LLMs.
Forget test-time training: this work bakes optimal control directly into LLMs, yielding gains of up to +27.8% on mathematical reasoning benchmarks.
Associative memory has long underpinned the design of sequential models. Beyond recall, humans reason by projecting future states and selecting goal-directed actions, a capability that modern language models increasingly require but do not natively encode. While prior work uses reinforcement learning or test-time training, planning remains external to the model architecture. We formulate reasoning as optimal control and introduce the Test-Time Control (TTC) layer, which performs finite-horizon LQR planning over latent states at inference time, represents a value function within the neural architecture, and leverages it as a nested objective to enable planning before prediction. To ensure scalability, we derive a hardware-efficient LQR solver based on a symplectic formulation and implement it as a fused CUDA kernel, enabling parallel execution with minimal overhead. Integrated as an adapter into pretrained LLMs, TTC layers improve mathematical reasoning performance by up to +27.8% on MATH-500 and yield 2-3x Pass@8 improvements on AMC and AIME, demonstrating that embedding optimal control as an architectural component provides an effective and scalable mechanism for reasoning beyond test-time training.
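For readers unfamiliar with the control machinery the abstract invokes, the sketch below illustrates finite-horizon discrete-time LQR solved by backward Riccati recursion. This is the textbook recursion, not the authors' symplectic solver or CUDA kernel; the dynamics matrices `A`, `B`, costs `Q`, `R`, `Qf`, and horizon `T` are standard control notation chosen here purely for illustration.

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, Qf, T):
    """Finite-horizon LQR via backward Riccati recursion.

    Returns feedback gains K_0..K_{T-1} minimizing
    x_T' Qf x_T + sum_t (x_t' Q x_t + u_t' R u_t)
    subject to x_{t+1} = A x_t + B u_t.
    """
    P = Qf                  # terminal cost-to-go matrix
    gains = []
    for _ in range(T):
        # K_t = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P_t = Q + A' P_{t+1} (A - B K_t)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]      # reorder so gains[t] applies at step t

# Toy example: double-integrator "latent" dynamics
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R, Qf = np.eye(2), np.array([[1.0]]), 10 * np.eye(2)
Ks = lqr_finite_horizon(A, B, Q, R, Qf, T=20)
x = np.array([1.0, 0.0])
for K in Ks:                # roll out the closed-loop plan u_t = -K_t x_t
    x = (A - B @ K) @ x
```

In TTC, per the abstract, an analogous planning problem is posed over the model's latent states at inference time and solved in parallel via a symplectic formulation rather than this sequential recursion.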