The paper introduces Test-Time Control (TTC), a neural network layer that casts reasoning as optimal control, performing finite-horizon LQR planning over latent states during inference. A hardware-efficient LQR solver, based on a symplectic formulation and implemented as a fused CUDA kernel, ensures scalability. Integrated into pretrained LLMs, TTC layers improve mathematical reasoning by up to +27.8% on MATH-500, demonstrating the effectiveness of embedding optimal control directly into the architecture.
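The finite-horizon LQR planning step mentioned above can be sketched with a textbook backward Riccati recursion. This is a minimal NumPy reference, not the paper's symplectic fused-CUDA solver; the matrices `A`, `B`, `Q`, `R` and the horizon `T` are generic placeholders, not values from the paper:

```python
import numpy as np

def lqr_finite_horizon(A, B, Q, R, T):
    """Return feedback gains K_0..K_{T-1} minimizing sum_t x'Qx + u'Ru.

    Standard backward Riccati recursion (reference implementation only).
    """
    P = Q.copy()                      # terminal cost-to-go P_T = Q
    gains = []
    for _ in range(T):
        # K_t = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # Riccati update: P_t = Q + A' P A - A' P B K_t
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]                # ordered t = 0 .. T-1

# Toy latent dynamics x_{t+1} = A x_t + B u_t (double integrator)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

Ks = lqr_finite_horizon(A, B, Q, R, T=10)
x = np.array([1.0, 0.0])
for K in Ks:                          # closed-loop rollout u_t = -K_t x_t
    x = A @ x + B @ (-(K @ x))
```

The closed loop drives the latent state toward the origin; in the TTC setting this planning pass would run before the model emits its next prediction.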
Pretrained LLMs gain up to +27.8% in mathematical reasoning on MATH-500 when a hardware-efficient optimal control layer is fused directly into their architecture, enabling planning before prediction.
Associative memory has long underpinned the design of sequential models. Beyond recall, humans reason by projecting future states and selecting goal-directed actions, a capability that modern language models increasingly require but do not natively encode. While prior work uses reinforcement learning or test-time training, planning remains external to the model architecture. We formulate reasoning as optimal control and introduce the Test-Time Control (TTC) layer, which performs finite-horizon LQR planning over latent states at inference time, represents a value function within the neural architecture, and leverages it as a nested objective to enable planning before prediction. To ensure scalability, we derive a hardware-efficient LQR solver based on a symplectic formulation and implement it as a fused CUDA kernel, enabling parallel execution with minimal overhead. Integrated as an adapter into pretrained LLMs, TTC layers improve mathematical reasoning by up to +27.8% on MATH-500 and yield 2-3x Pass@8 gains on AMC and AIME, demonstrating that embedding optimal control as an architectural component provides an effective and scalable mechanism for reasoning beyond test-time training.
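The symplectic formulation mentioned in the abstract is what makes the solver parallelizable: in the textbook Hamiltonian view of discrete-time LQR, each timestep propagates the state/costate pair through one symplectic transfer matrix, and since matrix products are associative, the whole horizon can be composed by a parallel prefix (scan) in logarithmic depth. The sketch below shows the standard construction (it assumes `A` is invertible) and verifies the symplectic property numerically; it is an illustration of the general idea, not the paper's CUDA kernel:

```python
import numpy as np

def symplectic_step(A, B, Q, R):
    """Transfer matrix propagating (x_t, lambda_t) -> (x_{t+1}, lambda_{t+1}).

    Textbook discrete-time LQR Hamiltonian form; requires A invertible.
    """
    S = B @ np.linalg.solve(R, B.T)       # S = B R^{-1} B'
    Ait = np.linalg.inv(A).T              # A^{-T}
    return np.block([[A + S @ Ait @ Q, -S @ Ait],
                     [-Ait @ Q,         Ait]])

n = 2
A = np.array([[1.0, 1.0], [0.0, 1.0]])    # toy latent dynamics
B = np.array([[0.0], [1.0]])
Q, R = np.eye(n), np.eye(1)

M = symplectic_step(A, B, Q, R)

# Symplectic check: M' J M = J for the canonical form J.
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
ok = np.allclose(M.T @ J @ M, J)

# Associativity is the key to hardware efficiency: composing T steps is a
# single product M_{T-1} ... M_0, computable as a parallel scan, and the
# product of symplectic matrices is again symplectic.
M2 = M @ M
```

Because each per-step matrix is small and the composition is a pure matrix-product scan, this structure maps naturally onto a fused GPU kernel, which is the kind of implementation the abstract describes.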