The paper introduces Entropy Trend Reward (ETR), a novel reward function for training LLMs to generate more efficient chain-of-thought reasoning. ETR encourages a progressive reduction in uncertainty throughout the reasoning process, rather than simply minimizing overall entropy or penalizing length. Integrating ETR with Group Relative Policy Optimization (GRPO) yields significant gains in accuracy while substantially shortening reasoning traces: a 9.9% accuracy boost and a 67% length reduction on DeepSeek-R1-Distill-7B across four benchmarks.
LLMs reason more effectively when their uncertainty steadily decreases, paving the way for shorter, more accurate chains of thought.
Chain-of-thought (CoT) reasoning improves large language model performance on complex tasks, but often produces excessively long and inefficient reasoning traces. Existing methods shorten CoTs using length penalties or global entropy reduction, implicitly assuming that low uncertainty is desirable throughout reasoning. We show instead that reasoning efficiency is governed by the trajectory of uncertainty: CoTs with dominant downward entropy trends are substantially shorter. Motivated by this insight, we propose Entropy Trend Reward (ETR), a trajectory-aware objective that encourages progressive uncertainty reduction while allowing limited local exploration. We integrate ETR into Group Relative Policy Optimization (GRPO) and evaluate it across multiple reasoning models and challenging benchmarks. ETR consistently achieves a superior accuracy-efficiency tradeoff, improving DeepSeek-R1-Distill-7B by 9.9% in accuracy while reducing CoT length by 67% across four benchmarks. Code is available at https://github.com/Xuan1030/ETR.
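The abstract does not spell out the reward formula, but the core idea of scoring the shape of an entropy trajectory can be sketched. The snippet below is a minimal illustration, not the authors' implementation: it assumes per-token next-token entropies computed from the policy's logits, and the tolerance `tol` (permitting limited local exploration), the weight `lam`, and all function names are hypothetical.

```python
import numpy as np

def token_entropies(logits: np.ndarray) -> np.ndarray:
    """Shannon entropy of the next-token distribution at each generation step.
    logits: (T, V) pre-softmax scores for the T tokens of one CoT rollout."""
    z = logits - logits.max(axis=-1, keepdims=True)   # numerically stable softmax
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)      # shape (T,)

def entropy_trend_reward(entropies: np.ndarray, tol: float = 0.05) -> float:
    """Hypothetical trend score in [-1, 1]: steps whose entropy does not rise
    by more than `tol` count as progress; larger rises count against the
    trajectory. The tolerance leaves room for limited local exploration."""
    deltas = np.diff(entropies)          # step-to-step entropy changes
    down = (deltas <= tol).mean()        # fraction of non-rising (or mildly rising) steps
    up = (deltas > tol).mean()           # fraction of steps where uncertainty grows
    return float(down - up)              # dominant downward trend -> close to +1

def grpo_advantages(task_rewards, trend_rewards, lam: float = 0.1):
    """Group-relative advantages in the GRPO style: combine task and trend
    rewards for a group of rollouts from the same prompt, then standardize
    within the group."""
    r = np.asarray(task_rewards) + lam * np.asarray(trend_rewards)
    return (r - r.mean()) / (r.std() + 1e-8)
```

An alternative trend score, such as the sign of a least-squares slope fit to the entropy curve, would serve the same purpose; the design choice being illustrated is that the reward targets the trajectory of uncertainty rather than its absolute level or the raw sequence length.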