The paper introduces Process Reward over Structured Formal Intermediates (PRoSFI), a novel reward method for training LLMs to perform more reliable multi-step reasoning. PRoSFI rewards models for generating structured intermediate steps that are formally verifiable, rather than directly rewarding final answer correctness. Experiments show that this approach enhances reasoning reliability without sacrificing accuracy, guiding the model towards generating machine-checkable proofs.
Reward LLMs for verifiable reasoning steps, not just correct answers, to get more reliable multi-step logic.
Large language models (LLMs) have recently demonstrated impressive performance on complex, multi-step reasoning tasks, especially when post-trained with outcome-rewarded reinforcement learning (Guo et al., 2025). However, outcome rewards often overlook flawed intermediate steps, yielding unreliable reasoning even when final answers are correct. To address this, we propose PRoSFI (Process Reward over Structured Formal Intermediates), a novel reward method that enhances reasoning reliability without compromising accuracy. Instead of generating formal proofs directly, which is rarely feasible for a modestly sized (7B) model, the model outputs structured intermediate steps aligned with its natural language reasoning. Each step is then verified by a formal prover, and only fully validated reasoning chains receive high rewards. This integration of formal verification guides the model toward generating step-by-step machine-checkable proofs, thereby yielding more credible final answers. PRoSFI offers a simple and effective approach to training trustworthy reasoning models.
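The all-or-nothing reward structure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `verify_step` is a hypothetical stand-in for the formal prover, here checking toy arithmetic equalities, and the step format is assumed.

```python
# Sketch of PRoSFI-style all-or-nothing process reward: every structured
# intermediate step must pass formal verification for the chain to be rewarded.

def verify_step(step: str) -> bool:
    """Hypothetical verifier stand-in: accepts steps of the form 'lhs = rhs'
    where both sides evaluate to the same number. A real system would call
    a formal prover here instead."""
    try:
        lhs, rhs = step.split("=")
        # eval is acceptable only for this toy arithmetic illustration
        return abs(eval(lhs) - eval(rhs)) < 1e-9
    except Exception:
        return False  # unparseable steps fail verification

def process_reward(steps: list[str]) -> float:
    """High reward only if the chain is non-empty and every step verifies."""
    return 1.0 if steps and all(verify_step(s) for s in steps) else 0.0

good_chain = ["2 + 3 = 5", "5 * 4 = 20"]
bad_chain = ["2 + 3 = 6", "6 * 4 = 24"]  # first step is formally wrong
```

The key design choice is that a single failed step zeroes the reward for the whole chain, which pushes the policy toward chains that are verifiable end to end rather than merely ending at the right answer.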