This paper introduces ToolRM, a family of lightweight reward models (RMs) designed specifically for tool-use scenarios, addressing the lack of specialized RMs for function-calling tasks. The authors construct a high-quality pairwise preference dataset, ToolPref-Pairwise-30K, using rule-based scoring and multidimensional sampling, and introduce TRBench$_{BFCL}$ to evaluate RMs on tool calling. Experiments show that ToolRMs trained on this dataset outperform existing LLMs and RMs in tool-use accuracy and reward judgment, and generalize to critique tasks such as Best-of-N sampling and self-correction, while also reducing token usage.
ToolRMs drastically improve tool-use accuracy in LLMs, outperforming existing models by up to 17.94%, while also reducing output token usage by over 66% through efficient inference-time scaling.
Reward models (RMs) play a critical role in aligning large language models (LLMs) with human preferences. Yet in the domain of tool learning, the lack of RMs specifically designed for function-calling tasks has limited progress toward more capable agentic AI. We introduce ToolRM, a family of lightweight reward models tailored for general tool-use scenarios. To build these models, we propose a novel pipeline that constructs high-quality pairwise preference data using rule-based scoring and multidimensional sampling. This yields ToolPref-Pairwise-30K, a diverse, balanced, and challenging preference dataset that supports both generative and discriminative reward modeling. We also introduce TRBench$_{BFCL}$, a benchmark built on the agent evaluation suite BFCL to evaluate RMs on tool-calling tasks. Trained on our constructed data, models from the Qwen3-4B/8B series achieve up to 17.94% higher accuracy, substantially outperforming frontier LLMs and RMs in pairwise reward judgments. Beyond training objectives, generative ToolRM generalizes to broader critique tasks, including Best-of-N sampling and self-correction. Experiments on ACEBench highlight its effectiveness and efficiency, enabling inference-time scaling while reducing output token usage by over 66%. Its support for downstream RL training further validates its practical utility. We release our data to facilitate future research.
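The Best-of-N sampling mentioned above can be sketched in a few lines: sample N candidate tool calls from a policy model, score each with a reward model, and keep the highest-scoring one. The sketch below uses hypothetical stand-ins (`fake_generate`, `fake_score`) for the LLM sampler and ToolRM's judgment; it illustrates the selection loop only, not the paper's implementation.

```python
import itertools
import json

def best_of_n(prompt, generate, score, n=8):
    """Sample n candidate tool calls and return the highest-scoring one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy stand-ins (NOT from the paper): a generator cycling through canned
# tool calls, and a "reward" that simply prefers well-formed JSON arguments.
canned = itertools.cycle([
    '{"name": "get_weather", "arguments": {"city": "Paris"}}',
    'get_weather(Paris',  # malformed candidate
])

def fake_generate(prompt):
    return next(canned)

def fake_score(call):
    try:
        json.loads(call)
        return 1.0
    except json.JSONDecodeError:
        return 0.0

best = best_of_n("What's the weather in Paris?", fake_generate, fake_score, n=4)
# best is the well-formed JSON tool call
```

A real generative RM would replace `fake_score` with a model-produced judgment; the 66% token savings reported above comes from the RM pruning weak candidates early rather than from this selection step itself.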