This paper introduces ToolPRM, an inference scaling framework for function calling that combines fine-grained beam search with a process reward model (PRM) to score the internal steps of individual function calls. To train ToolPRM, the authors construct a fine-grained intra-call process supervision dataset, automatically annotated with function-masking techniques to provide step-level rewards. Experiments show that ToolPRM outperforms coarse-grained and outcome reward models in predictive accuracy, that inference scaling with ToolPRM significantly improves performance on function calling tasks, and that structured output generation follows the principle of "explore more but retain less".
Function calling gets a serious upgrade: a new reward model and inference scaling technique boost performance by focusing on the *process* of tool use, not just the outcome.
Large language models (LLMs) are increasingly demonstrating strong capabilities as autonomous agents, with function calling serving as a core mechanism for interaction with the environment. Meanwhile, inference scaling has become a cutting-edge technique to enhance LLM performance by allocating more computational resources during inference. However, current research on inference scaling primarily focuses on unstructured output generation, leaving its application to structured outputs, such as function calling, largely underexplored. To bridge this gap, we propose an inference scaling framework that combines fine-grained beam search with a process reward model, ToolPRM, which scores the internal steps of each individual function call. To train ToolPRM, we construct the first fine-grained intra-call process supervision dataset, automatically annotated with function-masking techniques to provide step-level rewards for structured tool-use reasoning. Extensive experiments demonstrate that ToolPRM outperforms coarse-grained and outcome reward models in predictive accuracy, indicating its stronger capability to supervise the function calling inference process. The inference scaling technique equipped with ToolPRM also significantly improves backbone model performance across various function calling tasks and benchmarks. More importantly, we reveal a key principle for applying inference scaling to structured outputs: "explore more but retain less", owing to the unrecoverable nature of structured function call generation.
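To make the search procedure concrete, here is a minimal, self-contained Python sketch of fine-grained beam search guided by a step-level PRM. It is illustrative only: the step granularity, the sampler (`sample_next_steps`), and the scorer (`prm_score`) are toy stand-ins for the backbone LLM and ToolPRM, not the paper's actual implementation.

```python
"""Illustrative sketch of fine-grained beam search with a step-level PRM.

All functions below are hypothetical placeholders: in the real system, the
LLM proposes the next intra-call step and a trained PRM (e.g. ToolPRM)
scores it.
"""

import random

def sample_next_steps(prefix, k):
    # Toy stand-in for the LLM proposing k candidate next steps of a
    # function call (e.g. the function name, then one argument at a time).
    return [prefix + [f"step{len(prefix)}_v{i}"] for i in range(k)]

def prm_score(steps):
    # Toy stand-in for a PRM that judges the latest intra-call step of a
    # partial, still-incomplete function call.
    return random.random()

def fine_grained_beam_search(branch=8, beam=2, depth=4):
    """'Explore more but retain less': wide branching per step but narrow
    retention, since a malformed step in a structured call cannot be
    repaired by later tokens."""
    beams = [([], 0.0)]  # (partial call steps, cumulative PRM score)
    for _ in range(depth):
        pool = []
        for steps, score in beams:
            for cand in sample_next_steps(steps, branch):     # explore more
                pool.append((cand, score + prm_score(cand)))  # step-level reward
        pool.sort(key=lambda item: item[1], reverse=True)
        beams = pool[:beam]                                   # retain less
    return beams[0][0]

if __name__ == "__main__":
    print(fine_grained_beam_search())
```

The asymmetry between `branch` and `beam` reflects the paper's stated principle: because errors in a structured call are unrecoverable, it pays to sample many candidate steps while keeping only a few survivors at each step.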