This paper introduces R2IF, a reasoning-aware reinforcement learning framework designed to align reasoning processes with tool-call decisions in large language models (LLMs) during function calling. By integrating a composite reward that combines format/correctness constraints, a Chain-of-Thought Effectiveness Reward (CER), and a Specification-Modification-Value (SMV) reward, R2IF improves both the accuracy and the interpretability of tool calls. Experimental results show that R2IF outperforms existing methods by up to 34.62% on benchmark tasks, indicating its potential for more reliable LLM deployments in real-world applications.
R2IF improves function-calling accuracy by up to 34.62%, bridging the gap between reasoning and decision-making in LLMs.
Function calling empowers large language models (LLMs) to interface with external tools, yet existing RL-based approaches suffer from misalignment between reasoning processes and tool-call decisions. We propose R2IF, a reasoning-aware RL framework for interpretable function calling that adopts a composite reward integrating format/correctness constraints, a Chain-of-Thought Effectiveness Reward (CER), and a Specification-Modification-Value (SMV) reward, optimized via Group Relative Policy Optimization (GRPO). Experiments on BFCL and ACEBench show that R2IF outperforms baselines by up to 34.62% (Llama3.2-3B on BFCL) while achieving positive average CoT effectiveness (0.05 for Llama3.2-3B), enhancing both function-calling accuracy and interpretability for reliable tool-augmented LLM deployment.
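To make the reward structure concrete, below is a minimal sketch of how a three-term composite reward and GRPO's group-relative advantages might fit together. The weights, term definitions, and helper functions (format_correctness, cot_effectiveness, spec_modification_value) are illustrative assumptions rather than the paper's actual formulation; only the three-term composite structure follows the abstract's description, and the group-wise normalization follows the standard GRPO recipe.

```python
# Illustrative sketch of an R2IF-style composite reward plus GRPO advantages.
# All weights and term definitions below are assumptions, not the paper's.

from dataclasses import dataclass


@dataclass
class RewardWeights:
    fmt: float = 1.0   # format/correctness constraint term (assumed weight)
    cer: float = 0.5   # Chain-of-Thought Effectiveness Reward term (assumed)
    smv: float = 0.5   # Specification-Modification-Value term (assumed)


def format_correctness(call: dict, gold: dict) -> float:
    """Assumed check: 1.0 if the tool call is well-formed and matches gold."""
    return 1.0 if call == gold else 0.0


def cot_effectiveness(score_with_cot: float, score_without_cot: float) -> float:
    """Assumed CER proxy: how much the chain of thought improves the call."""
    return score_with_cot - score_without_cot


def spec_modification_value(n_helpful_edits: int, n_edits: int) -> float:
    """Assumed SMV proxy: fraction of reasoning-driven spec edits that help."""
    return n_helpful_edits / n_edits if n_edits else 0.0


def composite_reward(call, gold, s_with, s_without, n_helpful, n_total,
                     w: RewardWeights = RewardWeights()) -> float:
    """Weighted sum of the three terms described in the abstract."""
    return (w.fmt * format_correctness(call, gold)
            + w.cer * cot_effectiveness(s_with, s_without)
            + w.smv * spec_modification_value(n_helpful, n_total))


def grpo_advantages(rewards: list[float]) -> list[float]:
    """Group-relative advantages as in GRPO: z-score rewards within a group
    of responses sampled for the same prompt."""
    mu = sum(rewards) / len(rewards)
    std = (sum((r - mu) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mu) / (std + 1e-8) for r in rewards]
```

In a GRPO training loop, each prompt would be sampled into a group of candidate responses, each scored with the composite reward, and the policy updated using the group-normalized advantages in place of a learned value baseline.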