This paper investigates the challenges faced by tool-using agents in complex, dynamic environments such as τ-bench, focusing on consistent reasoning and adherence to domain-specific policies. Through manual error analysis of conversational trajectories, the authors identify common failure modes and propose input reformulation as a mitigation strategy. They introduce the Input-Reformulation Multi-Agent (IRMA) framework, which reformulates user queries with domain rules and tool suggestions, achieving pass^5 gains of 16.1%, 12.7%, and 19.1% over ReAct, Function Calling, and Self-Reflection baselines, respectively.
LLMs struggle to consistently use tools in dynamic environments, but a simple input reformulation strategy can boost performance by over 16% compared to standard methods like ReAct.
Recent advances in the reasoning and planning capabilities of large language models (LLMs) have enabled their potential as autonomous agents capable of tool use in dynamic environments. However, in multi-turn conversational environments like $\tau$-bench, these agents often struggle with consistent reasoning, adherence to domain-specific policies, and extracting correct information over a long horizon of tool calls and conversation. To capture and mitigate these failures, we conduct a comprehensive manual analysis of the common errors occurring in the conversation trajectories. We then experiment with reformulations of the inputs to the tool-calling agent to improve agent decision-making. Finally, we propose the Input-Reformulation Multi-Agent (IRMA) framework, which automatically reformulates user queries, augmenting them with relevant domain rules and tool suggestions for the tool-calling agent to focus on. The results show that IRMA significantly outperforms ReAct, Function Calling, and Self-Reflection by 16.1%, 12.7%, and 19.1%, respectively, in overall pass^5 scores. These findings highlight the superior reliability and consistency of IRMA compared to other methods in dynamic environments.
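To make the core idea concrete, below is a minimal, hypothetical sketch of input reformulation in the spirit of IRMA: the raw user query is wrapped with the domain rules and tool suggestions the downstream tool-calling agent should focus on. The function name, prompt layout, and airline-style example are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of input reformulation (not the paper's implementation):
# augment a raw user query with domain rules and tool suggestions before
# passing it to a tool-calling agent.
from dataclasses import dataclass


@dataclass
class ToolSpec:
    name: str
    description: str


def reformulate_query(user_query: str, domain_rules: list[str], tools: list[ToolSpec]) -> str:
    """Wrap the raw user query with the domain policies and tool hints
    the downstream tool-calling agent should attend to."""
    rules_block = "\n".join(f"- {rule}" for rule in domain_rules)
    tools_block = "\n".join(f"- {t.name}: {t.description}" for t in tools)
    return (
        "## User query\n"
        f"{user_query}\n\n"
        "## Relevant domain rules\n"
        f"{rules_block}\n\n"
        "## Suggested tools\n"
        f"{tools_block}\n"
    )


if __name__ == "__main__":
    # Illustrative example only; the domain rules and tools are made up.
    query = "Please change my flight to tomorrow and refund the difference."
    rules = [
        "Refunds are only issued to the original payment method.",
        "Flight changes within 24 hours of departure incur a fee.",
    ]
    tools = [
        ToolSpec("get_reservation", "Look up the user's current reservation."),
        ToolSpec("modify_flight", "Change the date or time of a booked flight."),
    ]
    print(reformulate_query(query, rules, tools))
```

The design choice this illustrates is that the reformulation happens outside the tool-calling agent: the agent receives a single enriched input rather than having to recall policies and choose among all available tools over a long conversation horizon.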