ZebraArena, a procedurally generated diagnostic environment, is introduced to isolate and study the coupling of reasoning and action in tool-augmented LLMs by minimizing the influence of environment dynamics, memorized knowledge, and dataset contamination. The environment requires targeted tool use to acquire critical information, enabling interpretable evaluation of reasoning and tool-use efficiency via deterministic solutions and a theoretical optimal query count. Experiments show that even advanced models like GPT-5 and Gemini 2.5 Pro struggle on hard instances, achieving only 60% accuracy, and exhibit significant inefficiencies in tool usage, exceeding the theoretical optimum by 70-270%.
Even GPT-5 and Gemini 2.5 Pro still fail to efficiently couple reasoning with tool use, making up to 270% more tool calls than theoretically optimal in a new diagnostic environment.
Tool-augmented large language models (LLMs) must tightly couple multi-step reasoning with external actions, yet existing benchmarks often confound this interplay with complex environment dynamics, memorized knowledge, or dataset contamination. In this paper, we introduce ZebraArena, a procedurally generated diagnostic environment for studying reasoning-action coupling in tool-augmented LLMs, with controllable difficulty and a knowledge-minimal design that limits gains from memorization or dataset contamination. Each task in ZebraArena requires critical information that is available only through targeted tool use, yielding an interpretable interface between external information acquisition and deductive reasoning. This design provides deterministic evaluation via unique solutions, and a theoretical optimal query count for measuring efficient tool use. We show that ZebraArena demands a combination of in-depth reasoning and accurate external tool calling, which remains a challenge: frontier reasoning models such as GPT-5 and Gemini 2.5 Pro achieve only 60% accuracy on the hard instances. We also observe a persistent gap between theoretical optimality and practical tool usage; for example, GPT-5 uses 70-270% more tool calls than the theoretical optimum. We highlight key findings from our evaluation and hope ZebraArena stimulates further research on the interplay between internal reasoning and external action.
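As a rough illustration of the efficiency measure described above, the overhead figures (e.g. "70-270% more tool calls") can be read as the fractional excess of actual tool calls over the theoretical optimal query count. The following is a minimal sketch; the function name and interface are ours for illustration, not from the ZebraArena codebase.

```python
def tool_call_overhead(actual_calls: int, optimal_calls: int) -> float:
    """Fractional excess of tool calls over the theoretical optimum.

    A return value of 0.7 corresponds to "70% more calls than optimal";
    2.7 corresponds to "270% more". Illustrative only.
    """
    if optimal_calls <= 0:
        raise ValueError("optimal_calls must be positive")
    return (actual_calls - optimal_calls) / optimal_calls

# E.g. 17 calls against an optimum of 10 is a 70% overhead,
# and 37 calls against an optimum of 10 is a 270% overhead.
```

Under this reading, a model at the upper end of the reported range makes 3.7x the optimal number of tool calls in absolute terms.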