The paper introduces Environment Tuning, a novel training paradigm for LLM agents that learns from problem instances without expert trajectories, addressing data scarcity and overfitting issues in tool-use tasks. This method employs a structured curriculum, environment augmentation for corrective feedback, and fine-grained progress rewards to stabilize exploration. Using only 400 instances from the BFCL benchmark, Environment Tuning achieves competitive in-distribution performance and superior out-of-distribution generalization compared to SFT-based approaches.
Forget synthetic data and overfitting: Environment Tuning lets LLM agents learn complex tool-use behaviors directly from the environment, slashing data needs and boosting generalization.
Large Language Model (LLM) agents show great promise for complex, multi-turn tool-use tasks, but their development is often hampered by the extreme scarcity of high-quality training data. Supervised fine-tuning (SFT) on synthetic data leads to overfitting, whereas standard reinforcement learning (RL) struggles with a critical cold-start problem and training instability. To address these challenges, we introduce $\textbf{Environment Tuning}$, a novel training paradigm that enables agents to learn complex behaviors directly from problem instances without relying on pre-collected expert trajectories. $\textbf{Environment Tuning}$ orchestrates this learning process through a structured curriculum, actionable environment augmentation that provides corrective feedback, and fine-grained progress rewards to ensure stable and efficient exploration. Using only 400 problem instances from the Berkeley Function-Calling Leaderboard (BFCL) benchmark, our method not only achieves competitive in-distribution performance against strong baselines but also demonstrates superior out-of-distribution generalization, overcoming the performance collapse common to SFT-based approaches. Our work presents a paradigm shift from supervised fine-tuning on static trajectories to dynamic, environment-based exploration, paving the way for training more robust and data-efficient agents. The code is available at https://github.com/inclusionAI/AWorld-RL/tree/main/EnvTuning.
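To make the three mechanisms concrete, here is a minimal sketch of how a training loop built on these ideas could be organized. This is an illustrative assumption, not the authors' released implementation: all names (`Agent`, `make_env`, `CURRICULUM`, `progress_reward`, `policy_update`) are hypothetical placeholders for the curriculum staging, feedback-augmented environment, and fine-grained reward the abstract describes.

```python
# Hypothetical sketch of an Environment Tuning-style training loop.
# All classes and helpers below are illustrative assumptions, not the
# actual API from the AWorld-RL repository.

import random

# Structured curriculum: problem instances are grouped from easy to hard,
# and the agent trains on one stage at a time.
CURRICULUM = ["single_tool", "multi_turn", "long_horizon"]


def progress_reward(prev_state, state):
    """Fine-grained progress reward: credit each completed sub-goal
    rather than waiting for a sparse end-of-episode success signal."""
    return state.subgoals_done - prev_state.subgoals_done


def rollout(agent, env, max_turns=10):
    """Collect one trajectory. The augmented environment is assumed to
    return a corrective feedback message on malformed or failed tool
    calls instead of silently terminating."""
    state = env.reset()
    trajectory = []
    for _ in range(max_turns):
        action = agent.act(state)
        next_state, feedback, done = env.step(action)  # feedback: corrective hint
        trajectory.append((state, action, progress_reward(state, next_state), feedback))
        state = next_state
        if done:
            break
    return trajectory


def environment_tuning(agent, make_env, instances, epochs_per_stage=3):
    """Train directly on raw problem instances (no expert trajectories),
    advancing through the curriculum stage by stage."""
    for stage in CURRICULUM:
        pool = [inst for inst in instances if inst.stage == stage]
        for _ in range(epochs_per_stage):
            random.shuffle(pool)
            for instance in pool:
                env = make_env(instance, augment_feedback=True)
                trajectory = rollout(agent, env)
                agent.policy_update(trajectory)  # e.g. a PPO-style RL step
```

The design intuition, as stated in the abstract, is that dense progress rewards and corrective feedback keep early exploration from collapsing (the RL cold-start problem), while the staged curriculum replaces the expert trajectories that SFT would otherwise require.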