The paper introduces Gaia2, a benchmark designed to evaluate LLM agents in dynamic, asynchronous environments where the environment evolves independently of agent actions. Gaia2 features scenarios requiring agents to handle temporal constraints, adapt to noisy events, resolve ambiguity, and collaborate, coupled with write-action verifiers for fine-grained evaluation. Evaluations of state-of-the-art models reveal trade-offs between reasoning, efficiency, and robustness, with no single model dominating across all capabilities, highlighting the sim2real gap.
GPT-5 can ace most agent benchmarks, but put it in a dynamic, real-world environment and it chokes on time-sensitive tasks, exposing a critical "sim2real" gap.
We introduce Gaia2, a benchmark for evaluating large language model agents in realistic, asynchronous environments. Unlike prior static or synchronous evaluations, Gaia2 introduces scenarios where environments evolve independently of agent actions, requiring agents to operate under temporal constraints, adapt to noisy and dynamic events, resolve ambiguity, and collaborate with other agents. Each scenario is paired with a write-action verifier, enabling fine-grained, action-level evaluation and making Gaia2 directly usable for reinforcement learning from verifiable rewards. Our evaluation of state-of-the-art proprietary and open-source models shows that no model dominates across capabilities: GPT-5 (high) reaches the strongest overall score of 42% pass@1 but fails on time-sensitive tasks, Claude-4 Sonnet trades accuracy and speed for cost, and Kimi-K2 leads among open-source models with 21% pass@1. These results highlight fundamental trade-offs between reasoning, efficiency, and robustness, and expose challenges in closing the "sim2real" gap. Gaia2 is built on a consumer environment with the open-source Agents Research Environments platform and designed to be easy to extend. By releasing Gaia2 alongside the foundational ARE framework, we aim to provide the community with a flexible infrastructure for developing, benchmarking, and training the next generation of practical agent systems.
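To make the write-action verifier idea concrete, here is a minimal sketch of how such a check might look. All names (`WriteAction`, `verify`, the tool strings, the deadline semantics) are hypothetical illustrations, not the actual Gaia2 or ARE API: the point is simply that the agent's emitted write actions can be matched against an expected call under a temporal constraint, yielding a verifiable per-action reward signal.

```python
from dataclasses import dataclass

@dataclass
class WriteAction:
    tool: str          # hypothetical tool name, e.g. "calendar.create_event"
    args: dict         # arguments the agent supplied to the tool
    timestamp: float   # simulated environment time of the action, in seconds

def verify(actions, expected_tool, expected_args, deadline):
    """Toy verifier: succeed iff some write action invokes the expected
    tool with (at least) the expected arguments before the deadline."""
    for a in actions:
        if (a.tool == expected_tool
                and all(a.args.get(k) == v for k, v in expected_args.items())
                and a.timestamp <= deadline):
            return True
    return False

# A trajectory with one on-time matching action and one late action:
trace = [
    WriteAction("email.send", {"to": "bob"}, timestamp=120.0),
    WriteAction("calendar.create_event", {"title": "sync"}, timestamp=30.0),
]
print(verify(trace, "calendar.create_event", {"title": "sync"}, deadline=60.0))  # True
print(verify(trace, "email.send", {"to": "bob"}, deadline=60.0))                 # False: too late
```

Because the verdict is a deterministic function of the action trace, a check of this shape can double as a binary reward for reinforcement learning from verifiable rewards, as the abstract describes.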