This paper investigates the reliability of computer-use agents, identifying stochasticity, task ambiguity, and behavioral variability as key factors contributing to inconsistent performance. The authors analyze these factors in the OSWorld environment using repeated task executions and statistical tests to quantify performance changes across settings. Results indicate that reliability is significantly impacted by both task specification and the consistency of agent behavior across different runs.
Even when a computer-use agent succeeds once, ambiguous task specification and variable agent behavior can undermine its reliability.
Computer-use agents have rapidly improved on real-world tasks such as web navigation, desktop automation, and software interaction, in some cases surpassing human performance. Yet even when the task and model are unchanged, an agent that succeeds once may fail on a repeated execution of the same task. This raises a fundamental question: if an agent can succeed at a task once, what prevents it from doing so reliably? In this work, we study the sources of unreliability in computer-use agents through three factors: stochasticity during execution, ambiguity in task specification, and variability in agent behavior. We analyze these factors on OSWorld using repeated executions of the same task together with paired statistical tests that capture task-level changes across settings. Our analysis shows that reliability depends on both how tasks are specified and how agent behavior varies across executions. These findings suggest the need to evaluate agents under repeated execution, to allow agents to resolve task ambiguity through interaction, and to favor strategies that remain stable across runs.
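The paired, task-level analysis described above can be sketched in code. The abstract does not name the specific statistical test used, so the following is a minimal illustration, not the paper's actual method: it computes per-task success rates from repeated executions in two settings and applies an exact two-sided sign test to the paired differences. All function and variable names here are hypothetical.

```python
import math
from typing import Dict, List


def per_task_rate(runs: Dict[str, List[bool]]) -> Dict[str, float]:
    """Success rate per task over repeated executions of that task."""
    return {task: sum(outcomes) / len(outcomes) for task, outcomes in runs.items()}


def paired_sign_test(rates_a: Dict[str, float], rates_b: Dict[str, float]) -> float:
    """Exact two-sided sign test on paired per-task success rates.

    Tasks with identical rates in both settings are excluded (ties carry
    no directional information). Returns the two-sided p-value under the
    null hypothesis that a task is equally likely to improve or degrade.
    """
    diffs = [rates_a[t] - rates_b[t]
             for t in rates_a if t in rates_b and rates_a[t] != rates_b[t]]
    n = len(diffs)
    if n == 0:
        return 1.0
    positives = sum(d > 0 for d in diffs)
    # Exact binomial tail under p = 0.5, doubled for a two-sided test.
    k = min(positives, n - positives)
    p = 2 * sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)


# Hypothetical example: three tasks, each run repeatedly in two settings.
setting_a = per_task_rate({
    "task1": [True, True, False],
    "task2": [True, True, True],
    "task3": [True, False, False],
})
setting_b = per_task_rate({
    "task1": [False, True, False],
    "task2": [True, False, False],
    "task3": [False, False, False],
})
p_value = paired_sign_test(setting_a, setting_b)
```

With all three tasks improving from setting B to setting A, the exact two-sided p-value is 2 x (1/8) = 0.25, reflecting that three paired observations alone cannot establish significance; repeated execution across many tasks is what gives such paired tests their power.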