This perspective paper critiques the limitations of current human-agent interaction paradigms, which are pointwise and reactive, lacking foresight into long-term consequences. It proposes "simulation-in-the-loop," a new paradigm where users and agents explore simulated future trajectories before committing to actions. By enabling informed exploration and discovery of latent constraints, simulation-in-the-loop promises more effective human-agent collaboration.
Stop micromanaging your AI assistant: "simulation-in-the-loop" lets you explore future outcomes *before* acting, turning reactive corrections into proactive collaboration.
Large Language Models (LLMs) are increasingly used to power autonomous agents for complex, multi-step tasks. However, human-agent interaction remains pointwise and reactive: users approve or correct individual actions to mitigate immediate risks, without visibility into subsequent consequences. This forces users to mentally simulate long-term effects, a cognitively demanding and often inaccurate process. Users have control over individual steps but lack the foresight to make informed decisions. We argue that effective collaboration requires foresight, not just control. We propose simulation-in-the-loop, an interaction paradigm that enables users and agents to explore simulated future trajectories before committing to decisions. Simulation transforms intervention from reactive guesswork into informed exploration, while helping users discover latent constraints and preferences along the way. This perspective paper characterizes the limitations of current paradigms, introduces a conceptual framework for simulation-based collaboration, and illustrates its potential through concrete human-agent collaboration scenarios.
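To make the paradigm concrete, below is a minimal sketch of one simulation-in-the-loop cycle: the agent proposes alternative multi-step plans, a simulator rolls each plan forward, and the user reviews the projected futures before anything is executed. All names here (`Action`, `simulate_trajectory`, `propose_candidates`, `review`) are hypothetical illustrations under our own assumptions, not an API defined by the paper.

```python
# Hypothetical sketch of a simulation-in-the-loop interaction cycle.
from dataclasses import dataclass
from typing import List


@dataclass
class Action:
    description: str


@dataclass
class Trajectory:
    actions: List[Action]
    projected_outcome: str  # what the simulator predicts will happen


def simulate_trajectory(plan: List[Action]) -> Trajectory:
    # Stand-in simulator: roll the plan forward and summarize its projected effects.
    outcome = " -> ".join(a.description for a in plan)
    return Trajectory(actions=plan, projected_outcome=outcome)


def propose_candidates() -> List[List[Action]]:
    # Stand-in agent: propose alternative multi-step plans (illustrative content only).
    return [
        [Action("draft refund email"), Action("send to all customers")],
        [Action("draft refund email"), Action("send to affected customers only")],
    ]


def review(trajectories: List[Trajectory]) -> Trajectory:
    # Stand-in user step: inspect projected futures and choose one before anything runs.
    for i, t in enumerate(trajectories):
        print(f"[{i}] projected: {t.projected_outcome}")
    choice = int(input("Commit to which trajectory? "))
    return trajectories[choice]


def execute(trajectory: Trajectory) -> None:
    # Only the committed trajectory is ever executed.
    for action in trajectory.actions:
        print(f"executing: {action.description}")


if __name__ == "__main__":
    candidates = propose_candidates()
    futures = [simulate_trajectory(plan) for plan in candidates]
    chosen = review(futures)  # foresight: consequences are inspected first
    execute(chosen)           # commitment happens only after informed review
```

The design choice the sketch highlights is the ordering: simulation and review precede commitment, so the user's intervention happens on projected trajectories rather than on individual actions after the fact.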