This paper investigates how intermediate feedback from agentic, LLM-based in-car assistants affects user experience during multi-step tasks while driving. A controlled study (N=45) compared providing planned steps and intermediate results with a silent, final-response-only approach, and found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load. The study also found that users prefer adaptive feedback: high transparency at first, with verbosity decreasing as trust is established and adjusted to task stakes and context.
Giving drivers intermediate updates from an LLM assistant boosts trust and reduces workload, suggesting that transparency is key for agentic AI in safety-critical environments.
Agentic AI assistants that autonomously perform multi-step tasks raise open questions for user experience: how should such systems communicate progress and reasoning during extended operations, especially in attention-critical contexts such as driving? We investigate feedback timing and verbosity in agentic, LLM-based in-car assistants through a controlled, mixed-methods study (N=45) comparing feedback that reports planned steps and intermediate results against silent operation with a final-only response. Using a dual-task paradigm with an in-car voice assistant, we found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load, with effects that held across varying task complexities and interaction contexts. Interviews further revealed user preferences for an adaptive approach: high initial transparency to establish trust, followed by progressively reduced verbosity as the system proves reliable, with adjustments based on task stakes and situational context. We translate our empirical findings into design implications for feedback timing and verbosity in agentic assistants, balancing transparency and efficiency.