The paper addresses the challenge of sparse rewards in Reinforcement Learning for GUI agents by introducing the Adaptive Milestone Reward (ADMIRE), a mechanism that dynamically distills milestones from successful explorations to provide verifiable, adaptive rewards. ADMIRE employs an asymmetric credit assignment strategy to denoise successful trajectories and scaffold failed ones, effectively balancing reward fidelity and density. Experiments on AndroidWorld demonstrate over 10% absolute improvement in success rate across different base models, with strong generalizability observed in web navigation and embodied tasks.
GUI agents learn faster and generalize better with a new reward shaping technique that dynamically adapts to successful exploration trajectories, outperforming fixed reward schemes.
Reinforcement Learning (RL) has emerged as a mainstream paradigm for training Mobile GUI Agents, yet it struggles with the temporal credit assignment problem inherent in long-horizon tasks. A primary challenge lies in the trade-off between reward fidelity and density: outcome rewards offer high fidelity but suffer from signal sparsity, while process rewards provide dense supervision but remain prone to bias and reward hacking. To resolve this conflict, we propose the Adaptive Milestone Reward (ADMIRE) mechanism. ADMIRE constructs a verifiable, adaptive reward system by anchoring trajectories to milestones, which are dynamically distilled from successful explorations. Crucially, ADMIRE integrates an asymmetric credit assignment strategy that denoises successful trajectories and scaffolds failed trajectories. Extensive experiments demonstrate that ADMIRE consistently yields over 10% absolute improvement in success rate across different base models on AndroidWorld. Moreover, the method exhibits robust generalizability, achieving strong performance across diverse RL algorithms and heterogeneous environments such as web navigation and embodied tasks.
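To make the abstract's two core ideas concrete, here is a minimal sketch of milestone distillation and asymmetric credit assignment. This is not the paper's implementation: the frequency-based milestone filter, the bonus values, and the state names are all illustrative assumptions, intended only to show the general shape of anchoring rewards to milestones mined from successful trajectories.

```python
from collections import Counter

def distill_milestones(successful_trajectories, min_support=0.6):
    """Illustrative milestone distillation (assumption, not the paper's method):
    keep states appearing in at least `min_support` of successful trajectories,
    so incidental actions (e.g. a stray scroll) are filtered out."""
    counts = Counter(
        state for traj in successful_trajectories for state in set(traj)
    )
    n = len(successful_trajectories)
    return {state for state, c in counts.items() if c / n >= min_support}

def shaped_rewards(trajectory, succeeded, milestones, bonus=0.2):
    """Illustrative asymmetric credit assignment:
    - milestone steps earn a dense bonus, so a failed trajectory is still
      scaffolded by credit for the milestones it did reach;
    - non-milestone steps earn nothing, denoising successful trajectories
      that contain redundant actions;
    - a sparse outcome reward is added only on actual task success."""
    rewards = [bonus if state in milestones else 0.0 for state in trajectory]
    if succeeded:
        rewards[-1] += 1.0  # high-fidelity outcome reward
    return rewards

# Hypothetical GUI-agent traces: two successes, one with a redundant scroll.
successes = [
    ["open_app", "tap_search", "type_query", "submit"],
    ["open_app", "tap_search", "type_query", "scroll", "submit"],
]
milestones = distill_milestones(successes)
# "scroll" appears in only half of the successes, so it is filtered out.

# A failed trajectory still gets credit for the milestones it reached.
failed_rewards = shaped_rewards(["open_app", "tap_search", "wrong_tap"],
                                succeeded=False, milestones=milestones)
```

The sketch shows why the trade-off in the abstract is eased: the milestone bonuses are dense but verifiable against successful explorations, while the sparse outcome reward preserves fidelity at the end of the trajectory.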