This paper introduces Gradient Realignment via Active Shared Perception (GRASP), a multi-agent reinforcement learning framework designed to address non-stationarity by enabling agents to actively perceive and align with each other's policy updates. GRASP derives a consensus gradient from independent agent gradients, guiding policy evolution towards a generalized Bellman equilibrium. Experiments on SMAC and GRF show GRASP's scalability and performance improvements over existing CTDE methods.
Multi-agent RL agents can learn to collaborate *faster* by actively "perceiving" and aligning with each other's policy updates, rather than passively observing environment interactions.
Non-stationarity in multi-agent reinforcement learning arises from concurrent policy updates and leads to persistent fluctuations in each agent's effective environment. Existing approaches such as Centralized Training with Decentralized Execution (CTDE) and sequential update schemes partially mitigate this issue. However, because each agent's perception of the other agents' policies still depends on sampled environment-interaction data, agents effectively operate in a passive perception state. This inevitably triggers equilibrium oscillations and significantly slows system convergence. To address this issue, we propose Gradient Realignment via Active Shared Perception (GRASP), a novel framework that defines a generalized Bellman equilibrium as a stable objective for policy evolution. The core mechanism of GRASP derives a consensus gradient from the agents' independent gradients, enabling agents to actively perceive each other's policy updates and optimize team collaboration. Theoretically, we leverage the Kakutani Fixed-Point Theorem to prove that the consensus direction $u^*$ guarantees the existence and attainability of this equilibrium. Extensive experiments on the StarCraft II Multi-Agent Challenge (SMAC) and Google Research Football (GRF) demonstrate the scalability and promising performance of the framework.
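The abstract does not spell out how the consensus gradient is computed, so the following is a minimal sketch of one plausible realization, assuming a unit-normalized mean gradient as the consensus direction and a PCGrad-style conflict-removal projection for realignment. The function names `consensus_direction` and `realign` and the projection rule are illustrative assumptions, not GRASP's actual derivation.

```python
import numpy as np

def consensus_direction(grads, eps=1e-8):
    """Hypothetical consensus direction u*: the unit-normalized mean of the
    per-agent policy gradients (an illustrative stand-in for the consensus
    gradient the paper derives)."""
    g = np.stack(grads)                      # shape: (n_agents, dim)
    mean = g.mean(axis=0)
    return mean / (np.linalg.norm(mean) + eps)

def realign(grads, u):
    """Realign each agent's gradient toward u: if a gradient conflicts with
    the consensus direction (negative inner product), project out the
    conflicting component, PCGrad-style; otherwise leave it unchanged."""
    realigned = []
    for g_i in grads:
        dot = float(g_i @ u)
        if dot < 0.0:
            g_i = g_i - dot * u              # conflicting component removed; g_i @ u becomes 0
        realigned.append(g_i)
    return realigned

# Toy usage: the second agent's gradient opposes the consensus direction
# and is projected onto its orthogonal complement.
grads = [np.array([3.0, 0.0]), np.array([-1.0, 0.2])]
u_star = consensus_direction(grads)
aligned = realign(grads, u_star)
```

Under this sketch, every realigned update has a non-negative inner product with the shared direction $u^*$, which is one simple way to suppress the mutually conflicting updates that drive equilibrium oscillations; GRASP's actual consensus-gradient construction and its fixed-point guarantees are developed in the paper.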