STAIRS-Former, a novel transformer architecture, is introduced to address challenges in offline multi-agent reinforcement learning (MARL) with multi-task datasets by incorporating spatial and temporal hierarchies for improved attention and long-horizon dependency capture. The architecture interleaves spatially and temporally structured transformer layers to enable effective attention over critical tokens, and uses token dropout to enhance robustness across varying agent populations. Empirical results on SMAC, SMAC-v2, MPE, and MaMuJoCo demonstrate that STAIRS-Former achieves state-of-the-art performance compared to existing methods.
By interleaving spatial and temporal hierarchies in a transformer architecture, STAIRS-Former significantly boosts performance in offline multi-agent reinforcement learning across diverse benchmarks.
Offline multi-agent reinforcement learning (MARL) with multi-task datasets is challenging due to varying numbers of agents across tasks and the need to generalize to unseen scenarios. Prior works employ transformers with observation tokenization and hierarchical skill learning to address these issues. However, they underutilize the transformer attention mechanism for inter-agent coordination and rely on a single history token, which limits their ability to capture long-horizon temporal dependencies in partially observable MARL settings. In this paper, we propose STAIRS-Former, a transformer architecture augmented with spatial and temporal hierarchies that enables effective attention over critical tokens while capturing long interaction histories. We further introduce token dropout to enhance robustness and generalization across varying agent populations. Extensive experiments on diverse multi-agent benchmarks, including SMAC, SMAC-v2, MPE, and MaMuJoCo, with multi-task datasets demonstrate that STAIRS-Former consistently outperforms prior methods and achieves new state-of-the-art performance.
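The abstract does not give architectural details, so the following is only an illustrative sketch of the two ideas it names: interleaved spatial (per-step, across agents) and temporal (per-agent, across history) attention, plus token dropout over whole agent tokens. All shapes, function names, and the single-block structure are assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product self-attention over the second-to-last axis.
    d = q.shape[-1]
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def token_dropout(tokens, rate, rng):
    # Zero out whole agent tokens at random (hypothetical training-time
    # regularizer for robustness to varying team sizes).
    keep = rng.random(tokens.shape[:-1]) >= rate
    return tokens * keep[..., None]

def spatial_temporal_block(x, rate=0.0, rng=None):
    """x: (T, N, d) -- N agent tokens over a T-step history.

    One interleaved block: spatial attention mixes agents within each
    timestep, then temporal attention mixes timesteps within each agent,
    so long interaction histories are attended to without collapsing
    them into a single history token.
    """
    if rng is not None and rate > 0:
        x = token_dropout(x, rate, rng)
    x = x + attention(x, x, x)           # spatial: attends over the N axis
    xt = np.swapaxes(x, 0, 1)            # (N, T, d)
    xt = xt + attention(xt, xt, xt)      # temporal: attends over the T axis
    return np.swapaxes(xt, 0, 1)         # back to (T, N, d)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 3, 16))      # T=8 steps, N=3 agents, d=16
y = spatial_temporal_block(x, rate=0.25, rng=rng)
print(y.shape)  # (8, 3, 16)
```

Because attention operates over a variable-length axis, the same block accepts any number of agents N, which is the property token dropout is meant to exercise during training.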