The paper introduces LLM-NAR, a framework that enhances Large Language Model (LLM) performance on Multi-Agent Path Finding (MAPF) tasks by integrating a pre-trained graph neural network-based Neural Algorithmic Reasoner (NAR). LLM-NAR uses a cross-attention mechanism to allow the NAR to inform the LLM with map information, improving planning and multi-agent coordination. Experiments in both simulated and real-world environments demonstrate that LLM-NAR significantly outperforms existing LLM-based approaches for MAPF.
A neural algorithmic reasoning module injects graph-aware map knowledge into LLMs, markedly improving their planning and coordination on multi-agent path finding tasks.
The development and application of large language models (LLMs) have demonstrated that foundation models can be used to solve a wide array of tasks. However, their performance on multi-agent path finding (MAPF) tasks has been less than satisfactory, and only a few studies have explored this area. MAPF is a complex problem requiring both planning and multi-agent coordination. To improve the performance of LLMs on MAPF tasks, we propose a novel framework, LLM-NAR, which leverages neural algorithmic reasoners (NAR) to inform LLMs for MAPF. LLM-NAR consists of three key components: an LLM for MAPF, a pre-trained graph neural network-based NAR, and a cross-attention mechanism. This is the first work to propose using a neural algorithmic reasoner to integrate GNNs with map information for MAPF, thereby guiding the LLM to achieve superior performance. LLM-NAR can be easily adapted to various LLMs. Both simulation and real-world experiments demonstrate that our method significantly outperforms existing LLM-based approaches in solving MAPF problems.
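The abstract describes a cross-attention mechanism through which the NAR's GNN-based map embeddings inform the LLM. A minimal sketch of that idea, assuming standard scaled dot-product cross-attention where LLM token states act as queries and NAR node embeddings as keys and values (all names, dimensions, and data here are illustrative, not the paper's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query row (an LLM token
    state) attends over all key/value rows (NAR node embeddings of the
    map graph), producing a map-informed representation per token."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_tokens, n_nodes)
    weights = softmax(scores, axis=-1)       # attention over map nodes
    return weights @ values, weights         # fused states, attention map

# Hypothetical shapes: 4 LLM token states and 6 NAR node embeddings, dim 8.
rng = np.random.default_rng(0)
llm_states = rng.normal(size=(4, 8))
nar_nodes = rng.normal(size=(6, 8))
fused, attn = cross_attention(llm_states, nar_nodes, nar_nodes)
```

Each row of `attn` sums to 1, so every LLM token state becomes a convex combination of the NAR's map-node embeddings; this is one plausible way graph-structured map information could be injected into the LLM's representation.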