The paper introduces PanguMotion, a motion forecasting framework for autonomous driving that leverages Transformer blocks from the Pangu-1B LLM to enhance feature extraction. By processing driving scenes as continuous sequences rather than independent snapshots, the model captures temporal dependencies and historical context. Experiments on the Argoverse 2 dataset, reorganized to simulate continuous driving, demonstrate improved motion forecasting accuracy.
LLM Transformers can be effectively repurposed to enhance motion forecasting in autonomous driving by capturing temporal context in continuous driving scenarios.
Motion forecasting is a core task in autonomous driving systems, aiming to accurately predict the future trajectories of surrounding agents to ensure driving safety. Existing methods typically process discrete driving scenes independently, neglecting the temporal continuity and historical context inherent in real-world driving. This paper proposes PanguMotion, a motion forecasting framework for continuous driving scenarios that integrates Transformer blocks from the Pangu-1B large language model into motion prediction architectures as feature enhancement modules. We conduct experiments on the Argoverse 2 dataset processed with the RealMotion data reorganization strategy, which transforms each independent scene into a continuous sequence that mimics real-world driving.
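The core idea can be illustrated with a minimal sketch: a pretrained Transformer block, kept frozen, is inserted between a scene encoder and a trajectory decoder so that per-timestep scene features can attend over the preceding history. All names below (`FrozenTransformerBlock`, the toy shapes) are hypothetical illustrations, not the authors' code or the Pangu-1B API.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class FrozenTransformerBlock:
    """Stand-in for a pretrained LLM Transformer block used as a feature
    enhancer. Its weights stay fixed (frozen); in a real pipeline only the
    driving-specific encoder/decoder around it would be trained."""
    def __init__(self, d_model, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(d_model)
        # Single-head attention projections (frozen, illustrative only)
        self.Wq = rng.normal(0.0, scale, (d_model, d_model))
        self.Wk = rng.normal(0.0, scale, (d_model, d_model))
        self.Wv = rng.normal(0.0, scale, (d_model, d_model))
        self.Wo = rng.normal(0.0, scale, (d_model, d_model))
        self.d = d_model

    def __call__(self, x):
        # x: (T, d) — one feature vector per scene snapshot in the sequence
        q, k, v = x @ self.Wq, x @ self.Wk, x @ self.Wv
        attn = softmax(q @ k.T / np.sqrt(self.d))  # (T, T) temporal attention
        return x + attn @ v @ self.Wo              # residual keeps shape (T, d)

# Toy pipeline: 5 consecutive scene snapshots (continuous sequence, as in the
# RealMotion reorganization) are enhanced so each step sees its history.
T, d = 5, 16
scene_feats = np.random.default_rng(1).normal(size=(T, d))  # encoder output (assumed)
enhanced = FrozenTransformerBlock(d)(scene_feats)
assert enhanced.shape == (T, d)  # dimensions preserved for the downstream decoder
```

Keeping the block frozen is what makes the LLM reusable here: the pretrained sequence-modeling capacity is borrowed wholesale, and only lightweight adapter layers around it need to learn the driving domain.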