This paper introduces RadarSplat-RIO, a radar bundle adjustment framework that leverages Gaussian Splatting (GS) as a dense, differentiable scene representation. It jointly optimizes radar sensor poses and scene geometry using full range-azimuth-Doppler data, bringing multi-frame BA to radar odometry. Experiments across multiple indoor scenes show a 90% reduction in average absolute translational error and an 80% reduction in rotational error relative to a prior radar-inertial odometry method.
Radar odometry gets a 90% accuracy boost by finally bringing bundle adjustment to the table, thanks to a Gaussian Splatting representation.
Radar is more resilient to adverse weather and lighting conditions than cameras and lidar, making radar simultaneous localization and mapping (SLAM) attractive in degraded environments. However, most radar SLAM pipelines still rely heavily on frame-to-frame odometry, which leads to substantial drift. While loop closure can correct long-term errors, it requires revisiting places and depends on robust place recognition. In contrast, visual odometry methods typically leverage bundle adjustment (BA) to jointly optimize poses and the map within a local window, yet an equivalent BA formulation for radar has remained largely unexplored. We present the first radar BA framework enabled by Gaussian Splatting (GS), a dense and differentiable scene representation. Our method jointly optimizes radar sensor poses and scene geometry using full range-azimuth-Doppler data, bringing the benefits of multi-frame BA to radar for the first time. When integrated with an existing radar-inertial odometry frontend, our approach significantly reduces pose drift and improves robustness. Across multiple indoor scenes, our radar BA reduces average absolute translational and rotational errors by 90% and 80%, respectively, compared with the prior radar-inertial odometry baseline.
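The core idea the abstract describes — jointly optimizing sensor poses and scene geometry against raw measurements — can be illustrated with a deliberately simplified sketch. The toy below is not the paper's GS pipeline: it replaces Gaussian primitives with point landmarks, drops azimuth and Doppler, uses 2D translation-only poses, and optimizes squared range residuals by plain gradient descent. All names (`ranges`, the learning rate, the scene layout) are illustrative assumptions; the point is only that poses and map are refined together in one differentiable objective, which is what distinguishes BA from frame-to-frame odometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: 4 sensor positions (2D) and 6 point landmarks.
poses_gt = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1], [3.0, -0.1]])
lmks_gt = rng.uniform(-2.0, 5.0, size=(6, 2))

def ranges(poses, lmks):
    """Range from every pose to every landmark, shape (P, L)."""
    d = poses[:, None, :] - lmks[None, :, :]
    return np.linalg.norm(d, axis=-1)

z = ranges(poses_gt, lmks_gt)  # noiseless range "measurements"

# Perturbed initial estimates, mimicking a drifting odometry frontend.
poses = poses_gt + rng.normal(scale=0.3, size=poses_gt.shape)
poses[0] = poses_gt[0]  # pin the first pose to anchor the gauge freedom
lmks = lmks_gt + rng.normal(scale=0.3, size=lmks_gt.shape)

def cost(poses, lmks):
    return 0.5 * np.sum((ranges(poses, lmks) - z) ** 2)

initial_cost = cost(poses, lmks)

lr = 0.02
for _ in range(2000):
    d = poses[:, None, :] - lmks[None, :, :]       # (P, L, 2)
    r = np.linalg.norm(d, axis=-1)                 # (P, L)
    e = r - z                                      # range residuals
    unit = d / np.maximum(r[..., None], 1e-9)      # d(range)/d(pose)
    # Joint gradient step over BOTH poses and landmarks (the BA idea).
    g_poses = np.sum(e[..., None] * unit, axis=1)
    g_lmks = -np.sum(e[..., None] * unit, axis=0)
    g_poses[0] = 0.0                               # keep the anchor pose fixed
    poses -= lr * g_poses
    lmks -= lr * g_lmks

final_cost = cost(poses, lmks)
print(f"cost: {initial_cost:.4f} -> {final_cost:.6f}")
```

In the actual method, the landmark set would be a field of Gaussians rendered into range-azimuth-Doppler space, and the residual would compare rendered and measured radar returns, but the optimization structure — one differentiable objective over a window of poses plus the map — is the same.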