BEV-SLD, a self-supervised LiDAR global localization method, learns scene-specific landmarks from bird's-eye-view (BEV) images. By aligning learnable global landmark coordinates with per-frame heatmaps through a consistency loss, the method produces landmark detections that remain consistent across frames. Experiments across diverse environments demonstrate robust localization and strong performance compared to existing state-of-the-art methods.
Self-supervision unlocks robust LiDAR global localization by learning scene-specific landmarks from BEV images, outperforming scene-agnostic methods.
We present BEV-SLD, a LiDAR global localization method building on the Scene Landmark Detection (SLD) concept. Unlike scene-agnostic pipelines, our self-supervised approach leverages bird's-eye-view (BEV) images to discover scene-specific patterns at a prescribed spatial density and treat them as landmarks. A consistency loss aligns learnable global landmark coordinates with per-frame heatmaps, yielding consistent landmark detections across the scene. Across campus, industrial, and forest environments, BEV-SLD delivers robust localization and achieves strong performance compared to state-of-the-art methods.
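The core idea of aligning learnable global landmark coordinates with per-frame heatmaps can be sketched with a toy consistency loss. This is a minimal illustration, not the authors' actual formulation: the `soft_argmax` detector, the 2-D rigid pose `(pose_R, pose_t)`, and the squared-error form of `consistency_loss` are all assumptions made for the sketch.

```python
import numpy as np

def soft_argmax(heatmap):
    """Differentiable 2-D soft-argmax: the expected (x, y) location
    under a softmax over the heatmap (hypothetical detector output)."""
    h, w = heatmap.shape
    p = np.exp(heatmap - heatmap.max())   # stable softmax
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return np.array([(p * xs).sum(), (p * ys).sum()])

def consistency_loss(global_xy, pose_R, pose_t, heatmaps):
    """Toy consistency loss (assumed form): mean squared error between
    global landmark coordinates projected into the current BEV frame
    and the soft-argmax detections from the per-frame heatmaps."""
    loss = 0.0
    for k, hm in enumerate(heatmaps):
        projected = pose_R @ global_xy[k] + pose_t  # global -> frame (2-D rigid)
        detected = soft_argmax(hm)
        loss += np.sum((projected - detected) ** 2)
    return loss / len(heatmaps)

# Usage: one landmark at BEV pixel (x=5, y=3), identity pose, and a
# heatmap sharply peaked at that pixel -> loss near zero.
hm = np.zeros((8, 8))
hm[3, 5] = 50.0
loss = consistency_loss(np.array([[5.0, 3.0]]), np.eye(2), np.zeros(2), [hm])
```

When the learnable global coordinates and the heatmap peaks agree under the frame pose, this loss vanishes, which is the consistency property the abstract describes; minimizing it jointly over coordinates and heatmaps is what would drive the two representations into agreement.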