This paper benchmarks visual odometry (VO) and visual SLAM (VSLAM) systems for mobile robot navigation in logistics environments, evaluating performance against ground truth from a Vicon motion capture system and a LiDAR-based SLAM reference. The study finds that a hybrid approach pairing the cuVSLAM front-end with a custom SLAM back-end yields the best mapping accuracy. The integrated cuVSLAM-based VO stack is validated through deployment on an NVIDIA Jetson platform.
A hybrid cuVSLAM-based visual SLAM system outperforms other VO/VSLAM approaches in mapping accuracy in real-world logistics environments.
This work presents a comprehensive benchmark evaluation of visual odometry (VO) and visual SLAM (VSLAM) systems for mobile robot navigation in real-world logistics environments. We compare multiple visual odometry approaches across controlled trajectories covering translational, rotational, and mixed motion patterns, as well as a large-scale production facility dataset spanning approximately 1.7 km. Performance is evaluated using Absolute Pose Error (APE) against ground truth from a Vicon motion capture system and a LiDAR-based SLAM reference. Our results show that a hybrid stack combining the cuVSLAM front-end with a custom SLAM back-end achieves the strongest mapping accuracy, motivating a deeper integration of cuVSLAM as the core VO component in our robotics stack. We further validate this integration by deploying and testing the cuVSLAM-based VO stack on an NVIDIA Jetson platform.
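To make the evaluation metric concrete, here is a minimal Python sketch of a translational APE computation of the kind the abstract describes, assuming time-synchronized N×3 position arrays and a standard Umeyama/Kabsch rigid alignment. The function names, toy data, and alignment choice are illustrative assumptions, not the paper's actual evaluation code; in practice, tools such as the evo package are commonly used for this.

```python
# Sketch: translational Absolute Pose Error (APE) RMSE between an estimated
# trajectory and ground truth, after a best-fit SE(3) alignment.
# Assumes the two trajectories are already time-synchronized Nx3 arrays.
import numpy as np

def align_rigid(est: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Align est to gt with the closed-form Umeyama/Kabsch rigid transform (no scale)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (est - mu_e).T @ (gt - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    return (R @ est.T).T + t

def ape_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """Root-mean-square translational APE after rigid alignment."""
    err = np.linalg.norm(align_rigid(est, gt) - gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

if __name__ == "__main__":
    # Toy check: a rotated and translated copy of a trajectory has ~zero APE.
    rng = np.random.default_rng(0)
    gt = np.cumsum(rng.normal(size=(200, 3)), axis=0)  # random-walk ground truth
    theta = 0.3
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
    est = (R @ gt.T).T + np.array([1.0, -2.0, 0.5])
    print(f"APE RMSE: {ape_rmse(est, gt):.6f} m")  # expected to be ~0
```

Aligning before measuring error matters here because VO estimates live in an arbitrary local frame; without the alignment step, APE would mostly reflect the initial pose offset rather than drift.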