R4Det is introduced to address challenges in 4D radar-camera fusion for 3D object detection, specifically improving depth estimation, temporal fusion robustness, and small object detection. The method incorporates a Panoramic Depth Fusion module for enhanced depth estimation, a Deformable Gated Temporal Fusion module independent of ego-pose, and an Instance-Guided Dynamic Refinement module leveraging 2D instance guidance. Experiments on TJ4DRadSet and VoD datasets demonstrate state-of-the-art 3D object detection performance.
Achieve state-of-the-art 3D object detection by fusing 4D radar and camera data without relying on accurate ego-pose estimation, a common bottleneck in autonomous driving.
The 4D radar-camera sensing configuration has gained increasing importance in autonomous driving. However, existing 3D object detection methods that fuse 4D radar and camera data face several challenges. First, their absolute depth estimation modules are not robust or accurate enough, leading to imprecise 3D localization. Second, the performance of their temporal fusion modules degrades dramatically, or fails entirely, when the ego vehicle's pose is missing or inaccurate. Third, sparse radar point clouds may contain no returns at all from the surfaces of some small objects; in such cases, detection must rely solely on visual unimodal priors. To address these limitations, we propose R4Det, which improves depth estimation quality via a Panoramic Depth Fusion module that enables mutual reinforcement between absolute and relative depth. For temporal fusion, we design a Deformable Gated Temporal Fusion module that does not rely on the ego vehicle's pose. In addition, we build an Instance-Guided Dynamic Refinement module that extracts semantic prototypes from 2D instance guidance. Experiments show that R4Det achieves state-of-the-art 3D object detection results on the TJ4DRadSet and VoD datasets.
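The abstract does not specify how the Deformable Gated Temporal Fusion module works internally. As one illustrative reading of pose-free "gated" temporal fusion, the sketch below blends the previous frame's feature map into the current one through a learned per-location gate, with no ego-pose warping. Everything here is an assumption for illustration: the function name, the 1x1-conv-style gate, and the random stand-in weights are hypothetical, and the deformable sampling step is omitted entirely.

```python
import numpy as np

def gated_temporal_fusion(curr_feat, prev_feat):
    """Hypothetical sketch: per-location gated blend of two frames' features.

    curr_feat, prev_feat: arrays of shape (C, H, W).
    A sigmoid gate (computed from both frames) decides, per channel and
    location, how much of the previous frame to mix in. No ego-pose
    warping is used; the paper's deformable sampling is not modeled.
    """
    rng = np.random.default_rng(0)
    c = curr_feat.shape[0]
    # Stand-in for a learned 1x1 convolution over the concatenated features.
    w = rng.standard_normal((c, 2 * c)) * 0.01
    stacked = np.concatenate([curr_feat, prev_feat], axis=0)  # (2C, H, W)
    logits = np.einsum('ck,khw->chw', w, stacked)              # (C, H, W)
    gate = 1.0 / (1.0 + np.exp(-logits))                       # sigmoid in (0, 1)
    # Convex combination of current and previous features.
    return gate * curr_feat + (1.0 - gate) * prev_feat

curr = np.ones((8, 4, 4))
prev = np.zeros((8, 4, 4))
fused = gated_temporal_fusion(curr, prev)
```

Because the gate is a sigmoid, the fused output is always a convex combination of the two inputs, so a degraded previous frame can be softly suppressed rather than hard-rejected.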