This paper addresses the lack of thermal imagery in robotics datasets by proposing ThermalDiffusion, a conditional diffusion model for RGB-to-thermal image translation. The model leverages self-attention to learn thermal properties of objects, enabling the synthesis of thermal images from existing RGB data. The authors demonstrate the potential for augmenting multi-modal datasets with synthetic thermal data, facilitating the adoption of thermal cameras in autonomous systems.
Solve the thermal data scarcity problem in robotics by hallucinating realistic thermal images from RGB using a conditional diffusion model.
Autonomous systems rely on sensors to perceive the environment around them, but cameras, LiDARs, and RADARs each have their own limitations. At night or in degraded environments such as fog, mist, or dust, thermal cameras can provide valuable information about the presence of objects of interest through their heat signatures. They make it easy to identify humans and vehicles, which are usually warmer than their surroundings. In this paper, we focus on the adoption of thermal cameras in robotics and automation, where the biggest hurdle is the lack of data. Several multi-modal datasets are available to drive robotics research in tasks such as scene segmentation, object detection, and depth estimation, which are cornerstones of autonomous systems; however, they are lacking in thermal imagery. Our paper proposes a solution: augmenting these datasets with synthetic thermal data to enable widespread and rapid adoption of thermal cameras. We explore the use of conditional diffusion models to convert existing RGB images to thermal images, using self-attention to learn the thermal properties of real-world objects.
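The abstract does not spell out the model internals, but the general recipe for a conditional diffusion image translator follows the standard DDPM setup: noise the target (thermal) image along a fixed schedule, and train a denoiser that also receives the paired RGB frame as conditioning. The sketch below assumes a linear noise schedule and an epsilon-prediction objective; the `denoiser` callable stands in for the paper's self-attention network, whose exact architecture is not given here.

```python
import numpy as np

# DDPM-style forward noising and training target, conditioned on RGB.
# The schedule (linear betas over T=1000 steps) is a common default,
# not a detail confirmed by the paper.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

def q_sample(x0, t, eps):
    """Noise a clean thermal image x0 to step t:
    x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    a = alphas_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps

def training_loss(thermal, rgb, t, denoiser, rng):
    """One training example: the denoiser sees the noised thermal image,
    the paired RGB frame (the conditioning signal), and the timestep,
    and must predict the noise eps that was added."""
    eps = rng.standard_normal(thermal.shape)
    x_t = q_sample(thermal, t, eps)
    pred = denoiser(x_t, rgb, t)           # RGB conditioning enters here
    return np.mean((pred - eps) ** 2)      # epsilon-prediction MSE

# Toy usage with a placeholder denoiser that predicts zero noise.
rng = np.random.default_rng(0)
thermal = rng.standard_normal((8, 8))      # stand-in 8x8 thermal image
rgb = rng.standard_normal((8, 8, 3))       # stand-in paired RGB image
loss = training_loss(thermal, rgb, t=500,
                     denoiser=lambda x, c, t: np.zeros_like(x), rng=rng)
```

At inference, the same denoiser is run in reverse from pure noise, with the RGB image held fixed as conditioning at every step, yielding the synthetic thermal counterpart.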