The paper introduces a method for identifying rare failure modes (RFMs) in autonomous vehicle perception systems by generating adversarial examples with a custom diffusion model. The authors invert segmentation masks of objects of interest, combine the inverted masks with text prompts, and use Stable Diffusion inpainting guided by adversarial noise optimization to create images designed to evade object detection. The generated images, together with natural language descriptions of the RFMs they expose, can then be used to improve the robustness of AV systems.
Uncover hidden weaknesses in your AV perception stack by using adversarial diffusion models to automatically generate and describe the long tail of failure scenarios.
Autonomous Vehicles (AVs) rely on artificial intelligence (AI) to accurately detect objects and interpret their surroundings. However, even when trained on millions of miles of real-world data, AVs are often unable to detect rare failure modes (RFMs). The problem of RFMs is commonly referred to as the “long-tail challenge” because the data distribution contains many instances that occur only rarely. In this paper, we present a novel approach that utilizes advanced generative and explainable AI techniques to aid in understanding RFMs. Our methods can be used to enhance the robustness and reliability of AVs when combined with both downstream model training and testing. We extract segmentation masks for objects of interest (e.g., cars) and invert them to create environmental masks. These masks, combined with carefully crafted text prompts, are fed into a custom diffusion model. We leverage the Stable Diffusion inpainting model, guided by adversarial noise optimization, to generate images containing diverse environments designed to evade object detection models and expose vulnerabilities in AI systems. Finally, we produce natural language descriptions of the generated RFMs that can guide developers and policymakers to improve the safety and reliability of AV systems.
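The mask-inversion step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `invert_mask` is a hypothetical helper that flips a binary object mask into an environment mask, so that inpainting is confined to the background while the object pixels stay fixed.

```python
import numpy as np

def invert_mask(object_mask: np.ndarray) -> np.ndarray:
    """Turn a binary object mask into an environment mask:
    object pixels become 0 (kept as-is), all other pixels
    become 1 (free for the inpainting model to repaint)."""
    return 1 - (object_mask > 0).astype(np.uint8)

# Toy 4x4 mask with a 2x2 "car" in the top-left corner.
obj = np.zeros((4, 4), dtype=np.uint8)
obj[:2, :2] = 1
env = invert_mask(obj)  # 0 where the car is, 1 elsewhere
```

In a diffusion-inpainting setup such as the one the abstract describes, a mask like `env` (converted to an image) would typically be supplied as the inpainting mask alongside the source image and a text prompt, so only the environment around the preserved object is regenerated.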