CausalVAD addresses causal confusion in end-to-end autonomous driving models by introducing a de-confounding training framework based on causal intervention. The core of CausalVAD is the sparse causal intervention scheme (SCIS), which uses a dictionary of prototypes representing latent driving contexts to intervene on the model's sparse vectorized queries. Experiments on nuScenes demonstrate that CausalVAD achieves state-of-the-art planning accuracy, safety, and robustness against data bias and noisy scenarios.
Autonomous driving models can be made significantly more robust and safe by explicitly de-confounding their training via causal intervention, eliminating reliance on spurious correlations.
Planning-oriented end-to-end driving models show great promise, yet they fundamentally learn statistical correlations rather than true causal relationships. This vulnerability leads to causal confusion, where models exploit dataset biases as shortcuts, critically harming their reliability and safety in complex scenarios. To address this, we introduce CausalVAD, a de-confounding training framework that leverages causal intervention. At its core, we design the sparse causal intervention scheme (SCIS), a lightweight, plug-and-play module that instantiates backdoor adjustment in neural networks. SCIS constructs a dictionary of prototypes representing latent driving contexts, then uses this dictionary to intervene on the model's sparse vectorized queries. This step actively severs spurious associations induced by confounders, yielding de-confounded representations for downstream tasks. Extensive experiments on the nuScenes benchmark show that CausalVAD achieves state-of-the-art planning accuracy and safety. Furthermore, our method demonstrates superior robustness against both data bias and noisy scenarios configured to induce causal confusion.
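To make the intervention step concrete, here is a minimal, hypothetical sketch of how a prototype-dictionary intervention in the spirit of SCIS could look. It approximates backdoor adjustment, E[Y|do(x)] = Σ_c P(c)·E[Y|x, c], by attending from each sparse query over a learned dictionary of context prototypes and mixing in the prior-weighted context. All names (`SCISSketch`, `intervene`, the uniform prior, the additive fusion) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class SCISSketch:
    """Hypothetical sketch of a sparse causal intervention:
    backdoor adjustment approximated by averaging over a
    dictionary of K confounder prototypes weighted by a prior P(c)."""

    def __init__(self, num_prototypes=8, dim=16, seed=0):
        rng = np.random.default_rng(seed)
        # Dictionary of latent driving-context prototypes (learned in practice).
        self.prototypes = rng.normal(size=(num_prototypes, dim))
        # Prior over contexts; uniform here for simplicity.
        self.prior = np.full(num_prototypes, 1.0 / num_prototypes)

    def intervene(self, queries):
        # Attention of each query over the prototype dictionary: (N, K).
        attn = softmax(queries @ self.prototypes.T, axis=-1)
        # Backdoor-style mixture: prior-weighted, context-conditioned term.
        context = (attn * self.prior) @ self.prototypes  # (N, dim)
        # Fuse the de-confounding context back into the queries.
        return queries + context

if __name__ == "__main__":
    scis = SCISSketch()
    q = np.random.default_rng(1).normal(size=(5, 16))  # 5 sparse queries
    out = scis.intervene(q)
    print(out.shape)  # same shape as the input queries
```

In a real end-to-end stack, the prototypes would be trained jointly with the planner and the fused queries fed to downstream prediction and planning heads; the additive fusion shown here is one simple design choice among several.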