This paper investigates the zero-shot cross-city generalization capabilities of end-to-end autonomous driving models, comparing supervised ImageNet-pretrained backbones against self-supervised alternatives such as I-JEPA, DINOv2, and MAE. The study reveals a significant generalization gap when models with supervised backbones are transferred to unseen cities, especially between cities with different driving-side conventions. Self-supervised representation learning demonstrably reduces this gap, improving performance in both open-loop (nuScenes) and closed-loop (NAVSIM) evaluations and suggesting its importance for robust cross-city planning.
End-to-end driving models trained with standard supervised pre-training can fail catastrophically when deployed in new cities, but self-supervised pre-training offers a surprisingly effective fix.
End-to-end autonomous driving models are typically trained on multi-city datasets using supervised ImageNet-pretrained backbones, yet their ability to generalize to unseen cities remains largely unexamined. When training and evaluation data are geographically mixed, models may implicitly rely on city-specific cues, masking failure modes that would occur under real domain shifts in new locations. In this work we investigate zero-shot cross-city generalization in end-to-end trajectory planning and ask whether self-supervised visual representations improve transfer across cities. We conduct a comprehensive study by integrating self-supervised backbones (I-JEPA, DINOv2, and MAE) into planning frameworks, evaluating performance under strict geographic splits on nuScenes in the open-loop setting and on NAVSIM under the closed-loop evaluation protocol. Our experiments reveal a substantial generalization gap when models with traditional supervised backbones are transferred across cities with different road topologies and driving conventions, particularly from right-side to left-side driving environments. Self-supervised representation learning reduces this gap. In open-loop evaluation, a supervised backbone exhibits severe metric inflation when transferring from Boston to Singapore (L2 displacement ratio 9.77x, collision ratio 19.43x), whereas domain-specific self-supervised pretraining reduces these ratios to 1.20x and 0.75x, respectively. In closed-loop evaluation, self-supervised pretraining improves PDMS by up to 4 percent across all single-city training settings. These results show that representation learning strongly influences the robustness of cross-city planning and establish zero-shot geographic transfer as a necessary test for evaluating end-to-end autonomous driving systems.
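The transfer-ratio metrics quoted above (e.g. an L2 displacement ratio of 9.77x) can be illustrated with a minimal sketch: compute a planning error on the training city and on the unseen city, then take their ratio, where a value near 1.0 indicates good generalization. The function names, array shapes, and toy error magnitudes below are illustrative assumptions, not the paper's actual evaluation code.

```python
import numpy as np

def l2_displacement(pred, gt):
    """Mean L2 distance between predicted and ground-truth trajectory
    waypoints, averaged over samples and the planning horizon.
    (Illustrative metric, not the exact benchmark implementation.)"""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def transfer_ratio(metric_cross_city, metric_in_city):
    """Cross-city / in-city metric ratio: ~1.0 means the model
    generalizes; large values indicate a transfer gap."""
    return metric_cross_city / metric_in_city

# Toy trajectories: (num_samples, horizon, 2) waypoints in metres.
gt = np.zeros((4, 6, 2))
pred_in = gt + 0.5     # hypothetical small error on the training city
pred_cross = gt + 2.0  # hypothetical larger error on the unseen city

err_in = l2_displacement(pred_in, gt)
err_cross = l2_displacement(pred_cross, gt)
print(f"{transfer_ratio(err_cross, err_in):.2f}x")  # prints "4.00x"
```

Under this toy setup, a ratio of 4.00x would signal a substantial cross-city gap, analogous to the 9.77x inflation the paper reports for the supervised backbone on Boston-to-Singapore transfer.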