AutoWeather4D is introduced as a feed-forward 3D-aware weather editing framework that decouples geometry and illumination for autonomous driving video. It uses a G-buffer Dual-pass Editing mechanism, with a Geometry Pass for surface-anchored physical interactions and a Light Pass for analytical light transport and dynamic 3D relighting. Experiments show AutoWeather4D achieves photorealism and structural consistency comparable to generative baselines, while providing fine-grained parametric physical control.
Achieve photorealistic and structurally consistent weather editing for autonomous driving videos without the massive datasets typically required by generative models.
Generative video models have significantly advanced the photorealistic synthesis of adverse weather for autonomous driving; however, they consistently demand massive datasets to learn rare weather scenarios. While 3D-aware editing methods alleviate these data constraints by augmenting existing video footage, they are fundamentally bottlenecked by costly per-scene optimization and suffer from an inherent entanglement of geometry and illumination. In this work, we introduce AutoWeather4D, a feed-forward 3D-aware weather editing framework designed to explicitly decouple geometry and illumination. At the core of our approach is a G-buffer Dual-pass Editing mechanism. The Geometry Pass leverages explicit structural foundations to enable surface-anchored physical interactions, while the Light Pass analytically resolves light transport, accumulating the contributions of local illuminants into the global illumination to enable dynamic 3D local relighting. Extensive experiments demonstrate that AutoWeather4D achieves photorealism and structural consistency comparable to those of generative baselines while enabling fine-grained parametric physical control, serving as a practical data engine for autonomous driving.
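The dual-pass idea from the abstract can be illustrated with a minimal numpy sketch. This is an illustrative assumption of how a G-buffer dual-pass edit might look, not the paper's actual implementation: the geometry pass anchors a physical effect (snow accumulation) to upward-facing surfaces via the G-buffer normals, and the light pass analytically accumulates a local illuminant (a hypothetical streetlight) into the global Lambertian illumination. All names, light models, and parameters below are assumptions.

```python
import numpy as np

# Toy per-pixel G-buffer: surface normals, albedo, depth (all values assumed).
H, W = 4, 6
rng = np.random.default_rng(0)
normals = np.zeros((H, W, 3))
normals[..., 1] = 1.0                    # assume mostly upward-facing surfaces
albedo = np.full((H, W, 3), 0.5)         # mid-grey base color
depth = rng.uniform(5.0, 50.0, (H, W))   # distance from camera, in meters

def geometry_pass(albedo, normals, snow_amount=0.8):
    """Surface-anchored effect: blend a snow albedo onto upward-facing pixels."""
    up_facing = np.clip(normals[..., 1], 0.0, 1.0)[..., None]  # cosine to "up"
    snow_albedo = np.array([0.9, 0.9, 0.95])
    w = snow_amount * up_facing
    return (1.0 - w) * albedo + w * snow_albedo

def light_pass(albedo, normals, depth, sun_dir, sun_rgb, local_lights):
    """Analytical Lambertian transport: sum local illuminants into global light."""
    n_dot_l = np.clip(normals @ sun_dir, 0.0, None)[..., None]
    radiance = albedo * sun_rgb * n_dot_l                  # global illumination
    for light_depth, rgb, power in local_lights:           # local illuminants
        # Crude inverse-square falloff, using depth as a stand-in for distance.
        falloff = power / (1.0 + (depth - light_depth) ** 2)[..., None]
        radiance += albedo * rgb * falloff
    return np.clip(radiance, 0.0, 1.0)

sun_dir = np.array([0.0, 1.0, 0.0])      # overcast light from above
sun_rgb = np.array([0.3, 0.3, 0.35])     # dim, bluish winter sky
streetlight = [(10.0, np.array([1.0, 0.8, 0.5]), 20.0)]  # (depth, color, power)

snowy_albedo = geometry_pass(albedo, normals)
frame = light_pass(snowy_albedo, normals, depth, sun_dir, sun_rgb, streetlight)
print(frame.shape)  # (4, 6, 3)
```

Because both passes are closed-form functions of the G-buffer, the weather edit stays parametric: changing `snow_amount` or the streetlight tuple re-renders the effect without any per-scene optimization, which is the kind of fine-grained control the abstract claims.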