WildSplatter is a feed-forward 3D Gaussian Splatting (3DGS) model trained on unconstrained image collections to jointly learn 3D Gaussians and appearance embeddings. By conditioning Gaussian colors on per-image appearance embeddings, the model achieves flexible appearance modulation under varying lighting conditions. The method enables fast (under one second) 3D Gaussian reconstruction from sparse views and outperforms existing pose-free 3DGS methods on real-world datasets.
Unlock real-time, high-quality 3D scene reconstruction from unconstrained images with varying lighting, thanks to a feed-forward Gaussian Splatting model that learns appearance embeddings.
We propose WildSplatter, a feed-forward 3D Gaussian Splatting (3DGS) model for unconstrained images with unknown camera parameters and varying lighting conditions. 3DGS is an effective scene representation that enables high-quality, real-time rendering; however, it typically requires iterative optimization and multi-view images captured under consistent lighting with known camera parameters. WildSplatter is trained on unconstrained photo collections and jointly learns 3D Gaussians and appearance embeddings conditioned on input images. This design enables flexible modulation of Gaussian colors to represent significant variations in lighting and appearance. Our method reconstructs 3D Gaussians from sparse input views in under one second, while also enabling appearance control under diverse lighting conditions. Experimental results demonstrate that our approach outperforms existing pose-free 3DGS methods on challenging real-world datasets with varying illumination.
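The abstract's core mechanism, modulating each Gaussian's color with a per-image appearance embedding, can be sketched in a few lines. This is a minimal illustrative toy, not WildSplatter's actual architecture: the embedding dimension, the single linear-layer color head, and all variable names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_GAUSSIANS = 4   # toy number of 3D Gaussians
EMBED_DIM = 8     # appearance-embedding size (assumed, not from the paper)
COLOR_DIM = 3     # RGB

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def modulate_colors(base_colors, appearance_embedding, W, b):
    """Predict per-Gaussian RGB from base colors plus a shared per-image
    appearance embedding, via one linear layer and a sigmoid.
    A stand-in for a learned color head; in the paper the embedding is
    learned jointly with the Gaussians from unconstrained photos."""
    # Broadcast the image-level embedding to every Gaussian.
    e = np.broadcast_to(appearance_embedding,
                        (base_colors.shape[0], EMBED_DIM))
    features = np.concatenate([base_colors, e], axis=1)  # (N, 3 + EMBED_DIM)
    return sigmoid(features @ W + b)                     # (N, 3), in [0, 1]

# Toy parameters; in practice these would be learned end-to-end.
W = rng.normal(scale=0.1, size=(COLOR_DIM + EMBED_DIM, COLOR_DIM))
b = np.zeros(COLOR_DIM)

base = rng.uniform(size=(N_GAUSSIANS, COLOR_DIM))
embed_day = rng.normal(size=EMBED_DIM)    # e.g. embedding of a bright image
embed_dusk = rng.normal(size=EMBED_DIM)   # e.g. embedding of a dim image

colors_day = modulate_colors(base, embed_day, W, b)
colors_dusk = modulate_colors(base, embed_dusk, W, b)

print(colors_day.shape)  # (4, 3): same Gaussians, appearance-specific colors
```

The key property this illustrates is that the same set of Gaussians renders with different colors under different appearance embeddings, which is what lets a single reconstruction represent a scene under varying illumination.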