DenoiseSplat, a feed-forward 3D Gaussian splatting method, is introduced to address the degradation of NeRF and 3D Gaussian Splatting pipelines when processing noisy multi-view images. The method is trained end-to-end using only clean 2D renderings as supervision, eliminating the need for 3D ground truth. Experiments on a newly created large-scale, scene-consistent noisy-clean benchmark based on RE10K demonstrate that DenoiseSplat outperforms existing methods like vanilla MVSplat and a two-stage baseline (IDF + MVSplat) across various noise types and levels.
Robust 3D scene reconstruction from noisy images is now possible without 3D ground truth, thanks to a feed-forward Gaussian splatting approach trained solely on clean 2D renderings.
3D scene reconstruction and novel-view synthesis are fundamental for VR, robotics, and content creation. However, most NeRF and 3D Gaussian Splatting pipelines assume clean inputs and degrade under real-world noise and artifacts. We therefore propose DenoiseSplat, a feed-forward 3D Gaussian splatting method for noisy multi-view images. We build a large-scale, scene-consistent noisy–clean benchmark on RE10K by injecting Gaussian, Poisson, speckle, and salt-and-pepper noise at controlled intensities. Using a lightweight MVSplat-style feed-forward backbone, we train end-to-end with only clean 2D renderings as supervision and no 3D ground truth. On noisy RE10K, DenoiseSplat outperforms vanilla MVSplat and a strong two-stage baseline (IDF + MVSplat) in PSNR, SSIM, and LPIPS across noise types and levels.
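The four noise models used to build the benchmark can be sketched as follows. This is a minimal illustration, not the paper's benchmark code: the function name, parameter names (`sigma`, `amount`, `peak`), and default intensities are assumptions chosen for clarity.

```python
import numpy as np

def add_noise(img, kind="gaussian", sigma=0.1, amount=0.05, peak=100.0):
    """Inject one noise type into an image with values in [0, 1].

    Hypothetical helper; parameter names and defaults are illustrative,
    not the benchmark's actual settings.
    """
    rng = np.random.default_rng()
    if kind == "gaussian":        # additive zero-mean Gaussian noise
        out = img + rng.normal(0.0, sigma, img.shape)
    elif kind == "poisson":       # shot noise; `peak` controls photon count
        out = rng.poisson(img * peak) / peak
    elif kind == "speckle":       # multiplicative (signal-dependent) noise
        out = img * (1.0 + rng.normal(0.0, sigma, img.shape))
    elif kind == "salt_pepper":   # impulse noise: random pixels to 0 or 1
        out = img.copy()
        mask = rng.random(img.shape)
        out[mask < amount / 2] = 0.0
        out[mask > 1.0 - amount / 2] = 1.0
    else:
        raise ValueError(f"unknown noise type: {kind}")
    return np.clip(out, 0.0, 1.0)
```

Applying each corruption to the same clean frames of a scene (with fixed intensity per scene) keeps the noisy and clean views pixel-aligned and scene-consistent, which is what allows clean 2D renderings to serve as supervision.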