UniRecGen unifies feed-forward reconstruction and diffusion-based generation for sparse-view 3D modeling by aligning both models within a shared canonical space. A reconstruction module provides canonical geometric anchors, while a diffusion generator leverages latent-augmented conditioning to refine and complete the geometric structure. Experiments show UniRecGen achieves superior fidelity and robustness compared to existing methods in creating complete and consistent 3D models from sparse observations.
Achieve both fidelity and plausibility in 3D reconstruction by unifying feed-forward reconstruction with diffusion-based generation in a single cooperative system.
Sparse-view 3D modeling faces a fundamental tension between reconstruction fidelity and generative plausibility. While feed-forward reconstruction excels in efficiency and input alignment, it often lacks the global priors needed for structural completeness. Conversely, diffusion-based generation provides rich geometric detail but struggles with multi-view consistency. We present UniRecGen, a unified framework that integrates these two paradigms into a single cooperative system. To overcome inherent conflicts in coordinate spaces, 3D representations, and training objectives, we align both models within a shared canonical space. We employ disentangled cooperative learning, which maintains stable training while enabling seamless collaboration during inference. Specifically, the reconstruction module is adapted to provide canonical geometric anchors, while the diffusion generator leverages latent-augmented conditioning to refine and complete the geometric structure. Experimental results demonstrate that UniRecGen achieves superior fidelity and robustness, outperforming existing methods in creating complete and consistent 3D models from sparse observations. Code is available at https://github.com/zsh523/UniRecGen.
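The cooperative inference flow described above — a reconstruction module supplying canonical geometric anchors, then a diffusion generator refining them under latent-augmented conditioning — can be sketched as follows. This is a minimal toy illustration based only on the abstract: every function name, the point-set representation, and the simplified denoising loop are assumptions for illustration, not the authors' actual interfaces (see the linked repository for those).

```python
import numpy as np

def reconstruct_anchors(views, n_points=256, seed=0):
    """Stand-in for the feed-forward reconstruction module: maps sparse
    views to geometric anchors in a shared canonical space [-1, 1]^3.
    (A real module would regress anchors from image features.)"""
    rng = np.random.default_rng(seed)
    return rng.uniform(-1.0, 1.0, size=(n_points, 3))

def diffusion_refine(anchors, latent_cond, n_steps=10):
    """Toy diffusion-style refinement with latent-augmented conditioning:
    starts from a noised copy of the anchors and iteratively pulls samples
    toward an anchor-plus-latent target, with a shrinking step schedule."""
    x = anchors + 0.5 * np.random.default_rng(1).normal(size=anchors.shape)
    target = anchors + 0.05 * latent_cond  # conditioning shifts the target
    for t in range(n_steps, 0, -1):
        step = t / n_steps  # mimics a noise schedule: small moves early, large late
        x = x + (1.0 - step) * (target - x)
    return x

def unirecgen_infer(views):
    """Cooperative inference: reconstruction provides canonical anchors,
    the generator completes and refines them in the same canonical space."""
    anchors = reconstruct_anchors(views)
    latent_cond = np.tanh(anchors.mean(axis=0))  # stand-in latent code
    return diffusion_refine(anchors, latent_cond)

points = unirecgen_infer(views=["view_0.png", "view_1.png"])
print(points.shape)  # (256, 3)
```

The key idea the sketch preserves is that both stages operate on the *same* canonical coordinates, so the generator's refinements stay aligned with the reconstruction's anchors rather than fighting a coordinate-frame mismatch.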