Know3D leverages vision-language models (VLMs) to guide the generation of 3D assets, specifically controlling back-view generation, which is typically stochastic. A VLM-diffusion model injects semantic knowledge into the 3D generation process, allowing language-based control over unobserved regions. Experiments demonstrate that this approach enables semantically controllable back-view generation, improving alignment with user intent and producing more plausible geometry.
Forget random back-view hallucinations: Know3D lets you *prompt* the unseen side of 3D models using language, opening the door to controllable 3D asset creation.
Recent advances in 3D generation have improved the fidelity and geometric detail of synthesized 3D assets. However, due to the inherent ambiguity of single-view observations and the lack of robust global structural priors caused by limited 3D training data, the unseen regions generated by existing models are often stochastic and difficult to control, sometimes failing to align with user intentions or producing implausible geometry. In this paper, we propose Know3D, a novel framework that incorporates rich knowledge from multimodal large language models into the 3D generative process via latent hidden-state injection, enabling language-controllable generation of the back view of 3D assets. We utilize a VLM-diffusion-based model in which the VLM is responsible for semantic understanding and guidance, while the diffusion model acts as a bridge that transfers semantic knowledge from the VLM to the 3D generation model. In this way, we bridge the gap between abstract textual instructions and the geometric reconstruction of unobserved regions, transforming traditionally stochastic back-view hallucination into a semantically controllable process and demonstrating a promising direction for future 3D generation models.
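The abstract names the core mechanism, VLM hidden states injected into a diffusion bridge, but this page gives no implementation details. As a rough illustration only, the sketch below shows one common way such latent hidden-state injection could be wired up, via cross-attention in PyTorch; every module name, signature, and dimension here (`BackViewInjector`, `vlm_dim=4096`, 77 instruction tokens) is a hypothetical placeholder, not the paper's actual code.

```python
# Hypothetical sketch of latent hidden-state injection, NOT Know3D's real
# implementation: diffusion latents cross-attend to projected VLM hidden
# states, so a back-view text instruction can steer unseen-region generation.
import torch
import torch.nn as nn


class BackViewInjector(nn.Module):
    """Projects VLM hidden states into the diffusion latent width and
    injects them via cross-attention (assumed mechanism)."""

    def __init__(self, vlm_dim: int, latent_dim: int, num_heads: int = 8):
        super().__init__()
        # Map VLM hidden states into the diffusion model's latent width.
        self.proj = nn.Linear(vlm_dim, latent_dim)
        # Cross-attention: diffusion latents query the projected VLM states.
        self.cross_attn = nn.MultiheadAttention(
            latent_dim, num_heads, batch_first=True
        )
        self.norm = nn.LayerNorm(latent_dim)

    def forward(
        self,
        latents: torch.Tensor,     # (B, N, latent_dim) diffusion latents
        vlm_hidden: torch.Tensor,  # (B, T, vlm_dim) VLM hidden states
    ) -> torch.Tensor:
        kv = self.proj(vlm_hidden)
        attended, _ = self.cross_attn(query=latents, key=kv, value=kv)
        # Residual injection keeps the pretrained diffusion prior intact.
        return self.norm(latents + attended)


# Toy usage: 77 instruction tokens conditioning 1024 diffusion latent tokens.
injector = BackViewInjector(vlm_dim=4096, latent_dim=768)
latents = torch.randn(2, 1024, 768)
vlm_hidden = torch.randn(2, 77, 4096)
conditioned = injector(latents, vlm_hidden)  # (2, 1024, 768)
```

The residual form is a deliberate guess: adding the attended VLM signal on top of the original latents lets the injected guidance bias, rather than overwrite, the diffusion model's learned geometric prior.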