SceneFoundry is a language-guided diffusion framework for generating apartment-scale 3D environments populated with functionally articulated furniture. An LLM controls floor layout generation from natural language prompts, and diffusion-based posterior sampling places articulated assets in the resulting rooms. Differentiable guidance functions ensure physical usability by regulating object quantity, preventing articulation collisions, and maintaining walkable space.
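To make the first stage concrete, here is a minimal sketch of an LLM turning a natural-language prompt into a structured floor layout. Everything in it (the JSON schema hint, the `query_llm` placeholder, the room fields) is a hypothetical illustration, not SceneFoundry's actual interface:

```python
# Hypothetical sketch of LLM-controlled floor layout generation.
# `query_llm` is a stand-in for a real chat-completion client; here it
# returns canned JSON so the example runs end to end.
import json

LAYOUT_SCHEMA_HINT = (
    'Return JSON: {"rooms": [{"name": str, '
    '"bbox": [x0, y0, x1, y1], "doors": [[x, y], ...]}]}'
)

def query_llm(prompt: str) -> str:
    # Placeholder: swap in an actual LLM API call here.
    return json.dumps({
        "rooms": [
            {"name": "living_room", "bbox": [0, 0, 6, 4], "doors": [[6, 2]]},
            {"name": "kitchen", "bbox": [6, 0, 9, 4], "doors": [[6, 2]]},
        ]
    })

def generate_floor_layout(description: str) -> dict:
    """Ask the LLM for a layout matching the description, then validate it."""
    raw = query_llm(
        f"Design an apartment floor plan for: {description}\n{LAYOUT_SCHEMA_HINT}"
    )
    layout = json.loads(raw)
    for room in layout["rooms"]:
        x0, y0, x1, y1 = room["bbox"]
        assert x1 > x0 and y1 > y0, f"degenerate room: {room['name']}"
    return layout

layout = generate_floor_layout("a two-room apartment with an open kitchen")
print([r["name"] for r in layout["rooms"]])  # ['living_room', 'kitchen']
```

Constraining the LLM to emit a fixed JSON schema keeps the layout machine-checkable before any expensive asset placement runs.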
Forget static scenes: SceneFoundry lets you generate endless interactive 3D worlds with articulated furniture, all from a simple language prompt.
The ability to automatically generate large-scale, interactive, and physically realistic 3D environments is crucial for advancing robotic learning and embodied intelligence. However, existing generative approaches often fail to capture the functional complexity of real-world interiors, particularly those containing articulated objects with movable parts essential for manipulation and navigation. This paper presents SceneFoundry, a language-guided diffusion framework that generates apartment-scale 3D worlds with functionally articulated furniture and semantically diverse layouts for robotic training. From natural language prompts, an LLM module controls floor layout generation, while diffusion-based posterior sampling efficiently populates the scene with articulated assets from large-scale 3D repositories. To ensure physical usability, SceneFoundry employs differentiable guidance functions to regulate object quantity, prevent articulation collisions, and maintain sufficient walkable space for robotic navigation. Extensive experiments demonstrate that our framework generates structurally valid, semantically coherent, and functionally interactive environments across diverse scene types and conditions, enabling scalable embodied AI research. Project page: https://anc891203.github.io/SceneFoundry-Demo/
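The guidance mechanism the abstract describes can be pictured as classifier-guidance-style posterior sampling: at each reverse diffusion step, gradients of differentiable penalties steer the layout toward the three stated constraints (object quantity, collision-free articulation, walkable space). The PyTorch toy below is a sketch under those assumptions; the denoiser, the update rule, and the penalty forms are hypothetical stand-ins, not the paper's implementation:

```python
# Toy sketch of diffusion posterior sampling with differentiable guidance.
# All functions and weights are illustrative assumptions, not SceneFoundry code.
import torch

def count_penalty(poses, target_count=8):
    # Hypothetical: soft "presence" score per object slot; penalize deviation
    # from a desired number of active objects.
    presence = torch.sigmoid(poses[..., -1])            # last channel = presence logit
    return (presence.sum(dim=-1) - target_count).pow(2).mean()

def collision_penalty(poses, min_dist=0.6):
    # Hypothetical: penalize object centers closer than min_dist, a proxy for
    # overlapping articulation sweep volumes.
    centers = poses[..., :2]                            # (B, N, 2) xy positions
    d = torch.cdist(centers, centers)                   # pairwise distances
    mask = ~torch.eye(d.shape[-1], dtype=torch.bool)    # drop self-distances
    return torch.relu(min_dist - d[:, mask]).pow(2).mean()

def walkability_penalty(poses, room_half=4.0):
    # Hypothetical: keep objects inside the room bounds so floor area stays free.
    centers = poses[..., :2]
    return torch.relu(centers.abs() - room_half).pow(2).mean()

@torch.no_grad()
def guided_sample(denoiser, steps=50, batch=4, n_obj=12, dim=3,
                  weights=(0.1, 1.0, 0.5)):
    """DDPM-like reverse loop: at every step, gradients of the differentiable
    guidance terms nudge the sample toward physically usable layouts."""
    x = torch.randn(batch, n_obj, dim)
    for t in reversed(range(steps)):
        with torch.enable_grad():                       # re-enable grad for guidance
            x_in = x.detach().requires_grad_(True)
            loss = (weights[0] * count_penalty(x_in)
                    + weights[1] * collision_penalty(x_in)
                    + weights[2] * walkability_penalty(x_in))
            grad = torch.autograd.grad(loss, x_in)[0]
        eps = denoiser(x, t)                            # predicted noise
        x = x - 0.02 * eps - 0.1 * grad                 # toy update: denoise + guide
        if t > 0:
            x = x + 0.01 * torch.randn_like(x)          # stochastic reverse step
    return x

# Usage with a stand-in denoiser (a trained model would go here):
layout = guided_sample(lambda x, t: 0.1 * x)
print(layout.shape)  # torch.Size([4, 12, 3])
```

Because the penalties are differentiable in the object poses, their gradients can be folded into every reverse step rather than being checked only after sampling, which is what lets a single pass produce layouts that already respect the usability constraints.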