HouseMind, a multimodal LLM, is introduced for architectural floor plan understanding, generation, and editing by representing floor plans as discrete room-instance tokens. This tokenization enables the model to bridge layouts and symbolic reasoning, leading to improved geometric validity and controllability in floor plan synthesis from text instructions. The model demonstrates superior performance in generating coherent and controllable layouts while maintaining efficiency and local deployability.
HouseMind, a multimodal LLM, represents floor plans as discrete room-instance tokens, achieving strong geometric validity and controllability in floor plan generation.
Architectural floor plan design demands joint reasoning over geometry, semantics, and spatial hierarchy, which remains a major challenge for current AI systems. Although recent diffusion and language models improve visual fidelity, they still struggle with coherent spatial reasoning and controllable generation. We present HouseMind, a multimodal large language model that unifies floor plan understanding, generation, and editing in one framework. We introduce discrete room-instance tokens to construct a unified vocabulary that bridges layouts and symbolic reasoning. With multimodal alignment and instruction tuning, the model synthesizes coherent, controllable layouts from text instructions. Experiments show that the framework achieves superior geometric validity and controllability while remaining efficient and locally deployable.
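To make the idea of discrete room-instance tokens concrete, here is a minimal sketch of one plausible encoding, assuming each room instance is described by a type label and a bounding box quantized onto a coarse grid. The token names (`<layout>`, `<room:...>`, `<coord:...>`), the grid resolution, and the room-type list are all illustrative assumptions, not HouseMind's actual vocabulary.

```python
# Hypothetical sketch (NOT HouseMind's actual scheme): each room instance
# becomes a short sequence of discrete tokens encoding its type and a
# bounding box quantized onto a GRID x GRID lattice, so a whole layout
# flattens into a token list an LLM can consume.

GRID = 32  # assumed quantization resolution

def room_to_tokens(room_type, bbox):
    """Encode one room instance as discrete tokens.

    bbox is (x0, y0, x1, y1) in normalized [0, 1] coordinates;
    each coordinate is quantized to an integer bin in [0, GRID-1].
    """
    quantized = [min(GRID - 1, int(v * GRID)) for v in bbox]
    return [f"<room:{room_type}>"] + [f"<coord:{q}>" for q in quantized]

def layout_to_tokens(rooms):
    """Flatten a list of (room_type, bbox) pairs into one token sequence."""
    tokens = ["<layout>"]
    for room_type, bbox in rooms:
        tokens += room_to_tokens(room_type, bbox)
    tokens.append("</layout>")
    return tokens

rooms = [("living",  (0.0, 0.0, 0.6, 0.5)),
         ("bedroom", (0.6, 0.0, 1.0, 0.5))]
print(layout_to_tokens(rooms))
```

A vocabulary built this way stays small (a handful of structural tokens, one token per room type, and GRID coordinate tokens), which is one reason a tokenized layout representation can plug into a standard LLM vocabulary alongside text.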