The authors introduce FlatLands, a dataset and benchmark for single-view bird's-eye view (BEV) floorplan completion, designed to support uncertainty-aware indoor mapping. The dataset comprises 270K observations from 17K real indoor scenes drawn from six existing datasets, each with aligned observation, visibility, validity, and ground-truth BEV maps. By benchmarking training-free, deterministic, ensemble, and stochastic generative models, they establish a rigorous testbed and instantiate an end-to-end monocular RGB-to-floormaps pipeline.
Forget photorealistic rendering; the next frontier in scene understanding is generating complete, traversable floorplans from a single egocentric image.
A single egocentric image typically captures only a small portion of the floor, yet a complete metric traversability map of the surroundings would better serve applications such as indoor navigation. We introduce FlatLands, a dataset and benchmark for single-view bird's-eye view (BEV) floor completion. The dataset contains 270,575 observations from 17,656 real metric indoor scenes drawn from six existing datasets, with aligned observation, visibility, validity, and ground-truth BEV maps, and the benchmark includes both in- and out-of-distribution evaluation protocols. We compare training-free approaches, deterministic models, ensembles, and stochastic generative models. Finally, we instantiate the task as an end-to-end monocular RGB-to-floormaps pipeline. FlatLands provides a rigorous testbed for uncertainty-aware indoor mapping and generative completion for embodied navigation.
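To make the data layout concrete, here is a minimal sketch of what one sample with four spatially aligned BEV rasters might look like. The field names, the 128x128 resolution, and the supervision logic are illustrative assumptions, not the dataset's actual API:

```python
import numpy as np

# Hypothetical FlatLands-style sample: four aligned BEV rasters.
# Names and the 128x128 map size are assumptions for illustration.
H = W = 128
rng = np.random.default_rng(0)

sample = {
    "observation": rng.random((H, W), dtype=np.float32),        # floor evidence from the input view
    "visibility":  rng.integers(0, 2, (H, W)).astype(bool),     # cells visible in the image
    "validity":    rng.integers(0, 2, (H, W)).astype(bool),     # cells with valid ground truth
    "ground_truth": rng.integers(0, 2, (H, W)).astype(bool),    # complete traversability map
}

# A completion model predicts traversability over the full map, but is
# supervised and evaluated only where ground truth is valid; the cells
# that are valid yet unobserved are the ones it must complete.
completion_cells = sample["validity"] & ~sample["visibility"]
print(completion_cells.shape)
```

Under this framing, the visibility and validity masks let a benchmark score observed and completed regions separately, which is what makes uncertainty-aware evaluation possible.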