We address the Plan2Scene task: converting a floorplan and a set of associated photos into a textured 3D mesh model of a residence. Our method 1) lifts a floorplan image to a 3D mesh, 2) synthesizes textures for observed surfaces based on the input photos, and 3) infers textures for unobserved surfaces using a graph neural network architecture. A central challenge is producing tileable textures for all architectural surfaces (floors, walls, and ceilings) from a sparse set of photos that only partially cover the house. To train and evaluate our system, we curate two texture datasets and extend a floorplan-and-photo dataset from prior work with rectified surface crops and additional annotations. Our system produces realistic 3D models that outperform baseline approaches, as judged by a holistic user study and quantified by a suite of texture quality metrics. We release all of our code, data, and trained models to the community.
Copyright is held by the author(s).
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Thesis advisor: Savva, Manolis