We present a data-driven method for synthesizing 3D indoor scenes by progressively inserting objects into an initial, possibly empty, scene. Instead of relying on a few hundred handcrafted 3D scenes, we take advantage of existing large-scale annotated RGB-D datasets to form the prior knowledge for our synthesis task. Our object insertion scheme follows a co-occurrence model and an arrangement model, both learned from the SUN RGB-D dataset. Compared to previous work on probabilistic learning for object placement, we make two contributions. First, we learn various classes of higher-order object-object relations, which effectively allow objects to be considered in semantically formed groups rather than individually. Second, while our algorithm inserts objects one at a time, it attains holistic plausibility of the entire current scene while offering controllability through progressive synthesis. We conducted several user studies to demonstrate the effectiveness of our synthesis method.
Copyright is held by the author.
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.