
Learning 3D scene synthesis from RGB-D images

Resource type
Thesis
Thesis type
M.Sc.
Date created
2016-06-14
Authors/Contributors
Sadeghipour Kermani, Zeinab
Abstract
We present a data-driven method for synthesizing 3D indoor scenes by progressively inserting objects into an initial, possibly empty, scene. Instead of relying on a few hundred handcrafted 3D scenes, we take advantage of existing large-scale annotated RGB-D datasets to form the prior knowledge for our synthesis task. Our object insertion scheme follows a co-occurrence model and an arrangement model, both learned from the SUN RGB-D dataset. Compared to previous works on probabilistic learning for object placement, we make two contributions. First, we learn various classes of higher-order object-object relations, which enable objects to be considered in semantically formed groups rather than individually. Second, while our algorithm inserts objects one at a time, it maintains holistic plausibility of the current scene as a whole while offering controllability through progressive synthesis. We conducted several user studies to demonstrate the effectiveness of our synthesis method.
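The abstract describes a progressive loop: a learned co-occurrence model proposes which object category to insert next, and a learned arrangement model scores where to place it. The minimal Python sketch below illustrates that loop only; the toy co-occurrence table, the distance-based arrangement scorer, and every name in it (COOCCURRENCE, cooccurrence_score, arrangement_score, insert_next_object) are illustrative assumptions, not the thesis's actual models or code.

```python
import random

# Toy pairwise co-occurrence affinities, standing in for statistics
# learned from annotated RGB-D scenes. These numbers are made up.
COOCCURRENCE = {
    ("bed", "nightstand"): 0.9,
    ("bed", "lamp"): 0.7,
    ("nightstand", "lamp"): 0.4,
    ("desk", "chair"): 0.95,
    ("desk", "lamp"): 0.6,
}

def cooccurrence_score(candidate, scene_objects):
    """Average pairwise affinity between a candidate category and the
    categories already present (symmetric lookup, 0.0 if unseen)."""
    if not scene_objects:
        return 1.0  # any category is admissible in an empty scene
    present = {o["category"] for o in scene_objects}
    scores = [COOCCURRENCE.get((a, candidate),
                               COOCCURRENCE.get((candidate, a), 0.0))
              for a in present]
    return sum(scores) / len(scores)

def arrangement_score(pos, scene_objects):
    """Placeholder arrangement model: prefer positions with some
    clearance from existing objects. A real model would score learned
    spatial relations instead of this distance heuristic."""
    if not scene_objects:
        return 1.0
    nearest = min(abs(pos[0] - o["pos"][0]) + abs(pos[1] - o["pos"][1])
                  for o in scene_objects)
    return min(nearest, 1.0)

def insert_next_object(scene_objects, categories, n_candidates=20):
    """One step of progressive synthesis: pick the category the
    co-occurrence model favors, then its best-scoring placement."""
    category = max(categories, key=lambda c: cooccurrence_score(c, scene_objects))
    positions = [(random.uniform(0.0, 5.0), random.uniform(0.0, 5.0))
                 for _ in range(n_candidates)]
    pos = max(positions, key=lambda p: arrangement_score(p, scene_objects))
    scene_objects.append({"category": category, "pos": pos})

scene = []  # start from an empty scene, as the abstract allows
for _ in range(3):
    insert_next_object(scene, ["bed", "nightstand", "lamp", "desk", "chair"])
print([o["category"] for o in scene])  # e.g. ['bed', 'nightstand', 'lamp']
```

In the actual method, both models are learned from SUN RGB-D annotations, and the arrangement model evaluates learned spatial relations rather than the simple clearance heuristic sketched here.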
Identifier
etd9641
Copyright statement
Copyright is held by the author.
Permissions
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Scholarly level
Supervisor or Senior Supervisor
Thesis advisor: Zhang, Hao
Thesis advisor: Tan, Ping
Member of collection
Download file
etd9641_ZSadeghipourKermani.pdf (12.55 MB)
