We analyze the robustness of recent 3D shape analysis methods to SO(3) rotations, a property fundamental to shape modeling. We ground our investigation in a dataset of 3D indoor scenes, where objects naturally occur in different orientations, providing an ideal and practical setting for our study. Specifically, we pose the problem as detecting rotated instances of the same object in a 3D scene, a formulation with applications in scene retrieval, scene compression, and scene editing. We begin by benchmarking different methods for feature extraction and classification. We then systematically contrast design choices across a variety of experimental settings, investigating how performance is affected by different rotation distributions, different degrees of partial observation of an object, and different difficulty levels of the input object pair. Our study reveals that deep-learning-based rotation-invariant methods are effective in relatively easy settings with easy-to-distinguish object pairs. However, their performance degrades significantly when the rotation difference between the input pair is large, when the degree of observation of the input objects is reduced, or when the difficulty of the input pair is increased. Finally, we connect the feature encodings used by rotation-invariant methods to the underlying 3D geometry that endows them with rotation invariance.
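As a minimal illustration of the kind of robustness being evaluated (a sketch for intuition, not the thesis's actual pipeline or any specific benchmarked method), the snippet below samples uniformly random SO(3) rotations and measures how much a point-cloud feature extractor changes under them; the `radial_feature` example is a hypothetical, trivially rotation-invariant descriptor used only to exercise the check.

```python
import numpy as np

def random_so3(rng):
    """Sample a uniformly random rotation matrix from SO(3)."""
    # QR of a Gaussian matrix yields a uniform orthogonal matrix once the
    # signs of R's diagonal are fixed; flip a column if det is -1.
    a = rng.standard_normal((3, 3))
    q, r = np.linalg.qr(a)
    q = q @ np.diag(np.sign(np.diag(r)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]  # reflect into SO(3)
    return q

def invariance_error(feature_fn, points, n_trials=10, seed=0):
    """Max feature discrepancy of feature_fn under random SO(3) rotations."""
    rng = np.random.default_rng(seed)
    base = feature_fn(points)
    return max(
        np.linalg.norm(feature_fn(points @ random_so3(rng).T) - base)
        for _ in range(n_trials)
    )

def radial_feature(points):
    """Hypothetical invariant descriptor: sorted distances to the centroid."""
    return np.sort(np.linalg.norm(points - points.mean(axis=0), axis=1))
```

For an exactly invariant descriptor the error is at floating-point noise level; for a learned, only approximately invariant encoder the same probe quantifies how far invariance degrades.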
Thesis advisor: Savva, Manolis