We address the task of predicting which parts of an object can open and how they move when they do so. The input is a single image of an object, and as output we detect which parts of the object can open, along with the motion parameters describing the articulation of each openable part. To tackle this task, we create two datasets of 3D objects: OPDSynth, based on synthetic objects, and OPDReal, based on RGBD reconstructions of real objects. We then design OPDRCNN and OPDFormer, neural architectures that detect openable parts and predict their motion parameters. Our experiments show that this is a challenging task, especially when generalizing across object categories and given the limited information available in a single image. Our architectures outperform baselines and prior work, especially for RGB image inputs.
Copyright is held by the author(s).
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Thesis advisor: Angel Xuan Chang