Adaptive appearance rendering

Resource type
Thesis
Thesis type
M.Sc.
Date created
2019-07-11
Authors/Contributors
Author: Deng, Ruizhi
Abstract
We propose an approach to generating images of people given a desired appearance and pose. Disentangled representations of pose and appearance are necessary to handle the compound variability in the generated images. We therefore develop an approach based on intermediate representations of pose and appearance: our pose-guided appearance rendering network first encodes the target's pose with an encoder-decoder neural network. The target's appearance is then encoded by learning adaptive appearance filters with a fully convolutional network. Finally, these filters are inserted into the encoder-decoder network to complete the rendering. We demonstrate that our model generates images and videos superior to those of state-of-the-art methods, and handles pose-guided appearance rendering in both image and video generation.
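The core idea of the abstract — a network that *predicts* convolution filters from an appearance encoding and then applies them to pose features — can be sketched in a few lines. This is only an illustrative toy, not the thesis implementation: the linear map `W`, the feature sizes, and the single 3×3 filter are all hypothetical stand-ins for the fully convolutional filter-prediction network and the encoder-decoder described above.

```python
import numpy as np

def predict_appearance_filter(appearance_vec, W, k=3):
    """Map an appearance feature vector to a k x k adaptive filter.

    In the thesis a fully convolutional network predicts these filters;
    here a single linear map W (hypothetical) stands in for it.
    """
    flat = W @ appearance_vec            # (k*k,) predicted filter weights
    return flat.reshape(k, k)

def apply_adaptive_filter(pose_feat, filt):
    """Slide the predicted filter over a 2D pose feature map
    (cross-correlation with 'same' zero-padding)."""
    k = filt.shape[0]
    pad = k // 2
    padded = np.pad(pose_feat, pad)
    h, w = pose_feat.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * filt)
    return out

rng = np.random.default_rng(0)
appearance_vec = rng.normal(size=16)      # toy appearance encoding
W = 0.1 * rng.normal(size=(9, 16))        # hypothetical filter-prediction weights
pose_feat = rng.normal(size=(8, 8))       # toy pose feature map

filt = predict_appearance_filter(appearance_vec, W)
rendered = apply_adaptive_filter(pose_feat, filt)
```

The design point the sketch illustrates is that the filter is a *function of the input appearance* rather than a fixed learned parameter, which is what lets one rendering network handle many different appearances.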
Identifier
etd20342
Copyright statement
Copyright is held by the author.
Permissions
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Scholarly level
Supervisor or Senior Supervisor
Thesis advisor: Mori, Greg
Member of collection
Model
Language
English
