The advancement of Neural Radiance Fields (NeRF) has greatly improved the visual quality of human avatars constructed from RGB inputs. However, existing works either require per-subject training and thus cannot generalize to novel subjects (i.e., subject-specific), or can only reproduce the human poses seen in the inputs and cannot render novel poses (i.e., non-animatable). To this end, we propose Subject-Agnostic and Animatable Neural Radiance Fields (SAgA-NeRF) for human avatar modeling from sparse-view videos, which generalizes to both novel subjects and novel poses. To handle the challenges posed by this task, we introduce two main techniques: pose-based input frame selection and a novel feature fusion on a parametric human body model. We compare SAgA-NeRF with existing subject-agnostic or animatable methods and show comparable results for both seen and novel poses. We also justify our design choices by showing that our proposed components outperform naive baselines.
Copyright is held by the author(s).
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Thesis advisor: Tan, Ping