Geometric Methods for Realistic Animation of Faces
Date: 2015
Author: Bermano, Amit Haim

Abstract
Realistic facial synthesis is one of the most fundamental problems in computer
graphics, and has been sought after for approximately four decades. It is desired
in a wide variety of fields, such as character animation for films and advertising,
computer games, video teleconferencing, user-interface agents and avatars, and
facial surgery planning. Humans, on the other hand, are experts in identifying
every detail and every regularity or variation in proportion from one individual
to the next. This, together with many other factors, makes the task of creating a realistic human face elusive; among these factors are complex surface details, spatially and temporally varying skin texture, and subtle emotions that are conveyed through even more subtle motions.
In this thesis, we present the most commonly practiced facial content creation process,
and contribute to the quality of each of its steps. The proposed algorithms
significantly increase the level of realism attained by each step and therefore substantially
reduce the amount of manual labor required for production-quality facial
content. The thesis contains three parts, each contributing to one step of the
facial content creation pipeline.
In the first part, we aim at greatly increasing the fidelity of facial performance captures,
and present the first method for detailed spatio-temporal reconstruction of
eyelids. Easily integrable with existing high-quality facial performance capture
approaches, this method generates a person-specific, time-varying eyelid reconstruction
with anatomically plausible deformations. Our approach is to combine
a geometric deformation model with image data, leveraging multi-view stereo,
optical flow, contour tracking and wrinkle detection from local skin appearance.
Our deformation model serves as a prior that enables reconstruction of eyelids
even under strong self-occlusions caused by rolling and folding skin as the eye
opens and closes.
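To make the structure of such a combination concrete, the following minimal Python sketch balances image-driven data terms against a deformation-model prior, in the spirit of the approach described above. The function names, weights, and toy parameterization are illustrative assumptions for this example, not the thesis' actual formulation.

    import numpy as np
    from scipy.optimize import minimize

    # Toy residuals standing in for the image-based data terms named above
    # (multi-view stereo, optical flow, contour tracking, wrinkle detection).
    def stereo_term(x):
        return np.sum((x - 1.0) ** 2)

    def flow_term(x):
        return np.sum((x - 0.8) ** 2)

    # The geometric deformation model acts as a prior, penalizing implausible
    # eyelid shapes so the fit stays stable even where the skin is self-occluded.
    def prior_term(x):
        return 0.1 * np.sum(np.diff(x) ** 2)

    def energy(x, w_stereo=1.0, w_flow=0.5):
        return w_stereo * stereo_term(x) + w_flow * flow_term(x) + prior_term(x)

    x0 = np.zeros(5)              # toy eyelid deformation parameters
    result = minimize(energy, x0)
    print(result.x)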
In the second part, we contribute to the authoring step of the creation process.
We present a method for adding fine-scale details and expressiveness to low-resolution, art-directed facial performances. Employing a high-resolution facial performance capture system, we augment artist-friendly content, such as content created manually using a rig, via marker-based capture, by fitting a morphable model to a video, or through Kinect-based reconstruction. From the high-fidelity captured data, our system encodes subtle spatial and temporal deformation details specific to that particular individual, and composes the relevant ones onto the desired input animation. The resulting animations exhibit compelling nuances and fine spatial details that match the captured performances, while preserving the artistic intent authored in the low-resolution input sequences, and outperform the current state of the art in example-based facial animation.
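As a rough illustration of layering captured detail on top of a coarse animation, the Python sketch below extracts a detail layer as the difference between a captured performance and a smoothed version of it, and then adds that layer to an art-directed animation. The running-average smoothing and the direct per-vertex transfer are naive placeholders chosen for brevity; they are assumptions, not the encoding and composition scheme used in the thesis.

    import numpy as np

    def smooth(frames, k=5):
        # Crude running average over k frames, used here only as a stand-in
        # for a coarse (low-resolution) version of the captured performance.
        out = np.empty_like(frames)
        for t in range(frames.shape[0]):
            lo, hi = max(0, t - k // 2), min(frames.shape[0], t + k // 2 + 1)
            out[t] = frames[lo:hi].mean(axis=0)
        return out

    def extract_details(captured):
        # Detail layer = captured performance minus its coarse component.
        return captured - smooth(captured)

    def augment(coarse_animation, detail_layer, weight=1.0):
        # Compose the person-specific detail layer onto the input animation.
        return coarse_animation + weight * detail_layer

    # Toy data: 100 frames x 500 vertices x 3 coordinates.
    captured = np.random.rand(100, 500, 3)
    coarse = np.random.rand(100, 500, 3)
    augmented = augment(coarse, extract_details(captured))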
The third part of the dissertation proposes to enrich digital facial content by
adding a significant sense of presence. Replacing classic 2D or 3D display techniques for digital content, we propose the first complete process for augmenting
deforming physical avatars using projector-based illumination. Physical
avatars have long been used to give physical presence to a character, in both entertainment and teleconferencing. Using a human-shaped display surface provides depth cues and gives multiple observers their own perspectives.
Such physical avatars, however, suffer from limited movement and expressiveness
due to mechanical constraints. Given an input animation, our system decomposes
the motion into low-frequency motion that can be physically reproduced by
a robotic head and high-frequency details that are added using projected shading.
The result of our system is a highly expressive physical avatar that features facial
details and motion otherwise unattainable due to physical constraints.
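The core idea of the motion split can be illustrated with a short Python sketch: a Gaussian low-pass filter along the time axis yields the low-frequency motion that a robotic head could physically reproduce, and the residual is the high-frequency detail to be added back as projected shading. The Gaussian filter and the flat "frames x degrees of freedom" layout are simplifying assumptions made for this example, not the decomposition used in the thesis.

    import numpy as np

    def lowpass(signal, sigma=3.0):
        # Gaussian low-pass filter applied along the time axis
        # of a (frames x degrees-of-freedom) motion signal.
        radius = int(3 * sigma)
        t = np.arange(-radius, radius + 1)
        kernel = np.exp(-t ** 2 / (2 * sigma ** 2))
        kernel /= kernel.sum()
        return np.apply_along_axis(
            lambda s: np.convolve(s, kernel, mode='same'), 0, signal)

    motion = np.random.rand(200, 10)          # toy animation: 200 frames, 10 DOFs
    robot_motion = lowpass(motion)            # low frequencies: reproduced by the robotic head
    projected_detail = motion - robot_motion  # high frequencies: added via projected shading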