dc.contributor.author | Chai, Jin-xiang | en_US |
dc.contributor.author | Xiao, Jing | en_US |
dc.contributor.author | Hodgins, Jessica | en_US |
dc.contributor.editor | D. Breen and M. Lin | en_US |
dc.date.accessioned | 2014-01-29T06:32:25Z | |
dc.date.available | 2014-01-29T06:32:25Z | |
dc.date.issued | 2003 | en_US |
dc.identifier.isbn | 1-58113-659-5 | en_US |
dc.identifier.issn | 1727-5288 | en_US |
dc.identifier.uri | http://dx.doi.org/10.2312/SCA03/193-206 | en_US |
dc.description.abstract | Controlling and animating the facial expression of a computer-generated 3D character is a difficult problem because the face has many degrees of freedom while most available input devices have few. In this paper, we show that a rich set of lifelike facial actions can be created from a preprocessed motion capture database and that a user can control these actions by acting out the desired motions in front of a video camera. We develop a real-time facial tracking system to extract a small set of animation control parameters from video. Because of the nature of video data, these parameters may be noisy and low-resolution and may contain errors. The system uses the knowledge embedded in motion capture data to translate these low-quality 2D animation control signals into high-quality 3D facial expressions. To adapt the synthesized motion to a new character model, we introduce an efficient expression retargeting technique whose run-time cost is constant, independent of the complexity of the character model. We demonstrate the power of this approach through two users who control and animate a wide range of 3D facial expressions of different avatars. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | Vision-based Control of 3D Facial Animation | en_US |
dc.description.seriesinformation | Symposium on Computer Animation | en_US |