dc.contributor.author | Miyawaki, Ryosuke | en_US |
dc.contributor.author | Perusquia-Hernandez, Monica | en_US |
dc.contributor.author | Isoyama, Naoya | en_US |
dc.contributor.author | Uchiyama, Hideaki | en_US |
dc.contributor.author | Kiyokawa, Kiyoshi | en_US |
dc.contributor.editor | Hideaki Uchiyama | en_US |
dc.contributor.editor | Jean-Marie Normand | en_US |
dc.date.accessioned | 2022-11-29T07:25:17Z | |
dc.date.available | 2022-11-29T07:25:17Z | |
dc.date.issued | 2022 | |
dc.identifier.isbn | 978-3-03868-179-3 | |
dc.identifier.issn | 1727-530X | |
dc.identifier.uri | https://doi.org/10.2312/egve.20221273 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egve20221273 | |
dc.description.abstract | Knowing the relationship between speech-related facial movement and speech is important for avatar animation. Accurate facial displays are necessary to convey perceptual speech characteristics fully. Recently, efforts have been made to infer the relationship between facial movement and speech with data-driven methodologies using computer vision. To this end, we propose to use blendshape-based facial movement tracking, because it can be easily translated to avatar movement. Furthermore, we present a protocol for audio-visual and behavioral data collection and a web-based tool that aids in collecting and synchronizing data. As a starting point, we provide a database of six Japanese participants reading emotion-related scripts at different volume levels. Using this methodology, we found a relationship between speech volume and facial movement around the nose, cheeks, and mouth, as well as head pitch. We hope that our protocols, web-based tool, and collected data will be useful for other scientists to derive models for avatar animation. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Human-centered computing → Visualization toolkits | |
dc.subject | Human-centered computing | |
dc.subject | Visualization toolkits | |
dc.title | A Data Collection Protocol, Tool and Analysis for the Mapping of Speech Volume to Avatar Facial Animation | en_US |
dc.description.seriesinformation | ICAT-EGVE 2022 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments | |
dc.description.sectionheaders | Interaction | |
dc.identifier.doi | 10.2312/egve.20221273 | |
dc.identifier.pages | 27-34 | |
dc.identifier.pages | 8 pages | |
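
The abstract describes a web-based tool that collects and synchronizes audio-visual data and maps speech volume to blendshape-based facial movement. As a minimal illustrative sketch only (not the authors' implementation, which is not reproduced in this record), the TypeScript below uses the standard Web Audio API to estimate per-frame speech volume from the microphone and maps it to a hypothetical "jawOpen" blendshape weight. The callback name, the RMS-based volume estimate, and the linear mapping constant are all assumptions for illustration.

// Sketch, assuming a browser environment with microphone access.
// RMS over a short window is a common proxy for speech volume; a model
// derived from collected data would replace the placeholder linear map.
async function trackVolumeToBlendshape(
  onWeight: (jawOpen: number) => void  // hypothetical avatar-update callback
): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;          // ~43 ms window at 48 kHz
  source.connect(analyser);

  const samples = new Float32Array(analyser.fftSize);

  const update = () => {
    analyser.getFloatTimeDomainData(samples);
    // Root-mean-square amplitude as a simple per-frame volume estimate.
    let sum = 0;
    for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
    const rms = Math.sqrt(sum / samples.length);
    // Placeholder linear volume-to-weight mapping, clamped to [0, 1].
    const jawOpen = Math.min(1, rms * 8);
    onWeight(jawOpen);
    requestAnimationFrame(update);
  };
  requestAnimationFrame(update);
}

In a real pipeline of the kind the paper proposes, the per-frame volume estimate and the tracked blendshape weights would be timestamped and logged together, so that a volume-to-movement model can later be fit from the synchronized data.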