dc.contributor.author | Parrilla, Eduardo | en_US |
dc.contributor.author | Ballester, Alfredo | en_US |
dc.contributor.author | Uriel, Jordi | en_US |
dc.contributor.author | Ruescas-Nicolau, Ana V. | en_US |
dc.contributor.author | Alemany, Sandra | en_US |
dc.contributor.editor | Pelechano, Nuria | en_US |
dc.contributor.editor | Pettré, Julien | en_US |
dc.date.accessioned | 2024-04-16T14:59:33Z | |
dc.date.available | 2024-04-16T14:59:33Z | |
dc.date.issued | 2024 | |
dc.identifier.isbn | 978-3-03868-241-7 | |
dc.identifier.uri | https://doi.org/10.2312/cl.20241048 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/cl20241048 | |
dc.description.abstract | The demand for virtual human characters in Extended Reality (XR) is growing across industries from entertainment to healthcare. Achieving natural behaviour in virtual environments requires digitizing real-world actions, a task that is typically laborious and requires specialized expertise. This paper presents an advanced approach for digitizing humans in motion, streamlining the process from capture to virtual character creation. By integrating the proposed hardware, algorithms, and data models, this approach automates the creation of high-resolution assets, reducing manual intervention and software dependencies. The resulting sequences of rigged and textured meshes ensure lifelike virtual characters with detailed facial expressions and hand gestures, surpassing the capabilities of static 3D scans animated via separate motion captures. Robust pose-dependent shape corrections and temporal consistency algorithms guarantee smooth, artifact-free body surfaces in motion, while the ability to export in standard formats enhances interoperability and opens further character development possibilities. Additionally, this method facilitates the efficient creation of large datasets for learning human models, thus representing a significant advancement in XR technologies and digital content creation across industries. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies → Shape modeling; Machine learning; Computer vision; Animation; Image manipulation; Graphics file formats; Mesh models; Mesh geometry models; Image processing; Computer vision problems; Reconstruction | |
dc.subject | Computing methodologies → Shape modeling | |
dc.subject | Machine learning | |
dc.subject | Computer vision | |
dc.subject | Animation | |
dc.subject | Image manipulation | |
dc.subject | Graphics file formats | |
dc.subject | Mesh models | |
dc.subject | Mesh geometry models | |
dc.subject | Image processing | |
dc.subject | Computer vision problems | |
dc.subject | Reconstruction | |
dc.title | Capture and Automatic Production of Digital Humans in Real Motion with a Temporal 3D Scanner | en_US |
dc.description.seriesinformation | CLIPE 2024 - Creating Lively Interactive Populated Environments | |
dc.description.sectionheaders | Mocap and Authoring Virtual Humans | |
dc.identifier.doi | 10.2312/cl.20241048 | |
dc.identifier.pages | 12 pages | |