Show simple item record

dc.contributor.author: Parrilla, Eduardo [en_US]
dc.contributor.author: Ballester, Alfredo [en_US]
dc.contributor.author: Uriel, Jordi [en_US]
dc.contributor.author: Ruescas-Nicolau, Ana V. [en_US]
dc.contributor.author: Alemany, Sandra [en_US]
dc.contributor.editor: Pelechano, Nuria [en_US]
dc.contributor.editor: Pettré, Julien [en_US]
dc.date.accessioned: 2024-04-16T14:59:33Z
dc.date.available: 2024-04-16T14:59:33Z
dc.date.issued: 2024
dc.identifier.isbn: 978-3-03868-241-7
dc.identifier.uri: https://doi.org/10.2312/cl.20241048
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/cl20241048
dc.description.abstract: The demand for virtual human characters in Extended Realities (XR) is growing across industries from entertainment to healthcare. Achieving natural behaviour in virtual environments requires digitizing real-world actions, a task that is typically laborious and requires specialized expertise. This paper presents an advanced approach for digitizing humans in motion, streamlining the process from capture to virtual character creation. By integrating the proposed hardware, algorithms, and data models, this approach automates the creation of high-resolution assets, reducing manual intervention and software dependencies. The resulting sequences of rigged and textured meshes ensure lifelike virtual characters with detailed facial expressions and hand gestures, surpassing the capabilities of static 3D scans animated via separate motion captures. Robust pose-dependent shape corrections and temporal consistency algorithms guarantee smooth, artifact-free body surfaces in motion, while the export capability in standard formats enhances interoperability and further character development possibilities. Additionally, this method facilitates the efficient creation of large datasets for learning human models, thus representing a significant advancement in XR technologies and digital content creation across industries. [en_US]
dc.publisher: The Eurographics Association [en_US]
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CCS Concepts: Computing methodologies → Shape modeling; Machine learning; Computer vision; Animation; Image manipulation; Graphics file formats; Mesh models; Mesh geometry models; Image processing; Computer vision problems; Reconstruction
dc.subject: Computing methodologies → Shape modeling
dc.subject: Machine learning
dc.subject: Computer vision
dc.subject: Animation
dc.subject: Image manipulation
dc.subject: Graphics file formats
dc.subject: Mesh models
dc.subject: Mesh geometry models
dc.subject: Image processing
dc.subject: Computer vision problems
dc.subject: Reconstruction
dc.title: Capture and Automatic Production of Digital Humans in Real Motion with a Temporal 3D Scanner [en_US]
dc.description.seriesinformation: CLIPE 2024 - Creating Lively Interactive Populated Environments
dc.description.sectionheaders: Mocap and Authoring Virtual Humans
dc.identifier.doi: 10.2312/cl.20241048
dc.identifier.pages: 12 pages


This item appears in the following Collection(s)

  • CLIPE 2024
    ISBN 978-3-03868-241-7 | co-located with EG2024

Except where otherwise noted, this item's license is described as Attribution 4.0 International License.