Space-Time Co-Segmentation of Articulated Point Cloud Sequences
Abstract
Consistent segmentation is at the core of many applications based on dynamic geometric data. Directly segmenting a raw 3D point cloud sequence is a challenging task due to the low data quality and large inter-frame variation across the whole sequence. We propose a local-to-global approach to co-segment point cloud sequences of articulated objects into near-rigid moving parts. Our method starts from a per-frame point clustering, derived from a robust voting-based trajectory analysis. The local segments are then progressively propagated to the neighboring frames with a cut propagation operation, and further merged through all frames using a novel space-time segment grouping technique, leading to a globally consistent and compact segmentation of the entire articulated point cloud sequence. Such progressive propagation and merging, in both the space and time dimensions, makes our co-segmentation algorithm especially robust in handling noise, occlusions and pose/view variations that are usually associated with raw scan data.
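The following is a minimal, hypothetical sketch of the local-to-global pipeline structure described above (per-frame clustering, propagation to neighboring frames, then sequence-wide grouping). It is not the authors' algorithm: the voting-based trajectory analysis is replaced by a simple nearest-neighbour motion estimate clustered with DBSCAN, and the cut propagation is replaced by nearest-neighbour label transfer; all function names and parameters (e.g. `eps`, `min_samples`) are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def per_frame_clustering(points, next_points, eps=0.05, min_samples=10):
    """Cluster one frame into near-rigid pieces.

    Stand-in for the paper's voting-based trajectory analysis: points are
    clustered by position plus a crude motion estimate toward the next frame,
    which tends to separate parts that move rigidly together.
    """
    tree = cKDTree(next_points)
    _, idx = tree.query(points)
    flow = next_points[idx] - points              # nearest-neighbour "flow"
    feats = np.hstack([points, flow])             # position + motion feature
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)

def propagate_labels(src_points, src_labels, dst_points):
    """Transfer segment labels to a neighbouring frame via nearest neighbours
    (a simplified stand-in for the paper's cut propagation step)."""
    tree = cKDTree(src_points)
    _, idx = tree.query(dst_points)
    return src_labels[idx]

def cosegment(frames):
    """Local-to-global co-segmentation skeleton over a point cloud sequence.

    frames: list of (N_t, 3) arrays, one raw point cloud per frame.
    Returns a per-frame list of integer part labels.
    """
    T = len(frames)
    # 1. Local, per-frame segmentation.
    local = [per_frame_clustering(frames[t], frames[min(t + 1, T - 1)])
             for t in range(T)]
    # 2. Propagate cuts forward so adjacent frames agree on their labels.
    for t in range(T - 1):
        prop = propagate_labels(frames[t], local[t], frames[t + 1])
        # keep the propagated label wherever the local clustering found noise (-1)
        local[t + 1] = np.where(local[t + 1] < 0, prop, local[t + 1])
    # 3. The full method would now merge segments across all frames
    #    (space-time grouping); this sketch returns the propagated labels.
    return local
```

In this sketch the final space-time grouping stage is only indicated by a comment, since reproducing it faithfully would require the merging criteria detailed in the paper.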
BibTeX
@article {10.1111:cgf.12843,
journal = {Computer Graphics Forum},
title = {{Space-Time Co-Segmentation of Articulated Point Cloud Sequences}},
author = {Yuan, Qing and Li, Guiqing and Xu, Kai and Chen, Xudong and Huang, Hui},
year = {2016},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.12843}
}