SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation
Date
2022

Abstract
We present a StyleGAN2-based deep learning approach for 3D shape generation, called SDF-StyleGAN, with the aim of reducing visual and geometric dissimilarity between generated shapes and a shape collection. We extend StyleGAN2 to 3D generation, utilize the implicit signed distance function (SDF) as the 3D shape representation, and introduce two novel global and local shape discriminators that distinguish real and fake SDF values and gradients to significantly improve shape geometry and visual quality. We further complement the evaluation metrics of 3D generative models with shading-image-based Fréchet inception distance (FID) scores to better assess the visual quality and shape distribution of the generated shapes. Experiments on shape generation demonstrate the superior performance of SDF-StyleGAN over the state of the art. We further demonstrate the efficacy of SDF-StyleGAN in various tasks based on GAN inversion, including shape reconstruction, shape completion from partial point clouds, single-view image-based shape generation, and shape style editing. Extensive ablation studies justify the efficacy of our framework design. Our code and trained models are available at https://github.com/Zhengxinyang/SDF-StyleGAN.
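The shading-image-based FID mentioned above applies the standard FID formula to Inception features of shading images rendered from real and generated shapes. The following is a minimal sketch of that final computation, assuming the feature matrices (one row per rendered view) have already been extracted with an Inception network; the function name and inputs are illustrative and not taken from the paper's code.

import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_real, feats_fake):
    # feats_real, feats_fake: (N, D) arrays of Inception features of rendered shading images.
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical noise
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

Lower scores indicate that the rendered appearance of generated shapes is distributed more like that of the training collection.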
BibTeX
@article {10.1111:cgf.14602,
journal = {Computer Graphics Forum},
title = {{SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation}},
author = {Zheng, Xinyang and Liu, Yang and Wang, Pengshuai and Tong, Xin},
year = {2022},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.14602}
}