Texture Synthesis From Photographs
Abstract
The goal of texture synthesis is to generate an arbitrarily large, high-quality texture from a small input sample. Generally, the input image is assumed to be given as a flat, square piece of texture, and thus has to be carefully prepared from a picture taken under ideal conditions. Instead, we would like to extract the input texture from any surface within an arbitrary photograph. This introduces several challenges: only parts of the photograph are covered with the texture of interest, perspective and scene geometry introduce distortions, and the texture is non-uniformly sampled during the capture process. This breaks many of the assumptions used for synthesis. In this paper we combine a simple, novel user interface with a generic per-pixel synthesis algorithm to achieve high-quality synthesis from a photograph. Our interface lets the user locally describe the geometry supporting the textures by combining rational Bézier patches, which are particularly well suited to describing curved surfaces under projection. Further, we extend per-pixel synthesis to account for arbitrary texture sparsity and distortion, both in the input image and in the synthesis output. Applications range from synthesizing textures directly from photographs to high-quality texture completion.
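To see why rational Bézier patches are a natural fit for describing textured surfaces under projection, recall the standard form of a degree-(n, m) rational Bézier patch with control points P_ij and weights w_ij (the notation below is ours, not necessarily the paper's):

S(u,v) = \frac{\sum_{i=0}^{n} \sum_{j=0}^{m} w_{ij}\, \mathbf{P}_{ij}\, B_i^n(u)\, B_j^m(v)}{\sum_{i=0}^{n} \sum_{j=0}^{m} w_{ij}\, B_i^n(u)\, B_j^m(v)}, \qquad B_i^n(u) = \binom{n}{i}\, u^i (1-u)^{n-i}.

Because a projective transformation of a rational Bézier patch is again a rational Bézier patch (only the control points and weights change), such patches can represent simple curved geometry, e.g. quadric pieces such as cylinders, exactly as it appears under perspective in a photograph.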
BibTeX
@article {10.1111:j.1467-8659.2008.01139.x,
journal = {Computer Graphics Forum},
title = {{Texture Synthesis From Photographs}},
author = {Eisenacher, C. and Lefebvre, S. and Stamminger, M.},
year = {2008},
publisher = {The Eurographics Association and Blackwell Publishing Ltd},
ISSN = {1467-8659},
DOI = {10.1111/j.1467-8659.2008.01139.x}
}