DeepProp: Extracting Deep Features from a Single Image for Edit Propagation
dc.contributor.author | Endo, Yuki | en_US |
dc.contributor.author | Iizuka, Satoshi | en_US |
dc.contributor.author | Kanamori, Yoshihiro | en_US |
dc.contributor.author | Mitani, Jun | en_US |
dc.contributor.editor | Joaquim Jorge and Ming Lin | en_US |
dc.date.accessioned | 2016-04-26T08:37:53Z | |
dc.date.available | 2016-04-26T08:37:53Z | |
dc.date.issued | 2016 | en_US |
dc.identifier.issn | 1467-8659 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1111/cgf.12822 | en_US |
dc.description.abstract | Edit propagation is a technique that propagates various image edits (e.g., colorization and recoloring) performed via user strokes to the entire image, based on the similarity of image features. In most previous work, users must manually determine the importance of each image feature (e.g., color, coordinates, and textures) in accordance with their needs and target images. We instead focus on representation learning that automatically learns feature representations from user strokes in a single image, rather than tuning existing features manually. To this end, this paper proposes an edit propagation method using a deep neural network (DNN). Our DNN, which consists of several layers such as convolutional layers and a feature combiner, extracts stroke-adapted visual features and spatial features, and then adjusts their importance. We also develop a learning algorithm for our DNN that does not suffer from the vanishing gradient problem and hence avoids falling into undesirable locally optimal solutions. We demonstrate that edit propagation with deep features, without manual feature tuning, achieves better results than previous work. | en_US |
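For orientation, the following is a minimal, hypothetical sketch of the kind of network the abstract describes: convolutional layers extract a visual feature from a small patch around each pixel, a separate branch embeds the pixel's spatial coordinates, and a "feature combiner" layer merges and reweights the two before a classifier predicts whether the pixel belongs to the stroked region. All names, layer sizes, the PyTorch framing, and the plain end-to-end training loop are illustrative assumptions; in particular, the paper's special learning algorithm for avoiding vanishing gradients is not reproduced here.

```python
# Hypothetical sketch of a DeepProp-style network; layer names, sizes,
# and training details are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class DeepPropNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Visual branch: convolutional layers over a small RGB patch
        # around each pixel, yielding a stroke-adapted visual feature.
        self.visual = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> 32-D feature
        )
        # Spatial branch: embeds normalized (x, y) pixel coordinates.
        self.spatial = nn.Sequential(nn.Linear(2, 8), nn.ReLU())
        # Feature combiner: learns the relative importance of visual and
        # spatial features instead of requiring manual feature tuning.
        self.combiner = nn.Sequential(nn.Linear(32 + 8, 32), nn.ReLU())
        # Classifier: per-pixel logit that the pixel belongs to the
        # edited (stroked) region; propagation uses these predictions.
        self.classify = nn.Linear(32, 1)

    def forward(self, patches, coords):
        f = torch.cat([self.visual(patches), self.spatial(coords)], dim=1)
        return self.classify(self.combiner(f))  # (N, 1) logits

# Training uses only the user's strokes on the single input image:
# pixels under "edit" strokes are positives, pixels under "keep"
# strokes are negatives; the trained net then labels all other pixels.
def train_step(net, opt, patches, coords, stroke_labels):
    opt.zero_grad()
    logits = net(patches, coords).squeeze(1)
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits, stroke_labels)
    loss.backward()
    opt.step()
    return loss.item()
```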
dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | I.4.0 [Image Processing and Computer Vision] | en_US |
dc.subject | General | en_US |
dc.title | DeepProp: Extracting Deep Features from a Single Image for Edit Propagation | en_US |
dc.description.seriesinformation | Computer Graphics Forum | en_US |
dc.description.sectionheaders | Data-driven Images | en_US |
dc.description.volume | 35 | en_US |
dc.description.number | 2 | en_US |
dc.identifier.doi | 10.1111/cgf.12822 | en_US |
dc.identifier.pages | 189-201 | en_US |
This item appears in the following Collection(s)
- 35-Issue 2 (EG 2016 - Conference Issue; EG 2016 - Full Papers - CGF 35-Issue 2)