Show simple item record

dc.contributor.author  Zheng, Quan  en_US
dc.contributor.author  Zheng, Changwen  en_US
dc.contributor.editor  Chen, Min and Zhang, Hao (Richard)  en_US
dc.date.accessioned  2018-01-10T07:43:03Z
dc.date.available  2018-01-10T07:43:03Z
dc.date.issued  2017
dc.identifier.issn  1467-8659
dc.identifier.uri  http://dx.doi.org/10.1111/cgf.13087
dc.identifier.uri  https://diglib.eg.org:443/handle/10.1111/cgf13087
dc.description.abstract  Rendering with a full lens model can offer images with photorealistic lens effects, but it leads to high computational costs. This paper proposes a novel camera lens model, NeuroLens, to emulate the imaging of real camera lenses through a data-driven approach. The mapping of image formation in a camera lens is formulated as imaging regression functions (IRFs), which map input rays to output rays. IRFs are approximated with neural networks, which compactly represent the imaging properties and support parallel evaluation on a graphics processing unit (GPU). To effectively represent the spatially varying imaging properties of a camera lens, the input space spanned by incident rays is subdivided into multiple subspaces, and each subspace is fitted with a separate IRF. To further raise the evaluation accuracy, a set of neural networks is trained for each IRF, and the output is calculated as the average output of the set. The effectiveness of NeuroLens is demonstrated by fitting a wide range of real camera lenses. Experimental results show that it provides higher imaging accuracy than state-of-the-art camera lens models while maintaining high efficiency for processing camera rays.  en_US
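The evaluation pipeline described in the abstract (partition the input ray space, fit each subspace with its own IRF, and average an ensemble of networks per IRF) can be sketched roughly as follows. This is a minimal illustrative sketch only: the 4-D ray parameterization, the one-dimensional cell partition, the network sizes, and the random (untrained) weights are all assumptions for demonstration, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(in_dim=4, hidden=16, out_dim=4):
    """One small MLP standing in for an imaging regression function (IRF)."""
    return {
        "W1": rng.standard_normal((in_dim, hidden)) * 0.1,
        "b1": np.zeros(hidden),
        "W2": rng.standard_normal((hidden, out_dim)) * 0.1,
        "b2": np.zeros(out_dim),
    }

def mlp_forward(net, x):
    # Single hidden layer with tanh activation; linear output layer.
    h = np.tanh(x @ net["W1"] + net["b1"])
    return h @ net["W2"] + net["b2"]

# Subdivide the input ray space into cells; each cell gets its own
# ensemble of IRF networks whose outputs are averaged (hypothetical sizes).
N_CELLS, ENSEMBLE = 4, 3
irfs = [[make_mlp() for _ in range(ENSEMBLE)] for _ in range(N_CELLS)]

def cell_index(ray, n_cells=N_CELLS):
    # Toy partition: bucket rays by their x-coordinate in [-1, 1].
    return min(int((ray[0] + 1.0) / 2.0 * n_cells), n_cells - 1)

def neurolens_eval(ray):
    """Map an input ray (x, y, dx, dy) to an output ray: look up the
    cell's IRF ensemble and average the networks' outputs."""
    nets = irfs[cell_index(ray)]
    return np.mean([mlp_forward(net, ray) for net in nets], axis=0)

out = neurolens_eval(np.array([0.2, -0.1, 0.05, 0.0]))
print(out.shape)  # shape: (4,)
```

Because each cell's networks are independent, batches of rays falling in the same cell can be evaluated in parallel, which is what makes the GPU evaluation mentioned in the abstract efficient.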
dc.publisher  © 2017 The Eurographics Association and John Wiley & Sons Ltd.  en_US
dc.subject  camera lens simulation
dc.subject  neural networks
dc.subject  regression
dc.subject  lens effects
dc.subject  I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism: Raytracing
dc.title  NeuroLens: Data-Driven Camera Lens Simulation Using Neural Networks  en_US
dc.description.seriesinformation  Computer Graphics Forum
dc.description.sectionheaders  Articles
dc.description.volume  36
dc.description.number  8
dc.identifier.doi  10.1111/cgf.13087
dc.identifier.pages  390-401

