Show simple item record

dc.contributor.author: Lavoué, Guillaume
dc.contributor.author: Cordier, Frédéric
dc.contributor.author: Seo, Hyewon
dc.contributor.author: Larabi, Mohamed-Chaker
dc.contributor.editor: Gutierrez, Diego and Sheffer, Alla
dc.date.accessioned: 2018-04-14T18:23:43Z
dc.date.available: 2018-04-14T18:23:43Z
dc.date.issued: 2018
dc.identifier.issn: 1467-8659
dc.identifier.uri: http://dx.doi.org/10.1111/cgf.13353
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13353
dc.description.abstract: Understanding the attentional behavior of the human visual system when visualizing a rendered 3D shape is of great importance for many computer graphics applications. Eye tracking remains the only solution to explore this complex cognitive mechanism. Unfortunately, despite the large number of studies dedicated to images and videos, only a few eye-tracking experiments have been conducted using 3D shapes. Thus, potential factors that may influence the human gaze in the specific setting of 3D rendering are still to be understood. In this work, we conduct two eye-tracking experiments involving 3D shapes, with both static and time-varying camera positions. We propose a method for mapping eye fixations (i.e., where humans gaze) onto the 3D shapes, with the aim of producing a benchmark of 3D meshes with fixation density maps, which is publicly available. First, the collected data are used to study the influence of shape, camera position, material and illumination on visual attention. We find that material and lighting have a significant influence on attention, as does the camera path in the case of dynamic scenes. Then, we compare the performance of four representative state-of-the-art mesh saliency models in predicting ground-truth fixations, using two different metrics. We show that, even combined with a center-bias model, the performance of 3D saliency algorithms remains poor at predicting human fixations. To explain their weaknesses, we provide a qualitative analysis of the main factors that attract human attention. We finally provide a comparison of human eye fixations and Schelling points and show that their correlation is weak.
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: Computing methodologies
dc.subject: Interest point and salient region detections
dc.subject: Perception
dc.subject: Mesh models
dc.title: Visual Attention for Rendered 3D Shapes
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Gaze and Attention
dc.description.volume: 37
dc.description.number: 2
dc.identifier.doi: 10.1111/cgf.13353
dc.identifier.pages: 191-203

