Show simple item record

dc.contributor.author: South, Laura [en_US]
dc.contributor.author: Saffo, David [en_US]
dc.contributor.author: Vitek, Olga [en_US]
dc.contributor.author: Dunne, Cody [en_US]
dc.contributor.author: Borkin, Michelle A. [en_US]
dc.contributor.editor: Borgo, Rita [en_US]
dc.contributor.editor: Marai, G. Elisabeta [en_US]
dc.contributor.editor: Schreck, Tobias [en_US]
dc.date.accessioned: 2022-06-03T06:05:41Z
dc.date.available: 2022-06-03T06:05:41Z
dc.date.issued: 2022
dc.identifier.issn: 1467-8659
dc.identifier.uri: https://doi.org/10.1111/cgf.14521
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14521
dc.description.abstract: Likert scales are often used in visualization evaluations to produce quantitative estimates of subjective attributes, such as ease of use or aesthetic appeal. However, the methods used to collect, analyze, and visualize data collected with Likert scales are inconsistent among evaluations in visualization papers. In this paper, we examine the use of Likert scales as a tool for measuring subjective response in a systematic review of 134 visualization evaluations published between 2009 and 2019. We find that papers with both objective and subjective measures do not hold the same reporting and analysis standards for both aspects of their evaluation, producing less rigorous work for the subjective qualities measured by Likert scales. Additionally, we demonstrate that many papers are inconsistent in their interpretations of Likert data as discrete or continuous and may even sacrifice statistical power by applying nonparametric tests unnecessarily. Finally, we identify instances where key details about Likert item construction with the potential to bias participant responses are omitted from evaluation methodology reporting, inhibiting the feasibility and reliability of future replication studies. We summarize recommendations from other fields for best practices with Likert data in visualization evaluations, based on the results of our survey. [en_US]
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. [en_US]
dc.subject: CCS Concepts: Human-centered computing --> Visualization design and evaluation methods; Empirical studies in visualization
dc.subject: Human centered computing
dc.subject: Visualization design and evaluation methods
dc.subject: Empirical studies in visualization
dc.title: Effective Use of Likert Scales in Visualization Evaluations: A Systematic Review [en_US]
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Guidelines and Accessibility
dc.description.volume: 41
dc.description.number: 3
dc.identifier.doi: 10.1111/cgf.14521
dc.identifier.pages: 43-55
dc.identifier.pages: 13 pages
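
The abstract notes that papers may sacrifice statistical power by applying nonparametric tests unnecessarily. The sketch below is illustrative only and is not code from the paper: it simulates two hypothetical groups of 5-point Likert responses (the sample size, response probabilities, and effect size are assumptions made up for the example) and estimates how often an independent-samples t-test, which treats responses as continuous, and a Mann-Whitney U test, which treats them as ordinal, each reject the null hypothesis at alpha = 0.05.

# Illustrative power comparison on simulated Likert data.
# Not the paper's analysis code; all distributions and sizes are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 30        # hypothetical sample size per condition
n_simulations = 2000
alpha = 0.05

# Hypothetical response probabilities over the 5 Likert points (1..5).
p_control   = [0.10, 0.20, 0.30, 0.25, 0.15]
p_treatment = [0.05, 0.15, 0.25, 0.30, 0.25]   # slightly more favorable responses

t_rejections = 0
u_rejections = 0
for _ in range(n_simulations):
    a = rng.choice([1, 2, 3, 4, 5], size=n_per_group, p=p_control)
    b = rng.choice([1, 2, 3, 4, 5], size=n_per_group, p=p_treatment)
    _, p_t = stats.ttest_ind(a, b)                              # parametric, treats data as continuous
    _, p_u = stats.mannwhitneyu(a, b, alternative="two-sided")  # nonparametric, treats data as ordinal
    t_rejections += p_t < alpha
    u_rejections += p_u < alpha

print(f"Estimated rejection rate, t-test:         {t_rejections / n_simulations:.2f}")
print(f"Estimated rejection rate, Mann-Whitney U: {u_rejections / n_simulations:.2f}")

Comparing the two rejection rates under different assumed response distributions shows how the choice between a parametric and a nonparametric test can affect power for Likert-style data; the specific numbers depend entirely on the hypothetical settings above.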


This item appears in the following Collection(s)

  • 41-Issue 3
    EuroVis 2022 - Conference Proceedings