Usability inspections by groups of specialists

CHI '02 Extended Abstracts on Human Factors in Computing Systems, 2002

Evaluators who examine the same system using the same usability evaluation method tend to report substantially different sets of problems. This so-called evaluator effect means that different evaluations point to considerably different revisions of the evaluated system. The first step in coping with the evaluator effect is to acknowledge its existence. In this study, 11 usability specialists individually inspected a website and then met in four groups to combine their findings into group outputs. Although the overlap in reported problems between any two evaluators averaged only 9%, the 11 evaluators felt that they were largely in agreement. The evaluators perceived their disparate observations as multiple sources of evidence in support of the same issues, not as disagreements. Thus, the group work increased the evaluators' confidence in their individual inspections rather than alerting them to the evaluator effect.
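The 9% figure above is an average pairwise overlap between evaluators' problem sets. A common way to compute such an "any-two agreement" measure in evaluator-effect studies is the intersection of two evaluators' problem sets divided by their union, averaged over all pairs; whether this paper uses exactly that definition is an assumption, and the evaluator names and problem IDs below are invented for illustration:

```python
# Sketch of an any-two agreement calculation (assumed intersection/union
# overlap, averaged over all evaluator pairs). All data are hypothetical.
from itertools import combinations

def any_two_agreement(problem_sets):
    """Average pairwise overlap (|A & B| / |A | B|) over all evaluator pairs."""
    pairs = list(combinations(problem_sets, 2))
    overlaps = [len(a & b) / len(a | b) for a, b in pairs]
    return sum(overlaps) / len(overlaps)

# Three hypothetical evaluators and the problem IDs each reported.
evaluators = [
    {"P1", "P2", "P3"},
    {"P2", "P4"},
    {"P3", "P5", "P6"},
]
print(round(any_two_agreement(evaluators), 3))  # prints 0.15
```

Even with partially overlapping reports, the pairwise average stays low, which is why a small any-two agreement can coexist with evaluators feeling "largely in agreement."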

