The USINACTS usability assessment tutorial

Statistical methods to measure usability evaluation

2016

Statistical analysis is meant to provide methods for making tests of significance and trustworthy estimates of the magnitude of the effects indicated by the results, for the purpose of data reduction. Statistical method, on the other hand, involves the use of certain logical ideas appropriate to experimental procedure. This research reviews the different statistical methods for data reduction used in the administration of usability evaluation and provides readers with definitions and concepts of those methods, helping researchers, especially those who conduct usability evaluation, become familiar with and select appropriate statistical methods to treat the data collected. The methods covered are: (1) frequency distribution, (2) percentage, (3) mean score, (4) standard deviation, (5) Likert scale, (6) analysis of variance, (7) chi-square, and (8) Pearson's r (Pearson product-moment correlation coefficient). The main focus of this paper is to give other researchers a review of the different statistical methods used to treat data obtained from usability evaluations. To achieve this goal, the researcher presents various previous studies, such as those of Thuseethan et al., Hammouche, Penha et al., Zhao, Deotale, Iqbal et al., Manlai, Sauro and Kindlund, and Joo, and describes how the data collected from different usability evaluation techniques were statistically treated. Although the paper's primary focus is statistical methods, an overview of usability evaluation and its methods is also included to give readers a brief introduction to usability concepts and to familiarize them with the methods most commonly used in usability testing and evaluation. Various evaluation methods exist that serve to measure a system's usability; these methods can be analytical or empirical.
Analytical usability methods are conducted by usability experts who put themselves in the place of the users of the application or system, while empirical usability methods consist of various usability tests and questionnaires and can be applied once a prototype of the system is available and ready to use. An example of an analytical usability method is Heuristic Evaluation, which serves to measure a system's usability. The Heuristic Evaluation – Usability Techniques checklist in use today was designed by Deniese Pierotti of Xerox Corporation and comprises 13 heuristics: (1) Visibility of system status, (2) Match between system and the real world, (3) User control and freedom, (4) Consistency and standards, (5) Help users recognize, diagnose, and recover from errors, (6) Error prevention, (7) Recognition rather than recall, (8) Flexibility and minimalist design, (9) Aesthetic and minimalist design, (10) Help and documentation, (11) Skills, (12) Pleasurable and respectful interaction with the user, and (13) Privacy. Examples of empirical usability evaluation techniques that measure usability are: (1) System Usability Scale (SUS), (2) Software Usability Measurement Inventory, (3) Post-Study System Usability Questionnaire (PSSUQ) and Web-based Learning Environment Instrument (WLEI), and (4) post-task walkthroughs. In this study, various statistical methods are explored and reviewed, drawn primarily from studies in which data obtained from usability evaluation techniques were treated with: (1) frequency distribution, (2) percentage, (3) mean score, (4) standard deviation, (5) Likert scale, (6) analysis of variance, (7) chi-square, and (8) Pearson product-moment correlation coefficient (Pearson's r).
Although statistical treatment for data reduction is preferably done by a statistician to avoid uncertainty in the data, listing various statistical methods in this study will help researchers understand their importance and use. The findings from each selected study provide evidence that choosing appropriate methods for evaluation and statistical treatment is important. The information gathered from previous articles and studies supports this research in helping other researchers become familiar with choosing appropriate statistical methods for usability evaluation, and because the results of the statistical analysis process are recorded here, future researchers conducting similar studies will be able to retrieve them easily.
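As an illustration of how the descriptive methods listed above reduce raw questionnaire data, a minimal Python sketch follows. The Likert responses and task times are hypothetical, and Pearson's r is computed from its textbook definition rather than taken from any of the reviewed studies.

```python
import statistics

# Hypothetical 5-point Likert responses to one questionnaire item
# (1 = strongly disagree, 5 = strongly agree).
responses = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

# Frequency distribution and percentage
freq = {k: responses.count(k) for k in sorted(set(responses))}
pct = {k: 100 * v / len(responses) for k, v in freq.items()}

# Mean score and standard deviation (sample SD, n - 1 denominator)
mean = statistics.mean(responses)
sd = statistics.stdev(responses)

# Pearson's r between two paired measures, e.g. satisfaction rating
# versus task completion time in seconds (both invented here)
satisfaction = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]
task_time = [62, 45, 80, 55, 60, 95, 40, 58, 75, 57]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

r = pearson_r(satisfaction, task_time)
```

Here a strongly negative r would indicate that more satisfied users tended to finish the task faster, the kind of relationship the reviewed studies test with Pearson's r.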

Standardized Usability Questionnaires: Features and Quality Focus

Computer Science and Information Technology, 2016

For the last few decades, more than twenty standardized usability questionnaires for evaluating software systems have been proposed. These instruments have been widely used in the assessment of the usability of user interfaces. They have their own characteristics, can be generic or address specific kinds of systems, and can be composed of one or several items. Several comparative studies have also been conducted to identify the best questionnaire in different situations. All these issues should be considered when choosing a questionnaire. In this paper, we present an extensive review of these questionnaires, considering their key features, some classifications, and the main comparison studies already performed. Moreover, we present the result of a detailed analysis of all items evaluated in each questionnaire to indicate those that can identify users' perceptions of specific usability problems. This analysis was performed by confronting each questionnaire item (around 475 items) with usability problems.
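Most of these standardized instruments are scored by fixed arithmetic rules, so a short sketch of one well-documented case, the System Usability Scale, may help. The ten ratings below are hypothetical; the scoring follows Brooke's published rules: odd-numbered items contribute (rating - 1), even-numbered items (5 - rating), and the summed contributions are multiplied by 2.5 to yield a 0-100 score.

```python
def sus_score(ratings):
    """System Usability Scale score (0-100) from ten 1-5 ratings,
    using Brooke's standard scoring: odd items add (rating - 1),
    even items add (5 - rating), total multiplied by 2.5."""
    if len(ratings) != 10:
        raise ValueError("SUS requires exactly ten item ratings")
    total = 0
    for i, r in enumerate(ratings, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# One hypothetical participant's responses to the ten SUS items
score = sus_score([4, 2, 4, 1, 5, 2, 4, 1, 4, 2])  # → 82.5
```

The alternating rule exists because the SUS alternates positively and negatively worded items, a feature shared by several of the questionnaires reviewed here.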

The University of Waikato usability laboratory

2001

is an environment where researchers are able to study and assess the usability of products while they are being used by their intended users. It allows for flexible configuration and, in particular, can accommodate studies involving groups of collaborating users. This paper describes the Usability Laboratory with particular emphasis on its background (why the Laboratory was established) and the facilities and services that it provides.

Koutsabasis, P., Spyrou, T. and Darzentas, J. (2007) Evaluating Usability Evaluation Methods: Criteria, Method and a Case Study. Lecture Notes in Computer Science, Vol. 4550, Springer. Proceedings of the 12th International Conference on Human-Computer Interaction, Beijing, China, 2007.

2007

The paper proposes an approach to comparative usability evaluation that incorporates important relevant criteria identified in previous work. It applies the proposed approach to a case study of a comparative evaluation of an academic website employing four widely-used usability evaluation methods (UEMs): heuristic evaluation, cognitive walkthroughs, think-aloud protocol and co-discovery learning.

Usability evaluation: models, methods, and applications

Usability is evaluated by the quality of communication (interaction) between a technological product (system) and a user (the one who uses that technological product). The unit of measurement is the user's behaviour (satisfaction, comfort, time spent performing an action, etc.) in a specific context of use (the natural and virtual environment as well as the physical environment where communication between user and technological product takes place). The usability concept and its measurement are strictly connected to that of accessibility ("Web Accessibility") and to the space of the problem, shared by the users, in which the interaction takes place (user-technology interaction). Accessibility refers to how a technological product can be used by people regardless of their disability (see "Web Accessibility"; Web Accessibility Initiative 2010). Usability measures how use is perceived by the user. Therefore, by improving communication and sharing information among physical, natural and virtual environments, usability is structured on "User Centred Design", an ergonomic approach suited to the biopsychosocial model of disability (WHO 2001). This model complies with the requests and needs of disabled people, summed up by the phrase "nothing about us without us" (Charlton 1998).

Usability Evaluation Methods

Advances in Systems Analysis, Software Engineering, and High Performance Computing

This chapter aims to identify, analyze, and classify the methodologies and methods described in the literature for the usability evaluation of systems and services based on information and communication technologies. The methodology used was a systematic review of the literature. The studies included in the analysis were classified into empirical and analytical methodologies (test, inquiry, controlled experiment, or inspection). A total of 2116 studies were included, of which 1308 were classified. In terms of results, the inquiry methodology was the most frequent in this review, followed by test, inspection, and finally, the controlled experiment methodology. A combination of methodologies is relatively common, especially the combination of test and inquiry methodologies, probably because they assess different but complementary aspects of usability contributing to a more comprehensive assessment.

Usability measurement in context

Behaviour & Information Technology, 1994

Different approaches to the measurement of usability are reviewed and related to definitions of usability in international standards. It is concluded that reliable measures of overall usability can only be obtained by assessing the effectiveness, efficiency and satisfaction with which representative users carry out representative tasks in representative environments. This requires a detailed understanding of the context of use of a product. The ESPRIT MUSiC project has developed tools which can be used to measure usability in the laboratory and the field. An overview is given of the methods and tools for measuring user performance, cognitive workload and user perceived quality.
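The three components named here (effectiveness, efficiency and satisfaction) map directly onto simple computations. The sketch below, using invented session data, shows one common operationalization (task completion rate, completions per unit time, mean satisfaction rating), not the specific MUSiC tooling described in the paper.

```python
import statistics

# Hypothetical session records for one representative task:
# (task completed?, time on task in seconds, satisfaction rating 1-5)
sessions = [
    (True, 120, 4), (True, 95, 5), (False, 180, 2),
    (True, 140, 4), (True, 110, 3), (False, 200, 2),
]

# Effectiveness: proportion of users who completed the task
effectiveness = sum(c for c, _, _ in sessions) / len(sessions)

# Efficiency: mean task completions per minute of time spent
efficiency = statistics.mean(
    (1 if c else 0) / (t / 60) for c, t, _ in sessions
)

# Satisfaction: mean subjective rating
satisfaction = statistics.mean(s for _, _, s in sessions)
```

Because all three measures depend on which users, tasks and environments are sampled, they are only meaningful when those are representative of the real context of use, which is the paper's central point.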

Assessment of usability benchmarks: combining standardized scales with specific questions.

International Journal of Emerging Technologies in Learning (iJET), 6 (4), 56-64., 2011

The usability of Web sites and online services is of rising importance. When creating a completely new Web site, qualitative data are adequate for identifying most usability problems. Changes to an existing Web site, however, should be evaluated through a quantitative benchmarking process. This paper describes the creation of a questionnaire that allows quantitative usability benchmarking, i.e. a direct comparison of different versions of a Web site and an orientation toward general usability standards. The questionnaire is also open to qualitative data. The methodology is explained using the digital library services of the ZBW.
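A quantitative benchmark of two site versions typically reduces to comparing score distributions. The sketch below uses hypothetical SUS-style scores and Welch's t statistic for illustration; it is not necessarily the analysis applied to the ZBW questionnaire.

```python
import statistics

# Hypothetical overall usability scores (0-100 scale) collected
# for two versions of the same Web site
version_a = [62.5, 70.0, 55.0, 67.5, 60.0, 72.5, 65.0, 57.5]
version_b = [77.5, 82.5, 70.0, 85.0, 75.0, 80.0, 72.5, 87.5]

def welch_t(x, y):
    """Welch's t statistic for two independent samples with
    possibly unequal variances; compare its magnitude against a
    t distribution to judge whether the versions differ."""
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (mx - my) / ((vx / len(x) + vy / len(y)) ** 0.5)

t = welch_t(version_a, version_b)  # negative: version B scores higher
```

A standardized scale is what makes such a comparison meaningful across redesigns, since the same items are answered for every version of the site.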