Uzilla: A new tool for Web usability testing
Related papers
In this paper we describe a method for evaluating the usability of web-based applications. Our method is based on the remote, automatic capture and semi-automatic analysis of users' behavior, in order to find usability problems in the applications' interfaces. The goal of our method is to allow an analysis of the way users actually interact with the evaluated interface. Through the analysis of users' behavior it is possible to find patterns of interaction. By analyzing the patterns found and comparing them to the expected behavior for the tasks performed by users, we can detect usability problems. In this paper we also briefly describe a first experiment with our method and some initial results that point to the potential of the method for performing remote and automatic usability evaluations.
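As an illustration of the capture half of such a method (not the paper's actual instrumentation), a page script can record interaction events and ship them to a collection server. A minimal sketch follows; the /usability-log endpoint and event shape are hypothetical.

```typescript
// Minimal sketch of client-side interaction capture for remote usability
// logging. The /usability-log endpoint and event shape are hypothetical.

interface InteractionEvent {
  type: string;       // e.g. "click", "submit"
  target: string;     // short description of the element
  timestamp: number;  // ms since navigation start
}

const buffer: InteractionEvent[] = [];

function describe(el: Element): string {
  return `${el.tagName.toLowerCase()}${el.id ? `#${el.id}` : ""}`;
}

// Capture clicks and form submissions without interfering with the page.
for (const type of ["click", "submit"]) {
  document.addEventListener(type, (e) => {
    if (e.target instanceof Element) {
      buffer.push({ type, target: describe(e.target), timestamp: performance.now() });
    }
  }, true); // capture phase, so events are seen even if handlers stop propagation
}

// Flush the buffer periodically; sendBeacon also survives page unloads.
setInterval(() => {
  if (buffer.length > 0) {
    navigator.sendBeacon("/usability-log", JSON.stringify(buffer.splice(0)));
  }
}, 5000);
```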
A Test Management System to Support Remote Usability Assessment of Web Applications
Information
Nowadays, web designers are forced to develop an ever deeper understanding of how users approach their products in terms of user experience and usability. Remote Usability Testing (RUT) is the most appropriate tool to assess the usability of web platforms by measuring the level of user attention, satisfaction, and productivity. RUT does not require the physical presence of users and evaluators, but for this very reason it makes data collection more difficult. To simplify data collection and analysis and help RUT moderators collect and analyze users' data in a non-intrusive manner, this research work proposes a low-cost comprehensive framework based on Deep Learning algorithms. The proposed framework, called Miora, employs facial expression recognition, gaze recognition, and analytics algorithms, and also captures other information of interest for in-depth usability analysis, such as interactions with the analyzed software. It uses a comprehensive evaluation methodology to elicit informati...
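The abstract gives no implementation detail, but the non-intrusive collection half of a framework like this could start with a browser-side sampler such as the sketch below; the /analyze endpoint is hypothetical, and the expression and gaze models would run on the server behind it.

```typescript
// Sketch of the data-collection side only: sample webcam frames in the
// browser and post them to an inference service. The /analyze endpoint is
// hypothetical; Miora's actual architecture is not detailed in the abstract.

async function sampleFrames(intervalMs = 1000): Promise<void> {
  // In practice, call this from a user-gesture handler after consent.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = 320;
  canvas.height = 240;
  const ctx = canvas.getContext("2d")!;

  setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    canvas.toBlob((blob) => {
      if (blob) {
        // A real system would run expression/gaze models server-side here.
        void fetch("/analyze", { method: "POST", body: blob });
      }
    }, "image/jpeg", 0.7);
  }, intervalMs);
}
```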
Integration of Multiple Techniques to Support Usability Evaluation of Web Applications
In this paper, we discuss some issues related to automatic support to usability evaluation. The discussion is based on our experience with WebRemUSINE, a tool that we have designed and developed in order to perform intelligent analysis of Web browser logs using the information contained in the task model of the application. This approach supports remote usability evaluation of Web sites.
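The abstract does not spell out WebRemUSINE's algorithm, which works from the application's full task model; as a far simpler stand-in, the sketch below checks whether the actions a task expects occur in order within a logged session and collects the detours in between, the kind of deviation that hints at a usability problem.

```typescript
// Illustrative sketch (not WebRemUSINE's actual algorithm): check whether
// the actions a task model expects appear, in order, in a browser log, and
// report the extra actions that suggest a usability problem.

function checkTask(
  expected: string[],
  logged: string[],
): { completed: boolean; detours: string[] } {
  let i = 0;
  const detours: string[] = [];
  for (const action of logged) {
    if (i < expected.length && action === expected[i]) {
      i++;                  // expected step reached in order
    } else {
      detours.push(action); // action outside the modeled task path
    }
  }
  return { completed: i === expected.length, detours };
}

// Example: the user reaches the goal but wanders along the way.
console.log(checkTask(
  ["open-search", "enter-query", "open-result"],
  ["open-search", "open-help", "enter-query", "back", "open-result"],
));
// -> { completed: true, detours: ["open-help", "back"] }
```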
Empirical Assessments of a tool to support Web usability inspection
CLEI Electronic Journal
Usability is one of the most important software quality attributes regarding acceptance by end users. It is even more important in the context of Web applications. One way of evaluating application usability is through inspections. The WDP (Web Design Perspectives-based Usability Inspection Technique) presents evidence of feasibility for industrial use; however, practitioners had suggested some computerized support. Therefore, the WDP Tool was built, aiming to provide automated support for applying the WDP technique and to reduce the effort involved in usability inspections with WDP. This paper presents two observational studies regarding the use of the WDP Tool, one in vivo and one in vitro, which aimed to analyze the cost-effectiveness of its application and its appropriateness to the industrial environment through the Technology Acceptance Model (TAM). The results provide indications about the feasibility of using the WDP Tool to support usability inspections in real so...
Usability testing: a review of some methodological and technical aspects of the method
International Journal of Medical Informatics, 2010
The aim of this paper is to review some work conducted in the field of user testing that aims at specifying or clarifying the test procedures and at defining and developing tools to help conduct user tests. The topics that have been selected were considered relevant for evaluating applications in the field of medical and health care informatics. These topics are: the number of participants that should take part in a user test, the test procedure, remote usability evaluation, usability testing tools, and evaluating mobile applications.
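Of the topics listed, the number of participants is the one most often reduced to a formula in the literature; the sketch below illustrates the well-known Nielsen-Landauer problem-discovery model (the detection rate used is an illustrative assumption, not data from this review).

```typescript
// Problem-discovery model (Nielsen & Landauer): with n participants and a
// per-participant detection probability p, the expected share of usability
// problems found is 1 - (1 - p)^n. p = 0.31 is the often-quoted average,
// assumed here purely for illustration.

function problemsFound(n: number, p = 0.31): number {
  return 1 - Math.pow(1 - p, n);
}

for (const n of [1, 3, 5, 10, 15]) {
  console.log(`${n} participant(s) -> ~${(problemsFound(n) * 100).toFixed(0)}% of problems`);
}
// Five participants already reach ~84% under these assumptions, the origin
// of the much-debated "five users are enough" rule of thumb.
```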
Methods for Identifying Usability Problems with Web Sites
Engineering for Human-Computer Interaction, 1999
The dynamic nature of the Web poses problems for usability evaluations. Development times are rapid and changes to Web sites occur frequently, often without a chance to re-evaluate the usability of the entire site. New advances in Web development change user expectations. In order to incorporate usability evaluations into such an environment, we must produce methods that are compatible with the development constraints. We believe that rapid, remote, and automated evaluation techniques are key to ensuring usable Web sites. In this paper, we describe three studies we carried out to explore the feasibility of using modified usability testing methods or non-traditional methods of obtaining information about usability to satisfy our criteria of rapid, remote, and automated evaluation. Based on lessons learned in these case studies, we are developing tools for rapid, remote, and automated usability evaluations. Our future work includes using these tools on a variety of Web sites to determine 1) their effectiveness compared to traditional evaluation methods, 2) the optimal types of sites and stages of development for each tool, and 3) tool enhancements.
In the lab and out in the wild: Remote web usability testing for mobile devices
In this paper we discuss a pilot usability study using wireless Internet-enabled personal digital assistants (PDAs). We compared usability data gathered in traditional lab studies with data gathered remotely through a proxy-based clickstream logging and analysis tool. We found that this remote testing technique can more easily gather many of the content-related usability issues, but device-related issues are more difficult to capture.
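The core of proxy-based clickstream logging fits in a short sketch: a forward proxy that timestamps every requested URL before relaying it. The version below (plain HTTP only, arbitrary port and log format) is an illustration, not the tool used in the study.

```typescript
// Minimal clickstream-logging forward proxy. Devices are configured to use
// it as their HTTP proxy; HTTPS interception is deliberately out of scope.

import http from "node:http";

const server = http.createServer((req, res) => {
  // One clickstream entry per request: timestamp, method, absolute URL.
  console.log(`${new Date().toISOString()} ${req.method} ${req.url}`);

  // A browser using a forward proxy sends the absolute URL in req.url.
  const target = new URL(req.url ?? "http://localhost/");
  const upstream = http.request(
    {
      host: target.hostname,
      port: target.port || 80,
      path: target.pathname + target.search,
      method: req.method,
      headers: req.headers,
    },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res); // relay the response unchanged
    },
  );
  req.pipe(upstream); // relay the request body unchanged
});

server.listen(8080, () => console.log("logging proxy listening on :8080"));
```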
Levels of Automation and User Participation in Usability Testing
This paper identifies a number of factors involved in current practices of usability testing and presents profiles for three prototype methods: think-aloud, subjective ratings, and history files. We then identify ideal levels of automation and user participation to generate profiles for new methods. These methods involve either a human observer or self-administration of the test by the user. We propose methods of automating the evaluation form by dynamically adding items and modifying the form and the tasks in the course of the usability test. For self-administration of testing, we propose similar ideas of dynamically automating the forms and the tasks. Furthermore, we propose methods of eliciting the user's goals and focus of attention. Finally, we propose that user testing methods and interfaces should themselves be subjected to usability testing.
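As a toy illustration of dynamically adding items to the evaluation form (the rule and wording are invented for the example): when the logged behavior shows a participant returning repeatedly to the same page, a targeted question is appended to the post-task questionnaire.

```typescript
// Sketch of the "dynamic form" idea: append a targeted item to the
// questionnaire when observed behavior suggests trouble. The threshold
// and question wording are hypothetical.

const visits = new Map<string, number>();
const questionnaire: string[] = ["Overall, how easy was the task? (1-7)"];

function recordVisit(page: string): void {
  const n = (visits.get(page) ?? 0) + 1;
  visits.set(page, n);
  if (n === 3) { // returning three times hints at disorientation
    questionnaire.push(`You returned to "${page}" several times. What were you looking for?`);
  }
}

["home", "search", "home", "results", "home"].forEach(recordVisit);
console.log(questionnaire); // now includes the dynamically added item
```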