On the Analysis of Groupware Usability Using Annotated GOMS

Analyzing groupware design by means of usability results

2005

Abstract GOMS is a well-known model that has been successfully used in predicting the performance of human-computer interaction, identifying usability problems and improving user-interface design. The focus of GOMS on the individual user, however, explains why it has not been applied in the groupware context. We were inspired by GOMS to define a model that describes collaborative tasks in a formal way.

Analytic evaluation of groupware design

2006

Abstract. We propose an analytic method to evaluate groupware design. The method was inspired by GOMS, a well-known approach to analyze usability problems with single-user interfaces. GOMS has not yet been amply applied to evaluate groupware because of several fundamental distinctions between the single-user and multi-user contexts. The approach described in this paper overcomes such differences.

A Comparison of Usage Evaluation and Inspection Methods for Assessing Groupware Usability

Many researchers believe that groupware can only be evaluated by studying real collaborators in their real contexts, a process that tends to be expensive and time-consuming. Others believe that it is more practical to evaluate groupware through usability inspection methods. Deciding between these two approaches is difficult, because it is unclear how they compare in a real evaluation situation. To address this problem, we carried out a dual evaluation of a groupware system, with one evaluation applying user-based techniques, and the other using inspection methods. We compared the results from the two evaluations and concluded that, while the two methods have their own strengths, weaknesses, and trade-offs, they are complementary. Because the two methods found overlapping problems, we expect that they can be used in tandem to good effect, e.g., applying the discount method prior to a field study, with the expectation that the system deployed in the more expensive field study has a better chance of doing well because some pertinent usability problems will have already been addressed.

CIAM: A Methodology for the Development of Groupware User Interfaces

Journal of Universal Computer Science, 2008

The design of groupware systems is an increasingly widespread task that is difficult to tackle. There are no proposals that support the joint modeling of the collaborative and interactive aspects of this kind of system, that is, proposals that allow designing the presentation layer of these applications. To address this gap, we propose a methodological approach based on a set of notations of both a graphical and a textual nature.

Task analysis for groupware usability evaluation

ACM Transactions on Computer-Human Interaction, 2003

Researchers in Computer-Supported Cooperative Work have recently developed discount evaluation methods for shared-workspace groupware. Most discount methods rely on some understanding of the context in which the groupware systems will be used, which means that evaluators need to model the tasks that groups will perform. However, existing task analysis schemes are not well suited to the needs of groupware evaluation: they either do not deal with collaboration issues, do not use an appropriate level of analysis for concrete assessment of usability in interfaces, or do not adequately represent the variability inherent in group work. To fill this gap, we have developed a new modeling technique called Collaboration Usability Analysis (CUA). CUA focuses on the teamwork that goes on in a group task rather than the taskwork. To enable closer links between the task representation and the groupware interface, CUA grounds each collaborative action in a set of group work primitives called the mechanics of collaboration. To represent the range of ways that a group task can be carried out, CUA allows variable paths through the execution of a task, and allows alternate paths and optional tasks to be modeled. CUA's main contribution is to provide evaluators with a framework in which they can simulate the realistic use of a groupware system and identify usability problems that are caused by the groupware interface.

Effects of awareness support on groupware usability

1998

ABSTRACT Collaboration in current real-time groupware systems is often an awkward and clumsy process. We hypothesize that better support for workspace awareness can improve the usability of these shared computational workspaces. We conducted an experiment that compared people's performance on two versions of a groupware interface. The interfaces used workspace miniatures to provide different levels of support for workspace awareness.

A Comparison of Groupware Evaluation Methodologies

cos.ufrj.br

Single-user system evaluation techniques are less than ideal for evaluating groupware, since they do not take into account important aspects such as user and work contexts or communicability between users. Evaluation methodologies have been developed that attempt to handle specific issues related to group work beyond usability problems. The objective of this paper is to explore the peculiarities and characteristics of two groupware evaluation methodologies through a comparison between them, using them to analyze a cooperative work tool, Groove. The chosen methodologies were CUA and OFC.

Team usability testing: development and validation of a groupware usability evaluation method

Cognition, technology & work, 2024

As virtual teamwork has become widespread, the usability of groupware has grown in importance, since groupware must have high usability in order to properly support teamwork. Several studies have focused on the usability evaluation of groupware, yet, in both science and practice, there is no method that, like existing empirical single-user methods, can be routinely applied in software development practice. The new Team Usability Testing method differs from previous methods in that it is an empirical method for evaluating synchronous (real-time) collaborative software that explores team usability problems with the help of real or potential users. The method consists of questionnaires, screen recording videos and group interviews, and its data-processing framework is based on the theory of the mechanics of collaboration. In the four steps of the development of the method, empirical studies were performed under laboratory conditions and real working conditions, followed, for validation purposes, by an analytic heuristic evaluation involving usability experts. During the development and application of the method, the communication patterns of different teams were examined in terms of their relationship to software usability. In this paper we present the results of the second laboratory study and the heuristic evaluation in detail. Overall, the method is able to explore the team usability problems of different types of teams, making it a valuable addition to existing methods for evaluating the usability of groupware.

Keywords: HCI design and evaluation methods — laboratory experiments • HCI design and evaluation methods — usability testing • Computer supported cooperative work • Empirical studies in collaborative and social computing

Heuristic evaluation of groupware based on the mechanics of collaboration

2001

Despite the increasing availability of groupware, most systems are awkward and not widely used. While there are many reasons for this, a significant problem is that groupware is difficult to evaluate. In particular, there are no discount usability evaluation methodologies that can discover problems specific to teamwork.