Gesture interaction in cooperation scenarios

Exploring the Use of Gesture in Collaborative Tasks

Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems - CHI EA '17, 2017

Personal assistants such as Siri have changed the way people interact with computers by collaborating with users through natural, speech-based interfaces. However, relying on speech alone as the medium of communication can be a limitation; non-verbal aspects of communication also play a vital role in natural human discourse. It is therefore necessary to identify how gesture and other non-verbal channels are used so that they can be applied to the development of computer systems. We conducted an exploratory study to identify how humans use gesture and speech to communicate when solving collaborative tasks. We highlight differences in gesturing strategies in the presence and absence of speech, and show that combining gesture with speech resulted in faster task completion times than speech alone. Based on these results, we present implications for the design of gestural and multimodal interactions.
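
The abstract does not describe the system side, but one common way such gesture-plus-speech findings get operationalized is timestamp-based fusion, where deictic words in speech are resolved against a near-in-time pointing gesture. The sketch below is purely illustrative; SpeechEvent, GestureEvent, fuse, and the 1.5 s window are all assumptions, not anything from the paper.

    # Hypothetical sketch: resolve deictic speech ("that one") against the
    # pointing gesture closest in time, within an assumed fusion window.
    from dataclasses import dataclass

    @dataclass
    class SpeechEvent:
        t: float      # timestamp in seconds
        text: str

    @dataclass
    class GestureEvent:
        t: float
        target: str   # object the gesture points at

    DEICTIC = {"this", "that", "here", "there"}
    WINDOW = 1.5      # assumed fusion window in seconds

    def fuse(speech: SpeechEvent, gestures: list[GestureEvent]) -> str:
        """Attach the nearest-in-time gesture target to deictic speech."""
        if not any(w in DEICTIC for w in speech.text.lower().split()):
            return speech.text
        near = [g for g in gestures if abs(g.t - speech.t) <= WINDOW]
        if not near:
            return speech.text  # no gesture available to disambiguate
        best = min(near, key=lambda g: abs(g.t - speech.t))
        return f"{speech.text} -> target: {best.target}"

    print(fuse(SpeechEvent(2.0, "move that one left"),
               [GestureEvent(1.8, "red block"), GestureEvent(5.0, "green block")]))
    # move that one left -> target: red block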

Usage of gestures along with other interaction modes in collaborative design

Many computer-aided, multi-modal interaction tools are currently under development, and some have demonstrated applications in design. To avoid a disruptive transition from current design tools to multi-modal designing, descriptive studies are needed to understand the interaction modes commonly used in design. To understand how gestures are combined with other modes in collaborative design using current design tools, a set of laboratory experiments was conducted with a pair of designers working together to solve a design problem. The paper addresses two questions: 1. Which interaction mode, among verbal, gestural, textual, graphical, and combinations of these, dominates in collaborative designing? 2. How do these interaction modes change across design stages (requirement identification, development of preliminary concepts, concept elaboration, evaluation, and detailing of chosen concepts)? The results aim to provide direction for developing new design tools that are aligned with designers' current interaction patterns as observed with conventional CAD design tools.

A Study of Gestures in a Video-Mediated Collaborative Assembly Task

Advances in Human-Computer Interaction, 2011

This paper presents the results of an experimental investigation of two gesture representations (overlaying hands and a cursor pointer) in a video-mediated scenario: remote collaboration on a physical task. Our study assessed the relative value of the two gesture representations with respect to effectiveness in task performance, user satisfaction, and the user's perceived quality of collaboration in terms of coordination and interaction with the remote partner. Our results show no clear difference between the two gesture representations in effectiveness or user satisfaction. However, for perceived quality of collaboration, the overlaying-hands condition was rated statistically significantly higher than the cursor-pointer condition. Our results suggest that the value of a more expressive gesture representation lies not so much in a gain in performance as in a gain in user experience, specifically in the user's perceived quality of collaboration.

Gestures in Machine Interaction

University of the Arts London, 2011

Unencumbered gesture interaction (UGI) describes the use of unrestricted gestures in machine interaction. The development of such technology will enable users to interact with machines and virtual environments by performing actions like grasping, pinching, or waving without the need for peripherals. Advances in image processing and pattern recognition make such interaction viable and, in some applications, more practical than current keyboard, mouse, and touch-screen interaction. UGI is emerging as a popular topic in Human-Computer Interaction (HCI), computer vision, and gesture research, and is developing into a topic with the potential to significantly impact the future of computer interaction, robot control, and gaming. This thesis investigates whether an ergonomic model of UGI can be developed and implemented on consumer devices, by considering some of the barriers currently preventing such a model from being widely adopted. The research addresses the development of freehand gesture interfaces and an accompanying syntax; without detailed consideration of the evolution of this field, the development of unergonomic, inefficient interfaces that place undue strain on users becomes more likely. In the course of this thesis, some novel design and methodological assertions are made. The Gesture in Machine Interaction (GiMI) syntax model and the Gesture-Face Layer (GFL), developed in the course of this research, are designed to facilitate ergonomic gesture interaction. GiMI is an interface syntax model designed to enable cursor control, browser navigation commands, and steering control for remote robots or vehicles. By applying state-of-the-art image processing that supports three-dimensional (3D) recognition of human action, this research investigates how interface syntax can incorporate the broadest range of human actions. By advancing our understanding of ergonomic gesture syntax, this research aims to help future developers evaluate the efficiency of gesture interfaces, lexicons, and syntax.
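
To make the idea of a gesture "syntax" concrete: a minimal reading of what such a model might look like is a context-sensitive dispatch table, where the same gesture can mean different commands depending on the active mode (cursor control, browser navigation, steering). The gesture names and command strings below are illustrative assumptions, not the GiMI lexicon itself.

    # Hypothetical gesture-syntax dispatch table, in the spirit of a model
    # covering cursor control, browser navigation, and steering contexts.
    COMMAND_TABLE = {
        ("point", "cursor"):        "move_cursor",
        ("pinch", "cursor"):        "click",
        ("swipe_left", "browser"):  "history_back",
        ("swipe_right", "browser"): "history_forward",
        ("tilt_left", "steering"):  "steer_left",
        ("tilt_right", "steering"): "steer_right",
    }

    def interpret(gesture: str, context: str) -> str:
        """Map a recognized gesture to a command, given the active context."""
        return COMMAND_TABLE.get((gesture, context), "ignore")

    print(interpret("swipe_left", "browser"))  # history_back
    print(interpret("pinch", "steering"))      # ignore (not legal here)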

Gesture-based interaction and implication for the future

Master thesis (1-year level, 15 credits), Human Computer Interaction, Computational Science and Engineering, SPM 2011.01

An advanced and natural form of interaction, gesture-based interaction, now exists and has been applied successfully in some fields, especially in game applications. Although people marvel at this new form of interaction, it has not replaced traditional input such as the keyboard and mouse. From a user-experience point of view, gesture-based interaction has advantages that traditional interaction cannot offer, but also shortcomings that leave people dissatisfied. This thesis focuses on capturing the user experience of gesture-based interaction through interviews. Finally, the paper presents a new design and explores implications for its practical use in the future.

Perspectives of Gestures for Gestural-Based Interaction Systems: Towards Natural Interaction

Human IT: Journal for Information Technology Studies as a Human Science, 2019

A frequently mentioned benefit of gesture-based input to computing systems is that it provides naturalness in interaction. However, it is not uncommon to find gesture sets consisting of arbitrary (hand) formations with illogically mapped functions. This defeats the purpose of using gestures as a means to facilitate natural interaction. The root of the issue seems to stem from a separation between what is deemed a gesture in the computing field and what is deemed a gesture linguistically. To find common ground, this paper explores the fundamental aspects of gestures in the literature of psycholinguistic-based and HCI-based studies. The discussion focuses on the connection between the two perspectives: in the definitional aspect, through the concepts of meaning and context, and in the classification aspect, through the mapping of tasks (manipulative or communicative) to gesture functions (ergotic, epistemic, or semiotic). By highlighting how these two perspectives interrelate, the paper provides a basis for research that intends to propose gestures as the interaction modality for interactive systems.
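
The task-to-function mapping the abstract names can be encoded directly, which makes the taxonomy easier to inspect. The pairing below is one illustrative reading of that mapping (manipulative tasks acting on the world, communicative tasks carrying meaning), not a reproduction of the paper's classification.

    # Sketch of a task-to-gesture-function taxonomy as enums plus a mapping.
    from enum import Enum

    class Task(Enum):
        MANIPULATIVE = "manipulative"    # acting on physical/virtual objects
        COMMUNICATIVE = "communicative"  # conveying meaning to others

    class Function(Enum):
        ERGOTIC = "ergotic"        # modifying the environment
        EPISTEMIC = "epistemic"    # exploring/sensing the environment
        SEMIOTIC = "semiotic"      # carrying meaning to an observer

    # Assumed pairing for illustration only.
    TASK_FUNCTIONS = {
        Task.MANIPULATIVE: {Function.ERGOTIC, Function.EPISTEMIC},
        Task.COMMUNICATIVE: {Function.SEMIOTIC},
    }

    def functions_for(task: Task) -> set[Function]:
        return TASK_FUNCTIONS[task]

    print(sorted(f.value for f in functions_for(Task.MANIPULATIVE)))
    # ['epistemic', 'ergotic']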

Gestures Over Video Streams to Support Remote Collaboration on Physical Tasks

Human–Computer …, 2004

This article considers tools to support remote gesture in video systems used to complete collaborative physical tasks: tasks in which two or more individuals work together to manipulate three-dimensional objects in the real world. We first discuss the process of conversational grounding during collaborative physical tasks, particularly the role of two types of gestures in the grounding process: pointing gestures, which are used to refer to task objects and locations, and representational gestures, which are used to represent the form of task objects and the nature of actions to be performed with them. We then consider ways in which both pointing and representational gestures can be instantiated in systems for remote collaboration on physical tasks. We present the results of two studies that use a "surrogate" approach to remote gesture, in which images are intended to express the meaning of gestures through visible embodiments rather than direct views of the hands. In Study 1, we compare performance with a cursor-based surrogate.
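
Mechanically, a cursor-style surrogate reduces to mapping the helper's pointing position over their copy of the video feed onto the worker's frame. A minimal sketch of that coordinate mapping is below; the function name and normalized-coordinate convention are assumptions for illustration, not the authors' implementation.

    # Hypothetical helper: map a remote helper's normalized [0,1] pointing
    # position over the video feed to a pixel location in the worker's view,
    # clamped to the frame so the surrogate cursor stays visible.
    def to_pixels(norm_x: float, norm_y: float, width: int, height: int) -> tuple[int, int]:
        x = max(0, min(width - 1, round(norm_x * width)))
        y = max(0, min(height - 1, round(norm_y * height)))
        return x, y

    # Helper points at the centre-right of a 640x480 frame:
    print(to_pixels(0.75, 0.5, 640, 480))  # (480, 240)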

Red-handed: collaborative gesture interaction with a projection table

Proceedings of the …, 2004

Collaboration is an essential mechanism for productivity. Projection tables such as the SociaDesk enable collaboration through the sharing of audio, video, and data. To enhance this form of interaction, it is beneficial to enable local, multi-user interaction with these media. This paper introduces a computer-vision-based gesture recognition system that detects and stores the gestures of multiple users at the SociaDesk. The system was designed as a submodule for high-level applications that implement multi-user interaction. With such a system, users can collaboratively interact with software-based tools at a projection table.
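
One plausible shape for such a submodule is a per-user gesture store that the vision layer writes into and that higher-level applications poll or subscribe to. The sketch below is a hypothetical interface under that assumption; GestureStore and its methods are illustrative names, not the paper's API.

    # Hypothetical submodule: stores recognized gestures per user and
    # notifies subscribing applications as gestures arrive.
    from collections import defaultdict
    from typing import Callable

    class GestureStore:
        def __init__(self) -> None:
            self._by_user: dict[str, list[str]] = defaultdict(list)
            self._subscribers: list[Callable[[str, str], None]] = []

        def record(self, user: str, gesture: str) -> None:
            """Called by the vision layer when a gesture is recognized."""
            self._by_user[user].append(gesture)
            for fn in self._subscribers:
                fn(user, gesture)

        def history(self, user: str) -> list[str]:
            return list(self._by_user[user])

        def subscribe(self, fn: Callable[[str, str], None]) -> None:
            self._subscribers.append(fn)

    store = GestureStore()
    store.subscribe(lambda u, g: print(f"{u}: {g}"))
    store.record("alice", "point")
    store.record("bob", "grab")
    print(store.history("alice"))  # ['point']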

Evaluating a User-Elicited Gesture Set for Interactive Displays

Recently, many studies have focused on eliciting gestures from users in order to derive gesture sets for surface computing. However, many questions remain concerning the value of this method for the usability of such gesture sets in real systems. In this work, we contribute a usability test of an implemented gesture set based on user-suggested pen and hand gestures for node-link diagram editing on interactive displays. The results of the usability test give valuable insight into how users spontaneously interact with such a gestural interface. In particular, we found that the methodology of eliciting gestures from users reveals which kinds of gestures users prefer, but does not necessarily show how they are applied. Beyond that, we observed how participants differentiate between touch and pen within complex workflows.
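
The pen/touch differentiation the study observed is often implemented as device-aware routing in the editor: pen input goes to precise operations, touch to coarse manipulation. The sketch below illustrates that pattern under assumed event fields and tool names; it is not the study's system.

    # Hypothetical routing of pen vs. touch events in a node-link diagram
    # editor: pen for precise editing, touch for coarse manipulation.
    def route(event: dict) -> str:
        kind = event["device"]        # assumed field: "pen" or "touch"
        if kind == "pen":
            return "edit_node_label" if event["on_node"] else "draw_edge"
        if kind == "touch":
            return "move_node" if event["on_node"] else "pan_canvas"
        return "ignore"

    print(route({"device": "pen", "on_node": False}))   # draw_edge
    print(route({"device": "touch", "on_node": True}))  # move_node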