Can Gaze Beat Touch? A Fitts' Law Evaluation of Gaze, Touch, and Mouse Inputs
Related papers
Gaze typing compared with input by head and hand
2004
This paper investigates the usability of gaze-typing systems for disabled people from a broad perspective that takes into account the usage scenarios and the particular users these systems benefit. Design goals for a gaze-typing system are identified: productivity above 25 words per minute, robust tracking, high availability, and support for multimodal input. A detailed investigation of efficiency and user satisfaction with a Danish and a Japanese gaze-typing system compares them to head- and mouse (hand) typing. We found gaze typing to be more error-prone than the other two modalities. Gaze typing was just as fast as head typing, and both were slower than mouse (hand) typing. Possibilities for design improvements are discussed.
A Comparison of Gaze-Based and Gesture-Based Input for a Point-and-Click Task
Universal Access in Human-Computer Interaction. Access to Interaction, 2015
Alternative input devices to the computer mouse are becoming more affordable and accessible. With greater availability, they have the potential to provide greater access to information for more users in more environments, perhaps while also providing more natural or efficient interaction. However, most user interfaces are built to be mouse-driven, and the adoption of these new technologies may depend on their ability to work with these existing interfaces. This study examined performance with gesture-control and gaze-tracking devices and compared them to a traditional mouse for a standard Fitts' point-and-click task. Both gesture control and gaze tracking proved to be viable alternatives, though they were significantly slower and more taxing than the familiar mouse. To make effective use of these devices, researchers, designers, and developers must find or create control schemes that take advantage of the alternative devices' benefits while curtailing their drawbacks.
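For readers unfamiliar with the metric behind such studies, pointing performance in a Fitts' task is commonly summarized as throughput: the index of difficulty divided by movement time. A minimal sketch using the standard Shannon formulation (the distances, widths, and function names are illustrative, not taken from the paper):

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    """Throughput in bits per second for a single pointing trial."""
    return index_of_difficulty(distance, width) / movement_time_s

# Example: a 512 px movement to a 64 px wide target completed in 0.9 s.
print(round(index_of_difficulty(512, 64), 2))  # 3.17 bits
print(round(throughput(512, 64, 0.9), 2))      # 3.52 bits/s
```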
Command without a click: Dwell time typing by mouse and gaze selections
2003
With dwell time activation, completely hands-free interaction may be achieved by tracking the user's gaze position. The first study presented compares typing by mouse click with dwell-time typing on a Danish on-screen keyboard with 10 large buttons that change according to character prediction. The second study compares mouse and eye-gaze dwell input on a similar Japanese keyboard, but without dynamic changes.
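To illustrate the dwell-time activation these studies rely on, here is a minimal sketch of a dwell-based selector; the 500 ms threshold and the update interface are assumptions for illustration, not details from the paper:

```python
import time

class DwellSelector:
    """Selects a target once gaze has rested on it for `dwell_s` seconds."""

    def __init__(self, dwell_s: float = 0.5):
        self.dwell_s = dwell_s
        self.current_target = None
        self.enter_time = None

    def update(self, target_id, now=None):
        """Feed the target currently under the gaze point (or None).
        Returns the target id when the dwell threshold is reached."""
        now = time.monotonic() if now is None else now
        if target_id != self.current_target:
            self.current_target = target_id
            self.enter_time = now
            return None
        if target_id is not None and now - self.enter_time >= self.dwell_s:
            self.enter_time = now  # re-arm so the target is not re-selected immediately
            return target_id
        return None
```

An on-screen keyboard built on this pattern would call update() on every gaze sample and type the character whose id is returned.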
Using Gesture, Gaze, and Combination Input Schemes as Alternatives to the Computer Mouse
Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2018
Novel input devices can increase the bandwidth between users and their devices. Traditional desktop computing uses windows, icons, menus, and pointers – an interface built for the computer mouse and very effective for pointing-and-clicking. Alternative devices provide a variety of interactions including touch-free, gesture-based input and gaze-tracking to determine the user’s on-screen gaze location, but these input channels are not well-suited to a point-and-click interface. This study evaluates five new schemes, some multi-modal. These experimental schemes perform worse than mouse-based input for a picture sorting task, and motion-based gesture control creates more errors. Some gaze-based input has similar performance to the mouse while not creating additional workload.
Improving the accuracy of gaze input for interaction
Proceedings of the …, 2008
Using gaze information as a form of input poses challenges based on the nature of eye movements and how we humans use our eyes in conjunction with other motor actions. In this paper, we present three techniques for improving the use of gaze as a form of input. We first present a saccade detection and smoothing algorithm that works on real-time streaming gaze information. We then present a study which explores some of the timing issues of using gaze in conjunction with a trigger (key press or other motor action) and propose a solution for resolving these issues. Finally, we present the concept of Focus Points, which makes it easier for users to focus their gaze when using gaze-based interaction techniques. Though these techniques were developed for improving the performance of gaze-based pointing, their use is applicable in general to using gaze as a practical form of input.
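The paper's own algorithm is not reproduced here, but the same problem is often approached with a velocity-threshold (I-VT) classifier plus smoothing of fixation samples; a rough sketch under that assumption, with an illustrative 100 deg/s threshold:

```python
def classify_and_smooth(samples, velocity_threshold=100.0, window_size=5):
    """samples: list of (t_seconds, x_deg, y_deg) gaze points.
    Labels each sample 'saccade' or 'fixation' by angular velocity (I-VT style)
    and smooths fixation coordinates with a short running mean."""
    output, window = [], []
    for i, (t, x, y) in enumerate(samples):
        if i == 0:
            velocity = 0.0
        else:
            t0, x0, y0 = samples[i - 1]
            dt = max(t - t0, 1e-6)
            velocity = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 / dt  # deg/s
        if velocity > velocity_threshold:
            window = []  # reset smoothing across saccades
            output.append((t, x, y, "saccade"))
        else:
            window = (window + [(x, y)])[-window_size:]
            mx = sum(p[0] for p in window) / len(window)
            my = sum(p[1] for p in window) / len(window)
            output.append((t, mx, my, "fixation"))
    return output
```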
Comparing Dwell time, Pursuits and Gaze Gestures for Gaze Interaction on Handheld Mobile Devices
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
Figure 1: We evaluate the performance of three widely used gaze-based interaction methods: Dwell time (A), Pursuits (B), and Gaze gestures (C), for target selection on handheld mobile devices while sitting (left) and while walking (right). All participants performed all selections using the three techniques while sitting and while walking. The red arrow in (B) illustrates the direction in which a yellow dot stimulus rotated around a selectable target. The red arrows in (C) indicate the directions in which the user could perform a gaze gesture. All arrows are for illustration and were not shown to participants.
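Pursuits-style selection (B) is typically implemented by correlating the recent gaze trajectory with each stimulus trajectory and choosing the target whose motion matches best; a hedged sketch of that idea (the window handling and the 0.8 threshold are illustrative assumptions, not the paper's parameters):

```python
from statistics import correlation, StatisticsError  # Python 3.10+

def pursuits_select(gaze_xy, stimuli_xy, threshold=0.8):
    """gaze_xy: list of (x, y) gaze samples over a short window.
    stimuli_xy: dict of target id -> list of (x, y) stimulus positions
    sampled at the same times. Returns the best-matching target or None."""
    gx = [p[0] for p in gaze_xy]
    gy = [p[1] for p in gaze_xy]
    best_id, best_score = None, threshold
    for target_id, trajectory in stimuli_xy.items():
        sx = [p[0] for p in trajectory]
        sy = [p[1] for p in trajectory]
        try:
            # Require both axes to correlate; take the weaker one as the score.
            score = min(correlation(gx, sx), correlation(gy, sy))
        except StatisticsError:
            continue  # a constant trajectory in this window cannot be correlated
        if score > best_score:
            best_id, best_score = target_id, score
    return best_id
```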
EyePoint: practical pointing and selection using gaze and keyboard
Proceedings of the SIGCHI …, 2007
We present a practical technique for pointing and selection using a combination of eye gaze and keyboard triggers. EyePoint uses a two-step progressive refinement process fluidly stitched together in a look-press-look-release action, which makes it possible to compensate for the accuracy limitations of the current state-of-the-art eye gaze trackers. While research in gaze-based pointing has traditionally focused on disabled users, EyePoint makes gaze-based pointing effective and simple enough for even able-bodied users to use for their everyday computing tasks. As the cost of eye gaze tracking devices decreases, it will become possible for such gaze-based techniques to be used as a viable alternative for users who choose not to use a mouse depending on their abilities, tasks and preferences.
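The look-press-look-release refinement can be read as mapping a second fixation inside a magnified view of the first fixation's neighbourhood back to screen coordinates. A simplified sketch of that mapping, where the region size and magnification factor are assumptions rather than EyePoint's actual parameters:

```python
def refine_click_point(coarse_gaze, refined_gaze, view_top_left,
                       region_size=120.0, magnification=4.0):
    """Map a fixation inside the magnified view back to screen space.

    coarse_gaze:   (x, y) fixation at key press; center of the magnified region.
    refined_gaze:  (x, y) fixation inside the magnified view at key release.
    view_top_left: (x, y) screen position of the magnified view's corner.
    """
    cx, cy = coarse_gaze
    vx, vy = view_top_left
    rx, ry = refined_gaze
    region_left = cx - region_size / 2
    region_top = cy - region_size / 2
    # Offset within the magnified view, scaled back down to original pixels.
    click_x = region_left + (rx - vx) / magnification
    click_y = region_top + (ry - vy) / magnification
    return click_x, click_y
```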
2020
One of the main challenges of gaze-based interaction is distinguishing normal eye function from a deliberate interaction with the computer system, commonly referred to as the 'Midas touch' problem. In this paper we propose EyeTAP (Eye tracking point-and-select by Targeted Acoustic Pulse), a hands-free interaction method for point-and-select tasks. We evaluated the prototype in two separate user studies, each containing two experiments with 33 participants, and found that EyeTAP is robust even in the presence of ambient noise in the audio input signal with a tolerance of up to 70 dB, results in faster movement and task completion times, and has a lower cognitive workload than voice recognition. In addition, EyeTAP has a lower error rate than the dwell-time method in a ribbon-shaped experiment. These characteristics make it applicable for users whose physical movements are restricted or not possible due to a disability. Furthermore, EyeTAP has no specific requiremen...
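The abstract does not spell out how the acoustic pulse is detected, so the following is only a guess at one plausible trigger, not EyeTAP's implementation: compare short-window audio energy against a running background estimate and fire when a brief burst stands well above it (all thresholds are illustrative):

```python
def pulse_trigger(frame, background_rms, ratio=4.0, smoothing=0.95):
    """frame: list of audio samples (floats in [-1, 1]) for one short window.
    Returns (fired, updated_background_rms)."""
    rms = (sum(s * s for s in frame) / len(frame)) ** 0.5
    fired = background_rms > 0 and rms > ratio * background_rms
    # Track the ambient level slowly so ordinary noise does not trigger selections.
    updated = smoothing * background_rms + (1 - smoothing) * rms if background_rms else rms
    return fired, updated
```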
The Costs and Benefits of Combining Gaze and Hand Gestures for Remote Interaction
Lecture Notes in Computer Science, 2015
Gaze has been proposed as an ideal modality for supporting remote target selection. We explored the potential of integrating gaze with hand gestures for remote interaction on a large display in terms of user experience and preference. We conducted a lab study to compare interaction in a photo-sorting task using gesture only, or the combination of gaze plus gesture. Results from the study show that a combination of gaze and gesture input can lead to significantly faster selection, reduced hand fatigue and increased ease of use compared to using only hand input. People largely preferred the combination of gaze for target selection and hand gestures for manipulation. However, gaze can cause particular kinds of errors and can induce a cost due to switching modalities.
Gaze and head pointing for hands-free text entry
Proceedings of the 2018 ACM Symposium on Eye Tracking Research & Applications, 2018
With the proliferation of small-screen computing devices, there has been a continuous trend toward reducing the size of interface elements. In virtual keyboards, this allows for more characters in a layout and additional function widgets. However, vision-based interfaces (VBIs) have only been investigated with large (e.g., full-screen) keyboards. To understand how key size reduction affects the accuracy and speed of text entry with VBIs, we evaluated a gaze-controlled VBI (g-VBI) and a head-controlled VBI (h-VBI) with unconventionally small (0.4°, 0.6°, 0.8° and 1°) keys. Novices (N = 26) produced significantly more accurate and faster text with the h-VBI than with the g-VBI, while the performance of experts (N = 12) for both VBIs was nearly equal when a key size of 0.8° to 1° was used. We discuss advantages and limitations of the VBIs for typing with ultra-small keyboards and emphasize relevant factors for designing such systems.
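Since the key sizes above are given in degrees of visual angle, it may help to see how such an angle maps to on-screen pixels: size = 2·d·tan(θ/2), scaled by display density. The 60 cm viewing distance and roughly 96 dpi density below are example assumptions, not values from the paper:

```python
import math

def visual_angle_to_pixels(angle_deg, viewing_distance_cm=60.0, pixels_per_cm=37.8):
    """Convert a visual angle to an approximate on-screen size in pixels."""
    size_cm = 2 * viewing_distance_cm * math.tan(math.radians(angle_deg) / 2)
    return size_cm * pixels_per_cm

for angle in (0.4, 0.6, 0.8, 1.0):
    print(f"{angle} deg -> {visual_angle_to_pixels(angle):.1f} px")
```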