PalmGazer: Unimanual Eye-hand Menus in Augmented Reality
Related papers
Motion marking menus: An eyes-free approach to motion input for handheld devices
International Journal of Human-Computer Studies, 2009
The increasing complexity of applications on handheld devices requires the development of rich new interaction methods specifically designed for resource-limited mobile use contexts. One appealingly convenient approach to this problem is to use device motions as input, a paradigm in which the currently dominant interaction metaphors are gesture recognition and visually mediated scrolling. However, neither is ideal. The former suffers from fundamental problems in the learning and communication of gestural patterns, while the latter requires continual visual monitoring of the mobile device, a task that is undesirable in many mobile contexts and also inherently in conflict with the act of moving a device to control it. This paper proposes an alternate approach: a gestural menu technique inspired by marking menus and designed specifically for the characteristics of motion input. It uses rotations between targets occupying large portions of angular space and emphasizes kinesthetic, eyes-free interaction. Three evaluations are presented, two featuring an abstract user interface (UI) and focusing on how user performance changes when the basic system parameters of number, size and depth of targets are manipulated. These studies show that a version of the menu system containing 19 commands yields optimal performance, compares well against data from the previous literature and can be used effectively eyes-free (without graphical feedback). The final study uses a full graphical UI and untrained users to demonstrate that the system can be rapidly learnt. Together, these three studies rigorously validate the system design and suggest promising new directions for handheld motion-based UIs.
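The angular-target idea above lends itself to a compact sketch. The Python snippet below is an illustrative interpretation, not the paper's implementation: the sector count, dead zone, and rotation-to-angle mapping are assumptions.

```python
import math

# Hypothetical sketch: map a device rotation (yaw/pitch deltas in degrees) to one
# of N menu targets arranged as equal angular sectors, with a dead zone so that
# small unintentional motions select nothing. Sector count and thresholds are
# illustrative, not the paper's values.
def select_target(d_yaw: float, d_pitch: float,
                  n_sectors: int = 8, dead_zone_deg: float = 5.0):
    magnitude = math.hypot(d_yaw, d_pitch)
    if magnitude < dead_zone_deg:
        return None                      # motion too small: no selection
    angle = math.degrees(math.atan2(d_pitch, d_yaw)) % 360.0
    sector_width = 360.0 / n_sectors
    # Offset by half a sector so sector 0 is centred on angle 0.
    return int(((angle + sector_width / 2.0) // sector_width) % n_sectors)

print(select_target(12.0, 2.0))   # -> 0
print(select_target(1.0, 1.0))    # -> None (inside dead zone)
```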
Graphical Menus using a Mobile Phone for Wearable AR Systems
2011
In this paper, we explore the design of various types of graphical menus via a mobile phone for use in a wearable augmented reality system. For efficient system control, locating menus is vital. Based on previous relevant work, we determine display-, manipulator- and target-referenced menu placement according to focusable elements within a wearable augmented reality system. Moreover, we implement and discuss three menu techniques using a mobile phone with a stereo head-mounted display.
Earpod: eyes-free menu selection using touch input and reactive audio feedback
2007
We present the design and evaluation of earPod: an eyes-free menu technique using touch input and reactive auditory feedback. Studies comparing earPod with an iPod-like visual menu technique on reasonably sized static menus indicate that they are comparable in accuracy. In terms of efficiency (speed), earPod is initially slower, but outperforms the visual technique within 30 minutes of practice.
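To make the interaction concrete, here is a minimal Python sketch of an earPod-style radial menu, assuming a circular touch surface divided into equal slices; the menu items, slice count, and the speak() stand-in for text-to-speech are illustrative assumptions, not the authors' code.

```python
import math

# Minimal sketch of an earPod-style radial menu: the circular touch surface is
# divided into equal slices; sliding the thumb across slice boundaries triggers
# spoken feedback, and lifting the thumb confirms the current slice.
MENU = ["Artists", "Albums", "Songs", "Genres", "Playlists", "Settings", "Shuffle", "Back"]

def slice_at(x: float, y: float, n: int = len(MENU)) -> int:
    """Map a touch point (centre of pad at the origin) to a menu slice index."""
    angle = math.degrees(math.atan2(y, x)) % 360.0
    return int(angle // (360.0 / n))

def speak(text: str) -> None:
    print(f"[audio] {text}")             # stand-in for a text-to-speech call

current = None
for x, y in [(1.0, 0.1), (0.7, 0.7), (0.1, 1.0)]:   # simulated thumb slide
    idx = slice_at(x, y)
    if idx != current:
        current = idx
        speak(MENU[idx])                 # reactive feedback on slice entry
speak(f"Selected {MENU[current]}")       # thumb lift-off confirms the selection
```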
FingARtips: gesture based direct manipulation in Augmented Reality
Proceedings of the 2nd …, 2004
This paper presents a technique for natural, fingertip-based interaction with virtual objects in Augmented Reality (AR) environments. We use image processing software and finger- and hand-based fiducial markers to track gestures from the user, stencil buffering to enable the user to see their fingers at all times, and fingertip-based haptic feedback devices to enable the user to feel virtual objects. Unlike previous AR interfaces, this approach allows users to interact with virtual content using natural hand gestures. The paper describes how these techniques were applied in an urban planning interface, and also presents preliminary informal usability results.
Markerless 3D gesture-based interaction for handheld Augmented Reality interfaces
2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2013
Conventional 2D touch-based interaction methods for handheld Augmented Reality (AR) cannot provide intuitive 3D interaction due to a lack of natural gesture input with real-time depth information. The goal of this research is to develop a natural interaction technique for manipulating virtual objects in 3D space on handheld AR devices. We present a novel method that is based on identifying the positions and movements of the user's fingertips, and mapping these gestures onto corresponding manipulations of the virtual objects in the AR scene. We conducted a user study to evaluate this method by comparing it with a common touch-based interface under different AR scenarios. The results indicate that although our method takes more time, it is more natural and enjoyable to use.
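A rough sketch of the fingertip-to-manipulation mapping described above, under the assumption that a pinch between thumb and index fingertips grabs an object and the pinch midpoint drives its translation; the threshold, class name, and coordinate conventions are invented for illustration.

```python
import numpy as np

# Illustrative sketch (assumed, not the paper's implementation): given tracked 3D
# thumb and index fingertip positions, treat a small fingertip distance as a
# "pinch grab" and move the grabbed virtual object with the pinch midpoint.
PINCH_THRESHOLD_M = 0.03   # assumed 3 cm pinch distance

class GrabbedObject:
    def __init__(self, position):
        self.position = np.asarray(position, dtype=float)
        self._anchor = None            # pinch midpoint when the grab started

    def update(self, thumb_tip, index_tip):
        thumb, index = np.asarray(thumb_tip, float), np.asarray(index_tip, float)
        midpoint = (thumb + index) / 2.0
        if np.linalg.norm(thumb - index) < PINCH_THRESHOLD_M:
            if self._anchor is None:
                self._anchor = midpoint            # grab starts
            self.position += midpoint - self._anchor
            self._anchor = midpoint                # object follows the pinch
        else:
            self._anchor = None                    # release

cube = GrabbedObject([0.0, 0.0, 0.5])
cube.update([0.10, 0.00, 0.40], [0.12, 0.00, 0.40])   # pinch closes: grab
cube.update([0.15, 0.05, 0.40], [0.17, 0.05, 0.40])   # pinch moves: cube follows
print(cube.position)                                  # -> [0.05 0.05 0.5]
```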
Marker-Based Finger Gesture Interaction in Mobile Augmented Reality
2019
This paper proposes a mobile AR application that allows the user to perform 2D interactions with 3D virtual objects. The user holds the mobile device in the left hand and interacts with the other, attaching two colored markers (stickers) to the fingers of the interacting hand: blue for the thumb and green for the index finger. User studies (experiments) were conducted to test three interaction types: translation, scaling and rotation. The application ran on a Samsung Note 5 device with Android 7 as the OS. Our results were based on the completion time per task per participant, in addition to a subjective questionnaire answered by the participants after finishing the user studies. The results showed that the approach suffered from a delay, implying lower performance, and users faced slight difficulty in accomplishing all tasks, yet the approach proved to be engaging and fun.
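The colored-marker tracking can be sketched with standard OpenCV calls; the HSV ranges and the pinch-distance helper below are assumptions for illustration, not the paper's calibration or code.

```python
import cv2
import numpy as np

# Rough sketch of colored-marker fingertip tracking: segment the blue thumb
# sticker and the green index sticker, then use the distance between their
# centroids to drive a gesture such as scaling.

# Approximate HSV ranges for the blue and green stickers (assumed values).
RANGES = {
    "thumb_blue":  ((100, 120, 70), (130, 255, 255)),
    "index_green": ((40, 120, 70),  (80, 255, 255)),
}

def marker_centroid(hsv, lo, hi):
    """Return the (x, y) centroid of the largest blob in the given HSV range, or None."""
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def pinch_distance(frame_bgr):
    """Distance in pixels between the thumb and index markers, or None if either is missing."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    thumb = marker_centroid(hsv, *RANGES["thumb_blue"])
    index = marker_centroid(hsv, *RANGES["index_green"])
    if thumb is None or index is None:
        return None
    return float(np.hypot(thumb[0] - index[0], thumb[1] - index[1]))

# A scale gesture can then map the frame-to-frame ratio of pinch_distance()
# onto the virtual object's scale factor.
```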
Eye gaze is potentially fast and ergonomic for target selection in AR. Nevertheless, it is reported to be inaccurate. To compensate for its low accuracy in selecting targets in an AR menu, previous researchers proposed dividing a menu into several sub-menus in which targets are arranged regularly, and mapping the sub-menu pointed at by eye gaze onto a Google Glass touchpad, on which the user confirms the selection of a target in the sub-menu via swipe or tap. However, whether this technique was effective in enhancing basic gaze-touch target selection was not investigated. We coined the term TangibleCounterpart for the essence of this technique and suggested using a cellphone touchscreen as the “tangible counterpart” for sub-menus. Further, we proposed a design space for the cellphone-based TangibleCounterpart concept that includes three dimensions, i.e., sub-menu size, the way a user holds the cellphone, and the touch technique for selection confirmation. Our empirical study showed th...
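As a reading aid, here is a minimal sketch of the gaze-plus-touchscreen confirmation described above, assuming the gazed sub-menu is mirrored as a regular grid on the phone screen; the sub-menu contents, grid size, and function name are hypothetical.

```python
# Hedged sketch of the cellphone-based TangibleCounterpart idea: eye gaze coarsely
# selects a sub-menu, and a tap on the phone touchscreen (whose surface mirrors
# the sub-menu's regular grid) confirms one item inside it.
SUBMENUS = {
    "colors": [["Red", "Green"], ["Blue", "Yellow"]],
    "tools":  [["Move", "Rotate"], ["Scale", "Delete"]],
}

def confirm_selection(gazed_submenu: str, touch_x: float, touch_y: float) -> str:
    """touch_x, touch_y are normalized touchscreen coordinates in [0, 1)."""
    grid = SUBMENUS[gazed_submenu]
    rows, cols = len(grid), len(grid[0])
    row = min(int(touch_y * rows), rows - 1)
    col = min(int(touch_x * cols), cols - 1)
    return grid[row][col]

# Gaze dwells on the "tools" sub-menu; a tap in the lower-left quadrant confirms "Scale".
print(confirm_selection("tools", touch_x=0.2, touch_y=0.8))
```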
Bare Hand Interface for Interaction in the Video See-Through HMD Based Wearable AR Environment
Lecture Notes in Computer Science, 2006
In this paper, we propose a natural and intuitive bare-hand interface for a wearable augmented reality environment using a video see-through HMD. The proposed method automatically learns the color distribution of the hand through template matching and tracks the hand using the Mean-shift algorithm under a dynamic background and a moving camera. Furthermore, even when users are not wearing gloves, the hand is separated from the arm by applying a distance transform and using the radius of the palm. Fingertip points are extracted by convex-hull processing and by constraining candidates relative to the palm radius, so users do not need to attach fiducial markers to their fingertips. Moreover, we implemented several applications to demonstrate the usefulness of the proposed algorithm. For example, "AR-Memo" lets the user take a memo in the real environment using a virtual pen augmented on the user's finger, and the user can also view the saved memo augmented on his/her palm while moving around. Finally, we evaluated performance and conducted usability studies.
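The palm and fingertip steps named in the abstract (distance transform for the palm radius, convex hull for fingertip candidates) can be approximated with standard OpenCV functions; the sketch below is one plausible reading, with the palm_factor parameter and mask conventions assumed rather than taken from the paper.

```python
import cv2
import numpy as np

# Sketch of the palm/fingertip steps described above: the distance transform of
# the hand mask gives the palm centre and radius, and convex-hull points that lie
# well outside the palm circle are kept as fingertip candidates.
def fingertips_from_mask(hand_mask, palm_factor=1.6):
    """hand_mask: binary uint8 image (255 = hand). Returns palm centre, palm radius, fingertips."""
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    _, palm_radius, _, palm_center = cv2.minMaxLoc(dist)    # max distance = palm radius

    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return palm_center, palm_radius, []
    contour = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(contour)                           # candidate extremities

    tips = []
    for (x, y) in hull.reshape(-1, 2):
        d = np.hypot(x - palm_center[0], y - palm_center[1])
        if d > palm_factor * palm_radius:                    # beyond the palm area
            tips.append((int(x), int(y)))
    return palm_center, palm_radius, tips
```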
exTouch: spatially-aware embodied manipulation of actuated objects mediated by augmented reality
2013
As domestic robots and smart appliances become increasingly common, they require a simple, universal interface to control their motion. Such an interface must support simple selection of a connected device, highlight its capabilities, and allow intuitive manipulation. We propose "exTouch", an embodied, spatially-aware approach to touching and controlling devices through an augmented-reality-mediated mobile interface. The exTouch system extends the user's touchscreen interactions into the real world by enabling spatial control over the actuated object. When users touch a device shown in live video on the screen, they can change its position and orientation through multi-touch gestures or by physically moving the screen in relation to the controlled object. We demonstrate that the system can be used for applications such as an omnidirectional vehicle, a drone, and moving furniture for a reconfigurable room.
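A loose sketch of the drag-to-motion mapping described above, assuming a simple pixel-to-meter gain and a planar velocity command; the gain, axis conventions, and command format are illustrative assumptions rather than the exTouch implementation.

```python
# Loose sketch: a drag on the live video image is converted into a planar motion
# command for the selected actuated object, with a two-finger twist mapped to yaw.
PIXELS_TO_METERS = 0.002   # assumed gain: 500 px of drag ~ 1 m of motion

def drag_to_motion_command(dx_px: float, dy_px: float, rotation_deg: float = 0.0):
    """Convert a multi-touch drag (pixels) and two-finger twist (degrees) into a command."""
    return {
        "vx": dx_px * PIXELS_TO_METERS,    # screen-right maps to object-right
        "vy": -dy_px * PIXELS_TO_METERS,   # screen-up maps to object-forward
        "yaw_deg": rotation_deg,           # two-finger twist rotates the object
    }

print(drag_to_motion_command(150.0, -50.0, rotation_deg=15.0))
# -> {'vx': 0.3, 'vy': 0.1, 'yaw_deg': 15.0}
```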