Towards joint attention for a domestic service robot - person awareness and gesture recognition using Time-of-Flight cameras

HAND GESTURE COMMAND TO UNDERSTANDING OF HUMAN-ROBOT INTERACTION

Global Scientific Journal, 2020

The concept of human-robot interaction has been raised in many domains, such as social, economic, medical, and military applications. Human-robot collaboration is a new direction that allows humans to perform tasks efficiently. Communication between humans and robots is limited due to difficulties in robot-human interfacing. This study provides an overview of hand gesture recognition for understanding robot-human interaction, presenting methods to identify gesture models, objects, and persons using image processing techniques. The study aims to develop an interactive service robot eye, a vision-based system capable of identifying human hand gestures. In this study, a Microsoft Kinect camera sensor was used to capture the hand gestures. The system's accuracy was found to be approximately 90% within the optimal distance of 4-5 feet from the Kinect sensor.
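As a rough illustration of how such depth-based hand capture might work at the reported 4-5 ft range, the sketch below segments the nearest blob in a Kinect depth frame as the hand candidate; the thresholds, the depth-band width, and the assumption that the hand is the closest object to the sensor are illustrative only, not the paper's method.

import numpy as np
import cv2

def segment_hand(depth_mm: np.ndarray, band_mm: int = 120):
    """Return a binary hand mask from a Kinect depth frame (mm), or None."""
    # Keep only depths inside a plausible working range (~0.4-1.6 m).
    valid = depth_mm[(depth_mm > 400) & (depth_mm < 1600)]
    if valid.size == 0:
        return None
    nearest = valid.min()
    # Keep pixels within a thin depth band around the nearest surface (the hand).
    mask = ((depth_mm >= nearest) & (depth_mm <= nearest + band_mm)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Keep only the largest connected component as the hand candidate.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255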

A Modular Approach to Gesture Recognition for Interaction with a Domestic Service Robot

Lecture Notes in Computer Science, 2011

In this paper, we propose a system for robust and flexible visual gesture recognition on a mobile robot for domestic service robotics applications. This adds a simple yet powerful mode of interaction, especially for the targeted user group of laymen and elderly or disabled people in home environments. Existing approaches often use a monolithic design, are computationally expensive, rely on previously learned (static) color models, or a specific initialization procedure to start gesture recognition. We propose a multi-step modular approach where we iteratively reduce the search space while retaining flexibility and extensibility. Building on a set of existing approaches, we integrate an on-line color calibration and adaptation mechanism for hand detection followed by feature-based posture recognition. Finally, after tracking the hand over time we adopt a simple yet effective gesture recognition method that does not require any training.
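As a rough illustration of the on-line color calibration and adaptation step mentioned above, the following sketch (assumed class and parameter names, not the authors' implementation) keeps a running hue-saturation histogram of the detected hand region and back-projects it onto new frames.

import numpy as np
import cv2

class AdaptiveHandColorModel:
    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha   # adaptation rate for the running histogram
        self.hist = None     # 2D hue-saturation histogram of the hand

    def update(self, frame_bgr: np.ndarray, hand_mask: np.ndarray) -> None:
        """Blend the histogram of the currently detected hand into the model."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], hand_mask, [30, 32], [0, 180, 0, 256])
        cv2.normalize(h, h, 0, 255, cv2.NORM_MINMAX)
        if self.hist is None:
            self.hist = h
        else:
            self.hist = ((1 - self.alpha) * self.hist + self.alpha * h).astype(np.float32)

    def detect(self, frame_bgr: np.ndarray) -> np.ndarray:
        """Back-project the adapted color model to get a hand-likelihood map."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        return cv2.calcBackProject([hsv], [0, 1], self.hist, [0, 180, 0, 256], 1)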

Visual recognition of pointing gestures for human–robot interaction

Image and Vision Computing, 2007

In this paper, we present an approach for recognizing pointing gestures in the context of human-robot interaction. In order to obtain input features for gesture recognition, we perform visual tracking of head, hands and head orientation. Given the images provided by a calibrated stereo camera, color and disparity information are integrated into a multi-hypothesis tracking framework in order to find the 3D-positions of the respective body parts. Based on the hands' motion, an HMM-based classifier is trained to detect pointing gestures. We show experimentally that the gesture recognition performance can be improved significantly by using information about head orientation as an additional feature. Our system aims at applications in the field of human-robot interaction, where it is important to do run-on recognition in real-time, to allow for robot egomotion and not to rely on manual initialization.
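A minimal sketch of the HMM-based classification idea follows, assuming per-frame feature vectors (e.g. hand velocity plus head orientation) are already extracted; the number of states, the covariance type, and the decision threshold are illustrative and not taken from the paper.

import numpy as np
from hmmlearn.hmm import GaussianHMM

# train_sequences: list of (T_i, D) arrays of features from labelled pointing gestures
def train_pointing_hmm(train_sequences, n_states: int = 3) -> GaussianHMM:
    X = np.concatenate(train_sequences)
    lengths = [len(s) for s in train_sequences]
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def is_pointing(model: GaussianHMM, window: np.ndarray, threshold: float) -> bool:
    """Classify a sliding window of features by its per-frame log-likelihood."""
    return model.score(window) / len(window) > threshold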

Gesture Recognition Supporting the Interaction of Humans with Socially Assistive Robots

Advances in Visual Computing, 2014

We propose a new approach for vision-based gesture recognition to support robust and efficient human-robot interaction towards developing socially assistive robots. The considered gestural vocabulary consists of five user-specified hand gestures that convey messages of fundamental importance in the context of human-robot dialogue. Despite their small number, the recognition of these gestures exhibits considerable challenges. Aiming at natural, easy-to-memorize means of interaction, users have identified gestures consisting of both static and dynamic hand configurations that involve different scales of observation (from arms to fingers) and exhibit intrinsic ambiguities. Moreover, the gestures need to be recognized regardless of the multifaceted variability of the human subjects performing them. Recognition needs to be performed online, in continuous video streams containing other irrelevant/unmodeled motions. All of the above needs to be achieved by analyzing information acquired by a possibly moving RGBD camera, in cluttered environments with considerable light variations. We present a gesture recognition method that addresses the above challenges, as well as promising experimental results obtained from relevant user trials.
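The paper's method is not reproduced here, but a small generic sketch shows one common way to perform online recognition in a continuous stream while rejecting irrelevant motions: smoothing per-frame predictions with a sliding-window majority vote (the window size and vote threshold below are made-up values).

from collections import Counter, deque

class StreamingGestureFilter:
    def __init__(self, window: int = 15, min_votes: int = 10):
        self.buffer = deque(maxlen=window)
        self.min_votes = min_votes

    def push(self, frame_label: str):
        """frame_label is the per-frame classifier output (e.g. 'point', 'wave', 'none')."""
        self.buffer.append(frame_label)
        label, votes = Counter(self.buffer).most_common(1)[0]
        if label != "none" and votes >= self.min_votes:
            return label   # a gesture is confidently recognized
        return None        # otherwise stay silent and keep accumulating frames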

A real-time Human-Robot Interaction system based on gestures for assistive scenarios

Computer Vision and Image Understanding, 2016

Natural and intuitive human interaction with robotic systems is a key point in developing robots that assist people in an easy and effective way. In this paper, a Human-Robot Interaction (HRI) system able to recognize gestures commonly employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture consisting in pointing at an object is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill would allow the robot to pick up an object on behalf of a user who may have difficulty doing so alone. The overall system, which is composed of NAO and Wifibot robots, a Kinect v2 sensor, and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests has been completed, which allows assessing correct performance in terms of recognition rates, ease of use, and response times.
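For the Dynamic Time Warping step, a compact sketch is given below; the feature extraction from depth maps is assumed to happen elsewhere, and the per-gesture template dictionary is hypothetical.

import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW between two (T, D) feature sequences with Euclidean frame cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)   # length-normalized accumulated distance

def classify(query: np.ndarray, templates: dict) -> str:
    """Return the gesture label whose template is closest to the query sequence."""
    return min(templates, key=lambda g: dtw_distance(query, templates[g]))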

Real-time recognition of pointing gestures for robot to robot interaction

2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014

This paper addresses the idea of establishing symbolic communication between mobile robots through gesturing. Humans communicate using body language and gestures in addition to other linguistic modalities like prosody and text or dialog structure. This research aims to develop a pointing gesture detection system for robot-to-robot communication scenarios, granting robots the ability to convey object identity information without global localization of the agents. The detection is based on RGB-D data, and a NAO humanoid robot is used as the pointing agent in the experiments. The presented algorithms are based on the PCL library. The results indicate that real-time detection of pointing gestures can be performed with little information about the embodiment of the pointing agent, and that an observing agent can use the gesture detection to perform actions on the pointed targets.
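The paper builds on the PCL library in C++; the following simplified numpy sketch only illustrates the underlying geometric idea of fitting a line to the segmented arm points and selecting the candidate object closest to the resulting ray (all names are assumptions).

import numpy as np

def pointing_ray(arm_points: np.ndarray):
    """Least-squares 3D line fit through (N, 3) arm points via PCA."""
    origin = arm_points.mean(axis=0)
    _, _, vt = np.linalg.svd(arm_points - origin)
    direction = vt[0] / np.linalg.norm(vt[0])
    # Note: the sign of the PCA direction is ambiguous; a full system would
    # disambiguate it (e.g. shoulder-to-hand ordering). Ignored here for brevity.
    return origin, direction

def pointed_object(arm_points: np.ndarray, object_centers: np.ndarray) -> int:
    """Index of the object center with the smallest distance to the pointing line."""
    origin, direction = pointing_ray(arm_points)
    rel = object_centers - origin
    proj = rel @ direction
    # Perpendicular distance of each center from the infinite line through origin.
    dists = np.linalg.norm(rel - np.outer(proj, direction), axis=1)
    return int(np.argmin(dists))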

Probabilistic Detection of Pointing Directions for Human-Robot Interaction

2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2015

Deictic gestures, i.e. pointing at things in human-human collaborative tasks, constitute a pervasive, non-verbal way of communication, used e.g. to direct attention towards objects of interest. In a human-robot interactive scenario, in order to delegate tasks from a human to a robot, one of the key requirements is to recognize the pointing gesture and estimate its pose. Standard approaches rely on full-body or partial-body postures to detect the pointing direction. We present a probabilistic, appearance-based object detection framework to detect pointing gestures and robustly estimate the pointing direction. Our method estimates the pointing direction without assuming any human kinematic model. We propose a functional model for pointing which incorporates two types of pointing: finger pointing, and tool pointing using an object in hand. We evaluate our method on a new dataset with 9 participants pointing at 10 objects.
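As a loose illustration of marginalizing over the two pointing types (finger pointing and tool pointing), the sketch below scores candidate targets with a von Mises-like angular likelihood around each hypothesized direction; all names, the concentration parameter, and the fusion rule are assumptions, not the authors' model.

import numpy as np

def target_scores(hand_pos, finger_dir, tool_dir, p_finger, p_tool,
                  targets, kappa: float = 8.0) -> np.ndarray:
    """Unnormalized P(target | gesture) for each (3,) target position in `targets`."""
    rel = targets - hand_pos
    rel = rel / np.linalg.norm(rel, axis=1, keepdims=True)
    # Angular likelihood of each target under the finger and tool hypotheses.
    lik_finger = np.exp(kappa * (rel @ finger_dir))
    lik_tool = np.exp(kappa * (rel @ tool_dir))
    # Marginalize over the two pointing types with their detection probabilities.
    return p_finger * lik_finger + p_tool * lik_tool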

Learning to Interpret Pointing Gestures with a Time-of-Flight Camera

Pointing gestures are a common and intuitive way to draw somebody's attention to a certain object. While humans can easily interpret robot gestures, the perception of human behavior using robot sensors is more difficult. In this work, we propose a method for perceiving pointing gestures using a Time-of-Flight (ToF) camera. To determine the intended pointing target, frequently the line between a person's eyes and hand is assumed to be the pointing direction. However, since people tend to keep the line-of-sight free while they are pointing, this simple approximation is inadequate. Moreover, depending on the distance and angle to the pointing target, the line between shoulder and hand or elbow and hand may yield better interpretations of the pointing direction. In order to achieve a better estimate, we extract a set of body features from depth and amplitude images of a ToF camera and train a model of pointing directions using Gaussian Process Regression. We evaluate the accuracy of the estimated pointing direction in a quantitative study. The results show that our learned model achieves far better accuracy than simple criteria like the head-hand, shoulder-hand, or elbow-hand line.
Figure 1: Application scenario from the ICRA 2010 Mobile Manipulation Challenge. Our robot approaches the user and he selects a drink by pointing to it.
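A sketch of the learning step is shown below under stated assumptions (the paper's exact body features and kernel choice are not reproduced): Gaussian Process Regression from extracted body features to the pointing direction, here using scikit-learn.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# X: (N, D) body-feature vectors (e.g. head, shoulder, elbow, hand positions)
# Y: (N, 2) pointing directions, e.g. azimuth/elevation of the ground-truth ray
def train_pointing_model(X: np.ndarray, Y: np.ndarray) -> GaussianProcessRegressor:
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(X, Y)
    return gpr

def predict_direction(gpr: GaussianProcessRegressor, features: np.ndarray):
    """Predicted pointing direction and its uncertainty for one feature vector."""
    mean, std = gpr.predict(features.reshape(1, -1), return_std=True)
    return mean[0], std[0]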

Robot-supported pointing interaction for intelligent environments

Natural interaction with appliances in smart environments is a highly desired way of controlling the surroundings using intuitively learned, interpersonal means of communication. Hand and arm gestures, recognized by depth cameras, are a popular representative of this interaction paradigm. However, they usually require stationary sensing units, which limits applicability in larger environments. To overcome this problem, we introduce a self-localizing mobile robot system that autonomously follows the user through the environment in order to recognize performed gestures independently of the current user position. We have realized a prototypical implementation using a custom robot platform and evaluated the system with various users.
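The person-following behaviour the system depends on can be sketched as a simple proportional controller; the interfaces, gains, and velocity limits below are assumptions for illustration, not the authors' implementation.

def follow_user(person_bearing_rad: float, person_distance_m: float,
                target_distance_m: float = 1.5,
                k_ang: float = 1.0, k_lin: float = 0.6):
    """Return (linear_velocity_m_s, angular_velocity_rad_s) commands that keep
    the detected person centred in the camera view and at a fixed distance."""
    angular = k_ang * person_bearing_rad                      # turn toward the person
    linear = k_lin * (person_distance_m - target_distance_m)  # close the distance gap
    # Clamp commands to conservative limits before sending them to the base.
    return max(min(linear, 0.5), -0.2), max(min(angular, 1.0), -1.0)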

A gesture based interface for human-robot interaction

Autonomous Robots

Service robotics is currently a pivotal research area in robotics, with enormous societal potential. Since service robots directly interact with people, finding "natural" and easy-to-use user interfaces is of fundamental importance. While past work has predominantly focused on issues such as navigation and manipulation, relatively few robotic systems are equipped with flexible user interfaces that permit controlling the robot by "natural" means.