Multi-modal features for real-time detection of human-robot interaction categories
Related papers
Recognizing interaction from a robot's perspective
IEEE International Symposium on Robot and Human Interactive Communication, 2005
For meaningful interaction between a robot and a human, an autonomous robot must recognize whether the experienced situation is created by people or by the environment. Using only proprioceptive data from a mobile robotic platform, we discover that it is possible to distinguish sensory data patterns involving interaction. These patterns are obtained whilst navigating varying environments, both human populated and ...
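To make the idea concrete, here is a minimal sketch of classifying windowed proprioceptive streams as interaction-driven or environment-driven. The signals, window size, features, and classifier are all assumptions for illustration; the paper does not publish this exact pipeline.

```python
# Hedged sketch: classify fixed-length windows of a proprioceptive signal
# (e.g., wheel current or bumper force) as "interaction" vs. "environment".
import numpy as np
from sklearn.svm import SVC

def window_features(signal, window=50):
    """Summarize each window of a 1-D stream by its mean and standard deviation."""
    n = len(signal) // window
    chunks = signal[: n * window].reshape(n, window)
    return np.column_stack([chunks.mean(axis=1), chunks.std(axis=1)])

rng = np.random.default_rng(1)
env = rng.normal(0.0, 0.1, 5000)    # smooth, environment-driven signal (synthetic)
human = rng.normal(0.0, 0.5, 5000)  # bursty signal from human contact (synthetic)

X = np.vstack([window_features(env), window_features(human)])
y = np.array([0] * 100 + [1] * 100)  # 0 = environment, 1 = interaction

clf = SVC().fit(X, y)
print("train accuracy:", clf.score(X, y))
```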
Bidirectional Multi-modal Signs of Checking Human-Robot Engagement and Interaction
International Journal of Social Robotics
The anthropomorphization of human-robot interactions is a fundamental aspect of the design of social robotics applications. This article describes how an interaction model based on multimodal signs (visual, auditory, tactile, proxemic, and others) can improve communication between humans and robots. We examined and appropriately filtered all the robot sensory data needed to realize our interaction model. We also paid particular attention to backchannel communication, making it both bidirectional and evident through auditory and visual signals. Our model, based on a task-level architecture, was integrated into an application called W@ICAR, which proved efficient and intuitive with people unaccustomed to interacting with robots. It has been validated from both a functional and a user-experience point of view, showing positive results. Both the pragmatic and the hedonic estimators show that many users particularly appreciated the application. The model component has been...
ArXiv, 2020
It is crucial for any assistive robot to prioritize the autonomy of the user. For a robot working in a task setting to effectively maintain a user's autonomy, it must provide timely assistance and make accurate decisions. We use four independent high-precision, low-recall models (a mutual gaze model, a task model, a confirmatory gaze model, and a lexical model) that predict a user's need for assistance. To improve on these independent models, we use a sliding-window method and a random forest classifier to capture temporal dependencies and fuse the independent models in a late-fusion approach. The late-fusion approach strongly outperforms all four independent models, providing a more holistic approach with greater accuracy to better assist the user while maintaining their autonomy. These results offer insight into the potential of including additional modalities and of deploying assistive robots in more task settings.
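A minimal sketch of this late-fusion scheme, assuming each of the four independent models emits a per-timestep assistance probability; the window length, data, and function names are illustrative, not taken from the paper.

```python
# Late fusion: stack the four models' per-timestep outputs over a sliding
# window and feed the stacked features to a random forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 10  # sliding-window length in timesteps (assumed)

def fuse_windows(model_probs, labels, window=WINDOW):
    """Build one feature row per timestep from a window of model outputs.

    model_probs: (T, 4) array -- columns = mutual gaze, task,
                 confirmatory gaze, and lexical model probabilities.
    labels:      (T,) binary need-for-assistance labels.
    """
    X, y = [], []
    for t in range(window, len(labels)):
        X.append(model_probs[t - window:t].ravel())  # window x 4 features
        y.append(labels[t])
    return np.array(X), np.array(y)

# Synthetic stand-in data for illustration only.
rng = np.random.default_rng(0)
probs = rng.random((500, 4))
labels = (probs.mean(axis=1) > 0.5).astype(int)

X, y = fuse_windows(probs, labels)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("train accuracy:", clf.score(X, y))
```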
A dataset of human and robot approach behaviors into small free-standing conversational groups
PLOS ONE, 2021
The analysis and simulation of the interactions that occur in group situations are important when humans and artificial agents, physical or virtual, must coordinate while inhabiting similar spaces or even collaborate, as in the case of human-robot teams. Artificial systems should adapt to the natural interfaces of humans rather than the other way around. Such systems should be sensitive to human behaviors, which are often social in nature, and account for human capabilities when planning their own behaviors. A limiting factor is our understanding of how humans behave with respect to each other and to artificial embodiments, such as robots. To this end, we present CongreG8 (pronounced ‘con-gre-gate’), a novel dataset containing the full-body motions of free-standing conversational groups of three humans and a newcomer that approaches the groups with the intent of joining them. The aim has been to collect an accurate and detailed set of positioning, orienting and full-body beh...
Machine Learning of Social States and Skills for Multi-Party Human-Robot Interaction
Machine Learning for Interactive Systems: Bridging the Gap Between Language, Motor Control and Vision, 2012
We describe several forms of machine learning that are being applied to social interaction in Human-Robot Interaction (HRI), using a robot bartender as our scenario. We first present a data-driven approach to social state recognition based on supervised learning. We then describe an approach to social interaction management based on reinforcement learning, using a data-driven simulation of multiple users to train HRI policies. Finally, we discuss an alternative unsupervised learning framework that combines social ...
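As a rough illustration of the supervised social-state recognition step, a small sketch follows. The per-person features (distance to the bar, facing angle, speech activity) and the choice of logistic regression are assumptions for illustration, not the authors' exact pipeline.

```python
# Toy supervised classifier for a binary social state:
# is a tracked person seeking engagement with the bartender?
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [distance_to_bar_m, abs_facing_angle_rad, is_speaking]
X = np.array([
    [0.4, 0.1, 1],  # close, facing the bartender, speaking -> seeking engagement
    [0.5, 0.2, 0],
    [2.5, 1.4, 0],  # far away, turned aside -> not seeking engagement
    [3.0, 1.2, 1],
])
y = np.array([1, 1, 0, 0])  # 1 = seeking engagement

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.6, 0.3, 0]]))  # predicted state for a new observation
```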
Robot Classification of Human Interruptibility and a Study of Its Effects
ACM Transactions on Human-Robot Interaction, 2018
As robots become increasingly prevalent in human environments, there will inevitably be times when a robot needs to interrupt a human to initiate an interaction. Our work introduces the first interruptibility-aware mobile-robot system, which uses social and contextual cues online to accurately determine when to interrupt a person. We evaluate multiple non-temporal and temporal models on the interruptibility classification task and show that a variant of Conditional Random Fields (CRFs), the Latent-Dynamic CRF, is the most robust, accurate, and appropriate model for use on our system. Additionally, we evaluate different classification features and show that the observed demeanor of a person can help in interruptibility classification, and that, in the presence of detection noise, robust detection of object labels as a visual cue to the interruption context can improve interruptibility estimates. Finally, we deploy our system in a large-scale user study to understand the effects of inter...
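A minimal temporal-classification sketch in the spirit of the CRF comparison above, using a plain linear-chain CRF (via sklearn-crfsuite) as a stand-in for the Latent-Dynamic CRF, which lacks a widely packaged Python implementation. The feature names (demeanor, detected object label) are illustrative assumptions.

```python
# Per-frame interruptibility labeling over a sequence of observations,
# so the model can exploit temporal dependencies between frames.
import sklearn_crfsuite

# One sequence of per-frame feature dicts for a single observed person.
frames = [
    {"demeanor": "focused", "object": "laptop"},
    {"demeanor": "focused", "object": "laptop"},
    {"demeanor": "relaxed", "object": "mug"},
]
labels = ["not_interruptible", "not_interruptible", "interruptible"]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit([frames], [labels])   # train on (toy) labeled sequences
print(crf.predict([frames]))  # per-frame interruptibility estimates
```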