Robots Can Train Humans Using Principles of Operant Conditioning Through Visual Reinforcement Tools

What can robots learn from humans?

Annual Reviews in Control, 1996

Programming by Demonstration (PbD) is an intuitive method of programming a robot. The user, acting as a teacher or programmer, shows how a particular task should be carried out. The demonstration is monitored using an interface device that allows the measurement and recording of both the applied commands and the data simultaneously perceived by the robot's sensors. This paper identifies the kind of knowledge that the robot can actually acquire from the human user through demonstrations and the requirements that must be met in order to interpret what has been demonstrated. Finally, it presents and experimentally evaluates an approach to the integrated acquisition, evaluation, tuning, and execution of elementary skills and task-level programs for robots based on human demonstrations.
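
The abstract describes the core PbD loop: record the teacher's applied commands together with the sensor data the robot perceives at the same instant. Below is a minimal sketch of such a recorder; the paper does not expose an API, so the `robot` accessors and all class names are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch of a PbD demonstration recorder. The `robot` object and
# its accessors are hypothetical; the paper does not specify an interface.
import time
from dataclasses import dataclass, field

@dataclass
class Sample:
    t: float       # time since the demonstration started
    command: list  # command applied by the human teacher (e.g., joint velocities)
    sensors: dict  # sensor readings perceived at the same instant

@dataclass
class Demonstration:
    samples: list = field(default_factory=list)

def record_demonstration(robot, duration_s=10.0, rate_hz=50.0):
    """Log (command, sensors) pairs while the user demonstrates the task."""
    demo, period, t0 = Demonstration(), 1.0 / rate_hz, time.time()
    while time.time() - t0 < duration_s:
        demo.samples.append(Sample(
            t=time.time() - t0,
            command=robot.read_applied_command(),  # hypothetical accessor
            sensors=robot.read_sensors(),          # hypothetical accessor
        ))
        time.sleep(period)
    return demo
```

The recorded pairs are what the later stages the paper evaluates (skill acquisition, evaluation, tuning, execution) would consume.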

Differences in Learning from Complex Versus Simple Visual Interfaces When Operating a Model Excavator

2016

The goal of this study was to test two visual co-robot interfaces (one simple and one more complex) and their effectiveness in teaching a novice participant to operate a complex machine at a later date without assistance. Participants (N = 113) were randomly assigned to one of three groups (one with a basic user interface, one with a more complex guidance interface, and one without an interface) to test the teaching ability of the co-robot in training the user to perform a task with a remote-controlled excavator. Each group was asked to load dirt from a bin into a small model dump truck (in scale with the excavator) with the help of the robot instructor, and to return a few days later to complete the task again without the robot instructor. Trials were monitored for completion time and errors and compared to those of an expert operator. The simple interface proved slightly more effective than the more complex version at teaching humans a complicated task, which suggests that novices may learn better and retain more information when given basic feedback (using operant conditioning principles) and less guidance from robot teachers. As robots are increasingly used to help humans learn skills, industries may benefit from simpler guided instructions rather than more complex versions. Such changes in training may result in improved situational awareness and increased safety in the workplace.
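
The study does not publish its interface code, so the following is only one plausible reading of "basic feedback using operant conditioning principles": reinforce the trainee immediately after each correct sub-action, flag errors without detailed guidance, and log the completion time and error count the trials were scored on. All names are hypothetical.

```python
# Hedged sketch of an operant-conditioning-style training loop; not the study's software.
import time

class SimpleTrainerUI:
    """Hypothetical stand-in for the basic visual interface."""
    def show(self, msg):
        print(msg)

def run_trial(subtasks, ui):
    """subtasks: callables returning True on success (e.g., one scoop loaded)."""
    errors, t0 = 0, time.time()
    for step in subtasks:
        if step():
            ui.show("+ correct")    # immediate positive reinforcement
        else:
            errors += 1
            ui.show("x try again")  # simple error cue, no detailed guidance
    return {"time_s": time.time() - t0, "errors": errors}

# Example: run_trial([lambda: True, lambda: False, lambda: True], SimpleTrainerUI())
```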

Towards understanding how humans teach robots

International Conference on User Modeling, Adaptation, and Personalization, 2011

Our goal is to develop methods for non-experts to teach complex behaviors to autonomous agents (such as robots) by accommodating "natural" forms of human teaching. We built a prototype interface allowing humans to teach a simulated robot a complex task using several techniques and report the results of 44 human participants using this interface. We found that teaching styles varied considerably but can be roughly categorized based on the types of interaction, patterns of testing, and general organization of the lessons given by the teacher. Our study contributes to a better understanding of human teaching patterns and makes specific recommendations for future human-robot interaction systems.

Early Experiments using Motivations to Regulate Human-Robot Interaction

We present the results of some early experiments with an autonomous robot that demonstrate its ability to regulate the intensity of social interaction with a human. The mode of social interaction is that of a caretaker-infant pair, where a human acts as the caretaker for the robot. With respect to this type of socially situated learning, the ability to regulate the intensity of the interaction is important for promoting and maintaining a suitable learning environment in which the learner (infant or robot) is neither overwhelmed nor under-stimulated. The implementation and early demonstrations of this skill by our robot are the topic of this paper.
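
The abstract does not give the underlying model, but work in this line typically keeps a "drive" variable inside a homeostatic band and uses expressive cues to modulate the caretaker's behavior. The schematic below is an assumption-labeled illustration of that idea, not the paper's implementation:

```python
# Schematic homeostatic regulation of interaction intensity (illustrative only).
def regulate(stimulation, drive, lo=-0.3, hi=0.3, decay=0.05):
    """Update a social drive and pick an expressive cue for the caretaker."""
    drive = drive + stimulation - decay  # interaction raises the drive; it decays otherwise
    if drive > hi:    # overwhelmed: ask the caretaker to back off
        cue = "withdraw"
    elif drive < lo:  # under-stimulated: solicit more interaction
        cue = "solicit"
    else:             # homeostatic regime: suitable for learning
        cue = "engage"
    return drive, cue
```

Keeping `drive` between `lo` and `hi` corresponds to the "neither overwhelmed nor under-stimulated" condition the abstract identifies as suitable for learning.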

Hitting the Nail on the Head: An Investigation of Robot Learning

The human brain’s control strategies for using a tool such as a hammer show promise for improving tool use on robotic arms. The specific case of teaching a robotic arm to drive a nail into a board is utilized in this study to evaluate two biologically-inspired controller designs for the ballistic swing of a hammer. The study compares a Q-Learning Table Controller and a Population Coded Neural Network Controller and presents simulation results. In both cases, the proposed controller designs learn to improve their efficiency at driving nails into the board as they drive more nails.
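
For readers unfamiliar with the first of the two controllers, the tabular Q-learning update at its core is standard; the state and action handling below (and all names) are assumptions for illustration, not taken from the paper:

```python
# Standard tabular Q-learning step, as a "Q-Learning Table Controller" would use.
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)] -> estimated return
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def choose_action(state, actions):
    if random.random() < epsilon:                     # explore occasionally
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])  # otherwise exploit

def update(state, action, reward, next_state, actions):
    """Move Q(s, a) toward reward + gamma * max_a' Q(s', a')."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

With a reward such as nail depth per swing, repeated updates of this form account for the reported behavior of the controllers becoming more efficient as they drive more nails.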

The Impact of Human-Robot Interfaces on the Learning of Visual Objects

IEEE Transactions on Robotics, 2013

This paper studies the impact of interfaces that allow nonexpert users to efficiently and intuitively teach a robot to recognize new visual objects. We present challenges that need to be addressed for real-world deployment of robots capable of learning new visual objects in interaction with everyday users. We argue that in addition to robust machine learning and computer vision methods, well-designed interfaces are crucial for learning efficiency. In particular, we argue that interfaces can be key in helping nonexpert users to collect good learning examples and, thus, improve the performance of the overall learning system. We then present four alternative human-robot interfaces: three are based on the use of a mediating artifact (smartphone, wiimote, wiimote and laser), and one is based on natural human gestures (with a Wizard-of-Oz recognition system). These interfaces mainly vary in the kind of feedback provided to the user, allowing the user to understand more or less easily what the robot is perceiving and, thus, guiding how they provide training examples. We then evaluate the impact of these interfaces, in terms of learning efficiency, usability, and user experience, through a real-world, large-scale user study. In this experiment, we asked participants to teach a robot 12 different new visual objects in the context of a robotic game. The game takes place in a home-like environment and was designed to motivate and engage users in an interaction where using the system was meaningful. We then discuss results that show significant differences among interfaces. In particular, we show that interfaces such as the smartphone interface allow nonexpert users to intuitively provide much better training examples to the robot, almost as good as those of expert users who are trained for this task and are aware of the various visual perception and machine learning issues. We also show that artifact-mediated teaching is significantly more efficient for robot learning than, and equally good in terms of usability and user experience as, teaching through gesture-based, human-like interaction.
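
The mechanism the abstract credits for the smartphone interface's advantage is percept feedback: the user sees what the robot sees before labeling. A minimal sketch of that collection loop, with all names hypothetical, might look like this:

```python
# Illustrative sketch: echo the robot's percept to the user so a label is only
# confirmed once the object is well framed. Not the authors' system.
def collect_example(robot, ui, label):
    """Return one (image, label) training example vetted by the user."""
    while True:
        view = robot.camera_view()  # hypothetical: the robot's current percept
        ui.show(view)               # feedback: what the robot is actually seeing
        if ui.confirm(f"Is the {label} well framed?"):
            return {"image": view, "label": label}
```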

On an educational approach to behavior learning for robots

2012

This paper introduces a system for teaching biologically-motivated robot learning in university classrooms that might be used in courses such as Artificial Intelligence and/or Robotics. For this, we present a simple hardware robot that is able to learn a forward ...
