When the User is Instrumental to Robot Goals
Related papers
G.F.: When the user is instrumental to robot goals. First try: Agent uses agent
2008
To create a robot with a mind of its own, we extended a formalized version of a model that explains affect-driven interaction with mechanisms for goal-directed behavior. We ran simulation experiments with intelligent software agents and found that agents preferred affect-driven decision options over rational decision options in situations where choosing an option with low expected utility is irrational. This behavior counters current models of decision making, which generally have a hedonic bias and always select the option with the highest expected utility.
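As a rough illustration of the contrast this paper draws, the sketch below blends an affect score into an otherwise utility-maximizing choice rule. The option names, scores, and weighting are hypothetical and not taken from the paper's model.

```python
# Hypothetical decision options: (expected_utility, affect_score).
options = {
    "rational_pick": (0.9, 0.1),   # high expected utility, low affective appeal
    "affective_pick": (0.4, 0.9),  # low expected utility, high affective appeal
}

def expected_utility_choice(options):
    """Classical rule: always take the option with the highest expected utility."""
    return max(options, key=lambda o: options[o][0])

def affect_driven_choice(options, affect_weight=0.7):
    """Blend utility with affect; a large enough weight overturns the rational pick."""
    return max(options, key=lambda o: (1 - affect_weight) * options[o][0]
                                      + affect_weight * options[o][1])

print(expected_utility_choice(options))  # -> rational_pick
print(affect_driven_choice(options))     # -> affective_pick
```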
Lecture Notes in Computer Science, 2019
In this paper we present a fully autonomous, intrinsically motivated robot usable for HRI experiments. We argue that an intrinsically motivated approach based on the Predictive Information formalism, like the one presented here, could provide a pathway towards autonomous robot behaviour generation that is capable of producing behaviour interesting enough to sustain interaction with humans, without the need for a human operator in the loop. We also present a possible reactive baseline behaviour for comparison in future research. Participants perceive the baseline and the adaptive, intrinsically motivated behaviour differently. In our exploratory study we see evidence that participants perceive an intrinsically motivated robot as less intelligent than the reactive baseline behaviour. We argue that this is mostly due to the high adaptation rate chosen and the design of the environment. However, we also see that the adaptive robot is perceived as warmer, a factor which carries more weight in interpersonal interaction than competence.
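For intuition, Predictive Information is the mutual information between a system's past and future sensor states. The sketch below is a plug-in estimator of the one-step quantity on a discretized stream; this is a simplification for illustration, not the continuous formalism the paper uses.

```python
from collections import Counter
import math

def predictive_information(states):
    """Plug-in estimate of I(s_t; s_{t+1}) for a discrete state sequence, in bits."""
    pairs = list(zip(states, states[1:]))
    n = len(pairs)
    joint = Counter(pairs)
    past = Counter(s for s, _ in pairs)
    future = Counter(s for _, s in pairs)
    mi = 0.0
    for (s, t), c in joint.items():
        mi += (c / n) * math.log2(c * n / (past[s] * future[t]))
    return mi

# A deterministic alternating stream: the future is fully determined (~1 bit).
print(predictive_information([0, 1] * 50))
# A constant stream: nothing to predict (0 bits).
print(predictive_information([0] * 100))
```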
We propose an architecture that integrates Theory of Mind into a robot's decision-making to infer a human's intention and adapt to it. The architecture implements human-robot collaborative decision-making for a robot that incorporates human variability in emotional and intentional states. This research first implements a mechanism for stochastically estimating a human's belief over the state of the actions that the human could possibly be executing. We then integrate this information into a novel stochastic human-robot shared planner that models the human's preferred plan. Our contribution lies in the ability of our model to handle two conditions: 1) when the human's intention is estimated incorrectly and the true intention may be unknown to the robot, and 2) when the human's intention is estimated correctly but the human does not want the robot's assistance in the given context. A robot integrating this model into its decision-making process would better understand a human's need for assistance and could therefore adapt to behave less intrusively and more reasonably in assisting its human companion.
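The intention-estimation step can be pictured as a Bayesian filter over candidate intentions. The sketch below is a minimal version assuming a known observation likelihood; the intention names and numbers are illustrative, not the architecture's actual model.

```python
def update_belief(belief, likelihood):
    """One Bayes step: posterior is proportional to prior * P(observation | intention)."""
    posterior = {i: belief[i] * likelihood[i] for i in belief}
    z = sum(posterior.values())
    return {i: p / z for i, p in posterior.items()}

# Prior: the robot is unsure which task the human is pursuing.
belief = {"set_table": 0.5, "clear_table": 0.5}
# Observation: the human reaches for clean plates (likelier when setting the table).
belief = update_belief(belief, {"set_table": 0.8, "clear_table": 0.2})
print(belief)  # set_table rises to ~0.8
```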
Early Experiments using Motivations to Regulate Human-Robot Interaction
We present the results of some early experiments with an autonomous robot to demonstrate its ability to regulate the intensity of social interaction with a human. The mode of social interaction is that of a caretaker-infant pair where a human acts as the caretaker for the robot. With respect to this type of socially situated learning, the ability to regulate the intensity of the interaction is important for promoting and maintaining a suitable learning environment in which the learner (infant or robot) is neither overwhelmed nor under-stimulated. The implementation and early demonstrations of this skill by our robot are the topic of this paper.
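The regulation demonstrated here can be thought of as a homeostatic control loop: the robot acts to keep a stimulation drive inside a comfortable band. The thresholds and behaviour labels below are hypothetical placeholders, not the robot's actual control loop.

```python
def regulate(stimulation, low=0.3, high=0.7):
    """Keep a stimulation drive inside a homeostatic band via social behaviour."""
    if stimulation < low:
        return "seek_stimulation"   # under-stimulated: solicit the caretaker
    if stimulation > high:
        return "withdraw"           # overwhelmed: damp the interaction
    return "engage"                 # comfortable band: keep interacting

for level in (0.1, 0.5, 0.9):
    print(level, regulate(level))
```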
Investigating Adjustable Social Autonomy in Human Robot Interaction
2021
More and more often, Human-Robot Interaction (HRI) applications require the design of robotic systems whose decision process involves the capability to evaluate not only the physical environment, but especially the mental states and features of the human interlocutor, in order to adapt their social autonomy whenever humans require the robot's help. Robots will be truly cooperative and effective only when they can consider not just the goals or interests explicitly stated by humans, but also those left undeclared, and provide help that goes beyond literal task execution. In order to improve the quality of this kind of smart help, a robot has to perform a meta-evaluation of its own predictive skills to build a model of the interlocutor and of her/his goals. The robot's capability to trust its own skills in interpreting the interlocutor and the context is a fundamental requirement for producing smart and effective decisions towards humans. In t...
A Model of a Robot's Will Based on Higher-Order Desires
Autonomous robots implement decision-making capacities on several layers of abstraction. Put in terms of desires, decision making evaluates desires to eventually commit to the most rational one. Drawing on the philosophical literature on volition and agency, this work introduces a conceptual model that enables robots to reason about which desires they want to want to realize, i.e., higher-order desires. As a result, six jointly exhaustive and pairwise disjoint types of choices are defined. A technical evaluation shows how to add a robot's will to its rational decision-making capacity. This guarantees that informed choices are possible even in cases where rational decision making alone is indecisive. Further applications to modeling personality traits for human-robot interaction are discussed.
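In the spirit of the model (though not its actual formalization), second-order desires can break ties that first-order rational evaluation leaves open. The desire names and scores below are hypothetical.

```python
first_order = {"watch_tv": 0.8, "exercise": 0.8}    # rational evaluation ties
second_order = {"watch_tv": 0.2, "exercise": 0.9}   # what the agent wants to want

def choose(first_order, second_order, eps=1e-9):
    best = max(first_order.values())
    candidates = [d for d, v in first_order.items() if abs(v - best) < eps]
    if len(candidates) == 1:
        return candidates[0]        # rational decision making suffices
    # Otherwise the agent's will, i.e. its higher-order desires, decides.
    return max(candidates, key=lambda d: second_order[d])

print(choose(first_order, second_order))  # -> exercise
```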
Attribution of Mental State in Strategic Human-Robot Interactions
Research Square (Research Square), 2022
Based on an experiment in which human subjects are paired with either another human or an anthropomorphic robot while playing an iterated prisoner's dilemma, the paper investigates whether (and how) the level of mental state that subjects attribute to an anthropomorphic robot depends on the "earnestness" of the robot, i.e., the correspondence between what the robot said and how it behaved after a non-optimal social outcome is achieved.
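For reference, the experimental task follows the standard prisoner's dilemma payoff ordering T > R > P > S. The concrete values below are the conventional 5/3/1/0 scheme, not necessarily those used in the study.

```python
# Row and column players' payoffs for each action pair (C = cooperate, D = defect).
PAYOFF = {
    ("C", "C"): (3, 3),  # mutual cooperation (R, R)
    ("C", "D"): (0, 5),  # sucker's payoff vs temptation (S, T)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (P, P)
}

def play_round(a, b):
    return PAYOFF[(a, b)]

# A robot that promises cooperation and then defects is the kind of
# "non-earnest" behaviour whose effect on mental-state attribution the study probes.
print(play_round("C", "D"))  # -> (0, 5)
```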
Frontiers in Neurorobotics
A key goal in human-robot interaction (HRI) is to design scenarios between humanoid robots and humans such that the interaction is perceived as collaborative and natural, yet safe and comfortable for the human. Human skills like verbal and non-verbal communication are essential elements, as humans tend to attribute social behaviors to robots. However, aspects like the uncanny valley and differing levels of technical affinity can impede the success of HRI scenarios, which has consequences for the establishment of long-term interaction qualities like trust and rapport. In the present study, we investigate the impact of a humanoid robot on human emotional responses during the performance of a cognitively demanding task. We set up three different conditions for the robot with increasing levels of social cue expression in a between-group study design. For the analysis of emotions, we consider eye gaze behavior, arousal-valence for affective states, and the detection of action units. Our ...
A motivational system for regulating human-robot interaction
Proceedings of the National Conference on Artificial …, 1998
This paper presents a motivational system for an autonomous robot which is designed to regulate human-robot interaction. The mode of social interaction is that of a caretaker-infant dyad where a human acts as the caretaker for the robot. An infant's emotions and drives play a very ...