Fully Automatic Analysis of Engagement and Its Relationship to Personality in Human-Robot Interactions

Inferring Human Personality Traits in Human-Robot Social Interaction

2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2019

In this report, a new framework is proposed for inferring a user's personality traits from their habitual behaviors during face-to-face human-robot interactions, with the aim of improving the quality of those interactions. The framework enables the robot to extract visual features such as gaze, head motion, and body motion, and vocal features such as pitch, energy, and Mel-Frequency Cepstral Coefficients (MFCC), during a conversation led by the robot, which poses a series of questions to each participant. Participants answer each question with their habitual behaviors, and each participant's personality traits are assessed with a questionnaire. All data are then used to train regression or classification models for inferring the user's personality traits.
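As a rough illustration of the final step, the mapping from summary nonverbal features to a continuous trait score can be sketched as closed-form ridge regression. This is a minimal numpy sketch, not the paper's implementation: the six feature columns (standing in for gaze, head/body motion, pitch, energy, and MFCC statistics) and the weight vector are synthetic.

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# Toy example: 20 participants, 6 summary features per participant
# (real features would be statistics of gaze, head/body motion, pitch,
# energy, and MFCCs), one questionnaire-derived trait score each.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))
true_w = np.array([0.5, -0.2, 0.1, 0.0, 0.3, -0.4])   # synthetic ground truth
y = X @ true_w + 0.01 * rng.normal(size=20)           # trait scores + noise
w = ridge_fit(X, y, alpha=0.1)                        # recovers ~true_w
```

The regularization term `alpha * np.eye(d)` keeps the solve well-conditioned, which matters here because such studies typically have far fewer participants than would be ideal for unregularized least squares.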

Understanding Nonverbal Communication Cues of Human Personality Traits in Human-Robot Interaction

IEEE/CAA Journal of Automatica Sinica, 2020

With the increasing presence of robots in daily life, there is a strong demand for strategies that achieve high-quality interaction between robots and users by enabling robots to understand users' mood, intention, and other states. During human-human interaction, personality traits have an important influence on behavior, decisions, and mood. We therefore propose an efficient computational framework that endows the robot with the capability of understanding the user's personality traits based on the user's nonverbal communication cues, represented by three visual features (head motion, gaze, and body motion energy) and three vocal features (voice pitch, voice energy, and mel-frequency cepstral coefficients, MFCC). We used the Pepper robot in this study as a communication robot that interacts with each participant by asking questions while extracting the nonverbal features from each participant's habitual behavior using its on-board sensors. Each participant's personality traits are evaluated with a questionnaire. We then train ridge regression and linear support vector machine (SVM) classifiers using the nonverbal features and the questionnaire-derived personality trait labels, and evaluate the performance of the classifiers. The proposed models showed promising binary classification performance in recognizing each of the Big Five personality traits of the participants based on individual differences in nonverbal communication cues.
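The binary-classification half of this pipeline can be sketched with a linear SVM trained by Pegasos-style subgradient descent. This is an assumption-laden illustration, not the paper's code: the paper does not specify its solver, the data below are synthetic, and the high/low classes come from a median split of a made-up trait score.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style subgradient descent for a linear SVM; labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    rng = np.random.default_rng(1)
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            if y[i] * (X[i] @ w + b) < 1:         # hinge-loss subgradient step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

# Toy data: 40 participants, 6 nonverbal features; one Big Five trait is
# median-split into high (+1) vs low (-1) classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
scores = X @ np.array([1.0, -0.5, 0.0, 0.3, 0.0, 0.2])  # synthetic trait
y = np.where(scores > np.median(scores), 1, -1)
w, b = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)            # training accuracy
```

In practice one would report leave-one-participant-out or cross-validated accuracy rather than training accuracy, since per-participant sample sizes in such studies are small.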

Affect-Driven Modelling of Robot Personality for Collaborative Human-Robot Interactions

ArXiv, 2020

Collaborative interactions require social robots to adapt to the dynamics of human affective behaviour. Yet, current approaches for affective behaviour generation in robots focus on instantaneous perception to generate a one-to-one mapping between observed human expressions and static robot actions. In this paper, we propose a novel framework for personality-driven behaviour generation in social robots. The framework consists of (i) a hybrid neural model for evaluating facial expressions and speech, forming intrinsic affective representations in the robot, (ii) an Affective Core that employs self-organising neural models to embed robot personality traits such as patience and emotional actuation, and (iii) a Reinforcement Learning model that uses the robot's affective appraisal to learn interaction behaviour. For evaluation, we conduct a user study (n = 31) where the NICO robot acts as a proposer in the Ultimatum Game. The effect of robot personality on its negotiation strategy is ...

From multimodal features to behavioural inferences: A pipeline to model engagement in human-robot interactions

PLOS ONE, 2023

Modelling the engaging behaviour of humans using multimodal data collected during human-robot interactions has attracted much research interest. Most methods that have been proposed previously predict engaging behaviour directly from multimodal features, and do not incorporate personality inferences or any theories of interpersonal behaviour in human-human interactions. This work investigates whether personality inferences and attributes from interpersonal theories of behaviour (like attitude and emotion) further augment the modelling of engaging behaviour. We present a novel pipeline to model engaging behaviour that incorporates the Big Five personality traits, the Interpersonal Circumplex (IPC), and the Triandis Theory of Interpersonal Behaviour (TIB). We extract first-person vision and physiological features from the MHHRI dataset and predict the Big Five personality traits using a Support Vector Machine. Subsequently, we empirically validate the advantage of incorporating personality in modelling engaging behaviour and present a novel method that effectively uses the IPC to obtain scores for a human's attitude and emotion from their Big Five traits. Finally, our results demonstrate that attitude and emotion are correlates of behaviour even in human-robot interactions, as suggested by the TIB for human-human interactions. Furthermore, incorporating the IPC and the Big Five traits helps generate behavioural inferences that supplement the engaging behaviour prediction, thus enriching the pipeline. Engagement modelling has a wide range of applications in domains like online learning platforms, assistive robotics, and intelligent conversational agents. Practitioners can also use this work in cognitive modelling and psychology to find more complex and subtle relations between humans' behaviour and personality traits, and discover new dynamics of the human psyche. The code will be made available at: https://github.com/soham-joshi/engagement-prediction-mhhri.
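The Big-Five-to-IPC step can be illustrated with the classic observation (McCrae and Costa, 1989) that Extraversion and Agreeableness sit roughly 45 degrees off the circumplex's agency (dominance) and communion (warmth) axes. The rotation below is an illustrative assumption, not the mapping the paper actually uses, which it does not spell out in the abstract.

```python
import numpy as np

def ipc_coordinates(extraversion, agreeableness, theta_deg=45.0):
    """Rotate (hypothetically) standardized E and A scores into IPC axes.

    Returns (agency, communion); theta_deg=45 follows the common finding
    that E and A are rotated ~45 degrees relative to the IPC axes.
    """
    theta = np.deg2rad(theta_deg)
    agency = extraversion * np.cos(theta) - agreeableness * np.sin(theta)
    communion = extraversion * np.sin(theta) + agreeableness * np.cos(theta)
    return agency, communion

# A purely extraverted profile projects equally onto both IPC axes.
a, c = ipc_coordinates(1.0, 0.0)
```

Scores for "attitude" and "emotion" would then be read off from positions in this two-dimensional plane, however the paper defines that lookup.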