The Role of Social Dialogue and Errors in Robots

Social robots establish rapport with human users. This work explores the extent to which rapport-building can benefit (or harm) conversations with robots, and under what circumstances this occurs. For example, previous work has shown that agents that make conversational errors are less capable of influencing people than agents that do not make errors [1]. Some work has shown this effect with robots, but prior research has not considered additional factors such as the level of rapport between the person and the robot. We predicted that building rapport through a social dialogue (such as an ice-breaker) could mitigate the detrimental effect of a robot's errors on its ability to influence users. Our study used a Nao robot programmed to persuade users to agree with its rankings on two "survival tasks" (e.g., the lunar survival task). We manipulated both errors and social dialogue: the robot either exhibited errors in the second survival task or not, and users either engaged in an ice-breaker with the robot or not.