Who Should I Trust (Human vs. Automation)? The Effects of Pedigree in a Dual Advisor Context
Related papers
Proceedings of the Human Factors and Ergonomics Society ... Annual Meeting, 2016
Humans can easily find themselves in high-cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that trust is an antecedent to reliance and often influences how individuals prioritize and integrate information presented by a human and/or an automated information source. Expanding on previous work by Lyons and Stokes (2012), the current experiment measured how trust in automated and human decision aids varies with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they received conflicting information about which route was safest from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased, and as perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with increased trust in both decision aids. These findings can inform training programs and systems for operators who may receive information from both human and automated sources, for example in air traffic control, aviation, and signals intelligence.
Trust in decision aids: A model and its training implications
in Proc. Command and …, 1998
Figure 1. Toulmin's model of argument. The structure can be read: Grounds, so Qualified Claim, unless Rebuttal, since Warrant, on account of Backing. In this reading, the Grounds are awareness of the current features of the system and situation. The Qualified Claim is the chance of correct system action over period t (conditional on the grounds), varying in the resolution of the probability (closeness to 0 or 1), its calibration (closeness to the true frequency), and its temporal scope (all recommendations by this kind of system; all recommendations during a mission; the current system recommendation; or after a specific recommendation is carried out). The Warrant is the belief that certain features are generally correlated with system performance. The Backing comprises assumptions, experience with system outcomes, and knowledge of system design, varying in completeness (a limited number of features, a wide range of features, or extrapolation of good/bad performance across virtually all conditions) and in the reliability of data or information. The Rebuttals are ways in which the estimate could be wrong, including uncertainty about system quality.
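The paper does not give code for this structure; as a minimal illustrative sketch only, the argument components (Grounds, Qualified Claim, Warrant, Backing, Rebuttals) could be encoded as a simple data structure for reasoning about trust in a decision aid. All class names, fields, and example values below are assumptions, not the authors' formalization.

```python
# Illustrative sketch only: Toulmin's argument components as a data structure
# for a trust judgment about a decision aid. Names and fields are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ToulminTrustArgument:
    grounds: List[str]          # observed features of system and situation
    claim_probability: float    # chance of correct system action over period t
    warrant: str                # belief linking the grounds to system performance
    backing: List[str]          # assumptions, experience, design knowledge
    rebuttals: List[str] = field(default_factory=list)  # ways the estimate could be wrong

    def qualified_claim(self) -> str:
        """Render the argument in the 'Grounds, so Qualified Claim, unless Rebuttal' form."""
        unless = f", unless {'; '.join(self.rebuttals)}" if self.rebuttals else ""
        return (f"Given {', '.join(self.grounds)}, the system will act correctly "
                f"with probability {self.claim_probability:.2f}{unless}.")


# Hypothetical example: a route-planning aid trusted on the basis of past performance.
arg = ToulminTrustArgument(
    grounds=["clear weather", "sensor feed nominal"],
    claim_probability=0.9,
    warrant="these conditions have historically predicted good recommendations",
    backing=["field trials", "knowledge of the planner's design"],
    rebuttals=["sensor degradation", "adversarial jamming"],
)
print(arg.qualified_claim())
```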
The influence of agent reliability on trust in human-agent collaboration
Proceedings of the 15th European conference on Cognitive ergonomics the ergonomics of cool interaction - ECCE '08, 2008
Motivation - To investigate ways to support human-automation teams with real-world, imperfect automation, where many system failures are the result of systematic failure. Research approach - An experimental approach was used to investigate how variance in agent reliability may influence humans' trust in, and subsequent reliance on, agents' decision aids. Sixty command and control (C2) teams, each consisting of a human operator and two cognitive agents, were asked to detect and respond to battlefield threats in six ten-minute scenarios. At the end of each scenario, participants completed the SAGAT queries, followed by the NASA TLX queries. Findings/Design - Results revealed that teams with experienced human operators accepted significantly fewer inappropriate recommendations from agents than teams with inexperienced operators. More importantly, knowledge of agent reliability and the ratio of unreliable tasks had significant effects on humans' trust, as manifested in both team performance and human operators' rectification of inappropriate recommendations from agents. Originality/Value - This work represents an important step toward uncovering the nature of human trust in human-agent collaboration.
Unravelling the Trust Spectrum: Insights into Preferential Trust in AI over Human Judgement
16th Conference of the European Sociological Association, 2024
In an era where artificial intelligence (AI) increasingly permeates various aspects of human life, understanding the dimensions of trust in AI becomes crucial. This study, inspired by previous research on societal perceptions of AI, explores the reasons why a significant portion of the UK population exhibits greater trust in AI than in human judgment. Utilizing a mixed-methods approach, this study gathered 894 responses, with 451 valid completions, to probe the psychological, sociocultural, and technological aspects of trust. The findings highlight a predominant trust in AI, driven by perceptions of its neutrality and accuracy, in contrast to scepticism toward human integrity, influenced by beliefs in self-interest and dishonesty. This paper contributes to the ongoing discourse on AI's integration into society, emphasizing the need for nuanced understanding and policy considerations to foster trust in AI technologies.
Encouraging human operators to appropriately rely on automated decision aids
Information technology is changing the nature of the military decision-making process. However, the underlying assumption in employing human-automated system teams, namely that the team will be more productive than the human or the automated system would be alone, is not always met. Under certain conditions, human operators err by overly relying on or under-utilizing automated systems [Parasuraman and Riley, 1997]. A Framework of Automation Use [Dzindolet et al., 1999] posits that cognitive, social, and motivational processes combine to predict automation use. Two studies were performed to examine social processes, controlling for cognitive and motivational processes. The framework posits that when human operators' estimates of the reliability of the automated system and of manual operation are accurate, appropriate use of automation is most likely to occur. Various ways of communicating to human operators information concerning the reliability of their own and an automated aid's ...
Human-Machine Communication
This study investigates the effects of task demonstrability and replacing a human advisor with a machine advisor. Outcome measures include advice-utilization (trust), the perception of advisors, and decision-maker emotions. Participants were randomly assigned to make a series of forecasts dealing with either humanitarian planning (low demonstrability) or management (high demonstrability). Participants received advice from either a machine advisor only, a human advisor only, or their advisor was replaced with the other type of advisor (human/machine) midway through the experiment. Decision-makers rated human advisors as more expert, more useful, and more similar. Perception effects were strongest when a human advisor was replaced by a machine. Decision-makers also experienced more negative emotions, lower reciprocity, and faulted their advisor more for mistakes when a human was replaced by a machine.
A framework for explaining reliance on decision aids
International Journal of Human-Computer Studies, 2013
This study presents a framework for understanding task and psychological factors affecting reliance on advice from decision aids. The framework describes how informational asymmetries, in combination with rational, motivational, and heuristic factors, explain human reliance behavior. To test hypotheses derived from the framework, 79 participants performed an uncertain pattern learning and prediction task. They received advice from a decision aid either before or after they expressed their own prediction, and received feedback about performance. When their prediction conflicted with that of the decision aid, participants had to choose to rely on their own prediction or on that of the decision aid. We measured reliance behavior, perceived and actual reliability of self and decision aid, responsibility felt for task outcomes, understandability of one's own reasoning and of the decision aid, and attribution of errors. We found evidence that (1) reliance decisions are based on relative trust, but only when advice is presented after people have formed their own prediction; (2) when people rely as much on themselves as on the decision aid, they still perceive the decision aid to be more reliable than themselves; (3) the less people perceive the decision aid's reasoning to be cognitively available and understandable, the less they rely on the decision aid; (4) the more people feel responsible for the task outcome, the more they rely on the decision aid; (5) when feedback about performance is provided, people underestimate both their own reliability and that of the decision aid; (6) underestimation of the reliability of the decision aid is more prevalent and more persistent than underestimation of one's own reliability; and (7) unreliability of the decision aid is attributed less to temporary and uncontrollable (but not external) causes than one's own unreliability. These seven findings are potentially applicable to improving the design of decision aids and training procedures.
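Finding (1), that reliance decisions rest on relative trust when advice arrives after one's own prediction has been formed, can be illustrated with a small decision rule. The sketch below is not from the study; the function names, the feedback-driven update rule, and all numeric values are assumptions introduced purely for illustration.

```python
# Hypothetical sketch of a relative-trust reliance rule: when the operator's
# prediction conflicts with the aid's, follow whichever source is currently
# trusted more. The update rule and all values are illustrative assumptions.

def rely_on_aid(trust_in_self: float, trust_in_aid: float) -> bool:
    """Return True if the operator should adopt the aid's conflicting advice."""
    return trust_in_aid > trust_in_self


def update_trust(trust: float, was_correct: bool, learning_rate: float = 0.1) -> float:
    """Simple feedback-driven trust update, bounded to [0, 1]."""
    target = 1.0 if was_correct else 0.0
    return min(1.0, max(0.0, trust + learning_rate * (target - trust)))


# Example: after several correct aid recommendations, the operator defers to the aid.
trust_self, trust_aid = 0.6, 0.5
for aid_correct in [True, True, True]:
    trust_aid = update_trust(trust_aid, aid_correct)
print(rely_on_aid(trust_self, trust_aid))  # True once trust_aid exceeds 0.6
```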
Information Trust and Distrust in a Sensemaking Task
2006
This paper reports on a DoD-funded experiment into human trust and distrust of information in the context of a military sensemaking task. Twenty-two British Army majors undertook an intelligence picture compilation task during a simulated coalition engagement with enemy forces. The task required them to evaluate and compile incoming intelligence reports from a variety of sources. One in four of the reports was in fact deficient with respect to some parameter of information quality (correctness, completeness, timeliness, etc.), and the subjects were advised to eliminate any untrustworthy items from the ongoing picture. The appropriateness of their information trust/distrust responses under different conditions, and their ratings of the perceived trustworthiness of information items, were assessed. Two key factors were manipulated: (1) the subjects were given either a high or a low prior understanding of the situation at the start of the task; (2) at one point the subjects were presented with an information network alert informing them that a breach of the network had occurred and that information quality may have been compromised. In fact, these interventions had little effect on the information trust data. This was found to be due to an overriding effect of the subjects' awareness of the information sources, and of the biases and assumptions associated with those sources.
Aiding Human Reliance Decision Making Using Computational Models of Trust
Journal of Physics D: Applied Physics, 2007
This paper involves a human-agent system in which an operator is charged with a pattern recognition task, using an automated decision aid. The objective is to make this human-agent system operate as effectively as possible. Effectiveness is gained by increasing appropriate reliance on the operator and the aid. We studied whether it is possible to contribute to this objective by letting the aid, in addition to the operator, calibrate trust in order to make reliance decisions. The aid's calibration of trust in the reliance decision-making capabilities of both the operator and itself is also expected to contribute, through reliance decision making on a metalevel, which we call metareliance decision making. In this paper we present formalizations of these two approaches: a reliance decision-making model (RDMM) and a metareliance decision-making model (MetaRDMM), respectively. A combination of laboratory and simulation experiments shows significant improvements compared to reliance decision making done solely by operators.
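The abstract does not give the RDMM/MetaRDMM formalizations. The sketch below is only a plausible illustration of the two-level idea under assumed names and numbers: a first-level reliance decision compares calibrated trust in the operator and the aid, and a metalevel decision chooses whose reliance decision to follow based on each party's estimated reliance decision-making capability.

```python
# Illustrative two-level sketch (not the paper's RDMM/MetaRDMM formalization):
# level 1 picks whether to rely on the operator or the aid for the task itself;
# level 2 ("metareliance") picks whose level-1 decision to follow, based on
# estimated reliance-decision-making capability. All values are assumptions.

def reliance_decision(trust_in_operator: float, trust_in_aid: float) -> str:
    """Level 1: rely on whichever party is trusted more for the recognition task."""
    return "aid" if trust_in_aid >= trust_in_operator else "operator"


def metareliance_decision(operator_choice: str, aid_choice: str,
                          operator_rdm_capability: float,
                          aid_rdm_capability: float) -> str:
    """Level 2: adopt the reliance decision of whoever decides about reliance better."""
    return aid_choice if aid_rdm_capability >= operator_rdm_capability else operator_choice


# Example: the operator over-trusts themselves, but the aid is the better judge of reliance.
operator_choice = reliance_decision(trust_in_operator=0.8, trust_in_aid=0.7)   # "operator"
aid_choice = reliance_decision(trust_in_operator=0.6, trust_in_aid=0.75)       # "aid"
final = metareliance_decision(operator_choice, aid_choice,
                              operator_rdm_capability=0.55, aid_rdm_capability=0.7)
print(final)  # "aid"
```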