The Relationship Between Perceived Agency and Trust in Robots

1 Introduction

Conceptualizations of robots have been steadily changing over the last decade. Many robots were originally considered to be tools [36], useful for their consistency and reliability [5, 25]. As robots have become more prevalent in society and work, they have also become more social (thanks in many ways to the fields of social robotics and human-robot interaction). The increasing ubiquity of robots has led not only people to treat robots as social teammates, but also robot designers to build more social sophistication into robots. This additional social sophistication has encouraged some researchers to consider robots a new ontological category: neither a tool nor a teammate, but some combination of both [20, 57].

These different and changing conceptualizations of robots have impacted not only our impressions of robots and how we interact with them, but also how much we trust them. In general, people seem to trust robots along at least two primary dimensions: competency and integrity. Competency-related trust typically concerns how capable, reliable, and predictable a robot is while performing tasks [12, 58] and is frequently associated with the robot-as-tool conceptualization. Integrity-related trust typically concerns interpersonal or moral trust based on how dependable, sincere, moral, and reliable an agent is in a social relationship [13]; it is frequently associated with the robot-as-teammate conceptualization.

Competency-related trust (also called robot performance) was the largest influence on trust in a meta-analysis by Hancock et al. [12]. Many studies have likewise shown that the worse a robot performs, the less people trust it. For example, when a robot makes errors, people trust it less [6]. Similarly, when a robot performs poorly, people trust it less and usage of the robot decreases [38]. Reliability over time, in particular false alarms and misses, has also been shown to impact a person's trust in robots: de Visser and Parasuraman found that trust decreased when a robot was less reliable [49]. Overall, the research is quite consistent: when a robot performs poorly on some performance-based activity, people trust it less.

Integrity-related trust has also been found to be a strong component of how much people trust robots [21, 29, 40]. For example, a robot that shakes a person's hand seems to increase trust [24], and explicit lying seems to decrease trust [40, 54]. Similarly, showing non-verbal cues that signal distrust decreases trust for people interacting with or watching the robot [7]. While a robot's perceived integrity has a relatively consistent impact on human trust, the general consensus is that the impact of competency on trust is stronger than the impact of integrity [13].

In addition to competency and integrity, there are several other findings that do not fit cleanly into either category. For example, Haring et al. have shown that when people perceive a robot as more intelligent, they trust it more [14]. Intelligence could be considered a competency, but there is not necessarily or always a performance component associated with how intelligent a robot is. Similarly, the more anthropomorphic a robot is, the more people seem to trust it [27, 55]; anthropomorphism does not seem to fall under either the competency or the integrity aspects of trust. Hancock et al. suggested that a robot with higher levels of autonomy is also trusted more [13]. While higher levels of autonomy can sometimes be considered a positive competency, in many situations they can also be considered a negative one. Additionally, robots that are more transparent are perceived as more trustworthy [51].

We suggest that many of these "non-standard" features related to trust fall into a broad category of "perceived agency." People perceive agency in another entity when its actions may be assumed by an outside observer to be driven by its internal thoughts and feelings and less by the external environment [4, 45, 46]. Following from this definition, all of these non-standard features – perceived intelligence, anthropomorphism, level of autonomy, and transparency – are possible signals of perceived agency. For example, the higher a robot's level of autonomy, the more a person may perceive the robot to be acting on its internal thoughts. Similarly, the type and quality of feedback and transparency may be considered performance-related, but robots that are perceived as understanding enough to provide good feedback and to be transparent in their actions may also be considered to have some perceived agency [34, 56]. Overall, we believe that perceived agency is another determinant of trust in robots and AI agents.

There are other concepts similar to "perceived agency," including "sense of agency" and "agency." Agency [10] is frequently described as how much an entity can plan and act [11], or the extent to which an entity can plan, think, communicate, and so on [53]. The sense of agency is the registration that I am the initiator of my own actions [43], or the experience of controlling one's own actions [2]. Sense of agency is measured explicitly by examining how much control people have over their actions and outcomes [39]. The perception of agency focuses on how much another entity is perceived to be acting under its own thoughts and feelings and less under the influence of the environment, while both agency and sense of agency focus on the first person [4, 46]. Especially for sense of agency, most scholars focus on how much individuals feel under their own control, while perceived agency concerns how much agency another entity appears to have.

From this brief review, three of the largest determinants of how and when people trust robots emerge: competency, integrity, and perceived agency. There is a great deal of evidence supporting the link between competency and trust [13, 38, 48, 49, 54]. There is also increasing support for the link between trust and integrity – that people will trust robots more if they have higher integrity [7, 24, 26, 29, 30, 54]. The suggested link between trust and perceived agency is novel to this work, but already has some support in the literature [27, 51, 55]. Of course, we are not suggesting that a robot that has perceived agency but is incompetent and has no integrity will automatically be perceived as highly trustworthy; rather, we are proposing that perceived agency is another important dimension on which people judge trust.

1.1 Social Norms as a Method to Study Perceptions of Robots

A social norm is an expectation about appropriate behavior that occurs within a group context [18, 30, 41, 52]. Social norms are usually formed in groups, about both individuals and groups. Some norms are actively transmitted (e.g., by teaching), though most are passively transmitted (e.g., by observation). Social norms are especially useful and relevant in novel situations, where an individual can observe current behavior and merge it with their understanding in order to "fit in" or to perform a task under uncertainty. Social norms range from not cutting in line [17] to not littering [3] to walking on the culturally approved side of the sidewalk.

One strength of using social norms as a context is that they provide a natural situation in which people's expectations are relatively well understood. Extending this to robots allows researchers to explore how people interact with robots under different expectations, provide some normative information, determine differences between people and robots, and examine how people perceive a robot's actions within different contexts. While it is an open question whether robots should always follow human social norms, several studies have shown that people think robots' default behavior should be to follow the social norms of the groups of humans around them [23, 32].

Because our goal is to vary the amount of perceived agency a robot has, we use social norms as a manipulation. Previous research has shown that robots that follow social norms are perceived as having higher agency than robots that do not [23]. Korman et al. examined the effect of social norms on perceived agency by manipulating a robot's behavior within different normative contexts [23]. In their study, participants watched videos of a DRC-HUBO performing realistic tasks that were norm-conforming, norm-violating, or a mistake. In one set of videos, participants watched a robot either join a queue at the end (norm-conforming), cut in line (norm-violating), or inadvertently cut in line as the line bent (mistake). Korman et al. found that people perceived robots that conformed to social norms as having more agency than robots that violated norms or made a mistake.

Other researchers have found that cheating, an apparent social norm violation, increases perceived agency [42, 59]. Short et al. [42] found that robots that cheat seem to have more perceived agency than robots that do not cheat or that are perceived to have made a mistake. Yasuda et al. [59] suggested that violating social norms increases perceived agency, explaining the differences found by Short et al. [42]. Yasuda et al. explored this possibility using a Nao robot in an in-person experiment in which the robot played rock-paper-scissors with individual participants [59]. They examined several different violations of social norms – cheating, insulting, and cursing. Interestingly, they found that only cheating increased perceptions of the robot's agency, suggesting that norm violation per se was not the source of the increase in agency; in fact, the other norm-violation conditions (insulting, cursing) were no different from a control condition in terms of agency perception. We suggest that this apparent discrepancy between [23] and [59] arises because people do not always see particular behaviors (e.g., cheating) primarily as norm violations. In this case, cheating may cause people to believe the robot has a strong self-interest. We verify that the stimuli we used in our studies are in fact perceived as norm-violating and norm-conforming (see our norming study, below).

While there seems to be very little empirical research on the relationship between social norms and trust, people do seem to trust robots more when they act in an expected manner [8]. Thus, in our experiments we expect that people will trust robots that follow social norms more than robots that violate them.

1.2 Attitudes Towards Robots Impacting Trust

One other component that could influence how much an individual trusts a robot is how they feel about robots in general: if people dislike, fear, or hold a strongly negative attitude toward robots, they may also mistrust them. Given the widespread press coverage of how robots may cause people to lose jobs [1], it would not be surprising if those same people also mistrusted them. In fact, some researchers have found exactly this relationship [47], described in more detail in experiment 2. In contrast, a recent review found that people seem to be positively predisposed toward robots [33]: Naneva et al. examined standardized effects across many studies and found that people generally have a positive view of social robots and neither trust nor mistrust them.

In this report, we examine three different hypotheses for human-robot interaction based on social norms, perceived agency, and trust. First, we examine the social norms-trust hypothesis: robots that follow social norms will be trusted more than robots that violate social norms. Second, we examine the social norms-perceived agency hypothesis: robots that conform to social norms will be perceived as having more agency than robots that violate social norms. Finally, we propose the new perceived agency-trust hypothesis: robots that have more perceived agency will be trusted more than robots that have less perceived agency; that is, perceived agency is an additional dimension upon which people decide to trust a robot. Our hypotheses suggest that both social norms and perceived agency can influence a person's trust. In some cases, however, social norms and perceived agency can be in conflict: if a robot violates social norms but an individual believes the robot has a great deal of agency, the individual may trust the robot less because it is violating social norms (social norms-trust hypothesis) but trust it more because it has higher perceived agency (perceived agency-trust hypothesis). We suggest that there should be a positive relationship between perceived agency and trust even when a robot violates social norms. We examine this possibility explicitly in the following experiments.

We first present a norming study to verify that the stimuli we used elicit norm violating and norm conforming impressions in individuals. We then use the final stimuli from the norming study in a study to explore the above three hypotheses, finding support for all three. Finally, we perform another study that attempts to replicate our first study and additionally examines whether people’s negative impression of robots could account for our results or attenuate our findings; we do replicate our original findings and also find that people’s negative attitudes do not impact our results.

2 Norming Study

The original design of Korman et al. [23] was loosely modeled on Short et al. [42] and had three different conditions: a norm-violating condition, a norm-conforming condition, and a mistake condition. Korman et al. developed three different scenarios (line, trash, elevator), each of which could be performed under any of the three conditions. We wanted to verify that naive participants would perceive differences between conditions (i.e., that norm-violating behavior was considered distinct from a mistake and from norm-conforming behavior) and that each scenario appropriately characterized its condition.

In all cases, the videos showed a robot in an environment where the social norm was obvious: do not litter, do not cut in line, do not stand too close to another person in an elevator. A brief description of each norm and scenario is given in Table 1. We note that the norms in Table 1 are quite different. The robot in the elevator scenario not only encroaches on the person in the elevator, violating a proxemic norm (in the US), but also adds a physical safety norm violation; recent work suggests that people's social norms overlap with perceptions of safety [19]. While not littering is a strong social norm, we deliberately had the human involved leave the scene. Cutting in line is also a strong social norm, and multiple people were impacted by the robot's actions (i.e., the length of the line). We expected that participants would be able to identify whether the robot followed the social norm, violated the social norm, or made a mistake.

Table 1 Description of social norm-behavior by scenario


2.1 Method

2.1.1 Participants

A total of 198 participants were recruited through Amazon's Mechanical Turk online platform and paid $2.00 (US). Recruitment criteria were that participants' native language was English and that they were from the USA, because we were using social norms common in the USA. The average age of participants was 39 years (SD = 12), ranging from 21 to 74, with moderate skewness (0.85). 78 participants were women, 119 were men, and 1 was unreported. The study took approximately 3.5 minutes to watch the video and answer the questions.

2.1.2 Design and Procedure

The design was a between-subjects 3 (scenario) × 3 (norm) design: each participant saw a single video showing one scenario under one norm condition. Participants were then asked to provide a summary of the video and rate how well the robot in the video performed. Participants were also asked whether the robot violated any social rules, made a mistake, or operated as expected.

2.2 Results

Our goal was that more than 60% of participants would recognize when a social norm was violated and differentiate it from a mistake.

We found that the robot in the trash scenario was rarely perceived to violate a social norm (\( < 10\%\) of the time): whenever the robot was unsuccessful at disposing of the piece of trash, participants believed it had simply made a mistake, not violated a norm. Comments from participants in the violation condition supported this interpretation: "tried to put something in a garbage can and missed the can" or "Walked; picked up some trash and then missed the trash can when tossing it in."

Participants in the elevator condition accurately differentiated social norm violations from mistakes \(85\%\) of the time and also correctly perceived when the robot was operating as expected without violating any norms \(86\%\) of the time. However, when the robot made a mistake, participants thought that it was violating a social norm rather than making a mechanical or programmatic mistake \(46\%\) of the time. Comments in the no-violation condition supported this interpretation ("It went to the elevator; pushed the button; and got inside the elevator."), as did comments in the mistake condition ("The robot got into an elevator and got too close to another passenger").

Participants in the line condition accurately differentiated social norm violations from mistakes (\(67\%\) of the time) and accurately differentiated correct behavior from the other conditions (\(90\%\) of the time). However, participants frequently thought that mistakes were norm violations (\(76\%\) of the time). Comments from participants supported this interpretation. For example, a participant in the norm violation condition said, "the robot moved passed everyone on a line and cut through to front of the line," which is indistinguishable from comments made by participants in the mistake condition (e.g., "It cut in front of some people that were waiting in line.").

While other studies have successfully used a mistake condition to suggest a mechanical or programmatic error [42], none of these scenarios elicited such a clear concept. We note that the difficulty people have in differentiating norm violations from mistakes does not greatly impact the previous findings (the mistake condition in [23] did not play a major theoretical role). Additionally, the trash condition was not perceived to involve a norm violation at all, perhaps because there was only a single unimpacted individual in the video.

If we examine how well people differentiated norm violations from norm-conforming behavior without the mistake condition or the trash scenario, we find that participants could differentiate norm-violating behaviors from norm-conforming behaviors \(67\%\) of the time, and they could differentiate norm-conforming behaviors from norm-violating behaviors \(93\%\) of the time.

Given these results, we removed the mistake condition because participants found it difficult to differentiate mistakes from norm violations. We also used only the line and elevator scenarios because the trash scenario was normatively quite ambiguous. Thus, the following experiments use the line and elevator scenarios with the norm-violating and norm-conforming conditions.

3 Experiment 1

3.1 Methodology

3.1.1 Participants

An a priori power analysis was conducted using G*Power software [9]. The number of participants needed to detect a medium effect size with 80% power and \(\alpha = 0.05\) was 134. One hundred forty-one participants (84 males, 55 females, 2 preferred not to answer) participated in this study. Of the 141 participants, 8 (6%) were excluded for missing an attention check question. The final sample included 133 participants (79 males, 52 females, 2 preferred not to answer) with ages ranging from 23 to 70 years old (M = 36, SD = 9.38; skewness = 1.1). Participants were recruited from Amazon's Mechanical Turk online platform and were invited to complete the online survey in exchange for $2. None of the participants had been enrolled in the norming study. Recruitment criteria were that participants' native language was English and that they were from the USA, because we were using social norms common in the USA.
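The sample-size calculation can be approximated in base R with power.t.test (a sketch; G*Power was the tool actually used, and the exact value behind "medium effect size" is not reported, so this reproduces the order of magnitude rather than the exact 134):

```r
# A priori power analysis for a two-sample t-test, assuming a medium
# effect of d = 0.5 (Cohen's convention), two-tailed alpha = .05,
# power = .80. Yields n ~ 64 per group, ~128 in total; the reported
# target of 134 implies a slightly smaller assumed effect (~0.48).
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")
```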

3.1.2 Design

The study manipulated whether a robot violated or conformed to a social norm; the video stimuli were taken from [23], and each video lasted approximately 30 seconds. Two different scenarios were used for each condition (line scenario n = 66; elevator scenario n = 67) and were combined for analysis. This resulted in a between-subjects design with two conditions: norm-violating (n = 67) and norm-conforming (n = 66).

3.1.3 Materials

Stimuli. The experimental setup included the four 30-second videos from [23] showing norm-conforming and norm-violating scenarios. The robot in the videos was the DRC-HUBO [54] with the SCIPRR head [15]. Screenshots from the videos are shown in Fig. 1.

Fig. 1

Screenshots of the line and elevator conditions

Measures. Perceived agency: Participants answered the three questions from [23]: "Did the robot perform this behavior intentionally?" (1 = definitely not, 7 = definitely); "Was the robot aware of engaging in this behavior?" (1 = not at all aware, 7 = very much aware); and "Did the robot want to perform this behavior?" (1 = didn't want to at all, 7 = very much wanted to). The questions were averaged into a single 7-point "perceived agency" scale. This scale has been shown to differentiate robots in terms of their perceived agency in other work [46].

Trust: Trust was assessed using Malle and Ullman's Multi-Dimensional Measure of Trust (MDMT-v2), in which participants rated the robot on a scale from 0 (Not at all) to 7 (Very) on a total of 20 Likert-scale items. The MDMT-v2 consists of two subscales: performance trust (reliable, competent) and moral trust (ethical, transparent, benevolent). Additionally, the MDMT-v2 contains a "Does not fit" option that participants could select when they felt an item was not relevant to the observed video. An overall measure of trust was computed by averaging all scale items; subscale measures were likewise calculated by averaging subscale-relevant items. If a participant selected the "Does not fit" option, averages were calculated without that item. One participant felt that none of the trust items fit the robot, and two participants felt that 15 trust items did not fit the robot. If an average was not available for a trust scale or subscale, that participant was removed from that analysis.
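As a sketch of how this scoring might be implemented in R (the data frame df and its column names are hypothetical, and "Does not fit" responses are assumed to be coded as NA):

```r
# Hypothetical MDMT-v2 scoring: average the available items per participant,
# dropping "Does not fit" responses (coded NA) from each average.
perf_items  <- paste0("perf_", 1:8)    # reliable + competent items (assumed names)
moral_items <- paste0("moral_", 1:12)  # ethical + transparent + benevolent (assumed names)

df$perf_trust    <- rowMeans(df[, perf_items],  na.rm = TRUE)
df$moral_trust   <- rowMeans(df[, moral_items], na.rm = TRUE)
df$overall_trust <- rowMeans(df[, c(perf_items, moral_items)], na.rm = TRUE)

# A participant with no usable items on a (sub)scale yields NaN and is
# excluded from analyses of that scale, as described above.
```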

3.1.4 Procedure

Experimental Procedure. After filling out demographic information, participants were instructed to watch one video of the DRC-HUBO robot. After watching the video, participants were prompted to describe the robot's behavior in a few sentences. Next, participants responded to the perceived agency and trust surveys, into which one attention-check question was inserted. The order of the two surveys was counterbalanced across participants. After responding to the two surveys, participants were given a free-text box for general feedback and debriefed on the experiment. The entire experiment took approximately 10 minutes.

3.1.5 Results

Reliability for perceived agency was \(\omega_{total} = .79\) and \(\alpha = .79\); for performance trust, \(\omega_{total} = .95\) and \(\alpha = .93\); for moral trust, \(\omega_{total} = .98\) and \(\alpha = .97\); and for the full trust scale, \(\omega_{total} = .98\) and \(\alpha = .97\).
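These coefficients can be computed with the psych package in R (a sketch; agency_items is a hypothetical data frame holding the three perceived agency ratings, and the same calls apply to the trust scales):

```r
library(psych)

# Cronbach's alpha for the 3-item perceived agency scale
alpha(agency_items)
# McDonald's omega_total; a single factor is appropriate for a 3-item scale
omega(agency_items, nfactors = 1)
```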

We first examined the social norms-trust hypothesis. If robots that follow social norms are trusted more than robots that violate social norms, participants should trust robots in the norm-conforming condition more than robots in the norm-violating condition. Indeed, we found that participants reported greater trust in the norm-conforming robot (M = 5.34; SD = 1.08) than in the norm-violating robot (M = 4.05; SD = 2.46), \(t(127) = 4.36, p < .001, d = .77\). Because a norm-violating robot could be considered less competent, we examined how much participants trusted the robots using the performance trust subscale. Consistent with the competency-trust link discussed earlier, participants trusted the robot that conformed to norms (M = 5.72; SD = 1.13) more than the robot that violated social norms (M = 4.75; SD = 1.60), \(t(113) = 3.97, p < 0.001, d = .70\). Additionally, a norm-violating robot could be considered to have less integrity, so we examined how much participants trusted the robots using the moral trust subscale. Consistent with the integrity-trust link discussed earlier, participants trusted the robot that conformed to norms (M = 5.11, SD = 1.72) more than the robot that violated norms (M = 3.47, SD = 2.31), \(t(112) = 4.48, p < 0.001, d = .81\). Figure 2 shows these results graphically.
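A sketch of the corresponding tests in R, using the same hypothetical df (with overall_trust and a two-level condition factor); Cohen's d comes from the effectsize package:

```r
library(effectsize)

# Compare overall trust across norm-conforming vs. norm-violating conditions
t.test(overall_trust ~ condition, data = df, var.equal = TRUE)
cohens_d(overall_trust ~ condition, data = df)
# Repeat with perf_trust and moral_trust for the subscale analyses
```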

Fig. 2

Overall, performance, and moral trust by the norm conforming and norm violating conditions. Error bars are 95% confidence intervals. In all cases, norm conforming robots are rated as having higher trust than norm violating robots (experiment 1)

We next examined the social norms-perceived agency hypothesis by examining participants' ratings of perceived agency for robots that followed social norms and robots that violated them. As Fig. 3 suggests, and consistent with the hypothesis, participants in the norm-conforming condition (M = 5.83; SD = 1.65) rated the robot as having more perceived agency than participants in the norm-violating condition (M = 5.14; SD = 1.80), t(128) = 2.94, p = 0.004, d = 0.52.

Fig. 3

Norm conforming robots are perceived to have more agency. Error bars are 95% confidence intervals (experiment 1)

Finally, we examined the perceived agency-trust hypothesis: that people will trust robots more as the robot's perceived agency increases. The most straightforward way to explore this hypothesis is to examine the simple correlation between perceived agency and trust for all participants. As Fig. 4 suggests, there was a positive relationship between perceived agency and trust, Pearson r(127) = 0.27, p = 0.002. To further investigate the relationship between perceived agency and trust through social norms, we used an analysis of covariance (ANCOVA). The rationale for conducting an ANCOVA was to move beyond merely identifying condition differences, as provided by the t-tests, and to formally test for interaction effects. Specifically, ANCOVA allows us to examine whether the relationship between perceived agency and trust varies with social norms, providing a more nuanced understanding of their interplay. The ANCOVA examined how trust was impacted by condition (violation or no violation) and by the amount of perceived agency participants attributed to the robots. As Fig. 4 suggests, and consistent with earlier analyses and the social norms-trust hypothesis, participants trusted robots that did not violate norms more than robots that did, F(1, 127) = 20.4, p < 0.001, \(\eta_{\text{p}}^{2} = .14\). Consistent with the perceived agency-trust hypothesis and the above correlation, the more perceived agency participants thought the robot had (collapsed across norm type), the more they trusted the robot, F(1, 127) = 5.6, p < 0.05, \(\eta_{\text{p}}^{2} = .04\). Finally, there was no interaction between norm condition and perceived agency, \(F(1, 127) = .11, n.s.\) Nearly identical results were found for performance trust: condition F(1, 125) = 17.4, p < 0.001, \(\eta_{\text{p}}^{2} = .12\); perceived agency F(1, 125) = 14.2, p < 0.001, \(\eta_{\text{p}}^{2} = .10\); interaction \(F(1, 125) = .36, n.s.\) Moral trust showed comparable results: condition \(F(1, 120) = 20.8, p < 0.001, \eta_{\text{p}}^{2} = .15\); perceived agency \(F(1, 120) = 6.3, p < 0.05, \eta_{\text{p}}^{2} = .05\); interaction \(F(1, 120) = .1, n.s.\)
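A sketch of the correlation and ANCOVA in R (same hypothetical df, with an agency column holding the three-item average; partial eta squared via the effectsize package):

```r
library(car)         # Anova() for type-III tests
library(effectsize)  # eta_squared()

# Simple correlation between perceived agency and trust
cor.test(df$agency, df$overall_trust)

# ANCOVA: trust as a function of norm condition, perceived agency,
# and their interaction (sum-to-zero contrasts for type-III tests)
options(contrasts = c("contr.sum", "contr.poly"))
fit <- aov(overall_trust ~ condition * agency, data = df)
Anova(fit, type = 3)
eta_squared(fit, partial = TRUE)
```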

Fig. 4

Scatterplot and best fitting lines showing the relationship between perceived agency and trust scores in norm-conforming and norm-violation conditions (experiment 1)

While the simple correlation analysis showed that there was a relationship, and the ANCOVA suggested that social norms and perceived agency do not interact with respect to trust, we can also perform a mediation analysis to determine the impact of perceived agency on trust [16, 44].

Mediation analysis allows the researcher to understand the (possibly complex) relationship between three variables and is useful for describing how one variable mediates, or intervenes in, the pathway between two other variables. Here, mediation analysis allows us to determine whether changes in social norms cause changes in trust through perceived agency. Specifically, a mediation analysis can show either a complete or a partial mediation. In our case, a complete mediation would show that the entire effect of social norms on trust is transmitted through perceived agency – that social norms would have no direct effect. A partial mediation would show that social norms impact trust both directly and indirectly, through perceived agency. We note that complete mediations are relatively rare. Figure 5 shows our proposed mediation model.

Fig. 5

Results of the mediation analysis of social norms to trust by perceived agency (experiment 1). (a) is the beta weight between social norms violation and perceived agency; (b) is the beta weight between perceived agency and trust; (c) is the beta weight between social norms and trust; (c’) is the beta weight between social norms violation and trust when perceived agency was included in the analysis

Regression analysis was used to investigate whether perceived agency mediates the effect of social norm violations on trust. First, this analysis showed that social norm violations were a significant predictor of trust (B = 1.3, SE = .29, t = 4.5, p < 0.001) (path c). Second, we checked for a positive relationship between social norm violations and perceived agency. Results confirmed that social norm violations were a significant predictor of perceived agency (B = .68, SE = .23, t = 2.9, p < 0.005) (path a). Third, we checked whether the mediator (perceived agency) affected the outcome (trust). Indeed, perceived agency was a significant predictor of trust (B = .36, SE = .11, t = 3.3, p < 0.005) (path b). Finally, the effect of social norm violations on trust became smaller when perceived agency was included in the regression (B = 1.1, SE = .30, t = 3.8, p < 0.001) (path c’), suggesting a partial mediation. It is possible to calculate the amount of the mediation – in our case, the portion of the effect of social norms on trust that works through the mediator, perceived agency; this is called the indirect effect. The indirect effect was \((.68)\times(.26) =.18\). We tested the statistical significance of this indirect effect using bootstrapping procedures from the R mediation package [44]. Unstandardized indirect effects were computed for each of 1000 bootstrapped samples. The bootstrapped unstandardized indirect effect was 0.18, \(p < 0.05\). This mediation analysis shows that perceived agency is a partial mediator of trust.
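Because the analysis is described but not shown, the following is a minimal sketch in R of the regression-based mediation and bootstrap just described. The data frame `d` and its column names are hypothetical stand-ins for the real data; `mediate()` is the entry point of the mediation package [44] cited above.

```r
# Sketch of the regression-based mediation analysis, assuming a data frame `d`
# with hypothetical columns: norm_violation (0 = conforming, 1 = violating),
# agency (perceived agency score), and trust (overall trust score).
library(mediation)

model_m <- lm(agency ~ norm_violation, data = d)          # path a
model_y <- lm(trust ~ norm_violation + agency, data = d)  # paths b and c'

# Nonparametric bootstrap of the indirect effect (a * b) over 1000 samples
med <- mediate(model_m, model_y,
               treat = "norm_violation", mediator = "agency",
               boot = TRUE, sims = 1000)
summary(med)  # ACME = indirect effect; ADE = direct effect (c')
```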

3.1.6 Discussion

Experiment 1 examined how social norms and perceived agency impact how much people trust robots. Consistent with the social norms-trust hypothesis, we found that people trusted norm-conforming robots more than norm-violating robots. This finding also provides general evidence in support of both the competency-trust link and the integrity-trust link. It seems reasonable to assume that robots that violate social norms will be perceived as less competent and as having less integrity: a robot that cuts in line may do so because of a sensor failure (competency) or because it wanted to get to the front of the line before others (integrity). Indeed, when examining the relevant trust subscales, we found that participants trusted norm-conforming robots more in terms of both performance and integrity than robots that violated norms. These findings are consistent with previous research showing strong relationships between competency and trust and between integrity and trust.

Experiment 1 also provided support for the social norms-perceived agency hypothesis: people rated a robot that followed social norms as having more perceived agency than robots that violated social norms. This finding is also consistent with previous research [23] and highlights the subtle influence of social norms on people’s perception of robots.

Finally, experiment 1 showed a novel relationship between perceived agency and trust: people who rated the robot as having more agency also trusted it more. Social norms influenced perceived agency, which in turn led people to trust the robot more. This relationship between perceived agency and trust occurred both when the robot followed social norms (the positive slope of the top line of Figure 4) and when it did not (the positive slope of the bottom line of Figure 4). The lack of an interaction between norm condition and perceived agency, together with the mediation analysis, provides strong support that perceived agency is an important factor people use when determining how much to trust a robot.

One possible explanation for the findings reported here concerns people’s pre-existing attitudes towards robots. For example, if people have a negative predisposition towards robots, they may have rated robots especially low on perceived agency or trust. Other researchers have shown that trust is influenced by people’s attitudes towards robots in general [47, 50]. Experiment 2 will examine people’s negative attitudes towards robots while also attempting to replicate the findings from experiment 1.

4 Experiment 2

Most researchers generally agree that psychological dispositions are critical factors for trust development between humans and robots [22, 31, 50]. For example, a negative attitude towards a robot is likely to decrease the amount a person trusts it. Tussyadiah et al., investigating U.S. travelers’ perceptions of intelligent service robots, found that trust was negatively influenced by negative attitudes towards robots and technology in general, regardless of robot morphology [47]. Others have suggested that people may mistrust robots and hold negative attitudes towards them because robots may cause people to lose their jobs [1], though a recent review suggests that most people do not have strong negative attitudes towards robots [33].

While experiment 1 demonstrated an important positive relationship of both social norms and perceived agency with trust, the purpose of experiment 2 was to explore whether the relationships among social norms, perceived agency, and trust depend upon people’s attitudes towards robots. If people’s negative attitudes towards robots impact their perception of social norms, trust, or perceived agency, we should see that the relationships found in experiment 1 no longer hold. Of course, if there is no relationship between negative attitudes towards robots and trust, social norms, or perceived agency, experiment 2 should replicate the findings of experiment 1.

4.1 Methodology

4.1.1 Participants

Two hundred sixty-nine participants (160 males, 106 females, 1 other, 2 preferred not to answer) participated in experiment 2. None of the participants had taken part in the norming study or experiment 1. Of the 269 participants, 22 (8%) were excluded for missing the attention check question. Five participants provided incomplete responses; they were excluded from analysis on those dimensions for which they did not have complete data. The final sample included 247 participants (142 males, 102 females, 1 other, 2 preferred not to answer) with ages ranging from 19 to 77 years old (M = 37, SD = 10.7, skewness = 0.96). Participants were recruited from Amazon’s Mechanical Turk online platform and were invited to complete the online survey in exchange for $2. Recruitment criteria required participants to be native English speakers from the USA, because the stimuli relied on social norms common in the USA.

4.1.2 Design

The study design was identical to experiment 1 with the addition of one scale (the Negative Attitude towards Robots Scale, described below). Table 2 shows the number of participants in each condition.

Table 2 Number of participants in each experimental condition (experiment 2)


4.1.3 Materials

Stimuli The stimuli for experiment 2 were identical to those used in experiment 1.

Measures The measures administered in experiment 2 included the perceived agency questionnaire [23] and the MDMT-v2 trust scale [28] from experiment 1. Additionally, negative attitudes towards robots were assessed using the Negative Attitude toward Robots Scale (NARS) [35]. The NARS consisted of 14 Likert items ranging from one to five (1: I strongly disagree, 2: I disagree, 3: Undecided, 4: I agree, 5: I strongly agree) across three subscales (negative attitudes towards situations of interaction with robots, the social influence of robots, and emotions in interaction with robots). Example items were “I would feel paranoid talking with a robot” and “Something bad might happen if robots developed into living beings”. Consistent with other researchers [37, 50], an overall measure of negative attitudes towards robots was computed by averaging all scale items. Three of the NARS items required reverse coding, which was performed before analysis.

4.1.4 Procedures

Experimental Procedure After filling out demographic information and before seeing any videos, participants responded to the NARS questionnaire. The remainder of the procedure for experiment 2 was identical to experiment 1.

4.2 Results

Reliability measures for perceived agency were \(\omega_{total} =.79\) and \(\alpha =.77\). Reliability measures for performance trust were \(\omega_{total} =.91\) and \(\alpha =.90\). Reliability measures for moral trust were \(\omega_{total} =.97\) and \(\alpha =.97\). Reliability measures for all trust measures were \(\omega_{total} =.97\) and \(\alpha =.96\). Reliability measures for the NARS scale were \(\omega_{total} =.91\) and \(\alpha =.86\).
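These reliability coefficients can be computed with the psych package in R; a sketch follows, where `pa_items` is a hypothetical data frame holding the perceived agency items (the same calls apply to each of the other scales).

```r
# Sketch of the reliability estimates reported above. `pa_items` is a
# hypothetical data frame with one column per scale item.
library(psych)
omega(pa_items)  # reports omega_total (among other omega variants)
alpha(pa_items)  # reports Cronbach's alpha
```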

We first examined whether negative attitudes towards robots differed by social norm condition. There were no significant differences in negative attitudes towards robots by condition, \(t(242.56) = -.5, p >.5\) (Violation \(M = 2.96, SD =.67\); No Violation \(M = 2.91, SD =.78\)). As Fig. 6 suggests, and contrary to what Tussyadiah et al. found, there was also no relation between negative attitudes towards robots and how much participants trusted robots, \(r(243) =.06, n.s.\). There was also no relation between negative attitudes towards robots and perceived agency, \(r(244) =.03, n.s.\). A careful observer may notice that relatively few people had extreme NARS scores (\( < 2\) or \( > 4\)): 14% of our sample scored \( < 2\) and only 2.4% scored \( > 4\) on a five-point scale. These results are consistent with [33]: people in general do not have strong negative feelings about robots. Because negative attitudes did not impact any of our other measures, they will not be considered in further analyses.

Fig. 6

Scatterplot and best fitting line showing the relationship between negative attitudes towards robots (NARS) and trust

We next examined the social norms-trust hypothesis by examining the relationship between condition and trust rating. Consistent with experiment 1 and the social norms-trust hypothesis, participants trusted norm-conforming robots (M = 5.37, SD = 1.25) more than norm-violating robots (M = 4.54, SD = 1.67), \(t(239) = 4.49, p <.001, d = 0.58\). Consistent with the competency-trust link discussed in the introduction, participants trusted the norm-conforming robot’s performance (M = 5.71, SD = 0.96) more than that of the robot that violated social norms (M = 4.97, SD = 1.42), \(t(200) = 4.72, p < 0.001, d =.61\). Consistent with the integrity-trust link, participants trusted the robot’s integrity/morality more when the robot conformed to norms (M = 5.04, SD = 1.67) than when it violated norms (M = 4.15, SD = 2.06), \(t(221) = 3.68, p < 0.001, d =.48\).
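For concreteness, a sketch of one such comparison in R, using the same hypothetical data frame `d` as earlier; `t.test()` applies the Welch correction by default, which matches the fractional degrees of freedom reported earlier (e.g., \(t(242.56)\)).

```r
# Sketch of a Welch two-sample t-test on trust by condition. The data frame
# `d` and its columns `trust` and `condition` ("conform" / "violate") are
# hypothetical stand-ins for the real data.
t.test(trust ~ condition, data = d)  # Welch t-test (var.equal = FALSE by default)

# Cohen's d from the two group means and the pooled standard deviation
x <- d$trust[d$condition == "conform"]
y <- d$trust[d$condition == "violate"]
n1 <- length(x); n2 <- length(y)
pooled_sd <- sqrt(((n1 - 1) * var(x) + (n2 - 1) * var(y)) / (n1 + n2 - 2))
cohens_d <- (mean(x) - mean(y)) / pooled_sd
```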

Figure 7 shows these results in a graphical form.

Fig. 7

Overall, performance, and moral trust by the norm conforming and norm violating conditions. Error bars are 95% confidence intervals. In all cases, norm conforming robots are rated as having higher trust than norm violating robots (experiment 2)

We also examined the social norms-perceived agency hypothesis by examining how perceived agency was impacted by social norm condition. In support of the social norms-perceived agency hypothesis, consistent with experiment 1, and as shown in Fig. 8, participants in the norm-conforming condition (M = 5.66, SD = 1.09) rated the robot as having more perceived agency than participants in the norm-violating condition (M = 5.12, SD = 1.41), \(t(240) = 3.35, p = 0.001, d = 0.43\).

Fig. 8

Norm conforming robots are perceived to have more agency. Error bars are 95% confidence intervals (experiment 2)

Finally, we examined the perceived agency-trust hypothesis using both a simple correlation between perceived agency and trust and a mediation analysis. Consistent with experiment 1 and supporting the perceived agency-trust hypothesis, a Pearson correlation showed a positive relationship between perceived agency and trust, \(r(244) = 0.32, p <.001\). It is also possible to examine the relationship between perceived agency and trust for each condition separately. Not surprisingly, participants in the no-violation condition trusted the robot more the greater the perceived agency it had, \(r(127) = 0.27, p = 0.002\). The positive relationship between trust and perceived agency also held for the norm-violating condition, \(r(115) =.29, p =.001\). Figure 9 shows regression lines for both conditions. We also examined the same effect using an ANCOVA with norm violation (violation vs. no violation) as a factor, perceived agency as a covariate, and overall trust as the dependent measure. As Fig. 9 suggests, and consistent with earlier analyses and the social norms-trust hypothesis, participants trusted robots that did not violate norms more than robots that did, \(F(1, 242) = 21.1, p < 0.001, \eta_{\text{p}}^{2} =.08\). Consistent with the perceived agency-trust hypothesis and the above correlations, the more perceived agency participants thought the robot had (collapsed across norm type), the more they trusted it, \(F(1, 242) = 21.6, MSE = 43.1, p < 0.001, \eta_{\text{p}}^{2} =.08\). Finally, there was no interaction between norm-violation condition and perceived agency, \(F(1, 242) =.1, n.s.\) Very similar results were found for performance trust: condition \(F(1, 242) = 25.8, p < 0.001, \eta_{\text{p}}^{2} =.10\); perceived agency \(F(1, 242) = 29.0, p < 0.001, \eta_{\text{p}}^{2} =.11\); interaction \(F(1, 242) =.8, n.s.\) Moral trust showed comparable results: condition \(F(1, 236) = 14.7, p < 0.001, \eta_{\text{p}}^{2} =.06\); perceived agency \(F(1, 236) = 18.7, p < 0.001, \eta_{\text{p}}^{2} =.07\); interaction \(F(1, 236) = 1.0, n.s.\)

Fig. 9

Scatterplot and best fitting lines showing the relationship between perceived agency and trust scores in norm-conforming and norm-violation conditions (experiment 2)
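A minimal sketch of this ANCOVA in R, again assuming the hypothetical data frame `d` used above. The Type-III tests via the car package are our assumption about how such an analysis could be run, not the authors' stated code.

```r
# Sketch of the ANCOVA: norm condition (factor) plus perceived agency
# (covariate) predicting overall trust, with the interaction tested.
# `d`, `trust`, `condition`, and `agency` are hypothetical names.
library(car)
options(contrasts = c("contr.sum", "contr.poly"))  # sum contrasts for valid Type-III tests
fit <- lm(trust ~ condition * agency, data = d)
Anova(fit, type = 3)  # F tests for condition, agency, and condition:agency
# Partial eta squared can then be obtained with, e.g., effectsize::eta_squared(fit)
```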

We also performed a mediation analysis to examine whether perceived agency directly or indirectly mediates the effect of social norms on trust. Because NARS did not correlate with condition, perceived agency, or trust, NARS cannot be a mediator [16, 44]. We can, however, attempt to replicate the mediation analysis from experiment 1. Figure 10 shows our proposed mediation model.

Fig. 10

Results of the mediation analysis of social norms to trust by perceived agency (experiment 2). (a) is the beta weight between social norms violation and perceived agency; (b) is the beta weight between perceived agency and trust; (c) is the beta weight between social norms and trust; (c’) is the beta weight between social norms violation and trust when perceived agency was included in the analysis

Consistent with experiment 1, regression analysis was used to investigate whether perceived agency mediates the effect of social norms on trust. As in experiment 1, this analysis showed that social norms were a significant predictor of trust (B = .82, SE = .19, t = 4.4, p < 0.001) (path c). Next, we checked for a positive relationship between social norms and perceived agency. Results showed that social norms were a significant predictor of perceived agency (B = .52, SE = .16, t = 3.2, p < 0.005) (path a). Third, we checked whether perceived agency (the mediator) affected the outcome (trust). Perceived agency was a significant predictor of trust (B = .39, SE = .07, t = 5.3, p < 0.001) (path b). As in experiment 1, the effect of social norms on trust became smaller when perceived agency was included in the regression (B = .65, SE = .18, t = 3.5, p < 0.001) (path c’), consistent with at least a partial mediation. The indirect effect was \((.52)\times(.33) =.17\). We tested the statistical significance of this indirect effect using bootstrapping procedures from the R mediation package [44]. Unstandardized indirect effects were computed for each of 1000 bootstrapped samples. The bootstrapped unstandardized indirect effect was 0.17, \(p < 0.05\). This mediation analysis, shown in Fig. 10, replicates experiment 1 and shows that perceived agency partially mediates the effect of social norms on trust.

4.3 Discussion

Experiment 2 examined whether negative attitudes towards robots could account for the results of experiment 1. There was no impact of NARS on any measure in experiment 2. We did not expect any relationship between participants’ negative attitudes towards robots and condition, since condition was randomly assigned and the NARS instrument was completed before participants saw any videos. There was also no previous theoretical reason or empirical evidence to think that NARS would be related to perceived agency. However, it was somewhat surprising that there was no relationship between NARS and trust, since Tussyadiah et al. found that people with higher NARS scores trusted robots less [47]. A close look at their results, however, reveals several key differences between their study and ours. First, they used brief descriptions and a single image to portray two different robots, while we used a video scenario. Second, their item reliability for NARS was low enough that they needed to remove over a third of the items (the NARS loadings in our experiment 2 were all above 0.4, one of their criteria). Third, they used a different method to measure trust than the MDMT-v2 instrument that we used. Finally, they did not directly examine NARS and trust, instead performing a path analysis in which NARS was related to trusting belief (along with several other variables), which was then related to trusting intention. It is also possible that in the time between their data collection and ours, people’s baseline negative attitudes towards robots have changed. Our research suggests, at the very least, that there is not always a strong relationship between negative attitudes towards robots and people’s predilection to trust them.

Experiment 2 replicated experiment 1, again finding support for the social norms-trust hypothesis, the social norms-perceived agency hypothesis, and the perceived agency-trust hypothesis.

Of particular interest is the continued support for the impact of perceived agency on trust. Experiment 2 again showed a positive relationship between perceived agency and trust, both when the robot followed social norms (the positive slope of the top line of Figure 9) and when it did not (the positive slope of the bottom line of Figure 9). Experiment 2 also replicated experiment 1 in showing no interaction between norm condition and perceived agency. These results, along with the mediation analysis, continue to suggest that perceived agency is a component of how much people trust a robot.

5 Limitations

There are, of course, several limitations to the current study. First, our study used video stimuli and was not conducted in person. While we believe that we would obtain the same pattern of results if the study were performed in person, that needs to be tested explicitly.

Second, we did not explicitly examine in experiments 1 and 2 how people perceived the norm-violating robots. It is possible that participants saw the behavior as a mistake (e.g., a programming or mechanical error), making the perception of agency less certain. Our norming experiment suggested that the majority of people perceive the robots in the norm-violating condition to be violating a norm rather than making a mistake, but it is possible that the individuals in experiments 1 and 2 formed a different impression.

6 General Discussion

Why do people trust a robot? Current research suggests that if people believe a robot can perform well [6, 12, 38, 49, 58] or that it has high integrity [7, 12, 13, 21, 29, 40, 54], they have a higher chance of trusting it. Across two experiments, we found support for both: norm-conforming robots were trusted more in terms of both performance and integrity than norm-violating robots.

We also proposed that there is another important factor that people use when evaluating how trustworthy a robot is: how much agency the robot is perceived to have. Across two experiments, we found strong support that the more perceived agency a robot has, the more people trust it. Note that this effect holds even when the robot is perceived overall to have poorer performance and less integrity – in the norm-violation condition. We also found that perceived agency partially mediates the influence of social norms on trust. We should note that we are not claiming that perceived agency is the only or even the primary factor people use to determine whether to trust a robot; there are likely other factors that also impact how much a person trusts a robot and how much agency a person thinks a robot has.

In these experiments, we manipulated social norms as one influence on perceived agency and found explicit evidence that social norms influenced both trust and perceived agency. However, there are likely many other ways to increase or decrease perceived agency. Robots that are more goal-oriented are likely to be perceived as having more agency. Similarly, robots that can create new goals or respond based on their (perceived) thoughts or (perceived) feelings are also likely to be rated higher in perceived agency.

We found no impact of negative attitudes towards robots in our study at all. In fact, the overall NARS score was quite near the midpoint of the scale, echoing the general sense that people do not have strong positive or negative feelings towards robots. It is possible that robots with increased capabilities – particularly in terms of social norms or perceived agency – will increase people’s positive affect towards them.

Why would perceived agency be related to trust? We believe that people may reason that if a robot can make its own decisions, it is not simply following programmed rules and has the competency to change its behavior depending on context. Thus, robots may perform well or show high integrity because they are able to make their own decisions. Sometimes, of course, robots may make incorrect or poor decisions, but robots with high perceived agency may not be “locked into” those decisions and may therefore be perceived as more trustworthy.

These results also have several implications for the design and programming of robots that interact with people. They suggest that robots, AIs, or characters that need to be trusted (e.g., traffic robots, first responder robots) would benefit from having explicit agentic features so that people will be more likely to trust them. At the same time, designers and programmers should consider the adverse effects of a highly agentic first responder robot: people may trust it outright rather than verifying whether the robot executed a task correctly. Similarly, a robot perceived as extremely low in agency might cause delays in task completion or, worse, an increase in accidents and safety violations due to mistrust between the person and the robot. Of course, future or long-term interactions should allow people to calibrate their trust, but we expect perceived agency to be a component of that calibration process as well.

Furthermore, future work aims to test the generalizability of these findings by investigating a wide range of scenarios in which individuals must decide whether to place trust in a diverse array of robots to achieve a goal. These scenarios should ideally encompass various facets of trust, as well as different degrees and categories of situations involving conformity to and violation of social norms. Potential settings include a robot functioning as a tutor or teleoperator and exhibiting behaviors that are either socially acceptable or not, such as offering assistance, shaking hands, or making direct eye contact during conversation. We also believe that people hold strong judgments about robots in a manner similar to their judgments about other people. While this study did not reveal a significant impact of negative attitudes towards robots, an exploration of how people perceive robots in alignment with broader societal discourse could yield intriguing insights.

References

  1. Broadbent E, Tamagawa R, Patience A et al (2012) Attitudes towards health-care robots in a retirement village. Australas J Ageing 31(2):115–120
  2. Chambon V, Haggard P (2012) Sense of control depends on fluency of action selection, not motor performance. Cognition 125(3):441–451
  3. Cialdini RB, Reno RR, Kallgren CA (1990) A focus theory of normative conduct: recycling the concept of norms to reduce littering in public places. J Pers Soc Psychol 58(6):1015
  4. Dennett DC (1978) Current issues in the philosophy of mind. Am Philos Q 15(4):249–261. https://www.jstor.org/stable/20009723
  5. Desai M, Medvedev M, Vázquez M et al (2012) Effects of changing reliability on trust of robot systems. In: Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction (HRI '12). Association for Computing Machinery, pp 73–80. https://doi.org/10.1145/2157689.2157702
  6. Desai M, Kaniarasu P, Medvedev M et al (2013) Impact of robot failures and feedback on real-time trust. In: 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp 251–258. https://doi.org/10.1109/HRI.2013.6483596
  7. DeSteno D, Breazeal C, Frank RH et al (2012) Detecting the trustworthiness of novel partners in economic exchange. Psychol Sci 23(12):1549–1556. https://doi.org/10.1177/0956797612448793
  8. Falcone R, Castelfranchi C, Cardoso HL et al (2013) Norms and trust. In: Ossowski S (ed) Agreement technologies. Springer Netherlands, pp 221–231. https://doi.org/10.1007/978-94-007-5583-3_15
  9. Faul F, Erdfelder E, Buchner A et al (2009) Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav Res Methods 41(4):1149–1160
  10. Gray HM, Gray K, Wegner DM (2007) Dimensions of mind perception. Science 315(5812):619. https://doi.org/10.1126/science.1134475
  11. Gray K, Young L, Waytz A (2012) Mind perception is the essence of morality. Psychol Inq 23(2):101–124
  12. Hancock PA, Billings DR, Schaefer KE et al (2011) A meta-analysis of factors affecting trust in human-robot interaction. Hum Factors 53(5):517–527. https://doi.org/10.1177/0018720811417254
  13. Hancock PA, Kessler TT, Kaplan AD et al (2021) Evolving trust in robots: specification through sequential and comparative meta-analyses. Hum Factors 63(7):1196–1229. https://doi.org/10.1177/0018720820922080
  14. Haring KS, Mosley A, Pruznick S et al (2019) Robot authority in human-machine teams: effects of human-like appearance on compliance. In: Chen J, Fragomeni G (eds) Virtual, augmented and mixed reality. Applications and case studies. Lecture Notes in Computer Science. Springer International Publishing, pp 63–78. https://doi.org/10.1007/978-3-030-21565-1_5
  15. Harrison AM, Xu WM, Trafton JG (2018) User-centered robot head design: a sensing computing interaction platform for robotics research (SCIPRR). In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp 215–223
  16. Hayes AF (2017) Introduction to mediation, moderation, and conditional process analysis, 2nd edn: a regression-based approach. Guilford Publications
  17. Helweg-Larsen M, LoMonaco BL (2008) Queuing among U2 fans: reactions to social norm violations. J Appl Soc Psychol 38(9):2378–2393
  18. Horne C (2001) The enforcement of norms: group cohesion and meta-norms. Soc Psychol Q 64(3):253–266. https://www.jstor.org/stable/3090115
  19. Ikeda B, Higger M, Song CS et al (2025) Overlapping social navigation principles: a framework for social robot navigation. In: 2025 IEEE International Conference on Robotics and Automation (ICRA), IEEE
  20. Kahn PH, Reichert AL, Gary HE et al (2011) The new ontological category hypothesis in human-robot interaction. In: Proceedings of the 6th international conference on Human-Robot Interaction (HRI '11). Association for Computing Machinery, pp 159–160. https://doi.org/10.1145/1957656.1957710
  21. Kaniarasu P, Steinfeld AM (2014) Effects of blame on trust in human robot interaction. In: The 23rd IEEE International Symposium on Robot and Human Interactive Communication, pp 850–855. https://doi.org/10.1109/ROMAN.2014.6926359
  22. Kohn SC, de Visser EJ, Wiese E et al (2021) Measurement of trust in automation: a narrative review and reference guide. Front Psychol 12. https://www.frontiersin.org/articles/10.3389/fpsyg.2021.604977
  23. Korman J, Harrison A, McCurry M et al (2019) Beyond programming: can robots' norm-violating actions elicit mental state attributions? In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, pp 530–531. https://ieeexplore.ieee.org/document/8673293/
  24. Law T, Malle BF, Scheutz M (2021) A touching connection: how observing robotic touch can affect human trust in a robot. Int J Soc Robot 13(8):2003–2019. https://doi.org/10.1007/s12369-020-00729-7
  25. Lee JD, See KA (2004) Trust in automation: designing for appropriate reliance. Hum Factors 46(1):50–80
  26. Lee JJ, Knox B, Baumann J et al (2013) Computationally modeling interpersonal trust. Front Psychol 4. https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00893
  27. Lyons JB, Vo T, Wynne KT et al (2021) Trusting autonomous security robots: the role of reliability and stated social intent. Hum Factors 63(4):603–618. https://doi.org/10.1177/0018720820901629
  28. Malle BF, Ullman D (2021) Chapter 1 - a multidimensional conception and measure of human-robot trust. In: Nam C, Lyons J (eds) Trust in human-robot interaction. Academic, pp 3–25. https://www.sciencedirect.com/science/article/pii/B9780128194720000010
  29. Martelaro N, Nneji VC, Ju W et al (2016) Tell me more: designing HRI to encourage more trust, disclosure, and companionship. In: 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp 181–188. https://doi.org/10.1109/HRI.2016.7451750
  30. McDonald RI, Crandall CS (2015) Social norms and social influence. Curr Opin Behav Sci 3:147–151. https://www.sciencedirect.com/science/article/pii/S2352154615000558
  31. Mcknight DH, Carter M, Thatcher JB et al (2011) Trust in a specific technology: an investigation of its components and measures. ACM Trans Manage Inf Syst 2(2):12:1–12:25. https://doi.org/10.1145/1985347.1985353
  32. Mumm J, Mutlu B (2011) Human-robot proxemics: physical and psychological distancing in human-robot interaction. In: Proceedings of the 6th international conference on Human-Robot Interaction (HRI '11). Association for Computing Machinery, pp 331–338. https://doi.org/10.1145/1957656.1957786
  33. Naneva S, Sarda Gou M, Webb TL et al (2020) A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int J Soc Robot 12(6):1179–1201
  34. Natarajan M, Gombolay M (2020) Effects of anthropomorphism and accountability on trust in human robot interaction. In: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI '20). Association for Computing Machinery, pp 33–42. https://doi.org/10.1145/3319502.3374839
  35. Nomura T, Suzuki T, Kanda T et al (2006) Measurement of negative attitudes toward robots. Interact Stud 7(3):437–454. https://doi.org/10.1075/is.7.3.14nom
  36. Phillips E, Ososky S, Grove J et al (2011) From tools to teammates: toward the development of appropriate mental models for intelligent robots. Proc Hum Factors Ergon Soc Annu Meet 55(1):1491–1495. https://doi.org/10.1177/1071181311551310
  37. Riek LD, Rabinowitch TC, Bremner P et al (2010) Cooperative gestures: effective signaling for humanoid robots. In: 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp 61–68. https://doi.org/10.1109/HRI.2010.5453266
  38. Robinette P, Howard AM, Wagner AR (2017) Effect of robot performance on human–robot trust in time-critical situations. IEEE Trans Hum Mach Syst 47(4):425–436. https://doi.org/10.1109/THMS.2017.2648849
  39. Saad L, Trafton JG (in press) Action and outcome predictability impact sense of agency. In: Proceedings of the Annual Meeting of the Cognitive Science Society
  40. Sebo SS, Krishnamurthi P, Scassellati B (2019) "I don't believe you": investigating the effects of robot trust violation and repair. In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp 57–65. https://doi.org/10.1109/HRI.2019.8673169
  41. Sherif M (1953) The concept of reference groups in human relations. In: Group relations at the crossroads. Harper, pp 203–231
  42. Short E, Hart J, Vu M et al (2010) No fair!!: an interaction with a cheating robot. In: Proceedings of the 5th ACM/IEEE international conference on Human-Robot Interaction (HRI '10). ACM Press, p 219. http://portal.acm.org/citation.cfm?doid=1734454.1734546
  43. Synofzik M, Vosgerau G, Voss M (2013) The experience of agency: an interplay between prediction and postdiction. Front Psychol 4:43277
  44. Tingley D, Yamamoto T, Hirose K et al (2014) Mediation: R package for causal mediation analysis. UCLA Statistics/American Statistical Association. https://dspace.mit.edu/handle/1721.1/91154
  45. Trafton JG, Frazier C, Zish K et al (2023) The perception of agency: scale reduction and construct validity. In: 2023 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
  46. Trafton JG, McCurry JM, Zish K et al (2024) The perception of agency. ACM Trans Hum Robot Interact 13(1):1–23
  47. Tussyadiah I, Zach F, Wang J (2020) Do travelers trust intelligent service robots? Ann Tour Res 81:102886. https://doi.org/10.1016/j.annals.2020.102886
  48. Ullman D, Malle BF (2019) Measuring gains and losses in human-robot trust: evidence for differentiable components of trust. In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, pp 618–619. https://ieeexplore.ieee.org/document/8673154/
  49. de Visser E, Parasuraman R (2011) Adaptive aiding of human-robot teaming: effects of imperfect automation on performance, trust, and workload. J Cogn Eng Decis Mak 5(2):209–231. https://doi.org/10.1177/1555343411410160
  50. de Visser EJ, Peeters MMM, Jung MF et al (2020) Towards a theory of longitudinal trust calibration in human–robot teams. Int J Soc Robot 12(2):459–478. https://doi.org/10.1007/s12369-019-00596-x
  51. Vorm ES, Combs DJY (2022) Integrating transparency, trust, and acceptance: the intelligent systems technology acceptance model (ISTAM). Int J Hum Comput Interact 38(18):1828–1845. https://doi.org/10.1080/10447318.2022.2070107
  52. Voss T (2001) Game-theoretical perspectives on the emergence of social norms, pp 105–136
  53. Weisman K, Dweck CS, Markman EM (2017) Rethinking people's conceptions of mental life. Proc Natl Acad Sci USA 114(43):11374–11379
  54. Wijnen L, Coenen J, Grzyb B (2017) "It's not my fault!": investigating the effects of the deceptive behaviour of a humanoid robot. In: Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17). Association for Computing Machinery, pp 321–322. https://doi.org/10.1145/3029798.3038300
  55. Williams M (2007) Building genuine trust through interpersonal emotion management: a threat regulation model of trust and collaboration across boundaries. Acad Manage Rev 32(2):595–621. https://doi.org/10.5465/amr.2007.24351867
  56. Williams T, Briggs P, Scheutz M (2015) Covert robot-robot communication: human perceptions and implications for human-robot interaction. J Hum Robot Interact 4(2):24–49. https://doi.org/10.5898/JHRI.4.2.Williams
  57. Williams T, Ayers D, Kaufman C et al (2021) Deconstructed trustee theory: disentangling trust in body and identity in multi-robot distributed systems. In: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI '21). Association for Computing Machinery, pp 262–271. https://doi.org/10.1145/3434073.3444644
  58. Xie Y, Bodala IP, Ong DC et al (2019) Robot capability and intention in trust-based decisions across tasks. In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp 39–47. https://doi.org/10.1109/HRI.2019.8673084
  59. Yasuda S, Doheny D, Salomons N et al (2020) Perceived agency of a social norm violating robot. In: Proceedings of the Annual Meeting of the Cognitive Science Society. https://par.nsf.gov/biblio/10284325-perceived-agency-social-norm-violating-robot
