Uncalculating cooperation is used to signal trustworthiness

Experimental evidence of selective inattention in reputation-based cooperation

Scientific Reports, 2018

Reputation-based cooperation is often observed in modern society. People gain several types of information by assessing others. Among these, the most important information is the actions of people and those of their recipients. However, almost all studies assume that people consider all of the information they receive. This assumption is extreme, and people engaging in reputation-based cooperation may not pay attention to some information, i.e., they may display selective inattention. We demonstrate that subjects' decision-making in relation to cooperative action depends on the content of the information they receive about their recipients. Our results show that subjects either consider or ignore information depending on the content of that information. When their recipients had cooperated previously, subjects cooperated without considering the information they received. When the recipients had played before with those who had bad reputations, subjects did not use that informati...

Whatever you say, your reputation precedes you: Observation and cheap talk in the trust game

Journal of Public Economics, 2009

Behavior in trust games has been linked to general notions of trust and trustworthiness, important components of social capital. In the equilibrium of a trust game, the investor does not invest, foreseeing that the allocator would keep all of the returns. We use a human-subjects experiment to test the effects of changes to the game designed to increase cooperation and efficiency. We add a pre-play stage in which the investor receives a cheap-talk message from the allocator, observes the allocator's previous decision, or both. None of these changes alter the game's theoretical predictions. We find that allowing observation results in substantially higher cooperation and efficiency, but cheap talk has little effect.
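
A minimal sketch of the no-investment prediction the abstract refers to, assuming a conventional parameterization (endowment of 10, transfers tripled) rather than the paper's actual design: with a purely self-interested allocator who returns nothing, backward induction leaves the investor best off transferring zero.

```python
# Illustrative backward induction in a standard trust game.
# Endowment (10), multiplier (3), and the integer transfer grid are
# assumptions of this sketch, not parameters from the paper.

def allocator_best_reply(received: float) -> float:
    """A purely self-interested allocator returns nothing."""
    return 0.0

def investor_best_transfer(endowment: float = 10.0, multiplier: float = 3.0) -> float:
    """Investor anticipates the allocator's best reply and picks the
    transfer that maximizes her own payoff."""
    best_transfer, best_payoff = 0.0, float("-inf")
    for transfer in range(int(endowment) + 1):      # 0, 1, ..., 10
        returned = allocator_best_reply(multiplier * transfer)
        payoff = endowment - transfer + returned
        if payoff > best_payoff:
            best_transfer, best_payoff = float(transfer), payoff
    return best_transfer

print(investor_best_transfer())   # 0.0 -> the no-investment equilibrium
```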

Reputational cues in repeated trust games

The Journal of Socio-Economics, 2009

The importance of reputation in human societies is highlighted both by theoretical models and empirical studies. In this paper, we have extended the scope of previous experimental studies based on trust games by creating treatments where players can rate their opponents' behavior and know their past ratings. Our results showed that being rated by other players and letting this rating be known are factors that increase cooperation levels even when rational reputational investment motives are ruled out. More generally, subjects tended to respond to reputational opportunities even when this was neither rational nor explainable by reciprocity.

The effect of consequential thinking on trust game behavior

Journal of Behavioral Decision Making, 2009

Contrary to rational Expected Monetary Value (EMV) predictions that no money will be transferred in Trust Games, in experiments players make positive transfers. Theorists have proposed modifying the Sender's utility function while retaining utility-maximization assumptions to account for this behavior. Such accounts assume that Senders can grasp the possible outcomes of their choices, their probabilities, and utilities. In reality, however, Senders' choices are unexpectedly complex, and the assumption that they approximate expected utility maximization is highly implausible. Instead, we suggest that Senders are guided by general propensities to trust others. Two experiments examine the effect of inducing consequential thought on Sender behavior. One induced consequential thought directly; the other did so indirectly. The amount sent was significantly reduced following either manipulation. This suggests that models of Sender behavior in Complex Trust Games should not assume that participants routinely engage in consequential thinking (CT) of the depth that would be required for utility maximization.
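
As a rough illustration of the kind of expected-monetary-value reasoning the abstract argues is implausibly demanding, the sketch below computes a Sender's EMV under an assumed belief that the Receiver returns half of the tripled transfer with some probability. The endowment, multiplier, and return rule are illustrative assumptions, not the experiments' parameters.

```python
# Minimal expected-monetary-value calculation for a Sender, assuming a
# simple belief: with probability p the Receiver returns half of the
# tripled transfer, otherwise nothing.  All numbers are illustrative.

def sender_emv(sent: float, p_reciprocate: float,
               endowment: float = 10.0, multiplier: float = 3.0) -> float:
    kept = endowment - sent
    returned_if_reciprocated = 0.5 * multiplier * sent
    return kept + p_reciprocate * returned_if_reciprocated

# Under EMV maximization, sending pays only if the expected return
# exceeds the amount given up, i.e. p * 1.5 * sent > sent,
# so p must exceed 2/3 under these assumed parameters.
for p in (0.5, 0.7):
    best = max(range(11), key=lambda s: sender_emv(s, p))
    print(p, best)    # 0.5 -> 0 (send nothing), 0.7 -> 10 (send everything)
```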

The psychological foundations of reputation-based cooperation

Philosophical Transactions of the Royal Society B: Biological Sciences, 2021

Humans care about having a positive reputation, which may prompt them to help in scenarios where the return benefits are not obvious. Various game-theoretical models support the hypothesis that concern for reputation may stabilize cooperation beyond kin, pairs or small groups. However, such models are not explicit about the underlying psychological mechanisms that support reputation-based cooperation. These models therefore cannot account for the apparent rarity of reputation-based cooperation in other species. Here, we identify the cognitive mechanisms that may support reputation-based cooperation in the absence of language. We argue that a large working memory enhances the ability to delay gratification, to understand others' mental states (which allows for perspective-taking and attribution of intentions) and to create and follow norms, which are key building blocks for increasingly complex reputation-based cooperation. We review the existing evidence for the appearance of th...

Assigning Intentions When Actions Are Unobservable: The Impact of Trembling in the Trust Game

Southern Economic Journal, 2006

This paper reports laboratory experiments investigating behavior when players may make inferences about the intentions behind others' prior actions based on higher- or lower-accuracy information about those actions. We investigate a trust game with first-mover trembling, a game in which nature determines whether the first mover's decision is implemented or reversed. The results indicate that second movers give first movers the benefit of the doubt. However, first movers do not anticipate this response. Ultimately, it appears that subjects are thinking on at least three levels when making decisions: they are concerned with their own material well-being, the trustworthiness of their counterpart, and how their own actions will be perceived.
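
A hedged sketch of the inference problem the trembling design creates for the second mover: treating the tremble as a known reversal probability, a Bayesian second mover can only form a posterior over the first mover's intention given the action that was actually implemented. The prior and tremble probability below are arbitrary illustrative values, not the experimental parameters.

```python
# Posterior belief that the first mover intended to trust, given the
# implemented action, when nature reverses choices with a known
# tremble probability.  Prior and tremble values are assumptions.

def posterior_intended_trust(observed_trust: bool,
                             prior_trust: float = 0.5,
                             tremble: float = 0.2) -> float:
    """P(first mover intended to trust | implemented action)."""
    # An intended choice is implemented with prob 1 - tremble,
    # and reversed with prob tremble.
    if observed_trust:
        like_trust, like_not = 1 - tremble, tremble
    else:
        like_trust, like_not = tremble, 1 - tremble
    num = prior_trust * like_trust
    return num / (num + (1 - prior_trust) * like_not)

print(posterior_intended_trust(True))    # 0.8 - observed trust is informative
print(posterior_intended_trust(False))   # 0.2 - but not conclusive
```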

Deception Undermines the Stability of Cooperation in Games of Indirect Reciprocity

PLOS ONE, 2016

Indirect reciprocity is often claimed as one of the key mechanisms of human cooperation. It works only if there is reputational score keeping and each individual can learn with high probability which other individuals were good or bad in the previous round. Gossip is often proposed as a mechanism that can maintain such coherence of reputations in the face of errors of transmission. Random errors, however, are not the only source of uncertainty in such situations. The possibility of deceptive communication, where signallers aim to misinform the receiver, cannot be excluded. While there is plenty of evidence for deceptive communication in humans, the possibility of deception is not yet incorporated into models of indirect reciprocity. Here we show that when deceptive strategies are allowed in the population, they cause the collapse of the coherence of reputations and, in turn, the collapse of cooperation. This collapse is independent of the norms and of the cost and benefit values. It is due to the fact that there is no selection for honest communication in the framework of indirect reciprocity. It follows that indirect reciprocity can only be plausibly proposed as a mechanism of human cooperation if additional mechanisms that maintain honesty are specified in the model.
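
A toy illustration (not the paper's evolutionary model) of why deceptive reports undermine score keeping: a discriminator who conditions cooperation on gossip increasingly misdirects help as the share of deceptive reports grows. The population composition, number of rounds, and deception rates are assumptions of the sketch.

```python
# A discriminator cooperates only with partners reported as "good".
# Honest gossip reports the partner's type; deceptive gossip inverts it.
# As deception rises, the report becomes uninformative and help is
# misdirected at roughly the deception rate.

import random

def misdirected_help_rate(deception_rate: float, rounds: int = 10_000,
                          seed: int = 1) -> float:
    rng = random.Random(seed)
    errors = 0
    for _ in range(rounds):
        partner_is_cooperator = rng.random() < 0.5
        report_good = (partner_is_cooperator if rng.random() >= deception_rate
                       else not partner_is_cooperator)
        discriminator_helps = report_good
        # An error: helping a defector, or refusing a cooperator.
        errors += (discriminator_helps != partner_is_cooperator)
    return errors / rounds

for rate in (0.0, 0.2, 0.5):
    print(rate, misdirected_help_rate(rate))
# ~0.0, ~0.2, ~0.5: scoring drifts toward chance as deception spreads
```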

Scarce and directly beneficial reputations support cooperation

Scientific Reports

A human solution to the problem of cooperation is the maintenance of informal reputation hierarchies. Reputational information contributes to cooperation by providing guidelines about previous group-beneficial or free-rider behaviour in social dilemma interactions. How reputation information could be credible, however, remains a puzzle. We test two potential safeguards to ensure credibility: (i) reputation is a scarce resource and (ii) it is not earned for direct benefits. We test these solutions in a laboratory experiment in which participants played two-person Prisoner’s Dilemma games without partner selection, could observe some other interactions, and could communicate reputational information about possible opponents to each other. Reputational information clearly influenced cooperation decisions. Although cooperation was not sustained at a high level in any of the conditions, the possibility of exchanging third-party information was able to temporarily increase the level of st...
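
A minimal sketch of the kind of decision rule this design enables: cooperating in a one-shot Prisoner's Dilemma only when third-party gossip about the opponent is positive. The payoff values (T=5 > R=3 > P=1 > S=0) are conventional textbook assumptions, not the experiment's stakes.

```python
# Conditional cooperation driven by a third-party reputational message.
# Payoff table maps (my action, opponent action) -> my payoff.

PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def conditional_cooperator(gossip_about_opponent: str) -> str:
    """Cooperate only if the reputational message received is positive."""
    return "C" if gossip_about_opponent == "cooperated" else "D"

me = conditional_cooperator("cooperated")   # -> "C"
them = "D"
print(PAYOFFS[(me, them)])                  # 0: the cost of trusting a false report
```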

Partner Selection Supported by Opaque Reputation Promotes Cooperative Behavior

Microeconomics: Welfare Economics & Collective Decision-Making eJournal, 2016

Reputation plays a major role in human societies, and it has been proposed as an explanation for the evolution of cooperation. While the majority of previous studies equates reputation with a transparent and complete history of players' past decisions, in real life, reputations are often ambiguous and opaque. Using web-based experiments, we explore the extent to which opaque reputation works in isolating defectors, with and without partner selection opportunities. Our results show that low reputation works as a signal of untrustworthiness, whereas medium or high reputations are not taken into account by participants for orienting their choices. We also find that reputation without partner selection does not promote cooperative behavior; that is, defectors do not turn into cooperators only for the sake of getting a positive reputation. Finally, in a third study, we find that, when reputation is pivotal to selection, a substantial proportion of would-be defectors turn into coo...