Jillian J. Jordan - Faculty & Research - Harvard Business School

Jillian J. Jordan

Assistant Professor of Business Administration

Negotiation, Organizations & Markets

Journal Articles
  1. When Do We Punish People Who Don't?

    Justin W. Martin, Jillian J. Jordan, David G. Rand and Fiery Cushman

    People often punish norm violations. In what cases is such punishment viewed as normative—a behavior that we “should” or even “must” engage in? We approach this question by asking when people who fail to punish a norm violator are, themselves, punished. (For instance, a boss who fails to punish transgressive employees might, herself, be fired.) We conducted experiments exploring the contexts in which higher-order punishment occurs, using both incentivized economic games and hypothetical vignettes describing everyday situations. We presented participants with cases in which an individual fails to punish a transgressor, either as a victim (second party) or as an observer (third party). Across studies, we consistently observed higher-order punishment of non-punishing observers. Higher-order punishment of non-punishing victims, however, was consistently weaker, and sometimes non-existent. These results demonstrate the selective application of higher-order punishment, provide a new perspective on the psychological mechanisms that support it, and provide some clues regarding its function.

    Keywords: punishment; norms; Cooperation; Societal Protocols; Adaptation;

    Citation:

    Martin, Justin W., Jillian J. Jordan, David G. Rand, and Fiery Cushman. "When Do We Punish People Who Don't?" Cognition 193 (December 2019).
  2. Signaling When Nobody Is Watching: A Reputation Heuristics Account of Outrage and Punishment in One-shot Anonymous Interactions

    Jillian J. Jordan and David G. Rand

    Moralistic punishment can confer reputation benefits by signaling trustworthiness to observers. However, why do people punish even when nobody is watching? We argue that people often rely on the heuristic that reputation is typically at stake, such that reputation concerns can shape moralistic outrage and punishment even in one-shot anonymous interactions. We then support this account using data from Amazon Mechanical Turk. In anonymous experiments, subjects (total n = 8,440) report more outrage in response to others’ selfishness when they cannot signal their trustworthiness through direct prosociality (sharing with a third party)—such that if the interaction were not anonymous, punishment would have greater signaling value. Furthermore, mediation analyses suggest that sharing opportunities reduce outrage by influencing reputation concerns. Additionally, anonymous experiments measuring costly punishment (total n = 6,076) show the same pattern: subjects punish more when sharing is not possible. Moreover, and importantly, moderation analyses provide some evidence that sharing opportunities do not merely reduce outrage and punishment by inducing empathy toward selfishness or hypocrisy aversion among non-sharers. Finally, we support the specific role of heuristics by investigating individual differences in deliberateness. Less deliberative individuals (who typically rely more on heuristics) are more sensitive to sharing opportunities in our anonymous punishment experiments, but, critically, not in punishment experiments where reputation is at stake (total n = 3,422); and not in our anonymous outrage experiments (where condemning is costless). Together, our results suggest that when nobody is watching, reputation cues nonetheless can shape outrage and—among individuals who rely on heuristics—costly punishment.

    Keywords: signaling; Morality; trustworthiness; anger; third-party punishment; Moral Sensibility; Behavior; Trust; Reputation;

    Citation:

    Jordan, Jillian J., and David G. Rand. "Signaling When Nobody Is Watching: A Reputation Heuristics Account of Outrage and Punishment in One-shot Anonymous Interactions." Journal of Personality and Social Psychology 118, no. 1 (January 2020).
  3. Which Accusations Stick?

    Jillian J. Jordan

    The social function of witchcraft accusations remains opaque. An empirical study of Chinese villagers shows that the label ‘zhu’ influences who interacts with whom across a social network, but appears not to tag defectors in service of promoting cooperation. An open question thus remains: from witchcraft to gossip, which accusations stick?

    Keywords: Society; Reputation;

    Citation:

    Jordan, Jillian J. "Which Accusations Stick?" Nature Human Behaviour 2, no. 1 (January 2018): 19–20.
  4. Statistical Physics of Human Cooperation

    Matjaž Perc, Jillian J. Jordan, David G. Rand, Zhen Wang, Stefano Boccaletti and Attila Szolnoki

    Extensive cooperation among unrelated individuals is unique to humans, who often sacrifice personal benefits for the common good and work together to achieve what they are unable to execute alone. The evolutionary success of our species is indeed due, to a large degree, to our unparalleled other-regarding abilities. Yet, a comprehensive understanding of human cooperation remains a formidable challenge. Recent research in the social sciences indicates that it is important to focus on the collective behavior that emerges as the result of the interactions among individuals, groups, and even societies. Non-equilibrium statistical physics, in particular Monte Carlo methods and the theory of collective behavior of interacting particles near phase transition points, has proven to be very valuable for understanding counterintuitive evolutionary outcomes. By treating models of human cooperation as classical spin models, a physicist can draw on familiar settings from statistical physics. However, unlike pairwise interactions among particles that typically govern solid-state physics systems, interactions among humans often involve group interactions, and they also involve a larger number of possible states even for the most simplified description of reality. The complexity of solutions therefore often surpasses that observed in physical systems. Here we review experimental and theoretical research that advances our understanding of human cooperation, focusing on spatial pattern formation, on the spatiotemporal dynamics of observed solutions, and on self-organization that may either promote or hinder socially favorable states.

    Keywords: human cooperation; evolutionary game theory; public goods; reward; punishment; tolerance; self-organization; pattern formation; Cooperation; Behavior; Game Theory;

    Citation:

    Perc, Matjaž, Jillian J. Jordan, David G. Rand, Zhen Wang, Stefano Boccaletti, and Attila Szolnoki. "Statistical Physics of Human Cooperation." Physics Reports 687 (May 8, 2017): 1–51.
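The Monte Carlo approach this review surveys can be illustrated with a minimal sketch: a spatial Prisoner's Dilemma on a lattice where players imitate neighbors via the Fermi rule. All parameter values and names below are illustrative assumptions, not taken from the paper.

```python
import math, random

# Minimal sketch (illustrative only): players on an L x L lattice cooperate (1)
# or defect (0), earn payoffs from their four neighbors, and imitate neighbors
# with a probability given by the Fermi rule, tempered by noise K.

random.seed(0)
L, b, K = 10, 1.3, 0.1  # lattice size, temptation to defect, selection noise

grid = [[random.randint(0, 1) for _ in range(L)] for _ in range(L)]

def neighbors(x, y):
    # von Neumann neighborhood with periodic boundaries
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def payoff(x, y):
    # weak Prisoner's Dilemma payoffs: R = 1 (C meets C), T = b (D meets C), S = P = 0
    s, total = grid[x][y], 0.0
    for nx, ny in neighbors(x, y):
        if grid[nx][ny] == 1:
            total += 1.0 if s == 1 else b
    return total

def monte_carlo_step():
    # one full sweep: L*L random imitation attempts
    for _ in range(L * L):
        x, y = random.randrange(L), random.randrange(L)
        nx, ny = random.choice(neighbors(x, y))
        # Fermi rule: adopting a higher-earning neighbor's strategy is likely,
        # but noise K permits occasional "irrational" switches
        p = 1.0 / (1.0 + math.exp((payoff(x, y) - payoff(nx, ny)) / K))
        if random.random() < p:
            grid[x][y] = grid[nx][ny]

for _ in range(50):
    monte_carlo_step()
cooperator_fraction = sum(map(sum, grid)) / (L * L)
print(cooperator_fraction)  # fraction of cooperators after 50 sweeps
```

Tracking `cooperator_fraction` across sweeps and parameter values is, in spirit, how phase-transition-like behavior is mapped in such models.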
  5. Third-Party Punishment as a Costly Signal of High Continuation Probabilities in Repeated Games

    Jillian J. Jordan and David G. Rand

    Why do individuals pay costs to punish selfish behavior, even as third-party observers? A large body of research suggests that reputation plays an important role in motivating such third-party punishment (TPP). Here we focus on a recently proposed reputation-based account (Jordan et al., 2016) that invokes costly signaling. This account proposed that “trustworthy type” individuals (who are incentivized to cooperate with others) typically experience lower costs of TPP, and thus that TPP can function as a costly signal of trustworthiness. Specifically, it was argued that some but not all individuals face incentives to cooperate, making them high-quality and trustworthy interaction partners; and, because the same mechanisms that incentivize cooperation also create benefits for using TPP to deter selfish behavior, these individuals are likely to experience reduced costs of punishing selfishness. Here, we extend this conceptual framework by providing a concrete, “from-the-ground-up” model demonstrating how this process could work in the context of repeated interactions incentivizing both cooperation and punishment. We show how individual differences in the probability of future interaction can create types that vary in whether they find cooperation payoff-maximizing (and thus make high-quality partners), as well as in their net costs of TPP – because a higher continuation probability increases the likelihood of receiving rewards from the victim of the punished transgression (thus offsetting the cost of punishing). We also provide a simple model of dispersal that demonstrates how types that vary in their continuation probabilities can stably coexist, because the payoff from remaining in one’s local environment (i.e. not dispersing) decreases with the number of others who stay. Together, this model demonstrates, from the ground up, how TPP can serve as a costly signal of trustworthiness arising from exposure to repeated interactions.

    Keywords: direct reciprocity; evolution; dispersal; Cooperation; Trust; Reputation; Game Theory;

    Citation:

    Jordan, Jillian J., and David G. Rand. "Third-Party Punishment as a Costly Signal of High Continuation Probabilities in Repeated Games." Journal of Theoretical Biology 421 (May 21, 2017): 189–202.
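The intuition above can be sketched numerically under a standard grim-trigger assumption; the function names and parameter values below are hypothetical, not the authors' actual model. A higher continuation probability w both makes cooperation payoff-maximizing and offsets the cost of punishing via expected rewards from the victim.

```python
# Illustrative sketch, assuming a grim-trigger repeated game: cooperating
# costs c per round and delivers benefit b to the partner. All names and
# numbers here are assumptions for illustration.

def cooperation_pays(w: float, b: float, c: float) -> bool:
    """Defecting gains c now but forfeits the partner's cooperation (worth b
    per round) over an expected w / (1 - w) future rounds. Cooperate iff the
    expected future loss is at least the present gain."""
    expected_future_rounds = w / (1 - w)
    return expected_future_rounds * b >= c

def net_punishment_cost(cost_of_punishing: float, reward_from_victim: float, w: float) -> float:
    """A punisher pays the cost now but, with probability w, remains in the
    interaction to receive a reward from the transgression's victim; a higher
    w therefore lowers the net cost of punishing."""
    return cost_of_punishing - w * reward_from_victim

# A "high-w" type both finds cooperation worthwhile and punishes cheaply:
print(cooperation_pays(0.8, b=2.0, c=1.0))   # True: 4 expected future rounds
print(cooperation_pays(0.2, b=2.0, c=1.0))   # False: too few future rounds
print(net_punishment_cost(1.0, reward_from_victim=1.5, w=0.8))  # negative net cost
```

This is why, in the model, punishment can honestly signal a high continuation probability, and hence trustworthiness: only high-w types find it cheap.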
  6. Why Do We Hate Hypocrites? Evidence for a Theory of False Signaling

    Jillian J. Jordan, Roseanna Sommers, Paul Bloom and David G. Rand

    Why do people judge hypocrites, who condemn immoral behaviors that they in fact engage in, so negatively? We propose that hypocrites are disliked because their condemnation sends a false signal about their personal conduct, deceptively suggesting that they behave morally. We show that verbal condemnation signals moral goodness (Study 1) and does so even more convincingly than directly stating that one behaves morally (Study 2). We then demonstrate that people judge hypocrites negatively—even more negatively than people who directly make false statements about their morality (Study 3). Finally, we show that “honest” hypocrites—who avoid false signaling by admitting to committing the condemned transgression—are not perceived negatively even though their actions contradict their stated values (Study 4). Critically, the same is not true of hypocrites who engage in false signaling but admit to unrelated transgressions (Study 5). Together, our results support a false-signaling theory of hypocrisy.

    Keywords: moral psychology; condemnation; vignettes; deception; social signaling; Open Data; open materials; Moral Sensibility; Behavior; Perception;

    Citation:

    Jordan, Jillian J., Roseanna Sommers, Paul Bloom, and David G. Rand. "Why Do We Hate Hypocrites? Evidence for a Theory of False Signaling." Psychological Science 28, no. 3 (March 2017): 356–368.
  7. No Unique Effect of Intergroup Competition on Cooperation: Non-competitive Thresholds Are as Effective as Competitions between Groups for Increasing Human Cooperative Behavior

    Matthew R. Jordan, Jillian J. Jordan and David G. Rand

    Explaining cooperation remains a central topic for evolutionary theorists. Many have argued that group selection provides such an explanation: theoretical models show that intergroup competition could have given rise to cooperation that is costly for the individual. Whether group selection actually did play an important role in the evolution of human cooperation, however, is much debated. Recent experiments have shown that intergroup competitions do increase human cooperation, which has been taken as evidence for group selection as a mechanism for the evolution of cooperation. Here we challenge this standard interpretation. Competitions change the payoff structure by creating a threshold effect whereby the group that contributes more earns an additional prize, which creates some incentive for individuals to cooperate. We present four studies that disentangle competition and thresholds, and strongly suggest that it is thresholds–rather than competitions per se–that increase cooperation. Thus, prior intergroup competition experiments provide no evidence of a unique or special role for intergroup competition in promoting human cooperation, and shed no light on whether group selection shaped human evolution.

    Keywords: Intergroup competition; threshold public goods game; multi-level selection; Cooperation; Groups and Teams; Competition;

    Citation:

    Jordan, Matthew R., Jillian J. Jordan, and David G. Rand. "No Unique Effect of Intergroup Competition on Cooperation: Non-competitive Thresholds Are as Effective as Competitions between Groups for Increasing Human Cooperative Behavior." Evolution and Human Behavior 38, no. 1 (January 2017): 102–108.
  8. Uncalculating Cooperation Is Used to Signal Trustworthiness

    Jillian J. Jordan, Moshe Hoffman, Martin A. Nowak and David G. Rand

    Humans frequently cooperate without carefully weighing the costs and benefits. As a result, people may wind up cooperating when it is not worthwhile to do so. Why risk making costly mistakes? Here, we present experimental evidence that reputation concerns provide an answer: people cooperate in an uncalculating way to signal their trustworthiness to observers. We present two economic game experiments in which uncalculating versus calculating decision-making is operationalized by either a subject’s choice of whether to reveal the precise costs of cooperating (Exp. 1) or the time a subject spends considering these costs (Exp. 2). In both experiments, we find that participants are more likely to engage in uncalculating cooperation when their decision-making process is observable to others. Furthermore, we confirm that people who engage in uncalculating cooperation are perceived as, and actually are, more trustworthy than people who cooperate in a calculating way. Taken together, these data provide the first empirical evidence, to our knowledge, that uncalculating cooperation is used to signal trustworthiness, and is not merely an efficient decision-making strategy that reduces cognitive costs. Our results thus help to explain a range of puzzling behaviors, such as extreme altruism, the use of ethical principles, and romantic love.

    Keywords: Social Evaluation; experimental economics; moral psychology; Cooperation; Reputation; Decision Making;

    Citation:

    Jordan, Jillian J., Moshe Hoffman, Martin A. Nowak, and David G. Rand. "Uncalculating Cooperation Is Used to Signal Trustworthiness." Proceedings of the National Academy of Sciences 113, no. 31 (August 2, 2016): 8658–8663.
  9. Third-party Punishment as a Costly Signal of Trustworthiness

    Jillian J. Jordan, Moshe Hoffman, Paul Bloom and David G. Rand

    Third-party punishment (TPP), in which unaffected observers punish selfishness, promotes cooperation by deterring defection. But why should individuals choose to bear the costs of punishing? We present a game theoretic model of TPP as a costly signal of trustworthiness. Our model is based on individual differences in the costs and/or benefits of being trustworthy. We argue that individuals for whom trustworthiness is payoff-maximizing will find TPP to be less net costly (for example, because mechanisms that incentivize some individuals to be trustworthy also create benefits for deterring selfishness via TPP). We show that because of this relationship, it can be advantageous for individuals to punish selfishness in order to signal that they are not selfish themselves. We then empirically validate our model using economic game experiments. We show that TPP is indeed a signal of trustworthiness: third-party punishers are trusted more, and actually behave in a more trustworthy way, than non-punishers. Furthermore, as predicted by our model, introducing a more informative signal—the opportunity to help directly—attenuates these signalling effects. When potential punishers have the chance to help, they are less likely to punish, and punishment is perceived as, and actually is, a weaker signal of trustworthiness. Costly helping, in contrast, is a strong and highly used signal even when TPP is also possible. Together, our model and experiments provide a formal reputational account of TPP, and demonstrate how the costs of punishing may be recouped by the long-run benefits of signalling one’s trustworthiness.

    Keywords: third-party punishment; trustworthiness; Behavior; Trust; Game Theory;

    Citation:

    Jordan, Jillian J., Moshe Hoffman, Paul Bloom, and David G. Rand. "Third-party Punishment as a Costly Signal of Trustworthiness." Nature 530, no. 7591 (2016): 473–476.
  10. The Effects of Endowment Size and Strategy Method on Third Party Punishment

    Jillian J. Jordan, Katherine McAuliffe and David G. Rand

    Numerous experiments have shown that people often engage in third-party punishment (3PP) of selfish behavior. This evidence has been used to argue that people respond to selfishness with anger, and get utility from punishing those who mistreat others. Elements of the standard 3PP experimental design, however, allow alternative explanations: it has been argued that 3PP could be motivated by envy (as selfish dictators earn high payoffs), or could be influenced by the use of the strategy method (which is known to influence second-party punishment). Here we test these alternatives by varying the third party’s endowment and the use of the strategy method, and measuring punishment. We find that while third parties do report more envy when they have lower endowments, neither manipulation significantly affects punishment. We also show that punishment is associated with ratings of anger but not of envy. Thus, our results suggest that 3PP is not an artifact of self-focused envy or use of the strategy method. Instead, our findings are consistent with the hypothesis that 3PP is motivated by anger.

    Keywords: third-party punishment; norm-enforcement; strategy method; economic games; Cooperation; Emotions; Fairness;

    Citation:

    Jordan, Jillian J., Katherine McAuliffe, and David G. Rand. "The Effects of Endowment Size and Strategy Method on Third Party Punishment." Experimental Economics 19, no. 4 (December 2016): 741–763.
  11. Costly Third-party Punishment in Young Children

    Katherine McAuliffe, Jillian J. Jordan and Felix Warneken

    Human adults engage in costly third-party punishment of unfair behavior, but the developmental origins of this behavior are unknown. Here we investigate costly third-party punishment in 5- and 6-year-old children. Participants were asked to accept (enact) or reject (punish) proposed allocations of resources between a pair of absent, anonymous children. In addition, we manipulated whether subjects had to pay a cost to punish proposed allocations. Experiment 1 showed that 6-year-olds (but not 5-year-olds) punished unfair proposals more than fair proposals. However, children punished less when doing so was personally costly. Thus, while sensitive to cost, they were willing to sacrifice resources to intervene against unfairness. Experiment 2 showed that 6-year-olds were less sensitive to unequal allocations when they resulted from selfishness than from generosity. These findings show that costly third-party punishment of unfair behavior is present in young children, suggesting that from early in development children show a sophisticated capacity to promote fair behavior.

    Keywords: third-party punishment; inequity aversion; social cognition; Cooperation; Fairness; Behavior;

    Citation:

    McAuliffe, Katherine, Jillian J. Jordan, and Felix Warneken. "Costly Third-party Punishment in Young Children." Cognition 134 (January 2015): 1–10.
  12. Heuristics Guide the Implementation of Social Preferences in One-Shot Prisoner's Dilemma Experiments

    Jillian J. Jordan, Valerio Capraro and David G. Rand

    Cooperation in one-shot anonymous interactions is a widely documented aspect of human behavior. Here we shed light on the motivations behind this behavior by experimentally exploring cooperation in a one-shot continuous-strategy Prisoner’s Dilemma (i.e. one-shot two-player Public Goods Game). We examine the distribution of cooperation amounts, and how that distribution varies based on the benefit-to-cost ratio of cooperation (b/c). Interestingly, we find a trimodal distribution at all b/c values investigated. Increasing b/c decreases the fraction of participants engaging in zero cooperation and increases the fraction engaging in maximal cooperation, suggesting a role for efficiency concerns. However, a substantial fraction of participants consistently engage in 50% cooperation regardless of b/c. The presence of these persistent 50% cooperators is surprising, and not easily explained by standard models of social preferences. We present evidence that this behavior is a result of social preferences guided by simple decision heuristics, rather than the rational examination of payoffs assumed by most social preference models. We also find a strong correlation between play in the Prisoner’s Dilemma and in a subsequent Dictator Game, confirming previous findings suggesting a common prosocial motivation underlying altruism and cooperation.

    Keywords: Human behavior; social evolution; Behavior; Cooperation; Decision Making; Game Theory;

    Citation:

    Jordan, Jillian J., Valerio Capraro, and David G. Rand. "Heuristics Guide the Implementation of Social Preferences in One-Shot Prisoner's Dilemma Experiments." Art. 6790. Scientific Reports 4 (2014).
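The game described above (a one-shot continuous-strategy Prisoner's Dilemma, i.e. a two-player public goods game) has a simple payoff structure, sketched here with illustrative parameter values rather than those used in the experiments:

```python
# Hypothetical parameterization: each player chooses a cooperation level
# x in [0, 1]; paying cost c*x delivers benefit b*x to the partner. With
# b > c > 0, mutual cooperation maximizes joint payoff, yet any individual
# contribution is strictly costly to the contributor.

def payoff(own_x: float, partner_x: float, b: float, c: float, endowment: float = 1.0) -> float:
    """Material payoff: keep the endowment minus own cost, plus the partner's transfer."""
    return endowment - c * own_x + b * partner_x

b, c = 4.0, 1.0  # benefit-to-cost ratio b/c = 4
for x in (0.0, 0.5, 1.0):  # the three modes of the trimodal distribution
    print(x, payoff(x, x, b, c))  # mutual cooperation at level x
```

Raising b/c increases the payoff gap between mutual full cooperation and mutual defection, which is consistent with efficiency concerns shifting mass between the 0% and 100% modes; the persistent 50% mode is what standard payoff-based preference models struggle to explain.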
  13. Development of In-Group Favoritism in Children's Third-Party Punishment of Selfishness

    Jillian J. Jordan, Katherine McAuliffe and Felix Warneken

    When enforcing norms for cooperative behavior, human adults sometimes exhibit in-group bias. For example, third-party observers punish selfish behaviors committed by out-group members more harshly than similar behaviors committed by in-group members. Although evidence suggests that children begin to systematically punish selfish behavior around the age of 6 y, the development of in-group bias in their punishment remains unknown. Do children start off enforcing fairness norms impartially, or is norm enforcement biased from its emergence? How does bias change over development? Here, we created novel social groups in the laboratory and gave 6- and 8-year-olds the opportunity to engage in costly third-party punishment of selfish sharing behavior. We found that by age 6, punishment was already biased: Selfish resource allocations received more punishment when they were proposed by out-group members and when they disadvantaged in-group members. We also found that although costly punishment increased between ages 6 and 8, bias in punishment partially decreased. Although 8-y-olds also punished selfish out-group members more harshly, they were equally likely to punish on behalf of disadvantaged in-group and out-group members, perhaps reflecting efforts to enforce norms impartially. Taken together, our results suggest that norm enforcement is biased from its emergence, but that this bias can be partially overcome through developmental change.

    Keywords: ontogeny; Cooperation; Equality and Inequality;

    Citation:

    Jordan, Jillian J., Katherine McAuliffe, and Felix Warneken. "Development of In-Group Favoritism in Children's Third-Party Punishment of Selfishness." Proceedings of the National Academy of Sciences 111, no. 35 (September 2, 2014): 12710–12715.
  14. Contagion of Cooperation in Static and Fluid Social Networks

    Jillian J. Jordan, David G. Rand, Samuel Arbesman, James H. Fowler and Nicholas A. Christakis

    Cooperation is essential for successful human societies. Thus, understanding how cooperative and selfish behaviors spread from person to person is a topic of theoretical and practical importance. Previous laboratory experiments provide clear evidence of social contagion in the domain of cooperation, both in fixed networks and in randomly shuffled networks, but leave open the possibility of asymmetries in the spread of cooperative and selfish behaviors. Additionally, many real human interaction structures are dynamic: we often have control over whom we interact with. Dynamic networks may differ importantly in the goals and strategic considerations they promote, and thus the question of how cooperative and selfish behaviors spread in dynamic networks remains open. Here, we address these questions with data from a social dilemma laboratory experiment. We measure the contagion of both cooperative and selfish behavior over time across three different network structures that vary in the extent to which they afford individuals control over their network ties. We find that in relatively fixed networks, both cooperative and selfish behaviors are contagious. In contrast, in more dynamic networks, selfish behavior is contagious, but cooperative behavior is not: subjects are fairly likely to switch to cooperation regardless of the behavior of their neighbors. We hypothesize that this insensitivity to the behavior of neighbors in dynamic networks is the result of subjects’ desire to attract new cooperative partners: even if many of one’s current neighbors are defectors, it may still make sense to switch to cooperation. We further hypothesize that selfishness remains contagious in dynamic networks because of the well-documented willingness of cooperators to retaliate against selfishness, even when doing so is costly. 
These results shed light on the contagion of cooperative behavior in fixed and fluid networks, and have implications for influence-based interventions aiming at increasing cooperative behavior.

    Keywords: social contagion; social networks; Cooperation; Behavior;

    Citation:

    Jordan, Jillian J., David G. Rand, Samuel Arbesman, James H. Fowler, and Nicholas A. Christakis. "Contagion of Cooperation in Static and Fluid Social Networks." PLoS ONE 8, no. 6 (June 2013).
Book Chapters