Misjudgment Exacerbates Collective Action Problems

Joshua Lewis (New York University), Shalena Srna (University of Michigan), Erin Morrissey (New York University), Matti Wilks (University of Edinburgh), Christoph Winter (Instituto Tecnológico Autónomo de México and Harvard University), and Lucius Caviola (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 2-2024

In collective action problems, suboptimal collective outcomes arise from each individual optimizing their own wellbeing. Past work assumes individuals do this because they care more about themselves than about others. Yet other factors could also contribute. We examine the role of empirical beliefs. Our results suggest people underestimate individual impact on collective problems. When collective action seems worthwhile, individual action often does not, even if the expected ratio of costs to benefits is the same. It is as if people believe “one person can’t make a difference.” We term this the collective action bias. It results from a fundamental feature of cognition: people find it hard to appreciate the impact of action that is on a much smaller scale than the problem it affects. We document this bias across nine experiments. It affects elected policymakers’ policy judgments. It affects lawyers’ and judges’ interpretation of a climate policy lawsuit. It occurs in both individualist and collectivist sample populations and in both adults and children. Finally, it influences real decisions about how others should use their money. These findings highlight the critical challenge of collective action problems. Without government intervention, not only will many individuals exacerbate collective problems due to self-interest, but even the most altruistic individuals may contribute to them due to misjudgment.

Other working papers

A bargaining-theoretic approach to moral uncertainty – Owen Cotton-Barratt (Future of Humanity Institute, University of Oxford), Hilary Greaves (Global Priorities Institute, University of Oxford)

This paper explores a new approach to the problem of decision under relevant moral uncertainty. We treat the case of an agent making decisions in the face of moral uncertainty on the model of bargaining theory, as if the decision-making process were one of bargaining among different internal parts of the agent, with different parts committed to different moral theories. The resulting approach contrasts interestingly with the extant “maximise expected choiceworthiness”…

Non-additive axiologies in large worlds – Christian Tarsney and Teruji Thomas (Global Priorities Institute, University of Oxford)

Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say ‘yes’, but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say ‘no’…

What power-seeking theorems do not show – David Thorstad (Vanderbilt University)

Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is that artificial agents may be power-seeking, aiming to acquire power and in the process disempowering humanity. A range of power-seeking theorems seek to give formal articulation to the idea that artificial agents are likely to be power-seeking. I argue that leading theorems face five challenges, then draw lessons from this result.