Misjudgment Exacerbates Collective Action Problems
Joshua Lewis (New York University), Shalena Srna (University of Michigan), Erin Morrissey (New York University), Matti Wilks (University of Edinburgh), Christoph Winter (Instituto Tecnológico Autónomo de México and Harvard University) and Lucius Caviola (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 2-2024
In collective action problems, suboptimal collective outcomes arise from each individual optimizing their own wellbeing. Past work assumes individuals do this because they care more about themselves than others. Yet, other factors could also contribute. We examine the role of empirical beliefs. Our results suggest people underestimate individual impact on collective problems. When collective action seems worthwhile, individual action often does not, even if the expected ratio of costs to benefits is the same. It is as if people believe “one person can’t make a difference.” We term this the collective action bias. It results from a fundamental feature of cognition: people find it hard to appreciate the impact of action that is on a much smaller scale than the problem it affects. We document this bias across nine experiments. It affects elected policymakers’ policy judgments. It affects lawyers’ and judges’ interpretation of a climate policy lawsuit. It occurs in both individualist and collectivist sample populations and in both adults and children. Finally, it influences real decisions about how others should use their money. These findings highlight the critical challenge of collective action problems. Without government intervention, not only will many individuals exacerbate collective problems due to self-interest, but even the most altruistic individuals may contribute due to misjudgment.
Other working papers
The evidentialist’s wager – William MacAskill, Aron Vallinder (Global Priorities Institute, University of Oxford), Caspar Österheld (Duke University), Carl Shulman (Future of Humanity Institute, University of Oxford), Johannes Treutlein (TU Berlin)
Suppose that an altruistic and morally motivated agent who is uncertain between evidential decision theory (EDT) and causal decision theory (CDT) finds herself in a situation in which the two theories give conflicting verdicts. We argue that even if she has significantly higher credence in CDT, she should nevertheless act …
Imperfect Recall and AI Delegation – Eric Olav Chen (Global Priorities Institute, University of Oxford), Alexis Ghersengorin (Global Priorities Institute, University of Oxford) and Sami Petersen (Department of Economics, University of Oxford)
A principal wants to deploy an artificial intelligence (AI) system to perform some task. But the AI may be misaligned and aim to pursue a conflicting objective. The principal cannot restrict its options or deliver punishments. Instead, the principal is endowed with the ability to impose imperfect recall on the agent. The principal can then simulate the task and obscure whether it is real or part of a test. This allows the principal to screen misaligned AIs during testing and discipline their behaviour in deployment. By increasing the…
The scope of longtermism – David Thorstad (Global Priorities Institute, University of Oxford)
Longtermism holds roughly that in many decision situations, the best thing we can do is what is best for the long-term future. The scope question for longtermism asks: how large is the class of decision situations for which longtermism holds? Although longtermism was initially developed to describe the situation of…