Misjudgment Exacerbates Collective Action Problems
Joshua Lewis (New York University), Shalena Srna (University of Michigan), Erin Morrissey (New York University), Matti Wilks (University of Edinburgh), Christoph Winter (Instituto Tecnológico Autónomo de México and Harvard University) and Lucius Caviola (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 2-2024
In collective action problems, suboptimal collective outcomes arise from each individual optimizing their own wellbeing. Past work assumes individuals do this because they care more about themselves than others. Yet, other factors could also contribute. We examine the role of empirical beliefs. Our results suggest people underestimate individual impact on collective problems. When collective action seems worthwhile, individual action often does not, even if the expected ratio of costs to benefits is the same. It is as if people believe “one person can’t make a difference.” We term this the collective action bias. It results from a fundamental feature of cognition: people find it hard to appreciate the impact of action that is on a much smaller scale than the problem it affects. We document this bias across nine experiments. It affects elected policymakers’ policy judgments. It affects lawyers’ and judges’ interpretation of a climate policy lawsuit. It occurs in both individualist and collectivist sample populations and in both adults and children. Finally, it influences real decisions about how others should use their money. These findings highlight the critical challenge of collective action problems. Without government intervention, not only will many individuals exacerbate collective problems due to self-interest, but even the most altruistic individuals may contribute due to misjudgment.
Other working papers
The paralysis argument – William MacAskill, Andreas Mogensen (Global Priorities Institute, University of Oxford)
Given plausible assumptions about the long-run impact of our everyday actions, we show that standard non-consequentialist constraints on doing harm entail that we should try to do as little as possible in our lives. We call this the Paralysis Argument. After laying out the argument, we consider and respond to…
Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)
A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…
Moral demands and the far future – Andreas Mogensen (Global Priorities Institute, University of Oxford)
I argue that moral philosophers have either misunderstood the problem of moral demandingness or at least failed to recognize important dimensions of the problem that undermine many standard assumptions. It has been assumed that utilitarianism concretely directs us to maximize welfare within a generation by transferring resources to people currently living in extreme poverty. In fact, utilitarianism seems to imply that any obligation to help people who are currently badly off is trumped by obligations to undertake actions targeted at improving the value…