Misjudgment Exacerbates Collective Action Problems
Joshua Lewis (New York University), Shalena Srna (University of Michigan), Erin Morrissey (New York University), Matti Wilks (University of Edinburgh), Christoph Winter (Instituto Tecnológico Autónomo de México and Harvard University) and Lucius Caviola (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 2-2024
In collective action problems, suboptimal collective outcomes arise from each individual optimizing their own wellbeing. Past work assumes individuals do this because they care more about themselves than others. Yet, other factors could also contribute. We examine the role of empirical beliefs. Our results suggest people underestimate individual impact on collective problems. When collective action seems worthwhile, individual action often does not, even if the expected ratio of costs to benefits is the same. It is as if people believe “one person can’t make a difference.” We term this the collective action bias. It results from a fundamental feature of cognition: people find it hard to appreciate the impact of action that is on a much smaller scale than the problem it affects. We document this bias across nine experiments. It affects elected policymakers’ policy judgments. It affects lawyers’ and judges’ interpretation of a climate policy lawsuit. It occurs in both individualist and collectivist sample populations and in both adults and children. Finally, it influences real decisions about how others should use their money. These findings highlight the critical challenge of collective action problems. Without government intervention, not only will many individuals exacerbate collective problems due to self-interest, but even the most altruistic individuals may contribute due to misjudgment.
Other working papers
Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)
A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…
Can an evidentialist be risk-averse? – Hayden Wilkinson (Global Priorities Institute, University of Oxford)
Two key questions of normative decision theory are: 1) whether the probabilities relevant to decision theory are evidential or causal; and 2) whether agents should be risk-neutral, and so maximise the expected value of the outcome, or instead risk-averse (or otherwise sensitive to risk). These questions are typically thought to be independent – that our answer to one bears little on our answer to the other. …
What power-seeking theorems do not show – David Thorstad (Vanderbilt University)
Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is that artificial agents may be power-seeking, aiming to acquire power and in the process disempowering humanity. A range of power-seeking theorems seek to give formal articulation to the idea that artificial agents are likely to be power-seeking. I argue that leading theorems face five challenges, then draw lessons from this result.