Misjudgment Exacerbates Collective Action Problems

Joshua Lewis (New York University), Shalena Srna (University of Michigan), Erin Morrissey (New York University), Matti Wilks (University of Edinburgh), Christoph Winter (Instituto Tecnológico Autónomo de México and Harvard University) and Lucius Caviola (Global Priorities Institute, University of Oxford)

GPI Working Paper No. 2-2024

In collective action problems, suboptimal collective outcomes arise from each individual optimizing their own wellbeing. Past work assumes individuals do this because they care more about themselves than about others. Yet other factors could also contribute. We examine the role of empirical beliefs. Our results suggest people underestimate individual impact on collective problems. When collective action seems worthwhile, individual action often does not, even if the expected ratio of costs to benefits is the same. It is as if people believe “one person can’t make a difference.” We term this the collective action bias. It results from a fundamental feature of cognition: people find it hard to appreciate the impact of action that is on a much smaller scale than the problem it affects. We document this bias across nine experiments. It affects elected policymakers’ policy judgments. It affects lawyers’ and judges’ interpretation of a climate policy lawsuit. It occurs in both individualist and collectivist sample populations and in both adults and children. Finally, it influences real decisions about how others should use their money. These findings highlight the critical challenge of collective action problems. Without government intervention, not only will many individuals exacerbate collective problems out of self-interest, but even the most altruistic individuals may contribute to them out of misjudgment.

Other working papers

Measuring AI-Driven Risk with Stock Prices – Susana Campos-Martins (Global Priorities Institute, University of Oxford)

We propose an empirical approach to identify and measure AI-driven shocks based on the co-movements of relevant financial asset prices. For that purpose, we first calculate the common volatility of the share prices of major US AI-relevant companies. Then we isolate the events that shake this industry alone from those that shake all sectors of economic activity at the same time. For the sample analysed, AI shocks are identified when there are announcements about (mergers and) acquisitions in the AI industry, launching of…

Can an evidentialist be risk-averse? – Hayden Wilkinson (Global Priorities Institute, University of Oxford)

Two key questions of normative decision theory are: 1) whether the probabilities relevant to decision theory are evidential or causal; and 2) whether agents should be risk-neutral, and so maximise the expected value of the outcome, or instead risk-averse (or otherwise sensitive to risk). These questions are typically thought to be independent – that our answer to one bears little on our answer to the other. …

The unexpected value of the future – Hayden Wilkinson (Global Priorities Institute, University of Oxford)

Various philosophers accept moral views that are impartial, additive, and risk-neutral with respect to betterness. But, if that risk neutrality is spelt out according to expected value theory alone, such views face a dire reductio ad absurdum. If the expected sum of value in humanity’s future is undefined—if, e.g., the probability distribution over possible values of the future resembles the Pasadena game, or a Cauchy distribution—then those views say that no real-world option is ever better than any other. And, as I argue…