It Only Takes One: The Psychology of Unilateral Decisions
Joshua Lewis (New York University), Carter Allen (UC Berkeley), Christoph Winter (ITAM, Harvard University, and Institute for Law & AI), and Lucius Caviola (Global Priorities Institute, Oxford University)
GPI Working Paper No. 14-2024
Sometimes, one decision can guarantee that a risky event will happen. For instance, it only took one team of researchers to synthesize and publish the horsepox genome, thereby imposing its publication even though other researchers might have refrained for biosecurity reasons. We examine cases where everybody who can impose a given event has the same goal but different information about whether the event furthers that goal. Across 8 experiments (including scenario studies with elected policymakers, doctors, artificial-intelligence researchers, and lawyers and judges, and economic games with laypeople; N = 1,518; plus 3 supplemental studies, N = 847), people behave suboptimally, balancing two factors. First, people often impose events whose expected utility, based on the information available to them, is only slightly better than the alternative, even when others might know more. This approach is insufficiently cautious and leads people to impose too frequently, a pattern termed the unilateralist’s curse. Second, counteracting the first factor, people avoid sole responsibility for unexpectedly bad outcomes, sometimes declining to impose seemingly desirable events. The first tendency typically dominates, so people succumb to the unilateralist’s curse and impose too often. But when only a few people can impose the event and they know the stakes are high, responsibility aversion reduces over-imposing.
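The mechanism behind the curse is easy to see in a short simulation. The sketch below is not from the paper; the normal distributions, the zero threshold, and the pooled-evidence benchmark are illustrative assumptions chosen to make the structure visible. Each of several like-minded agents sees only a noisy private estimate of the event's true utility, and one imposer suffices for the event to happen.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_agents, noise_sd=1.0, n_trials=200_000):
    """Monte Carlo sketch of the unilateralist's curse (illustrative only).

    Each trial draws a true utility u for imposing the event (mean 0,
    so imposing is beneficial half the time). Every agent sees only a
    noisy private signal of u and shares the same goal: impose iff u > 0.
    """
    u = rng.normal(0.0, 1.0, n_trials)                       # true utility of the event
    signals = u[:, None] + rng.normal(0.0, noise_sd, (n_trials, n_agents))

    # Naive unilateral rule: each agent imposes whenever their own signal
    # is even slightly positive; a single imposer makes the event happen.
    any_imposes = (signals > 0).any(axis=1)

    # Pooled benchmark: impose only if the group's averaged evidence is
    # positive (what agents who shared their information would do).
    pooled_imposes = signals.mean(axis=1) > 0

    def report(imposed):
        rate = imposed.mean()                                # how often the event occurs
        realized = np.where(imposed, u, 0.0).mean()          # average utility per trial
        return rate, realized

    return report(any_imposes), report(pooled_imposes)

for n in (1, 3, 10):
    (uni_rate, uni_val), (pool_rate, pool_val) = simulate(n)
    print(f"{n:>2} agents | unilateral: imposed {uni_rate:5.1%}, value {uni_val:+.3f}"
          f" | pooled: imposed {pool_rate:5.1%}, value {pool_val:+.3f}")
```

With one agent the two rules coincide, but as the group grows the unilateral rule imposes almost every time and its average realized value falls toward zero, while the pooled rule keeps imposition selective and valuable: the curse in miniature.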
Other working papers
The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists – Elliott Thornley (Global Priorities Institute, University of Oxford)
I explain and motivate the shutdown problem: the problem of designing artificial agents that (1) shut down when a shutdown button is pressed, (2) don’t try to prevent or cause the pressing of the shutdown button, and (3) otherwise pursue goals competently. I prove three theorems that make the difficulty precise. These theorems suggest that agents satisfying some innocuous-seeming conditions will often try to prevent or cause the pressing of the shutdown button, even in cases where it’s costly to do so. I end by noting that…
Can an evidentialist be risk-averse? – Hayden Wilkinson (Global Priorities Institute, University of Oxford)
Two key questions of normative decision theory are: 1) whether the probabilities relevant to decision theory are evidential or causal; and 2) whether agents should be risk-neutral, and so maximise the expected value of the outcome, or instead risk-averse (or otherwise sensitive to risk). These questions are typically thought to be independent – that our answer to one bears little on our answer to the other. …
In search of a biological crux for AI consciousness – Bradford Saad (Global Priorities Institute, University of Oxford)
Whether AI systems could be conscious is often thought to turn on whether consciousness is closely linked to biology. The rough thought is that if consciousness is closely linked to biology, then AI consciousness is impossible, and if consciousness is not closely linked to biology, then AI consciousness is possible—or, at any rate, it’s more likely to be possible. A clearer specification of the kind of link between consciousness and biology that is crucial for the possibility of AI consciousness would help organize inquiry into…