The Global Priorities Institute is an interdisciplinary research centre at the University of Oxford.
Our aim is to conduct foundational research that informs the decision-making of individuals and institutions seeking to do as much good as possible. We use the tools of multiple academic disciplines, especially philosophy, economics and psychology, to explore the issues at stake.
We prioritise projects whose contributions are unlikely to be made by the normal run of academic research, and that speak directly to the most crucial considerations such actors must confront.
Papers
In Defence of Moderation – Jacob Barrett (Vanderbilt University)
A decision theory is fanatical if it says that, for any guarantee of some finite amount of value, it would always be better to almost certainly get nothing while having some tiny probability (no matter how small) of getting a sufficiently large finite amount of value. Fanaticism is extremely counterintuitive; common sense requires a more moderate view. However, a recent slew of arguments purports to vindicate it, claiming that moderate alternatives to fanaticism are sometimes similarly counterintuitive, face a powerful continuum argument…
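One way to state the definition sketched above formally (the notation is ours, not the paper's):

```latex
% Fanaticism, stated schematically (our notation): for any sure finite
% payoff v and any probability p, however small, some finite prize V makes
% the long-shot lottery strictly preferred to the sure thing.
\[
\forall v > 0,\ \forall p \in (0,1],\ \exists V < \infty :\quad
\bigl(V \text{ with probability } p,\; 0 \text{ otherwise}\bigr)
\;\succ\; \bigl(v \text{ for certain}\bigr).
\]
```

Expected-value maximisation satisfies this schema whenever pV > v, i.e. for any V > v/p, which is why it is the standard example of a fanatical theory.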
Measuring AI-Driven Risk with Stock Prices – Susana Campos-Martins (Global Priorities Institute, University of Oxford)
We propose an empirical approach to identify and measure AI-driven shocks based on the co-movements of relevant financial asset prices. For that purpose, we first calculate the common volatility of the share prices of major US AI-relevant companies. Then we separate the events that shake only this industry from those that shake all sectors of economic activity at the same time. For the sample analysed, AI shocks are identified when there are announcements about (mergers and) acquisitions in the AI industry, launching of…
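As a rough illustration of the kind of decomposition described above, the sketch below flags days when volatility is elevated across AI-relevant stocks but not in the broad market. It uses synthetic data; the ticker stand-ins, rolling window, and thresholds are our assumptions rather than the paper's methodology:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic daily returns: a handful of AI-relevant tickers plus a broad
# market index (stand-ins only; the paper works with actual US share prices).
dates = pd.bdate_range("2023-01-02", periods=500)
ai_tickers = ["AI_1", "AI_2", "AI_3", "AI_4"]
returns = pd.DataFrame(rng.normal(0, 0.02, (len(dates), len(ai_tickers))),
                       index=dates, columns=ai_tickers)
market = pd.Series(rng.normal(0, 0.01, len(dates)), index=dates, name="MKT")

# "Common volatility" proxy: cross-sectional average of rolling volatilities
# of the AI names (the paper's construction is more sophisticated).
window = 21
ai_vol = returns.rolling(window).std().mean(axis=1)
mkt_vol = market.rolling(window).std()

# Isolate AI-specific shocks: days when AI volatility is unusually high
# relative to its own history while market-wide volatility is not elevated.
ai_spike = ai_vol > ai_vol.quantile(0.95)
mkt_calm = mkt_vol < mkt_vol.quantile(0.75)
ai_specific_shocks = ai_vol[ai_spike & mkt_calm]

print(ai_specific_shocks.head())
```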
Imperfect Recall and AI Delegation – Eric Olav Chen (Global Priorities Institute, University of Oxford), Alexis Ghersengorin (Global Priorities Institute, University of Oxford) and Sami Petersen (Department of Economics, University of Oxford)
A principal wants to deploy an artificial intelligence (AI) system to perform some task. But the AI may be misaligned and aim to pursue a conflicting objective. The principal cannot restrict its options or deliver punishments. Instead, the principal is endowed with the ability to impose imperfect recall on the agent. The principal can then simulate the task and obscure whether it is real or part of a test. This allows the principal to screen misaligned AIs during testing and discipline their behaviour in deployment. By increasing the…
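A toy calculation of the screening logic described above; the memoryless-misbehaviour assumption and all parameters are our own simplifications rather than the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def undetected_misbehaviour(defect_rate: float, n_tests: int,
                            n_trials: int = 100_000) -> float:
    """Estimate the probability that a misaligned agent passes every
    simulated test episode and then misbehaves in the real episode.

    With imperfect recall the agent cannot tell which episode is real,
    so here it misbehaves independently at the same rate in every episode.
    """
    tests = rng.random((n_trials, n_tests)) < defect_rate
    deployment = rng.random(n_trials) < defect_rate
    passed_all_tests = ~tests.any(axis=1)
    return float((passed_all_tests & deployment).mean())

for k in (0, 1, 5, 20):
    p = undetected_misbehaviour(defect_rate=0.3, n_tests=k)
    print(f"{k:2d} test episodes -> undetected misbehaviour ~ {p:.3f}")
```

In this simplified setting the chance of undetected misbehaviour is q(1-q)^k for misbehaviour rate q and k indistinguishable test episodes, so it falls rapidly as the number of simulated episodes grows; the paper develops the strategic version of this idea.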