GLOBAL PRIORITIES INSTITUTE

Foundational academic research on how to do the most good.

UNIVERSITY OF OXFORD

The Global Priorities Institute is an interdisciplinary research centre at the University of Oxford.

Our aim is to conduct foundational research that informs the decision-making of individuals and institutions seeking to do as much good as possible. We use the tools of multiple academic disciplines, especially philosophy, economics and psychology, to explore the issues at stake.

We prioritise projects whose contributions are unlikely to be made in the normal run of academic research, and that speak directly to the most crucial considerations that such decision-makers must confront.

Input to UN Interim Report on Governing AI for Humanity

This document was written by Bradford Saad, with assistance from Andreas Mogensen and Jeff Sebo. Jakob Lohmar provided valuable research assistance. The document benefited from discussion with or feedback from Frankie Andersen-Wood, Adam Bales, Ondrej Bajgar, Thomas Houlden, Jojo Lee, Toby Ord, Teruji Thomas, Elliott Thornley and Eva Vivalt.

AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)

Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …

How much should governments pay to prevent catastrophes? Longtermism’s limited role – Carl Shulman (Advisor, Open Philanthropy) and Elliott Thornley (Global Priorities Institute, University of Oxford)

Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. …
