Topics in Global Priorities Research (April 2019)

Course Instructors: William MacAskill; Christian Tarsney | Trinity term 2019

This course was offered to Oxford University graduate students studying philosophy. It covered global priorities research from the perspective of academic philosophy.

Before engaging with the readings for this course, you might find it helpful to familiarise yourself with the Foundational Issues of Effective Altruism. For a much longer reading list and list of topics, please see the Global Priorities Institute research agenda.

Course description

Suppose we want to know how we can do the most good with a given unit of resources (e.g., because we accept the dictum of effective altruism that we should be trying to do the most good with at least a significant fraction of our resources). The first question to ask, plausibly, is what causes we should direct our resources towards: what problems we should be trying to solve, or what opportunities we should be trying to exploit, in order to do the most good per unit of resource. Global priorities research aims to answer this question. It compares the case for investing in various high-value causes (like global public health, animal welfare, mitigating existential risks, or improving institutional decision-making) and broad categories of causes (like “short-term” vs. “long-term” causes) using tools from philosophy, economics, and an open-ended range of other traditional academic disciplines.

This seminar explores some of the central questions of global priorities research, with a central focus on the “longtermist paradigm” – very roughly, the view that, in most situations, if our aim is to do the most good, we should focus primarily on the effects of our present choices on the very distant future (thousands, millions, or billions of years from the present). The seminar considers the case for longtermism, then examines a number of worries and objections, and finally considers its practical implications (e.g., whether we should focus on minimizing existential risks, minimizing the risk of outcomes worse than extinction, bringing about long-lasting “trajectory changes”, etc.).