6th Oxford Workshop on Global Priorities Research

17-19 March 2021, online

Topic

Global priorities research investigates the question, ‘What should we do with our limited resources, if our goal is to do the most good?’ This question has close connections with central issues in philosophy and economics, among other fields.

This event will focus on philosophical aspects of the longtermism paradigm. Longtermism claims that, because of the potential vastness of the future of sentient life, agents aiming to do the most good should focus on improving the very long-run future, rather than on more immediate considerations. We are interested in articulating this longtermist thesis, considering arguments for and against it, and exploring its implications. This topic is elaborated in more detail in Section 1 of GPI’s research agenda.

Agenda

Wednesday 17 March - Session A

14:30 - Arrival and informal networking
15:00 - Hilary Greaves, Welcome and overview of GPI
15:20 - Keynote address: Lara Buchak, “Risk and ambiguity in ethical decision-making”
16:20 - Coffee break and informal networking
16:40 - Loren Fryxell, “A lexicographic expected utility representation for infinite ethics”
17:35 - Heather Browning, “Longtermism and animals”
18:00 - Informal networking

Wednesday 17 March - Session B
Thursday 18 March - Session A
Thursday 18 March - Session B
Friday 19 March

Talk abstracts

Wednesday 17 March - Keynote Speech

15:20 - Lara Buchak, “Risk and ambiguity in ethical decision-making”

This talk concerns the ethical principles that should govern our choices for others under conditions of risk and conditions of ambiguity.  After distinguishing attitudes towards risk from attitudes towards ambiguity, I will argue that we should be risk-avoidant and ambiguity-neutral when making choices for others, unless we know their actual attitudes.
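As background on what ‘risk-avoidant’ can mean formally (an illustrative model only; the talk may rely on a different framework), Buchak-style risk-weighted expected utility evaluates a gamble g with outcomes ordered from worst to best, x_1 ≤ … ≤ x_n, occurring with probabilities p_1, …, p_n, as

\[
\mathrm{REU}(g) \;=\; u(x_1) \;+\; \sum_{j=2}^{n} r\!\Big(\textstyle\sum_{i=j}^{n} p_i\Big)\,\big(u(x_j) - u(x_{j-1})\big),
\]

where the risk function r: [0,1] → [0,1] is increasing with r(0) = 0 and r(1) = 1. A convex risk function (for example r(p) = p²) underweights improvements that arrive only with small probability, which is one standard way of modelling risk-avoidance; r(p) = p recovers ordinary expected utility.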

Wednesday 17 March - Session A

16:40 - Loren Fryxell, “A lexicographic expected utility representation for infinite ethics”

Aggregative consequentialist theories suffer from infinite paralysis—if there is any positive probability that the world contains infinite moral value, and individual actions can only cause a finite change in value, then we should be morally indifferent between all actions (the expected aggregate moral value is infinite or undefined for all actions). Moreover, classical expected utility theory (von Neumann-Morgenstern) does not apply in such environments: the continuity axiom prohibits moral preferences from treating any outcome as “infinitely better” than another. I generalize the von Neumann-Morgenstern theory to require continuity only in a subset of cases, allowing moral preferences to view some outcomes as infinitely better than others. The axioms characterize a lexicographic expected utility representation. The resulting theory does not suffer from infinite paralysis—indeed, for any set of lotteries with the same probability of infinitely good or bad outcomes, the preference is represented by a (finite) von Neumann-Morgenstern expected utility function.
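In rough outline (a schematic illustration, not necessarily the exact representation in the paper), a lexicographic expected utility representation ranks lotteries by a vector of expected utilities compared coordinate by coordinate:

\[
p \succsim q \;\iff\; \big(\mathbb{E}_p[u_1], \dots, \mathbb{E}_p[u_K]\big) \;\ge_{\mathrm{lex}}\; \big(\mathbb{E}_q[u_1], \dots, \mathbb{E}_q[u_K]\big),
\]

where \(\ge_{\mathrm{lex}}\) compares the first coordinates and consults later coordinates only to break ties. Any gain in an earlier coordinate therefore outweighs every possible gain in later ones, which is how some outcomes can count as infinitely better than others while each coordinate remains an ordinary (finite) expected utility.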

17:35 - Heather Browning, “Longtermism and animals”

Work on longtermism has thus far primarily focussed on the existence and wellbeing of future humans, without corresponding consideration of animal welfare. Given the sheer expected number of future animals, as well as the likelihood of their suffering, I argue that equal, if not greater, consideration should be given to the welfare of animals in the long-term future and discuss some potential interventions and areas of research focus that are likely to have the greatest impact.

Wednesday 17 March - Session B

16:40 - Andreas Mogensen, “Tough enough? Robust satisficing as a decision norm for long-term policy analysis”

This paper aims to open a dialogue between philosophers working in decision theory and operations researchers and engineers whose research addresses the topic of decision making under deep uncertainty. Specifically, we assess the recommendation to follow a norm of robust satisficing when making decisions under deep uncertainty in the context of decision analyses that rely on the tools of Robust Decision Making developed by Robert Lempert and colleagues at RAND. We discuss decision-theoretic and voting-theoretic motivations for robust satisficing, then use these motivations to select among candidate formulations of the robust satisficing norm. We also discuss two challenges for robust satisficing: whether the norm might in fact derive its plausibility from an implicit appeal to probabilistic representations of uncertainty of the kind that deep uncertainty is supposed to preclude; and whether there is adequate justification for adopting a satisficing norm, as opposed to an optimizing norm that is sensitive to considerations of robustness.
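For orientation (a simplified sketch of one candidate formulation, not necessarily the one the paper settles on): suppose deep uncertainty is represented by a set S of plausible states with no probabilities attached, v(a, s) is the value of option a in state s, and t is an aspiration level. A robust satisficer then picks an option that clears the threshold across as wide a range of the plausible states as possible:

\[
a^* \;\in\; \arg\max_{a}\; \big|\{\, s \in S : v(a, s) \ge t \,\}\big| \qquad (\text{assuming } S \text{ is finite}),
\]

whereas an optimizing norm sensitive to robustness would instead maximize some robustness-adjusted measure of v itself.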

17:20 - Signe Savén, “A rough case for longtermism”

Like other theories that hold that consequences play an important part in determining the value of actions, ex ante longtermism is vulnerable to the cluelessness objection, i.e. that we are (virtually) clueless about the effects of different actions, and thus (virtually) clueless about their value. In this talk, I discuss a rough argument for ex ante longtermism that aims to find a way around this objection. Drawing on Hilary Greaves’s distinction between simple and complex cluelessness, the argument grants that we are clueless about the vast majority of the effects of our actions, yet makes the case that we are not entirely clueless about which actions are ex ante best. In short, simple cluelessness can simply be left out of the equation. Complex cluelessness cannot, but in some cases this seems not to matter: there appears to be sufficient reason to believe that the ex ante value of the effects in question, whatever credence we assign to them, is far outweighed by predictable long-term effects of very high value.

Thursday 18 March - Session A

15:00 - Jeff Russell, “Pascalian wagers”

I plan to survey some recent work and open questions concerning puzzles in decision theory and ethics that involve gambles over unbounded or infinite values, with applications to existential risk. Infinite values are perfectly coherent. There are also strong pressures pushing us from large finite values to infinite values, and towards accepting Pascalian wagers. Some of the strongest arguments that some outcomes do have large finite values are based on aggregation over large populations or long intervals of time. Such aggregative principles face a variety of “infinity problems”.
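To illustrate the basic Pascalian structure (a schematic example, not taken from the talk): if an act has any positive probability p of securing an infinitely good outcome, its expected value is infinite no matter how small p is,

\[
\mathbb{E}[u] \;=\; p \cdot \infty \;+\; (1 - p)\, c \;=\; \infty \qquad \text{for any } p > 0 \text{ and any finite } c,
\]

so in standard expected utility terms it swamps every act whose possible outcomes are all finite, however improbable the infinite payoff.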

16:00 - Lucius Caviola, “An empirical investigation of population ethical intuitions”

In a series of eight studies (N = 4,374), we empirically investigated laypeople’s population ethical intuitions. First, we found that people place greater relative weight on, and are more sensitive to, suffering compared to happiness. Second, we found that—in contrast to so-called person-affecting views—people do not consider the creation of new people as morally neutral. Participants considered it good to create a new happy person and bad to create a new unhappy person. Third, we found that people take into account both the average level (averagism) and the total level (totalism) of happiness when evaluating populations. However, when participants were prompted to reflect, as opposed to relying on their intuitions, their preferences became more totalist.
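As a schematic illustration of the two standards (not an example used in the studies): compare population A, with 10 people each at welfare level 8, to population B, with 100 people each at welfare level 2.

\[
\text{Total welfare: } 10 \times 8 = 80 \;\;\text{vs.}\;\; 100 \times 2 = 200; \qquad
\text{Average welfare: } 8 \;\;\text{vs.}\;\; 2.
\]

Totalism ranks B above A, averagism ranks A above B, and a judgement sensitive to both considerations will fall somewhere in between.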

Thursday 18 March - Session B

15:00 - Emma Curran, “Longtermism, aggregation and catastrophic risk”

A growing number of philosophers advocate focusing our philanthropic efforts on improving the prospects of those living in the very distant future. Advocates of longtermism point out that, in expectation, long-term interventions bring about more good than their short-term counterparts, which focus on improving the well-being of those living now or in the near future. In this paper, I aim to show that longtermism is in tension with plausible non-consequentialist scepticism about aggregation. I do so by demonstrating that, from both an ex ante and an ex post perspective, it is difficult to prefer long-term interventions, including those which seek to mitigate global catastrophic risk, to short-term ones without permitting aggregation. Indeed, I claim that preferring long-term interventions requires us not only to permit some form of aggregation but also to permit its allegedly most morally suspect forms.

16:00 - Andreas Schmidt, “But what about future people? Individual freedom and long-term liberalism”

Longtermism seems to make heavy demands on people alive today. Is this compatible with a broadly liberal picture on which individuals ought to have wide-ranging freedom to do as they please? A common longtermist response to such challenges is that whatever non-welfarist good you care about, you should be scope-sensitive, which typically implies you should care about the long-term future. Can we make the same argument for individual sociopolitical freedom: should we care about future people’s freedom and, if so, does this take us back to longtermism? And how does this relate to the freedom of present people: for example, if Nick Bostrom is right and human existence is in constant existential vulnerability, does this justify severe restrictions on our personal freedom? Curiously, political philosophers have mostly ignored the question of how we should factor in the freedom of future generations. My aim is to make a start on answering this question. I explore how including future generations affects theories of freedom and, conversely, how such theories affect debates around longtermism. Among other things, I argue that freedom provides a somewhat independent justification of longtermism, that long-term freedom in theory justifies significant restrictions of our freedom now, and that most existential risk reduction ‘by definition’ benefits people alive today by increasing their freedom. Throughout, I focus on liberal theories of freedom. If time permits, I will add some thoughts on how the conclusions might change if one adopts so-called neo-republican theories of freedom instead.

Attending the workshop

Applications to attend the 6th Oxford Workshop on Global Priorities Research have now closed.

If you are interested in presenting at future similar workshops, please email [email protected] with an outline of your proposed topic.

Schedules of previous workshops can be found at the following links: