11th Oxford Workshop on Global Priorities Research

5-6 December 2022, University of Oxford

Topic

Global priorities research investigates the question, ‘What should we do with our limited resources, if our goal is to do the most good?’ This question has close connections with central issues in philosophy and economics, among other fields.

This event will focus on the following two areas of global priorities research:

  1. The longtermism paradigm. Longtermism claims that, because of the potential vastness of the future of sentient life, agents aiming to do the most good should focus on improving the very long-run future, rather than on more immediate considerations. We are interested in articulating this longtermist thesis, considering arguments for and against it, and exploring its implications.
  2. General issues in cause prioritisation. We will also host talks on various cause prioritisation issues not specific to longtermism - for instance, issues in decision theory, epistemology, game theory, and optimal timing/optimal stopping theory.

These two categories of topics are elaborated in more detail in Sections 1 and 2 (respectively) of GPI’s research agenda.

Agenda

Day 1, Monday 5 December

9:15 - Registration, coffee and snacks
10:00 - Rossa O’Keeffe-O’Donovan, Introduction to GPI
10:15 - Paolo G. Piacquadio, “Intergenerational Population Ethics”
11:05 - Julian Jamison, “Empirical Facts about Normative Values”
11:40 - Break
12:10 - Hayden Wilkinson, “The impact of research and research of impact”
13:00 - Oliver Hauser, “An Economics & Interdisciplinary Research Agenda towards a Long-term Future of Humanity”
13:35 - Lunch
14:35 - Benjamin Enke, “Moral Universalism: Global Evidence”
15:10 - Pauline Vorjohann, “Fairness-based altruism”
15:45 - Break
16:15 - Richard Yetter Chappell, “Moral Importance” (link to recording)
17:40 - End of session

Day 2, Tuesday 6 December

10:00 - Kevin Kuruc, “Scale effects and speeding up history”
10:35 - Jonathan Birch, “Artificial sentience and the gaming problem”
11:55 - Vincent Conitzer, “AI Agents May Cooperate Better if They Don’t Resemble Us”
12:45 - Richard von Maydell, “Artificial Intelligence and its Effect on Competition and Factor Income Shares”
14:30 - Hilary Greaves, “Concepts of existential risk”
15:05 - William D’Alessandro, “Existential Risk in the Safest Distant Futures”
16:10 - Mattie Toma, “Inter-temporal Preferences and Political Investments: Evidence from India”
16:30 - H. Orri Stefánsson, “Identified person ‘bias’ as decreasing marginal value of chances”

Talk details

Monday 5 December, 2022

10:15 - Paolo G. Piacquadio, “Intergenerational Population Ethics”

In less than half a century, the world's human population has doubled in size to the current 8 billion. Yet, despite the importance of population change for growth, resource scarcity, and climate change, there is no “reasonable” welfare criterion to assess alternatives with different numbers of individuals. We reexamine the evaluation of intergenerational allocations with endogenous population dynamics. Our proposal identifies a new balance between the well-being and the size of generations, obtained by respecting the fertility preferences of parents. The axiomatic characterization singles out a simple and intuitive family of recursive welfare criteria, which generalize discounted utilitarianism and avoid controversial implications of existing criteria.
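
For orientation, the discounted-utilitarian baseline that these criteria generalize can be written recursively (a standard textbook form, not the authors' own notation):

$$
W_t \;=\; N_t\, u_t \;+\; \beta\, W_{t+1}, \qquad 0 < \beta < 1,
$$

where $N_t$ is the size of generation $t$, $u_t$ its average well-being, and $\beta$ the discount factor. As we read the abstract, the proposed family keeps this recursive structure but replaces the fixed product $N_t u_t$ with a trade-off between generation size and well-being that is disciplined by parents' fertility preferences.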

11:05 - Julian Jamison, “Empirical Facts about Normative Values”

Policy decisions often depend on normative tradeoffs without objectively correct answers, such as the relative [dis]value of death at different ages; the wellbeing neutral points of being dead and of nonexistence; weights on animal vs human welfare; and of course the [social] rate of pure time preference. Although careful survey evidence regarding people's expressed values is far from dispositive, it constitutes one useful input into practical decisions that must somehow be taken (and often defended). We provide several related results from a survey based on representative samples in Brazil, China, and the UK.

12:10 - Hayden Wilkinson, “The impact of research and research of impact”

Funding bodies and research councils often attempt to measure and compare the impact of philosophical research projects. Almost universally, they do this poorly. Perhaps it's inevitable that institutional attempts to measure such impact are counterproductive. But it may still seem an appropriate goal for individual philosophers to do research that (most) improves the world. In this paper, I investigate how impact-minded individuals can best assess and choose between potential research projects. I present a tentative case that such individuals should seek to maximise the expected moral value of information of the work they do. Applying this criterion, global priorities research (or, at least, much of it) does well, but it's not clear that topics within global priorities research do better than all alternatives.
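
In case it helps to fix ideas, the standard expected value of information compares the best decision made with and without the research finding, here read with moral value in place of utility (this gloss is ours, not the paper's):

$$
\mathrm{EVOI} \;=\; \mathbb{E}_x\!\left[\,\max_a \mathbb{E}[U \mid a, x]\,\right] \;-\; \max_a \mathbb{E}[U \mid a],
$$

where $x$ is the finding the project might deliver and $a$ ranges over the actions it could inform. On this criterion, research that could not change any downstream decision has zero value, however intrinsically interesting it may be.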

13:00 - Oliver Hauser, “An Economics & Interdisciplinary Research Agenda towards a Long-term Future of Humanity”

In this presentation, I will give an overview of my growing research agenda, which aims to contribute to building and sustaining the long-term future of humanity by combining lab/field experimental methods with theoretical modelling. Some projects will pursue fundamental research around long-term decision-making, while others will be applied and field-oriented in nature. I would be grateful for your feedback on these early-stage ideas and directions for this research agenda. While I expect economists to find this work relatable, I really hope to be challenged, advised and guided by questions from both economists and philosophers: this is the time to shape the direction of this research agenda and make sure I ask the right questions. I want to make sure this is truly interdisciplinary from the outset, and am grateful for your thoughts.

14:35 - Benjamin Enke, “Moral Universalism: Global Evidence”

This paper presents novel stylized facts about the global variation in universalism, leveraging nationally representative surveys across 60 countries (N=64,000). We find large variation in universalism within and across countries, which almost entirely reflects heterogeneity in people's moral views regarding how to treat different types of relationships. Universalism is strongly predictive of political views, civic engagement, and the radius of trust, and varies with the economic, political and religious organization of societies. We provide tentative evidence that experience with democracy makes people more universalist. Overall, our results suggest that moral universalism shapes and is shaped by politico-economic outcomes across the globe.

15:10 - Pauline Vorjohann, “Fairness-based altruism”

Why do people give when asked, but prefer not to be asked, and even take when possible? We introduce a novel analytical framework that allows us to express context dependence and narrow bracketing axiomatically. We then derive the utility representation of distributive preferences additionally obeying standard axioms such as separability and scaling invariance. Such preferences admit a generalized prospect-theoretical utility representation reminiscent of fairness-based altruism. As in prospect theory, the underlying preferences are reference dependent and non-convex, which directly predicts the previously irreconcilable empirical evidence on giving, sorting, and taking. We test the model quantitatively on data from seminal experiments and observe significantly improved fit relative to existing models, both in-sample and out-of-sample.

16:15 - Richard Yetter Chappell, “Moral Importance”

Moral philosophers have traditionally focused on the concepts of right and wrong, permissibility and obligation. I propose a reconceptualization of the field which instead gives primacy to the question of what is important. I argue that this clarifies central debates in the literature, and removes an unwarranted biasing effect that previously favored deontological approaches to ethics.

Tuesday 6 December, 2022

10:00 - Kevin Kuruc, “Scale effects and speeding up history”

The unprecedented phase of population decline that the world will soon enter raises important issues for long-run social welfare. The rate of population decline determines the number of people on the planet at any given time – an issue of relevance for most population axiologies – and, through this, influences the rate of economic (and perhaps moral) progress. This paper shows that, for reasonable parameterizations of economic growth models, changes in population growth are akin to "speeding up history." Whether this is valuable depends on how the model ends. If extinction risk is entirely man-made, population growth is neutral (it brings forward people and extinction proportionately). If there are natural components to existential risk, such that it has some time component, then speeding up history is valuable.
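
The neutrality claim lends itself to a deliberately crude sketch (our construction, assuming a fixed "length of history" in epochs; this is not Kuruc's actual model): each epoch of history delivers one unit of welfare, a speed factor compresses epochs into fewer calendar years, and extinction risk has a man-made component charged per epoch plus a natural component charged per calendar year.

# Toy illustration of "speeding up history" (not Kuruc's model).
# History is a fixed sequence of epochs, each worth one unit of welfare.
# A speed factor s compresses each epoch into 1/s calendar years.

def expected_welfare(speed, h_man, h_nat, epochs=1000):
    """Expected total welfare: man-made hazard h_man is incurred once per
    epoch; natural hazard h_nat is incurred per calendar year."""
    years_per_epoch = 1.0 / speed
    p_survive_epoch = (1.0 - h_man) * (1.0 - h_nat) ** years_per_epoch
    total, p_alive = 0.0, 1.0
    for _ in range(epochs):
        total += p_alive       # this epoch's welfare, weighted by survival
        p_alive *= p_survive_epoch
    return total

# Purely man-made risk: speeding up history is neutral, since people and
# extinction are brought forward proportionately.
print(expected_welfare(speed=1.0, h_man=0.01, h_nat=0.0))    # ~100.0
print(expected_welfare(speed=2.0, h_man=0.01, h_nat=0.0))    # ~100.0

# With a natural, purely time-based component, speeding up history is
# valuable: less calendar time is spent exposed to natural risk.
print(expected_welfare(speed=1.0, h_man=0.01, h_nat=0.005))  # ~66.9
print(expected_welfare(speed=2.0, h_man=0.01, h_nat=0.005))  # ~80.1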

10:35 - Jonathan Birch, “Artificial sentience and the gaming problem”

There is a reasonably well-established marker-based strategy for testing for sentience in animals, despite ongoing debate about which markers are most informative and why. Could the same type of strategy help us detect sentience in AI systems? The strategy seems most likely to be useful for "low intelligence" artificial agents, such as emulations of animal brains. In many current and future AI systems, the strategy is undermined by the "gaming problem": the problem of AI systems using corpuses of human-generated training data to mimic behaviours likely to persuade human users of their sentience. I will consider some possible ways around the gaming problem.

11:55 - Vincent Conitzer, “AI Agents May Cooperate Better if They Don't Resemble Us”

AI systems control an ever-growing part of our world. As a result, they will increasingly interact with each other directly, with little or no potential for human mediation. If each system stubbornly pursues its own objectives, this runs the risk of familiar game-theoretic tragedies – along the lines of the Tragedy of the Commons, the Prisoner’s Dilemma, or even the Traveler’s Dilemma – in which outcomes are reached that are far worse for every party than what could have been achieved cooperatively.

However, AI agents can be designed in ways that make them fundamentally unlike strategic human agents. This approach is often overlooked, as we are usually inspired by our own human condition in the design of AI agents. But I will argue that this approach has the potential to avoid the above tragedies in new ways. The price to pay for this, for us as researchers, is that many of our intuitions about game and decision theory, and even belief formation, start to fall short. I will discuss how foundational research from the philosophy and game theory literatures provides a good starting point for pursuing this approach.

This talk covers joint work with Caspar Oesterheld, Scott Emmons, Andrew Critch, Stuart Russell, Abram Demski, Yuan Deng, and Catherine Moon.

12:45 - Richard von Maydell, “Artificial Intelligence and its Effect on Competition and Factor Income Shares”

We examine the effect of Artificial Intelligence (AI) and its capacity for self-learning - it improves by being applied, tested, and trained - on market competition in industrial production. In our model, AI operates via three main channels. Firstly, AI is directly employed as an input factor. Secondly, AI improves the coordination, recruiting, and organization of input factors: every firm has a capacity limit for labor and capital that is relaxed as the firm incorporates more AI. Thirdly, firms can charge higher price markups with an increasing level of AI due to scalability effects. To incorporate AI in production, firms have to pay variable costs for software acquisition and fixed costs to build up AI infrastructure. These fixed costs may constitute a market barrier for firms with low productivity, and we examine the effect of the rise of AI on competition and factor income shares. Furthermore, we discuss policies that impede AI-induced monopolies and income divergence between heterogeneously skilled agents, and that enhance the long-term growth rate of AI. We emphasize that appropriate legislation for the commercial use of AI should be designed to share the benefits of AI broadly, to develop a competition-oriented early warning system that prevents a rise in market dominance, and to promote growth-enhancing corporate AI integration.

14:30 - Hilary Greaves, “Concepts of existential risk”

I will discuss various ways in which the concept of existential risk has been or could be defined, and consider which is best (for various purposes).

15:05 - William D'Alessandro, “Existential Risk in the Safest Distant Futures”

Even if you value your present and future welfare equally, you should prefer a million dollars now over a promise of a million dollars in 50 years, because there's a good chance you won't be around then to enjoy it. Similarly, even if we reject a discount rate on future utility, we should adjust our estimate of the value of longtermist interventions by an amount proportional to the chance that we'll have gone extinct before the benefits are realized. Indeed, as Christian Tarsney has shown, longtermist interventions may outweigh neartermist acts in expected value only if the rate of extinction risk is eventually almost zero. Is it? Some have suggested that it may be, provided we survive long enough to colonize the galaxy and develop aligned AGI. In this talk, I consider what sorts of extinction risks we might face even in this apparently safest of futures, and what this entails about the prospects for longtermism.
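
The discounting logic is easy to make explicit (illustrative numbers of our choosing, not from the talk): under a constant annual extinction risk $r$, a benefit $B$ realised $T$ years from now has expected value

$$
\mathbb{E}[V] \;=\; B\,(1-r)^T \;\approx\; B\,e^{-rT}.
$$

At $r = 0.1\%$ per year, a benefit 50 years out retains a factor of roughly $e^{-0.05} \approx 0.95$ of its value, but a benefit 10,000 years out retains only about $e^{-10} \approx 5 \times 10^{-5}$. Hence the expected value of very long-run interventions is dominated by scenarios in which $r$ eventually falls to (almost) zero.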

16:10 - Mattie Toma, “Inter-temporal Preferences and Political Investments: Evidence from India”

There are numerous structural, informational, and behavioral channels that may limit the weight political decision-makers place on long-run outcomes. We seek to run a large-scale field experiment among elected village leaders in India to shed light on these channels and, using these insights, identify interventions that shift policy-relevant time preferences. Time preference elicitations incentivized through real policy decisions will be used to estimate the extent to which politicians value long-run outcomes in decision-making. Similar elicitations will be conducted among the villagers themselves, to assess alignment in policy preferences between villagers and their leaders. Finally, we will randomly assign village leaders to interventions aimed at addressing barriers to longer-run decision-making – for instance, communicating villager preferences to politicians or increasing the transparency of policy decisions. We will study how the interventions influence the weight placed on longer-run policy outcomes. 

16:30 - H. Orri Stefánsson, “Identified person ‘bias’ as decreasing marginal value of chances”

Many philosophers think that we should use a lottery to decide who gets a good to which two persons have an equal claim but which only one person can get. Some philosophers (but not as many) think that we should save identified persons from harm even at the expense of saving a somewhat greater number of statistical persons from the same harm. I defend a principled way of justifying both judgements, namely, by appealing to the decreasing marginal moral value of survival chances. I identify four desiderata that, I contend, any such justification should satisfy, and explain how my account meets these desiderata, unlike some previous accounts. Finally, I compare my view to both ex ante and ex post versions of egalitarianism and prioritarianism, and show that my view is importantly different from each of the other views.