2nd Oxford Workshop on Global Priorities Research

1-2 July 2019, Oxford University

This term's workshop is organised jointly with the Forethought Foundation for Global Priorities Research and includes contributions by their Global Priorities Fellows.

Topic

Global priorities research investigates issues that arise in response to the question, ‘What should we do with a given amount of limited resources if our aim is to do the most good?’ This question naturally draws upon central themes in the fields of economics and philosophy.

Thus defined, global priorities research is in principle a broad umbrella. This workshop will specifically focus on:

  1. The longtermism paradigm. This paradigm centres on the idea that, because of the potential vastness of the future portion of the history of sentient life, it may well be that the primary determinant of which actions are best is the effects of those actions on the very long-run future, rather than their more immediate effects. However, a number of thorny issues arise in the course of articulating, evaluating and drawing out the implications of a longtermist thesis.
  2. General issues in cause prioritisation. These are issues that are not specific to a longtermist point of view, but that arise for almost all agents engaged in an exercise of global prioritisation. They include, for instance, certain issues in decision theory, epistemology, game theory, and optimal timing and optimal stopping theory.

These two categories of topics are elaborated in more detail in Sections 1 and 2 (respectively) of GPI’s research agenda.

Venue

The workshop will take place in the Radcliffe Humanities Building, Woodstock Rd, Oxford.

Schedule

Monday 1 July - Morning Session

Time      Session
8.45am    Registration, Coffee and Snacks
9.25am    Hilary Greaves, Introduction to GPI
9.30am    Will MacAskill, “Prioritising among longtermist interventions”
10.10am   Johan Gustafsson, “Moral Uncertainty and Moral Progress”
10.40am   Christian Tarsney, “Can We Ignore Infinite Ethics?”
10.55am   Break
11.15am   Kine Josefine Aurland-Bredersen, “Expected utility, non-expected utility and catastrophic risk”
11.55am   Andreas Mogensen, “The only ethical argument for positive 𝛿'? Partiality as a justification for pure time preference”
12.25pm   Michael Wulfsohn, “How should policymakers weigh the costs and benefits of avoiding human extinction?”
12.55pm   Lunch

Monday 1 July - Afternoon Session

Time      Session A / Session B
2.00pm    Session A: Heather Browning, “Measuring Animal Suffering”
          Session B: Brian Jabarian, “The Normative Uncertainty Survey”
2.35pm    Session A: Zach Groff, “Does Suffering Dominate Enjoyment in the Animal Kingdom? An Update to Welfare Biology”
          Session B: Hayden Wilkinson, “Comparing infinite, chaotic outcomes”
3.10pm    Session A: Maximilian Negele, “Global Priorities Research and Moral Status”
          Session B: Nick Otis, “Generating interventions, forecasting their outcomes, and valuing their effects”
3.30pm    Break
3.50pm    Session A: Zoe Hitzig & Brian Jabarian, “Equity, Efficiency and Tax Deductions for Charity”
          Session B: Signe Saven, “Longtermism – causation, counterfactual difference-making and overdetermination”
4.25pm    Session A: Danny Bressler, “Integrated Assessment Modeling of Climate Change with Endogenous Mortality”
          Session B: Ben Grodeck, “Cooperating with future generations: An experimental investigation of altruism in identity-affecting decisions”
4.55pm    Break
5.15pm    Session A: Sven Neth, “Decision Theory for Bounded Agents”
          Session B: David Bernard, “Identifying long-run impacts without long-run data”
5.50pm    Session A: Phil Trammell, “Subsidizing Patience”
          Session B: Lewis Ho, “Catastrophe, timing and imperfect prediction”

Tuesday 2 July - Morning Session

Time      Session
8.45am    Coffee and Snacks
9.30am    Paolo Piacquadio, “The Ethics of Intergenerational Risk”
10.10am   Orri Stefansson, “On the Limits of the Precautionary Principle”
10.50am   Break
11.10am   David Thorstad, “Procedural rationality for longtermists”
11.50am   John Weymark, “Demographic Issues in Formal Models of Population Axiologies”
12.20pm   Speed talks
1.00pm    Lunch

Tuesday 2nd July - Afternoon Session

Time      Session
2.00pm    Olle Haggstrom, “Challenges to the Omohundro—Bostrom framework for AI motivations”
2.40pm    Lucius Caviola, “How bad is human extinction? The psychology of existential risk”
3.10pm    Break
3.30pm    Karim Jebari, “Resetting the tape of history: what can we infer about history from instances of convergent cultural evolution?”
4.10pm    Teru Thomas, “Doomsday and Objective Chance”
4.40pm    Break
5.00pm    Tomi Francis, “The Procreation Asymmetry, Different-Number Incomparability, and the Long Term”
5.30pm    Roger Crisp, “Pessimism about the Future”
6.00pm    End of workshop

(The agenda of the 1st Oxford Workshop on Global Priorities Research, which was focussed on longtermism, can be found here.)

Talk Abstracts - Mon 1 July, Morning Session

Will MacAskill, “Prioritising among longtermist interventions”

There are potentially a number of ways of influencing the very long-term future, including speeding up economic growth, reducing extinction risk, and improving civilisation’s trajectory conditional on survival. In this talk I distinguish between these different activities, offer a simple model to make this classification more precise, and ask: from an impartial, long-term perspective, how do these approaches compare in their cost-effectiveness?

Johan Gustafsson, “Moral Uncertainty and Moral Progress”

When one takes long-term effects into account, it makes sense to consider not just moral uncertainty about the value of the potential long-term effects but also moral progress (that is, the fact that our credences in moral theories will change) on the way to those effects. I argue that the main proposals in the literature on moral uncertainty that do not rely on intertheoretic comparisons of value are vulnerable to a new sort of value pump in cases where there will, predictably, be moral progress (in some currently unknown direction). The value pump against these approaches is a possible situation in which, in order to adhere to these approaches, one is forced to follow a plan that has, according to all moral theories in which one has some credence, a worse expectation than some other available plan. I argue that My Favourite Theory is vulnerable to this kind of value pump. Moreover, the argument generalizes to other approaches that avoid intertheoretic comparisons of value, such as My Favourite Option, the Borda Rule, and the Principle of Maximizing Expected Moral Value.

Christian Tarsney, “Can We Ignore Infinite Ethics?”

Despite ongoing efforts in philosophy and economics, we have yet to find satisfactory methods for extending finite axiological and ethical theories to the context of infinite worlds. Since the actual world is very probably infinite, this seems worrisome - it could turn out that infinite ethics will radically change our practical conclusions about how to do the most good. In this short talk, I offer one limited note of reassurance: Though the world is very probably infinite, the part of the world that we can affect (or at least, that we can predictably affect) is very probably finite. And in this case, many views in infinite ethics imply that we can reach correct decisions by applying finite ethical criteria to the part of the world we can (predictably) affect. So, although nothing is guaranteed, it's reasonable to hope that we can keep using our preferred finite ethical tools for practical purposes without being led too badly astray.

Kine Josefine Aurland-Bredersen, “Expected utility, non-expected utility and catastrophic risk”

Expected utility theory is criticized for having a narrow representation of risk preferences and rationality, especially in the face of catastrophic risk. Since the normative criterion in welfare economics is preference satisfaction, using expected utility for normative or prescriptive assessment can be problematic. This paper compares how expected utility and non-expected utility frameworks rank probability distributions that differ in expected value and catastrophic risk. The results show that the ranking depends more on small changes in the parameters of the utility function than on the choice of framework. From a policy perspective, this implies it may be more fruitful to focus on finding the right level of risk aversion for expected utility analysis than on resolving the uncertainty about which framework to use. For all frameworks, the higher catastrophic risk is, the less sensitive individuals are to changes in catastrophic risk.

Andreas Mogensen, “The only ethical argument for positive 𝛿'? Partiality as a justification for pure time preference”

I consider whether a positive rate of pure time preference is justifiable in terms of agent-relative moral reasons pertaining to partiality between generations, an idea I call discounting for kinship. I respond to Parfit's objections to discounting for kinship, and then highlight a number of additional problems that follow on naturally from Parfit's discussion. I show that there exists an apparently persuasive answer to each of these challenges if we conceive of climate change obligations as shared by the current generation as a whole, a view I call global collectivism. However, I point to a number of important difficulties facing any attempt to make global collectivism suitably rigorous and precise.

Michael Wulfsohn, “How should policymakers weigh the costs and benefits of avoiding human extinction?”

When policymakers assess projects for public funding, they may be required to stick to mainstream economic methods. That might mean using the Ramsey equation for discounting. However, for projects that reduce the risk of human extinction, the Ramsey equation is inadequate. For example, under its assumptions, zero consumption gives infinitely negative utility. Is there an alternative that is still sufficiently mainstream? I enhance the underlying model by adding an endogenous probability of extinction. The result is a stronger theoretical foundation for assessing such projects. The model ties together the optimal level of human extinction risk mitigation, its marginal cost, the value of a statistical life, the role of time preference, and the role of population growth.
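For readers unfamiliar with the framework the abstract refers to, here is a brief sketch in the standard notation of the discounting literature (not taken from the paper itself). The Ramsey equation sets the consumption discount rate as

r = δ + ηg,

where δ is the rate of pure time preference, η is the elasticity of marginal utility of consumption, and g is the growth rate of consumption. The underlying model typically assumes isoelastic utility,

u(c) = c^(1−η) / (1−η),

and for η > 1 this tends to −∞ as consumption c approaches zero. This is the sense in which, under the standard assumptions, zero consumption yields infinitely negative utility, and hence why the unmodified framework handles extinction-level outcomes poorly.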

Talk Abstracts - Mon 1 July, Afternoon Session

Heather Browning, “Measuring Animal Suffering”

When making decisions about action to improve animal lives, it is important that we have accurate estimates of animal suffering under different conditions. The current frameworks for making comparative estimates of suffering generally multiply the number of animals used by the length of their lives and the amount of suffering they experience. However, the numbers used to quantify suffering are usually generated through unreliable and subjective processes. I look at how we might apply more principled methods to arrive at more accurate scores, which will then help us in making the best decisions for animals.

Brian Jabarian, “The Normative Uncertainty Survey”

The aim of the Normative Uncertainty Survey is to elicit normative uncertainty over first-order attitudes and hypothetical policy choices in a representative sample of the US population. Our main motivation is to provide empirical evidence on normative uncertainty. This is crucial for the future of normative uncertainty in economics and the social sciences, and for its operational implementation in public decision-making. The NUS has two main aims: first, to test different potential empirical measures of the extent of someone's normative uncertainty; second, to provide empirical evidence on how normative uncertainty relates to different demographics. In this talk, I will present the design of the NUS and preliminary results from pilot tests and online randomized samples.

Zach Groff, “Does Suffering Dominate Enjoyment in the Animal Kingdom? An Update to Welfare Biology”

A 1995 paper by Yew-Kwang Ng models the evolutionary dynamics underlying the existence of suffering and enjoyment and concludes that there is likely to be more suffering than enjoyment in nature, an influential result among wild animal welfare advocates and researchers. In a new paper, we find an error in Ng's model that, when fixed, negates the original conclusion. Instead, the model only offers ambiguity as to whether suffering or enjoyment predominates in nature. The paper also lays the groundwork for a future research agenda on wild animal welfare, one that could yield evidence on the underpinnings of affective states more generally and how natural selection can shape them over the long run.

Hayden Wilkinson, “Comparing infinite, chaotic outcomes”

Longtermism is often justified by an (at least implicit) appeal to an aggregative, total view of betterness which is impartial over times. But such a view faces the problem of cluelessness. Our universe is physically chaotic, so we are clueless about the long-term effects of our actions. We can never be confident that an act produces a better outcome than another.

This problem of cluelessness is well known, and is solvable - we may not know which outcome will be better, but we know that one of them will be. We can still take expected values and, as it turns out, retain many of our subjective normative judgements. But chaos poses a greater problem if our universe is infinite. Our future likely contains an infinite population, and infinite total moral value. This means that standard aggregation doesn’t work for comparing outcomes. Instead, we must use some method of infinite aggregation. In this paper, I apply our existing methods to realistic cases, in which outcomes are also chaotic. Unfortunately, our methods all fail. They don’t just imply that we should be uncertain about which outcomes will be better - none of them will be. In cases of chaos, our current methods guarantee that outcomes are incomparable. And we cannot resort to subjective normativity - gambles over these incomparable outcomes will be incomparable as well. So we have no justification for longtermism, or for any betterness claims in practice. I show that we can avoid this conclusion, but only by abandoning some dearly held moral principles.

Zoe Hitzig & Brian Jabarian, “Equity, Efficiency and Tax Deductions for Charity”

Many tax and transfer systems in developed countries offer deductions for charitable giving. From a public finance perspective, there are two leading rationales for such a feature of the tax code: one deriving from concerns about equity, and the other from concerns about efficiency. First, the government would like to encourage choices that have positive externalities, and so it effectively subsidizes charitable giving in order to incentivize taxpayers to give more away. If charitable giving is conceived as a form of redistribution, then this incentive serves the government’s concern for equity among citizens. Second, there is an efficiency rationale. There are limits to the information a centralized government can amass about the set of public goods desired by its citizens; decentralized donations to charitable organizations fund public goods that directly respond to the preferences of those citizens, and so it serves the overall efficiency of the tax system to subsidize such giving. This paper systematically investigates the extent to which tax deductions for charitable giving serve the efficiency and equity of a tax system. We treat tax deductions for charity through the lens of optimal tax theory, presenting a model closely related to those of Saez (2004) and Saez and Stantcheva (2016). We complement our theoretical analyses with explorations of charitable giving data in the United States and online surveys that elicit social preferences.

Signe Savén, “Longtermism – causation, counterfactual difference-making and overdetermination”

Roughly put, longtermism is the view that the primary determinant of the value-difference between our actions is their long-term consequences. This explication of longtermism raises the question of how ‘consequences’ should be understood in this context. The term could be used in a purely causal sense, such that X is a causal consequence of action A if A caused the realisation of X, or in a counterfactual sense, such that X is a counterfactual consequence of action A if A caused the realisation of X and X would not have been realised had it not been for A.

Intuitively, supposing that X is a valuable outcome, doing A seems to be of no value if X would have been realised (at the same time and in the same way) by some other action, and be of value if X would not have been realised, had it not been for A. That is, mere causation seems to be insufficient to hold an action to be valuable, but counterfactual difference-making seems to be sufficient for this. Therefore, the counterfactual understanding of consequences is intuitively more plausible than the purely causal understanding with regard to attributing value to an action with respect to its consequences.

However, the counterfactual understanding faces problems in cases of overdetermination, because in such cases none of the actions makes a counterfactual difference to the outcome. Thus, in cases of overdetermination, there are no counterfactual consequences, merely causal ones. How is value to be attributed to actions in these cases? And what are the implications of this for longtermism? In this talk, I explore these questions and evaluate potential answers to them.

Maximilian Negele, “Global Priorities Research and Moral Status”

When thinking about the long-term future, we care about morally relevant beings. At the moment, the two most salient examples of morally relevant beings are humans and non-human animals. In the future there might exist additional morally relevant beings in the form of machine intelligence, and the structure of human minds might change through enhancement. Hence, there might exist mental states that we cannot comprehend and whose moral significance we cannot evaluate at the present time. Reviewing some relevant questions in consciousness research and the philosophy of moral status, I argue that Global Priorities Research ought to take this possibility into account. I further make the case for a greater involvement of philosophy of mind and neuroethics in Global Priorities Research.

Nick Otis, “Generating interventions, forecasting their outcomes, and valuing their effects”

This presentation reviews three ongoing empirical projects. The first examines who can accurately forecast the causal effects of interventions, and whether accuracy and calibration can be improved through decision tools or aggregation. This research can inform decisions made under incomplete information. The second project explores whether tools from mechanism-design theory can be leveraged to design and select effective interventions. The third project examines how to value and compare policies affecting different outcome domains.

Danny Bressler, “Integrated Assessment Modeling of Climate Change with Endogenous Mortality”

A large body of recent empirical literature has suggested that global warming is likely to have significant mortality effects, including impacts on human health, interpersonal violence, and war. Despite this, the effect of global warming on human population levels is not currently incorporated into integrated assessment models (IAMs) that assess the welfare impacts of climate change. To the limited extent that IAMs have included climate mortality impacts, they have accounted for them as damage to economic output levels. In this paper, I explicitly account for the effect of climate on mortality and population levels using mortality response estimates from the empirical literature. I create an extension to the DICE-2016 model called DICE-EMR (Dynamic Integrated Climate-Economy model with an Endogenous Mortality Response). I find that explicitly accounting for climate mortality costs triples the welfare costs of climate change. More broadly, IAMs are potentially good tools for assessing other global catastrophic risks like nuclear war and pandemics. However, they are currently limited in their ability to deal with phenomena that have large mortality effects and subsequent changes in population levels. This paper develops a methodology that explicitly addresses this issue. In addition, this methodology can be used to determine how welfare costs vary with different interpretations of population ethics, such as person-affecting vs. non-person-affecting views.
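To make the modelling idea concrete, here is a schematic and deliberately simplified version of the kind of objective DICE-style models maximize; the notation is illustrative and not taken from DICE-EMR itself. DICE maximizes (roughly) a population-weighted, discounted sum of per-capita utility,

W = Σ_t L_t · u(c_t) · (1 + ρ)^(−t),

where L_t is population, c_t is per-capita consumption, and ρ is the rate of pure time preference. Treating climate mortality as output damage leaves L_t on its exogenous path; an endogenous mortality response instead lets population evolve as something like

L_(t+1) = L_t · (1 + b_t − m_t(T_t)),

with the mortality rate m_t depending on temperature T_t, so that climate change directly reduces the number of people whose utility enters W.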

Ben Grodeck, “Cooperating with future generations: An experimental investigation of altruism in identity-affecting decisions”

Future-oriented policies not only determine the welfare of future generations, but also the identities of the individuals who come to exist. These future-affecting policies generate social dilemmas that make cooperation especially difficult because their potential deleterious effects are remote in time, so the parties who require our cooperation do not yet exist. Over the next 100 years, myriad chance events will affect which particular individuals are conceived; hence, for any large-scale future-oriented policy adopted now, had we adopted a different policy, a different population of individuals would exist in the future. Typical moral principles, applied to such decisions, yield paradoxical results, because it is arguable that no one alive in 100 years will have been harmed by our present actions, as the alternative “better” policy would have resulted in those individuals not existing. This puzzle has given rise to extensive philosophical theorizing (e.g. Parfit 1984) and some discussion in policy circles (e.g. IPCC 2014, chapter 3), but minimal investigation of economic behaviour.

In this study, we conduct an incentivized laboratory experiment to investigate what principles individuals use to guide decision-making in identity-affecting contexts. Using a dictator game, we have subjects make decisions that affect both the endowment and the identity of future recipients. We find that rates of generosity are significantly lower in the identity-affecting task than in an ordinary dictator game with a single possible recipient. We further investigate whether this behaviour is due to subjects employing different moral principles in identity-affecting contexts, or whether they are instead exploiting a context that minimizes the social-image costs of non-normative behaviour. We find evidence that the lack of generosity is largely explained by the choices of excuse-driven types, rather than by a difference in moral attitudes.

Sven Neth, “Decision Theory for Bounded Agents”

How can we incorporate the cost of reasoning into decision theory? After motivating why this question matters, I sketch some ideas for how to construct a decision theory for resource-bounded agents.

David Bernard, “Identifying long-run impacts without long-run data”

Consider the case of a policy-maker who wants to know the effect of a cash transfer during childhood on adult income. She could start running an RCT now, but would have to wait at least 20 years before the results on adult income were estimable. This is too long a time-frame to be practically relevant for the policy-maker's decision today. One approach is to instead look at the impact of the program on an intermediate outcome called a 'surrogate', for example, in this case, test scores at age 11. We can also estimate the general relationship between test scores and income, and use the combination of the two relationships to identify the long-term impact of the cash transfer on income without waiting to observe the incomes of those in the experiment. This technique is often used in medicine but has not been applied in a social science context as it requires relatively strong assumptions. A recent paper by Athey et al. (2016) extends the theory behind this method to include many surrogates, making it more useful for social science. I propose to empirically test this new method in social science contexts by comparing the results from this method to those of long-run RCTs in development economics.
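For context, the identification strategy behind the surrogate-index approach of Athey et al. can be sketched as follows (the notation is mine, and this is a simplification of their setup). Let W denote treatment (the cash transfer), S the surrogate outcomes (e.g. test scores), X covariates, and Y the long-run outcome (adult income). In an observational sample where both S and Y are observed, estimate the surrogate index

h(s, x) = E[Y | S = s, X = x].

Then, under a surrogacy assumption (treatment affects Y only through S, given X) and a comparability assumption linking the two samples, the long-run treatment effect can be recovered from the experimental sample as

τ = E[h(S, X) | W = 1] − E[h(S, X) | W = 0],

i.e. the treatment effect on the predicted long-run outcome, without waiting for Y to be realised for the experimental participants.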

Phil Trammell, “Subsidizing Patience”

Most people act roughly so as to maximize their future welfare discounted at some positive rate of pure time preference. Here, I consider the problem of a ‘patient philanthropist’, who aims to maximize total individual welfare discounted at a lower (perhaps zero) rate of pure time preference. A naïve patient philanthropist might invest his resources for beneficiaries’ future consumption; but given rational expectations and complete markets, impatient beneficiaries will counteract this patient philanthropy by investing less on their own behalf, or by borrowing more. The patient philanthropist therefore does best to subsidize beneficiaries’ investment. Given an economy with AK growth and individuals with isoelastic utility in consumption, I determine the schedule on which the philanthropist optimally disburses these subsidies. I also discuss the analogy between this simple model and more realistic characterizations of the problem of patient philanthropy.
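As a rough guide to the model ingredients named in the abstract (notation mine, not the paper's): output follows an AK technology, Y_t = A·K_t; individuals have isoelastic utility u(c) = c^(1−η)/(1−η) and discount their own future welfare at a positive rate δ > 0; the patient philanthropist evaluates consumption streams with a lower rate δ_P < δ (perhaps δ_P = 0), i.e. an objective of the form ∫ e^(−δ_P·t) u(c_t) dt. The wedge between δ and δ_P drives the result described in the abstract: with rational expectations and complete markets, anything the philanthropist saves on beneficiaries' behalf is offset by their own reduced saving or increased borrowing, which is why the optimal policy targets beneficiaries' investment decisions through a subsidy rather than accumulating resources for them directly.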

Lewis Ho, “Catastrophe, timing and imperfect prediction”

When considering x-risk mitigation, how do we trade off between acting now (when we have more time), and later (when we have more expertise and can work more efficiently)? How valuable is increasing our information about the future? Looking specifically at the case of AI, I model progress in AI as a counting process, the decision to initiate AI safety research as an optimal timing problem, and examine the expected value of increasing the precision of a policymaker’s beliefs over the trajectory of AI. I further consider scenarios in which the agent in question is not a policymaker but a philanthropist with totalist moral beliefs, who is interested in leveraging public policy to maximize the cost-effectiveness of her mitigation efforts.

Talk Abstracts - Tues 2 July, Morning Session

Paolo Piacquadio, “The Ethics of Intergenerational Risk”

This paper addresses the evaluation of intergenerational allocations in an uncertain world. It axiomatically characterizes a class of criteria, named reference-dependent utilitarian, that assess allocations relative to a stochastic reference. The characterized criteria combine social concerns for ex-ante equity—capturing the idea that generations should be treated equitably before risk is resolved—and for ex-post fairness—capturing the idea that generations should be treated equitably after risk is resolved. Social discounting is endogenous and is governed by two opposite forces: extinction risk pushes society to reduce the weight on future generations, while (uninsurable) technological risk pushes society to increase the weight on future generations.

Orri Stefansson, “On the Limits of the Precautionary Principle”

The Precautionary Principle (PP) is an influential principle for catastrophic risk management that may seem particularly relevant when our choices could have long-lasting (perhaps even irreversible) effects. The principle has been widely introduced into environmental legislation, and it plays an important role in most international environmental agreements. Yet there is little consensus on precisely how to understand and formulate the principle. In this talk I discuss some impossibility results for different precisifications of the PP, understood as a (partial) decision rule. These results illustrate the difficulty of making the PP consistent with the acceptance of any trade-offs between catastrophic risks and more ordinary goods.

David Thorstad, “Procedural rationality for longtermists”

Axiological longtermism is the view that the vast majority of an option’s value is typically determined by its impact on the long-term future. Despite its plausibility, axiological longtermism generates two problems. First, we are often clueless about the long-term effects of our actions. Decision paralysis threatens: it is unclear if and how clueless longtermists may rationally act at all. Second, longtermist decision-making may be cognitively demanding: rational decision-making seems to require agents to evaluate a vast number of future contingencies.

After reviewing existing solutions, I suggest that progress can be made on both problems by turning from substantive to procedural rationality. Where substantive rationality asks how agents should act, procedural rationality asks how agents should make up their minds about how to act.

I argue that both problems for longtermism are most helpfully understood at the procedural level, and show how progress can be made on procedural versions of both problems.

John Weymark, “Demographic Issues in Formal Models of Population Axiologies”

Unsatisfactory implicit demographic assumptions are identified in some of the axiomatic characterizations of axiologies in formal models of variable population social choice.

Talk Abstracts - Tues 2 July, Afternoon Session

Olle Haggstrom, “Challenges to the Omohundro—Bostrom framework for AI motivations”

If and when we manage to create a superintelligent AI, we cannot count on remaining in control, so much will depend on the machine's goals and motivations. Working out what these will be is difficult, and pretty much the only serious attempt at present towards thinking systematically about this is the Omohundro—Bostrom theory of instrumental vs final AI goals. Here we discuss some concerns regarding the validity and applicability of that theory.

Lucius Caviola, “How bad is human extinction? The psychology of existential risk”

The 21st century is likely to see growing risks of human extinction, but currently relatively few resources are invested in reducing these risks. We study how the general public thinks about human extinction (five studies; total N = 2,507). We find that lay people do not judge human extinction to be uniquely bad relative to near-extinction catastrophes, which allow for recovery. We identify two causes. First, people focus on the immediate harm that the catastrophes cause. Second, people do not focus on the catastrophes’ long-term consequences, partly because they are not particularly optimistic about how good the long-term future will be. Overall, we find that lay people do not regard human extinction as uniquely bad, but that they may change their mind under careful reflection.

Karim Jebari, “Resetting the tape of history: what can we infer about history from instances of convergent cultural evolution?”

A global catastrophe may bring about a collapse of human civilization. The normative evaluation of such an outcome ought to consider (among other things) the probability of the re-emergence of modern civilization after such an event, and how soon such re-emergence would take place. In theoretical biology, a discussion relevant to this consideration has taken place between proponents of the “robustness thesis”, according to which the history of life is robust to change, and the “fragility thesis”, according to which the history of life is contingent on specific conditions. In this discussion, instances of convergent evolution have been appealed to as evidence for the robustness thesis. Here, I discuss whether instances of convergent cultural evolution can be used to defend a robustness thesis with regard to human history. In other words, should we expect that history would repeat itself after a collapse of human civilization (or a resetting of the “tape of history”)? I also discuss the extent to which degrees of freedom with regard to cultural practices can inform these considerations. I argue (1) that we have no evidence suggesting that a modern, industrialized civilization is likely to re-emerge in a “reset” scenario, and (2) that if such a re-emergence were to take place, it would likely happen later rather than sooner.

Teru Thomas, “Doomsday and Objective Chance”

The so-called Doomsday Argument suggests that our credence that the world ends soon should vastly exceed the objective chance. I'll sketch what I think is the best response to this argument, based on a foundational account of how chances provide norms for credence.

Roger Crisp, “Pessimism about the Future”

This talk argues that there are reasons for thinking that the extinction of sentient life on earth would be overall good.

Attending the workshop

The participant list for the workshop is now closed.

Future workshops

If you are interested in presenting at future similar workshops, please email [email protected] with an outline of your proposed topic.

The 3rd Oxford Workshop on Global Priorities Research is expected to take place on 12-13 December 2019, and the 4th Oxford Workshop on Global Priorities Research is expected to take place on 19-20 March 2020.