8th Oxford Workshop on Global Priorities Research
6-8 December 2021, online
Topic
Global priorities research investigates the question, ‘What should we do with our limited resources, if our goal is to do the most good?’ This question has close connections with central issues in philosophy and economics, among other fields.
This event will focus on the following two areas of global priorities research:
- The longtermism paradigm. Longtermism claims that, because of the potential vastness of the future of sentient life, agents aiming to do the most good should focus on improving the very long-run future, rather than on more immediate considerations. We are interested in articulating, considering arguments for and against, and exploring the implications of this longtermist thesis.
- General issues in cause prioritisation. We will also host talks on various cause prioritisation issues not specific to longtermism - for instance, issues in decision theory, epistemology, game theory and optimal timing/optimal stopping theory.
These two categories of topics are elaborated in more detail in Sections 1 and 2 (respectively) of GPI’s research agenda.
Schedule
The workshop will take place between 3pm and 6pm each day (UK time, GMT).
Time | Session
---|---
14:30 | Arrival and informal networking
15:00 | Hilary Greaves, Welcome and overview of GPI
15:20 | Louise Guillouet, “Precaution, information and the (negative) value of the Precautionary Principle”
16:15 | Coffee break and informal networking
16:40 | Daniel Heyen, “Disagreement aversion”
17:20 | Danae Arroyos-Calvera, “How does the Value of a Life Year (VOLY) depend on the timing of risk reductions?”
18:00 | Informal networking
Talk abstracts
Monday 6 December - Day 1 Session A
15:20 - Louise Guillouet, “Precaution, information and the (negative) value of the Precautionary Principle”
We consider a dynamic decision-making problem under irreversibility and uncertainty. A decision-maker enjoys surplus from his current actions but faces the possibility of an irreversible catastrophe, an event that follows a non-homogeneous Poisson process with a rate that depends on the stock of past actions. Past a tipping point, the probability of a disaster increases once and for all. In such contexts, the Precautionary Principle has repeatedly been invoked to regulate risk. We ask whether such an institutional commitment to prudent actions in a world of incomplete social contracts has any value, and we answer negatively. When only the distribution of possible tipping points is known, the optimal feedback rule should a priori determine actions in terms of, first, the stock of past actions and, second, beliefs about whether the tipping point has been passed. We nevertheless show that Stock-Markov Equilibria, which are sustained by feedback rules that depend only on the stock and allow commitment to actions only for infinitesimally short periods of time, suffice to implement an optimal path. Yet committing to such limited feedback rules once and for all is suboptimal, pointing to the negative value of the Precautionary Principle.
Keywords: Environment; Technology; Risk; Irreversibility
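The catastrophe process described in the abstract can be illustrated numerically. The sketch below is not from the paper; the rate values, the action path, and the assumption of a known tipping point are all illustrative. It shows how the survival probability under a non-homogeneous Poisson hazard falls faster once the stock of past actions crosses the tipping point.

```python
import math

def survival_probability(actions, tipping_point, base_rate=0.01, post_tip_rate=0.05):
    """Probability that no catastrophe has occurred after a sequence of actions.

    The catastrophe follows a non-homogeneous Poisson process whose rate
    depends on the accumulated stock of past actions; once the stock passes
    the (here, known) tipping point, the rate jumps up once and for all.
    """
    stock = 0.0
    cumulative_hazard = 0.0
    for a in actions:
        stock += a
        rate = base_rate if stock < tipping_point else post_tip_rate
        cumulative_hazard += rate * stock  # hazard also grows with the stock
    return math.exp(-cumulative_hazard)

# Identical action paths, different tipping points: an earlier tipping point
# means more periods spent at the high rate, hence a lower survival probability.
path = [1.0] * 10
print(survival_probability(path, tipping_point=8.0))
print(survival_probability(path, tipping_point=3.0))
```

In the paper the tipping point is uncertain, which is what makes the choice of feedback rule non-trivial; this sketch only shows the hazard mechanism itself.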
16:40 - Daniel Heyen, “Disagreement aversion”
Decision-makers rely on experts who often disagree. Aversion to expert disagreement is usually modeled with ambiguity-averse preferences which rest on a unanimity principle: if all experts consider one choice better than another, so should the decision-maker. Such unanimity among experts however can be spurious, masking substantial disagreement on the underlying reasons. We introduce a novel notion of disagreement aversion to distinguish spurious from genuine unanimity. We develop a model centered around the cautious aggregation of expert beliefs that is able to capture that novel notion of disagreement aversion. We provide formal results and illustrate them in applications.
Keywords: Disagreement aversion; Uncertainty aversion; Ambiguity; Belief aggregation
17:20 - Danae Arroyos-Calvera, “How does the Value of a Life Year (VOLY) depend on the timing of risk reductions?”
The aim of this paper is to establish whether the rate at which individuals discount future utility can explain individual differences in VOLY (Value of a Life Year) preferences. Using a large (n=1662) survey, we derive personal discount factor estimates underpinning the VOLY and establish how preferences for different VOLY types (one-off risk reductions, ongoing risk reductions and risk reductions that grow over time) depend on these discount factors. The survey includes a substantial learning phase to familiarize participants with conditional risk sequences. Armed with this understanding, participants complete a series of iterated choices between hypothetical policy options, personalised to their age and sex, that reduce their risks of dying in a given future decade. The options are differentiated by the decade in which the risk reductions would occur. We contribute to the growing empirical research on direct elicitation of preferences over life expectancy gains (e.g. Nielsen et al., 2010; Hammitt and Tuncel, 2015) and provide further evidence that people differ in their preferred VOLY type, showing how time preferences explain some of this variation in preferences. Our contribution is important for three reasons: it sheds light on the reasons for preferences over different VOLY types; generates evidence about the time preference rate and functional form for policy analysis; and provides a means of empirically bridging between Value of Statistical Life (VSL) and the VOLY.
Keywords: VOLY; Discounting
Monday 6 December - Day 1 Session B
15:20 - Kenny Easwaran, “A new method for value aggregation”
Many axiological theories ground the goodness or badness of options in the aggregate of the goodness or the badness of these options for individuals. Most commonly, this works by summing (or averaging), and taking the expectation of this result if there is uncertainty. Such theories face problems dealing with infinite populations, for which sums or averages are infinite or undefined. They fetishize certain mathematical operations, in a subject that is not inherently mathematical. The fact that the sum is the target of maximization is said to mean that the theory ignores the separateness of persons. I propose an extension of my 2014 paper, ‘Decision Theory without Representation Theorems’. I illustrate how the resulting theory can account for a case involving an infinite population, dealing with the objections. I connect this to the theory of measurement, and explain why the mathematical operations are co-extensive with the result without grounding it. Because there is no sum in these infinite populations, it avoids the worry that aggregative theories treat individuals as secondary to some aggregate.
Keywords: Aggregation; Axiology; Infinity
16:40 - Karim Jebari, “Sex selection for daughters: demographic consequences of female-biased sex ratios”
Modern fertility techniques like flow cytometry allow parents to carry out preimplantation sex selection at low cost, with no medical risk, and without the ethical or medical concerns associated with late abortions. Sex selection for non-medical purposes is legal in many high-income countries, and social norms toward assisted reproductive technology are increasingly permissive, so the practice may plausibly become increasingly prevalent in the near future. While concerns over son preference have been widely discussed, sex selection that favors female children is a more likely outcome in high-income countries. If sex selection is adopted, it may bias the sex ratio in a given population. Male-biased populations are likely to experience slower population growth, which limits the long-term viability of the corresponding cultural norms. Conversely, female-biased populations are likely to experience faster population growth; cultural norms that promote female-biased sex ratios are therefore self-reinforcing. In this study, we explore the demographic consequences of a female-biased sex ratio for population growth and population age structure. We also discuss the technology and parental preferences that may give rise to such a scenario.
Keywords: Sex selection; Sex ratios; Assisted reproductive technology; Population growth
17:20 - Petra Kosonen, “Tiny probabilities and the value of the far future”
Morally speaking, what matters most is the far future, at least if we accept Longtermism. According to Longtermism, our acts’ expected influence on the value of the world is mainly determined by their effects in the far future. The case for Longtermism is straightforward: even a tiny probability of a very large population in the far future outweighs the importance of our acts’ effects in the short term. But it seems that there is something wrong with a theory that lets very small probabilities of huge payoffs dictate one’s course of action. If, instead, we discount small probabilities down to zero, we may have a response to Longtermism, provided that its truth depends on tiny probabilities of vast value. Contrary to this, I will argue that discounting small probabilities does not undermine Longtermism.
Keywords: Longtermism; Fanaticism; Probability discounting
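The structure of the gamble at issue can be made concrete. The numbers below are purely illustrative, not from the talk, and the simple cut-off rule (zeroing probabilities below a threshold, without renormalising) is just one crude form of probability discounting.

```python
def expected_value(outcomes, threshold=0.0):
    """Expected value of a lottery given as (probability, value) pairs,
    discounting any probability below `threshold` down to zero.
    (Probabilities are not renormalised in this simple sketch.)"""
    return sum(p * v for p, v in outcomes if p >= threshold)

# A longtermist-style gamble: a tiny chance of astronomical value,
# versus a sure, modest short-term benefit.
longshot = [(1e-10, 1e15), (1 - 1e-10, 0.0)]
sure_thing = [(1.0, 1000.0)]

print(expected_value(longshot))                  # ~1e5: the longshot dominates
print(expected_value(longshot, threshold=1e-6))  # 0.0: discounting reverses the ranking
print(expected_value(sure_thing, threshold=1e-6))
```

Kosonen’s claim is that moves of this kind do not in fact undermine Longtermism; the sketch only shows why one might initially think they would.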
Monday 6 December - Day 1 Session C
15:20 - Preston Greene, “Social bias and not time bias”
People seem to have pure time preferences about tradeoffs concerning their own pleasures and pains, and such preferences contribute to estimates of individual time discount rates. Do pure time preferences also matter to interpersonal welfare tradeoffs, including those concerning the welfare of future generations? Most importantly, should an intergenerational time discount rate include a pure time preference? Descriptivists claim that the intergenerational discount rate should reflect actual people’s revealed preferences, and thus that it should include a pure time preference. Prescriptivists claim that the intergenerational discount rate should be based on moral analysis, and thus they (often) claim that the rate of pure time preference should be zero. I argue that regardless of which view is correct, a focus on pure time preference is misplaced. First, the most plausible interpretation of actual preferences for intergenerational tradeoffs is that people are socially biased and not time biased. Second, social bias is superior to time bias as a prescriptive reason to discount the welfare of future people: it is more justifiable to base preferences on social features than on non-social features of the physical makeup of our universe like time or space. Third, recent advances in measuring social bias as a 'social discount rate' make social bias a feasible replacement for time bias in economic analyses of intergenerational welfare tradeoffs.
Keywords: Time bias; Time discounting; Future generations; Intergenerational discounting
16:40 - Andreas Mogensen, “On lexical threshold negative utilitarianism”
This talk addresses the plausibility of lexical threshold negative utilitarianism (LTNU): roughly, the view that there is some level of suffering that cannot be outweighed by any increase of well-being. I discuss the status of Le Guin's well-known short story, "The Ones Who Walk Away from Omelas," as an intuition pump favouring LTNU, and show that it is possible to argue for (a weakened form of) LTNU, appealing to intuitively plausible axiological premises. I also explore the practical implications of rejecting one of the key premises of the argument - a premise which states, roughly, that the value of good lives can always be outweighed by the disvalue of sufficiently many lives with negative lifetime welfare levels, for any negative lifetime welfare level. Finally, I discuss objections to LTNU, and its implications for thinking about the value of the human future.
Keywords: Population axiology; Negative utilitarianism; The value of the future
17:20 - Christopher Cowie & Arabella Lawler, “Malaria nets or asteroid shields? Deontology, high stakes and the very far future”
Should deontologists be longtermists? MacAskill and Greaves have claimed they should. Their argument proceeds via a 'stakes sensitivity argument' according to which, roughly, if the stakes are high enough and all else is equal, deontologists should be guided by outcomes. We claim that this argument fails. While it is true that there are some cases in which if the stakes are high enough deontologists should be guided by outcomes, not all cases are like this. It depends on the distribution of welfare in the outcomes. Specifically, deontologists do not think that high stakes warrant guidance by outcomes *if the welfare is distributed in very small per-person amounts over a very large population*. This is well known from discussion of familiar cases by canonical non-consequentialists e.g. Kamm, Scanlon. And problematically for longtermists, the outcomes that justify their strategy are of exactly this kind: expected welfare is distributed in very small amounts over a large population. So deontologists should not be longtermists. We then explain why this commitment does not render deontological views obviously objectionably anti-aggregative.
Keywords: Longtermism; Deontology; Deontic Strong Longtermism; Welfare distribution; Tie-breaking models
Monday 6 December - Day 1 Session D
15:20 - Jeff Sebo
In this talk I consider some problems that small animals such as arthropods and nematodes raise for utilitarianism. In particular, if small animals have more expected welfare than large animals overall, then utilitarianism implies that we should prioritize the former all else equal. This could lead to a “rebugnant conclusion,” according to which we should create large populations of small animals rather than small populations of large animals. It could also lead to a “Pascal’s bugging,” according to which we should prioritize large populations of small animals even if these animals have an astronomically low chance of existing or being sentient at all. I argue that the utilitarian should accept these implications in principle, but might be able to avoid some of them in practice.
Keywords: Utilitarianism; Animal welfare; Climate change; Population ethics
16:40 - Stephane Zuber, “Long-run discounting is ethically robust”
Social discounting plays a crucial role in long-run project evaluation. However, there is substantial disagreement between competing ethical theories regarding the correct level of social discounting. A decision maker can reasonably be uncertain about which is the right ethical theory and how to compare competing theories. We show that, when ethical uncertainty is dealt with by maximizing expected choiceworthiness, the probabilities assigned to different moral theories, and the way we compare those theories, do not matter in the long run: in the long run, only the lowest social discount rate matters. We show that the result holds more generally if we consider other ways of dealing with moral uncertainty. We conclude that consideration of the lowest social discount rate for very long-run impacts is ethically robust.
Keywords: Discounting; Moral uncertainty; Long-run
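The long-run dominance of the lowest discount rate is easy to verify numerically. The sketch below is an illustration under assumed numbers, not the paper's model: two hypothetical moral theories with different discount rates, aggregated by expected choiceworthiness.

```python
def expected_choiceworthiness(value, t, rates, probs):
    """Credence-weighted discounted value of a benefit `value` arriving at
    time t, where moral theory i (credence probs[i]) discounts at rates[i]."""
    return sum(p * value / (1 + r) ** t for r, p in zip(rates, probs))

def effective_rate(t, rates, probs):
    """The single discount rate implied at horizon t by the aggregate."""
    w = expected_choiceworthiness(1.0, t, rates, probs)
    return w ** (-1 / t) - 1

rates = [0.001, 0.03]  # two theories: near-zero vs. 3% discounting
probs = [0.1, 0.9]     # the low-rate theory gets only 10% credence
for t in [10, 100, 1000]:
    print(t, round(effective_rate(t, rates, probs), 4))
# As t grows, the effective rate approaches the lowest rate (0.001),
# however small the credence assigned to the low-rate theory.
```

Intuitively, the high-rate theory's term decays exponentially faster, so at long horizons the aggregate is dominated by the theory with the lowest rate.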
17:00 - Christian Tarsney, “Does good beget good? Some very preliminary speculations”
Do actions and events that make the near future better also tend to make the far future better? If so, it might turn out that a significant portion of the optimal "longtermist portfolio" will consist of projects that have substantial near-term value (e.g., building wealthier, more just/equitable, and better-governed societies). If not, it is perhaps suspicious that nearly all the projects longtermists have identified as promising seem either good or neutral in the near term (compared with a "business as usual" use of resources). Perhaps, for better or worse, there are opportunities to improve the far future that would substantially harm the present and near-future generations. This talk describes some reasons to expect either a positive or a negative correlation between the near-term and long-term value of actions and events, and explores the potential implications of an overall positive correlation.
Keywords: Longtermism
17:20 - Brian Jabarian, “Climate policy and welfare under normative uncertainty”
In this paper, we provide a climate-economy macroeconomic model under normative uncertainty: the Nested Inequalities Climate-Economy model with Risk and Ideologies (NICERI). We start by axiomatizing our model. Its representation theorem relies on the introduction of a minimal comparability principle, allowing welfare comparisons across heterogeneous ideologies. Then, we provide macroeconomic simulations of our model. Finally, we collect survey data from a representative US population to calibrate our model.
Keywords: Climate change; Welfare economics; Normative uncertainty
17:40 - John Quiggin, “Discounting, future generations and climate change”
In the presence of overlapping generations and under standard conditions for a social welfare ordering (Pareto optimality, transitivity, independence), the only ordering consistent with utilitarianism for all people currently alive at any given point in time is one based on weighting all people equally, regardless of their date of birth. In particular, this implies that, under reasonable conditions, the pure rate of social time preference is zero.
Keywords: Equity; Climate; Overlapping generations
Tuesday 7 December - Day 2 Session A
15:00 - Marcus Pivato, “Population ethics in an infinite universe”
Population ethics studies the tradeoff between the total number of people who will ever live, and their quality of life. But widely accepted theories in modern cosmology say that spacetime is probably infinite. In this case, its population is also probably infinite, so the quantity/quality tradeoff of population ethics is no longer meaningful. Instead, we face the problem of how to ethically evaluate an infinite population of people dispersed throughout time and space. I propose spatiotemporal Cesàro average utility as a way to make this evaluation, and axiomatically characterize it.
Keywords: Repugnant Conclusion; Infinite population ethics; Axiology
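The core idea of Cesàro average utility can be sketched in a few lines. This is an illustration, not Pivato's formal construction: the population stream and welfare levels below are hypothetical, and the real proposal averages over expanding regions of spacetime rather than an arbitrary enumeration.

```python
from itertools import islice

def cesaro_average(utility_stream, n):
    """Average utility over the first n people in some enumeration of the
    population; the Cesàro value is the limit as n grows (when it exists)."""
    window = list(islice(utility_stream, n))
    return sum(window) / len(window)

# A hypothetical infinite population alternating between welfare 1 and 0:
# total utility diverges, so totalism gives no verdict, but the running
# average converges to 0.5.
def alternating():
    while True:
        yield 1
        yield 0

for n in [10, 1000, 100001]:
    print(n, cesaro_average(alternating(), n))
```

This shows why such an average can rank infinite populations that total or average utilitarianism cannot: divergent sums are replaced by a (often convergent) limiting density of welfare.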
16:10 - Pauline Vorjohann, “Welfare-based altruism”
Why do people give when asked, but prefer not to be asked, and even take when possible? We show that standard behavioral axioms including separability, narrow bracketing, and scaling invariance predict these seemingly inconsistent observations. Specifically, these axioms imply that interdependence of preferences (“altruism”) results from concerns for the welfare of others, as opposed to their mere payoffs, where individual welfares are captured by the reference-dependent value functions known from prospect theory. The resulting preferences are nonconvex, which captures giving, sorting, and taking directly. This allows us to consistently predict choices across seminal experiments covering distributive decisions in many contexts.
Keywords: Altruism; Axiomatization; Giving behavior
17:20 - Philip Trammell, “New products and long-term welfare”
Existing frameworks for estimating the welfare benefits of economic growth fail to account properly for the welfare benefits of new product introduction. I introduce a more realistic framework for modeling new product introduction, and I show that it is unique in satisfying the Kaldor Facts alongside several new desiderata. I then show that, in this framework, the long-term benefits of economic growth are far greater than previously supposed. Finally, I note that this framework successfully predicts a difference, in the observed direction, between relative risk aversion and intertemporal substitution elasticity, as appears to underlie the equity premium puzzle and certain other puzzles produced by conventional growth models.
Keywords: Growth theory
Tuesday 7 December - Day 2 Session B
15:00 - Antonin Pottier, “Climate change and population: An assessment of mortality due to health impacts”
We develop a model of population dynamics accounting for the impact of climate change on mortality through five channels (heat, diarrhoeal disease, malaria, dengue, undernutrition). An age-dependent mortality, which depends on global temperature increase, is introduced and calibrated. We consider three climate scenarios (RCP 6.0, RCP 4.5 and RCP 2.6) and find that the five risks induce deaths in the range from 135,000 per annum (in the near term) to 280,000 per annum (at the end of the century) in the RCP 6.0 scenario. We examine the number of life-years lost due to the five selected risks and find figures ranging from 4 to 9 million annually. These numbers are too low to impact the aggregate dynamics but they have interesting evolution patterns. The number of life-years lost is constant (RCP 6.0) or decreases over time (RCP 4.5 and RCP 2.6). For the RCP 4.5 and RCP 2.6 scenarios, we find that the number of life-years lost is higher today than in 2100, due to improvements in generic mortality conditions, the bias of those improvements towards the young, and an ageing population. From that perspective, the present generation is found to bear the brunt of the considered climate change impacts.
Keywords: Climate change; Impacts; Mortality risk; Endogenous population
16:10 - Ezra Karger & Phil Tetlock, “Hybrid forecast-persuasion tournament for existential risk”
Forecasting tournaments are misaligned for producing actionable forecasts of existential risk, an extreme-stakes domain with slow accuracy feedback and elusive proxies for long-run outcomes. But researchers can improve alignment by measuring facets of judgment that play central roles in policy debates but have long been dismissed as unmeasurable. We propose a new type of forecasting tournament that integrates objective accuracy metrics with intersubjective metrics that test forecasters’ skill at predicting others’ judgments of outcomes that are difficult or impossible to score, including the accuracy of long-range forecasts and the persuasiveness of forecast rationales. We plan to implement these methods in an existential forecasting tournament where the forecasters are subject-matter experts and superforecasters working together to forecast unresolvable questions.
Keywords: Forecasting; Belief elicitation; Accuracy
17:20 - Harry Lloyd, “Time discounting, consistency, and special obligations: a defence of Robust Temporalism”
This paper defends the claim that mere temporal proximity always and without exception strengthens certain moral duties, including the duty to save - call this view Robust Temporalism. Although almost all other moral philosophers dismiss Robust Temporalism out of hand, I argue that it is prima facie intuitively plausible, and that it is analogous to a view about special obligations that many philosophers already accept. I also defend Robust Temporalism against several common objections, and I highlight its relevance to a number of practical policy debates, including longtermism. My conclusion is that Robust Temporalism is a live moral option that deserves to be taken much more seriously in the future.
Keywords: Time discounting; Longtermism; Cost-effectiveness analysis
Tuesday 7 December - Day 2 Session C
15:00 - Lara Buchak, “Combining Risk and Ambiguity”
Expected utility maximization assumes that everyone must have a particular attitude towards risk (global neutrality) and a particular attitude towards ambiguity (sharp probabilities). Each of these assumptions has been relaxed in separate theories. Risk-weighted expected utility maximization allows for a broader range of attitudes towards risk, while a multiplicity of theories (alpha-maximin, Choquet expected utility, and various 'menu-selection' theories) allow for a broader range of attitudes towards ambiguity. I will explore what each of these theories says about practical rationality. I will then show how risk-weighted expected utility can be combined with any of the theories about ambiguity. I close by arguing for a specific theory of ambiguity attitudes.
Keywords: Risk; Ambiguity
16:10 - Jacob Barrett, “Neglectedness and social change”
The neglectedness heuristic recommends contributing to or participating in more neglected causes. The standard defense of this heuristic appeals to diminishing marginal returns: in cause areas that are already “crowded,” one’s expected impact is lower because the low-hanging fruit have already been picked. However, many contexts, especially those involving social or institutional change, appear to be characterized by thresholds, in the sense that a valuable change will occur only if a sufficient number of people contribute to or participate in some cause. In these contexts, it is intuitive to think that we instead find increasing marginal returns as we approach the threshold, such that, far from employing the neglectedness heuristic, we should instead employ a contrary “bandwagon” heuristic: contribute to or participate in less neglected causes. In this paper, we present some results from a work in progress on this topic. The upshot for now is that the neglectedness heuristic is unreliable in many threshold contexts, and that in some such contexts we should employ the bandwagon heuristic.
Keywords: Neglectedness heuristic; Social change; Cause prioritization
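The threshold intuition can be made precise with a simple pivotality model. This is an illustrative sketch, not the authors' model: assume a benefit arises only if participation reaches a threshold, and each other potential participant contributes independently with some probability. Your marginal contribution matters only when you are pivotal.

```python
from math import comb

def pivotal_value(n_others, p, threshold, payoff):
    """Expected marginal value of contributing to a threshold cause:
    a benefit of `payoff` arises only if total contributors reach
    `threshold`, and each of `n_others` contributes independently with
    probability p. You make the difference exactly when threshold - 1
    others contribute (binomial pivotality probability)."""
    k = threshold - 1
    return comb(n_others, k) * p**k * (1 - p)**(n_others - k) * payoff

# Marginal value rises as the cause becomes *less* neglected (expected
# participation climbs toward the threshold) - the opposite of the
# diminishing-returns picture behind the neglectedness heuristic.
for p in [0.1, 0.3, 0.5]:
    print(p, pivotal_value(n_others=100, p=p, threshold=50, payoff=1.0))
```

Below the threshold, extra participation makes your pivotality more likely, giving increasing marginal returns; only past the threshold do returns fall off again.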
17:20 - Teruji Thomas, “Simulation expectation”
Bostrom's 2003 paper "Are we living in a computer simulation?" and related work suggest that an affirmative answer is at least somewhat likely on our current evidence, and that in some not implausible future circumstances it would become a near certainty. But it's unclear how the argument is supposed to go, given how little we would appear to know about the ground-level of reality if we are indeed in a simulation. I will present a new, and arguably more troubling, version of the argument, based on the premise that, supposing I am not in a simulation, the expected ratio of simulant to non-simulant people in environments like ours is high. I will also mention some ways in which this strange-sounding topic might be relevant to questions of prioritization, though that will not be the focus of the talk.
Keywords: Simulation argument; Anthropic reasoning
Tuesday 7 December - Day 2 Session D
17:20 - Ilan Noy, “Inequities in climate change-attributed impacts of extreme weather”
Climate change is already increasing the severity of extreme weather events, such as rainfall during hurricanes. But no research to date investigates whether, and to what extent, there are social inequalities in current climate change-attributed flood impacts. As an example, we use climate change attribution science paired with hydrological flood models to estimate climate change-attributed flood depths and damages during Hurricane Harvey in Harris County, Texas. We then combine this information with detailed land-parcel and census tract socio-economic data to describe the socio-spatial characteristics of these climate change-induced impacts. We show that 30-50% of the flooded properties would not have flooded without climate change. These climate change-attributed impacts were particularly felt in Latinx neighbourhoods, and especially so in Latinx neighbourhoods that were low-income and among those that were less likely to be insured. We also demonstrate, using the same attribution approach, that the costs of climate change are typically dramatically underestimated, and that one can use these bottom-up calculations to arrive at estimates of the total costs of extreme weather events attributable to climate change.
Keywords: Climate change; Attribution; Economic costs
Attending the workshop
Applications to attend this workshop closed 14 November 2021.
Detailed schedules and further information about previous workshops can be found via the links below.
- 1st Oxford Workshop on Global Priorities Research (with a focus on longtermism)
- 2nd Oxford Workshop on Global Priorities Research
- 3rd Oxford Workshop on Global Priorities Research (the second day of this event was a one-day workshop on predicting and influencing the far future)
- 4th Oxford Workshop on Global Priorities Research (cancelled)
- 5th Oxford Workshop on Global Priorities Research
- 6th Oxford Workshop on Global Priorities Research
- 7th Oxford Workshop on Global Priorities Research