Hilary Greaves | The Case for Longtermism
Presentation given at the Global Priorities Institute, June 2019
HILARY GREAVES: (00:07) This is a talk in practical ethics. I want to talk about a paradigm I'll call longtermism. So to get into the longtermist frame of mind, consider the following two observations. One of them is evaluative; the other one's descriptive. The evaluative claim is that it seems very plausible that insofar as the consequences of one's actions matter for the purpose of moral evaluation and decision making, all of the consequences of one's actions matter and they matter equally. In particular, it doesn't seem it should make a difference how remote in time or in space the effect in question is from the point of action. The descriptive observation is that if all goes well, that is to say if we don't go extinct prematurely, the number of humans and other morally relevant sentient beings that could be in the future, whose well-being could potentially be affected by our actions now, is potentially really astronomical. So when you put those two things together it becomes at least somewhat plausible that even after you take into account all the relevant uncertainties, that is to say, even after you take into account the general fact that as you go further out in time from the point of action, whether or not some particular thing will materialize as an effect of one's action becomes more uncertain. Even after you've accounted for that, it's at least somewhat plausible that the very best actions in terms of expected value are the ones that are best primarily because of their effects across the course of the very long-run future rather than because of their more immediate effects. So that kind of thought is the thought that I'm calling longtermism.
Before we go further there are a few things that are worth clarifying. So firstly, there's a question about what the relevant choice situation is. You might be thinking… Well, I can easily construct a counterexample to a longtermist claim. Here's how I'll do it. There are definitely some actions with the property that, at least in expected value terms, their main effects are just (02:00) in the short term and then they don't have any predictable effects for the far future. So, if I just construct a choice situation where the only actions that are available have that property, then clearly longtermism is going to be false of that choice situation, and that's true. So what that shows is that in order for a longtermist claim to be plausible we can't just be talking about an arbitrary decision context. We have to say something about which decision context we're talking about. In this talk I want to focus mostly on the choice context of what I'll call the ‘open-minded philanthropist’. More on what that means in a moment.
The second clarification is what exactly we mean by the very best actions. In particular we might mean something purely axiological (the actions whose consequences are best in expectation) or we might be trying to say something more like the very most choiceworthy actions even after all the relevant deontological considerations have been taken into account, if we accept a non-consequentialist moral theory. By way of division of labor, what I want to do first is focus mainly on an axiological longtermist claim, but I will say a little bit towards the end of the talk about whether a deontic longtermist claim might plausibly follow from axiological longtermism by the lights of a plausible non-consequentialist theory.
And then finally there are these vague terms in the longtermist claim, the ‘very long-run’ future and the ‘more immediate’ effects. There's a question about what kind of time frame we have in mind for the boundary between those two. So more fundamentally the situation is, for any time t you could formulate a claim, ‘longtermism sub t’, where t is the boundary between the near future and the very long-run future, and we could ask whether the longtermist claim is true relative to that value of t.
What I want to do in this talk is to explore the claim that a longtermist claim is true even for some quite surprisingly large value of t, perhaps 100 or even 1000 years. So we're going to be talking about the claim that even if you just ignored the next 100 or the next 1000 years (04:00) and focused solely on the consequences of your actions for the very long-run future beyond that time frame, often you would get pretty accurate assessments of which actions were better than which others.
Okay. So to sum all that up, here's a little bit more about the decision context we're talking about, ‘the open-minded philanthropist’. Let's call her Shivani. Shivani controls a pot of say five billion dollars. The amount of money doesn't matter. Her aim is to spend this money in whatever way would most improve the world and she's open to considering any projects as a means to doing this. So the open-mindedness is important here. The point about open-mindedness is that Shivani is not one of these philanthropists who's gone into the business of philanthropy because she has a passion for the opera and she specifically wants to fund the opera. No, rather she just wants to improve the world as much as she possibly can. She thinks it's possible to compare apples and oranges across quite different kinds of interventions and she's just looking for whichever cause area happens to have the property, in the world as we find it today, that it affords the most cost-effective opportunities to do good with her philanthropic dollars. Okay. So that's Shivani.
And then the axiological longtermist claim, a little bit more precisely, is the claim that the highest expected value options that are available to Shivani have the feature that most of their expected value relative to business as usual comes from effects more than 100 or even more than 1000 years into the future rather than from the more short-term effects. This thesis, insofar as it's true, would be quite revisionary because normally in philanthropic and also in policy contexts there's an implicit assumption that trying to affect the course of the very long-run future would just be too intractable and so perhaps for that reason we tend to just focus on the effects that our dollars could have within our own lifetimes or even just within the next five years. Okay. So ‘business as usual’, I should have said, is the scenario where Shivani just does nothing (06:00) with her money, in whatever is the most appropriate sense of that phrase. Perhaps she just leaves it in her bank account indefinitely or she takes some money out of her bank account and then burns the bank notes in the fire.
Here's an outline of the remainder of the talk. So having now articulated a longtermist thesis, in Section One, I want to give a plausibility argument for longtermism, going a little bit beyond the vague thoughts sketched in that introduction. This is not supposed to be a knockdown argument for longtermism. It's not supposed to be a deductively valid argument, but it is supposed to be a set of considerations that I think should predispose us towards being at least somewhat sympathetic to longtermism until and unless some objection comes along. However, there are objections and the main body of the talk is devoted to examining those. In Section Two, I want to talk about what I myself take to be the most serious objection to longtermism. This is simply the empirical objection that, it's claimed, this project of affecting the course of the very far future is just too intractable. Then in Section Three, I want to consider axiological objections. The point here will be that up to that point in the talk I will have been, implicitly or otherwise, considering the question of longtermism from the point of view of a broadly utilitarian axiology. But of course the utilitarian axiology is controversial, so now we want to do a sensitivity analysis. Here's the question for Section Three: is there some plausible way of deviating from utilitarianism such that even if the argument for longtermism goes through on a utilitarian approach it doesn't go through on this plausible alternative approach? How sensitive is longtermism to plausible choice points in axiology, in other words? In Section Four, I will briefly sketch an argument by which one might go from axiological longtermism, insofar as that's granted, to a deontic longtermist thesis about what one ought to do by the lights of a plausible moral theory, even if it's a non-consequentialist theory. And (08:00) then in Section Five, I want to just very briefly consider the extent to which, even if longtermism is true for a decision context like Shivani's, it might also be true for other decision contexts. So if we're not talking about an open-minded philanthropist, or if we're not talking about a philanthropist at all, how many other actors are suitably positioned such that a longtermist claim will be true of them as well?
Okay. So Section One, the plausibility argument for axiological longtermism. I'll say a little bit more about the size of the future, then I'll sketch a toy model which I find helpful for getting into the frame of mind that makes longtermism seem plausible. Then I want to adduce an additional consideration in favor of longtermism relating to the way actors other than Shivani are currently behaving, and then I'll tie it all together and try to sum up the argument.
Okay. So a bit more about the size of the future. I said at the beginning that if all goes to plan there are potentially astronomical numbers of sentient beings in the future who might be affected by our actions. What kind of numbers should we be thinking of when we hear claims like that? Well, it's going to be very hard to come up with a single point estimate as our best guess because there's an enormous array of plausible answers here. So more plausibly we should have a credence distribution that's quite widely spread across a very large range of numbers. But still we can ask: okay, what kind of range should have a significant amount of our credence? So just think about what kind of numbers we should be anchoring on here. Consider the following empirical observations. Firstly, suppose we were thinking: well, the relevant reference class is just the average mammalian species, because we're mammals, maybe we're just like all the other mammals. In that case the relevant anchoring figure comes from the fact that the average mammalian species lasts for something like one to two million years. Meanwhile, homo sapiens so far has only been around for about 200,000 years, so if we think this is the relevant (10:00) reference class, it gives us a figure of something like one million years of humanity remaining, if we don't go extinct prematurely. But you might think that's too pessimistic. Maybe we're better placed than the average mammalian species to survive, perhaps because we have better technology and foresight. In that case you might think the kind of thing that's more likely to wipe out humans is some mass extinction event, maybe the kind of thing that wiped out the dinosaurs. Well, events like that historically have happened only around once every 100 million years, so insofar as that's the relevant comparison class, it gives us a much longer time frame than that. We might be more optimistic still even than that. The earth will become uninhabitable in something over one billion years' time, and furthermore there's a non-trivial possibility that we could conquer the project of interstellar space travel. If we can settle other star systems, then even the fact that our own star system and our planet become uninhabitable might not spell the end for humanity.
Okay. So there's an enormous range of time frames to spread our credence across, but in light of these numbers, in terms of the expected remaining duration of humanity, we can see that even a very large figure like one million years is at the very conservative end of what should be considered a plausible estimate. That number, however, is still extremely large.
Okay. So now take that very conservative estimate, one million years remaining, and feed it into this toy model. Okay. So to reprise, this is just a little model you might find helpful for seeing why longtermism might be considered plausible. Let's temporarily assume, just for concreteness and for convenience, temporal separability. That is to say, there is some way of assigning values to things that are going on in the world at a particular point in time, so some way of assigning values to time slices, with the feature that the goodness of a whole history of the universe from Big Bang to Heat Death is given by adding up these temporally localized values across all the times. (12:00) Temporal separability is validated by, for example, a standard total utilitarian axiology but also by many other axiologies besides. Now t, as before, will be the threshold between the “near” and the “far” future, perhaps a hundred or a thousand years, and let T be the “end of time”. So we're thinking of T as being maybe something like a million years for current purposes, although the exact figure doesn't matter.
Now suppose we're comparing two actions, Action One and Action Two. Maybe Action One is the case where Shivani spends her dollars in some particular way and Action Two is the action where she does nothing with the money. We want to know what's the expected value difference between those two actions. In particular we want to know the sign of the expected value difference, because we want to know which action is better. In the light of temporal separability, we can write down a pretty simple formula for that. We can decompose the expected value difference into a term that represents the time-averaged expected value difference… Sorry, the time-summed expected value difference across the course of the near future (so the next 1000 years) and then the time-summed expected value difference across the remaining very long future (so the rest of that one million years). So the point here is: t is the duration of the near future (that's just a thousand years); this other term is something very close to a million years, so you've got at least a factor of a thousand relating the duration of the far future to the duration of the near future. These little δ's are the average amount by which our actions are able to affect the goings-on at a particular time in the near future and in the far future respectively. The point then is that unless long-term δ (how much we can affect far-future goings-on) is even less than one thousandth of short-term δ (how much we can affect short-term goings-on), then the sign of this sum is going to be determined by the very far-future goings-on rather than the more immediate effects, if you just want to know which of the two actions is better. In other words, unless long-term δ is really exceedingly small, you're going to (14:00) get the right answer by just looking at the very far future and even completely ignoring the near future.
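As a rough reconstruction of that decomposition (the notation here is mine, not taken from the talk's slides):

\[
\Delta EV \;=\; \underbrace{\sum_{s=0}^{t} \Delta EV(s)}_{\text{near future}} \;+\; \underbrace{\sum_{s=t}^{T} \Delta EV(s)}_{\text{far future}} \;\approx\; t\,\delta_{\text{near}} \;+\; (T-t)\,\delta_{\text{far}},
\]

with \(t\) on the order of \(10^{3}\) years and \(T\) on the order of \(10^{6}\) years, so that \(T - t\) is roughly a thousand times \(t\). The far-future term therefore determines the sign of \(\Delta EV\) unless \(\delta_{\text{far}}\) is less than about one thousandth of \(\delta_{\text{near}}\).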
Okay. One further point… Things would be different if everybody else were already like Shivani. So if the world were already populated with wealthy billionaires and governments and intergovernmental organizations who were all trying to optimize for this completely time impartial optimization function, they were all just trying to make the world better and they weren't discounting benefits that occur further into the future, then you might think that at the current margin the most cost-effective ways of benefiting the very short term will be just as cost-effective as the very best, very most cost-effective ways of benefiting the far future. You can see this by thinking in terms of an analogy to a fruit tree. So in terms of this analogy think of the fruits that are lower down the tree as being the more cost-effective interventions, the ones higher up the tree are the less cost-effective interventions. So generally speaking actors are coming along and trying to pick the low-hanging fruit, the more cost-effective ones. And imagine further that the left-hand half of the tree is the interventions whose main benefits are in the very short-term (very short-term meaning like the next hundred or a thousand years) and the right-hand half of the tree are the interventions whose main benefits are spread across the course of the very long-run future.
Okay. Now if it were the case that everybody was time impartial, then all the previous actors before Shivani would have just been picking the lowest hanging fruit they found on this tree irrespective of whether it lay on the left or the right, but of course that's not the context we find ourselves in. Instead, the vast majority of actors controlling resources in the world today, for various reasons, exhibit a significant amount of near-term bias. They place greater value on short-term benefits. And so they've been selectively picking fruit from the left-hand half, the short-termist half of the tree, even when there was lower-hanging fruit remaining on the longtermist half, on the right. So that's a further reason for (16:00) thinking that when Shivani comes along, if there were any fruit on the right-hand half of the tree in the first place, they will have been selectively left there by the other actors who are biased towards the near term.
Okay. So summing that line of thought up then, firstly, we should at least tentatively expect there to exist interventions in the first place that have sufficiently high “long-term δ” to be more cost-effective than the best unfunded short-termist interventions. That's as illustrated in the toy model: given how vast the far future is, that claim is going to be true even if the amount by which you're able to affect the very far future is exceedingly small, provided it's not too small.
Secondly, we should expect that many such longtermist interventions are currently unfunded because of the fact that most other actors exhibit significant near-term bias. And so putting those things together we should expect to find axiological longtermism true at the current margin when Shivani comes along with her completely impartial perspective.
Okay. So next I want to take on, as I said, what I consider to be the most plausible, the most forceful objection to this line of thought. On the days when I myself don't believe axiological longtermism, it's because of the stuff in Section Two. Okay. So the idea is, nothing we've said so far rules out the possibility… I don't think it's the true possibility, but nothing we've said so far rules out the possibility that, for instance, there could just be a complete causal disconnect between the next 1,000 years of the world's history on the one hand and everything that happens after that on the other hand. So for all we've said so far, it could be the case that nothing we do today has any effect whatsoever on the far future beyond the 1,000-year mark. And of course, if that were the causal structure of the world, then longtermism would be false. So this is the point that, yes, if there were fruits on the longtermist half of the tree in the first place, then they would still be there. But we haven't yet (18:00) shown that there were any there in the first place, or at least any that were at all low down in the first place.
If you're skeptical about this you might believe what I'll call the “washing-out hypothesis”. That hypothesis says that the magnitude of the average effects of one's actions on the future tends to decay with time from the point of action, and furthermore it tends to decay sufficiently fast that in fact the short-term effects dominate ex ante value differences, so expected value differences. Okay. It's important we're talking about expected value differences here. If we were instead asking the corresponding ex post question… So suppose we were asking, for a given action I take, after all the uncertainty dust settles, will it in fact be the case that the majority of the value difference my action makes occurs in the next 1,000 years, or instead over the course of the further future beyond that? For that question, the answer is almost certainly, almost always, that it is the very far-future effects that make the lion's share of the contribution to the value difference. But when we're asking about expected value, it's far less clear that that's true. In particular I think that the washing-out hypothesis is true, and so longtermism would be false, for sufficiently trivial decision contexts. If we're just asking about whether it's better for me to click my fingers or not click my fingers now? Well, unless there's something special going on in terms of the causal hookup between my clicking my fingers and other goings-on in the world, the only real effect that's going to have in expected value terms is that my fingers are a little bit more painful or a little bit less painful for the next few seconds. So that would be a case where the expected value difference is dominated by the very short term. So there are decision contexts like that. What I want to argue, though, is that washing-out is false, and so longtermism is true, for some decision contexts. In particular I think that's the case for the decision context of the “open-minded philanthropist”, the character I'm calling Shivani.
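One way to make the washing-out hypothesis a little more precise, as a hedged sketch in the notation of the earlier toy model (again my notation, not the talk's): suppose the expected per-period effect of an action decays exponentially from the point of action,

\[
|\delta(s)| \;\le\; \delta_0\, e^{-\lambda s},
\qquad\text{so that}\qquad
\Big|\sum_{s > t} \Delta EV(s)\Big| \;\lesssim\; \frac{\delta_0}{\lambda}\, e^{-\lambda t}.
\]

If the decay rate \(\lambda\) is large relative to \(1/t\), this far-future contribution is negligible compared with the near-term one and longtermism fails; the claim of this section is that, in Shivani's decision context, there are interventions whose expected effects do not decay in this way.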
Okay. So to make the case (20:00) for that we just have to roll up our sleeves and think about: well, what are the plausible things that Shivani could do? If she's trying to influence the course of the very long-run future in expectation, what kind of things might you try to do, and then how plausible do those seem to be? Here, I want to break things down into a couple of categories. Firstly, Shivani could try to mitigate risks of premature human extinction, or secondly, she could try to do something that increases the general average well-being level in the future conditional on premature extinction not happening. And then within that second bucket (improving the value of the future conditional on survival) I want to further break things down into two possibilities: she could try to effect this improvement by speeding up progress, or she could instead try to improve the value of the future by changing the path along which the progress happens. Okay. So next I want to say a bit more about each of those three things in turn.
Okay. So firstly, extinction risk mitigation. Here, and importantly, it makes some difference which perspective we take in population axiology. A totalist population axiology familiarly says that how good a given history of the world is is a matter of how much well-being there is in total, summed up across all the people and all the times. Now if you take a totalist view, then you're going to tend to think that premature human extinction is bad, and not just bad, not just very bad, but really astronomically bad, because when you look at plausible estimates of how many lives would then be lost if humanity went extinct in say the next hundred years, compared to how long we'd otherwise expect to survive for, that number is really enormous. So here's Nick Bostrom giving voice to that line of thought:
“Even if we use the most conservative estimate of how many descendants present humans could have if we don't go prematurely extinct, we (22:00) find that the expected loss of an existential catastrophe…
So think here of a premature extinction event, extinction in the next century or so.
… is greater than the value of 10 to the 16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives.”
And then he continues.
“One might consequently argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any ordinary good such as the direct benefit of saving one billion lives.”
Okay. So that's the forceful argument for the claim that, at least if you accept totalism, then you're likely to think that more or less anything you can do to reduce extinction risk, even by a tiny amount, is likely to count as more cost-effective than, say, anything you could do to alleviate poverty in the near term.
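Spelling out the arithmetic behind the figures quoted above:

\[
10^{16}\ \text{lives} \;\times\; \underbrace{10^{-6}\times 10^{-2}}_{\text{one millionth of one percentage point}} \;=\; 10^{8}\ \text{lives} \;=\; 100 \times \big(10^{6}\ \text{lives}\big),
\]

that is, a hundred times the value of a million human lives, as Bostrom says.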
So I want to come back to this in Section Three, where we talk about the sensitivity analysis. The take-home point for now is: yes, if you accept totalism then this conclusion seems pretty robustly to follow. When we do the sensitivity analysis, I'll talk a little bit about population axiology and suggest that this conclusion is more or less only going to follow by the lights of totalism. So this particular argument for longtermism, based on this example, doesn't look like it's very robust to perhaps plausible variations in axiology. Therefore, it's interesting to also consider the second bucket. What about things we could do to improve the value of the future conditional on no premature extinction, so conditional on humanity surviving? And then the first sub-bucket, speeding up progress. It seems pretty plausible that humanity is generally on some kind of upward welfare trajectory. That seems right. Conditions of human life have been generally improving over time. We're much better off now than our forebears were a few centuries ago. And suppose this trajectory is set to continue, maybe because (24:00) of moral progress, economic progress or something else (the [inaudible 24:03] doesn't matter). If that's the case, then anything we can do now to speed up that progress curve, so kind of shift it to the left… Here the horizontal axis is time, the vertical axis is something like average well-being level. If we can do something to shift the curve to the left, that is to say, make it the case that the goings-on in say 2020 are what they otherwise would have been in 2025, and then things continue from that point exactly as they otherwise would have, then what that intervention will tend to do is make it the case that at any given future point in time people living at that time are a little bit better off than they otherwise would have been. Suppose this is 2075, for example. Without the speeding up we would have been on this curve, so people would have had that well-being level. With the speeding up, the whole curve has been shifted to the left, so instead the people of 2075 are up here. They're a bit better off. Now if that's true at every future time, then you can see how this kind of intervention might perform very well for longtermist reasons across the course of the very far future, because given how big the future is, again, you've got an awful lot of times across which to accrue those little benefits.
Okay. The problem with arguing for longtermism in this way is that whether or not the longtermist conclusion will follow from interventions like this depends fairly sensitively on what the shape of the “progress curve” is. So in that top diagram here I had it that the progress was basically linear. It was just on an upward trajectory with a constant gradient that carried on forever. If that were the case then this very high long-term value would materialize. But I think in many cases the shape of the “progress curve” we're actually talking about is something more like an S-curve that eventually plateaus off. And then the picture looks very different, because if you shift an S-curve to the left, then the benefit it generates, imagine a shaded area between these two curves, is something that's much more tightly bounded, and in particular you don't get any significant benefits materializing after the time when you would have plateaued off anyway. (26:00) So because I think that realistic forms of human progress, at least, often tend to have more like this shape, I myself am relatively skeptical about the prospects for speeding up progress generating a very high value longtermist intervention.
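A hedged sketch of why the shape of the curve matters (again my notation, not the talk's): if average well-being follows a curve \(v(s)\) and the intervention shifts it to the left by a small amount \(\Delta\), the total benefit is the area between the shifted and unshifted curves,

\[
\text{Gain} \;=\; \int_{0}^{T} \big[v(s+\Delta) - v(s)\big]\, ds \;\approx\; \Delta \int_{0}^{T} v'(s)\, ds \;=\; \Delta\,\big[v(T) - v(0)\big].
\]

For a linear curve with constant slope \(b\), this is \(\Delta b T\), which grows with the size of the future \(T\); for an S-curve that plateaus at some level \(v^{*}\), it is bounded by \(\Delta\,(v^{*} - v(0))\) no matter how long the future is.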
However, there's a different sub-bucket we could try instead that I think is more promising and deserves a lot more thought. But first I want to sketch an abstract structure with the property that, if this abstract structure is realized in the real world, then the places where you find it realized are very promising opportunities for Shivani to have extremely high impact for longtermist reasons. So first the abstract structure, and then we'll go on to the question of whether there really are any cases in the world where this structure is substantiated. To articulate the abstract structure I want to borrow the concept of an attractor state from chaos theory. So picture the state space of all possible ways the world could be at a given instant in time. The world then evolves through time by wandering along a particular path through this very high-dimensional state space. An attractor state is a chunk of that space with the property that if the world once wanders into this attractor state, it tends to persist there for a very long time. So it tends to go round and round within the attractor state rather than wandering out again. So suppose now that there are two or more such attractor states the world could end up in. Suppose that we're not yet in either of them. Suppose further that one of the attractor states is significantly better than the other one. One of them is much more conducive to well-being than the other one. And suppose finally there's something we can now do to influence the probability that the world ends up in the better versus the worse attractor state. If there's something we can do like that, then that action would tend to have very high value for longtermist reasons, and its value would tend not to “wash out” with time. The lack of washing out is (28:00) basically built into the definition of attractor state. It's built into the fact that these states are sticky. They tend to persist once you get into them.
Okay. So there's the structure. The million dollar question or the five billion dollar question is: can we find places in the world where this structure is substantiated? Here are a few suggestions… All quite tentative, but I think all worth thinking about.
Firstly, you might think that certain political arrangements have this feature. The US Constitution, for instance, has already persisted for several hundred years and continues to have extremely significant influence on the conduct of US politics, including in ways that are highly relevant for human well-being, to this day. You can imagine a similar thing might happen on a grander scale. It seems not completely implausible that within the next century or two there will be constructed a world government, or there will be drawn up some kind of world constitution. If that happens, then it's plausible to think that that political arrangement would be even more persistent than a given national constitution, because often the kind of things that lead to change at the national level essentially involve some kind of interference from outside: people might look at other countries and say, “Okay, I think that arrangement is better,” and therefore start agitating for change within their own country. If we're talking at the level of world government there is no such outside interference, so that source of instability has been removed. So if it's at all plausible that such a constitution or arrangement might come into being, then anything that we can do now, for example by funding research into how to do it or by saving up resources so that we have more influence when the relevant time comes, anything we can do now to increase the probability that the content of that constitution is more rather than less conducive to well-being, could have effects that persist for a very long time, that resist washing out.
Second example, (30:00) artificial agents. Researchers working on machine learning currently assign really quite significant probability to the proposition that within the next century or so, at least, there could be created an artificially intelligent system that is more intelligent than human beings. If so, then it's crucially important what values get built into this artificial system, because those values are then likely to have significant influence over the conduct of world affairs, and again, over the conduct of world affairs in ways that are highly relevant to considerations of human well-being, potentially indefinitely. Here it's particularly relevant that artificial agents would not be mortal. They wouldn't be made of flesh and blood like us. They can just be copied from one hardware system to another, so they're not even threatened by the physical decay of the hardware system that they're originally running on. So these kinds of agents could persist potentially indefinitely, and correspondingly so could their influence. So one thing Shivani might do, in other words, is fund the project of artificial intelligence safety, trying to increase the probability that the values built into this artificial general superintelligence are ones that are more conducive to well-being going forward.
Thirdly, you might think climate change is an example, insofar as what's bad about more rather than less extreme climate change is that the more extreme climate is just permanently less amenable to well-being than the less extreme climate, as opposed to the alternative view on which what's bad is that the process of adapting to the change is a painful period we have to go through: well-being is low during that period, but then afterwards things are okay again. If you think it's the first thing rather than the second, then things we can do now to mitigate climate change could also count as extremely high value for longtermist reasons. Other places we might look for this kind of attractor state structure concern the values that are built into religions, or just the values substantiated in other influential value systems. I won't say so much about those. (32:00)
Okay. So those are a few examples that I think are all worthy of consideration, and one thing that is clear from contemplating that list, one thing one is painfully aware of when giving a talk like this, is that the items on that list, at least at the moment, are extremely speculative. They all seem to be worthy of further consideration, but none of them has received really serious in-depth research to dot all the i's and cross all the t's to date. This, though, suggests another thing that Shivani might do with her five billion dollars. She might take the view that, well, it's quite plausible there is some kind of attractor state structure out there in the world that offers Shivani an opportunity to leverage the size of the future and generate extraordinarily high expected value, but it's just unclear at the moment which one it is. If she thinks that, then one very sensible strategy would be to spend, say, the first half of her philanthropic pot on funding research into the items on that list and into what else should be on that list, and then afterwards, in the light of the fruits of that research, devote the remainder of her philanthropic budget to funding whichever intervention then looks best. For familiar reasons concerning the value of information, even if there's a relatively small probability that the research comes up and says, “Yes, I found something, you should do this intervention” (if that probability is, say, 20%), then provided that, conditional on the research saying that, the expected value of the thing that's then recommended is more than five times greater than the expected value of the best short-termist thing Shivani could do (like, for instance, funding anti-malarial bed nets), it's going to be the case now, relative to the current probabilities, that this compound option of funding research and then acting accordingly has higher expected value than anything Shivani could do to benefit merely the short term.
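The value-of-information arithmetic here, as a rough sketch using the figures just mentioned: let \(B\) be the expected value of the best short-termist option (say, bed nets). If the research succeeds with probability \(p = 0.2\) and, conditional on success, recommends an intervention worth more than \(5B\) in expectation, then the compound strategy is worth at least

\[
p \times 5B \;=\; 0.2 \times 5B \;=\; B,
\]

so it at least matches the short-termist benchmark, and strictly exceeds it if the conditional multiple is strictly greater than five or if anything of value can still be done in the case where the research comes up empty.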
Okay. So (34:00) that wraps up my discussion of the empirical objection. Next, I want to assume for the sake of argument that axiological longtermism is true according to a broadly utilitarian axiology, but take on this question of sensitivity analysis. So for various plausible ways one might disagree with utilitarianism, how strong is the argument for longtermism by the lights of those alternative axiologies? I want to talk about four things: firstly, non-totalist population axiologies. Secondly, discounting the future. Thirdly, non-aggregationist approaches. And fourthly, the idea that one should ignore so-called “indirect” effects for moral purposes. By the way, a spoiler. What I'm going to argue in each of these cases is that it doesn't undermine the case for longtermism broadly speaking, because either longtermism still looks plausible by the lights of this axiology, or I just really think it's an implausible axiology and that that's relatively uncontroversial, or, finally, it turns out on closer inspection that the thing we're talking about is not really a matter of axiology at all. So it might be relevant when we start talking about deontic longtermism, in those cases, but it's irrelevant to the truth of axiological longtermism, which is what we're examining at the moment.
Okay. So firstly, population axiology. Here, that division between extinction risk mitigation on the one hand and improving the value of the future conditional on survival on the other hand is really crucial. We noted earlier that according to totalism, premature human extinction is really astronomically bad. However, that feature is somewhat specific to totalism, and there are many other approaches to population axiology which will maybe agree that extinction is bad, maybe they'll even agree it's very bad, but they're unlikely to think it's so astronomically bad that even a reduction in extinction risk of one millionth of one percentage point dominates anything else we could do with the same money. (36:00) Roughly, the reason totalism is controversial here is that it treats the addition of happiness to the world via the creation of new happy lives as just morally on a par with increasing the amount of happiness in the world by delivering benefits to people who are going to exist either way. And many people think that that's just wrong-headed. So you might think that the sense in which 10 to the 16 lives are lost if we go prematurely extinct is not morally to the point. It's not equivalent, according to this alternative line of thought, to the premature deaths of 10 to the 16 people who are going to be born either way. So these are the kind of intuitions that ground a so-called “person-affecting” approach to population axiology. The slogan that's often taken to sum up the “person-affecting” mindset is this one from Narveson:
“We want to make people happy rather than make happy people.”
And on this way of thinking, premature extinction isn't really so bad because, yes, there are all these people who don't get to exist, but since they don't get to exist this is kind of a victimless crime. There is not in fact a person who was harmed by premature human extinction, setting aside the pain involved in the process of going extinct. Okay. So because of that, whether or not these tiny reductions in extinction risk are extremely high value is controversial, and it looks like extinction risk mitigation may be a high priority intervention only conditional on totalism and not conditional on other reasonably plausible approaches to population axiology. In other words, I think this first type of longtermist intervention fails a sensitivity analysis. It's not very robust to plausible variations in axiology.
However, matters are really quite different when we look at the second bucket, namely attempts to improve the value of the future conditional on survival. Here we're not saliently talking about changing the value of the future by massively changing the number of people who get to exist. Instead we are, (38:00) roughly speaking, holding the number of future people constant and we're just increasing the general or average well-being level of those future people. And there I think that basically any remotely plausible population axiology is going to agree that this project is valuable. You're going to get agreement on that from various non-totalist impersonal theories like averagism or variable value theory or critical level theory. You're also going to get agreement on this from so-called ‘wide’ person-affecting theories. The class of approaches to population ethics that would disagree with this apparently obvious statement are so-called ‘narrow’ person-affecting theories. Those are theories which hold that as soon as you're talking about a different set of people existing in the future, even if it's a set of roughly the same size, then the two futures that you're comparing to one another are incomparable in terms of goodness. Neither is better than the other, and also they're not equally good. So that kind of approach is going to say, when you intervene to improve the value of the future conditional on survival, because you incidentally also changed the identities of future people, you did something that renders the future incomparable to how it would have been otherwise. Okay. So that's going to be a population ethical theory that disagrees with this obvious claim, but basically for that reason the ‘narrow’ person-affecting theories are widely regarded as very implausible. Most people who want to be person-affecting theorists in population axiology, for this kind of reason, take a wide rather than a narrow approach.
Okay. So to sum that up then, I think, once we're talking about improving the value of the future conditional on survival rather than extinction risk mitigation, the case for longtermism survives the sensitivity analysis with respect to population axiology.
Okay. So let's move on now to the second way of deviating from a standard utilitarian axiology. Instead of being an undiscounted utilitarian (which is what I've been thinking of so far), you could adopt a so-called (40:00) “discounted utilitarianism” where you deny that the distance in time of the effect from the point of action is morally irrelevant. A discounted utilitarian approach will weight future increments to well-being less than present or near-future increments to well-being, just because they're further in the future. So if you have this kind of approach, suppose that you discount future well-being at something like a rate of 1% per annum. In this diagram here, what you have is a graph of the factor by which one should discount future well-being at a given point in the future. So you can see that if a unit of well-being at time zero counts for one, then in the same units, a unit of well-being a hundred years from now counts for only about 0.3, and a unit of well-being 300 years from now counts for less than one tenth of the corresponding amount of well-being today. So clearly this is going to be a view that massively dampens the importance of the future, and for that reason I think it's just really quite clear that longtermism is false according to discounted utilitarianism, even if we have a model where, say, the population is going to stay constant forever. Literally forever, people are going to carry on existing forever in this model, and also the average well-being level in the future is constant forever. This discounted utilitarianism with a discount rate of 1% is going to say that the amount of value in the far future, from a hundred years out to infinity, is only about one third of the amount of value in the first hundred years, and again you can kind of see this in the diagram. The area under the curve to the left of the 100 years is several times greater than the area under the curve to the right of the 100 years, even if you imagine that's going on literally forever.
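A rough sketch of the arithmetic here (the exact numbers depend on the discount rate and on whether discounting is discrete or continuous; the notation is mine): with constant population and well-being and an annual discount factor \(d\), the discounted values of the near and far future are geometric series,

\[
V_{\text{near}} \;=\; \sum_{s=0}^{99} d^{\,s} \;=\; \frac{1 - d^{100}}{1 - d},
\qquad
V_{\text{far}} \;=\; \sum_{s=100}^{\infty} d^{\,s} \;=\; \frac{d^{100}}{1 - d},
\qquad
\frac{V_{\text{far}}}{V_{\text{near}}} \;=\; \frac{d^{100}}{1 - d^{100}},
\]

which is well below one whenever the hundred-year discount factor \(d^{100}\) is well below one half, as in the diagram described.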
Okay. So all I have to say about that is that I agree with just about every… in fact, I think every moral philosopher who has written on the topic of discounting, that (42:00) it is not appropriate to discount future well-being in this way. So this is a case where I think, yes, if this were a plausible axiology, it would be one that undermines longtermism. It's just not a plausible axiology. It is one of those rare cases where all moral philosophers seem to agree.
Thirdly, non-aggregationism. Here is one thing that a lot of people really hate about utilitarianism. It tends to be willing to add up arbitrarily small well-being improvements across arbitrarily many people to get that total well-being improvement to compare favorably with a much more significant benefit one could deliver to a given person. So in less abstract terms, here's an example. Suppose you can either save one person from death, or instead you can deliver a lollipop lick to an extremely large number of other people. Assuming that lollipop licks count as benefits at all, utilitarianism will tend to say, “Well, if there are enough people standing to benefit from the lollipop licks, then the total well-being benefit you generate via lollipop licks is greater than the total well-being benefit you generate by saving the one person from death, and so you should deliver the lollipop licks.” That, to many people's intuitions, is an unacceptable conclusion, and they reject utilitarianism for that reason. So this line of thought leads you to a non-aggregationist view where you roughly say something like, “Instead of adding up all the effects across all the affected parties, maybe sometimes, if the amount that's at stake for some people is really tiny compared to the amount that's at stake for other people, we should just completely ignore the small benefits and allow our decision to be driven entirely by where the larger benefits lie.” So in this example we should just clearly save the one person from death. We shouldn't count the lollipop licks for anything at all.
Okay. So suppose we take a non-aggregationist view, what then will we think about longtermism? Here I think it makes an important difference whether we go for a so-called ex post or an ex ante version of (44:00) the non-aggregationist theory. Here's an example that brings the two apart. Okay. So the ex post approach is the one that says: what's relevant is the possible benefits that might materialize to particular people. In contrast, an ex ante non-aggregationist view says: what's relevant is the comparison between the sizes of the expected welfare benefits that you can deliver to different people before the uncertainty gets resolved. Okay. So here's the case that pulls those two versions of non-aggregationism apart. Suppose you can either save Alice's life for sure on the one hand, or you can instead conduct a million-ticket lottery and then save the life of whoever wins the lottery on the other hand. Ex post non-aggregationism is going to say: well, it's a case of saving one life either way, so there's nothing to choose between these two alternatives. In contrast to that, ex ante non-aggregationism is going to say that in the first case you save one person's life for sure, so the ex ante benefit you deliver to her, saving her life, is kind of the biggest one that you could deliver to a person. In contrast, for each of the million lottery ticket holders, the benefit that they get ex ante, in expected welfare terms, is just one millionth of a life-saving. And so plausibly this is sufficiently small that, according to non-aggregationism, you should just ignore those. You should save Alice rather than conduct the lottery. Okay. So the version of non-aggregationism that I think has some chance of undermining longtermism is the ex ante version. So put your ex ante non-aggregationist hat on and consider what you now think about longtermism. Then it's going to be highly relevant that when we think back through that list of the kinds of things Shivani could do to try and influence the course of the very long-run future, they all tend to have the feature of being high-risk, high-reward interventions. They tend to be things that could pay off a lot, but that will pay off only with quite small probability, and furthermore, the benefits they generate tend to be dispersed across a large number of (46:00) people. So it's fairly plausible then that if you look at the expected welfare difference any of those interventions makes to any given future person, however you identify those future people, it's going to be quite small, and it might compare unfavorably with the benefit that you can deliver to a person who's already alive and in front of you today, say by giving them your million dollars or whatever it is you have to dispense.
Okay. So I think there's some prospect for a non-aggregationist approach to push against longtermism. However, the crucial thing to notice, at this point in the discussion, is that a non-aggregationist approach is more plausible as a deontic thesis about what one ought to choose than it is as an axiological thesis about which outcome is better. That's generally agreed by writers who are sympathetic to non-aggregationism, and roughly the reason is that the non-aggregationist approach tends to generate cycles: it says choose A rather than B, B rather than C, and C rather than A. Most people think that axiology can't contain cycles, but it's relatively acceptable, relatively plausible (perhaps we even have independent reasons for thinking this) that deontic considerations about what one ought to choose exhibit cycles sometimes. Okay. But of course, if non-aggregationism is not a thesis about axiology in the first place, then it's irrelevant to the truth of axiological longtermism. It's rather going to be something we'll want to consider later in the discussion, when we talk about whether to move from axiological to deontic longtermism.
Finally, the possibility of ignoring ‘indirect’ effects. Here's a claim that's quite common in medical ethics. Suppose you're a doctor in an emergency room. Two patients come to you. They both have medically identical situations, so the cost-effectiveness of treating either one in purely medical terms is exactly the same. However, patient A is more useful to the rest of society than (48:00) is patient B. So you could deliver a greater benefit to society at large by treating A, and therefore restoring A to full functionality earlier, than you could by treating B. Perhaps A is a social worker and B is unemployed, or something like that. Most people think that in that medical context it would be morally inappropriate for the doctor to try to take any account of the alleged difference in the social utility of the patients. The common view holds that the doctor should only be considering the direct medical benefits that they're able to deliver to these two patients.
Okay. So suppose we think: yes, that's right in the medical context, and furthermore the more general moral theory that explains it is the thesis that it's morally inappropriate to consider indirect effects in general. We should just be considering the direct effects of our actions. If we run with that idea and then apply it to Shivani's decision context, it looks like it might push against longtermism. For example, consider that option Shivani had to fund research. Well, what are the direct versus the indirect effects here? I mean, as always it's hard to say, but maybe it's something like this: the direct benefits of Shivani's intervention are just the fact that she provides jobs to a bunch of researchers, and all that stuff about how the research might prove useful further down the line and might influence the way future dollars are spent is going to be an indirect effect of her actions. If you're thinking in that way and you're ignoring indirect effects, then of course you're going to be ignoring all the features of Shivani's funding research that were supposed to generate the case for doing it. You're going to have arrived at a point of view that doesn't evaluate funding research very favorably at all. So it looks like this might undermine longtermism.
I have two replies to this, complementary to one another. The first one is that this move from the medical context to Shivani's looks highly suspect. It seems plausible that insofar as the claim that one should ignore indirect effects is true in the medical context, it is true for reasons that are quite (50:00) specific to that decision context. Perhaps it's something about the doctor-patient relationship. Perhaps it's something about the principles that society would express by having a publicly funded health service behave in this way, so, you know, views about the moral worth of individuals or something like that. It doesn't seem to be anything that's going to generalize, to ground a quite general moral theory that always and everywhere one should ignore indirect effects. Secondly, like the case of non-aggregationism, this doctrine too, on reflection, seems much more plausible as a claim about deontology, about what is morally appropriate to do in a given situation, than it is about axiology, and indeed on reflection that was clear when we chose what seemed to be the more natural way of phrasing the claim in the first place. It didn't seem immediately natural to say that the outcome would not be better if the doctor prioritized the more useful patient; what it struck us as appropriate to say in that medical context was that it would be morally inappropriate for the doctor to do that. But then again, if this is a claim only about deontology and not about axiology, then even if it does generalize beyond the medical context, it would be irrelevant to the truth of axiological longtermism.
Okay. So we've done the bit about empirical objections to axiological longtermism. We've done the bit about axiological objections to axiological longtermism. In wrapping up I just want to do these two last things briefly. First, consider whether we might go from axiological longtermism to a corresponding deontic claim, and then finally talk about whether we might generalize beyond Shivani's decision context. Okay. So to consider deontic longtermism, let's remove the feature of Shivani that said her aim is just to improve the world to the greatest extent she possibly can. Consider someone who doesn't explicitly have that aim, perhaps Shivani's close-minded sister Deepti. (52:00) Is it true that, even without having that aim, Deepti ought to choose some option with the feature that most of its expected value relative to business as usual comes from effects more than a hundred or more than a thousand years in the future? If so, that would be a deontic longtermist claim.
Okay. So I just want to very briefly sketch one argument by which one might try to argue from axiological longtermism to this corresponding deontic claim. The first premise is again this claim that arose briefly in the context of discussing whether Shivani should fund research. It seems at least plausible that if the axiologically best options are longtermist ones, then those options are not going to be just a little bit better than the best things Shivani could do to influence the short run. They're instead going to be enormously better, perhaps an order of magnitude better, maybe even more. So that's the idea that there's a lot at stake in this decision. It's not just a trifling improvement to go longtermist over short-termist. And then secondly, an evaluative claim, or a claim about moral theory. It seems like the more plausible versions of non-consequentialism are sensitive to the axiological stakes. That is to say, they consider [inaudible 53:15] prerogatives and side constraints highly important, provided the axiological stakes aren't too high. But plausible non-consequentialist theories include things like a disaster clause that says: look, if there's really an enormous amount at stake, throw this deontic stuff out of the window and just do the consequentialist thing. More generally, it seems that as the stakes get higher, perhaps as you move from a private to a large-scale, maybe governmental public decision-making context, the appropriate moral theory becomes more consequentialist in character. If that's all correct, then putting these two things together, it seems like if the axiologically best options are longtermist ones, so if axiological longtermism is true at all, then a plausible version of non-consequentialism is likely to hold that Deepti ought to (54:00) choose some such option, because the stakes are so high that the non-consequentialist stuff is going to fade into the background relative to the axiological considerations. Okay. So there's a lot to say about this argument. For now, I just want to throw it out there.
And then finally, what can we say about other decision contexts besides that of the open-minded philanthropist? It seems fairly plausible that insofar as axiological and/or deontic longtermism is true of the philanthropic decision context, and in particular the philanthropic decision context where you're looking across lots of different cause areas, it might also be true of other decision contexts.
So firstly, I think this might happen for what I'll call fixed-cause-area philanthropy and also public policy. So these are cases where, unlike Shivani, we're not thinking, well, is it football or is it the opera or is it climate change or is it artificial intelligence or is it bed nets? Rather, we've got some constraint that tells us which cause area we have to spend our money in, in some much more narrow way, but now we're thinking about what we should do within that cause area. Here's an example that suggests longtermism might be relevant even in that case. Suppose the cause area we're thinking about is deworming. So we've got some funds that are constrained to be spent on deworming programs, but we've got a choice between deworming in two countries. Country A offers a deworming opportunity that's more cost-effective in the short term than does country B, perhaps because it's more densely populated, so you can reach more people per dollar if you intervene in country A. On the other hand, suppose further that the projected rates of economic growth are different in the two countries, in such a way that the way the benefits compound over time is much more favorable in country B than in country A. If we agree that we shouldn't be discounting future benefits, then it seems like taking that second thing into account could easily reverse the decision that we would reach based only on (56:00) short-term considerations, and again, as I've said, that would be quite revisionary. At the moment these decisions are made largely on just the short-term considerations of what will be most cost-effective within the next five years or the next few decades. Similarly, if your question is just, “What and how much should we do about climate change,” then it seems likely that the longtermist claim will shift that discussion towards stronger rather than weaker mitigation, even without taking into account the opportunity cost, the fact that you could have spent that money on something else besides climate change instead.
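To illustrate the deworming comparison with purely hypothetical numbers that are not from the talk: suppose a dollar spent in country A produces an immediate benefit of 2 units and a dollar spent in country B produces 1 unit, but the benefits grow in line with the two economies at roughly 1% and 2% per year respectively. Over a 100-year horizon, with no discounting, the per-dollar benefit has grown by year 100 to roughly

\[
2\,e^{0.01 \times 100} \;=\; 2e \;\approx\; 5.4
\qquad\text{versus}\qquad
1\cdot e^{0.02 \times 100} \;=\; e^{2} \;\approx\; 7.4,
\]

so the option that looked only half as cost-effective on short-term considerations comes out ahead once the long-run compounding is taken into account.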
And then for a different way of deviating from Shivani's decision context: Shivani was talking about spending money, but we might instead have been talking about spending time. It doesn't look as though that would have materially changed any of the discussion. So instead of thinking about how to dispose of a few billion dollars, we could have been thinking about how to dispose of the 80,000 hours that one has over the course of one's career. The same considerations that might drive Shivani to fund, say, artificial intelligence safety research might equally drive a young graduate to go into the field of artificial intelligence safety research rather than, perhaps, the field of bed net distribution.
So to sum up, I've tried to argue that axiological longtermism is prima facie plausible. I think the main open question is this empirical one: is it the case, in fact, that the project of affecting the expected course of the very long-run future is sufficiently tractable for axiological longtermism to be true, given that it's a claim about expected values? I've sketched some reasons for thinking that it is sufficiently tractable. I listed a few possible interventions that might do the trick, and then I pointed out that even if we're unconvinced by any of those as first-order suggestions, we should take very seriously the possibility that the best thing Shivani could do is fund some more research into these crucial questions first and then spend the remainder of the money, or leave it to future philanthropists to spend their money, on whatever then seems best in the light of that research. (58:00) Then we considered the question of sensitivity analysis, and there I argued that most of the case for longtermism does not presuppose either totalism or utilitarianism, and I'm not aware of any plausible axiologies that undermine the case. The partial exception to that was the fact that if you're not a totalist about population axiology then you might downgrade the value specifically of extinction risk mitigation. But that's not enough to undermine axiological longtermism, insofar as there are also plausible opportunities for beneficence in the far future in the alternative bucket of improving the value of the future conditional on survival, rather than increasing the chances of survival. Finally, I tentatively suggested that if the case for axiological longtermism succeeds, then some form of deontic longtermism is likely to follow by the lights of any plausible non-consequentialist theory.
Thanks.