Jeffrey Sanford Russell | Problems for Impartiality
Parfit Memorial Lecture 2022
16 June 2022
The handout for this lecture can be found here.
HILARY GREAVES: (00:00) A warm welcome to the third, not quite annual, Parfit Memorial Lecture. The series was started in 2019, but with a short break in 2020 for reasons everybody is probably familiar with. So let me start off by saying a little bit about GPI and the rationale behind this lecture series before I introduce today's speaker.
(00:26) So, GPI exists to conduct, and to facilitate others conducting, academic research that speaks especially directly to questions that are crucial for impartial agents trying to do the most good with fixed resources. So if you have this pot of money, say, and you want to do the most good, what should you do with it? That's of course, in principle, an enormous question, but it's one that we believe we can make a lot of progress on collectively if we appropriately use the tools of various academic disciplines. And here at GPI, we focus in particular on economics and on philosophy. The aim of the Parfit Memorial Lecture Series is to exhibit and to facilitate research of this character. It's named, of course, after Derek Parfit, a towering figure in moral philosophy whose name will probably be familiar to everyone here. Parfit was, in particular, a leading light in the enterprise of simultaneously doing absolutely top-quality academic research and taking on these big questions that really matter for important practical decisions. That's the theme that inspires the lecture series that's brought us together today.
(01:29) But before I introduce today's speaker, just a quick note on logistics. Today's session will consist of a one-hour talk followed by a one-hour Q&A session. The lecture, but not the Q&A, will be recorded for posting online. There will be a follow-up discussion seminar tomorrow, that's at 10AM in the seminar room at Trajan House and we particularly encourage graduate students to come along to that seminar for a follow-up informal discussion with today's lecturer.
(01:57) So onto the main event. This year, we're delighted to welcome Professor Jeff Russell to deliver this third Parfit Memorial Lecture. I was privileged to meet Jeff initially in, I don't know, maybe something like 2004, when we were both graduate students at Rutgers University in New Jersey, and now here's the bit where we embarrass the speaker. It was already abundantly clear then that Jeff was going to be a philosophical force of nature, and this has turned out indeed to be the case. When I start talking about the range of topics that he's published on, I'm going to have to read from my script because it's too long to remember. It says that he has published in logic, metaphysics, epistemology, decision theory, philosophy of religion and philosophy of science, and most recently, and particularly excitingly for GPI, he's been turning his attention to formal ethics. Jeff is now an associate professor at the University of Southern California in Los Angeles, where among other things, he heads up a grant on the Big Decisions project. Please join me in welcoming Jeff.
JEFFREY SANFORD RUSSELL: (03:04) Thank you so much for that very gracious introduction. I'm really honored to be giving this lecture today with all of you. I'm really grateful to the Global Priorities Institute for inviting me out to do that. They're just a fantastic group of people who I respect intellectually and personally very much and I'm very honored to be giving this lecture in the memory of Derek Parfit. I didn't know Parfit personally, but I do know his voice through his writing, and it comes through so clearly and thoughtfully and compassionately, and it's inspired so many people to do very good work on very important topics. So I'm humbled to be contributing a little bit to his legacy today.
(03:50) One of the areas of philosophy that Parfit deeply shaped is our moral relationship to people in the distant future. (So, I meant to put this slide up during that whole thing.) Here's the outline; it's also in your handout, and I apologize that the text on the handout is so small.
(04:09) So one of the topics that Parfit shaped is our relationship to people in the distant future. Here's a famous quotation from his book. He says,
"Why should costs and benefits receive less weight, simply because they are further in the future? When the future comes, these benefits and costs will be no less real. Imagine finding out that you, having just reached your 21st birthday, must soon die of cancer because one evening Cleopatra wanted an extra helping of dessert. How could this be justified?"
So Parfit is here quite forcefully arguing for a principle of Impartiality. The principle says that for any two equally well-off people, it will be just as good overall for one of them to be a certain amount better off as for the other, and this holds wherever or whenever these people may live. And this principle of impartiality, as Hilary mentioned even in her introduction, has been very influential in Effective Altruism in particular.
(05:08) One of the ways it's been influential, we'll do this in two steps, is via this consequence. The consequence says that for any harm to each of 𝑛 people, the same harm to a different 𝑛 equally well-off people would be just as bad overall, even if those people live much further in the future. So the idea is that the numbers count for what's better overall, and they count in a way where distance in space and time doesn't matter. So, this idea has been an important part of arguments for what's called longtermism.
(05:43) Longtermism is the idea that we ought to, if we're trying to do as much good as possible, prioritize actions that affect the very, very long-term future. Hilary Greaves and Will MacAskill put it like this; they summarize the idea:
“The idea then is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 or even 1000 years, focusing primarily on the further future effects. Short-run effects act as little more than tiebreakers.”
So I'm not going to go through all the details, but you can kind of see how a commitment to Numbers would get you to this conclusion. The first thought is that morally significant effects on people in the distant future should be weighed just as heavily as effects on people now or in the next 100 years. And second, you think that, well, in the distant future there could potentially be many, many more people. The future is very long. So from there, you get to the idea that since each of them individually counts for just as much, all of them collectively count for much more. So that's one of the reasons why impartiality has been a particularly important principle.
(06:59) I'm also going to talk about an opposing view. I have two young daughters at home and so a lot of my philosophical content comes from the Disney movie “Frozen”. Elsa sings this view. She says,
"It's funny how some distance makes everything seem small."
And this is a natural idea, and it's an idea that's built into the way economists standardly do cost-benefit analysis. The standard way of doing it is by using what's called a positive rate of pure time preference, or discount rate. And the basic idea is that harms or benefits to people in the future get counted for less. They get discounted according to how far in the future they are. So one way you might do this is with a 1% annual discount rate. A harm to 99 people today would count for the same as the same harm to 100 people next year. And if you keep on going, 1%, 1%, 1%, year by year, a harm to 99 people today would be worse than the same harm to 2 million people in 1,000 years, and that in turn would be worse than the same harm to 40 billion people in 2,000 years. It seems rather extreme.
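Just to make that arithmetic concrete, here's a minimal sketch, assuming a constant 1% annual discount rate compounded year by year (my own illustration, not anything from the handout), of how fast discounted moral weight shrinks:

```python
# Minimal sketch: how much a harm to n people counts, in "people today" terms,
# after a given number of years under a constant 1% annual discount rate.
def discounted_weight(n_people, years, rate=0.01):
    """Present-people equivalents that a harm to n_people, years out, counts for."""
    return n_people * (1 - rate) ** years

print(discounted_weight(100, 1))                 # ~99: 100 people next year ~ 99 people today
print(discounted_weight(2_000_000, 1_000))       # ~86: 2 million people in 1,000 years
print(discounted_weight(40_000_000_000, 2_000))  # ~75: 40 billion people in 2,000 years
```

On those numbers, 99 people today outweigh 2 million people in 1,000 years, who in turn outweigh 40 billion people in 2,000 years, which is why the view looks so extreme.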
(08:06) There are some sensible things that you could mean by using a discount rate like this, some sensible things it could represent. And here I'm drawing on Parfit's influential discussion of this issue in “Reasons and Persons”. It might make perfect sense to discount the value of money or the value of commodities, like food or timber and what have you, and the reason it might make sense to discount those things is because we might expect that people in the future will be richer than us, because the economy is growing year by year, and it's better to give an extra sandwich to somebody who doesn't have one than to give it to somebody who already has several very nice sandwiches. So it's better to give things to us poor people than to give similar things to rich people in the future, relatively speaking. But what we're talking about here isn't discounting the value of commodities like sandwiches; we're thinking about discounting the value of a benefit or harm itself. The reason you might discount the value of a sandwich is because the sandwich is a smaller benefit to somebody who is well off than it is to somebody who is worse off. And in any case, we're concerned not just with futures in which people are richer than us, but also futures in which they're not. One of the things that we should care about is making plans that have to do with futures in which there has, for example, been catastrophic climate change that's collapsed world agriculture and made people much poorer than we are. So discounting for this reason wouldn't make any sense in that context.
(09:36) Another thing that the discount rate can sensibly be used to represent is uncertainty. Typically, we know a lot about what our actions are going to do now and less about what they'll do next year and less about their effects the year after that and so on. And so, you might accordingly think that it makes sense to discount those harms and benefits according to our being less sure that they'll take place. That can make some sense, but it's really important not to double count your discounting for uncertainty. So there's a standard way of making decisions, which involves splitting things up into two parts. First, you assign utilities to outcomes which represent, roughly speaking, how good or bad those outcomes are and then you weigh those utilities according to how probable it is that they will in fact result from your actions. Now, the right place to build in our greater uncertainty is going to be in those probabilities. If we also build it into the utilities that we assigned to these outcomes, then we're going to end up double counting. We'll count our uncertainty twice over. So we don't want to do that.
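To illustrate that double-counting worry, here's a small sketch; the probabilities and utilities are made up purely for illustration. The uncertainty about whether a future benefit materializes should show up exactly once, in the probability, not again as a discount applied to the utility.

```python
# Sketch of the standard two-step evaluation: assign utilities to outcomes,
# then weight them by the probability that they actually result from the action.
# All numbers are purely illustrative.
p_benefit_occurs = 0.8    # our uncertainty about whether the future benefit happens
utility_if_occurs = 10.0  # how good the outcome is, if it does happen
utility_if_not = 0.0

# Uncertainty counted once, in the probability:
expected_value = p_benefit_occurs * utility_if_occurs + (1 - p_benefit_occurs) * utility_if_not
print(expected_value)  # 8.0

# Double counting: also shrinking the utility "for uncertainty" counts it twice over:
double_counted = p_benefit_occurs * (p_benefit_occurs * utility_if_occurs) + (1 - p_benefit_occurs) * utility_if_not
print(double_counted)  # 6.4
```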
(10:35) And there are a few other interpretations like this that we can set aside. But what I'm going to talk about is the view that says, “No. I really mean what it sounds like I mean. Benefiting people in the far future is just worth less than benefiting people now.” That view is not very popular, and that's kind of an understatement. Ramsey famously called it ethically indefensible. Parfit called it outrageous, John Broome calls it reprehensible, and while it may be outrageous and it may be reprehensible, I'm going to argue that it's not ethically indefensible. I'm going to be talking about some ethical defenses of this ethically indefensible position. The position really is what I say: that harms and benefits to people in the arbitrarily far future are generally worth much less than similar harms or benefits to those alive today. You might wonder why in the world I would do such a thing. I sometimes wonder this. It's not because I'm very confident that impartiality is really false, but I do think that the question of whether it's true is very important. And so, it's important to really put all our mettle into stress testing this. One reason it's important is because it bears on the question of whether longtermism is right about how we ought to direct our resources. And because it's such an important question, I think that we want to hit as hard as we can on all of the various premises that are used in arguments to support that conclusion, to see what stands up. But also, I'm not just playing devil's advocate; I actually think impartiality might be false. And I think that the case against it is generally underrated and it deserves to be taken very seriously. If you take impartiality seriously and draw it to its logical conclusions, then you get very weird and paradoxical consequences, specifically for reasoning about moral value and risk. So maybe it's true even so, but maybe not. And in any case, I think that those weird and paradoxical consequences are things that we really have to contend with.
(12:46) A couple of brief preparatory remarks (oops, where's my… there it is). I'm going to be talking specifically about infinite futures, and you might also wonder why in the world I would do that. You might not be sure how seriously to take questions that come up only when we consider the idea that our actions might have consequences not just for a very long time, but for an infinite amount of time. And, honestly, I'm not sure how seriously to take this either. Here's my general attitude towards infinite ethics. It might turn out that it's really just this pointless, irrelevant distraction from the important questions, or it might turn out that it's literally the most important thing in the entire world. And I don't know which one it is, and I think that's enough reason for us to check.
(13:42) Some other reasons for taking infinite futures seriously. One is that our future might really be infinite. I mean, maybe not, but there are various ways that it might be. There are certain religious and supernatural hypotheses, according to which we have an infinite afterlife ahead of us. There are science fiction hypotheses, according to which maybe someday we'll be able to create black hole tunnels to pocket universes and people there will be able to create more pocket universes in a way that branches off forever. You may not take these hypotheses very seriously, but I think it would be premature to totally rule them out. And as long as they have any positive probability of being true, as long as any of them has any positive probability of being true, these issues are going to arise.
(14:28) Another reason is because even if infinity isn't realistic, it still can be helpful as a testing ground for moral principles to consider unrealistic cases. I've never really encountered a drowning child in a small pond but thinking about how things would feel if I did can be a helpful way of learning about ethics.
(14:47) And finally, in general we are realistically concerned about effects in the very, very distant future. We may well affect things for a very long time, even if not for an infinite amount of time. And one of the general tools in our toolkit for thinking about the very, very large, or about very, very long amounts of time, is to think about what happens in the infinite limit. I think this is often a useful thing to do. And furthermore, some of the problems that I'm going to talk about arise not just if the future is infinite, but also if it is very long but finite and we don't know exactly how long. So, there are extra wrinkles there.
(15:28) There are some well-known technical problems for thinking about impartiality in the context of infinite populations and infinite futures. I'm not going to talk about those today, though I'm happy to talk about them in Q&A if people are interested. But what I'm going to be arguing is that in addition to those technical problems, there are some very serious ethical problems for impartiality. And I'm going to say upfront here that this is a really cool area to be working in, in decision theory and infinite ethics. There are a lot of very exciting things happening and there have been a lot of cool discoveries in the last few years. I'm going to be drawing on a bunch of those. Some of them are from my own work, some of them are from the work of some of my great grad students at USC, some of them are from work done by people here at the Global Priorities Institute. I will not always remember to attribute these things as I go on, so I just want to say that.
(16:24) Let's go on to the problems. Problem (1)
“Oh no! There's been a terrible disaster, producing some extremely hazardous hypermatter that gravely endangers the two planets of Alderaan and Bespin. It's stable for the moment, but eventually, it's going to spontaneously decay and cause widespread suffering. This could take an indefinitely long amount of time. Unfortunately, the longer the hypermatter takes to decay, the longer its bad effects are going to last, so it will cause more and more generations of people to suffer.”
So we've got a picture like this. This is also on your handouts for easy reference. I forgot to label it. The vertical axis is time, or the number of generations we're going into the future. So there's some probability, which I've split into two chunks of a sixth, even though they're the same in this case, for reasons that will become clear. So there's a chance of ²⁄₆ that one generation on both planets is going to suffer, a chance of ²⁄₉ that the next two generations on those planets will suffer, a chance of ⁴⁄₂₇ that the next four generations will suffer, and so on. Don't worry about these numbers. They're magic numbers. They have some important properties, but you don't need to care about them right now. They're getting smaller.
(17:44) Now, you've got another option. You don't just have to leave it alone. You could bury the hypermatter in a special underground structure, which will have two effects. Effect (1) is that one of the two planets will be completely spared, at random. Effect (2) is that the decay of the hypermatter is going to be both delayed and prolonged. And here's how that looks. So we've moved all of the suffering down a bit. So the first generation is going to escape scot-free. There's a probability of ⅙ that two generations on Alderaan suffer and ⅙ that two generations on Bespin suffer, ⅑ that the next four generations on Alderaan suffer, ⅑ that the next four generations on Bespin suffer, and so on. A little complicated, but we're going to work through the important features as we get there. So don't worry if you don't have all the details yet.
(18:34) You've got two advisors who you are consulting on this very important decision. Their names are Dominic and Parvati. Dominic is going to give this advice: “Let's look at these two tables column by column.” Consider all of the various ways that things might go. So if you leave it, you've got a ⅙ chance that one generation on both planets is going to suffer. If you bury it instead, two generations on just one planet are going to suffer, but that's the same number of people either way. So the Numbers principle tells us that that outcome is equally bad either way. It doesn't matter whether they're farther in the future, it doesn't matter which planet they're on. Either way, it's the same number of people. The same goes for the next possibility; in that case, instead, it'll be Bespin rather than Alderaan that suffers. But it will still be the same number of people, two planet-generations' worth of people.
(19:27) In the next case, if you leave it alone, then you'll have two generations on two planets. If you bury it, you'll have four generations on one planet, again, equally good or equally bad according to Numbers. And the same goes in the next case and so on. So here's what Dominic says: "Leave it alone. There's no point in burying it. It's not going to make things any better. We're sure that the same number of people are going to suffer no matter which one you do." And so, this option of leaving it alone is sure to turn out just as well overall as burying it. So there's no moral reason to bury it. It's not morally any better.
(20:05) Oh, shoot I skipped it.
(20:13) Now Parvati comes and offers her advice. She says, "Look at these tables row by row.” How do things go for the first generation? Well, if you leave the hypermatter alone, then they have a probability of ²⁄₆ of suffering, whichever planet they're on. If you bury it, you spare them completely. They have a probability of 0 of suffering. So you've reduced the risk of suffering for everybody in the first generation. What about the second generation? Well, if you leave it alone, then they have a probability of ²⁄₉ of suffering, whichever planet they're on. If you bury it, then they have a probability of just ⅙ of suffering, whichever planet they're on, here on Alderaan, here on Bespin, either way, it's ⅙, and ⅙ is smaller than ²⁄₉. That's the magic property of the numbers. So you've also reduced the risk of suffering for the second generation. And the same goes clearly for the third generation. And likewise, for the fourth generation. Here we've got, if you leave it alone, a ⁴⁄₂₇ chance of suffering for everybody there, and if you bury it, then you've reduced that to a probability of ⅑, and again, ⅑ is less than ⁴⁄₂₇. And so, you've reduced the risk of suffering there too, and so on, all the way forever. So Parvati says, "You should definitely bury it. That way, everyone in every generation on either planet faces a smaller chance of suffering." So the idea is that if you've reduced the risk of suffering to everybody, that's making things better.
(21:39) But those two pieces of advice contradict each other. So we can sum up the principles that they're each appealing to. Dominic is appealing to what's called a Dominance Principle, which I here call ‘Surely as Good’. If an option is guaranteed to turn out just as well as another, then it's just as good overall. And Parvati is appealing to what's called a Pareto Principle, which I call here ‘Better for Everyone’. If an option gives everyone who will ever live a better prospect, then it's better overall. These two principles can't both be true in this situation if Numbers is also true. And so, the first problem is that ‘Better for Everyone’ and ‘Surely as Good’, the Pareto Principle and the Dominance Principle, are together inconsistent with impartiality. That seems bad. Now, all three of these principles, I think, have a lot going for them. They each seem extremely plausible. Impartiality also seems extremely plausible, but I think it's not obvious that it's the one we should keep and that one of these other ones should go, especially as we start to build up a cumulative case against it. We'll come back to this later, I guess.
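For anyone who wants to check Dominic's and Parvati's claims numerically, here's a small sketch. It reconstructs the two prospects from the figures as I've described them; the continuation of the pattern beyond the rows stated above is my own reconstruction. Outcome by outcome, the two options involve the same number of suffering planet-generations with the same probabilities, yet every individual generation on either planet faces a strictly lower chance of suffering if you bury.

```python
# Sketch reconstructing the "Oh no!" prospects (my reconstruction of the pattern
# in the figures; the talk only states the first few rows explicitly).
from fractions import Fraction as F
from collections import defaultdict

K = 12  # how many rows of each table to reconstruct (truncated for illustration)

def leave():
    # Outcome k: generations 2^k .. 2^(k+1)-1 suffer on BOTH planets,
    # with probability (1/3)*(2/3)^k  (i.e. 2/6, 2/9, 4/27, ...).
    for k in range(K):
        yield F(1, 3) * F(2, 3) ** k, k, ("Alderaan", "Bespin")

def bury():
    # Outcome (k, planet): generations 2^(k+1) .. 2^(k+2)-1 suffer on ONE planet,
    # with probability (1/6)*(2/3)^k  (i.e. 1/6, 1/6, 1/9, 1/9, ...).
    for k in range(K):
        for planet in ("Alderaan", "Bespin"):
            yield F(1, 6) * F(2, 3) ** k, k + 1, (planet,)

def total_suffering_distribution(prospect):
    # Distribution over the total number of suffering planet-generations.
    dist = defaultdict(F)
    for prob, group, planets in prospect():
        dist[len(planets) * 2 ** group] += prob
    return dict(dist)

def risk(prospect, generation, planet):
    # Probability that this generation on this planet suffers.
    group = generation.bit_length() - 1  # generation n belongs to group floor(log2(n))
    return sum(prob for prob, g, planets in prospect()
               if g == group and planet in planets)

# Dominic's point: the distribution of total suffering is exactly the same either way.
print(total_suffering_distribution(leave) == total_suffering_distribution(bury))  # True

# Parvati's point: every generation on every planet is strictly safer if you bury.
print(all(risk(bury, n, p) < risk(leave, n, p)
          for n in range(1, 2 ** (K - 1)) for p in ("Alderaan", "Bespin")))       # True
```

So by Numbers and Dominance the two options come out exactly equally good, while the Pareto thought says burying is strictly better; that's the clash.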
(22:49) One thing I want to say, though, is that this problem is a particularly hard problem for utilitarians. I mean utilitarians in a fairly broad sense: people who have an aggregative theory of the good, according to which you can figure out how good things are overall by adding up how good things are for each individual on some scale, and then taking expected values of that. One reason this is especially a problem for them is because all three of these principles are pretty core parts of utilitarianism as I just characterized it. And in fact, there's an argument for utilitarianism that's been very influential. It comes from Harsanyi, and it's been defended prominently by John Broome. And that's an argument that appeals to principles like these, specifically the first two. So if you are the sort of utilitarian who really likes that kind of argument, then Problem (1) is a particularly hard problem for you.
(23:56) I'm going to pause here. There's something I meant to say up front, and I'm just going to insert it here before we go into the second problem, just to clarify what I'm talking about when I talk about impartiality. All the way through, what I'm really officially talking about is a principle about the good rather than about what we should do; in the jargon, it's about axiology rather than deontology. And what I mean is, what I'm comparing here all along is not what any particular person ought to do morally, but rather which of several options is better overall. I think I had slipped when I was describing things before and used the word 'should', but that's not officially what's going on here. It's really just about 'better'. And one way to see how this can matter… So a lot of people think that you should show partiality to people who are especially close to you. So perhaps if there's some life-saving medicine and I have a choice of whether to give it to my child or to your child, I really should give it to my child instead of yours, because I have a special and important relationship of care to my child which I don't have to yours. But if that's true, if that's what I should do, it's not because it's better overall for my child to get the medicine rather than yours, and the principle of impartiality that we're talking about is a principle about what's good overall. So I don't think this kind of example is a challenge against that principle. It's a principle that says that it's not any worse overall for my child to miss out on the medicine than for yours, even if the connection between that and what I personally should do is a little bit complicated.
(25:43) Let's move on to the second problem. “The Serenity to Accept What I Cannot Change”. This is closely related.
Oh no, Alderaan! Things are just as before, but this time, instead of one cache of hypermatter there are two, one on each planet. And this time, instead of being in charge of the general policy for both planets, you are just in charge of the hypermatter policy for Alderaan. Nothing you do can make any difference at all to what's going to happen on Bespin. As before, if you bury the hypermatter, then Effect (1) is that Alderaan has a 50% chance of escaping any harm, and Effect (2) is that the suffering on Alderaan will be delayed and prolonged.
(26:24) So earlier, Parvati gave you one kind of reason to bury the hypermatter. Let's suppose we were not convinced by that. We thought that maybe Dominic had a stronger argument. We're not prepared to give up impartiality. Well, here on Alderaan, we have another advisor. (Actually, this is just showing the exact same tables as before, except I've removed the ringed planet from all of it. So you can get this figure on your handout too by just crossing out all the ringed planets and paying attention to the other one.) So your advisor Stockley comes along and gives you an argument. Stockley says, "Okay, let's consider all the various bad possible outcomes, all the different ways things could go badly.” One possibility is that one generation is going to suffer. If you leave it, what's the probability of that? Well, it's ²⁄₆. If you bury it, what's the probability of that? It's 0. There's no way, if you bury it, that exactly one generation is going to suffer. What's the probability that two generations will suffer? Well, if you leave it alone, it's ²⁄₉. If you bury it, we've reduced it to ⅙, which is a smaller probability. And what's the probability that four generations are going to suffer? Well, if you leave it alone it's ⁴⁄₂₇ and if you... (Whoops, my figure is broken. Oh no, it's not, it's fine.) And if you bury it, you reduce that probability to ⅑. So if you bury it, then for any possible bad outcome that might arise, you've reduced the probability of it happening. However badly things might turn out, burying the hypermatter lowers the probability that things will be as bad as that.
(28:07) And here is just a table of what those probabilities actually are. Stockley is appealing to what's called a Stochastic Dominance Principle, and the principle says that if you've reduced the probability of every bad outcome, and you haven't reduced the probability of any good outcome, then it's better overall. The principle is really more general than that, and this is a special case of it. So suppose you're convinced by Stockley, this stochastic dominance principle seems pretty good, and on the basis of that you decide to bury the hypermatter. But wait, over on Bespin, they have their own Stockley making exactly the same arguments. And so, the people on Bespin are also convinced, and so they bury the hypermatter too. What's the effect of that? Well, we both decided to bury the hypermatter rather than leaving it alone, but as we saw before, both of us burying the hypermatter doesn't make things any better than both of us leaving it alone. No matter how things turn out, both of us burying the hypermatter is just exactly the same as the first problem that we faced. If we both choose to bury the hypermatter, then we're sure not to reduce the number of people who suffer.
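Here's a sketch of Stockley's single-planet comparison, again using my reconstruction of the per-planet probabilities and extending the pattern past the rows stated above. It checks both the outcome-by-outcome claim and the more general tail-probability version of stochastic dominance:

```python
# Sketch checking Stockley's single-planet argument (my reconstruction of the
# per-planet distributions; truncated to K rows for illustration).
from fractions import Fraction as F

K = 20

# Leave: 2^k generations suffer with probability (1/3)*(2/3)^k  (2/6, 2/9, 4/27, ...).
leave = {2 ** k: F(1, 3) * F(2, 3) ** k for k in range(K)}
# Bury: nobody suffers with probability 1/2; otherwise 2^k generations suffer
# (for k >= 1) with probability (1/6)*(2/3)^(k-1)  (1/6, 1/9, 2/27, ...).
bury = {0: F(1, 2)}
bury.update({2 ** k: F(1, 6) * F(2, 3) ** (k - 1) for k in range(1, K)})

# Every bad outcome (some positive number of generations suffering) is strictly
# less probable if you bury...
print(all(bury.get(n, F(0)) < p for n, p in leave.items()))   # True
# ...and the one good outcome (nobody suffers) is strictly more probable.
print(bury[0] > leave.get(0, F(0)))                           # True

# The more general check: for every threshold, the probability that at least
# that many generations suffer is lower if you bury. (These tail probabilities
# only change at the support points, so checking powers of two suffices.)
def tail(dist, threshold):
    return sum(p for n, p in dist.items() if n >= threshold)

print(all(tail(bury, 2 ** j) < tail(leave, 2 ** j) for j in range(K)))  # True
```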
(29:26) So this conflicts with what's called a Separability Principle, and here's the basic idea. It says that if we're on Alderaan, we don't have to worry about what's going on on Bespin, as long as we can't make any difference to what's happening there. So if we have options that are all exactly the same with respect to the various probabilities of how things might go on Bespin, in this other distant part of the universe, then if we choose just based on what's going on in our part of the universe, we'll get the same answer as if we did take what's going on on Bespin into account. So if you apply this principle twice, then it tells you that, since burying it is better than leaving it, both of us leaving it is going to be worse than one of us leaving it and the other burying it; and then you apply the same principle again, and you end up with the conclusion that both of us burying it is better than both of us leaving it.
(30:19) But that conflicts with Dominance. Okay. Let me say a couple of things about this, about how weird this is. So first, you might think this is like a prisoner's dilemma. That's also a case where both of two people doing something that seems good leads to something bad overall. It's not like a prisoner's dilemma, though. Here's how prisoner's dilemmas work. It could be that if I act in my self-interest, I end up harming you, and if you act in your self-interest, you end up harming me. Both of us acting in our own self-interest ends up harming both of us and making us worse off overall. But this isn't like that. The people on Alderaan are just looking after Alderaan, and they're doing it because they're not going to make any difference to Bespin. And so, it's not as if the harm in each direction ends up adding up to a bigger harm than the benefit that they got. There isn't any harm going in each direction. Things are just happening separately.
(31:17) Now if Separability is false, then ethics is really, really hard. So here's how we ordinarily approach a hard problem in ethics: I'm going to take some action, and I'm going to think about all the people that it might affect and how good or bad those effects are, and I weigh them up, and I weigh them according to how probable they are, and then decide what to do. If Separability is false, that's not a good way of choosing between options, of trying to figure out which of them is better. Because if Separability is false, then in addition to taking into account all the effects on people that you're going to affect, you also need to take into account how things are going for all the people that your choice doesn't affect, everybody in the whole universe, potentially. One way this is particularly challenging is if you're thinking about infinite ethics. So one kind of strategy people have advocated for infinite ethics is: well, even if infinite ethics is super hard, maybe we can just kind of cordon off the bit of the universe we're actually going to make a difference to, which is going to be much smaller. We can figure out what the best things are to do for our little chunk of the universe in a way that doesn't worry about infinite ethics, and then figure we're going to be okay overall. If Separability is false, then that strategy is highly suspect. There's no reason to expect that it's going to give you the right answers. What's going on out there can very well make a difference to which of your options is better overall.
(32:44) That is the second problem: Stochastic Dominance and Separability are inconsistent with impartiality. As we saw, it was the Stochastic Dominance principle that told us that the people on each planet ought to bury the hypermatter, but that together with Separability implies that burying the hypermatter on both planets is better than burying it on neither. But it's not, by Dominance.
(33:11) Okay. Problem (3) – this one's different. (You can put that fancy diagram away now. You won't need it for a while.) “Any Balm or Beauty of the Earth.” You've got another choice. First option is Utopia. With certainty, we will build a glorious utopia for everyone that will ever live that will endure for a trillion trillion years. And after that, there’s nobody. Option 2 – almost certainly, we're going to sink into mediocrity and despair for a trillion trillion years, no utopia, everything is kind of lousy. But there's a one in a trillion trillion chance that you're going to get a utopia that lasts literally forever. Which one is better?
(34:05) Well, this guy from the 17th century has something to say about this. He has this rather famous argument. It's known as Pascal's Wager. And he argues for this principle.
"Wherever the infinite is, and there is not an infinity of chances of loss against one chance of winning, there are no two ways about it, all must be given.”
(35:35) I think that's a very nice way of putting it, but we can maybe sum it up in a way that's a little less evocative and more precise. The principle says that no finite gain is as good as any small increase in the probability of an infinite good. So if you have a choice between, on the one hand, a tiny, tiny boost in the probability of some infinite good, or on the other hand, getting for sure some arbitrarily large finite good, say a utopia for a trillion trillion years, you should take the chance at the infinite good instead. So Pascal's Principle says that you should take the long shot, but more than that, Pascal's Principle says that the numbers don't even matter here. Take that trillion trillion years and multiply it by any number you want, take that one in a trillion trillion chance and divide it by any number you want, and you should still take the long shot rather than the utopia, according to this principle. So the third problem is that impartiality makes it very hard to avoid Pascal's Principle.
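To see the basic pull of the principle, here's a naive expected-value sketch. It treats the infinite good as a utility of positive infinity, which is my simplification for illustration rather than anything from the handout; the point is just that any positive probability of it swamps any finite stake.

```python
# Naive expected-value illustration of Pascal's Principle: a long shot at an
# infinite good beats any sure finite good. Numbers are purely illustrative.
INF = float("inf")

sure_utopia_value = 1e24   # stand-in utility for a trillion trillion years of utopia
p = 1e-24                  # a one in a trillion trillion chance
mediocrity_value = 0.0     # stand-in utility for the almost-certain mediocre future

ev_sure_utopia = sure_utopia_value
ev_long_shot = (1 - p) * mediocrity_value + p * INF

print(ev_long_shot > ev_sure_utopia)  # True, and it stays True however you rescale
                                      # the finite value or shrink the probability p
```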
(36:05) The argument for this is more technical, and so I'm not going to go through the details in the talk. I'm really happy to talk about it in the Q&A if people are interested, or in the seminar tomorrow. You see, there's a little footnote mark here on the handout; it's got a little number spelling out exactly what this says. There are a couple of side premises involved in the argument, but I think they're very, very modest, so I'm not going to bother talking about them. Feel free to take me to task if you want to.
(36:05) So what's wrong with Pascal's principle? Why is this a problem? I’m going to talk about two reasons. The first reason is basically an incredulous stare. I mean really! You really think that you should take this long shot rather than get this utopia? This long shot that is almost certainly going to make things much, much, much, much worse for everyone? That seems pretty weird. And in general, what this is telling us is that insofar as our actions only make a difference to finite things, and that includes our whole earthly lives, all our nearest and dearest, all of the effects that we might have on public health or on global poverty, at any arbitrarily large finite scale, anything that we might do for millions and millions of years in the future, as long as it eventually peters out, all of those things are utterly inconsequential, except in the case where their possible effects on infinite goods are exactly balanced on the knife's edge and so don't make any difference at all to the probabilities of the infinite goods. So remember, Greaves and MacAskill had this line about longtermism. They say for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first hundred or even thousand years, focusing primarily on the further future effects. Short run effects are just no more than tiebreakers. But the Pascalian view says that no, no, no, you can ignore the further future effects too. You can ignore everything except for the infinite effects. Anything else counts as nothing more than a tiebreaker.
(37:42) Another more poetic way of putting it comes from this nice poem by Wallace Stevens, "Sunday Morning". What the poem is about is finding meaning in a secular life and in mortality, in contrast to a religious life and the thought of immortality. He writes,
"Shall she not find... Shall she not find in comforts of the sun, in pungent fruit and bright green wings, or else in any balm or beauty of the earth, things to be cherished like the thought of heaven?"
So Pascal's Principle says, no, there isn't any balm or beauty in the earth to be cherished like the thought of heaven, even a very, very, very slight thought of heaven. And that seems hard to accept.
(38:25) We can take this further. Think about a second problem here. So what does Pascal's Principle actually tell us about which of our options are best? Now, you may have heard what Pascal said. What this principle actually presses you towards, Pascal argues, is that you ought to try to live a life of faith, specifically Catholic faith, in order to improve your chances of gaining eternal life. And maybe even some kind of religious activity is going to be a way of increasing the probability of some kind of infinitely good afterlife. And remember, according to Pascal's Principle, it doesn't matter how much you increase that probability, so long as you increase it at all.
(39:08) But there's a standard objection to Pascal's Wager, which points out that it's not as if participating in Catholic rites is the only epistemically possible route to eternal life. There are other religious hypotheses that say that you should practice different things or have different goals if you're going to try to achieve eternal life. So maybe you should be doing some of those things instead, or in addition. And there's also the sci-fi hypothesis we talked about, building pocket universes and such, which may also involve infinite goods. But if there's more than one route to infinity, then we need to somehow sensibly compare them and make trade-offs between them, in order to figure out how to allocate our resources between these different ways of trying to improve our probabilities of getting different infinite goods. Now some people, and this could be the way the standard many-gods objection goes, have argued that that makes no sense, that if you try to make these comparisons, you just end up in inconsistency. I don't think that's true; in fact, I'm quite convinced that's not true. I think there are perfectly coherent ways of comparing these infinite goods and making trade-offs between them. But even though it's coherent, that doesn't mean it's not extremely hard and obscure. I mean, if this is right, then I don't think we're in a position to make any evaluative comparisons between any realistic options. We don't know how to compare the values of different infinitely good or bad outcomes, and we don't know which options have a higher probability of achieving these infinite goods. And furthermore, according to Pascal's Principle, those questions are the only questions that matter when it comes to comparing options, unless we're lucky enough to have exact ties everywhere. And even if we do have exact ties everywhere, even the slightest breath of extra evidence is going to break that tie, and we may not know which way.
(41:00) So let's suppose we're deciding whether to donate some money to distribute some anti-malarial bed nets. Seems like a good idea. But the longtermist says: aha, wait a sec, whether that's really a good idea depends on some really difficult questions about the long-run effects of that, how it's going to make a difference to long-term economic growth, and various questions about the value of population changes and things like that. So, hold your horses there. The Pascalian says: no, it doesn't depend on any of those things, don't be silly. What it really depends on is how it affects the probabilities of saving people's immortal souls, or of getting us eventually to creating pocket universes, or something like that. And honestly, I can't imagine how we're going to figure out those probabilities. And we have no reason to expect that weighing things according to the more scrutable probabilities of even incredibly momentous finite outcomes is anywhere near reasonable as an approximation to the correct ordering of options. We're just messing around way down here in the least significant digits. Except it's worse than that, because there are infinitely many digits up there. So this is a version of what's called the problem of cluelessness. Hilary Greaves has a really nice paper about this, and so does Andreas. But I think that this Pascalian infinite cluelessness is really an especially intractable kind.
(42:34) Now what? We've got three problems for impartiality. The first one had to do with the 'Better for Everyone' principle and 'Surely as Good' principle. The second was from Stochastic Dominance and Separability and the final one was from rejecting Pascal's Principle. What are we going to do with this? How are we going to react? I'll survey some possibilities.
(42:57) In the first category are what I am calling 'dismal and deflationary thoughts'. These are thoughts that reject some of the presuppositions that I took on board when we started trying to figure out these problems. You might think that this kind of impersonal "view from nowhere" betterness relation that I've been talking about just doesn't make any sense. I think that actually would be a particularly deep and thoroughgoing way of rejecting impartiality. It's saying that not only does impartiality happen to be false, but there's just nothing for it to be true of. Maybe there are just some choices you can make, there's stuff that's better, kind of, from your vantage point in the universe and from mine, but no god's-eye view where we can arbitrate these questions. That's something you might say.
(43:47) Another thing you might say is that it doesn't make any sense to apply this notion of betterness, in particular, to options involving risk, to risky prospects, which is what I've been doing throughout. You might think: well, the way things turn out in the end, how things actually go, that could be better or worse; but when it comes to thinking about mere probabilities of things going one way or another, then we can't make sense of the question of how to compare these things anymore. Or you might think that the combination of those two things is where the problem is. Maybe each of them makes sense on its own, but this kind of simultaneously neutral, impersonal and also risky kind of evaluation is where things break down.
(44:33) Another option is that infinity is impossible, and that's what we should take away from this. I think if that's true, it's a really striking bit of armchair physics that we've accomplished, because cosmologists take very seriously the hypothesis that the universe is infinite, and infinite in a way where you'd expect it to have an infinite amount of valuable things going on out there, things like people. But maybe so; sometimes physicists get confused, and sometimes philosophers can help tell them so.
(45:11) And then the final thought in this category is that moral principles just don't even apply to infinite cases. They break down and so there aren't any truly general moral principles, at least these ones aren't among them.
(45:25) Those are some things you could say. I don't think that they're dumb things to say. I have some sympathy with each of them. But I'm going to go on to the next category, the “Honest Toil” category. We take on board the presuppositions, we recognize that these paradoxes do in fact make sense, that these three problems involve real claims about a real subject matter and are true, and we try to figure out what a moral worldview could look like within the constraints of these three problems.
(45:55) Now, I've put up here a table of a bunch of different reactions. All of these are different ethical views in the broad tent of utilitarianism, in that spirit. Obviously, these aren't the only possible moral views, but they're, in a way, paradigm exemplars of each kind of view that we might go to in response to these problems. The first column actually belongs in the first category. This is ordinary, standard, finite utilitarianism, which makes a nice benchmark. That gets you, as I discussed early on, impartiality; it gets you Dominance, which really is a good principle; it gets you the Pareto Principle, ‘Better for Everyone’; it gets you Separability. That all seems great. The fanaticism part got cut from this talk, so you can ignore that. It doesn't tell you to reject Pascal's Principle, but that's because it's ordinary: it only applies to finite cases. It just has nothing to say about infinite goods, because standard finite utilitarianism involves adding up a bunch of things that you can't add up in general in the infinite case. So that's the big X there. So if we've set aside the dismal and deflationary thoughts, and we're trying, anyway, to explore the thought that infinity makes sense, then we're going to be interested in the other columns of this table.
(47:13) Over on the right here, we've got two categories of views that keep impartiality. And both of these categories have defenders in the recent literature on infinite ethics. The first idea is that we give up the ‘Surely as Good’ principle. We allow that a prospect can be sure not to turn out any better, but still be better overall, and you also have to give up some other very closely related dominance principles to escape close variants of the problems. And if you do that... I mean, that seems bad, but you can keep Impartiality, you can keep the Pareto Principle, you can keep Separability. It doesn't help with Pascal. Pascal is particularly hard to escape.
(47:56) And in the final column here, the view mentioned at the top, that's not a name that's going to mean anything to anybody, because it corresponds to a work in progress of mine, but a very similar view is something that Hayden Wilkinson, who is back there, has been developing. This is a view that says: no, we really want to keep the Dominance principle, we also really want to keep the impartiality principle, and infinity makes sense. So what do we have to do? Well, we lose on three counts. We lose the ‘Better for Everyone’ principle, so reducing the risk of harm to everybody who will ever live may not make things better. We lose the Separability principle, so if you want to figure out which of your options is best, the whole infinite universe might be relevant to that, even if you're not making any difference to those things and the probabilities are exactly the same whichever option you choose. And finally, you also keep Pascal's Principle, and all of the really obnoxious consequences of that.
(49:00) And now we've got Column 2. And this is regular old social discounting, like we talked about at the beginning of the talk. This is the Elsa view: “It's funny how some distance makes everything seem small”. Instead of just trying to add up the value of everybody's wellbeing, we're going to scale it down the further out you go. And this is a view that's been called outrageous and reprehensible and ethically indefensible. And it may be outrageous and reprehensible, but it's not ethically indefensible. Here's the ethical defense. Look at all those beautiful checkmarks. It's true, it's doing badly here, but it's only doing badly there. It's really a very well-behaved theory, and I think we should take that seriously. It gets a lot of what we want from a moral worldview, not everything, but nothing gets everything, and it's not clear that this is the wrong sacrifice to make, I think.
(49:53) Now, let me see if I can say a couple of things about it that might make it less reprehensible and outrageous. Actually, first let me make it more reprehensible and outrageous. I just want to point out that one of the things we're going to have to accept here is partiality. So it says that for every person, there are at most finitely many other people who are as morally weighty, in the sense that harms to those other people make things just as bad as harms to this person. So what can we say about this that might ease things a little bit? First, we have to discount some; we don't have to discount a lot. So maybe the discount rate is just extremely small. And this is a thing people have suggested as a way of avoiding some of the technical problems. Maybe the discount rate is not 1% per year but 0.0000001%, or maybe one in 10¹⁰⁰, or something like that. If you say that, then you aren't going to say that you could justify giving somebody cancer on their 21st birthday because Cleopatra wanted an extra helping of dessert. You're only going to be able to say that kind of thing if the suffering is very, very, very far indeed into the future. Maybe that helps a little, I don't know.
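To get a feel for how little a very small discount rate bites, even at cosmic distances, here's a quick numerical sketch; the rates are arbitrary illustrations, not anything proposed in the talk:

```python
# Weight of a person t years in the future, relative to someone now, under a
# constant annual discount rate r. The rates here are purely illustrative.
import math

def weight(t_years, r):
    # (1 - r)^t, computed via exp/log1p so it stays accurate for tiny r
    return math.exp(t_years * math.log1p(-r))

for r in (1e-2, 1e-7, 1e-12):
    print(r, weight(1_000, r), weight(1_000_000_000, r))
# r = 1e-2  : ~4e-5 after 1,000 years, effectively 0 after a billion years
# r = 1e-7  : ~0.9999 after 1,000 years, ~4e-44 after a billion years
# r = 1e-12 : ~1.0 after 1,000 years, ~0.999 after a billion years
```

With a small enough rate, people even a billion years hence still carry nearly their full weight.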
(51:20) Another thing that you might say is that the correct discount function is a vague matter. It's not that some precise way of discounting gives you the correct moral truth. There are many, many different candidates, each of which is not determinately wrong, with the upshot that moral betterness will sometimes be a vague matter. And with the combination of these two, you could say that it is sometimes a vague matter, in ways that maybe make things seem less awful, but that will still give you some determinate conclusions. For example, that depriving Cleopatra of her dessert is better than giving somebody cancer now. And in fact, you could have the discount rate be vague in such a way that it's symmetric between the different candidate precisifications. So while it's true that it's not determinately false that that person in the future's moral weight is much less than mine, symmetrically, it's not determinately false that my moral weight is much less than that person's in the distant future. That might help make this seem a little less reprehensible, a little less self-regarding at least. So maybe. I think there's a lot to be explored here. I think this is...
(52:39) I've kind of just scratched the surface of what this kind of view would look like. The main thing that I want you to take away is that I think this direction really should be explored. I think we should take seriously the possibility that impartiality is false, and really try to go down this route and figure out what ethics looks like down there. And I think that that's a project that we haven't done a whole lot of. You might wonder whether, if we've tweaked the view in these ways, we've deprived partiality of all of its interesting consequences, because after all, if this discount rate is super, super small, the argument for longtermism is still going to go through. If people a billion years hence count almost as much as me, well, there are still a lot more of them, so yeah, you're going to get similar kinds of conclusions. And that might be right as far as that particular application goes, but there are some real consequences that you get by going for even this kind of vague, small-discounting partiality as opposed to impartiality. When it comes to our first problem, “Oh no”, it tells you that you should bury the hypermatter, despite the fact that it's not going to make any difference to the number of people who suffer. When it comes to Pascal, it's going to tell you that, no, you shouldn't accept arbitrary Pascalian wagers. If the utopia is good enough and the probability is small enough, then you should take the sure utopia rather than the long shot. And these are important consequences, I think. And finally, more theoretically, even if it turns out that longtermism is still right, the arguments for it are going to have to rest on different foundations if impartiality is false.
(54:24) So what do we make of all of this? I don't know. I told you a bunch of options, and I just feel really confused, so I thought maybe I would share that with you. I'm going to kind of end this talk with a whimper. I really don't know whether impartiality is true. Like I said, I think we should take seriously the possibility that it's not, but who knows. Because of that, I really don't know whether these arguments for longtermism are sound. I don't know whether longtermism is right. I don't know what the best way to try to do good is. And worse, it'd be really nice if I had some kind of theory that would tell me, in situations like this where I have no clue what's going on, how I can at least figure out something that would count as a reasonable attempt to try to do good in general. And I have no such theory as that, though that's a different talk, equally disappointing. So I don't know what to do, and even taking into account the fact that I don't know what to do, I still don't know what to do even then. But I do think that it's useful to at least appreciate the depth of our moral ignorance. And that's not a good place to stop, but that is where we have to start.
(55:45) So that's it.