Teruji Thomas | The Multiverse and the Veil: Population Ethics Under Uncertainty
TERUJI THOMAS: (00:04) So, as you know, our mission at GPI is to do foundational academic research on how to do the most good. And I thought I'd start by explaining one interpretation of what doing good is about. This isn't really crucial to understanding the main part of the talk, but I thought it might put you in the right kind of mindset. So on this interpretation I'm going to give, doing the most good is about reasons of beneficence and these are reasons that arise directly from what's good for people. That is, from the value of people's lives for them as individuals. So I'm not talking about things like rights, prerogatives, special duties, etc. And by the same token, not all impartial welfare related reasons are reasons of beneficence in the sense that I'm trying to pick out. So here's an example. We've got these two options in the status quo. So there are two people Ann and Bob and in the status quo Ann is somewhat worse off than Bob. And we have the option to benefit Ann making her equally well off. So why would we benefit Ann? So one reason is that it's good for Ann and that's the reason of beneficence in my sense. Another reason that some people think is relevant here or another consideration that some people think is relevant, is that it decreases welfare inequality between Ann and Bob. And this is a fact that you can tell, you can read off from the welfare levels of the people involved. So it is a welfare related reason, but it's not a reason of beneficence in my sense. There are some egalitarians who have a different view of what inequality is all about, but anyway, I'm sort of setting aside this kind of egalitarian concern.
(01:42) So here's the kind of question that I will mostly be focusing on in this talk. So suppose we have this theory of beneficence which tells us what to do when there's no uncertainty. Then the question arises, "What should we do when there is uncertainty?" All right, so here's an example. I'm going to use totalism as my main example in the talk. It's not really the motivating example for the core material that I'm presenting, but it's at least easy to understand. So totalism is the view that we should maximize total welfare. So here's the line of thought that you'll mostly be familiar with. The future is potentially very vast, with a huge number of people. So any tiny probability of persistently influencing it might be overwhelmingly more important than any short-term concerns. And a particular key form of persistent influence that we might have is preventing extinction. So just to fill in some empirical premises here, suppose we can reduce the probability of premature extinction by one part in a billion. If things go well, there'll be 10^20 future people. Then, just plugging in some plausible numbers, the resulting gain in expected total welfare could be worth 10 times the total wellbeing of everyone alive today. Given the premises of this example, we see that totalists are, in a certain sense, fanatical about reducing extinction risk. Now, someone who likes totalism might be tempted to respond in the following way: "Listen, my basic view is that when there's no uncertainty, what we ought to do is to maximize total welfare. But I'm not committed to maximizing expected total welfare when there is uncertainty. Instead, I could be risk averse or something." The view could be that we should maximize some bounded function of total welfare, and that would put some kind of limit on how much tiny probabilities of influencing the long term can matter. So there's quite a lot going on here. But the basic question that I'm addressing here is, "Is this a reasonable response? Could we do something like risk aversion? Or do we in fact have to go with this view which maximizes expected total welfare?"
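To make the arithmetic in this example concrete, here is a minimal sketch in Python. The risk reduction and the 10^20 figure are the ones from the talk; today's population size and the average welfare per life are illustrative assumptions added only so that the units cancel cleanly.

```python
# A sketch of the expected-value arithmetic behind the extinction-risk example.
# Today's population and the average welfare per life are illustrative assumptions.

risk_reduction = 1e-9     # reduce extinction probability by one part in a billion
future_people = 1e20      # number of future people if things go well
current_people = 1e10     # roughly the number of people alive today (assumption)
avg_welfare = 1.0         # assume comparable average welfare per life (assumption)

gain_in_expected_total = risk_reduction * future_people * avg_welfare   # 1e11
total_welfare_today = current_people * avg_welfare                      # 1e10

print(gain_in_expected_total / total_welfare_today)  # 10.0, i.e. ten times today's total
```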
(03:51) So here's the result I'm going to discuss. It's contained in my paper that's just been accepted in PPR and it's related to a bunch of other results in the social choice literature going back to Harsanyi's Aggregation Theorem. That's kind of my spin on these results. So here's the schematic version. If you have a theory of beneficence that satisfies a couple of conditions, which I'll tell you about in a minute, then first of all, it's completely determined by what it says in cases where there's no uncertainty. It's also completely determined by what it says about one-person cases, that is, cases where there's only one person's interests at stake. So we can take almost any view about uncertainty-free cases, for example, maximize total welfare. We can plug it into the theorem and we get automatically a theory of choice under uncertainty as well. Or if you happen to have a theory about what you ought to do in these one-person cases, then you can also plug that into the theorem and get a view about what you ought to do when many people's interests are at stake. So for example, totalists who buy into these two plausible conditions have to take the view that we should maximize expected total welfare. So they must be fanatical in the sense that I was describing on the previous slide. And I'll give some other examples later.
(05:22) Okay. So here's the first principle. I'll try to explain and then I'll pause at the end of the slide in case you have any questions because I know it's a bit much to take in. So the first principle I call Person-Based Choice and it's a bit similar to the ex ante Pareto principle; it's that same kind of principle. So it says: take a choice situation X (you can think of it just as a list of options) and another choice situation Y. If each and every person who might exist in the first choice situation is in the same predicament as each and every person who might exist in the second choice situation, then we have to say the same thing about these two choice situations. So for example, if the third option in X is permissible, then we have to say that the third option in Y is also permissible. Or if you think your normative theory ranks options, gives a ranking of all of the options, then it has to rank the options in X the same way it ranks the options in Y. Now, I'm not going to spell out exactly what this principle means in general, but I'll show you in this example and I hope that will get the main idea across. So forget about the second row at the beginning. Here's one choice situation. It just has two options, X1 and X2, and involves two people, Ann and Bob. And what happens to Ann and Bob depends on, let's say, the outcome of a fair coin toss. So on heads Ann gets 10. It doesn't really matter what the numbers mean, but you can think of these as lifetime welfare levels. So 10 is pretty good and -1 is kind of bad or something like that. It doesn't really matter for the example. So on heads Ann gets 10, on tails she gets -1 under the first option. And for the second option, everyone gets 0. That's to simplify things. If other things are happening in the second option, then it gets a bit more complicated. But the point is... So I won't talk much about this. The point is that Ann has a 50% chance of getting 10 and a 50% chance of getting -1. Now, the same thing is true of Bob. I should say, probability ½, that's what I really mean: a ½ probability of getting 10 and a ½ probability of getting -1. Now it's true that the state in which he gets 10 is different from the state in which Ann gets 10. But from a probabilistic point of view, they're in the same predicament. So that's what I want to say about this case. Ann and Bob are in the same predicament. And here's another case, which just involves Kate. And you see that Kate is in the same predicament as Ann and Bob. So what I want to say is, this principle, Person-Based Choice, says that we ought to treat these two cases in the same way. So for example, if you think that here we have to choose Option 1, then also here, we have to choose Option 1. To make it a bit more vivid, you can think in terms of guardian angels for all of these people. Let's suppose Ann, Bob and Kate all have guardian angels and when we're choosing here for Kate, for Kate's sake, Kate's guardian angel is saying, "Choose Option 1. Choose Option 1." That's what's best for Kate. And then, since we're interested in beneficence, we should choose Option 1. We should go along with what her guardian angel says.
MARCUS: (08:50) I have a question. Option 1 is X1, Option 2 is X2… [inaudible].
TERUJI THOMAS: (08:56) Yeah. Yeah. Yeah. So here's one choice situation. We choose between these ones.
MARCUS: (09:02) [inaudible]
TERUJI THOMAS: (09:04) Yeah. Yeah. Yeah.
(09:08) So if Kate's guardian angel is saying that, then Ann's and Bob's guardian angels will be saying the same thing. Ann's guardian angel will be saying, "Choose Option 1, Option 1." And Bob's will be saying the same thing, and it would be a bit perverse from the point of view of beneficence to then choose Option 2. That's why we should treat these cases in the same way. Okay, so any questions about this other than Marcus'?
(09:29) Seems basically clear? Okay.
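As a rough illustration of the "same predicament" idea (my own encoding, not the paper's formalism), the sketch below represents each person's situation as the lottery over welfare levels they face under each option, and checks that Ann, Bob and Kate all face the same lotteries.

```python
from collections import Counter
from fractions import Fraction

# Illustrative sketch: a person's "predicament" in a choice situation is the tuple
# of lotteries over welfare levels they face, one lottery per option.

def lottery(payoffs, probs):
    """Distribution over welfare levels for one person under one option."""
    dist = Counter()
    for state, welfare in payoffs.items():
        dist[welfare] += probs[state]
    return frozenset(dist.items())

def predicament(payoffs_by_option, probs):
    return tuple(lottery(p, probs) for p in payoffs_by_option)

probs = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}

# Choice situation X: Ann and Bob, options X1 and X2 from the slide.
ann = [{"heads": 10, "tails": -1}, {"heads": 0, "tails": 0}]
bob = [{"heads": -1, "tails": 10}, {"heads": 0, "tails": 0}]
# Choice situation Y: just Kate.
kate = [{"heads": 10, "tails": -1}, {"heads": 0, "tails": 0}]

# Everyone faces a 1/2 chance of 10 and a 1/2 chance of -1 under option 1, and 0
# for sure under option 2, so Person-Based Choice says X and Y are treated alike.
assert predicament(ann, probs) == predicament(bob, probs) == predicament(kate, probs)
```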
(09:34) Here's the second principle. It's a statewise impartiality condition. It's a form of impartiality and basically, what it says is that if we look at each state one at a time (for example, if we just focus on heads) and the two cases look the same in each state, except perhaps for which people are involved, then we should treat them in the same way. So to illustrate, this is the same case as before, and this is a different case from the one on the previous slide. Let's just look at heads. What's happening here is that one person gets 10 and one person gets -1. And exactly the same thing is happening here on heads. So Alice here gets 10 and Bert gets -1. All that's different is that there are different people involved. And the same thing is happening on tails: here we have one person getting -1 and one person getting 10, and here we have one person getting -1 and one person getting 10. These choice situations are nonetheless different because here, it's inevitably Alice who gets 10, whereas here, different people have chances of getting 10. But the point is that if we focus on heads, these two cases are structurally identical, and when we focus on tails, they're structurally identical. Statewise impartiality says that we should treat these in the same way.
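Here is a companion sketch (again my own encoding) of the statewise comparison: for each option and each state, collect the multiset of welfare levels received, forgetting who receives them, and check that the two situations match.

```python
from collections import Counter

def statewise_profile(situation, states):
    """situation: list of options, each mapping person -> {state: welfare}.
    Returns, per option and per state, the multiset of welfares received."""
    return [
        [frozenset(Counter(payoffs[s] for payoffs in option.values()).items())
         for s in states]
        for option in situation
    ]

states = ["heads", "tails"]

# First situation: Ann and Bob, as before.
X = [
    {"Ann": {"heads": 10, "tails": -1}, "Bob": {"heads": -1, "tails": 10}},
    {"Ann": {"heads": 0, "tails": 0},   "Bob": {"heads": 0, "tails": 0}},
]
# Second situation: under option 1 it's inevitably Alice who gets 10 and Bert -1.
Y = [
    {"Alice": {"heads": 10, "tails": 10}, "Bert": {"heads": -1, "tails": -1}},
    {"Alice": {"heads": 0, "tails": 0},   "Bert": {"heads": 0, "tails": 0}},
]

# In every state, someone gets 10 and someone gets -1 (or everyone gets 0), so
# statewise impartiality says X and Y must be treated the same way, even though
# only in X does each person have a chance of getting 10.
assert statewise_profile(X, states) == statewise_profile(Y, states)
```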
(10:57) Now this is a little bit stronger than the weakest form of impartiality that you might write down, so I'll give a little argument for it. It's a very suspicious argument that I got basically from Anna Mahtani and indirectly from Caspar Hare, but I kind of like it anyway. So suppose that Ann and Bob are actually the same people as Alice and Bert. I just don't know whether Ann is Alice and Bob is Bert or whether Ann is Bert and Bob is Alice. I'm confused about that. And suppose that heads, what I've been calling heads, is actually the hypothesis that Ann is Alice and Bob is Bert. Then on heads, this case is exactly the same as this case. Ann getting 10 is exactly the same as Alice getting 10. And suppose tails is the opposite hypothesis, that Ann is Bert and Bob is Alice; then Bob getting 10 is exactly the same thing as Alice getting 10. So if these payoff tables arise in that way, then these are actually just redescriptions of the very same case. So if you think that the payoff tables are all that matter, and it doesn't matter how they arise, then we see that in general, we ought to treat these cases in the same way. So that's the funny argument for statewise impartiality. All right. Any questions about this one? Yeah.
DEAN: (12:35) To make this claim you have to be treating epistemic uncertainties and state uncertainties as the same. Are there arguments that don't require you to do that? Because they're conceptually very, very different types of uncertainties, right?
TERUJI THOMAS: (12:58) The thing that I actually think is that all of the uncertainties here are epistemic. It may be that the probabilities are aligned with the chances, for example, if it comes from a fair coin toss and you're aligning your credences with the chances. But basically, my background claim here is that what matters here is epistemic probabilities. And that's how I'm thinking about it. Marcus.
MARCUS: (13:22) This might be a response to the previous question. Another way to think about this, you can correct me if I'm wrong on this, Teru, but I think that you can get this statewise impartiality axiom by combining kind of an anonymity criterion which applies in each state and then statewise dominance. So statewise dominance says your evaluation of the ex ante prospects basically should be controlled by how you evaluate the outcome on each state. And then anonymity says, how you evaluate the outcome on each state should be invariant under permutations of the people. You put those two things together, you get this condition. Is that right?
TERUJI THOMAS: (13:55) Yes, that's right. So you can think of it as combining those two principles and that's perfectly fine. The reason I don't do that is that usually these conditions like anonymity and statewise dominance are stated in some kind of axiological framework where you're doing pairwise comparisons and so on. You may have noticed that these principles, by contrast, are just about arbitrary choice situations. I'm going for a more general picture, but it's the same motivation.
(14:30) So here's the theorem. Just to recap, it says the same thing as before. If your theory satisfies these two conditions, person-based choice and statewise impartiality, then up to a few technicalities that I'm sweeping under the rug, it's completely determined by its verdicts in uncertainty-free cases and also by its verdicts in one-person cases. So here are a few examples. Back to totalism. So suppose we start with this view that in uncertainty-free cases, we should maximize total welfare, then in general, if you buy these two principles, you should have the view that you maximize expected total welfare. This is the fanatical view I mentioned earlier. You could also start from the one-person view that when we're just choosing for the sake of one person, we should maximize their expected welfare. And here, some subtleties arise about what you're going to say about chances of this person not existing. The simplest view is that we treat non-existence as if it's a zero level of welfare and maximize the expected welfare with that convention. So if you start with that kind of view and plug that into the theorem, you will again get this general view that you should maximize expected total welfare.
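For concreteness, here is a minimal sketch of the expected-totalist rule the theorem delivers, with non-existence treated as welfare 0 as in the simplest convention just mentioned; the data layout is my own.

```python
# Each option maps a state to the list of welfare levels of the people who would
# exist in that state (anyone who doesn't exist simply contributes nothing, which
# is equivalent to counting them at the zero level).

def expected_total_welfare(option, probs):
    return sum(p * sum(option[state]) for state, p in probs.items())

probs = {"heads": 0.5, "tails": 0.5}
option1 = {"heads": [10, -1], "tails": [-1, 10]}   # Ann and Bob from the running example
option2 = {"heads": [0, 0],   "tails": [0, 0]}

# The expected totalist picks whichever option maximizes this quantity.
print(expected_total_welfare(option1, probs))  # 9.0
print(expected_total_welfare(option2, probs))  # 0.0
```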
(15:40) Dean.
DEAN: (15:47) I think I... Could I ask you to say more about what you mean by "completely determined"? It sounds a little bit like you mean not incomplete. But the thing that we are arguing against is risk averse curvature over the... So I think I just... Could you please define "completely determined"?
TERUJI THOMAS: (16:06) So if you have two views about choice under uncertainty that agree when there's no uncertainty, then they agree in general. Or to put it another way, tell me what to do when there's no uncertainty and I'll tell you what to do when there is uncertainty.
MALE SPEAKER: (16:35) So one of the important... [inaudible]
TERUJI THOMAS: (16:49) Right. One of these does not satisfy the two principles on the previous slide. The second one.
(17:08) So here's another example. So I really don't like averagism and I usually think that we should stop talking about it, but I'm going to talk about it because it kind of illustrates what's going on here. So, suppose you take the view that instead of maximizing total welfare we should maximize average welfare; that's the uncertainty-free view. The general view corresponding to this is an interesting one: you maximize expected total welfare divided by expected population size. So the main thing I want to point out here is that this is not the naive thing that you would write down if you weren't thinking in terms of these principles. The naive view is that we should maximize expected average welfare, or something like that. And I think what these principles bring out is that, at least in these respects, that isn't a very good way of thinking about averagism under uncertainty.
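A small numerical sketch (illustrative numbers, not from the talk) of the contrast being drawn here: the naive rule of maximizing expected average welfare comes apart from the rule the theorem delivers, maximizing expected total welfare divided by expected population size, as soon as population size varies across states.

```python
# option: a list with one entry per state, each entry being the list of welfare
# levels in that state; probs: the matching state probabilities.

def expected_average(option, probs):
    return sum(p * sum(ws) / len(ws) for ws, p in zip(option, probs))

def ratio_of_expectations(option, probs):
    exp_total = sum(p * sum(ws) for ws, p in zip(option, probs))
    exp_size = sum(p * len(ws) for ws, p in zip(option, probs))
    return exp_total / exp_size

probs = [0.5, 0.5]
option = [[10], [1, 1, 1]]   # one person at 10, or three people at 1 each

print(expected_average(option, probs))       # 5.5
print(ratio_of_expectations(option, probs))  # 6.5 / 2.0 = 3.25
```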
(18:09) I mentioned a third view. So this was actually the kind of view that sort of motivated my work on this. So this is a harm minimization view. It's not a very good view but it is also discussed quite a lot. The view is that you should just minimize harm, minimize total harm. And this is understood in such a way that… So it's relevant to population ethics. It's understood in such a way that if you create someone with a bad life, then that counts as harming them in the relevant sense. Whereas if you create someone with a good life or if you don't create anyone at all, then you don't harm anyone. This is a kind of way of explaining the intuition that, roughly speaking, we have reasons not to create people with bad lives but we, in some sense, don't have any reasons to create people with good lives. So if you start with this as your view of what to do when there's no uncertainty, you can ask what it tells us about what to do in general or you can even ask what it tells us about what to do in one-person cases. Suppose I'm just going to have a child and this will somehow have no other downstream effects. What this view tells me is to minimize expected harm. And since there's some probability that my child will have a bad life, there's basically inevitably some expected harm to this child according to the theory of harm that goes into this view. And so this view is kind of deeply anti-natalist in that sense. So these are just some examples of how the theme...
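A minimal sketch of the expected-harm point (the probabilities and welfare levels are purely illustrative): on the view just described, any chance of a bad life makes the expected harm of creating a person strictly positive, while creating no one carries zero expected harm.

```python
def harm_from_creation(welfare):
    # Only lives below the zero level count as harms on this view.
    return max(0.0, -welfare)

def expected_harm(prospect):
    """prospect: list of (probability, welfare-if-created) pairs for the new person."""
    return sum(p * harm_from_creation(w) for p, w in prospect)

# Even a child who is very likely to have a good life carries positive expected harm...
print(expected_harm([(0.99, 50), (0.01, -20)]))  # 0.2

# ...whereas creating no one harms no one, so the harm-minimizing view says don't.
print(expected_harm([]))                         # 0
```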
MALE SPEAKER: (19:48) [inaudible]
TERUJI THOMAS: (19:48) Okay. Sure.
DEAN: (19:51) Do I lose anything if I read "is completely determined by" as "maximize the expectation of"?
TERUJI THOMAS: (20:01) Yeah.
DEAN: (20:02) So what's the space between them?
TERUJI THOMAS: (20:06) Well, for example, in this view of averagism, the uncertainty-free thing is maximize average welfare. The general view is not maximize expected average welfare.
DEAN: (20:19) Okay. I'll have to read the paper. Right. Thank you.
TERUJI THOMAS: (20:23) Okay.
(20:26) If you tell me what's permissible when there's no uncertainty, the thought is that there's a unique way of extending that which satisfies these two principles. And for totalism, it turns out to be just what you would expect: expected totalism. But for averagism and some other views, it turns out to be a different kind of thing altogether.
(20:51) Okay. We can try again later.
(20:56) So... Okay. So, so far, basically, all I've said is that there is this unique way of extending, and now I have to tell you how this extension actually works. So how did I come up with the examples on the previous slide? I'm not going to go through the proof, but I'm just going to explain what the bottom line is. So one of the claims is that your view is determined by what it says about one-person cases. And the upshot here is actually something quite familiar from social choice, the idea of a veil of ignorance, but understood in a particular way. So what's the idea of the veil of ignorance? Here's our familiar case. Now suppose we have this person, who turns out to be called Bill, and I don't know whether Bill is Ann or Bill is Bob. But I'm Bill's guardian angel and I want to do what's best for Bill. So there are four different sorts of states here. One is that the coin, I guess it's still the coin, lands heads and Bill is Ann. Another possibility is that the coin lands tails and Bill is Ann; another that it lands heads and Bill is Bob; or it lands tails and Bill is Bob. And if I think of the payoffs for Bill according to this first option, well, if the coin lands heads and Bill is Ann then Bill will get 10, and so on. You can work out what he gets under these other hypotheses. And if we do this to the second option, we just get this: Bill gets 0 no matter what. Unsurprising. So the bottom line of the theorem, as far as these one-person cases go, is that to decide this case or any other case, what we can do is translate it behind the veil of ignorance into a one-person case and then do what's best for Bill. That's the reduction.
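Here is a sketch of that translation behind the veil (my own encoding): the states of the original case are crossed with the hypotheses about who Bill is, each identity hypothesis getting equal weight, yielding a one-person prospect for Bill.

```python
from fractions import Fraction
from itertools import product

def behind_the_veil(option, state_probs, people):
    """option: person -> {state: welfare}. Returns Bill's prospect: each
    (state, identity-hypothesis) pair mapped to (probability, Bill's welfare)."""
    id_prob = Fraction(1, len(people))
    return {
        (state, f"Bill is {person}"): (state_probs[state] * id_prob, option[person][state])
        for state, person in product(state_probs, people)
    }

state_probs = {"heads": Fraction(1, 2), "tails": Fraction(1, 2)}
option1 = {"Ann": {"heads": 10, "tails": -1}, "Bob": {"heads": -1, "tails": 10}}
option2 = {"Ann": {"heads": 0, "tails": 0},   "Bob": {"heads": 0, "tails": 0}}

# Four equally likely hypotheses; Bill gets 10 on two of them and -1 on the other two.
print(behind_the_veil(option1, state_probs, ["Ann", "Bob"]))
# Under the second option Bill gets 0 no matter what, as in the talk.
print(behind_the_veil(option2, state_probs, ["Ann", "Bob"]))
```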
(22:50) What about the uncertainty-free cases? So this is where the multiverse comes in. So here, we again have our usual familiar choice situation. And what happens again depends on whether the coin lands heads or tails, and you can think of these as being two different possible worlds, one where the coin lands heads, and one where it lands tails. Now, one at least metaphorical way in which we sometimes think about the future unfolding is to think of the universe as actually branching off. So I flipped my coin and there's one branch of the universe where there's some version of me and my coin and it's landed heads. And there's this other branch of the universe where there's me, or a version of me, and my coin has landed tails. So we could try to think about what this choice scenario would look like if we thought that uncertainty was actually resolved in this branching way. And it would look like this. There's no uncertainty now, but there is a copy of Ann and a copy of Bob in one branch of the universe, the heads branch, where this copy of Ann gets 10 and this copy of Bob gets -1. And there's also a branch of the universe in which a different copy of Ann gets -1 and a different copy of Bob gets 10. So this is just a way of transforming this original case with uncertainty into a case with no uncertainty. And then the claim is that whatever we say about this case with no uncertainty, we also have to say about this case with uncertainty. Now, just to be clear, the thing about the multiverse is really just a metaphor at this point. What counts is the formal construction. Basically, you take the columns of the payoff tables, and you stick them together into one big column.
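The formal construction can be sketched in a few lines (again my own encoding): take the columns of the payoff table, one per equally likely state, and stack them into a single uncertainty-free outcome with one copy of each person per state.

```python
def flatten_to_multiverse(option, states):
    """option: person -> {state: welfare}. Returns a single uncertainty-free outcome
    with one welfare entry per (person, state) copy."""
    return {f"{person}@{state}": option[person][state]
            for state in states for person in option}

states = ["heads", "tails"]
option1 = {"Ann": {"heads": 10, "tails": -1}, "Bob": {"heads": -1, "tails": 10}}

print(flatten_to_multiverse(option1, states))
# {'Ann@heads': 10, 'Bob@heads': -1, 'Ann@tails': -1, 'Bob@tails': 10}
# The claim is then that whatever we say about this uncertainty-free case, we must
# also say about the original case with uncertainty.
```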
(24:40) Okay. So, Marcus.
MARCUS: (24:45) Presumably in the paper, there's a technical lemma that says that you can do this, because it's not totally obvious that you're just allowed to rearrange your matrices into columns like this, right? The key step in the proof is that you can actually do this.
TERUJI THOMAS: (24:56) Yeah.
MARCUS: (24:57) It's not obvious.
TERUJI THOMAS: (24:58) Right.
MARCUS: (24:58) Right.
FEMALE SPEAKER: (25:08) I mean, maybe this is the point to ask this question. I mean, I kind of got a little stuck early on, when you basically assume that extrinsic properties don't matter. With your individual... I mean with... I mean, early on with the Kate and Ann case, you assume that properties involving any kind of extrinsic... other extrinsic relation… like relations to other individuals or whatever state you're embedded in, aren't relevant. Like they get assumed away. And I think you're doing it again, here, maybe? Can you just say more about that because it just seems to me like that's a substantive assumption and I don't understand the motivation for it.
TERUJI THOMAS: (25:59) Well, there are different ways that these extrinsic things could matter. One is that they could matter for people's welfare. And I'm fine with that. Insofar as I haven't really said what the numbers mean, you should understand welfare as taking into account all of the things that are relevant to a person's welfare, including whatever extrinsic stuff there might be. So first of all, what I'm doing is compatible with what you want, insofar as these extrinsic things contribute to people's welfare. You can also take the view that they matter in ways that don't contribute to people's welfare, and that was kind of what the first mumbo jumbo about reasons of beneficence was about. I am indeed just setting aside those types of considerations.
FEMALE SPEAKER: (26:50) And what's the motivation for setting them aside? You don't think it's a realistic thing to worry about? Or...
TERUJI THOMAS: (27:00) So no. That's not exactly what I think. Or at least I don't feel committed to that. Rather, I'm just doing one part of moral theory, if you like, and I'm asking what we ought to do in the light of our reasons of beneficence that may not be an all things considered judgment.
FEMALE SPEAKER: (27:18) Just one more. And is the thought also that extrinsic facts could never basically affect... I mean, I said this is like… I'm worried about how extrinsic relations would kind of change, either would change maybe the... I really have to see it written down, what is the nature of the calculation or... I mean, you're also assuming, "Oh, in a particular situation, where the extrinsic relations don't matter, I'm going to get the numbers to work out in a particular way." I'm worried that as soon as you put the extrinsic relations in, all the numbers change, so that the formal structure that you're setting up is a possibility but doesn't fit the way real-world scenarios would work in most contexts.
TERUJI THOMAS: (28:05) Okay. So there's a lot going on there. I'll try to say something. I'm not sure I'll address exactly what you want. But what I'm thinking is that 10 here has to mean the same thing as 10 here, where the same thing means equally good for Ann and it could involve totally different... Her life might look totally different. But nonetheless, I'm claiming that we can make these kinds of welfare comparisons to the extent that whatever Ann's life looks like here, it's just as good as whatever Ann's life is there.
FEMALE SPEAKER: (28:34) That is the kind of thing I'm worried about.
TERUJI THOMAS: (28:37) Okay. Yeah. Thanks.
MALE SPEAKER: (28:45) I was just wondering if the probabilities of heads and tails matter, or if, no matter what probabilities you assign to heads and tails, you get the exact same table at the bottom?
TERUJI THOMAS: (28:57) Right. This will be relevant on the next slide, but no. I've been thinking all along that heads and tails are equally likely.
MALE SPEAKER: (29:04) If something were different, would the table be the same? Or would it be a different table?
TERUJI THOMAS: (29:10) Yeah. I mean, the basic picture is… Suppose heads was twice as likely as tails, then you would need, as it were, two heads copies of Ann and Bob, so you'd need twice as many people here representing that state of nature. So the easiest way to think about it is just in cases where all the states are equally likely and then this works out.
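A small variant of the earlier flattening sketch shows how unequal (rational) probabilities can be handled in the way just described: each state contributes copies of its people in proportion to its probability. This is my own illustrative encoding; it uses math.lcm, which requires Python 3.9+.

```python
from fractions import Fraction
from math import lcm

def flatten_with_weights(option, state_probs):
    """option: person -> {state: welfare}; state_probs: state -> Fraction.
    Returns one welfare entry per weighted copy of each person."""
    common = lcm(*(p.denominator for p in state_probs.values()))
    outcome = {}
    for state, p in state_probs.items():
        for i in range(int(p * common)):          # number of copies for this state
            for person, payoffs in option.items():
                outcome[f"{person}@{state}#{i}"] = payoffs[state]
    return outcome

option1 = {"Ann": {"heads": 10, "tails": -1}, "Bob": {"heads": -1, "tails": 10}}
probs = {"heads": Fraction(2, 3), "tails": Fraction(1, 3)}

# Heads is twice as likely as tails, so there are two heads-copies of Ann and Bob
# and one tails-copy of each.
print(flatten_with_weights(option1, probs))
```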
DEAN: (29:34) So I'm going to propose something that definitely doesn't work and then ask why you don't think that it does. So, imagine that instead of talking about two separate people, you talked about the same person having multiple choices, like multiple times. So Option X1 is Ann gets to… Somebody gets to pick either Ann's position like 10 times versus 10 different people having the same thing happen. So clearly, the stats work out that it's much, much better to be able to take the uncertain bet a bunch of times. It's a different construct for that one person than it would be in the case where they're separate people. So why is that different?
TERUJI THOMAS: (30:35) This is too complicated, so I'm going to punt to the end of the talk.
DEAN: (30:37) Great.
TERUJI THOMAS: (30:38) We can come back for the discussion later. I'll just wrap up the main part.
(30:46) So I like all of this stuff so far. Here's the part where I don't like it. Everyone knows that there are all these problems in infinite ethics, that is, when you have infinitely many people. And here's what goes wrong with my theory when there are infinitely many people. So here's an underlying problem case. We have two options. In the first option, we have infinitely many people at level 10 and infinitely many at level -1, the same infinity. In the second option, we have all those same people, but they're at level 0. So the slightly wishy-washy way of explaining what's difficult about this case is that there are two completely correct ways of describing it. One way of describing it is, you could say that there are a thousand people at level 10 for every person at level -1: I could group the people at level 10 into groups of a thousand and line them up with each person at level -1. But I could also say that there are a thousand people at level -1 for every person at level 10. So if you say it the first way, it sounds really good; we should do Option 1. If you say it the second way, it sounds kind of bad; we should do Option 2. So people have views about how to resolve this, but on the face of it, it seems like probably the only reasonable conclusion is that both options are permissible if you had to choose between these two cases. So up until this point in the talk, I've been assuming tacitly that all of the populations are finite, so you shouldn't believe that everything I've said so far just carries over to the infinite population case directly. But in fact, it works out to this extent: if you accept the unrestricted version of my premises, then you should be able to judge this case the same way as you would judge the case behind the veil where Kate has some probability of getting 10 and some probability of getting -1, as opposed to 0. And the problem is that if you think about the initial case in the first way, then it corresponds to a case behind the veil where heads is a thousand times more likely than tails. And if you think about it the second way, then it corresponds to a case where tails is a thousand times more likely than heads. And so whatever verdict we have about this case, we should have the same verdict about that case, and it's going to be basically completely insensitive to the probabilities of the two states, and that seems like a terrible result. We want to be able to say something in some cases like this. So obviously, we can avoid this conclusion by restricting these two principles, person-based choice and statewise impartiality, to only concern finite cases. But then we still have to figure out what to say more generally.
(33:30) Marcus.
MARCUS: (33:37) So I'm of course, very sympathetic to your intuitions here. But of course, your example sort of depends upon the idea that we know what it means to say a thousand times as many people when we're dealing with infinite populations. So there has to be some way of cashing that out. And if you cash it out the right way, you might be able to formulate your axioms so that they work in the infinite case. The problem is that there's a fundamental ambiguity here. What exactly do you mean by a thousand times as many people?
TERUJI THOMAS: (34:03) Yeah. So I mean, I just meant it in the naive way that I said that you can group people into groups of a thousand and pair them up with groups of one and so on.
MARCUS: (34:09) You can always do that. That's the problem. So there has to be some rules about what sorts of groups are admissible, otherwise, that's almost like an empty statement.
TERUJI THOMAS: (34:17) Absolutely. There's more to say here, but this is the naive problem.
MARCUS: (34:21) Exactly.
TERUJI THOMAS: (34:22) Cool. So, we can restrict our principles. That's not a very satisfactory move without saying more. This is a dialectic that's very common in infinite ethics. So I don't think it necessarily speaks very strongly against what I'm doing as opposed to other ways you might approach this issue, but there's certainly some interesting work to be done there.
(34:43) Okay, conclusion. So I think these supervenience principles are pretty plausible as long as we're talking about beneficence. They allow us, I claim, to set aside some other considerations, maybe about equality or whatever. Even if they're not exactly right, I think it's useful to see how these kinds of formal results can work. Maybe they're a good starting place for thinking about these issues. What do they do? They allow us to reduce arbitrary choices to uncertainty-free or one-person choices, involving a multiverse or a veil of ignorance. They give some simple axiomatizations: we can think of part of what I did as giving an axiomatization of expected totalism, similar to Harsanyi's theorem. And they lead to some interesting conclusions about standard views, sometimes arguably amounting to objections, for example, the objection from fanaticism or the objection from anti-natalism. And finally, they flounder in infinite cases, like many other principles. Thanks a lot.