Jeff Sebo | Artificial Sentience and the Ethics of Connected Minds
JEFF SEBO: (00:03) Great. So yes, as Hilary said, I want to talk about Artificial Sentience and the Ethics of Connected Minds. And as Hilary also mentioned, this is part of a new program that we have at NYU called the Mind, Ethics, and Policy Program. This program will launch in Fall '22, in a few months, and will examine the sentience, moral status, legal status, and political status of non-humans, including animals and artificial intelligences. So we want to be asking questions like: What kinds of minds can there be? Which kinds of minds can be sentient? How much welfare can they have? Are their lives good or bad in expectation? And how will the radically different types of minds that might exist in the future change how we think about ethics and law and policy? And of course, some initial questions concern moral, legal, and political circle expansion. How widely should we attribute sentience and welfare in the world? And what kind of moral weight should we give to different kinds of minds? But another type of question concerns what different types of minds might exist in the future, how they might relate to each other in different kinds of ways, and how that might unsettle our current, highly atomistic or individualistic ways of thinking about what we owe to each other. And so, this talk is going to be part of a project, just now getting started, to map that terrain, survey some of those questions, and see what types of answers might be available to them. And I should note that this talk is based on a collaborative project with Luke Roelofs, a philosopher at the NYU Center for Mind, Brain and Consciousness. He works on the metaphysics of connected minds and I have some past work on the ethics of connected minds. And so we want to try to put that together and then extend it, not only to the human case, but to the non-human case, including animals and AIs. And so what I want to do in this talk is, at a high level, map that terrain and indicate what some of the interesting types of mental connections might be and what metaphysical and ethical questions might arise when we explore those mental connections. What I'll do first is indicate what types of mental connections I have in mind. I'll then gesture at some of the metaphysical questions that they raise concerning consciousness, the self, and personal identity. And then I'll survey some of the ethical questions that they raise about welfare, about autonomy, about virtue, and about the units of moral analysis that we use when we figure out what we owe to each other. And I want to make, without fully establishing here, the following suggestion: moral properties and relations correspond to mental properties and relations. What I mean by that is that if minds can be connected, moral subjects can also be connected. And if parts and wholes and groups can all have minds at the same time, then they can all be moral subjects at the same time. They can all have selves, welfares, autonomy, virtues, and vices at the same time. And so if minds are much more fluid in the future, morality will have to be much more fluid in the future as well, in a way that makes much of our current moral discourse and practice potentially out-of-date moving forward. Okay.
(03:37) So first, what kinds of connected minds do I have in mind? I have two types of mental connections in mind here. One, Luke and I call networked minds, and the other we call combined minds. Networked minds, in our sense of the term, are multiple minds that share a direct mental connection. And what we mean by a direct mental connection is that they can share mental states relatively directly, in characteristically intrapersonal ways. For example, by depositing an experience in memory for later access, as opposed to performing an action which is then perceived and interpreted as a kind of communication. So, if two minds are connected that way, we call them networked. There are some examples of this, or there seem to be some examples of this, now. For example, some conjoined twins, two in particular called the Hogan twins, have a thalamic bridge that connects their brains. They report the ability to see through each other's eyes. One can control two arms and a leg; the other can control two legs and an arm. They can switch back and forth between joint and individual control of these limbs. They claim to be able to think each other's thoughts, and so on. That would be an example of the type of mental connection I have in mind here. Another example is octopuses. Did you know that each octopus has nine brains? They have a central brain and then a smaller peripheral brain in each arm, and they exhibit some integrated and some fragmented behavior, kind of like a federal government interacting with state governments. So they might have networked minds in this sense. And in the future, artificial intelligences, if and when they achieve consciousness and sentience, could have networked minds in this sense too. They, of course, are literally networked, or could be literally networked with each other, in a way that allows them to interact with each other in the same kind of way that the Hogan twins or octopuses seem to be able to do, but at a much greater scale.
(05:56) The other type of mental connection I have in mind is what we call combined minds. These are minds that share token mental states, either by overlapping or by being built out of each other. Here too, we have some precedent for thinking about combined minds. For an example of overlapping minds, think about dissociation in the human case. If you undergo a traumatic experience or some other kind of experience, you might encounter a certain kind of dissociation, where you compartmentalize some mental states from other mental states. You develop different alters, different sets of thoughts and feelings and dispositions that get triggered and activated in different environmental contexts. And if we imagine a circumstance where a person has different alters that are fragmented enough to clearly be different personalities in a certain sense, but still share some token beliefs or desires or dispositions, that would be, at least on the surface, an example of a combined mind in our sense of the term.
(07:15) Another, of course, is collective agency. If you and I take a walk together, we might have individual intentions that fit together in a way that jointly composes a shared intention. So, I intend to walk with you if you walk with me; you intend to walk with me if I walk with you. And these intentions fit together in a way that makes us a shared agent with an intention to walk. And so, that would be a case where we are separate agents and a shared agent at the same time. And we can imagine that in the future, minds can be combined like this again, but much more so. Maybe even by sharing conscious experiences at multiple levels, rather than merely intentional states at multiple levels. We can imagine that.
(08:07) So, this raises lots of questions, and among them are metaphysical ones. So obviously, one relevant question here concerns consciousness. How many connected minds might be possible in the future depends partly on what theory of consciousness is true. If we accept a relatively narrow theory of consciousness, like identity theory, which says only certain types of brains can be conscious, then that might limit the ways that minds can come together to form new kinds of conscious minds. But if we accept a relatively expansive theory of consciousness, like panpsychism, on which everything is conscious, then of course that significantly expands the ways minds can come together and form new conscious minds. And if we accept a middle-ground theory, like certain types of functionalism, on which anything put together in a certain kind of structure with a certain type of function can be a mind and maybe can have consciousness, then we would end up somewhere in the middle. So this is something to keep in mind.
(09:09) Other questions concern the self and personal identity. If minds can be connected, does that mean selves can be connected? Does that mean persons can be connected? And that, too, is going to depend on what theory of the self and what theory of personal identity you prefer. But let me mention one theory of the self that I think could be quite compatible with the idea of connected selves, or different selves at different levels. And this is Daniel Dennett's theory of the self as a center of narrative gravity. So think about centers of gravity for a second. A center of gravity is not a physical thing in the world. A center of gravity is an abstract idea, a useful fiction, the point in space around which your weight is evenly distributed. And we use this in some explanations because it is a simpler way of explaining, predicting, and controlling certain phenomena, like: when are you going to tip over? I can answer that question by thinking about where your center of gravity is located. But the interesting thing about centers of gravity is that parts, wholes, and groups can all have them at the same time. If my parts are distinct enough that sometimes it makes sense to ask when each one is going to fall on its own, then I can say what its center of gravity is, for purposes of answering that question. And if the members of a group are connected enough, say they all get together in a pile, then it makes sense to ask what the group's center of gravity is. So depending on the situation at hand, what needs to be explained, it could make sense to attribute a center of gravity to an individual, to parts of the individual, and to groups of individuals, all at the same time. Similarly, if my mind is made out of other, smaller minds that are distinct enough, it might sometimes make sense to explain their behavior by attributing mental states and mental dispositions to them as individual minds. And likewise, if you and I are a team and we do lots of stuff together, it might make sense to explain some of our joint behavior by attributing mental states to us as a pair or a group. So in the same kind of way that a center of gravity can exist at different levels at the same time, a center of narrative gravity, the simple abstract set of mental states around which all our actual fluctuating mental states are evenly distributed, can be attributed to parts and wholes and groups at the same time as well. So I'll come back to this, but I think this might be useful for capturing the complicated reality that we're soon going to encounter. Okay.
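To make the multi-level attribution in the center-of-gravity analogy concrete, here is a minimal sketch, with entirely made-up masses and positions, showing that the same weighted-average formula can be applied to a part, to the whole it belongs to, and to a group of wholes at the same time. The function name and all numbers are hypothetical and purely illustrative.

```python
# Hypothetical sketch: the same formula assigns a center of gravity to a
# part, to a whole made of parts, and to a group of wholes.

def center_of_mass(bodies):
    """Weighted average position of a list of (mass, position) pairs."""
    total_mass = sum(m for m, _ in bodies)
    return sum(m * x for m, x in bodies) / total_mass

# Parts of one individual (made-up masses in kg, positions in meters).
arm = [(1.0, 0.2), (1.5, 0.5)]
torso = [(30.0, 1.0), (10.0, 1.2)]

# The whole individual is just the union of its parts.
individual = arm + torso

# A group is the union of several individuals (a second one, shifted over).
other_individual = [(m, x + 2.0) for m, x in individual]
group = individual + other_individual

print(center_of_mass(arm))         # center of gravity of a part
print(center_of_mass(individual))  # center of gravity of the whole
print(center_of_mass(group))       # center of gravity of the group
```

The same quantity is meaningfully attributable at all three levels at once; which attribution is useful depends on what we are trying to explain or predict.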
(12:04) Now, I'm not going to try to resolve these metaphysical questions, but I will say, for my purposes, I'll assume that some kind of middle ground theory of consciousness like a functionalist theory of consciousness is true and I'll assume that Derek Parfit is right and that we should ask and answer metaphysical and normative questions separately. So whatever theory of the self, whatever theory of personal identity is true on the metaphysics side, we should treat it as an open question, whether that is the appropriate unit of analysis on the ethics side. And I think the answer to that will depend on what the theory is. But the one I just mentioned, I think, is promising if we want ethical relevance in our metaphysical theories. Okay.
(12:55) Let me survey some ethics questions that I think are going to be important, some of which I think are going to be especially important from an effective altruist or longtermist perspective in particular.
(13:17) First, questions about welfare, benefits, and harms. When minds are connected, are welfares connected? Do they share benefits and harms? This is one question. And briefly, I think that will depend on the nature of the mental connection. If two minds share a token welfare state, if two minds have access to the same unit of pleasure, then it makes sense to say that they both benefit directly from that pleasure. But if two minds are connected via other mental states, then it makes sense to say that they have separate welfares. Of course, they might still be benefited and harmed at the same time, in virtue of their especially intimate relationship and shared circumstances, especially if they care about each other and want each other to be happy, and so on. But I think the more important question from an effective altruist or longtermist perspective concerns the amounts of welfare we can expect in a world of connected minds. Take two minds and now suppose that we connect those minds. Should we think that there is more or less welfare in total than there was before we connected them? I can imagine different ways of thinking about this. On one hand, you might think, yes, maybe when minds are connected, it unlocks certain kinds of amplification effects, more intense pleasurable experiences because of a certain type of interaction that can occur across the minds. Or you might think, no, this actually has a suppression effect, because mental energy is being diverted towards maintaining connections that could otherwise have been spent on realizing pleasure states directly. And this welfare question involves a normative question too, which is: suppose there is a single unit of pleasure shared by two minds, and we want to ask how many units of pleasure there are in the aggregate. Should we say one unit of pleasure? Should we single count it, even though two minds have access to it and benefit from it? Or should we double count it and say, "Look, if there's a single unit of pleasure here and two minds have access to it and benefit from it, actually, there are two units of pleasure here, not one"? Obviously, which of these answers we select is going to very significantly shape how much welfare, positive or negative, we take there to be in the aggregate, if minds are connected in these types of ways. Now, I think it probably makes sense to only single count these welfare states, even if multiple minds have access to them, but to pay attention to possible amplification effects. If two minds have access to one unit of pleasure and they both benefit from it, and then they both think, "Wow! This pleasure is really great. It makes me so happy and satisfied to have this pleasurable state," maybe that pleasure is unlocking further positive states within each mind, and maybe that adds more pleasure to the world. But the original unit of pleasure itself probably should only be single counted. This is, at least, my current way of thinking about it. But these questions are really interesting and hard for me to wrap my mind around right now, so I'd love to hear if you have thoughts about them.
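To make the single-counting versus double-counting question concrete, here is a minimal sketch with entirely made-up numbers. It is not a model of welfare; it only shows how the two bookkeeping choices, plus the kind of amplification effect just described, diverge in the aggregate. All variable names and values are hypothetical.

```python
# Hypothetical illustration: one token pleasure state (value 1.0) is shared
# by two minds; each mind may also gain a further "amplification" bonus.

shared_pleasure = 1.0          # value of the single token state
minds_with_access = 2          # both minds experience the same token state
amplification_per_mind = 0.3   # made-up extra pleasure each mind derives
                               # from reflecting on the shared state

# Double counting: count the token state once per mind that has access.
double_counted = shared_pleasure * minds_with_access

# Single counting (the view suggested above): count the token state once,
# but still add any genuinely distinct downstream states it produces.
single_counted = shared_pleasure + amplification_per_mind * minds_with_access

print("double counted:", double_counted)   # 2.0
print("single counted:", single_counted)   # 1.6
```

On the single-counting view, connection adds welfare only via genuinely new states in each mind, not by multiplying the shared state itself.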
(16:59) Let me now turn to a couple of other types of ethical questions. One concerns autonomy and the duties we have to each other and the rights we have against each other. If two minds are connected, do they still have characteristically other-regarding moral duties to each other, a duty to live and let live, to respect each other's autonomy, and so on and so forth? Or at a certain point, do they only have characteristically self-regarding duties to each other, like a duty to work together to pursue their moral perfection? Now, most philosophers, or at least many philosophers, think that once a certain degree of mental or psychological connectedness or continuity is achieved, other-regarding duties disappear. We are now the same unit of moral analysis, the same unit of moral concern. And so, we should be treated as one and not as two. So we might have obligations to others, but no longer to each other. But for reasons that we can discuss more in the Q&A, if you like, I think that this is the wrong way of thinking about it. I think no matter how intimately connected two minds become, if they still, to some degree, remain separate minds with separate beliefs, values, intentions, and perspectives, they still have these characteristically other-regarding duties to each other. Though, the more intimately connected they are, the more psychologically connected they are, the more they might do things like share beliefs, share values, share intentions, and jointly constitute themselves as a shared agent with a further set of shared moral obligations. So the way that I think about it is, suppose my day-self wants to not drink tonight but my night-self is going to want to drink tonight. The question is, is my day-self morally permitted to use coercion and physical restraint and all of these self-binding mechanisms in order to prevent my night-self from drinking? Or is my day-self required to respect my night-self's autonomy, and to coordinate and compromise with my night-self, and vice versa? I think the latter is the case. I think that if these different selves disagree about what to do, no matter how bodily and psychologically and narratively connected they are, they still have a duty to consider each other's interests and perspectives when deciding what to do, to seek to compromise and coordinate where possible, and to use methods like coercion or physical restraint only when they would truly be justified, and as a last resort. So I actually think that they have a duty to come up with a compromise, as opposed to imposing their will on each other by any means necessary. But not everybody agrees with me about that.
(20:12) Now, similarly, if two minds are connected, does that mean that they share duties to other people and share rights against other people? Or do they still have separate duties to other people and separate rights against other people? So suppose my night-self is morally negligent and goes out drinking, in spite of my day-self's request that they take it easy. And then they end up doing things that my day-self would not have endorsed, would not have done. Maybe my night-self cheats on my partner and my day-self wakes up and is like, "Oh, my God! How could that have happened? I never would have done that. Why did you do that?" Is your day-self responsible for your night-self's decision to cheat on your partner? I think yes and no. To the degree that these count as separate subjects, there is a respect in which your day-self is not blameworthy for your night-self's decision. They did not do this and they would not have done this, all things considered. But there are other respects in which your day-self might be accountable for your night-self's behavior. Maybe they're indirectly responsible if they acted in ways that foreseeably produced this behavior. Maybe they're somewhat criticizable in light of this behavior if they share some dispositions that are revealed by the behavior. Maybe they're accountable in the same kind of way that a president is accountable on behalf of the nation. If a past president did a terrible thing, and now I become president, I might have a responsibility to apologize on behalf of the nation in virtue of my role as representative of the nation, even if everyone knows I was not the person who made the decision. So you can say, at one and the same time, you are not blameworthy for the decision, but you would be blameworthy for not apologizing for the decision. You need to take responsibility for it because you represent the nation whose past representative did this thing. We can say similar things in this kind of case. So you might not be blameworthy, might not be the one who did it, but in virtue of these connections, you might still be on the hook in these other, indirect or weaker, ways.
(22:32) Now similarly, think about questions involving virtue. We think that moral subjects, moral agents, have virtues, they have vices. We understand these as character traits or dispositions to behave in certain ways. And we evaluate each other in part in reference to each other's virtues and vices. And so we can once again ask, in the same kind of way as we ask, do connected minds have connected welfares? And in the same kind of way that we ask, do connected minds have connected autonomy? We can ask, do connected minds have connected virtues?
(23:10) And this is where I think these questions about the self potentially become relevant again. And these questions about character potentially become relevant again. And this is why I really like this idea of thinking about the self as a kind of center of narrative gravity or a kind of center of psychological gravity. Because, again, the idea is that there is no such thing as the self inside of me. If you break open my brain, you are not going to find the self in any particular location, just like if you break open the rest of my body, you will not find my center of gravity in any particular location. This is an abstract object, a useful fiction that we use in order to provide simple, comprehensible explanations and predictions regarding complex, dynamic phenomena.
(24:11) But it is useful, and again, attributing different centers of gravity at different levels can be useful for different explanatory purposes. Attributing a center of gravity to my arm can be useful when asking how my arm will behave. Attributing one to me can be useful when asking how I'll behave. And attributing one to a group can be useful when that group is, say, making a pyramid or something like that, for asking how the group will behave, at what point the group will tip over. And similarly, return again to a case where I am fragmented in a certain way. I have this day-self with one clearly distinct set of dispositions and a night-self with another clearly distinct set of dispositions. For some explanatory purposes, it might be useful to distinguish these. If my day-self is, say, 80% honest and my night-self is 70% honest, it might be good to be able to say, "I can trust you a little bit more. You are a little bit more trustworthy during the day, when sober," and so on and so forth with other types of dispositions. That kind of fine-grained way of explaining and predicting and controlling each other's behavior can often be useful, not only for explanatory purposes, but also for evaluative purposes. But it can also, at the same time, be important to be able to assess the person as a whole, because we interact primarily as persons right now. Maybe if all you are is my buddy, all you need to think about is, "Can I rely on the version of you I normally encounter?" But if you want to be my colleague or my friend or my partner, who sees me in all of my sides, you need to know, "To what degree can I rely on the whole person?" If I cheat on my partner because my night-self is not very reliable, that might suck for my day-self, because my day-self is trapped in this body with a night-self who is unreliable. But my partner needs to be with the whole person, with the whole human being. And so to some degree, in some cases, they can say, I know this part of you is more reliable and this part of you is less reliable, but at the end of the day, the main decision they need to make is: do I want to build a life with this person? And for that, they need to attribute character states and dispositions to the person as a whole, by taking some kind of average or by putting together the dispositions of the various personalities in some other way... Just like the way nations need to interact as nations. Maybe I trust this political party a little bit more than that one, but I know they both control the nation sometimes, and the question I need to ask of this other nation is: do I trust the nation over a 20-year time horizon to respect this treaty? And even if I trust this political party, if I don't trust that one, and I know that they're going to be in control sometimes, then perhaps I shouldn't trust the nation as a whole too much either, and I should be wary about entering into this treaty with the nation. We can think about persons in the same kind of way, and more generally connected minds in the same kind of way.
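As a purely illustrative sketch of assessing the whole person, or the whole nation, from its parts, here is one way the arithmetic could go, using the made-up 80% and 70% honesty figures from above and hypothetical weights for how often each self is in control. The point is only that part-level and whole-level evaluations can coexist, not that this weighted average is the right way to combine them.

```python
# Hypothetical sketch: part-level reliabilities combined into a whole-level
# estimate, weighted by how often each part is "in control".

selves = {
    "day_self":   {"honesty": 0.80, "time_in_control": 0.7},
    "night_self": {"honesty": 0.70, "time_in_control": 0.3},
}

# Part-level question: how much can I rely on the self I usually encounter?
print("day-self honesty:", selves["day_self"]["honesty"])

# Whole-level question: how much can I rely on the whole person over time?
whole_person_honesty = sum(
    s["honesty"] * s["time_in_control"] for s in selves.values()
)
print("whole-person honesty:", round(whole_person_honesty, 2))  # 0.77

# A more cautious partner, or a nation deciding whether to sign a treaty,
# might instead look at the least reliable part that is sometimes in control.
worst_case_honesty = min(s["honesty"] for s in selves.values())
print("worst-case honesty:", worst_case_honesty)  # 0.7
```

Which of these numbers matters depends on the relationship in question: a casual acquaintance may only need the part-level figure, while a partner or treaty signatory plausibly needs one of the whole-level figures.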
(27:35) And so I want to close by making a suggestion about moral explanation, moral discourse and practice, and the units of moral analysis that might be relevant in the future. As I've said, up until now, more or less, we have treated persons as the primary units of moral analysis, the units of moral analysis that we default to in everyday life, in everyday interactions and moral explanations and predictions, and so on and so forth. We treat persons, individuals, as the subjects who can have welfares, who can have autonomy, duties and rights, who can have virtues and vices and other morally relevant character traits. But I think, in a world of connected minds, if we do face a future with not only humans and non-human animals, but also artificial intelligences whose minds can be networked, whose minds can be combined, which can contain each other or be contained by each other, the more that becomes the default, the more we might need to deviate from the person as the primary, that is, the default, unit of moral analysis.
(28:57) So an analogy that I think is useful... And this was an analogy that was inspired by a reading group at GPI this past term. An analogy that I find useful involves global governance. So in the world of international relations, a trend over the past 50 years or so is that we used to default to the nation state as the primary unit of political analysis. We understood, we interpreted what happens in the world primarily by thinking about the motivations and interactions of nation states. But then globalization started, the Cold War ended, markets shifted, all kinds of things happened, and people had to step back and take that all in. And then this new global governance literature arose and people recognized, "Oh, wait! This is actually much more complicated." If we truly want to understand what happens in the world, and if we truly want to evaluate what happens in the world, then we have to be tracking national actors, supranational or international actors, multinational actors, actors that straddle nations, all of these at the same time. What happens in the world is a product of the local, the national, the international level, and of actors who don't fit neatly into any of those categories, all at the same time. And only when we study all of that at once can we truly explain and predict and control and evaluate what happens in international politics. And now that globalization has made this salient to us, we can look back and appreciate that this has actually been going on all along. It was never the right thing to do to simply treat nation states as the primary units of political analysis. Local, international, corporate, and private actors always had a role to play. We just sort of understated that. We neglected it in our prior analyses, and that weakened our prior analyses. So, it's not as though a new kind of actor arose, necessarily. It's that a kind of actor that has always in some form existed arose much more prominently and saliently, in a way that made us take notice, shift and complicate our explanations, and then look back and say, "Ah yes. I can see them in the past too."
(31:38) So I think the same is going to happen with our moral analyses in a world of connected minds. When we have minds that are connected, such that many more of them are like the multiple-personality cases that I was describing, combined or overlapping minds that have a lot in common, that share intentions and share agency a lot, but that still diverge and fight with each other sometimes, the role of those sub-personal part actors becomes more salient, more prominent, more clearly worthy of independent moral evaluation, in some contexts. And the more that persons as wholes, individuals, come together in this more intimate way and form groups that often think and act as one, the more the group level becomes prominent and salient, in a way that makes us take notice and focus on it more. And then what happens is, we look back and think, "Ah, okay. Yeah. This was happening all along." And we were understating it. We were neglecting it. We thought about groups a little bit in the collective agency literature. We thought about parts a little bit in the personal identity literature, but we always treated them as edge cases. And now we appreciate, maybe this was always going on, but we were ignoring it because we were sort of simplistically and reductively assuming that only individuals can be bearers of welfare and autonomy and duties and rights and virtues and vices and character. So that's my suggestion.
(33:13) So again, this has only been a survey of questions that I think are going to become interesting and important as we really take seriously the full range of minds that might exist in the future and the full range of relations or connections that those minds might have in the future. I think we will realize that we represent only a tiny sliver of the range of possible minds. Many more are going to come online. They are going to be much more fluid and dynamic than ours currently are. And that will require morality to become more fluid and dynamic than it currently is. And so we will have to move away from this atomistic, individualistic way of thinking about the units of moral analysis. We will have to acknowledge that moral subjects can be connected in the same kind of way that minds can be connected, and that parts, wholes, and groups can all be moral subjects in the same way that they can all have minds, if in fact they can. And then we will recognize, okay, we now live in a kind of globalized moral world where moral explanations have to track all of these subjects and their welfares and duties and rights and virtues and vices at the same time, in a way that is much more complex, but much more true to reality. And then, looking back, we will realize, yeah, it was always that way, not just with conjoined twins and octopuses, but with all of us. We didn't quite appreciate that, but we will at that point.
(34:54) So that's my suggestion. But as I said, this is only the start of a project that we started thinking about a few weeks ago. So this is still very early days, and I'd love to hear your questions, your comments, your suggestions, topics that we could be looking at more, possible views about this topic, arguments for or against those views that we should be thinking about more, or pushback against the type of view that I was proposing. I'd love to hear any of that. So, thank you for listening.