A non-identity dilemma for person-affecting views
Elliott Thornley (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 6-2024
Person-affecting views state that (in cases where all else is equal) we’re permitted but not required to create people who would enjoy good lives. In this paper, I present an argument against every possible variety of person-affecting view. The argument is a dilemma over trilemmas. Narrow person-affecting views imply a trilemma in a case that I call ‘Expanded Non-Identity.’ Wide person-affecting views imply a trilemma in a case that I call ‘Two-Shot Non-Identity.’ One plausible practical upshot of my argument is that we, as individuals, and our governments should be doing more to reduce the risk of human extinction this century.
Other working papers
AI alignment vs AI ethical treatment: Ten challenges – Adam Bradley (Lingnan University) and Bradford Saad (Global Priorities Institute, University of Oxford)
A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that, if we create AI systems that merit moral consideration, simultaneously avoiding both dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching…
Measuring AI-Driven Risk with Stock Prices – Susana Campos-Martins (Global Priorities Institute, University of Oxford)
We propose an empirical approach to identify and measure AI-driven shocks based on the co-movements of relevant financial asset prices. For that purpose, we first calculate the common volatility of the share prices of major US AI-relevant companies. Then we isolate the events that shake only this industry from those that shake all sectors of economic activity at the same time. For the sample analysed, AI shocks are identified when there are announcements about (mergers and) acquisitions in the AI industry, launching of…
High risk, low reward: A challenge to the astronomical value of existential risk mitigation – David Thorstad (Global Priorities Institute, University of Oxford)
Many philosophers defend two claims: the astronomical value thesis that it is astronomically important to mitigate existential risks to humanity, and existential risk pessimism, the claim that humanity faces high levels of existential risk. It is natural to think that existential risk pessimism supports the astronomical value thesis. In this paper, I argue that precisely the opposite is true. Across a range of assumptions, existential risk pessimism significantly reduces the value of existential risk mitigation…