The freedom of future people
Andreas T Schmidt (University of Groningen)
GPI Working Paper No. 10-2023
What happens to liberal political philosophy if we consider not only the freedom of present people but also that of future people? In this article, I explore the case for long-term liberalism: freedom should be a central goal, and we should often be particularly concerned with effects on long-term future distributions of freedom. I provide three arguments. First, liberals should be long-term liberals: liberal arguments to value freedom give us reason to be (particularly) concerned with future freedom, including freedom in the far future. Second, longtermists should be liberals, particularly under conditions of empirical and moral uncertainty. Third, long-term liberalism plausibly justifies some restrictions on the freedom of existing people to secure the freedom of future people, for example when mitigating climate change. At the same time, it likely avoids excessive trade-offs: for both empirical and philosophical reasons, long-term and near-term freedom show significant convergence. Throughout, I also highlight important practical implications, for example for longtermist institutional action, climate change, human extinction, and global catastrophic risks.
Other working papers
The weight of suffering – Andreas Mogensen (Global Priorities Institute, University of Oxford)
How should we weigh suffering against happiness? This paper highlights the existence of an argument from intuitively plausible axiological principles to the striking conclusion that in comparing different populations, there exists some depth of suffering that cannot be compensated for by any measure of well-being. In addition to a number of structural principles, the argument relies on two key premises. The first is the contrary of the so-called Reverse Repugnant Conclusion…
AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)
Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …
Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)
A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…