The freedom of future people

Andreas T Schmidt (University of Groningen)

GPI Working Paper No. 10-2023

What happens to liberal political philosophy if we consider not only the freedom of present people but also that of future people? In this article, I explore the case for long-term liberalism: freedom should be a central goal, and we should often be particularly concerned with effects on long-term future distributions of freedom. I provide three arguments. First, liberals should be long-term liberals: liberal arguments for valuing freedom give us reason to be (particularly) concerned with future freedom, including freedom in the far future. Second, longtermists should be liberals, particularly under conditions of empirical and moral uncertainty. Third, long-term liberalism plausibly justifies some restrictions on the freedom of existing people to secure the freedom of future people, for example when mitigating climate change. At the same time, it likely avoids excessive trade-offs: for both empirical and philosophical reasons, long-term and near-term freedom show significant convergence. Throughout, I also highlight important practical implications, for example regarding longtermist institutional action, climate change, human extinction, and global catastrophic risks.

Other working papers

Intergenerational experimentation and catastrophic risk – Fikri Pitsuwan (Center of Economic Research, ETH Zurich)

I study an intergenerational game in which each generation experiments on a risky technology that provides private benefits but may also cause a temporary catastrophe. I find a folk-theorem-type result: there is a continuum of equilibria. Compared to the socially optimal level, some equilibria exhibit too much experimentation, while others exhibit too little. The reason is that the payoff externality causes preemptive experimentation, while the informational externality leads to more caution…

Doomsday rings twice – Andreas Mogensen (Global Priorities Institute, University of Oxford)

This paper considers the argument according to which, because we should regard it as a priori very unlikely that we are among the most important people who will ever exist, we should increase our confidence that the human species will not persist beyond the current historical era, which seems to represent…

AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)

Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …