Economic inequality and the long-term future

Andreas T. Schmidt (University of Groningen) and Daan Juijn (CE Delft)

GPI Working Paper No. 4-2021, published in Politics, Philosophy & Economics

Why, if at all, should we object to economic inequality? Some central arguments – the argument from decreasing marginal utility, for example – invoke instrumental reasons and object to inequality because of its effects. Such instrumental arguments, however, often concern only the static effects of inequality and neglect its intertemporal consequences. In this article, we address this striking gap and investigate income inequality’s intertemporal consequences, including its potential effects on humanity’s (very) long-term future. Following recent arguments around future generations and so-called longtermism, those effects might arguably matter more than inequality’s short-term consequences. We assess whether we have instrumental reason to reduce economic inequality based on its intertemporal effects in the short, medium, and very long term. We find a good short- and medium-term instrumental case for lower economic inequality. We then argue, somewhat speculatively, that we have instrumental reasons for inequality reduction from a longtermist perspective too, because greater inequality could increase existential risk. We thus have instrumental reasons for reducing inequality, regardless of which time horizon we take. We then argue that from most consequentialist perspectives, this pro tanto reason also gives us all-things-considered reason. And even across most non-consequentialist views in philosophy, this argument gives us either an all-things-considered reason or at least a weighty pro tanto reason against inequality.

Other working papers

The asymmetry, uncertainty, and the long term – Teruji Thomas (Global Priorities Institute, University of Oxford)

The Asymmetry is the view in population ethics that, while we ought to avoid creating additional bad lives, there is no requirement to create additional good ones. The question is how to embed this view in a complete normative theory, and in particular one that treats uncertainty in a plausible way. After reviewing…

AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)

Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …

Evolutionary debunking and value alignment – Michael T. Dale (Hampden-Sydney College) and Bradford Saad (Global Priorities Institute, University of Oxford)

This paper examines the bearing of evolutionary debunking arguments – which use the evolutionary origins of values to challenge their epistemic credentials – on the alignment problem, i.e. the problem of ensuring that highly capable AI systems are properly aligned with values. Since evolutionary debunking arguments are among the best empirically-motivated arguments that recommend changes in values, it is unsurprising that they are relevant to the alignment problem. However, how evolutionary debunking arguments…