Economic inequality and the long-term future

Andreas T. Schmidt (University of Groningen) and Daan Juijn (CE Delft)

GPI Working Paper No. 4-2021, published in Politics, Philosophy & Economics

Why, if at all, should we object to economic inequality? Some central arguments – the argument from decreasing marginal utility, for example – invoke instrumental reasons and object to inequality because of its effects. Such instrumental arguments, however, often concern only the static effects of inequality and neglect its intertemporal consequences. In this article, we address this striking gap and investigate income inequality’s intertemporal consequences, including its potential effects on humanity’s (very) long-term future. Following recent arguments around future generations and so-called longtermism, those effects might arguably matter more than inequality’s short-term consequences. We assess whether we have instrumental reason to reduce economic inequality based on its intertemporal effects in the short, medium, and very long term. We find a good short- and medium-term instrumental case for lower economic inequality. We then argue, somewhat speculatively, that we have instrumental reasons for inequality reduction from a longtermist perspective too, because greater inequality could increase existential risk. We thus have instrumental reasons for reducing inequality, regardless of which time horizon we take. We then argue that, from most consequentialist perspectives, this pro tanto reason also gives us all-things-considered reason. And even across most non-consequentialist views in philosophy, this argument gives us either an all-things-considered reason or at least a weighty pro tanto reason against inequality.

Other working papers

When should an effective altruist donate? – William MacAskill (Global Priorities Institute, Oxford University)

Effective altruism is the use of evidence and careful reasoning to work out how to maximize positive impact on others with a given unit of resources, and the taking of action on that basis. It’s a philosophy and a social movement that is gaining considerable steam in the philanthropic world. For example,…

Philosophical considerations relevant to valuing continued human survival: Conceptual Analysis, Population Axiology, and Decision Theory – Andreas Mogensen (Global Priorities Institute, University of Oxford)

Many think that human extinction would be a catastrophic tragedy, and that we ought to do more to reduce extinction risk. There is less agreement on exactly why. If some catastrophe were to kill everyone, that would obviously be horrific. Still, many think the deaths of billions of people don’t exhaust what would be so terrible about extinction. After all, we can be confident that billions of people are going to die – many horribly and before their time – if humanity does not go extinct. …

Will AI Avoid Exploitation? – Adam Bales (Global Priorities Institute, University of Oxford)

A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure is instructive…