Economic inequality and the long-term future

Andreas T. Schmidt (University of Groningen) and Daan Juijn (CE Delft)

GPI Working Paper No. 4-2021, published in Politics, Philosophy & Economics

Why, if at all, should we object to economic inequality? Some central arguments – the argument from decreasing marginal utility, for example – invoke instrumental reasons and object to inequality because of its effects. Such instrumental arguments, however, often concern only the static effects of inequality and neglect its intertemporal consequences. In this article, we address this striking gap and investigate income inequality’s intertemporal consequences, including its potential effects on humanity’s (very) long-term future. Following recent arguments around future generations and so-called longtermism, those effects might arguably matter more than inequality’s short-term consequences. We assess whether we have instrumental reason to reduce economic inequality based on its intertemporal effects in the short, medium, and very long term. We find a good short- and medium-term instrumental case for lower economic inequality. We then argue, somewhat speculatively, that we also have instrumental reasons for inequality reduction from a longtermist perspective, because greater inequality could increase existential risk. We thus have instrumental reasons to reduce inequality, regardless of the time horizon we take. We then argue that, from most consequentialist perspectives, this pro tanto reason also gives us all-things-considered reason. And even across most non-consequentialist views in philosophy, this argument gives us either an all-things-considered or at least a weighty pro tanto reason against inequality.

Other working papers

It Only Takes One: The Psychology of Unilateral Decisions – Joshua Lewis (New York University) et al.

Sometimes, one decision can guarantee that a risky event will happen. For instance, it only took one team of researchers to synthesize and publish the horsepox genome, thus imposing its publication even though other researchers might have refrained for biosecurity reasons. We examine cases where everybody who can impose a given event has the same goal but different information about whether the event furthers that goal. …

On two arguments for Fanaticism – Jeffrey Sanford Russell (University of Southern California)

Should we make significant sacrifices to ever-so-slightly lower the chance of extremely bad outcomes, or to ever-so-slightly raise the chance of extremely good outcomes? Fanaticism says yes: for every bad outcome, there is a tiny chance of extreme disaster that is even worse, and for every good outcome, there is a tiny chance of an enormous good that is even better.

AI alignment vs AI ethical treatment: Ten challenges – Adam Bradley (Lingnan University) and Bradford Saad (Global Priorities Institute, University of Oxford)

A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that, if we create AI systems that merit moral consideration, simultaneously avoiding both would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching…