The long-run relationship between per capita incomes and population size

Maya Eden (University of Zurich) and Kevin Kuruc (Population Wellbeing Initiative, University of Texas at Austin)

GPI Working Paper No. 29-2024

The relationship between human population size and per capita incomes has long been debated. Two competing forces feature prominently in these discussions. On the one hand, a larger population means that limited natural resources must be shared among more people. On the other hand, more people mean more innovation and faster technological progress, other things equal. We study a model that features both of these channels. A calibration suggests that, in the long run, (marginal) increases in population would likely lead to (marginal) increases in per capita incomes.
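The trade-off between the two channels can be illustrated with a generic sketch (not the authors' calibrated model): combine a fixed-factor production function with a semi-endogenous ideas sector in the style of Jones, and ask how long-run per capita income varies with population. Here $\alpha$, $\lambda$, and $\phi$ are illustrative parameters, not values from the paper.

```latex
% Production with ideas A, labor L, and a fixed natural resource R:
%   Y = A L^{1-\alpha} R^{\alpha},
% so per capita income is
%   y \equiv Y/L = A \,(R/L)^{\alpha}.
%
% In a semi-endogenous growth model, the long-run stock of ideas
% scales with population as A^{*} \propto L^{\lambda/(1-\phi)}.
% Substituting and taking logs:
%   \ln y = \frac{\lambda}{1-\phi}\,\ln L \;-\; \alpha \ln L \;+\; \text{const},
% which gives the long-run elasticity of income with respect to population:
%   \frac{d \ln y}{d \ln L} \;=\; \frac{\lambda}{1-\phi} \;-\; \alpha.
```

Under this sketch, a marginal increase in population raises long-run per capita income whenever the innovation elasticity $\lambda/(1-\phi)$ exceeds the resource share $\alpha$, which is the condition the paper's calibration result suggests holds in practice.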

Other working papers

Choosing the future: Markets, ethics and rapprochement in social discounting – Antony Millner (University of California, Santa Barbara) and Geoffrey Heal (Columbia University)

This paper provides a critical review of the literature on choosing social discount rates (SDRs) for public cost-benefit analysis. We discuss two dominant approaches, the first based on market prices, and the second based on intertemporal ethics. While both methods have attractive features, neither is immune to criticism. …

Against Willing Servitude: Autonomy in the Ethics of Advanced Artificial Intelligence – Adam Bales (Global Priorities Institute, University of Oxford)

Some people believe that advanced artificial intelligence systems (AIs) might, in the future, come to have moral status. Further, humans might be tempted to design such AIs so that they serve us, carrying out tasks that make our lives better. This raises the question of whether designing AIs with moral status to be willing servants would problematically violate their autonomy. In this paper, I argue that it would in fact do so.

In search of a biological crux for AI consciousness – Bradford Saad (Global Priorities Institute, University of Oxford)

Whether AI systems could be conscious is often thought to turn on whether consciousness is closely linked to biology. The rough thought is that if consciousness is closely linked to biology, then AI consciousness is impossible, and if consciousness is not closely linked to biology, then AI consciousness is possible—or, at any rate, it’s more likely to be possible. A clearer specification of the kind of link between consciousness and biology that is crucial for the possibility of AI consciousness would help organize inquiry into…