Consciousness makes things matter
Andrew Y. Lee (University of Toronto)
GPI Working Paper No. 25-2024, forthcoming in Philosophers' Imprint
This paper argues that phenomenal consciousness is what makes an entity a welfare subject, or the kind of thing that can be better or worse off. I develop and motivate this view, and then defend it from objections concerning death, non-conscious entities that have interests (such as plants), and conscious subjects that necessarily have welfare level zero. I also explain how my theory of welfare subjects relates to experientialist and anti-experientialist theories of welfare goods.
Other working papers
The end of economic growth? Unintended consequences of a declining population – Charles I. Jones (Stanford University)
In many models, economic growth is driven by people discovering new ideas. These models typically assume either a constant or growing population. However, in high-income countries today, fertility is already below its replacement rate: women are having fewer than two children on average. It is a distinct possibility, highlighted in the recent book Empty Planet, that global population will decline rather than stabilize in the long run. …
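As a minimal sketch of the mechanism at issue, consider an idea-driven growth specification in the spirit of the models the abstract describes (the functional form, symbols, and parameter values here are illustrative assumptions, not necessarily the paper's exact setup). Let the stock of ideas A_t grow with the research population L_t:

\[ \dot{A}_t = \alpha L_t A_t^{\phi}, \qquad \phi < 1. \]

If population declines exponentially, L_t = L_0 e^{nt} with n < 0, integrating gives

\[ A_t^{1-\phi} = A_0^{1-\phi} + \frac{(1-\phi)\,\alpha L_0}{n}\left(e^{nt} - 1\right), \]

which converges to a finite limit as t → ∞: the stock of ideas, and with it living standards in such models, stagnates rather than growing without bound.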
Economic growth under transformative AI – Philip Trammell (Global Priorities Institute, Oxford University) and Anton Korinek (University of Virginia)
Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital’s substitutability for labor…
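One standard way to formalize "capital's substitutability for labor" is a CES production function (an illustrative assumption here, not necessarily the specification the paper itself adopts):

\[ Y = \left(\alpha K^{\rho} + (1-\alpha) L^{\rho}\right)^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho}, \]

where \sigma is the elasticity of substitution between capital K and labor L. As \rho → 1 (so \sigma → ∞), capital becomes a perfect substitute for labor, and accumulable capital can stand in for scarce labor; transformative AI can then be modeled as an increase in \sigma.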
In defence of fanaticism – Hayden Wilkinson (Australian National University)
Consider a decision between: (1) a certainty of a moderately good outcome, such as one additional life saved; (2) a lottery that probably gives a worse outcome, but has a tiny probability of a far better outcome (perhaps trillions of blissful lives created). Which is morally better? Expected value theory (with a plausible axiology) judges (2) as better, no matter how tiny its probability of success. But this seems fanatical. So we may be tempted to abandon expected value theory…
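To see why expected value theory delivers this verdict for any positive probability, here is a worked instance using the abstract's own figures (the specific probability is chosen for illustration): the sure option has expected value EV(1) = 1 life saved. If the lottery yields V blissful lives with probability p and nothing otherwise, then

\[ EV(2) = p \times V > EV(1) = 1 \quad \text{whenever} \quad V > 1/p. \]

For instance, with p = 10^{-9} and V = 10^{12} (a trillion blissful lives), EV(2) = 10^{3} lives, a thousand times the sure option; and for any smaller p, a correspondingly larger V restores the same ranking.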