All Videos

Theron Pummer | Future Suffering and the Non-Identity Problem

If we dramatically reduced our carbon emissions, the quality of life of future people would be much higher than it would be if we carried on with business as usual. Nonetheless, because adopting a widespread policy of reducing emissions would affect the timings of conceptions and thus the identities of who would come to exist, it is likely that after a century or so none of the particular people who would exist if we carried on as usual would exist if we instead dramatically reduced our emissions. Reducing emissions may therefore be better for no particular future person. Are we nonetheless morally required to reduce our emissions, and, if so, on what basis? This is one instance of the non-identity problem, made famous by Derek Parfit. Drawing upon the distinction between morally requiring reasons and morally justifying reasons, I provide a new solution to the non-identity problem. According to my solution, we can be morally required to ensure that the quality of life of future people is higher rather than lower insofar as this involves reducing future suffering (negative welfare). Indeed, we are often morally required to do this. We can be morally required to reduce future suffering in this way even when it is not better for any particular future person and even when future people would have lives worth living regardless of what we do. However, we are never morally required to ensure that the quality of life of future people is higher rather than lower insofar as this involves merely increasing future happiness (positive welfare). My solution to the non-identity problem captures the procreation asymmetry while avoiding implausible forms of antinatalism. It has important implications for global priority setting.

Daron Acemoglu | Reclaiming humanity in the age of AI

This talk will argue that human agency – the ability of humans to make decisions that shape their lives and environments – is a fundamental value, and that it is under two related threats: (1) a growing emphasis on a single dimension of human talent, centered on analytical skills and college-level education; (2) a perspective and practice of digital technologies and AI that sideline humans. The philosophical foundations of these two threats are mutually reinforcing. Together they have produced growing gaps in economic outcomes, status, and political voice between college and non-college workers in the industrialized world. The next stage of AI looks set to exacerbate these trends by prioritizing AGI and excessive automation while limiting autonomous human decision-making. I will also outline how a different trajectory of technological change in AI could re-energize human agency.
