Doomsday and objective chance

Teruji Thomas (Global Priorities Institute, Oxford University)

GPI Working Paper No. 8-2021

Lewis’s Principal Principle says that one should usually align one’s credences with the known chances. In this paper I develop a version of the Principal Principle that deals well with some exceptional cases related to the distinction between metaphysical and epistemic modality. I explain how this principle gives a unified account of the Sleeping Beauty problem and chance-based principles of anthropic reasoning. In doing so, I defuse the Doomsday Argument that the end of the world is likely to be nigh.
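For orientation, Lewis’s Principal Principle is standardly stated as a constraint on an agent’s initial credence function. A minimal sketch in conventional notation (Cr for credence, ch for objective chance, E for admissible evidence; these symbols are the standard ones from the literature, not drawn from the paper itself):

\[
  \mathrm{Cr}\bigl(A \mid \mathrm{ch}(A) = x \wedge E\bigr) = x
\]

Here E counts as admissible, roughly, when it carries no information about A over and above what the chances already provide; the “usually” in the abstract reflects the need for some such proviso.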

Other working papers

‘The only ethical argument for positive 𝛿’? – Andreas Mogensen (Global Priorities Institute, Oxford University)

I consider whether a positive rate of pure intergenerational time preference is justifiable in terms of agent-relative moral reasons relating to partiality between generations, an idea I call discounting for kinship. I respond to Parfit’s objections to discounting for kinship, but then highlight a number of apparent limitations of this…

The unexpected value of the future – Hayden Wilkinson (Global Priorities Institute, University of Oxford)

Consider longtermism: the view that the morally best options available to us, in many important practical decisions, are those that provide the greatest improvements in the (ex ante) value of the far future. Many who accept longtermism do so because they accept an impartial, aggregative theory of moral betterness in conjunction with expected value theory. But such a combination of views implies absurdity if the (impartial, aggregated) value of humanity’s future is undefined…

Should longtermists recommend hastening extinction rather than delaying it? – Richard Pettigrew (University of Bristol)

Longtermism is the view that the most urgent global priorities, and those to which we should devote the largest portion of our current resources, are those that focus on ensuring a long future for humanity, and perhaps sentient or intelligent life more generally, and improving the quality of those lives in that long future. The central argument for this conclusion is that, given a fixed amount of resources that we are able to devote to global priorities, the longtermist’s favoured interventions have…