Simulation expectation
Teruji Thomas (Global Priorities Institute, University of Oxford)
GPI Working Paper No. 16-2021, published in Erkenntnis
I present a new argument that we are much more likely to be living in a computer simulation than in the ground level of reality. (Similar arguments can be marshalled for the view that we are more likely to be Boltzmann brains than ordinary people, but I focus on the case of simulations.) I explain how this argument overcomes some objections to Bostrom’s classic argument for the same conclusion. I also consider to what extent the argument depends upon an internalist conception of evidence, and I refute the common line of thought that finding many simulations being run—or running them ourselves—must increase the odds that we are in a simulation.
Other working papers
Consequentialism, Cluelessness, Clumsiness, and Counterfactuals – Alan Hájek (Australian National University)
According to a standard statement of objective consequentialism, a morally right action is one that has the best consequences. More generally, given a choice between two actions, one is morally better than the other just in case the consequences of the former action are better than those of the latter. (These are not just the immediate consequences of the actions, but the long-term consequences, perhaps until the end of history.) This account glides easily off the tongue—so easily that…
How much should governments pay to prevent catastrophes? Longtermism’s limited role – Carl Shulman (Advisor, Open Philanthropy) and Elliott Thornley (Global Priorities Institute, University of Oxford)
Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. …
The epistemic challenge to longtermism – Christian Tarsney (Global Priorities Institute, University of Oxford)
Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict— perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism…