How cost-effective are efforts to detect near-Earth objects?

Toby Newberry (Future of Humanity Institute, University of Oxford)

GPI Technical Report No. T1-2021

Near-Earth objects (NEOs) include asteroids and comets with orbits that bring them into close proximity with Earth. NEOs are well known to have impacted Earth in the past, sometimes to catastrophic effect. Over the past few decades, humanity has taken steps to detect any NEOs on impact trajectories, and, in doing so, we have significantly improved our estimate of the risk that an impact will occur over the next century. This report estimates the cost-effectiveness of such detection efforts. The remainder of this section sets out the context of the report...

Other working papers

Longtermist political philosophy: An agenda for future research – Jacob Barrett (Global Priorities Institute, University of Oxford) and Andreas T. Schmidt (University of Groningen)

We set out longtermist political philosophy as a research field. First, we argue that the standard case for longtermism is more robust when applied to institutions than to individual action. This motivates “institutional longtermism”: when building or shaping institutions, positively affecting the value of the long-term future is a key moral priority. Second, we briefly distinguish approaches to pursuing longtermist institutional reform along two dimensions: such approaches may be more targeted or more broad, and more urgent or more patient.

AI takeover and human disempowerment – Adam Bales (Global Priorities Institute, University of Oxford)

Some take seriously the possibility of AI takeover, where AI systems seize power in a way that leads to human disempowerment. Assessing the likelihood of takeover requires answering empirical questions about the future of AI technologies and the context in which AI will operate. In many cases, philosophers are poorly placed to answer these questions. However, some prior questions are more amenable to philosophical techniques. What does it mean to speak of AI empowerment and human disempowerment? …

Aggregating Small Risks of Serious Harms – Tomi Francis (Global Priorities Institute, University of Oxford)

According to Partial Aggregation, a serious harm can be outweighed by a large number of somewhat less serious harms, but can outweigh any number of trivial harms. In this paper, I address the question of how we should extend Partial Aggregation to cases of risk, and especially to cases involving small risks of serious harms. I argue that, contrary to the most popular versions of the ex ante and ex post views, we should sometimes prevent a small risk that a large number of people will suffer serious harms rather than prevent…