How cost-effective are efforts to detect near-Earth objects?
Toby Newberry (Future of Humanity Institute, University of Oxford)
GPI Technical Report No. T1-2021
Near-Earth objects (NEOs) include asteroids and comets with orbits that bring them into close proximity with Earth. NEOs are well known to have impacted Earth in the past, sometimes to catastrophic effect. Over the past few decades, humanity has taken steps to detect any NEOs on impact trajectories, and, in doing so, we have significantly improved our estimate of the risk that an impact will occur over the next century. This report estimates the cost-effectiveness of such detection efforts. The remainder of this section sets out the context of the report...
Other working papers
What power-seeking theorems do not show – David Thorstad (Vanderbilt University)
Recent years have seen increasing concern that artificial intelligence may soon pose an existential risk to humanity. One leading ground for concern is that artificial agents may be power-seeking, aiming to acquire power and in the process disempowering humanity. A range of power-seeking theorems seeks to give formal articulation to the idea that artificial agents are likely to be power-seeking. I argue that leading theorems face five challenges, then draw lessons from this result.
Against the singularity hypothesis – David Thorstad (Global Priorities Institute, University of Oxford)
The singularity hypothesis is a radical hypothesis about the future of artificial intelligence on which self-improving artificial agents will quickly become orders of magnitude more intelligent than the average human. Despite the ambitiousness of its claims, the singularity hypothesis has been defended at length by leading philosophers and artificial intelligence researchers. In this paper, I argue that the singularity hypothesis rests on scientifically implausible growth assumptions. …
Evolutionary debunking and value alignment – Michael T. Dale (Hampden-Sydney College) and Bradford Saad (Global Priorities Institute, University of Oxford)
This paper examines the bearing of evolutionary debunking arguments—which use the evolutionary origins of values to challenge their epistemic credentials—on the alignment problem, i.e. the problem of ensuring that highly capable AI systems are properly aligned with values. Since evolutionary debunking arguments are among the best empirically-motivated arguments that recommend changes in values, it is unsurprising that they are relevant to the alignment problem. However, how evolutionary debunking arguments…