Summary: Against the singularity hypothesis

This is a summary of the GPI Working Paper “Against the singularity hypothesis” by David Thorstad (published in Philosophical Studies). The summary was written by Riley Harris.

The singularity is a hypothetical future event in which machines rapidly become significantly smarter than humans. The idea is that we might invent an artificial intelligence (AI) system that can improve itself. After a single round of self-improvement, that system would be better equipped to improve itself than before. This process might repeat many times, and each time the AI system would become more capable and better equipped to improve itself even further. At the end of this (perhaps very rapid) process, the AI system could be much smarter than the average human. Philosophers and computer scientists have argued that we should take the possibility of a singularity seriously (Solomonoff 1985, Good 1966, Chalmers 2010, Bostrom 2014, Russell 2019).

It is characteristic of the singularity hypothesis that AI will take at most a few years, and perhaps only months, to become many times more intelligent than even the most intelligent human.1 Such extraordinary claims require extraordinary evidence. In the paper “Against the singularity hypothesis”, David Thorstad argues that we do not have enough evidence to justify belief in the singularity hypothesis, and that we should consider it unlikely unless stronger evidence emerges.

Reasons to think the singularity is unlikely

Thorstad is sceptical that machine intelligence can grow quickly enough to justify the singularity hypothesis. He gives several reasons for this.

Low-hanging fruit. Innovative ideas and technological improvements tend to become harder to find over time. Consider “Moore’s law”, which is (roughly) the observation that hardware capacities double every two years. Between 1971 and 2014, Moore’s law was maintained only through an astronomical increase in the capital and labour invested in semiconductor research (Bloom et al. 2020): according to one leading estimate, research productivity fell eighteen-fold over this period. Although some features of future AI systems may allow them to make progress faster than human scientists and engineers, they are still likely to face diminishing returns, since the easiest discoveries will already have been made and only more difficult ideas will remain.
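
To make this concrete, here is a toy calculation (our own illustration, not the paper’s or Bloom et al.’s model), in which each hardware doubling is assumed to cost somewhat more research effort than the one before:

```python
# Toy illustration (our own numbers): if each hardware doubling costs ~15.5%
# more research effort than the previous one, then maintaining Moore's law
# requires ever-growing investment, i.e. research productivity falls.

def effort_for_doubling(n, difficulty_growth=1.155):
    """Effort needed for the n-th doubling, each costing `difficulty_growth` times the last."""
    return difficulty_growth ** n

doublings = 21  # roughly one doubling every two years between 1971 and 2014
print(f"effort for the first doubling: {effort_for_doubling(0):.1f}")
print(f"effort for the last doubling:  {effort_for_doubling(doublings - 1):.1f}")
# On these assumed numbers the last doubling costs ~18x the first, in the
# ballpark of the eighteen-fold productivity drop reported by Bloom et al. (2020).
```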

Bottlenecks. AI progress relies on improvements in search, computation, storage and so on (and each of these areas breaks down into many subcomponents). Progress can be held up by any one of these: if even one essential component is difficult to speed up, overall AI progress will be much slower than we would naively expect. The classic metaphor is the rate at which liquid can leave a bottle, which is limited by the narrow neck near the opening. AI systems may likewise run into bottlenecks if any essential component cannot be improved quickly (see Aghion et al. 2019).
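
A minimal sketch of the worry (our own deliberately simple assumption, not Aghion et al.’s model): if overall capability is capped by the weakest essential component, then a single slow-to-improve component dominates long-run progress.

```python
# Assume overall capability is limited by the weakest essential component
# (a deliberately simple "bottleneck" assumption, for illustration only).
components = {"search": 1.0, "computation": 1.0, "storage": 1.0, "data_pipeline": 1.0}
annual_gain = {"search": 2.0, "computation": 2.0, "storage": 2.0, "data_pipeline": 1.05}

for year in range(1, 11):
    for name in components:
        components[name] *= annual_gain[name]
    bottleneck = min(components, key=components.get)
    print(f"year {year:2d}: overall capability x{components[bottleneck]:6.2f} (limited by {bottleneck})")

# After ten years the fast components have improved ~1000x, but overall
# capability has improved only ~1.6x, because of the slow component.
```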

Constraints. Resource and physical constraints may also limit the rate of progress. Moore’s law, for example, becomes ever harder to maintain because it is expensive, physically difficult and energy-intensive to cram ever more transistors into the same space. We might likewise expect AI progress to slow eventually as physical and financial constraints pose ever greater barriers.

Sublinear growth. How do improvements in hardware translate into growth in intelligence? Thompson and colleagues (2022) find that exponential hardware improvements have translated into only linear gains in performance on problems such as chess, Go, protein folding, weather prediction and the modelling of underground oil reservoirs. Over the past 50 years, the number of transistors in our best circuits increased from 3,500 in 1972 to 114 billion in 2022. If intelligence grew linearly with transistor count, computers would have become roughly 33 million times more intelligent over this period. Instead, the evidence suggests that intelligence grows sublinearly with hardware.
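
As a quick check of the arithmetic above, and a toy contrast between linear and logarithmic scaling (the logarithmic form is just one illustrative example of sublinear growth, not a claim from the paper):

```python
import math

transistors_1972 = 3_500
transistors_2022 = 114_000_000_000

hardware_ratio = transistors_2022 / transistors_1972
print(f"hardware growth factor: {hardware_ratio:,.0f}")  # roughly 33 million

# If "intelligence" scaled linearly with hardware, it would also be ~33 million
# times greater; if it scaled with the logarithm of hardware (one possible
# sublinear relationship), the gain would be far more modest.
log_gain = math.log2(transistors_2022) / math.log2(transistors_1972)
print(f"linear-in-hardware gain: x{hardware_ratio:,.0f}")
print(f"log-in-hardware gain:    x{log_gain:.1f}")
```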

Arguments for the singularity hypothesis

Two key arguments have been given in favour of the singularity hypothesis. Thorstad analyses them and finds that they are not particularly strong. 

Observational argument. Chalmers (2010) argues for the singularity hypothesis from the proportionality thesis: that increases in intelligence always lead to at least proportionate increases in the ability to design intelligent systems. He supports this only briefly, observing, for example, that a small difference in design capability between Alan Turing and the average human led to a large difference in the ability of the systems they were able to design (the computer versus hardly anything of importance). The main problem with this argument is that it is local rather than global: it gives evidence that there are points in time at which the proportionality thesis holds, whereas supporting the singularity hypothesis would require the thesis to hold at every point. In addition, Chalmers conflates design capability and intelligence.2 Overall, Thorstad concludes that Chalmers’s argument fails and that the observational argument does not vindicate the singularity hypothesis.
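
To see why the local/global distinction matters, it helps to write the recursion out explicitly (in our notation, not Chalmers’s). Let $I_n$ denote the intelligence of the $n$-th generation of systems, and suppose the proportionality thesis holds at every step for some fixed $\delta > 0$:

$$
I_{n+1} \ge (1+\delta)\,I_n \quad \text{for all } n \quad\Longrightarrow\quad I_n \ge (1+\delta)^n I_0 .
$$

Unbounded growth follows only because the inequality holds at every step; evidence that it holds at a few points in time (Turing versus the average human, say) does not establish the universal claim.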

Optimisation power argument. Bostrom (2014) claims that a large amount of quality-weighted design effort will be applied to improving artificial systems, and that this will result in large increases in intelligence. He gives a rich and varied series of examples to support this claim. However, Thorstad finds that many of these examples are merely plausible descriptions of artificial intelligence improving rapidly, not evidence that it will. Other examples turn out to be restatements of the singularity hypothesis (for example, that we could be only a single leap of software insight away from an intelligence explosion), and Thorstad is sceptical that such restatements provide any evidence for the hypothesis at all.

One core strand of the argument is initially promising but rests on a misunderstanding. Bostrom claims that roughly constant design effort has historically led to computing systems doubling their capacity every 18 months. If that were true, then once an AI system could contribute to its own design, each boost in intelligence would add to the design effort applied, making the next boost larger than the last and allowing intelligence to grow ever faster. But, as discussed above, it was steadily increasing design effort that produced those hardware doublings, and AI capabilities have improved much more slowly than hardware. Overall, Thorstad remains sceptical that Bostrom has given any strong evidence or argument in favour of the singularity hypothesis.
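
A toy model (our own, intended only to illustrate the disagreement rather than either author’s account) shows why the premise matters. Suppose the capability doublings achieved in each step equal the design effort applied divided by the effort one doubling currently requires, and that the system’s accumulated capability adds to human design effort:

```python
def doublings_per_step(steps, rising_cost):
    total, cost, out = 0.0, 1.0, []
    for _ in range(steps):
        effort = 1.0 + total          # human effort plus the AI's accumulated (log-scale) capability
        achieved = effort / cost      # doublings achieved this step
        total += achieved
        if rising_cost:
            cost += achieved          # low-hanging fruit: each doubling makes the next one harder
        out.append(round(achieved, 1))
    return out

print("constant cost per doubling (Bostrom's premise):", doublings_per_step(6, rising_cost=False))
print("rising cost per doubling (Thorstad's point):   ", doublings_per_step(6, rising_cost=True))
# With a constant cost, the number of doublings per step itself keeps growing
# (an intelligence explosion); with a rising cost, the feedback loop only
# sustains steady growth.
```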

Implications for longtermism and AI Safety

The singularity hypothesis implies that the world will be transformed quickly at some point in the future. Bostrom (2012, 2014) and Yudkowsky (2013) use this idea to argue that advances in AI could threaten human extinction or permanently and drastically destroy humanity's potential for future development. Increased scepticism about the singularity hypothesis might naturally lead to increased scepticism about their conclusion that we should be particularly concerned about existential risk from artificial intelligence. It may also have implications for longtermism, which treats existential risk mitigation (and AI risk mitigation in particular) as a central example of a longtermist intervention, at least insofar as that concern is driven by something like the argument of Bostrom and Yudkowsky above.

Footnotes

1 In particular, Chalmers (2010) claims that future AI systems might be as far beyond the most intelligent human as the most intelligent human is beyond a mouse, and Bostrom (2014) claims this process could happen in a matter of months or even minutes.

2 Some of Turing's contemporaries were likely more intelligent than him, yet they did not design the first computer.

References

Philippe Aghion, Benjamin Jones, and Charles Jones (2019). Artificial intelligence and economic growth. In The economics of artificial intelligence: An agenda, pages 237–282. Edited by Ajay Agrawal, Joshua Gans, and Avi Goldfarb. University of Chicago Press. 

Nicholas Bloom, Charles Jones, John Van Reenen, and Michael Webb (2020). Are ideas getting harder to find? American Economic Review 110, pages 1104–1144.

Nick Bostrom (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines 22, pages 71–85.

Nick Bostrom (2014). Superintelligence. Oxford University Press.

David Chalmers (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies 17(9–10), pages 7–65.

I.J. Good (1966). Speculations concerning the first ultraintelligent machine. Advances in Computers 6, pages 31–88.

Stuart Russell (2019). Human compatible: Artificial intelligence and the problem of control. Viking.

Ray Solomonoff (1985). The time scale of artificial intelligence: Reflections on social effects. Human Systems Management 5, pages 149–153.

Neil Thompson, Shuning Ge, and Gabriel Manso (2022). The importance of (exponentially more) computing power. ArXiv Preprint.

Eliezer Yudkowsky (2013). Intelligence explosion microeconomics. Machine Intelligence Research Institute Technical Report 2013-1.
