Wednesday, February 11, 2009

What I think of the three kinds of technological singularity

Michael Anissimov has a post up on the three kinds of singularity, based on Eliezer Yudkowsky's post on the three schools of singularity thought. This framing is relevant to many criticisms of the idea of a technological singularity, as those criticisms frequently focus on minor or ancillary effects.

As I said before, I don't "believe" in the singularity. I think it's either irrelevant or a fairly trivial observation of technological growth trends. But within the context of the three schools I think I can express my thoughts more coherently.

Here are the three kinds of singularity Anissimov and Yudkowsky describe:

  1. Accelerating change: advances in computer science, AI research, genetics, human augmentation, and biotechnology create a positive feedback loop of rapid technological growth. As the abilities of our tools improve and our own abilities improve through augmentation (both external, in the form of personal AIs, and internal, in the form of intelligence enhancements and nootropics), technological change accelerates exponentially.

  2. Event horizon: advances in computer science, AI research, genetics, human augmentation, and biotechnology lead to the creation of a greater-than-human intelligence. It is impossible for a less intelligent mind to predict the actions of a more intelligent mind, so it is truly impossible to make any definitive statements about what will happen after a superintelligence arrives (whether pure AI or strongly augmented human).

  3. Intelligence explosion: advances in computer science, AI research, genetics, human augmentation, biotechnology, and neurobiology allow intelligent beings (either human or AI) to alter their own brains so as to improve their own intelligence. As intelligence is the source of all technological development, this process will feed back on itself, with slightly more intelligent beings developing slightly better ways of improving their intelligence, all the while creating amazing spinoff technologies (a toy model contrasting this feedback with the simple acceleration of school 1 is sketched just below).
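
To make the difference between schools 1 and 3 concrete, here is a toy numerical sketch. It is purely my own illustration: the growth laws, rate constant, and cutoff are arbitrary assumptions, not anything from Anissimov's or Yudkowsky's posts. Growth that feeds back on itself proportionally gives an ordinary exponential curve, while growth where each gain makes the next gain easier can diverge in finite time, which is the "explosion" in school 3:

```python
# Toy comparison of the growth curves implied by schools 1 and 3.
# All constants here are arbitrary assumptions chosen for illustration.

def simulate(rate_fn, c0=1.0, dt=0.01, steps=1000):
    """Euler-integrate dC/dt = rate_fn(C) and return the trajectory."""
    c, trajectory = c0, []
    for _ in range(steps):
        trajectory.append(c)
        c += rate_fn(c) * dt
        if c > 1e12:  # treat runaway growth as a finite-time blow-up
            break
    return trajectory

k = 0.5
# School 1 (accelerating change): dC/dt = k*C -> exponential growth.
exponential = simulate(lambda c: k * c)
# School 3 (intelligence explosion): dC/dt = k*C**2 -> diverges in finite time.
explosive = simulate(lambda c: k * c ** 2)

print(f"accelerating change: still finite after {len(exponential)} steps "
      f"(C = {exponential[-1]:.3g})")
print(f"intelligence explosion: blows up after {len(explosive)} steps")
```

The only point of the sketch is that the two schools imply qualitatively different curves: proportional feedback stays exponential, while superlinear feedback hits a wall in finite time.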


Here's what I think of them:

  1. The accelerating change school of the singularity is the one I find most compelling, because it is both logically plausible and reflects the experience humans have had of changing technologies in the past. Technologies like electronics combine with the theory of digital computing to produce fast computers, which go on to have a major effect on other areas of development. I think the accelerating change argument is the most coherent and reasonable depiction of a technological singularity.

  2. The event horizon school is flakier. First, I have issues with the idea that greater-than-human intelligences are necessarily unpredictable. Second, I don't believe that raw intellectual or cognitive ability is the primary driver of technological progress. Third, we have already seen that it is impossible to accurately predict all the outcomes of any technological development, let alone those of a strong AI or posthuman superintelligence.

  3. The intelligence explosion school is flakier still. It is based on the assumption that a sufficiently powerful general intelligence would necessarily be able to comprehend how its own mind works and know how to improve it. I do believe that as knowledge of the workings of the brain increases it will lead to real gains in various intellectual capacities, through nootropics, brain augmentation, or brain simulation on faster substrates. But gaining additional knowledge about the brain doesn't require us to be "smarter."

With reference to the last point: knowledge of how the brain works will be gained through trial-and-error scientific experimentation and the ongoing technological development of brain-scanning technology (itself developed by trial-and-error tinkering), surgery (likewise developed through the inductive tinkering of the barber surgeons), and neural interface technology (which is being tinkered with as I write).

Anissimov believes that all technological progress must be judged on the basis of how much closer it brings us to the existence of a superintelligent AI, because once it exists that AI will take over the business of technological development and create an intelligence explosion.

Anissimov describes himself as a technological determinist; as such, he presumably believes social change is caused primarily by technological development. I agree with technological determinism in general, but I feel Anissimov's perspective is closer to cognitive determinism: he believes technological (and hence social) change in the future will happen purely as a result of the cognition of AI.

This is at odds with our experience, because the component of scientific and technological development that relies entirely on pure cognition (e.g. Einstein's development of the theory of relativity, or Newton's laws of motion) is quite small compared with the components that required a substantial amount of empirical study (Darwin's theory of evolution) or mechanical tinkering (Faraday's law of induction).

This is similar to Kevin Kelly's criticism of "thinkism": the fallacy of believing you can study the universe by simulating it, without recourse to experiments that might falsify your beliefs.

To summarise: although the development of a smarter-than-human AI would be a huge aid to our understanding of the nature of intelligence, consciousness, and the human mind, there is no reason to assume its effects (though unpredictable) will include an intelligence explosion. It may well help accelerate technological development, but only as one part of the general acceleration.
