Wednesday, February 18, 2009

Explaining intelligence: complex adaptive systems

I've been trying all day to write a coherent response to Michael Anissimov's recent posts "Friendly AI - May I Check Your Ideological Baggage" and "The Three Singularity Schools, Kurzweil, and Superintelligence".

I finally succeeded with this comment on Ken MacLeod's recent discussion of evolution and AI.

Following is a slightly cleaner version:


My problem with Anissimov's implicit argument is that it stems from a misunderstanding of the nature of technological progress. Anissimov's belief that "once we create a superhuman intelligence all our scientific problems will be solved[1]" is based on the assumption that intelligence is the only contributory factor in innovation. Anissimov says:

To me, the relevance of a given technology to humanity’s future is largely determined by whether it contributes to the creation of superintelligence or not, and if so, whether it contributes to the creation of friendly or unfriendly superintelligence. The rest is just decoration.

"The rest" being every technological development that will occur between now and birth of our putative god-in-box AI.

Now I'm willing to bet microchips to nanobots that there will be a few interesting innovations, inventions, and scientific breakthroughs over the next few years that aren't directly linked to AI research but still have a large impact on people's lives.

Developing a cure for AIDS, for example.

Anissimov makes these claims concerning the importance of AI research in support of the intelligence explosion school of the technological singularity, the school which can briefly be expressed as:

Intelligence has always been the source of technology. If technology can significantly improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.

The problem with this view of the singularity is that intelligence is not the main driver of innovation.

We know this because the single most dynamic, creative, and successful innovation generator on the surface of this planet famously does not possess intelligence.

Natural selection lacks intelligence, yet it has produced an extraordinary fecundity of design and invention, not to mention the only version of intelligence currently available to us.

Some transhumanists imagine that simply creating a sufficiently powerful intelligence will solve our problems. We probably could evolve an intelligent being using the process described in Ken MacLeod's The Star Fraction: create billions of lines of random code (a trillion script-kiddies at a trillion keyboards), apply an evolutionary de-stupidifying process to it, then rinse, cycle, and repeat until we get something that smokes a pipe, does The Times crossword, and publishes the occasional enlightening monograph.
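To make the idea concrete, here's a minimal sketch of that sort of mutate-and-cull loop, along the lines of Dawkins' famous "weasel" program. The target string, mutation rate, and population size are all invented for illustration; the point is that nothing in the loop is intelligent, just copying, random error, and a survival filter:

```python
import random
import string

# Toy illustration of cumulative selection, after Dawkins' "weasel" program:
# blind variation plus a survival filter, with no intelligence in the loop.
TARGET = "METHINKS IT IS LIKE A WEASEL"  # stand-in goal, chosen for illustration
ALPHABET = string.ascii_uppercase + " "
POP_SIZE = 100        # offspring per generation (arbitrary choice)
MUTATION_RATE = 0.05  # chance that each character is randomized

def fitness(candidate: str) -> int:
    """Count matching characters -- the 'evolutionary de-stupidifying process'."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str) -> str:
    """Copy the parent, with occasional random copying errors."""
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in parent
    )

# Start from pure noise: "billions of lines of random code", in miniature.
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while parent != TARGET:
    offspring = [parent] + [mutate(parent) for _ in range(POP_SIZE)]
    parent = max(offspring, key=fitness)  # cull the rest: rinse, cycle, repeat
    generation += 1

print(f"Reached the target in {generation} generations.")
```

Run it and pure noise converges on the target within a few hundred generations, with selection doing all the work.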

Or we could even do a brute-force molecular-level simulation of a human being, assuming that the various exponentials associated with computing hardware continue ticking over for a few more decades.

But in the meantime, why not cut out the AI and go straight to innovation by evolution? Why not find some way of generating vast numbers of products, testing them under competitive conditions, and then recombining and incrementally improving the survivors until we have an optimal outcome?

And in fact we already do this: free markets create a huge pool of possible companies and products, and the really bad ones are filtered out. Effective companies increase their control over a finite set of resources at the expense of less effective ones.

Companies don't breed, of course; rather, new designs for companies are created by human beings. Most fail. But if one design is superior to another it will survive and grow, taking wealth, influence, and market share from its competitors.
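Here's an equally toy sketch of that market dynamic, using a discrete replicator update: each company design gets a fixed "effectiveness" score, and each year market share flows toward the designs that beat the market average. The names, scores, and update rule are invented assumptions, not a model of any real market:

```python
import random

# Toy market-selection model; every number here is an invented assumption.
# "Company designs" with fixed effectiveness compete for a finite pool of
# market share, which flows toward the above-average competitors.
random.seed(1)

effectiveness = {f"design_{i}": random.uniform(0.5, 1.5) for i in range(5)}
share = {name: 1.0 / len(effectiveness) for name in effectiveness}  # equal at launch

for year in range(20):
    # Discrete replicator update: growth is proportional to effectiveness
    # relative to the market average. Shares always sum to one, so any
    # gain necessarily comes at a competitor's expense.
    market_average = sum(effectiveness[n] * share[n] for n in share)
    share = {n: share[n] * effectiveness[n] / market_average for n in share}

for name, s in sorted(share.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:6.1%} of the market")
```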

The same logic applies to products: the idea that products like the iPhone come about as a result of a flash of genius insight from someone like Steve Jobs is incorrect. The iPhone is the result of a long series of tiny, incremental, trial-and-error developments across many scientific and technical fields.

My conclusion: if the singularity means anything, it means that technological change will continue to accelerate in certain areas. And this trend has already been under way for almost two centuries.

The physicist and complexity theorist Murray Gell-Mann would say that human civilization is a complex adaptive system in the same way that the biological evolutionary process and individual human minds are.

Knowledge, science, learning, and culture have created an evolutionary process, working outside human minds and outside biology, that drives an ongoing acceleration of technical progress.

[1]: Presumably these superhuman entities would be willing to pay us to solve our problems, what with their being superhumanly bored with life, having already simulated and experienced the totality of all possible existences while the lab guys were getting the celebratory muffins.
