
Wednesday, February 18, 2009

Explaining intelligence: complex adaptive systems

I've been trying all day to write a coherent response to Michael Anissimov's recent posts Friendly AI - May I Check Your Ideological Baggage and The Three Singularity Schools, Kurzweil, and Superintelligence.

I finally succeeded in this comment on Ken MacLeod's recent discussion on evolution and AI.

Following is a slightly cleaner version:


My problem with Anissimov's implicit argument is that it stems from a misunderstanding of the nature of technological progress. Anissimov's belief that "once we create a superhuman intelligence all our scientific problems will be solved1" is based on the assumption that intelligence is the only contributory factor to innovation. Anissimov says:

To me, the relevance of a given technology to humanity’s future is largely determined by whether it contributes to the creation of superintelligence or not, and if so, whether it contributes to the creation of friendly or unfriendly superintelligence. The rest is just decoration.

"The rest" being every technological development that will occur between now and the birth of our putative god-in-a-box AI.

Now I'm willing to bet microchips to nanobots that there will be a few interesting innovations, inventions, and scientific breakthroughs over the next few years that aren't directly linked to AI research but still have a large impact on people's lives.

Developing a cure for AIDS, for example.

Anissimov makes these claims concerning the importance of AI research in support of the intelligence explosion school of the technological singularity, the school which can briefly be expressed as:

Intelligence has always been the source of technology. If technology can significantly improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.

The problem with this view of the singularity is that intelligence is not the main driver of innovation.

We know this because the single most dynamic, creative, and successful innovation generator on the surface of this planet famously does not possess intelligence.

Natural selection lacks intelligence and it has produced an extraordinary fecundity of design and invention, not to mention the only version of intelligence currently available to us.

Some transhumanists imagine that simply creating a sufficiently powerful intelligence will solve our problems. We probably could evolve an intelligent being, using the process described in Ken MacLeod's The Star Fraction, just by creating billions of lines of random code (a trillion script-kiddies at a trillion keyboards) and applying an evolutionary de-stupidifying process to it, then rinse, cycle, and repeat until we get something that smokes a pipe, does The Times crossword, and publishes the occasional enlightening monograph.

Or we could even do a brute-force molecular-level simulation of a human being, assuming that the various exponentials associated with computing hardware continue ticking over for a few more decades.

But in the meantime why not cut out the AI and go straight to innovation by evolution? Why don't we find some way of generating vast numbers of products and items and testing them under competitive conditions, then recombine and incrementally adjust and improve them until we have an optimal outcome?
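The generate-test-recombine loop described above is, of course, just a genetic algorithm. Here's a minimal sketch of the idea in Python, using the classic evolve-a-string toy problem (the target string, population size, and mutation rate are my own illustrative choices, not anything from the argument itself):

```python
import random

random.seed(0)

TARGET = "METHINKS IT IS LIKE A WEASEL"   # stand-in for an "optimal design"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # "Competitive conditions": score a design by how many positions match.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Incremental adjustment: occasionally swap a character at random.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    # Recombination: splice two surviving designs together.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Generate a vast pool of random "products"...
pop = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(200)]

for _ in range(2000):
    pop.sort(key=fitness, reverse=True)
    if pop[0] == TARGET:
        break
    survivors = pop[:50]   # ...filter out the really bad ones...
    # ...then recombine and mutate the survivors to refill the pool.
    pop = survivors + [mutate(crossover(random.choice(survivors),
                                        random.choice(survivors)))
                       for _ in range(150)]

best = max(pop, key=fitness)
print(best)
```

No step in the loop is intelligent; selection pressure alone does the designing, which is the whole point.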

And in fact we already do this - free markets create a huge pool of possible companies and products and the really bad ones are filtered out. Effective companies increase the extent of their control of a finite set of resources at the expense of less effective companies.

Companies don't breed, of course, rather new designs for companies are created by human beings. Most fail. But if one is superior to another it will survive and grow, taking wealth, influence and market share from its competitors.

The same applies to products: the idea that products like the iPhone come about as a result of a flash of genius insight from someone like Steve Jobs is incorrect. The iPhone is the result of a long series of tiny, incremental, trial-and-error developments across many scientific and technical fields.

My conclusion: if the singularity means anything then it means that technological change will continue to accelerate in certain areas. And this trend has already been happening for almost two centuries.

The physicist and complexity theorist Murray Gell-Mann would say that human civilization is a complex adaptive system in the same way that the biological evolutionary process and individual human minds are.

Knowledge, science, learning and culture have created an evolutionary process working outside of human minds and outside of biology that results in an ongoing rise in the rate of change of technical progress.

1: Presumably these superhuman entities would be willing to pay us to solve our problems, what with their being superhumanly bored with life having already simulated and experienced the totality of all possible existences while the lab guys were getting the celebratory muffins.

Wednesday, February 11, 2009

Problems of specificity

Eliezer Yudkowsky on the virtues of specificity:

When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.

This is a problem that is ever-present online. Very rarely are online debates actual arguments; more often they are bickering contests between people who have completely different ideas about the definitions of words and the scope of the debate.

I was impressed by Yudkowsky's thoughts on the three schools of the singularity because they directly address the oft-ignored problem of what exactly someone means when they talk about the singularity.

This was the problem with PZ Myers' objections to the singularity - he argued against a few elements of Kurzweil's thesis and used these inconsistencies to dismiss the whole thing out of hand.

I agree there are problems with various aspects of Kurzweilian singularitarianism but they need to be addressed clearly and specifically.

Delicious specificity

In a similar vein I've been trying to work out what the best way of organising my Delicious tags is.

There are some tags, like "technology", "economics", "politics", "science", and "toread" which are so wide-ranging they lose all meaning.

However, if my intention is to be able to refer back to a specific article when I need a reference, too much specificity can hinder my search.

Tag bundles help solve the first problem of overarching vagueness by promoting "technology", "economics", "science", and "politics" to a well-earned retirement on the board of directors {?}.

Could it be possible to build a system into Delicious whereby it is possible to say that something vaguely reminds me of something else?

I don't mean a tag like "remindsmeofStephenFry". I mean a way of tagging a document that doesn't explicitly reference Stephen Fry in any way but still reminds me of him. Something like remindsmeof:"StephenFry".

In this context remindsmeof would be a command recognised by the Delicious API to note that the article isn't explicitly about Stephen Fry but nevertheless puts me in mind of him.
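As far as I know the Delicious API never had anything like this, so treat the following as a purely hypothetical convention: a client could fake the feature today by layering a remindsmeof: prefix on top of ordinary tags and splitting them out locally. A sketch (the parse_tags helper and the prefix itself are my inventions):

```python
def parse_tags(tags):
    """Split a bookmark's tags into plain tags and remindsmeof: associations.

    The remindsmeof: prefix is hypothetical - a convention a client could
    layer on top of ordinary tags, not a real Delicious API command.
    """
    prefix = "remindsmeof:"
    plain, reminds = [], []
    for tag in tags:
        if tag.startswith(prefix):
            reminds.append(tag[len(prefix):])
        else:
            plain.append(tag)
    return plain, reminds

# A bookmark that never mentions Stephen Fry but puts me in mind of him:
plain, reminds = parse_tags(["comedy", "remindsmeof:StephenFry", "radio"])
print(plain, reminds)
```

Since it's just a tag prefix, it would degrade gracefully on the real site: other users would simply see an odd-looking tag.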

I suspect that this sort of thing is less useful in practice than I imagine, particularly as much of the utility of Delicious comes from its simplicity and intuitiveness.

And as to Yudkowsky: obviously this respect for the specific can be taken too far: a general knowledge encompassing many fields can also be very valuable.

It's best to learn a lot about a little and a little about a lot.

What I think of the three kinds of technological singularity

Michael Anissimov has a post up on the three kinds of singularity. This is based on this post on the three schools of singularity thought by Eliezer Yudkowsky. This is relevant to many criticisms of the idea of a technological singularity as criticisms frequently focus on minor or ancillary effects of the singularity.

As I said before, I don't "believe" in the singularity. I think it's either irrelevant or a fairly trivial observation of technological growth trends. But within the context of the three schools I think I can express my thoughts more coherently.

Here are the three kinds of singularity Anissimov and Yudkowsky describe:

  1. Accelerating change: advances in computer science, AI research, genetics, human augmentation, and biotechnology create a positive feedback of rapid technological growth. As the abilities of our tools improve and our own abilities improve through augmentation (both external in the form of personal AIs and internal in the form of intelligence enhancements and nootropics) technological change accelerates exponentially.

  2. Event horizon: advances in computer science, AI research, genetics, human augmentation, and biotechnology lead to the creation of a greater-than-human intelligence. It is impossible for a less intelligent mind to predict the actions of a more intelligent mind so it is truly impossible to make any definitive statements about what will happen after a superintelligence (whether pure AI or strongly augmented human).

  3. Intelligence explosion: advances in computer science, AI research, genetics, human augmentation, biotechnology and neurobiology allow intelligent beings (either human or AI) to alter their own brains so as to improve their own intelligence. As intelligence is the source of all technological development this process will feed back on itself, as the slightly more intelligent beings develop slightly better ways of improving their intelligence, all the while creating amazing spinoff technologies.


Here's what I think of them:

  1. The accelerating change school of the singularity is the one I find most compelling. This is because it is both logically plausible and reflects the experience humans have had of changing technologies in the past. Technologies like electronics combine with digital computer theories to develop fast computers that go on to have a major effect on other areas of development. I think the accelerating change argument is the most coherent and reasonable depiction of a technological singularity.

  2. The event horizon school is flakier. First, I have issues with the idea that greater-than-human intelligences are necessarily unpredictable; second, I don't believe that raw intellectual or cognitive ability is the primary driver of technological progress; and third, we have seen that it is already impossible to accurately predict all the outcomes of any technological development, let alone strong AI/posthuman superintelligence.

  3. The intelligence explosion school is flakier still. It is based on the assumption that a sufficiently powerful general intelligence would necessarily be able to comprehend how its own mind works and know how to improve it. I do believe that as knowledge of the workings of the brain increases it will lead to real gains in various intellectual capacities, through nootropics, brain augmentation, or through brain simulation on faster substrates. But gaining additional knowledge about the brain doesn't require us to be "smarter."

With reference to the last point: the knowledge of how the brain works will be gained through trial-and-error scientific experimentation and ongoing technological development of brain-scanning technology (itself developed by trial-and-error technological tinkering), surgery (again also developed through the inductive tinkering of the barber surgeons), and neural interface technology (which is being tinkered with as I write).

Anissimov believes that all technological progress must be judged on the basis of how much closer it brings us to the existence of a superintelligent AI, because then the superintelligent AI will take over the business of technological development and create an intelligence explosion.

Anissimov describes himself as a technological determinist; as such, he presumably believes social change is caused primarily by technological development. I agree with technological determinism in general, but I feel Anissimov's perspective is closer to cognitive determinism: he believes technological (and hence social) change in the future will happen purely as a result of the cognition of AI.

This is at odds with our experience, because the component of scientific and technological development that relies entirely on pure cognition (e.g. Einstein's development of the theory of relativity or Newton's laws of motion) is quite small compared with the components that required a substantial amount of empirical study (Darwin's theory of evolution) or mechanical tinkering (Faraday's law of induction).

This is a similar criticism to Kevin Kelly's idea of thinkism, where Kelly highlights the fallacy of believing you can study the universe by simulating it, without recourse to experiment to attempt to falsify your belief.

To summarise: although the development of a smarter-than-human AI would be a huge aid to our understanding of the nature of intelligence, consciousness, and the human mind, there is no reason to assume its effects (though unpredictable) will include an intelligence explosion. It may help lead to an acceleration in technological development, but it will only be one part of the general acceleration.

Tuesday, February 10, 2009

Singularity and transhumanism

PZ Myers has written an interesting critique of Ray Kurzweil's thoughts on a possible technological singularity:

...not only is the chart an artificial and perhaps even conscious attempt to fit the data to a predetermined conclusion, but what it actually represents is the proximity of the familiar.

We are much more aware of innovations in our current time and environment, and the farther back we look, the blurrier the distinctions get. We may think it's a grand step forward to have these fancy cell phones that don't tie you to a cord coming from the wall, but there was also a time when people thought it was radical to be using this new bow & arrow thingie, instead of the good ol' atlatl.

We just lump that prior event into a "flinging pointy things" category and don't think much of it. When Kurzweil reifies biases that way, he gets garbage, like this graph, out.

Now I do think that human culture has allowed and encouraged greater rates of change than are possible without active, intelligent engagement—but this techno-mystical crap is just kookery, plain and simple, and the rationale is disgracefully bad. One thing I will say for Kurzweil, though, is that he seems to be a first-rate bullshit artist.

...

Kurzweil tosses a bunch of things into a graph, shows a curve that goes upward, and gets all misty-eyed and spiritual over our Bold Future. Some places it's OK, when he's actually looking at something measurable, like processor speed over time.

In other places, where he puts bacteria and monkeys on the Y-axis and pontificates about the future of evolution, it's absurd. I am completely baffled by Kurzweil's popularity, and in particular the respect he gets in some circles, since his claims simply do not hold up to even casually critical examination.

Calling Kurzweil a bullshit artist is unfair: Kurzweil is a genuinely talented inventor and engineer. His beliefs might be a little kooky to some, but I've always found his writing compelling.

Kurzweil is a spiritualist: there's nothing wrong with that. A belief in the power of some imminent superhuman AI to solve all our problems is slightly less absurd than most religious beliefs, and Kurzweil doesn't come across as the type to build a pyramid of skulls in the meantime.

But really: who honestly cares about the singularity?

Building artificial human minds may be possible within my lifetime, or it may not.

There will still be substantial technological change, even if the prime mover remains good old-fashioned human grey matter.

What I find compelling is the suggestion of where ongoing developments in biology, computing, genetics, and human augmentation may take us over the next few decades.

Among these developments are new ways of combining human intelligence with machine intelligence that result in a substantial increase along all dimensions of intellectual development (what Kurzweil calls the law of accelerating returns.)

So although the idea of the singularity has become less compelling what continues to excite me about Kurzweil's writings are his descriptions of posthumans. Partly for the good ol' SFnal sensawunda, and partly because maybe it could happen to me. Maybe I could become a posthuman.

I think the idea and potential reality of self-guided human evolution is a great idea in itself. I can take or leave the singularity.

Prof Myers also comments separately on the recent pronouncements on the future of humanity made by Juan Enriquez at the TED conference:

Every species also takes control over its own evolution, in a sense; individuals make choices of all sorts that influence what will happen in the next generation. You could rightly argue that they don't do it with planning and intent, but I have seen nothing that suggests that our attempts to modify our species, low tech and high tech together, are any wiser or better informed about the long-term consequences than those of any rat fighting for an opportunity to mate. We do what we do; don't pretend it's part of a long term plan that is actually prepared for all of the unexpected eventualities.

I agree with Myers up to a point: he's basically saying that developments in biotechnology and the progress of transhumanism won't happen in some big, top-down, organised way, but will rather develop as a series of steps through stochastic tinkering in the lab and (eventually) the marketplace.

The beauty of human progress is it doesn't have any long term plan: we do what we do and we tinker and experiment and find things out.

Juan Enriquez can make all the grand pronouncements about the future of humanity he likes but what he is actually trying to do is raise investment capital for his company Biotechonomy.

And Biotechonomy will pay scientists to tinker and experiment and find things out.

Such is the nature of technological advancement.

Prof Myers ends on a positive note:

Maybe this information age will have as dramatic and as important an effect on humanity as the invention of writing, but even if it does, don't expect a nerd rapture to come of it. Just more cool stuff, and a bigger, shinier, fancier playground for humanity to gambol about in.

Well I certainly agree with that.

Wednesday, May 21, 2008

A Commentary on Commentaries

At any given time there are a smattering of articles in the dead-tree press, blogs, websites, and magazines worthy of perusal by anyone with a healthy interest in what is said about what goes on in the world.

Collected here are a few items that I feel are worthy of comment (I'm going to have one post per article, 'cause it's easier that way).

Privacy and social networking are two key components of the zeitgeist of social debate in the first decade of the 21st century. Zoe Williams, writing in The Guardian, discusses teenagers and online exhibitionism:

"...trying to inculcate discretion at a time when everybody is seeking exposure is like teaching abstinence at a time when all they want to do is have sex. Never mind the rights and wrongs of it, it doesn't work..."

There is no doubt that adolescence is a time when children are emotionally crippled by their own biology until they emerge, as if from a chrysalis, into the neurotic grab-bag of talents, proclivities, and questionable ethics that makes up what passes for a fully-functioning adult and denizen of the 21st century (that's an awful sentence, on two levels, but I will keep it because I enjoyed writing it - damn it!). However, I don't think teenagers are necessarily stupid.

This brings us on to the next key point in Williams' article. Something that has already occurred to most journos and commentators is that all this rubbish that is stuck up on social networking websites will (theoretically) still be there in the year 2020, when yours truly might be thinking of running for election to political office.

What's to be done? Williams suggests:

"...that 15 years hence, people won't need to be protected from their past excesses, because the very fact that this is a universal impulse that social-networking sites merely cater to, will mean that tomorrow's politicians will all have as many skeletons in their closets as one another. In fact, if you don't have a YouTube video from when you were 16, dancing to Britney Spears's Toxic, then it'll be as much an impediment to your public approval rating as being single is today."

This point is well made. I will now smatter this blog with spelling mistakes and grammatical errors, safe in the knowledge that people will draw from this the conclusion that I am "genuine" and "honest about my mistakes."

However they could also conclude that I am too computer-illiterate to spellcheck my post!

[However if Ray Kurzweil is right, by 2020 the computers will have taken over in an event already being labelled as "the technological singularity" - if I'm campaigning on a pro-singularity ticket my spelling mistakes will be interpreted as an early and tacit recognition of the need to augment my feeble human intellect with a Mighty Processor. On the other hand if I'm going to campaign on an anti-singularity platform my PC-illiteracy will be seen as evidence of my inherent suspicion of technology.]

The agony of indecision! I feel the way the press says Gordon Brown must be feeling.

I don't owe the person I will become anything. I would vote for him, but only after a close examination of the policies he supports on a variety of issues and the relative positions of his opponents.

In conclusion if, by 2020, we're still going on and on about politicians' personalities as if they mattered a gnat's shite then Dog help us, Dog help us all.

Wednesday, March 05, 2008

Rudy Rucker and the Singularity

To quote from Rucker's post:

"This is because there are no shortcuts for nature’s computations. Due to a property of the natural world that I call the “principle of natural unpredictability,” fully simulating a bunch of particles for a certain period of time requires a system using about the same number of particles for about the same length of time. Naturally occurring systems don’t allow for drastic shortcuts."

Rucker's argument is fair enough as far as it goes, but the whole point of the statistical mechanics invented by Gibbs, Maxwell, and Boltzmann is that once you have enough particles in a system you can make accurate statistical statements about that system.

So we have the gas laws, the laws of thermodynamics, and so on.

Another point worth making is that current developments in spintronics (computations using the "spin" of electrons) offer a layer of computation beneath that of atomic matter.

I concede that at some point "fudging" will have to take place, but as I pointed out before: statistical mechanics isn't really fudging. Diffusion can be accurately modelled without having to model every single damn particle.
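The diffusion point can be demonstrated directly: an ensemble of independent random walkers reproduces the statistical predictions of diffusion theory without modelling a single inter-particle interaction. A toy sketch (the walker and step counts are arbitrary choices of mine, picked only to keep the run quick):

```python
import random

random.seed(1)

steps = 500      # "time": number of unit hops per particle
walkers = 2000   # a modest ensemble - nothing like Avogadro's number

# Each walker hops left or right at random; no particle "knows" about
# any other, yet the ensemble obeys a clean statistical law.
finals = []
for _ in range(walkers):
    x = 0
    for _ in range(steps):
        x += random.choice((-1, 1))
    finals.append(x)

mean = sum(finals) / walkers
var = sum((x - mean) ** 2 for x in finals) / walkers

# Theory: an unbiased unit-step walk has mean 0 and variance n after n steps
# (the discrete analogue of variance growing as 2Dt in the diffusion equation).
print(mean, var)
```

The measured variance comes out close to the theoretical value of 500 with only two thousand particles, which is the statistical-mechanics shortcut in miniature: you predict the spread without tracking every single damn particle.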

Anyway my gut feeling is that if something like a singularity happens it will be much weirder than simply grinding up the Earth into nanomachines then running a simulated Earth on the nanomachines.

I mean, c'mon, if you're a superhuman intelligence what's the first thing you're going to do? Create the perfect lay? Work out the formula for the perfect cup of tea? (Of course, according to Douglas Adams, that's a more difficult computational problem than almost anything else...)