Showing posts with label Ray Kurzweil. Show all posts

Wednesday, February 11, 2009

Problems of specificity

Eliezer Yudkowsky on the virtues of specificity:

When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.

This is a problem that is ever-present online. Online debates are rarely actual arguments; more often they are bickering contests between people who have completely different ideas about the definitions of words and the scope of the debate.

I was impressed by Yudkowsky's thoughts on the three schools of the singularity because they directly address the oft-ignored problem of what exactly someone means when they talk about the singularity.

This was the problem with PZ Myers' objections to the singularity - he argued against a few elements of Kurzweil's thesis and used these inconsistencies to dismiss the whole thing out of hand.

I agree there are problems with various aspects of Kurzweilian singularitarianism, but they need to be addressed clearly and specifically.

Delicious specificity

In a similar vein I've been trying to work out what the best way of organising my Delicious tags is.

There are some tags, like "technology", "economics", "politics", "science", and "toread" which are so wide-ranging they lose all meaning.

However, if my intention is to be able to refer back to a specific article when I need a reference, too much specificity can hinder my search.

Tag bundles help solve the first problem of overarching vagueness by promoting "technology", "economics", "science", and "politics" to a well-earned retirement on the board of directors.
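The idea of tag bundles can be sketched in a few lines. This is an illustrative sketch only: the bundle names and tags below are made up, and the code mimics how Delicious bundles group many specific tags under one broad heading rather than calling any real API.

```python
# Hypothetical example: a bundle maps a broad heading to the specific
# tags it contains, so the broad tag itself never needs to be applied.
BUNDLES = {
    "technology": {"ai", "programming", "gadgets"},
    "science": {"genetics", "neuroscience", "physics"},
}

def bundles_for(tags):
    """Return the broad bundles that a bookmark's specific tags fall into."""
    return {name for name, members in BUNDLES.items() if members & set(tags)}

# A bookmark tagged only with specifics still surfaces under broad headings:
bundles_for(["genetics", "programming"])  # -> {"science", "technology"}
```

The broad categories survive as views over the specific tags, which keeps the tags themselves meaningful.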

Could it be possible to build a system into Delicious whereby it is possible to say that something vaguely reminds me of something else?

I don't mean a tag like "remindsmeofStephenFry". I mean a way of tagging a document that doesn't explicitly reference Stephen Fry in any way but still reminds me of him. Something like remindsmeof:"StephenFry".

In this context remindsmeof would be a command recognised by the Delicious API to note that the article isn't explicitly about Stephen Fry but nevertheless puts me in mind of him.
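The idea can be sketched as a namespaced tag. To be clear, Delicious has no such "remindsmeof" feature; the function and tag syntax below are entirely hypothetical, just showing how associative tags could be split out from ordinary topical ones.

```python
def split_tags(tags):
    """Separate ordinary topic tags from hypothetical remindsmeof:<Name> tags."""
    topics, reminds = [], []
    for tag in tags:
        if tag.startswith("remindsmeof:"):
            # Keep only the name after the namespace prefix.
            reminds.append(tag.split(":", 1)[1])
        else:
            topics.append(tag)
    return topics, reminds

topics, reminds = split_tags(["comedy", "television", "remindsmeof:StephenFry"])
# topics -> ["comedy", "television"], reminds -> ["StephenFry"]
```

A search for Stephen Fry could then match both explicit topic tags and these looser associations, without muddying the topical tags themselves.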

I suspect that this sort of thing is less useful in practice than I imagine, particularly as much of the utility of Delicious comes from its simplicity and intuitiveness.

And as to Yudkowsky: obviously this respect for the specific can be taken too far: a general knowledge encompassing many fields can also be very valuable.

It's best to learn a lot about a little and a little about a lot.

What I think of the three kinds of technological singularity

Michael Anissimov has a post up on the three kinds of singularity, based on Eliezer Yudkowsky's post on the three schools of singularity thought. This is relevant because many criticisms of the idea of a technological singularity focus on minor or ancillary effects of it.

As I said before, I don't "believe" in the singularity. I think it's either irrelevant or a fairly trivial observation of technological growth trends. But within the context of the three schools I think I can express my thoughts more coherently.

Here are the three kinds of singularity Anissimov and Yudkowsky describe:

  1. Accelerating change: advances in computer science, AI research, genetics, human augmentation, and biotechnology create a positive feedback of rapid technological growth. As the abilities of our tools improve and our own abilities improve through augmentation (both external in the form of personal AIs and internal in the form of intelligence enhancements and nootropics) technological change accelerates exponentially.

  2. Event horizon: advances in computer science, AI research, genetics, human augmentation, and biotechnology lead to the creation of a greater-than-human intelligence. It is impossible for a less intelligent mind to predict the actions of a more intelligent mind so it is truly impossible to make any definitive statements about what will happen after a superintelligence (whether pure AI or strongly augmented human).

  3. Intelligence explosion: advances in computer science, AI research, genetics, human augmentation, biotechnology and neurobiology allow intelligent beings (either human or AI) to alter their own brains so as to improve their own intelligence. As intelligence is the source of all technological development this process will feed back on itself, as the slightly more intelligent beings develop slightly better ways of improving their intelligence, all the while creating amazing spinoff technologies.


Here's what I think of them:

  1. The accelerating change school of the singularity is the one I find most compelling, because it is both logically plausible and reflects the experience humans have had of changing technologies in the past. Technologies like electronics combine with digital computer theory to produce fast computers, which go on to have a major effect on other areas of development. It is the most coherent and reasonable depiction of a technological singularity.

  2. The event horizon school is flakier. First, I have issues with the idea that greater-than-human intelligences are necessarily unpredictable; second, I don't believe that raw intellectual or cognitive ability is the primary driver of technological progress; and third, it is already impossible to accurately predict all the outcomes of any technological development, let alone of strong AI or posthuman superintelligence.

  3. The intelligence explosion school is flakier still. It rests on the assumption that a sufficiently powerful general intelligence would necessarily be able to comprehend how its own mind works and know how to improve it. I do believe that as knowledge of the workings of the brain increases it will lead to real gains in various intellectual capacities, through nootropics, brain augmentation, or brain simulation on faster substrates. But gaining additional knowledge about the brain doesn't require us to be "smarter" in the first place.

With reference to the last point: knowledge of how the brain works will be gained through trial-and-error scientific experimentation and the ongoing technological development of brain-scanning technology (itself developed by trial-and-error tinkering), surgery (also developed through the inductive tinkering of the barber surgeons), and neural interface technology (which is being tinkered with as I write).

Anissimov believes that all technological progress must be judged on the basis of how much closer it brings us to the existence of a superintelligent AI, because then the superintelligent AI will take over the business of technological development and create an intelligence explosion.

Anissimov describes himself as a technological determinist, so he presumably believes social change is caused primarily by technological development. I agree with technological determinism in general, but I feel Anissimov's perspective is closer to cognitive determinism: he believes technological (and hence social) change in the future will happen purely as a result of the cognition of AI.

This is at odds with our experience because the component of scientific and technological development that relies entirely on pure cognition (e.g. Einstein's development of the theory of relativity or Newton's laws of motion) is quite small compared with those which required a substantial amount of empirical study (Darwin's theory of evolution) or mechanical tinkering (Faraday's law of induction).

This is a similar criticism to Kevin Kelly's idea of thinkism, where Kelly highlights the fallacy of believing you can study the universe by simulating it, without recourse to experiment to attempt to falsify your belief.

To summarise: although the development of a smarter-than-human AI would be a huge aid to our understanding of the nature of intelligence, consciousness, and the human mind, there is no reason to assume its effects (though unpredictable) will include an intelligence explosion. It may help accelerate technological development, but only as one part of the general acceleration.

Tuesday, February 10, 2009

Singularity and transhumanism

PZ Myers has written an interesting critique of Ray Kurzweil's thoughts on a possible technological singularity:

...not only is the chart an artificial and perhaps even conscious attempt to fit the data to a predetermined conclusion, but what it actually represents is the proximity of the familiar.

We are much more aware of innovations in our current time and environment, and the farther back we look, the blurrier the distinctions get. We may think it's a grand step forward to have these fancy cell phones that don't tie you to a cord coming from the wall, but there was also a time when people thought it was radical to be using this new bow & arrow thingie, instead of the good ol' atlatl.

We just lump that prior event into a "flinging pointy things" category and don't think much of it. When Kurzweil reifies biases that way, he gets garbage, like this graph, out.

Now I do think that human culture has allowed and encouraged greater rates of change than are possible without active, intelligent engagement—but this techno-mystical crap is just kookery, plain and simple, and the rationale is disgracefully bad. One thing I will say for Kurzweil, though, is that he seems to be a first-rate bullshit artist.

...

Kurzweil tosses a bunch of things into a graph, shows a curve that goes upward, and gets all misty-eyed and spiritual over our Bold Future. Some places it's OK, when he's actually looking at something measurable, like processor speed over time.

In other places, where he puts bacteria and monkeys on the Y-axis and pontificates about the future of evolution, it's absurd. I am completely baffled by Kurzweil's popularity, and in particular the respect he gets in some circles, since his claims simply do not hold up to even casually critical examination.

Calling Kurzweil a bullshit artist is unfair: Kurzweil is a genuinely talented inventor and engineer. His beliefs might be a little kooky to some, but I've always found his writing compelling.

Kurzweil is a spiritualist: there's nothing wrong with that. A belief in the power of some imminent superhuman AI to solve all our problems is slightly less absurd than most religious beliefs, and Kurzweil doesn't come across as the type to build a pyramid of skulls in the meantime.

But really: who honestly cares about the singularity?

Building artificial human minds may be possible within my lifetime, or it may not.

There will still be substantial technological change, even if the prime mover remains good old-fashioned human grey matter.

What I find compelling is the suggestion of where ongoing developments in biology, computing, genetics, and human augmentation may take us over the next few decades.

Among these developments are new ways of combining human intelligence with machine intelligence that result in a substantial increase along all dimensions of intellectual development (what Kurzweil calls the law of accelerating returns).

So although the idea of the singularity has become less compelling what continues to excite me about Kurzweil's writings are his descriptions of posthumans. Partly for the good ol' SFnal sensawunda, and partly because maybe it could happen to me. Maybe I could become a posthuman.

I think the idea and potential reality of self-guided human evolution is a great idea in itself. I can take or leave the singularity.

Prof Myers also comments separately on Juan Enriquez's recent pronouncements on the future of humanity at the TED conference:

Every species also takes control over its own evolution, in a sense; individuals make choices of all sorts that influence what will happen in the next generation. You could rightly argue that they don't do it with planning and intent, but I have seen nothing that suggests that our attempts to modify our species, low tech and high tech together, are any wiser or better informed about the long-term consequences than those of any rat fighting for an opportunity to mate. We do what we do; don't pretend it's part of a long term plan that is actually prepared for all of the unexpected eventualities.

I agree with Myers up to a point: he's basically saying that developments in biotechnology and the progress of transhumanism won't happen in some big, top-down, organised way, but will rather develop as a series of steps through stochastic tinkering in the lab and (eventually) the marketplace.

The beauty of human progress is it doesn't have any long term plan: we do what we do and we tinker and experiment and find things out.

Juan Enriquez can make all the grand pronouncements about the future of humanity he likes but what he is actually trying to do is raise investment capital for his company Biotechonomy.

And Biotechonomy will pay scientists to tinker and experiment and find things out.

Such is the nature of technological advancement.

Prof Myers ends on a positive note:

Maybe this information age will have as dramatic and as important an effect on humanity as the invention of writing, but even if it does, don't expect a nerd rapture to come of it. Just more cool stuff, and a bigger, shinier, fancier playground for humanity to gambol about in.

Well I certainly agree with that.