Showing posts with label singularitarianism. Show all posts

Wednesday, February 18, 2009

Explaining intelligence: complex adaptive systems

I've been trying all day to write a coherent response to Michael Anissimov's recent posts Friendly AI - May I Check Your Ideological Baggage and The Three Singularity Schools, Kurzweil, and Superintelligence.

I finally succeeded in this comment on Ken MacLeod's recent discussion on evolution and AI.

Following is a slightly cleaner version:


My problem with Anissimov's implicit argument is that it rests on a misunderstanding of the nature of technological progress. Anissimov's belief that "once we create a superhuman intelligence all our scientific problems will be solved"¹ is based on the assumption that intelligence is the only contributory factor in innovation. Anissimov says:

To me, the relevance of a given technology to humanity’s future is largely determined by whether it contributes to the creation of superintelligence or not, and if so, whether it contributes to the creation of friendly or unfriendly superintelligence. The rest is just decoration.

"The rest" being every technological development that will occur between now and the birth of our putative god-in-a-box AI.

Now I'm willing to bet microchips to nanobots that there will be a few interesting innovations, inventions, and scientific breakthroughs over the next few years that aren't directly linked to AI research but still have a large impact on people's lives.

Developing a cure for AIDS, for example.

Anissimov makes these claims concerning the importance of AI research in support of the intelligence explosion school of the technological singularity, the school which can briefly be expressed as:

Intelligence has always been the source of technology. If technology can significantly improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.
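The loop described in this quote can be reduced to a toy model. A minimal sketch, in which the constant `gain` per improvement cycle and the linear feedback rule are purely illustrative assumptions, not claims about real minds:

```python
# Toy model of the "intelligence explosion" feedback loop quoted above:
# each generation's rate of self-improvement is proportional to its
# current intelligence. This illustrates the school's premise, nothing more.

def simulate_explosion(initial=1.0, gain=0.1, generations=50):
    """Return intelligence levels where each step's improvement
    scales with the current level (compound growth)."""
    levels = [initial]
    for _ in range(generations):
        current = levels[-1]
        # smarter -> more capacity to make yourself smarter
        levels.append(current + gain * current)
    return levels

levels = simulate_explosion()
# Under these assumptions growth is exponential: (1 + gain) ** generations.
```

The interesting (and contested) assumption is the feedback rule itself; make the improvement rate constant instead of proportional and the "explosion" disappears into ordinary linear growth.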

The problem with this view of the singularity is that intelligence is not the main driver of innovation.

We know this because the single most dynamic, creative, and successful innovation generator on the surface of this planet famously does not possess intelligence.

Natural selection lacks intelligence and it has produced an extraordinary fecundity of design and invention, not to mention the only version of intelligence currently available to us.

Some transhumanists imagine that simply creating a sufficiently powerful intelligence will solve our problems. We probably could evolve an intelligent being, using the process described in Ken MacLeod's The Star Fraction just by creating billions of lines of random code (a trillion script-kiddies at a trillion keyboards) and applying an evolutionary de-stupidifying process to it, then rinse, cycle, and repeat until we get something that smokes a pipe, does The Times crossword, and publishes the occasional enlightening monograph.

Or we could even do a brute-force molecular-level simulation of a human being, assuming that the various exponentials associated with computing hardware continue ticking over for a few more decades.

But in the meantime why not cut out the AI and go straight to innovation by evolution? Why don't we find some way of generating vast numbers of products and items and testing them under competitive conditions, then recombine and incrementally adjust and improve them until we have an optimal outcome?
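That generate-test-recombine loop is essentially a genetic algorithm. A minimal sketch, in which the bit-string "designs" and the toy fitness function are stand-ins for real products under competitive testing:

```python
import random

# Sketch of "innovation by evolution": generate many candidate designs,
# score them in competition, keep the fitter half, and produce the next
# generation by recombination plus small random adjustments.

def evolve(pop_size=100, length=20, generations=60, seed=0):
    rng = random.Random(seed)
    # A "design" is a bit string; fitness counts desirable features (set bits).
    population = [[rng.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    fitness = lambda design: sum(design)

    for _ in range(generations):
        # Competitive testing: the fitter half survives unchanged.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)    # recombine two parent designs
            child = a[:cut] + b[cut:]
            i = rng.randrange(length)         # incremental adjustment
            child[i] ^= 1
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

No step in the loop requires intelligence: variation is random and selection is blind, yet the population climbs steadily toward better designs.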

And in fact we already do this - free markets create a huge pool of possible companies and products and the really bad ones are filtered out. Effective companies increase the extent of their control of a finite set of resources at the expense of less effective companies.

Companies don't breed, of course, rather new designs for companies are created by human beings. Most fail. But if one is superior to another it will survive and grow, taking wealth, influence and market share from its competitors.

The same applies to products: the idea that products like the iPhone come about as a result of a flash of genius insight from someone like Steve Jobs is incorrect. The iPhone is the result of a long series of tiny, incremental, trial-and-error developments across many scientific and technical fields.

My conclusion: if the singularity means anything then it means that technological change will continue to accelerate in certain areas. And this trend has already been happening for almost two centuries.

The physicist and complexity theorist Murray Gell-Mann would say that human civilization is a complex adaptive system in the same way that the biological evolutionary process and individual human minds are.

Knowledge, science, learning and culture have created an evolutionary process working outside of human minds and outside of biology that results in an ongoing rise in the rate of change of technical progress.

¹: Presumably these superhuman entities would be willing to pay us to solve our problems, what with their being superhumanly bored with life, having already simulated and experienced the totality of all possible existences while the lab guys were getting the celebratory muffins.

Wednesday, February 11, 2009

Problems of specificity

Eliezer Yudkowsky on the virtues of specificity:

When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.

This is a problem that is ever-present online. Very rarely are online debates actual arguments; more often they are bickering contests between people who have completely different ideas about the definitions of words and the scope of the debate.

I was impressed by Yudkowsky's thoughts on the three schools of the singularity because they directly address the oft-ignored problem of what exactly someone means when they talk about the singularity.

This was the problem with PZ Myers' objections to the singularity - he argued against a few elements of Kurzweil's thesis and used these inconsistencies to dismiss the whole thing out of hand.

I agree there are problems with various aspects of Kurzweilian singularitarianism but they need to be addressed clearly and specifically.

Delicious specificity

In a similar vein I've been trying to work out what the best way of organising my Delicious tags is.

There are some tags, like "technology", "economics", "politics", "science", and "toread" which are so wide-ranging they lose all meaning.

However if my intention is to be able to refer back to a specific article when I need a reference, too much specificity can hinder my search.

Tag bundles help solve the first problem of overarching vagueness by promoting "technology", "economics", "science", and "politics" to a well-earned retirement on the board of directors.

Could it be possible to build a system into Delicious whereby it is possible to say that something vaguely reminds me of something else?

I don't mean a tag like "remindsmeofStephenFry". I mean a way of tagging a document that doesn't explicitly reference Stephen Fry in any way but still reminds me of him. Something like remindsmeof:"StephenFry".

In this context remindsmeof would be a command recognised by the Delicious API to note that the article isn't explicitly about Stephen Fry but nevertheless puts me in mind of him.
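To be concrete about what such a command might look like: the sketch below is a purely hypothetical client-side convention (the real Delicious API has no such feature), treating remindsmeof: as a namespace prefix inside ordinary tags:

```python
# Hypothetical convention for the "remindsmeof" idea: encode the
# association as a namespaced tag (e.g. "remindsmeof:StephenFry") and
# split it out from ordinary tags when reading a bookmark back.
# Nothing here is part of the actual Delicious API.

NAMESPACE = "remindsmeof:"

def split_tags(tags):
    """Separate plain tags from remindsmeof: associations."""
    plain, reminds = [], []
    for tag in tags:
        if tag.startswith(NAMESPACE):
            reminds.append(tag[len(NAMESPACE):])
        else:
            plain.append(tag)
    return plain, reminds

plain, reminds = split_tags(["comedy", "remindsmeof:StephenFry", "technology"])
# plain -> ["comedy", "technology"], reminds -> ["StephenFry"]
```

The advantage of a naming convention over a new API command is that it needs no support from the service itself, which matters for a site whose appeal rests on simplicity.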

I suspect that this sort of thing is less useful in practice than I imagine, particularly as much of the utility of Delicious comes from its simplicity and intuitiveness.

And as to Yudkowsky: obviously this respect for the specific can be taken too far: a general knowledge encompassing many fields can also be very valuable.

It's best to learn a lot about a little and a little about a lot.

What I think of the three kinds of technological singularity

Michael Anissimov has a post up on the three kinds of singularity, based on Eliezer Yudkowsky's post on the three schools of singularity thought. This is relevant to many criticisms of the idea of a technological singularity, as such criticisms frequently focus on minor or ancillary effects of the singularity.

As I said before, I don't "believe" in the singularity. I think it's either irrelevant or a fairly trivial observation of technological growth trends. But within the context of the three schools I think I can express my thoughts more coherently.

Here are the three kinds of singularity Anissimov and Yudkowsky describe:

  1. Accelerating change: advances in computer science, AI research, genetics, human augmentation, and biotechnology create a positive feedback of rapid technological growth. As the abilities of our tools improve and our own abilities improve through augmentation (both external in the form of personal AIs and internal in the form of intelligence enhancements and nootropics) technological change accelerates exponentially.

  2. Event horizon: advances in computer science, AI research, genetics, human augmentation, and biotechnology lead to the creation of a greater-than-human intelligence. It is impossible for a less intelligent mind to predict the actions of a more intelligent mind so it is truly impossible to make any definitive statements about what will happen after a superintelligence (whether pure AI or strongly augmented human).

  3. Intelligence explosion: advances in computer science, AI research, genetics, human augmentation, biotechnology and neurobiology allow intelligent beings (either human or AI) to alter their own brains so as to improve their own intelligence. As intelligence is the source of all technological development this process will feed back on itself, as the slightly more intelligent beings develop slightly better ways of improving their intelligence, all the while creating amazing spinoff technologies.


Here's what I think of them:

  1. The accelerating change school of the singularity is the one I find most compelling, because it is both logically plausible and reflects the experience humans have had of changing technologies in the past. Technologies like electronics combine with digital computing theory to produce fast computers that go on to have a major effect on other areas of development. I think the accelerating change argument is the most coherent and reasonable depiction of a technological singularity.

  2. The event horizon school is flakier. First, I have issues with the idea that greater-than-human intelligences are necessarily unpredictable; second, I don't believe that raw intellectual or cognitive ability is the primary driver of technological progress; and third, we have seen that it is already impossible to accurately predict all the outcomes of any technological development, let alone those of strong AI or posthuman superintelligence.

  3. The intelligence explosion school is flakier still. It is based on the assumption that a sufficiently powerful general intelligence would necessarily be able to comprehend how its own mind works and know how to improve it. I do believe that as knowledge of the workings of the brain increases it will lead to real gains in various intellectual capacities, through nootropics, brain augmentation, or through brain simulation on faster substrates. But gaining additional knowledge about the brain doesn't require us to be "smarter."

With reference to the last point: the knowledge of how the brain works will be gained through trial-and-error scientific experimentation and ongoing technological development of brain-scanning technology (itself developed by trial-and-error technological tinkering), surgery (developed through the inductive tinkering of the barber surgeons), and neural interface technology (which is being tinkered with as I write).

Anissimov believes that all technological progress must be judged on the basis of how much closer it brings us to the existence of a superintelligent AI, because then the superintelligent AI will take over the business of technological development and create an intelligence explosion.

Anissimov describes himself as a technological determinist; as such, he presumably believes social change is caused primarily by technological development. I agree with technological determinism in general, but I feel Anissimov's perspective is closer to cognitive determinism: he believes technological (and hence social) change in the future will happen purely as a result of the cognition of AI.

This is at odds with our experience, because the component of scientific and technological development that relies entirely on pure cognition (e.g. Einstein's development of the theory of relativity or Newton's laws of motion) is quite small compared with the component that required a substantial amount of empirical study (Darwin's theory of evolution) or mechanical tinkering (Faraday's law of induction).

This is a similar criticism to Kevin Kelly's idea of thinkism, where Kelly highlights the fallacy of believing you can study the universe by simulating it, without recourse to experiment to attempt to falsify your belief.

To summarise: although the development of a smarter-than-human AI would be a huge aid to our understanding of the nature of intelligence, consciousness, and the human mind, there is no reason to assume its effects (though unpredictable) will include an intelligence explosion. It may well contribute to an acceleration in technological development, but only as one part of the general acceleration.

Tuesday, February 10, 2009

Singularity and transhumanism

PZ Myers has written an interesting critique of Ray Kurzweil's thoughts on a possible technological singularity:

...not only is the chart an artificial and perhaps even conscious attempt to fit the data to a predetermined conclusion, but what it actually represents is the proximity of the familiar.

We are much more aware of innovations in our current time and environment, and the farther back we look, the blurrier the distinctions get. We may think it's a grand step forward to have these fancy cell phones that don't tie you to a cord coming from the wall, but there was also a time when people thought it was radical to be using this new bow & arrow thingie, instead of the good ol' atlatl.

We just lump that prior event into a "flinging pointy things" category and don't think much of it. When Kurzweil reifies biases that way, he gets garbage, like this graph, out.

Now I do think that human culture has allowed and encouraged greater rates of change than are possible without active, intelligent engagement—but this techno-mystical crap is just kookery, plain and simple, and the rationale is disgracefully bad. One thing I will say for Kurzweil, though, is that he seems to be a first-rate bullshit artist.

...

Kurzweil tosses a bunch of things into a graph, shows a curve that goes upward, and gets all misty-eyed and spiritual over our Bold Future. Some places it's OK, when he's actually looking at something measurable, like processor speed over time.

In other places, where he puts bacteria and monkeys on the Y-axis and pontificates about the future of evolution, it's absurd. I am completely baffled by Kurzweil's popularity, and in particular the respect he gets in some circles, since his claims simply do not hold up to even casually critical examination.

Calling Kurzweil a bullshit artist is unfair: Kurzweil is a genuinely talented inventor and engineer. His beliefs might be a little kooky to some, but I've always found his writing compelling.

Kurzweil is a spiritualist: there's nothing wrong with that. A belief in the power of some imminent superhuman AI to solve all our problems is slightly less absurd than most religious beliefs, and Kurzweil doesn't come across as the type to build a pyramid of skulls in the meantime.

But really: who honestly cares about the singularity?

Building artificial human minds may be possible within my lifetime, or it may not.

There will still be substantial technological change, even if the prime mover remains good old-fashioned human grey matter.

What I find compelling is the suggestion of where ongoing developments in biology, computing, genetics, and human augmentation may take us over the next few decades.

Among these developments are new ways of combining human intelligence with machine intelligence that result in a substantial increase along all dimensions of intellectual development (what Kurzweil calls the law of accelerating returns).

So although the idea of the singularity has become less compelling what continues to excite me about Kurzweil's writings are his descriptions of posthumans. Partly for the good ol' SFnal sensawunda, and partly because maybe it could happen to me. Maybe I could become a posthuman.

I think the idea and potential reality of self-guided human evolution is a great idea in itself. I can take or leave the singularity.

Prof Myers also comments separately on the recent pronouncements on the future of humanity by Juan Enriquez at the TED conference:

Every species also takes control over its own evolution, in a sense; individuals make choices of all sorts that influence what will happen in the next generation. You could rightly argue that they don't do it with planning and intent, but I have seen nothing that suggests that our attempts to modify our species, low tech and high tech together, are any wiser or better informed about the long-term consequences than those of any rat fighting for an opportunity to mate. We do what we do; don't pretend it's part of a long term plan that is actually prepared for all of the unexpected eventualities.

I agree with Myers up to a point: he's basically saying that developments in biotechnology and the progress of transhumanism won't happen in some big, top-down, organised way, but will rather develop as a series of steps through stochastic tinkering in the lab and (eventually) the marketplace.

The beauty of human progress is it doesn't have any long term plan: we do what we do and we tinker and experiment and find things out.

Juan Enriquez can make all the grand pronouncements about the future of humanity he likes but what he is actually trying to do is raise investment capital for his company Biotechonomy.

And Biotechonomy will pay scientists to tinker and experiment and find things out.

Such is the nature of technological advancement.

Prof Myers ends on a positive note:

Maybe this information age will have as dramatic and as important an effect on humanity as the invention of writing, but even if it does, don't expect a nerd rapture to come of it. Just more cool stuff, and a bigger, shinier, fancier playground for humanity to gambol about in.

Well I certainly agree with that.

Wednesday, May 21, 2008

A Commentary on Commentaries

At any given time there are a smattering of articles in the dead-tree press, blogs, websites, and magazines worthy of perusal by anyone with a healthy interest in what is said about what goes on in the world.

Collected here are a few items that I feel are worthy of comment (I'm going to have one post per article, 'cause it's easier that way).

Privacy and social networking are two key components of the zeitgeist of social debate in the first decade of the 21st century. Zoe Williams writes in The Guardian of teenagers and online exhibitionism:

"...trying to inculcate discretion at a time when everybody is seeking exposure is like teaching abstinence at a time when all they want to do is have sex. Never mind the rights and wrongs of it, it doesn't work..."

There is no doubt that adolescence is a time when children are emotionally crippled by their own biology until they emerge, as if from a chrysalis, into the neurotic grab-bag of talents, proclivities, and questionable ethics that makes up what passes for a fully-functioning adult and denizen of the 21st century (that's an awful sentence, on two levels, but I will keep it because I enjoyed writing it - damn it!). However, I don't think teenagers are necessarily stupid.

This brings us on to the next key point in Williams' article. Something that has already occurred to most journos and commentators is that all this rubbish that is stuck up on social networking websites will (theoretically) still be there in the year 2020, when yours truly might be thinking of running for election to political office.

What's to be done? Williams suggests:

"...that 15 years hence, people won't need to be protected from their past excesses, because the very fact that this is a universal impulse that social-networking sites merely cater to, will mean that tomorrow's politicians will all have as many skeletons in their closets as one another. In fact, if you don't have a YouTube video from when you were 16, dancing to Britney Spears's Toxic, then it'll be as much an impediment to your public approval rating as being single is today."

This point is well made. I will now smatter this blog with spelling mistakes and grammatical errors, safe in the knowledge that people will draw from this the conclusion that I am "genuine" and "honest about my mistakes."

However they could also conclude that I am too computer-illiterate to spellcheck my post!

[However if Ray Kurzweil is right, by 2020 the computers will have taken over in an event already being labelled "the technological singularity" - if I'm campaigning on a pro-singularity ticket my spelling mistakes will be interpreted as an early and tacit recognition of the need to augment my feeble human intellect with a Mighty Processor. On the other hand, if I'm campaigning on an anti-singularity platform my PC-illiteracy will be seen as evidence of my inherent suspicion of technology.]

The agony of indecision! I feel the way the press says Gordon Brown must be feeling.

I don't owe the person I will become anything. I would vote for him, but only after a close examination of the policies he supports on a variety of issues and the relative positions of his opponents.

In conclusion if, by 2020, we're still going on and on about politicians' personalities as if they mattered a gnat's shite then Dog help us, Dog help us all.

Wednesday, March 05, 2008

Rudy Rucker and the Singularity

To quote from Rucker's post:

"This is because there are no shortcuts for nature’s computations. Due to a property of the natural world that I call the “principle of natural unpredictability,” fully simulating a bunch of particles for a certain period of time requires a system using about the same number of particles for about the same length of time. Naturally occurring systems don’t allow for drastic shortcuts."

Rucker's argument is fair enough as far as it goes, but the whole point of the statistical mechanics invented by Gibbs, Maxwell, and Boltzmann is that once you have enough particles in a system you can make accurate statistical statements about that system.

So we have the gas laws, the laws of thermodynamics, and so on.

Another point worth making is that current developments in spintronics (computations using the "spin" of electrons) offer a layer of computation beneath that of atomic matter.

I concede that at some point "fudging" will have to take place, but as I pointed out before: statistical mechanics isn't really fudging. Diffusion can be accurately modelled without having to model every single damn particle.
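The diffusion point can be made concrete: evolving a coarse concentration profile (one number per grid cell) reproduces the behaviour of vast numbers of particles without tracking any individual one. A sketch of the 1-D diffusion equation integrated by finite differences, with the diffusion constant, grid, and initial blob all arbitrary illustrative choices:

```python
# Statistical modelling of diffusion: instead of simulating every particle,
# evolve a concentration profile c(x) under dc/dt = D * d2c/dx2 using an
# explicit finite-difference scheme. Stability requires D*dt/dx**2 <= 0.5.

def diffuse(cells=101, steps=500, D=1.0, dx=1.0, dt=0.2):
    c = [0.0] * cells
    c[cells // 2] = 1.0 / dx        # all the "substance" starts in one cell
    coeff = D * dt / dx ** 2        # 0.2 here, within the stability limit
    for _ in range(steps):
        # Each interior cell exchanges substance with its neighbours;
        # the boundary cells are held at zero.
        c = [c[i] + coeff * (c[i - 1] - 2 * c[i] + c[i + 1])
             if 0 < i < cells - 1 else c[i]
             for i in range(cells)]
    return c

profile = diffuse()
total = sum(profile)  # mass is conserved (up to tiny boundary leakage)
```

The profile spreads into the familiar bell shape, and total mass stays essentially constant: the statistical description keeps exactly the information the gas laws keep, and no more.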

Anyway my gut feeling is that if something like a singularity happens it will be much weirder than simply grinding up the Earth into nanomachines then running a simulated Earth on the nanomachines.

I mean, c'mon, if you're a superhuman intelligence what's the first thing you're going to do? Create the perfect lay? Work out the formula for the perfect cup of tea? (Of course, according to Douglas Adams the latter is a much more difficult computational problem than almost anything else...)

Monday, December 24, 2007

Technology

I was in the Manchester Museum recently (it is rather good), and I saw a rather splendid display of bows and arrows. These were in many different styles and from many different cultures. There were little interactive displays that told you how the bows were made. An awful lot of effort and craft goes into what I had always assumed was a simple piece of wood.

Thinking about this as I struggled to operate my rapidly aging Sony Ericsson K750i it occurred to me that nowadays that level of development is rarely reached. Innovation and evolution happen so quickly that there is no point in refining a particular device. Hence mobile phones and mp3 players are rather unpleasant and tacky. Good design does crop up occasionally but the general rate of development is such that avenues of design are left unexplored and concepts are left incomplete.

Putting aside speculative discussion of a spike or singularity in the near future I sometimes wonder what "technology" and devices will really be like in, say, a thousand years' time. What will human beings look like and how will they move around? In what manner will they reach orbit? Will they even bother?

In the realm of the state, change seems constant and always disruptive. The pointless and dangerous desire to collate and store information is partly simply due to the technology being available to do it. This leads to accidents.

I feel privileged to live in this time of change but I do sometimes wonder if humanity will ever achieve an equilibrium with its environment. 99% of human history consists of people living in hunter-gatherer style societies. The current "singularity" we're living through will presumably result in either our destruction or in some new equilibrium. I wonder what it will look like.

Monday, November 05, 2007

A Bit of Fry and Hari

Today my topic is one of problems. Problems are often defined as particular questions or sets of options. In the case of immigration the problem is usually phrased:

"How can we reduce or control immigration so that it becomes a positive force within our society rather than a negative force?"

And, as always, the way the problem is phrased begs the question: "Is immigration a negative force right now?"

Blurgh.

I don't really care about immigration. It's a Daily Mail issue and has been parsed entirely in terms of being something negative, despite the obvious fact that with an aging population it is entirely necessary that we import cheap, youthful labour to care for our elderly.

Many of the great debates in life, the universe and everything devolve to questions that are misapplied. When I say I don't care about debating God or poetry or global warming or the ethics of scientific research (empiricism and the scientific method, in the true sense of the concept, is or should be gloriously free of these considerations; that it clearly isn't is regrettable) I don't mean I don't care. I mean I don't care about the issue as it is usually expressed, because I feel it has been misrepresented.

Take transhumanism and the closely related issue of AI. Putting aside the vast technical and scientific barriers to both, the debate is often put in terms of "should we pursue this line of research?" This is a ridiculously stupid question to ask. The appropriate response is:

We are already pursuing this research in a form you don't yet recognise as transhumanist research and AI research.

Even more succinctly, and with regard to transhumanism you could say:

We are already transhumans and could well be considered posthumans.

Consider: prior to the industrial revolution humans were essentially bright apes and they (or other organisms) provided the majority of the energy required to run civilization. We are approaching a period in human development where manual labour might well become obsolete. The key lies in the control mechanisms.

It is still massively cheaper to employ workers in China to make most things than to develop and manufacture a robot capable of doing the same job.

Similarly with construction: the problem here requires a robot capable of navigating a building site, following vague instructions, applying "common sense" to problems, drinking tea and reading The Sun: all of which are as yet beyond the capabilities of even the most complex (or simple-minded, in the latter case) non-biological machines.

The point is that I wear glasses and wear clothes and take drugs and read books and use a pocket calculator and function much better, and am much happier, than I would be if I did not do these things.

Scientists of many different disciplines have made it their business to model parts of the human brain and the neural structures of other animals and have been doing so for decades.

Meanwhile genetic engineering continues apace. If you're smart enough to identify the red lines that mark any particular area of research as being dangerous then you're probably smart enough to cope with the outcomes. If you can't identify the red lines then we're probably fucked anyway.

Fukuyama's argument boils down to the precautionary principle. The precautionary principle is flawed for all the reasons discussed by Ray Kurzweil in The Singularity is Near, so I won't bother going into it.

Johann Hari's recent article touches on a subject dear to my heart: transhumanism. As a devoted absorber of the teachings of More, Kurzweil, et al, I am always happy to see reference to this interesting ideology in the national press.

(...as an aside, and due to the purchase of T-shirts and prints by myself and many others the sublime Dresden Codak is to be published weekly!)

Apart from a reference in New Scientist a few years ago and a few "eccentric American" stories in The Guardian and other newspapers this is the first serious reference to transhumanism I've seen in the Dead Tree Press. Doubtless more will emerge over time and it will begin to gain credence (or at least name-recognition, which is all you seem to need these days c.f. Boris Johnson) amongst the general populace.

Hari also makes an excellent point regarding the criticism of transhumanism by Francis Fukuyama. The debate has been warped to fit the extremes. Either we ban all research that might lead to the creation of a separate posthuman species or we actively pursue such research to the end of creating such a species.

As Hari points out, bickering over creating new species of human is pointless and stupid. Evolution only seems static to us because it works on such a long timescale (but not unthinkably long - only about 40,000 years separate us from our prehistoric ancestors).

I think, like Hari, that it is much more sensible to concentrate on the possibilities of creating smarter, faster, stronger, healthier, longer-lived people.

The Gattaca issue, that maybe one day humanity might be divided between haves and have-nots, between the rich and the poor, between the upgraded and the legacy, between the Eloi and the Morlocks is also silly.

Human beings have always suffered inequality of health and ability due to inequality of wealth. The key to liberal democracy lies in all individuals being equal under the law (the problem of defining individuals, particularly with regard to posthumans, is an issue for another day).

The issue for transhumanism is how to provide people with better lives and greater powers of self-expression and more opportunity for happiness. The issue is not the creation of a new species, although this might be a means to the end of creating more opportunity for happiness.

As always, the problem is not as it is phrased; it is something altogether different. In the case of Stephen Fry's recent blog posting (which The Guardian has taken on as a series of articles) the issue as it is often phrased is: "do you prefer looks/form to functionality?" in the context of consumer electronics.

Of course, as Fry points out, when it comes to a device you use every day form is very much tied up with functionality. For me beauty in consumer electronics stems from design, and design is also a key component of usability and hence functionality.

If a house is a "machine for living in" then that house should look aesthetically pleasing, or it is not fulfilling its function.

Anyone who claims that those who reject the existence of God are "closed-minded" is treading on thin ice. The debate should not be: "does God exist?" The debate should be: "how did the universe begin?"

There are, I assert, ways of finding out if God exists. If God exists then presumably those who pray to God for help will achieve statistically better outcomes in their endeavours than those who don't pray.

If there were a correlation between frequency of prayer ("faith" is difficult to measure) and, say, salary, then you could begin to build a hypothesis for the existence of something that could be called God.

I'd be interested to see if any such research has been done and what the result was. Since I am not aware of any, I suspect that the results (if there are any) did not suggest God exists; I'm certain the various Churches would otherwise be trumpeting them to the heavens.

Of course prayer is there as a comforter. I, as a secular humanist, choose to reject it as a piece of mental transhumanism that is not self-contained enough to be safe.

Faith in God or manifest destiny is too powerful and dangerous. Singularitarianism and transhumanism are as open to corruption as any ideology but, like liberalism, the fundamental precepts of the transhumanist meme are overwhelmingly positive.

Any corruption of liberalism would cease to be usefully described as such. So it is with transhumanism: beyond that point it devolves into word games and Orwellian propaganda.

I feel it is much better to form a core of easily expressible beliefs and live by them. In the case of humanism that core is: this is all there is, and I am alone.

This sucks. I'd like to do something to make this last longer and something to make me less lonely.

So transhumanism is the next logical step after humanism.

Friday, September 14, 2007

More on the Singularity

For some time now I've been trying to write a thoughtful article on the Technological Singularity. In true blogger style I've decided that rather than expend my energies on creating my own article, I will find someone else's and link to it.

The writer is Ronald Bailey from Reason Online, the online wing of a reasonably popular (by UK standards) libertarian magazine, and he is reporting on the recent Singularity Summit.

Bailey does a good job of summarising the basic ideas surrounding the Technological Singularity. He quotes one of the attendees of the conference: Eliezer Yudkowsky, cofounder of the Singularity Institute:

"...the Event Horizon school is just one of the three main schools of thought about the Singularity. The other two are the Accelerationist and the Intelligence Explosion schools..."

My summary of these groups is as follows:

Event Horizon: once an intelligence is developed that is "greater" (dismiss for a moment the difficulty in quantifying intelligence) than ours we, by our very nature, will be unable to predict what will happen.

Accelerationist: advances in computer hardware (cf. Moore's Law) will continue to accelerate, along with our understanding of our own biology and our ability (via genetic engineering, implants, bioengineering, etc.) to alter it. This means that within a few decades we will merge with our technology and become the greater intelligence. Suggested by Ray Kurzweil in The Singularity is Near.

Intelligence Explosion (not a bomb at Thames House, the other kind of intelligence): technology arises from the application of intelligence to problems. When technology is applied to our own apparent lack of intelligence, we will get marginally better intelligence, which will result in marginally better technology, which will produce even better intelligence. A feedback loop will be created, with "intelligence" increasing with each iteration. Suggested by I.J. Good in a New Scientist article.

The three concepts feed into one another and don't necessarily cancel each other out.
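The Intelligence Explosion feedback loop can be caricatured as a toy model. To be clear, the growth function and the 10% gain per generation here are entirely my own invention, chosen only to show the compounding structure of the argument:

```python
# Toy caricature of the Intelligence Explosion feedback loop:
# each generation's technology is a function of current intelligence,
# and that technology in turn boosts the next generation's intelligence.
# The 10% gain per iteration is an arbitrary, invented parameter.

def intelligence_explosion(intelligence=1.0, gain=0.10, generations=10):
    history = [intelligence]
    for _ in range(generations):
        technology = intelligence * gain  # better minds build better tools
        intelligence += technology        # better tools improve the minds
        history.append(intelligence)
    return history

print(intelligence_explosion())  # compounds each generation: roughly 1.0, 1.1, 1.21, ...
```

Even this crude sketch makes the key assumption visible: the loop only "explodes" if the gain per iteration stays positive rather than hitting diminishing returns.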

I suspect that the world sketched out by Kurzweil is not impossible, but the timeframe seems implausible. There is no reason why matter shouldn't be able to support beings that are more durable than we are, longer lived, faster at learning, with better memories, and that experience the world more slowly and deeply (i.e. each second for them would offer what would amount to a week's worth of thinking time to us).

However, the current state of our ability to control matter, though significant, doesn't seem to offer the possibility of superhumans within the 50 years Kurzweil suggests.

Even if silicon-based computer chips are currently undergoing exponential increases in transistor density, that doesn't necessarily entail similar progress in another area like brain scanning.

Kurzweil does a good job of pointing out exponential trends similar to Moore's Law in The Singularity is Near, for example the Human Genome Project (page 510 of the USA Penguin hardback copy I have), and the resolution of non-invasive brain scanning (page 159).
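The arithmetic behind these extrapolations is trivial, which is part of their seductiveness. A sketch, assuming the textbook figure of a doubling every two years (the starting transistor count is invented):

```python
# Extrapolating an exponential trend, Moore's-Law style.
# Assumes a doubling period of two years; the starting count is invented.

def extrapolate(start_count, years, doubling_period=2.0):
    """Project a quantity forward assuming steady exponential doubling."""
    return start_count * 2 ** (years / doubling_period)

# A hypothetical chip with a billion transistors today:
for years in (2, 10, 20):
    print(f"{years:>2} years: {extrapolate(1e9, years):.2e} transistors")
```

The sketch also shows the weakness: the whole projection hangs on the doubling period remaining constant, which is precisely what cannot be assumed across different fields like brain scanning.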

My basic problem with Kurzweil's book is my incredulity: the book is compelling whilst you read it, but once you're back in the real world you simply can't imagine a "singularity" of any flavour occurring.

Which, ahem, is pretty much the definition of the event horizon style singularity.

*sigh*

So I suppose I'll just have to wait and see, like everyone else...

Thursday, September 06, 2007

Post-simianism

Aaron Diaz's sublime webcomic Dresden Codak continues, with the most recent instalment including an essay that harpoons the critics of transhumanism with spears of satire.

Friday, August 17, 2007

The New Solipsism

The Simulation Argument is the most recent form of old solipsistic arguments.

George Dvorsky puts forward "the dark side" of the Simulation Argument: basically, that the Creator/Maintainer beings are at best indifferent to the plight of humanity - they don't care if we blow ourselves up or torture each other or die in hideous agony.

If we're not in a simulation then it means that it is much harder than we presume to create simulated intelligence - a conclusion that runs counter to My First Millennialist Cult: Singularitarianism.

I've always seen it from a glass-half-full perspective anyway. We're not in Hell, and things are getting better. This suggests that the Game has rules, and that indifference on the part of the Creator/Maintainer beings is something to be thankful for (paradise precludes free will [which probably doesn't exist anyway], as in Genesis, so I'd rather live with my quasi-free will than without even the illusion of independence). It also raises the possibility of an afterlife. This would be good.

And like all solipsistic arguments, there's not much we can do about it anyway.

If we aren't in a simulation then Everything is As It Seems. In this case rational arguments for a singularity event remain as strong as ever they were.

I suspect an argument against both the Fermi Paradox and the Simulation Argument would begin by pointing out the fallacy of taking a single example (our existence), assuming with no additional information that it is representative, and then asserting generalities from that one data point.

Things are almost always more complex and weirder than at first they seem - I suspect that if we do find out the basics of our universe then we will be surprised.

Wednesday, August 01, 2007

Culture

I watched Garden State today. I agree with this guy, Zach Braff is completely underrated. It is an extremely good movie: very good indeed. Armed with only my AS-level in English language, I would say the essential theme is that of alienation, and of finding where you can be happy (“alienation” is a good word when dealing with arty subjects, another good word is “juxtaposition”). Analysis of the meaning of the film aside, the soundtrack was excellent, the acting was low-key (and excellent), and it was generally very good.

I finished reading Harry Potter and the Deathly Hallows. It was OK. It occurred to me that HP is the seminal cultural event of our generation. That and The Simpsons, the movie of which I have yet to see. In 30 years' time comedians as-yet-unknown will pontificate on their experience of "growing up with Harry Potter" on a nostalgia-based programme-equivalent on an as-yet-uncreated medium - probably hosted by Jimmy Carr.

I’ve been exploring the world of online webcomics. There are many, and many are surprisingly good. I may have already mentioned Questionable Content. There is also the superlative xkcd, Shortpacked, and a recent find: Dresden Codak.

The artwork and content of Dresden Codak is sublime. The creator occupies a similar headspace to myself: a strong regard for philosophy, singularitarianism, secular humanism, transhumanism, the epistemology of technology, technology in and of itself, and a mild interest in Jungian psychology, especially the Myers-Briggs personality test (I may have mentioned I persistently score INTJ, not that I think it's not a load of rubbish...). The artwork is whimsical and the draughtsmanship is excellent. The storylines are engaging and the characters are great. I look forward to the next instalment.

It is seeing things like this that make me want to forget about university, buy a tablet, Photoshop software, and just make cartoons for the rest of my life. But that's what mid-life crises are for...

Saturday, November 11, 2006

It's All Just Meat

When faced with the breadth and majesty of books like The Singularity is Near, Engines of Creation and Hacking Matter it is difficult not to feel inspired and optimistic about the future. There is a particular brand of techno-progressive ideologue who can weave images in the mind as effectively as any of the technologies they predict.

There is a problem. Ray Kurzweil calls it "the argument from incredulity". The picture of a plentiful future peopled by synthetic superhumans is so compelling that it registers on our internal TGTBTM (Too Good To Be True Meter) as being unreasonably optimistic.

Most rational people who don't live within the bleeding edge of pattern-recognition technologies simply refuse to accept that the clanking, smoking, bad-for-the-environment hulks of machined metal that most people still associate with the word "machine" could ever turn into anything as sublimely effective and versatile as a human being.

It is interesting to look back at how technological changes were predicted to occur and how they actually did occur. Take the widely predicted advent of powered flight in the late Victorian era: we see those wonderfully fanciful drawings of individuals flitting around in one-man ornithopters. The artist that comes to mind is Heath Robinson, although his drawings were probably actually made in the early twentieth century.

One of the great fears of the Victorians, as exemplified by the writing of H.G. Wells, was the concept of "war in the air" (the other great fear being the advent of powered cavalry). That war in the air became a terrible reality in the Blitz on London, within the lifetimes of many who were already adults in the Victorian era, suggests that our current fears of GM viruses, dangerous artificial intelligences and rampant nanotechnology scenarios may not be as far-fetched as we might hope.

I disagree with Kurzweil's sweeping "law of accelerating returns". The most compelling point he makes in support of this is that over the lifetime of the universe, complexity has tended to increase in a manner which strongly resembles an exponential graph.

However when we get down to the nitty-gritty level of technological development over the years and decades of a human lifetime "progress" (an amorphous term in this debate) seems to happen in fits and starts, and is strongly influenced by political and economic factors.

That a young (perhaps 20-year-old) Victorian in 1900, growing up in a world without heavier-than-air flight, could live to see the Apollo programme and the Moon landings (Bertrand Russell (1872-1970) springs to mind) is an extraordinary thing.

Another point to consider before dismissing Kurzweil entirely is the nature of technological change. The counterpart of the technology of aircraft in the early twentieth century is the development of computers in the late twentieth and early twenty-first century. Computers tend to affect things in ways that were difficult to comprehend beforehand. Highly efficient information management and versatile and decentralised communication generate a lot of side-effects that would not be immediately obvious to someone who was not aware of computers.

Spimes, convergence, and the gradual move from centralised manufacturing towards decentralised CAD/CAM machines will allow the effects of abundant, effective computing to move from the world of pure information to the real world the rest of us inhabit.

At the moment our world is still very much run by meat. Humans are required for their versatility and imagination, if for little else. I think that of all Ray Kurzweil's predictions, the most likely to turn out to be correct is his belief that there will be a strong convergence between human and machine intelligence with the result of an (even more) profound change in the way the world works.