Tuesday, February 24, 2009

On The Origin of Wealth

After a brief hiatus I have started reading this book again. Everything Taleb has written, sans the epistemic philosophy and general snark, can be seen as a subset of this book.

At the point I've reached in my reading, Beinhocker is discussing how the observation that businesses evolve in the marketplace can be applied to practical strategy. He makes the point that, for the most part, conventional strategy and long-term planning are futile in the face of the complex non-linearities of the marketplace.

Beinhocker grounds his descriptions of economic activity in modern physics itself, rather than attempting to ape physics as many early economists (like Jevons and Walras) did.

Beinhocker defines wealth as useful order. And order is information, and useful information is knowledge. So knowledge is wealth.

Beinhocker says that value is created by:

• Irreversible actions,
• Local reductions of entropy, and
• Fitness.

Fitness is determined by an evolutionary process, the free market, which can be thought of as a knowledge-generating machine.

Beinhocker advocates ideas similar to Alex Harrowell of the Yorkshire Ranter (who I suspect is familiar with the book), stating that we need to build institutions that evolve more effectively.

This sentiment runs counter to many traditional conceptions of Big Man, top-down authoritarianism. Politicians are praised for ignoring evidence and for not adapting to circumstances.

The scientific method and free markets work so well because they create a large number of ideas, subject each idea to testing, select the most successful ones, replicate and recombine these successful ideas, and then repeat the whole process continuously.

The outcome is an increase in knowledge, and hence wealth.
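
Out of curiosity, here's that loop written out as a toy Python program. This is my own sketch, not anything from Beinhocker's book: the "ideas" are arbitrary bit strings and the fitness function is a made-up stand-in for real-world testing, but the create-test-select-recombine-repeat structure is the one described above.

```python
import random

IDEA_LEN = 20    # an "idea" is just a bit string here
POP_SIZE = 100
GENERATIONS = 50

def fitness(idea):
    # Toy stand-in for testing an idea in the real world.
    return sum(idea)

def random_idea():
    return [random.randint(0, 1) for _ in range(IDEA_LEN)]

def recombine(a, b):
    # Recombination: splice two successful ideas at a random point.
    cut = random.randrange(1, IDEA_LEN)
    return a[:cut] + b[cut:]

def mutate(idea, rate=0.02):
    # Occasional copying errors keep generating novelty.
    return [1 - bit if random.random() < rate else bit for bit in idea]

population = [random_idea() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Selection: test every idea and keep the most successful half.
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # Replicate and recombine the winners, then repeat the whole process.
    children = [mutate(recombine(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(max(fitness(idea) for idea in population))
```

No individual step is clever, but fitness climbs generation after generation, which is the sense in which the process is a knowledge-generating machine.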

Bildungsphilister

In The Black Swan Nassim Nicholas Taleb defines a bildungsphilister as a philistine possessed of a fake, cosmetic culture.

Taleb borrows the term from Nietzsche, who used it thus:

A bildungsphilister is someone who reads newspapers and reviews and imagines themselves to be cultured and educated but lacks genuine, introspective erudition.

Bildungsphilisters are prone to dogmatic, clichéd, and unsubtle responses to events and things.

Taleb extends the term to refer to anyone with a high degree of education in a particular non-empirical field who is prone to using buzzwords and to ignoring conflicts between the ideas they promote and the nature of reality.

Timewasting

Since I made a conscious decision to stop reading newspapers à la Taleb, I've found that I spend more time reading blogs, most especially the Yorkshire Ranter, Stumbling and Mumbling, and DSquared.

In fact the amount of time I've freed up by reading fewer newspapers has been entirely consumed by additional blog reading.

My attention span seems to be subject to its own version of Jevons' Paradox. Increases in the efficiency and quality of my text consumption are immediately swallowed up by an overall increase in the amount of text consumed.

I would prefer to spend my time reading substantive literature, both novels and textbooks, rather than blogs. However, because I spend so much of my time sitting in front of an Internet-connected screen I inevitably end up getting distracted by them.

Wednesday, February 18, 2009

Explaining intelligence: complex adaptive systems

I've been trying all day to write a coherent response to Michael Anissimov's recent posts Friendly AI - May I Check Your Ideological Baggage and The Three Singularity Schools, Kurzweil, and Superintelligence.

I finally succeeded in this comment on Ken MacLeod's recent discussion on evolution and AI.

Following is a slightly cleaner version:


My problem with Anissimov's implicit argument is that it rests on a misunderstanding of the nature of technological progress. Anissimov's belief that "once we create a superhuman intelligence all our scientific problems will be solved1" is based on the assumption that intelligence is the only contributory factor in innovation. Anissimov says:

To me, the relevance of a given technology to humanity’s future is largely determined by whether it contributes to the creation of superintelligence or not, and if so, whether it contributes to the creation of friendly or unfriendly superintelligence. The rest is just decoration.

"The rest" being every technological development that will occur between now and birth of our putative god-in-box AI.

Now I'm willing to bet microchips to nanobots that there will be a few interesting innovations, inventions, and scientific breakthroughs over the next few years that aren't directly linked to AI research but still have a large impact on people's lives.

Developing a cure for AIDS, for example.

Anissimov makes these claims concerning the importance of AI research in support of the intelligence explosion school of the technological singularity, a school which can briefly be expressed as:

Intelligence has always been the source of technology. If technology can significantly improve on human intelligence – create minds smarter than the smartest existing humans – then this closes the loop and creates a positive feedback cycle. What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they’d design the next generation of brain-computer interfaces. Intelligence enhancement is a classic tipping point; the smarter you get, the more intelligence you can apply to making yourself even smarter.

The problem with this view of the singularity is that intelligence is not the main driver of innovation.

We know this because the single most dynamic, creative, and successful innovation generator on the surface of this planet famously does not possess intelligence.

Natural selection lacks intelligence and it has produced an extraordinary fecundity of design and invention, not to mention the only version of intelligence currently available to us.

Some transhumanists imagine that simply creating a sufficiently powerful intelligence will solve our problems. We probably could evolve an intelligent being, using the process described in Ken MacLeod's The Star Fraction, just by creating billions of lines of random code (a trillion script-kiddies at a trillion keyboards) and applying an evolutionary de-stupidifying process to it, then rinse, cycle, and repeat until we get something that smokes a pipe, does The Times crossword, and publishes the occasional enlightening monograph.
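
Richard Dawkins' "weasel" program is the classic miniature of exactly this de-stupidifying process, so here's a Python version of it (my own sketch, nothing to do with The Star Fraction itself; the target string stands in for the pipe-smoking crossword-solver):

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = string.ascii_uppercase + " "

def score(text):
    # The de-stupidifying criterion: how many characters match the target.
    return sum(a == b for a, b in zip(text, TARGET))

def mutate(text, rate=0.05):
    # Copying errors supply the novelty.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in text)

current = "".join(random.choice(CHARS) for _ in TARGET)  # random gibberish
generations = 0
while current != TARGET:
    # Breed a litter of mutant copies and keep the fittest, parent included.
    current = max([current] + [mutate(current) for _ in range(100)], key=score)
    generations += 1

print(f"Evolved the target in {generations} generations")
```

Scaling that up from a 28-character string to a pipe-smoking monograph-writer is, of course, the hard part.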

Or we could even do a brute-force molecular-level simulation of a human being, assuming that the various exponentials associated with computing hardware continue ticking over for a few more decades.

But in the meantime why not cut out the AI and go straight to innovation by evolution? Why don't we find some way of generating vast numbers of products and items, testing them under competitive conditions, then recombining and incrementally adjusting and improving them until we have an optimal outcome?

And in fact we already do this - free markets create a huge pool of possible companies and products and the really bad ones are filtered out. Effective companies increase the extent of their control of a finite set of resources at the expense of less effective companies.

Companies don't breed, of course, rather new designs for companies are created by human beings. Most fail. But if one is superior to another it will survive and grow, taking wealth, influence and market share from its competitors.

The same logic applies to products: the idea that products like the iPhone come about as a result of a flash of genius insight from someone like Steve Jobs is incorrect. The iPhone is the result of a long series of tiny, incremental, trial-and-error developments across many scientific and technical fields.

My conclusion: if the singularity means anything then it means that technological change will continue to accelerate in certain areas. And this trend has already been happening for almost two centuries.

The physicist and complexity theorist Murray Gell-Mann would say that human civilization is a complex adaptive system in the same way that the biological evolutionary process and individual human minds are.

Knowledge, science, learning and culture have created an evolutionary process working outside of human minds and outside of biology that results in an ongoing rise in the rate of change of technical progress.

1: Presumably these superhuman entities would be willing to pay us to solve our problems, what with their being superhumanly bored with life, having already simulated and experienced the totality of all possible existences while the lab guys were getting the celebratory muffins.

Tuesday, February 17, 2009

Realisable fusion power

I commented on this fusion-fission hybrid reactor design on Futurismic a while ago.

Researchers at the University of Texas are developing a means of processing spent nuclear fuel using fusion:

The scientists propose destroying the waste using a fusion-fission hybrid reactor, the centerpiece of which is a high power Compact Fusion Neutron Source (CFNS) made possible by a crucial invention.

The CFNS would provide abundant neutrons through fusion to a surrounding fission blanket that uses transuranic waste as nuclear fuel. The fusion-produced neutrons augment the fission reaction, imparting efficiency and stability to the waste incineration process.

The reason this is exciting is that it raises the possibility of a way of developing fusion technologies incrementally and economically. Instead of going all-out to build a nuclear fusion reactor in one step, putative nuclear fusion companies could market their wares as a means of processing the nasty transuranic waste output of conventional fission reactors.

This would provide fusion companies with a source of revenue to develop more advanced magnetic containment methods, and many of the other technical requirements of fusion electricity production.

The problem with fusion technology in the form of the ITER project is that it's a massive, expensive, centralised, all-or-nothing endeavour.

I entirely support ITER, but I'd love to see this fusion-fission hybrid fuel cycle implemented in practice.

Charles Stross makes a similar point about incremental development concerning the LiftPort Group, a consortium that has made the mistake (as Stross sees it) of focusing on the development of an elevator system under the assumption that the revenue-generating fullerene cable technology will appear from somewhere else.

Monday, February 16, 2009

I see their knavery; this is to make an ass of them

I've been doing the rounds of the UK anti-tabloid blogs over the last few days, including The Sun Lies, Enemies of Reason and Alone in the Dark.

The creators of these blogs see it as their duty as good citizens to refute and rejoin every last lie, exaggeration, misrepresentation, canard, fib, falsehood, and untruth that pours from the pages of those grotesques of British public life: The Daily Mail, The Sun, and The News of the World.

This is an entirely necessary task, as the tagline on Mail Watch says:

We are not here to hate the readers of the Daily Mail.

We are here to show them that they are being lied to.

We ask our readers and contributors to keep this in mind.

Indeed: but Daily Mail readers aren't stupid (or at least not as borderline-disabled as you'd have to be to believe what is written in The Mail) so why do they continue to buy a newspaper that is lying to them?

I'm not being naive here: why do people read The Mail? Is there a particularly good crossword? Are the horoscopes particularly accurate? Is the sports coverage terse and reliable?

If the quality of every part of the newspaper is similar to the quality of the headline, news, and editorials then it can't be particularly good.

I have no way of knowing if the editorial line of these newspapers accurately reflects the pre-existing views of the readership or not. If I were to take over any of these newspapers and replace the editorial staff with people with extensive technocratic competence as well as journalistic and writing skills, would the readers be any better off?

I don't know.

But there comes a point when the nastiness and unpleasantness reaches a level where it generates a genuine public hazard. Check out the latest on the Baby P case: an open letter sent by the social services blog Community Care to the editor of The Sun, Rebekah Wade.

Social workers do an incredibly necessary and unpleasant job. And when the tabloids aren't complaining that social workers are failing to do their jobs properly, they're complaining that social workers have decided to take a child into care.

You can imagine an alternate universe where Baby P was taken into care before he died and became the subject of tabloid ire because the child-snatching social services were taking kids away from their parents.

As the letter says:

Informed public opinion is undoubtedly important. Unfortunately, your coverage misinformed your readers. And in considering their views ahead of the facts and the informed opinions of the social workers who struggle with the realities at the frontline everyday, you have risked more children's safety and maybe their lives.

So at what point will something be done? And is it even practical to do anything? Is the generation that reads this trash dying out?

How would you go about altering the sensational and dangerous reporting of British tabloids?

Said with confidence

Friday, February 13, 2009

A more hostile memetic environment

I didn't know who this "right wing Dutch politician" was until all the kerfuffle yesterday.

Foamingly right-wing racists feed off the oxygen of public attention: if HMG genuinely wanted to damage Mr Wotsits' credibility they should simply have ignored him.

This is the problem with not wholeheartedly embracing free speech. Supposed anti-hate laws give a megaphone to idiots by turning them into martyrs.

More open debate and more freedom of speech are a necessary part of what Alex calls a more hostile memetic environment: the more society is exposed to stupid ideas the stronger its immune response to them will be.

Wednesday, February 11, 2009

What is the best way to write blog posts?

The best way to write blog posts is to have a clear and specific point to make, and only to make one point per blog post.

If you want to make multiple points, or write a multifaceted argument on a particular subject, what you are writing becomes what Stephen Fry calls a blessay.

From now on I will try to keep my posts short and interesting, rather than long and rambling.

Problems of specificity

Eliezer Yudkowsky on the virtues of specificity:

When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.

This is a problem that is ever-present online. Very rarely are online debates actual arguments; more often they are bickering contests between people who have completely different ideas about the definitions of words and the scope of the debate.

I was impressed by Yudkowsky's thoughts on the three schools of the singularity because they directly address the oft-ignored problem of what exactly someone means when they talk about the singularity.

This was the problem with PZ Myers' objections to the singularity - he argued against a few elements of Kurzweil's thesis and used these inconsistencies to dismiss the whole thing out of hand.

I agree there are problems with various aspects of Kurzweilian singularitarianism but they need to be addressed clearly and specifically.

Delicious specificity

In a similar vein I've been trying to work out what the best way of organising my Delicious tags is.

There are some tags, like "technology", "economics", "politics", "science", and "toread" which are so wide-ranging they lose all meaning.

However, if my intention is to be able to refer back to a specific article when I need a reference, too much specificity can hinder my search.

Tag bundles help solve the first problem of overarching vagueness by promoting "technology", "economics", "science", and "politics" to a well-earned retirement on the board of directors {?}.

Could it be possible to build a system into Delicious whereby it is possible to say that something vaguely reminds me of something else?

I don't mean a tag like "remindsmeofStephenFry". I mean a way of tagging a document that doesn't explicitly reference Stephen Fry in any way but still reminds me of him. Something like remindsmeof:"StephenFry".

In this context remindsmeof would be a command recognised by the Delicious API to note that the article isn't explicitly about Stephen Fry but nevertheless puts me in mind of him.
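
Here's a rough sketch of how a client could approximate this today without any change to Delicious itself, by smuggling the relation through as a machine-style tag. The posts/add endpoint and its url/description/tags parameters are the v1 API as I understand it; the bookmark helper and the remindsmeof: convention are entirely hypothetical.

```python
import urllib.parse
import urllib.request

API = "https://api.del.icio.us/v1/posts/add"

def bookmark(url, description, topics, reminds_me_of=()):
    # Ordinary tags describe what the page is about; remindsmeof: tags record
    # what it vaguely evokes, even if the page never mentions it.
    tags = list(topics) + [f"remindsmeof:{name}" for name in reminds_me_of]
    params = urllib.parse.urlencode({
        "url": url,
        "description": description,
        "tags": " ".join(tags),  # Delicious tags are space-separated
    })
    # A real call would also need HTTP Basic auth credentials attached.
    return urllib.request.Request(f"{API}?{params}")

request = bookmark(
    "http://example.com/essay-on-wit",
    "An essay that never mentions Stephen Fry but reminds me of him",
    topics=["writing", "comedy"],
    reminds_me_of=["StephenFry"],
)
print(request.full_url)
```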

I suspect that this sort of thing is less useful in practice than I imagine, particularly as much of the utility of Delicious comes from its simplicity and intuitiveness.

And as to Yudkowsky: obviously this respect for the specific can be taken too far; a general knowledge encompassing many fields can also be very valuable.

It's best to learn a lot about a little and a little about a lot.

What I think of the three kinds of technological singularity

Michael Anissimov has a post up on the three kinds of singularity, based on this post by Eliezer Yudkowsky on the three schools of singularity thought. The distinction is relevant because many criticisms of the idea of a technological singularity focus on minor or ancillary effects of it.

As I said before, I don't "believe" in the singularity. I think it's either irrelevant or a fairly trivial observation of technological growth trends. But within the context of the three schools I think I can express my thoughts more coherently.

Here are the three kinds of singularity Anissimov and Yudkowsky describe:

  1. Accelerating change: advances in computer science, AI research, genetics, human augmentation, and biotechnology create a positive feedback of rapid technological growth. As the abilities of our tools improve and our own abilities improve through augmentation (both external in the form of personal AIs and internal in the form of intelligence enhancements and nootropics), technological change accelerates exponentially.

  2. Event horizon: advances in computer science, AI research, genetics, human augmentation, and biotechnology lead to the creation of a greater-than-human intelligence. It is impossible for a less intelligent mind to predict the actions of a more intelligent mind, so it is truly impossible to make any definitive statements about what will happen after a superintelligence (whether pure AI or strongly augmented human) emerges.

  3. Intelligence explosion: advances in computer science, AI research, genetics, human augmentation, biotechnology and neurobiology allow intelligent beings (either human or AI) to alter their own brains so as to improve their own intelligence. As intelligence is the source of all technological development this process will feed back on itself, as the slightly more intelligent beings develop slightly better ways of improving their intelligence, all the while creating amazing spinoff technologies.


Here's what I think of them:

  1. The accelerating change school of the singularity is the one I find most compelling, because it is both logically plausible and reflects the human experience of past technological change. Technologies like electronics combine with the theory of digital computing to produce fast computers, which go on to have a major effect on other areas of development. I think the accelerating change argument is the most coherent and reasonable depiction of a technological singularity.

  2. The event horizon school is flakier. First, I have issues with the idea that greater-than-human intelligences are necessarily unpredictable; second, I don't believe that raw intellectual or cognitive ability is the primary driver of technological progress; and third, we have seen that it is already impossible to accurately predict all the outcomes of any technological development, let alone those of strong AI/posthuman superintelligence.

  3. The intelligence explosion school is flakier still. It is based on the assumption that a sufficiently powerful general intelligence would necessarily be able to comprehend how its own mind works and know how to improve it. I do believe that as knowledge of the workings of the brain increases it will lead to real gains in various intellectual capacities, through nootropics, brain augmentation, or brain simulation on faster substrates. But gaining additional knowledge about the brain doesn't require us to be "smarter."

With reference to the last point: the knowledge of how the brain works will be gained through trial-and-error scientific experimentation and ongoing technological development of brain-scanning technology (itself developed by trial-and-error technological tinkering), surgery (again also developed through the inductive tinkering of the barber surgeons), and neural interface technology (which is being tinkered with as I write).

Anissimov believes that all technological progress must be judged on the basis of how much closer it brings us to the existence of a superintelligent AI, because then the superintelligent AI will take over the business of technological development and create an intelligence explosion.

Anissimov describes himself as a technological determinist; as such he presumably believes social change is caused primarily by technological development. I agree with technological determinism in general but I feel Anissimov's perspective is closer to cognitive determinism: he believes technological (and hence social) change in the future will happen purely as a result of the cognition of AI.

This is at odds with our experience, because the component of scientific and technological development that relies entirely on pure cognition (e.g. Einstein's development of the theory of relativity or Newton's laws of motion) is quite small compared with the component that requires a substantial amount of empirical study (Darwin's theory of evolution) or mechanical tinkering (Faraday's law of induction).

This is a similar criticism to Kevin Kelly's idea of thinkism, where Kelly highlights the fallacy of believing you can study the universe by simulating it, without recourse to experiment to attempt to falsify your belief.

To summarise: although the development of a smarter-than-human AI would be a huge aid to our understanding of the nature of intelligence, consciousness, and the human mind, there is no reason to assume its effects (though unpredictable) will include an intelligence explosion. It may well contribute to an acceleration in technological development, but only as one part of the general acceleration.

Current areas of interest

Here's what I'm up to at the moment:

  • Doing Open University Tutor Marked Assignments in "Data, Computing, and Information" and "Engineering the Future" courses.
  • Learning how to program in Python using Dive Into Python and How to Think Like a Computer Scientist.
  • Applying to go to university.
  • Looking for a job.
  • Creating a lengthy webcomic narrative.
  • Blogging extensively.
  • Reading The Origin of Wealth by Eric Beinhocker.
  • Reading Four Laws that Drive the Universe by Peter Atkins.
  • Reading Feersum Endjinn by Iain M Banks.

Tuesday, February 10, 2009

Singularity and transhumanism

PZ Myers has written an interesting critique of Ray Kurzweil's thoughts on a possible technological singularity:

...not only is the chart an artificial and perhaps even conscious attempt to fit the data to a predetermined conclusion, but what it actually represents is the proximity of the familiar.

We are much more aware of innovations in our current time and environment, and the farther back we look, the blurrier the distinctions get. We may think it's a grand step forward to have these fancy cell phones that don't tie you to a cord coming from the wall, but there was also a time when people thought it was radical to be using this new bow & arrow thingie, instead of the good ol' atlatl.

We just lump that prior event into a "flinging pointy things" category and don't think much of it. When Kurzweil reifies biases that way, he gets garbage, like this graph, out.

Now I do think that human culture has allowed and encouraged greater rates of change than are possible without active, intelligent engagement—but this techno-mystical crap is just kookery, plain and simple, and the rationale is disgracefully bad. One thing I will say for Kurzweil, though, is that he seems to be a first-rate bullshit artist.

...

Kurzweil tosses a bunch of things into a graph, shows a curve that goes upward, and gets all misty-eyed and spiritual over our Bold Future. Some places it's OK, when he's actually looking at something measurable, like processor speed over time.

In other places, where he puts bacteria and monkeys on the Y-axis and pontificates about the future of evolution, it's absurd. I am completely baffled by Kurzweil's popularity, and in particular the respect he gets in some circles, since his claims simply do not hold up to even casually critical examination.

Calling Kurzweil a bullshit artist is unfair: Kurzweil is a genuinely talented inventor and engineer. His beliefs might be a little kooky to some, but I've always found his writing compelling.

Kurzweil is a spiritualist: there's nothing wrong with that. A belief in the power of some imminent superhuman AI to solve all our problems is slightly less absurd than most religious beliefs, and Kurzweil doesn't come across as the type to build a pyramid of skulls in the meantime.

But really: who honestly cares about the singularity?

Building artificial human minds may be possible within my lifetime, or it may not.

There will still be substantial technological change, even if the prime mover remains good old-fashioned human grey matter.

What I find compelling is the suggestion of where ongoing developments in biology, computing, genetics, and human augmentation may take us over the next few decades.

Among these developments are new ways of combining human intelligence with machine intelligence that result in a substantial increase along all dimensions of intellectual development (what Kurzweil calls the law of accelerating returns).

So although the idea of the singularity has become less compelling, Kurzweil's descriptions of posthumans continue to excite me. Partly for the good ol' SFnal sensawunda, and partly because maybe it could happen to me. Maybe I could become a posthuman.

I think the idea and potential reality of self-guided human evolution is compelling in itself. I can take or leave the singularity.

Prof Myers also comments separately on the recent pronouncements on the future of humanity made by Juan Enriquez at the TED conference:

Every species also takes control over its own evolution, in a sense; individuals make choices of all sorts that influence what will happen in the next generation. You could rightly argue that they don't do it with planning and intent, but I have seen nothing that suggests that our attempts to modify our species, low tech and high tech together, are any wiser or better informed about the long-term consequences than those of any rat fighting for an opportunity to mate. We do what we do; don't pretend it's part of a long term plan that is actually prepared for all of the unexpected eventualities.

I agree with Myers up to a point: he's basically saying that developments in biotechnology and the progress of transhumanism won't happen in some big, top-down, organised way, but will rather develop as a series of steps through stochastic tinkering in the lab and (eventually) the marketplace.

The beauty of human progress is it doesn't have any long term plan: we do what we do and we tinker and experiment and find things out.

Juan Enriquez can make all the grand pronouncements about the future of humanity he likes but what he is actually trying to do is raise investment capital for his company Biotechonomy.

And Biotechonomy will pay scientists to tinker and experiment and find things out.

Such is the nature of technological advancement.

Prof Myers ends on a positive note:

Maybe this information age will have as dramatic and as important an effect on humanity as the invention of writing, but even if it does, don't expect a nerd rapture to come of it. Just more cool stuff, and a bigger, shinier, fancier playground for humanity to gambol about in.

Well I certainly agree with that.

Monday, February 09, 2009

The eye of the apple

This development is... interesting.

That Apple's next update to the OSX line of operating systems is to incorporate a geolocation facility is both compelling and worrying.

Obviously having fully networked, location-aware computers is a Good Thing, but the potential is also there for additional harmful tracking and monitoring of individual computers.

Where Apple leads, Microsoft and everyone else are sure to follow. Yet another component for realisable spimes is imminent.

Interesting times.

power to the commentariat: The Inauspicious Er...

I'd go with "lolwut?". It carries impact and genuine scorn, as if you don't really give a damn. "Ahem" is also nice, of course. The two sit at either end of the scorn-condescension axis.

Or you could always try sincerity...

Productivity in shops

One of the problems with working in a second-hand bookshop is that although you spend a large portion of your time sitting in front of a PC on your own, your productivity drops to nearly zero.

Every time someone asks you a question or buys a book it breaks your concentration; then there's all the usual stuff - Twittering, blogging, surfing, and general procrastination.

I'm not complaining - I'm just saying that this is the reason I've managed to get so little work done.

And in a way I am being productive, or at least as productive as misanthropic booksellers ever are...

/Bernard Black

The fifth element

In the Western classical tradition there were believed to be four elements from which everything in the human world was made. They were earth, air, fire, and water.

Every object in the human world was believed to consist of different proportions of these four elements.

Of course in order to combine these elements together into objects there needed to be a fifth element, which Terry Pratchett calls the element of surprise.1

Surprise is a funny thing: an emotional response in a rational context. You think one thing, something changes, and then you think another thing. You think they've forgotten your birthday, everyone jumps out at you, and they haven't forgotten your birthday after all!

Why do we feel surprise? Is it an evolutionary adaptation to finding out that things are other than they are? Does it act as a kind of exclamation mark for the mind to highlight the importance of a change in the universe or is it simply a high-order emergent property of consciousness that serves no really useful purpose?

Do animals feel surprise in the same way as humans? Are there qualitative differences amongst surprises?

Intellectual surprises are the most fun. Over at Overcoming Bias, Robin Hanson asks what would have surprised our hunter-gatherer ancestors about how we view the world today.

The answers boil down to a couple of basic points:
  1. The universe is far bigger and older than expected.
  2. Everything in the universe is actually composed of a surprisingly small set of items operating on the basis of a surprisingly simple set of rules. Complex things emerge from these simple rules (there's a tiny demonstration of this below).

The loss of determinism to quantum physics was also suggested.
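
That second point is easy to demonstrate on any computer. Wolfram's Rule 110 (my example here, not one of Hanson's) updates each cell of a row from nothing more than its own state and those of its two neighbours, yet the output is famously intricate - it's even Turing-complete:

```python
# Rule 110: three cells (left, self, right) form a number from 0-7, and the
# corresponding bit of 110 (binary 01101110) gives the cell's next state.
RULE = 110
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[WIDTH - 2] = 1  # a single live cell; the pattern grows leftwards

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = [(RULE >> ((row[(i - 1) % WIDTH] << 2) |
                     (row[i] << 1) |
                     row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```

Thirty-two rows of structure out of an eight-entry lookup table: complex things from simple rules.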

As an SF fan I'd love to fast forward a few thousand years to see what things we will discover in the future that will surprise me.

There is a strong temptation to fall into the trap of thinking "we've got this science thing basically nailed down except for a few small details."

One of the most compelling things about life is surprise: the act of discovering something is other than you expected and that things are not as they initially seem.

{Incidentally if this post seems a little off it's because I've had difficulty sleeping recently and I'm currently very tired... ;)}

1: Actually the original fifth element was quintessence, which accounts for all the stuff in the "heavenly realm" of the sky that didn't obviously consist of any of the other four.

Saturday, February 07, 2009

The quantum leap intelligence problem: are posthumans ineffable?

There is an idea prevalent in transhumanism that when posthumans or strong AI finally develop they will be to us as we are to the beasts that perish. They will be so much more intelligent that we will be incapable of understanding them.

They will possess super-science mojo and will live in mathematically optimal blocks of computronium, and they will have awesome tech that we puny baselines won't be able to distinguish from magic. They will be ineffable and godlike and we simply won't be able to understand them or their motives.

(Aside: for the sake of brevity, for the remainder of this post I will refer to "them and us" to distinguish posthumans from baseline humans - not because I don't buy into the whole Kurzweil machine-human merger, but just because it's easier to write about.)

I disagree with this idea of posthuman ineffability.

The idea suggests that there are other ways of being intelligent (i.e. possessing a highly accurate model of the outside universe and a highly accurate model of yourself and your fellows, thereby enabling self-reflection, communication, and culture1) that are an entire quantum leap above human intelligence, such that we won't be able to comprehend them or their actions.

Michael Anissimov has written an interesting article on the current state of human intelligence, making the point that human beings are dumb: we possess only the bare minimum of intelligence required to create the civilization we have now. As he puts it:

Hey, human philosophers — I’ve got some bad news. It turns out that Homo sapiens probably isn’t the qualitatively smartest possible being.

...

How do I know? Well, most other members of the genus Homo had plenty of time to build agricultural civilizations, but they were too unintelligent to get off the ground. Homo sapiens was just barely smart enough to do the trick. And like a self-replicating machine that moves from 99.9% closure to 100% closure, the payoff was big.

I agree with this as far as it goes. All it took to develop complex social technologies like language and complex physical technologies like bows and arrows was a small increment in intellectual capacity.

We made that quantum leap from animal to human around 100,000 years ago; in the intervening period we haven't evolved a great deal (in fact, some say we've stopped evolving altogether).

Ergo we are possessed of the bare minimum intellect required to sustain and develop technological civilization.

Anissimov uses this as an argument in favour of the idea that there are many superior modes of intelligence that we haven't yet developed or encountered, or in his words:

The apparent magnitude of our accomplishments, including those of Einstein, is merely a side-effect of how low our standards are. To another species on another world whose intelligence was crafted in the furnace of selection pressures more intense than ours, quantum mechanics is obvious from the get-go. The only thing funnier about how dumb we are to take so long to figure it out is our self-importance at having finally figured it out.

This is where I disagree with Anissimov. I think the minuscule quantum leap between pre-human animals and human beings was a one-time event. Improvement is certainly possible, but it is incorrect to claim that there is some qualitatively and quantitatively different perspective on the universe, definitively superior in every measurable dimension to human thought, that would result in beings we are incapable of understanding.


Here's why:

  1. Human knowledge and understanding does not progress wholly through deductive reasoning or pure cognition. In fact a large amount of human knowledge and understanding comes from what Nassim Nicholas Taleb calls stochastic tinkering and Eric Beinhocker calls deductive tinkering. Trial and error and accident have contributed enormously to the development of human knowledge. Presumably a posthuman would make mistakes: otherwise how would it learn? And if it doesn't learn, how does it grow and develop?
  2. I agree with Kevin Kelly that there is a fallacy in the idea of "thinkism". Thinkism is the idea that it is possible for a mind, completely ignorant of the workings of the physical universe, to consider a few small objects - a rock, a flower, a feather, and a model of a galaxy embedded in amber - and then deduce the workings of the physical universe without any recourse to experiment. It could well be that there are other universes with different physical laws that could generate those same items; without recourse to experiment, how would this mind know which universe it lived in?
  3. We will share the same universe (they may go elsewhere, of course, but the chances are baselines will stay here), so a posthuman entity will be subject to all the usual laws of entropy, conservation of energy, gravity and whatnot. As such it will need things and do things that are explicable to us. Only by creating a solipsistic alternate-reality computer bizarro world could a posthuman behave in a completely ineffable fashion: and even then it would still be subject to the axioms of a given logical or mathematical environment. NP-complete problems would remain NP-complete.

Looking at point 1, "how does a posthuman learn?", suggests an interesting counterargument: "posthumans develop in a way that doesn't involve learning; they use something different and ineffable."

The problem with this counterargument is that what I'm arguing is empirically refutable: my point would be disproved by the observation of a single posthuman entity whose motives and actions we cannot understand. Transhumanist thinkers, by contrast, can continue asserting that true posthumans are by definition ineffable until the end of time. I predict that as posthumans emerge and their actions are studied, they will always eventually be found to be explicable by baseline humans (if weird and peculiar - see below).

I agree that there are almost certainly better modes of intelligence, but I disagree with the idea that these modes of intelligence will ever be wholly incomprehensible to baseline humans.

They may be faster, cleverer, wittier, more attractive, stronger, longer-lived, instantiated within superior hardware, and better at poker - but that doesn't mean baseline humans would be incapable of understanding them.

The distinction between what I'm arguing and what Anissimov implies in his article is fairly subtle, and I could be accused of nit-picking. But I think it's important to realise that there is no reason to assume posthumans will be completely and utterly ineffable to us, at least not if they want to survive IRL (which they may not).

That humans have accomplished what we have says more about the power of the evolutionary methods of stochastic tinkering combined with occasional deductive reasoning than it does about the brilliance of human intellect: and this is exactly what Anissimov is saying and why I agree with the premise of his article.

But once you possess culture (what Ian Stewart and Jack Cohen call extelligence, or what Richard Dawkins might term "a memetic environment") and a reasonable means of manipulating the universe, it doesn't matter how "smart" you are. Trial and error and learning and robust heuristics take care of the rest.

I believe that some posthumans will be pretty weird, some may be charismatic, and some may be frightening. But we can get to where they are: they are post-humans, and they took a path we will be able to follow. Because of this, and for the reasons given above, I don't believe posthumans or posthuman civilization will ever be truly ineffable.



1: In fact it could be that this superior intelligence works on a completely different basis from creating a highly accurate model of the universe and the self - some other basis that we can't comprehend. This non-intelligent "intelligence" would be truly ineffable, and would completely disprove my point if it actually were superior in every possible way to human intelligence.

Further reading: if you do understand precisely what I'm trying to say, I should mention that it isn't original; Greg Egan argues something similar in the opening chapter of Schild's Ladder.

Tuesday, February 03, 2009

Interviews and university

Well. Here I am.

In approximately ten hours I will be on campus at Warwick University, either being interviewed or trying to find the bar.

And I can't sleep.

It isn't because I'm nervous. I just have a serious problem with sleep.

At about ten in the evening I enter a stage of heightened wakefulness and distraction. If left to their own devices my sleep patterns shift later and later until I go to sleep at around 6:00 and wake up at around 14:00.