Saturday, February 07, 2009

The quantum leap intelligence problem: are posthumans ineffable?

There is an idea prevalent in transhumanism that when posthumans or strong AI finally develop they will be to us as we are to the beasts that perish. They will be so much more intelligent that we will be incapable of understanding them.

They will possess super-science mojo and will live in mathematically optimal blocks of matte computronium, and they will have awesome tech that we puny baselines won't be able to distinguish from magic. They will be ineffable and godlike and we simply won't be able to understand them or their motives.

(Aside: for brevity, throughout the rest of this post I will refer to "them and us" to distinguish baseline humans from posthumans - not because I don't buy into the whole Kurzweil machine-human merger, but simply because it's easier to write about.)

I disagree with this idea of posthuman ineffability.

The idea suggests that there are other ways of being intelligent (i.e. possessing a highly accurate model of the outside universe and of yourself and your fellows, thereby enabling self-reflection, communication, and culture [1]) that sit an entire quantum leap above human intelligence, such that we won't be able to comprehend them or their actions.

Michael Anissimov has written an interesting article making the point that human beings are dumb - that, in fact, we possess only the bare minimum of intelligence required to create the civilization we have now:

Hey, human philosophers — I’ve got some bad news. It turns out that Homo sapiens probably isn’t the qualitatively smartest possible being.

...

How do I know? Well, most other members of the genus Homo had plenty of time to build agricultural civilizations, but they were too unintelligent to get off the ground. Homo sapiens was just barely smart enough to do the trick. And like a self-replicating machine that moves from 99.9% closure to 100% closure, the payoff was big.

I agree with this as far as it goes. All it took to develop complex social technologies like language and complex physical technologies like bows and arrows was a small increment in intellectual capacity.

We made that quantum leap from animal to human around 100,000 years ago; in the intervening period we haven't evolved a great deal (in fact, some say we've stopped evolving altogether).

Ergo we are possessed of the bare minimum intellect required to sustain and develop technological civilization.

Anissimov uses this as an argument in favour of the idea that there are many superior modes of intelligence that we haven't yet developed or encountered, or in his words:

The apparent magnitude of our accomplishments, including those of Einstein, is merely a side-effect of how low our standards are. To another species on another world whose intelligence was crafted in the furnace of selection pressures more intense than ours, quantum mechanics is obvious from the get-go. The only thing funnier about how dumb we are to take so long to figure it out is our self-importance at having finally figured it out.

This is where I disagree with Anissimov. I think that the minuscule quantum leap between pre-human animals and human beings was a one-time event. Improvement is certainly possible, but it is incorrect to claim that there is some qualitatively and quantitatively different perspective on the universe, definitively superior in every measurable dimension to human thought, that would result in beings we are incapable of understanding.


Here's why:

  1. Human knowledge and understanding does not progress wholly through deductive reasoning or pure cognition. In fact a large amount of it comes from what Nassim Nicholas Taleb calls stochastic tinkering, and Eric Beinhocker calls deductive tinkering. Trial, error, and accident have contributed enormously to the development of human knowledge. Presumably a posthuman would make mistakes: otherwise how would it learn? And if it doesn't learn, how does it grow and develop?
  2. I agree with Kevin Kelly that there is a fallacy in the idea of "thinkism": the idea that a mind, completely ignorant of the workings of the physical universe, could consider a few small objects - a rock, a flower, a feather, a model of a galaxy embedded in amber - and from these deduce the workings of the physical universe without any recourse to experiment. There could well be other universes with different physical laws that would generate those same items; without recourse to experiment, how would this mind know which universe it lived in?
  3. We will share the same universe (they may go elsewhere, of course, but the chances are baselines will stay here), so this posthuman entity will be subject to all the usual laws of thermodynamics, conservation of energy, gravity and whatnot. It will therefore need things and do things that are explicable to us. Only by creating a solipsistic alternate-reality computer bizarro world could a posthuman behave in a completely ineffable fashion - and even then it would still be subject to the axioms of a given logical or mathematical environment: NP-complete problems would remain NP-complete.
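
The stochastic tinkering in point 1 can be sketched in a few lines of code. This is a toy illustration of my own (not from Taleb or Beinhocker, and all names in it are made up): blind mutation plus keep-what-works finds a solution with no deductive model of the problem at all - exactly the kind of learning-by-accident I'm claiming any posthuman would also need.

```python
import random

def stochastic_tinker(score, candidate, mutate, steps=10000, rng=None):
    """Blind trial and error: try a random change, keep it only if it
    scores better. No understanding of *why* a change helps is needed."""
    rng = rng or random.Random(0)  # seeded so the tinkering is repeatable
    best, best_score = candidate, score(candidate)
    for _ in range(steps):
        trial = mutate(best, rng)
        trial_score = score(trial)
        if trial_score > best_score:  # keep accidental improvements
            best, best_score = trial, trial_score
    return best

# Toy problem: stumble onto a target word by accident alone.
TARGET = "tools"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def score(s):
    # Number of positions that happen to match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rng):
    # Change one randomly chosen letter to a random letter.
    i = rng.randrange(len(s))
    return s[:i] + rng.choice(ALPHABET) + s[i + 1:]

print(stochastic_tinker(score, "aaaaa", mutate))
```

The tinkerer never "knows" what the target is or why a letter is right; it only keeps lucky accidents, and that is enough.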

Point 1 - "how does a posthuman learn?" - suggests an interesting counterargument: "posthumans develop in a way that doesn't involve learning; they use something different and ineffable."

The problem with this counterargument is that my claim is empirically refutable: a truly ineffable posthuman entity would be observable, so my point can be refuted by the observation of a single posthuman whose motives and actions we cannot understand. Transhumanist thinkers, by contrast, can continue asserting that true posthumans are by definition ineffable until the end of time. I predict that as posthumans emerge and their actions are studied, they will always eventually be found to be explicable by baseline humans (if weird and peculiar - see below).

I agree that there are almost certainly better modes of intelligence, but I disagree with the idea that these modes of intelligence will ever be wholly incomprehensible to baseline humans.

They may be faster, cleverer, wittier, more attractive, stronger, longer-lived, instantiated within superior hardware, and better at poker - but that doesn't mean baseline humans would be incapable of understanding them.

The distinction between what I'm arguing and what Anissimov implies in his article is fairly subtle, and I could be accused of nit-picking, but I think it's important to realise that there is no reason to assume posthumans will be completely and utterly ineffable to us - at least not if they want to survive IRL (which they may not).

That humans have accomplished what we have says more about the power of the evolutionary method - stochastic tinkering combined with occasional deductive reasoning - than it does about the brilliance of human intellect. This is exactly what Anissimov is saying, and it is why I agree with the premise of his article.

But once you possess culture (what Ian Stewart and Jack Cohen call extelligence, or what Richard Dawkins might term "a memetic environment") and a reasonable means of manipulating the universe, it doesn't matter how "smart" you are. Trial and error, learning, and robust heuristics take care of the rest.

I believe that some posthumans will be pretty weird, some may be charismatic, and some may be frightening. But we can get to where they are: they are post-humans, and they took a path we will be able to follow. For this reason, and for the reasons given above, I don't believe posthumans or posthuman civilization will ever be truly ineffable.



1: In fact it could be that this superior intelligence works on a completely different basis from building a highly accurate model of the universe and the self - some other basis that we can't comprehend. Such a non-intelligent "intelligence" would be truly ineffable, and if it actually were superior in every possible way to human intelligence it would completely disprove my point.

Further reading: if you do understand precisely what I'm trying to say, I should mention that it isn't original - Greg Egan argues something similar in the opening chapter of Schild's Ladder.

1 comment:

Dominic Fairfax said...

Hello there,

Came across your blog today. How come nobody commented on it even though it was written in 2009? I have the same problem regarding my blogsite, especially on articles about the technological Singularity, trans and posthumanism. It would seem that not many people are interested in trans and posthumanism which is a shame because it is such a fascinating subject and great for debating. Most people seem to be interested in pretty mundane everyday occurrences which doesn't say an awful lot for humanity in our organic state. Lol. Dominic Fairfax