Monday, June 26, 2006

Nuclear Power and Techno-Progressive Ideology

I think I ought to clarify my position regarding “ideologies”. As I’ve said many times before, I personally subscribe to a liberal philosophy: a progressive, democratic, semi-libertarian (in that I’m for civil liberties but not entirely against the existence of the state), transhumanist, humanist, secular, techno-progressive, enlightened, and utilitarian ideology, with simple man’s karma (a la My Name is Earl), common sense, good humour, and the meme of enlightened self-interest thrown in for good measure. However, I do not believe that ideology should get in the way of politics. The best sort of politician, in my opinion, is one who will judge each case on its particular merits (in the context of legislation) and make a decision based on their personal feelings and rational conclusions.

The problem (at least in the past) was that politicians and statesmen would make decisions on purely ideological grounds regardless of the fact that ideology had no place in the debate. But that’s the problem with ideology; it seeps insidiously into every corner of life until it utterly consumes you in every conceivable way.

Let’s take the example of nuclear power. There is currently a debate (well no, actually, TB has decided [somewhat pre-emptively] that new nuclear reactors Are A Good Thing) as to whether a new generation of nuclear reactors should be built in this country to help us reach our climate-change-stopping targets.

As I mentioned before, as a techno-progressive you’d expect me to be enthusiastic about a renewal of interest in nuclear fission generation, but from a purely economic viewpoint the numbers just don’t make sense: build a nuclear reactor, plus infrastructure, and you have to spend in the region of two billion pounds. Building a hi-tech, high-temperature, coal-fired power station would require only about 200 million pounds. So what about the greenhouse gases? Well, if we pump the CO2 underground (carbon sequestration) then we won’t have to worry about the carbon emissions. Nuclear, meanwhile, still leaves us with waste (there are methods to shorten the half-lives of by-products, rendering them safe much sooner, and the possibility of using thorium as a fuel, but these still have the drawback of waste to store and process, and more expensive infrastructure), and it remains an order of magnitude more expensive.
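To make the back-of-envelope sum explicit (using only the rough figures quoted above, which are this post's assumptions rather than engineering estimates), the capital-cost gap works out like this:

```python
# Illustrative only: these are the rough 2006 figures quoted in this post,
# not engineering estimates. Real costs depend on capacity, fuel, carbon
# pricing, decommissioning and waste handling, none of which are modelled.
nuclear_capex_gbp = 2_000_000_000  # new reactor plus infrastructure (~2 billion pounds)
coal_capex_gbp = 200_000_000       # hi-tech coal-fired station (~200 million pounds)

ratio = nuclear_capex_gbp / coal_capex_gbp
print(f"Nuclear costs roughly {ratio:.0f}x as much up front as coal")  # 10x
```

Of course, a fair comparison would also fold in fuel, running costs, decommissioning and carbon pricing, which is exactly where case-by-case judgement, rather than ideology, has to come in.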

My point about ideology is that I shouldn’t let my techno-progressive ideology blind me to the fact that in our particular case, on the British Isles, we have huge reserves of coal remaining and the space (for example in the former gas and oil deposits under the North Sea) for carbon sequestration, and as such it would be foolish to go blundering into a nuclear quagmire. Another point worth mentioning is that nuclear is not renewable. We’ll run out of uranium some day, just as we’re running out of fossil fuels, and all our energy plans will be stop-gap solutions until we develop miraculous solar power or fusion reactors. A final point worth concentrating on is saving energy, and generally being more efficient in our use of it.

Being techno-progressive doesn’t mean advocating generation after generation of fuel-hungry, juice-guzzling gadgets; it means finding a subtler and more practical solution to a problem, not simply rejecting “technology” and declaring it to be the cause of all our problems.

Wednesday, May 17, 2006

Careers and Sterling

Sorry I’ve been absent for so long: nine days, in fact. But in those nine days sixty-three thousand people have died of AIDS, and I have revised fifty pages’ worth of maths textbook, read more pages of text than your average medieval scholar would have done in a decade, suffered an overreaction on the part of my body to a totally harmless tree pollen, discovered a competition that offers a top cash prize of £750 for writing about the future of technology of all things, decided I love the music of “Alabama 3”, watched the sublime "The Godfather", bought a copy of Charles Stross’ The Atrocity Archives, considered that a masters in chemical engineering, followed by a post-graduate course in nanotechnology from UCL (or Cambridge [a man can dream...] or Manchester), would be quite an acceptable decision career-wise, and fallen in love with the new MacBook. I want one. It is perfection – and it can run Windows! Now the last bastion of my resistance to buying a Mac is finally crumbling! I can have all the benefits, and not have to worry about compatibility issues.

Anyway, I’ve just popped in to apologise for not updating my blog (who the hell am I apologising to?) and to mention that I’ve found a number of interesting articles by Bruce Sterling (whose "Schismatrix Plus" I need to read). Here is a typically interesting excerpt:

So where are the human limits? What are we supposed to do with these peculiar twin minorities: the tiny minority who can program from the silicon up and who genuinely understand computation, and the other cyber-dyslexic community who won't have any truck with computers under any circumstances? If I were a eugenicist, I would suggest that maybe we ought to interbreed these populations for the safety of the rest of society. But that's just a conceit.

More practically, I would suggest instead that the problem itself is a phantom problem. Human intellectual limits, although very much there, don't really matter all that much. There are, what, 5.7 billion people on the planet right now? Let's assume that one percent of the population can really hack. One percent of that figure would be 57 million people. This is a huge pool of creative talent, it must be as big as the entire population of Europe at the height of the Renaissance. If we can't coax a few decent multimedia programs out of that group, I would suggest that perhaps the fault lies elsewhere.

And if that makes the market smaller, so what? We can just do what Microsoft does. Instead of selling an easy workable program to a vast popular audience of 20 million people, we can sell a difficult, treacherous program to an elite audience of two million people, only we'll sell them the very same program ten times over in different upgrades.

I have often heard people in computing fretting over the purported fact that their mental inferiors can't keep up with the deep technical skills needed for computation. It's odd that I've never heard this said about television (except for VCRs, that is). I've only rarely heard it said about automobiles. Most of us can't fix or understand our televisions, and we can't fix or understand our automobiles either, but this vast ignorance about television and automobiles doesn't seem to bother anybody. We'll let most anybody get behind the wheel of a two-ton vehicle which can travel a hundred miles an hour and kill a dozen people in the blink of an eye. We never demand that they learn anything about the chemistry of oil refining, or about internal combustion. We just let 'em drive the car, and if they're no good at it and kill somebody, well, that's just tough luck!

I think it might be possible to design a computer that's as easy to drive as an automobile. Where you just rent one and sit in the seat and turn the key and get going, without getting enmeshed in the barbed wire of extensions and shells and bell-and-whistle hotkeys and all the rest of it.

I think the extremes of complexity in the human computer interface may be a passing phase. You shouldn't have to become a portly UNIX freak in order to manage a computer. I suspect, in fact, that it ought to be possible to design computers simple enough for animals to use. After all, do you really need a cellphone? Your cat, that's who needs a cellphone. Who knows where your cat is right now, anyway? Your cat needs a beeper. We already have gophers and lynxes on the Internet; on the Internet nobody knows you're a dog; is there any real technical reason why can't I put my dog on the Internet? I suspect this might be genuinely possible.

I suspect the ultimate Internet link is going to look and act a lot like a make-up case. You won't see any command-line prompts when you use it. It will be a social device, a social-relations technology just like a make-up case is. When you pull it out of your purse and open it and talk face to face to your friends on the other side of the planet, you will feel just about the same kind of glamorous intimate pleasure you feel when you are pulling out and using your compact mirror. The engineers will no longer be in control. Or at least, the engineers won't be trying to one-up one another by building and selling each other macho power-user desktop dragsters full of smoke and burnt rubber and oil fumes.

These words were spoken in October 1994 at the American Center for Design “Living Surfaces” Conference in San Francisco. The talk is extraordinarily prescient in what (as I read it) it says about intuitive graphical user interfaces and the move towards usability.

It also touches on a pet peeve of mine: people today are perfectly capable of operating cellphones, PCs, MacBooks etc. without having the faintest idea of how the device they are using actually works. I can’t program “from the silicon up”, but I have no problem using a PC. My last rambling rant on this topic had me move from finding this state of affairs reprehensible to finding it a monument to humanity’s interdependency. Still, I think people should try to be slightly more technically oriented. Maybe we should all try to be like shands (plural shandi? – go look it up: a reference to Strata by Terry Pratchett, one of his less critically acclaimed but nevertheless superb books), who simultaneously have several different professions. Below is a list of different profession-collections:

• Chemical engineer, meat animal herder, lift-attendant and bounty hunter.
• Solid-state electrician, graphic novelist, taxi driver and cartwheel artisan.
• Film critic, private detective, cat burglar and insurance broker.
• Pharmacist, software technician and priest.
• Religious scholar, pro boxer, lithographer and architect.

It is clear that many people living today will live much longer than you (for the purposes of this conversation, you are “Joe Everyman”) might expect. You might retrain and follow a dozen totally different career paths over your active life, constantly hoping for the event-horizon of retirement to suck you down into the placid singularity of a halting state.
Speaking of which, I hope The Atrocity Archives is as good as Stross’ other books.

Monday, May 08, 2006

The Ultimate Gadget

For some time now I’ve been ruminating on “the ultimate gadget” (that’s right, you heard me – ruminating). The problem is that every time I pursue this train of thought I commit singularity and have to reboot. The thought process generally goes something like: “foldable, hi-definition, 3D electronic paper with eVisors and a neural interface…!!!” Then singularity hits, and a mere pre-transcendence Homo sapiens such as myself cannot know aught of the wonders available to the vast and cool and unsympathetic posthuman intellects of the beyond…
So in an attempt to create a useful thought experiment, here are some guidelines:

• All the technology used in the gadget should be feasible by the standards of 2006.
• By “gadget”, I mean a handheld device. Clearly cybernetic cyborg-implants are disallowed by the first point, and anything I can’t carry around in a jeans pocket is also out.

The gadget would have the form factor of an Orange SPV M5000, except it wouldn’t have the annoying asymmetry of the aforesaid device, and it should be slightly longer and thinner, more like the Sony Ericsson P910. The dimensions would probably be about 12 x 7 x 1.5 cm. In this respect the gadget would resemble a mini-laptop/tablet, with the screen and a compact QWERTY keyboard on the interior of the clamshell. However, the screen should be touch-sensitive when the device is in “PDA mode/camera mode”. The supplied stylus should of course have a little cap under which lurks a black ballpoint.

The gadget would have a camera similar to that of the imminent Sony Ericsson K790i: 3.2 MP, a substantial lens, autofocus, macro modes, and a xenon bulb for flash photography. The flash should also have a lower-power setting that allows it to be used as a torch (a very useful feature of the K750i). There would of course be a cover to protect the lens – and the cover should not cover the flash. The gadget would have a smaller camera inside the clamshell for video conferencing. It is worth pointing out that beyond about 4 megapixels, factors other than the pixel count start to affect the quality of your photographs much more. I think building an optical zoom into the device is reasonable. The camera and cover should be flush with the surface of the back of the device, and of a similar material, so that it is not irritating or uncomfortable to hold.
The gadget would come with a full suite of entirely open-source, free and tech-hippy-friendly software including all the office basics, games, a web browser, basic photo and picture editors, sound recorders, media playing software etc…
Speakers and microphone (both for mobile-phone capability and note taking) would have to be included, of course. External function buttons could include pause/play, stop, back-forward buttons, a camera button and a voice record button, as well as the all important “hold” slider switch.

Connectivity options would include: a USB/FireWire data cable (presumably bundled in the box), a standard audio jack for speakers and earphones, an IrDA port located near the top end of the device (mainly for use as a remote control for TVs, VCRs, DVD players, PVRs etc.), Bluetooth, wireless 802.11a/b/g, EDGE, GSM, UMTS and so on. The device should be able to function as a removable USB pen-drive with the cable. Included in the box should be adapters that allow the device to communicate with phone outlets and LAN cables.
One feature I’d like to include (but sadly cannot, for feasibility’s sake) and would like to see on mobile devices generally is a cross-spectrum sensor that allows the device to communicate in huge numbers of different ways across all conceivable areas of the electromagnetic spectrum. Updates and information about protocols could be downloaded and put in place through software mechanisms. This is a bit like black-box technology, where you simply have a lump of stuff that does something depending on what you do to it. I suspect that this is one technology that will emerge from widespread nanotechnology and the invention of computronium, but I should really leave it out of this experiment as such a device isn’t quite here yet. For an example of the sort of universal gadgetry I mean, read the sublime Artemis Fowl and the Eternity Code by Eoin Colfer.

Processing power and RAM should be sufficient to play games to a standard similar to that of, oh, say, an old Game Boy Advance; after all, if you could play PSP games on PDAs there would be no need for a device designed specifically for gaming – right?
The device should come with a hard disk of a type similar to that found in the current generation of iPods. Flash memory is still at the 2–4 GB stage, and I’d use flash over a vulnerable hard disk any day, but the disk would of course be cushioned, buffered and have a facility similar to that of PowerBooks, whereby an on-board motion sensor parks the read/write head when a sudden motion is detected. The top-end iPods currently have 60 GB hard disks, so I’ll go for one of those.

Among the many ebooks that should be included with the device are a static version of Wikipedia, Brewer’s Dictionary of Phrase and Fable, The CIA World Factbook, the IMDb movie database, the US Army Survival Manual, The Bible, the Koran, The Complete Sherlock Holmes, the Encyclopedia of Mythology, the FOLDOC dictionary of computing, a Buddhist dictionary, BBC Health Medical Notes, and How Stuff Works (all currently available for Palm, Windows handheld and Symbian users in TomeRaider format – now you know why I need the 60 GB hard drive).

It would be nice to be able to receive TV on a mobile device (in a ho-hum, should be done sort of way) and to be able to view all the freeview digital channels and record them and transfer them to a PC.

Thinking about it, I have no strong preference as to exact size and form. Something like the P series from Sony Ericsson or the old Nokia we-swear-it's-not-a-brick would be fine, although the Orange SPV M5000 is almost perfect.

If you happen to be sitting in the park and an idea strikes for an absolutely killer SF short about cyborgised spiders infiltrating an extreme-extropian libertarian enclave in a decrepit habitat in orbit around Pluto, and you just have to write it down, fast, ’cause the words are right there in your head… then an external keyboard would be good, something like those supplied by Palm and Nokia, amongst others. Ideally the keyboard would be foldable and not much larger than the device itself, so that you could whip one out of one interior jacket pocket and the other out of another.

GPS navigation facilities are obviously a must-have, and while we're talking about satellites and whatnot, why not have a satellite phone built in as well?

There is no doubt that such a device is entirely feasible with today’s technology, but usually when you try to put too much functionality into a device it becomes a jack of all trades and a master of none. I think (hope) that my perfect gadget (or something very like it) will be on the market before 2010, and shortly after that there will be all the eVisors and singularity-gifted consumer goods a presingularity liberal capitalist could wish for.

Tuesday, April 25, 2006

Transhumanism

In English today we discussed ICT and language, and the point came up that we are all interdependent when it comes to technology. Even someone who knows how to program a computer doesn’t necessarily know how the PC actually works. Computer technology and its close affiliate, information technology, have spawned a whole raft of ideas and concepts that would be unfamiliar to anyone from before about the 1920s, and with the rapid development in these fields, techniques and concepts can rapidly become irrelevant and whole new problems can crop up overnight (being flippant here, but you know what I mean).

I’m prepared to bet £300 that if you took any group of 100 people from a progressive Western democracy and dumped them on a completely untamed and uncivilized planet (as happens in Strata) with only the most basic tools (i.e. axe, hammer, shears – and this is cheating a bit anyway), most of them would be dead within a month, and those that survived probably wouldn’t continue to do so for very long (in Iron Sunrise, however, Charles Stross’ posthuman AI-run-amok deposits people on barely terraformed planets with cornucopia engines and suchlike).

The point is that you can’t sit down in a jungle and make a 3 GHz Intel Celeron processor chip using “the knowledge of the woods” or some garbage. Every achievement in every field of human endeavour is the top of a massive, broad pyramid of ideas based on things like subsistence agriculture.

In his recent book Collapse: How Societies Choose to Fail or Succeed, Jared Diamond (such a brilliant name!) discusses how social trends during times of economic success affect the rise and fall of civilizations (there’s a whole lot more as well, and it looks to be a fascinating book…). It is a point of considerable debate in some quarters exactly how durable and resilient our society actually is.

What we can justifiably assume from human nature is that when things are going well, people will tend to assume they will stay that way forever. Do I need to back this assertion up? Probably, but I can’t be bothered. Look at the .com bubble. Look at the whole stock market, for Chrissakes. People expect things to stay pretty much the same as they always have done, and act surprised when things change. As Ray Kurzweil explains in The Singularity is Near, people have difficulty absorbing trends that take more than five years or so to become apparent, because our resolution of events tends towards the second–minute–hour level, with perhaps the top end of perception being in the region of a year (speculation, people, speculation).

www.cyborgdemocracy.net is a blog and website that advocates democratic transhumanism. Go see them for their explanation of what a transhumanist is, what a democratic transhumanist is, and what a libertarian transhumanist (or extropian) is. The origins of the extropian movement are complex (i.e. I can’t be bothered to describe them in detail and would rather make a crass generalisation from limited knowledge, which is much more fun than being professionally journalistic ;-)). As far as I can work out, a sentiment among Americans, represented by the ideals of libertarianism (which advocates limiting the control of the state over the individual, rampant free-market capitalism, a night-watchman state, and the abolition of hierarchy), collided with the creation of the Internet and simultaneous progress in fields like AI and biotechnology (e.g. the Human Genome Project), and the imminent arrival of nanotechnology.

Libertarians were allowed to run rampant in Britain in Thomas Carlyle’s day, between about 1830 and 1860, and the night-watchman state instituted for much of that time simply didn’t work. Free markets were revealed to be just as capable of crushing people underfoot as dictatorships and kingdoms. The wealthy couldn’t be relied upon to be charitable enough to help those unable to help themselves, and the interdependence of people in society was revealed: no one could pull anywhere unless they pulled together. As a result of the backlash against this state of affairs you get Karl Marx, Engels, socialism and a fifty-year war that almost destroyed the planet. However, social reforms like the NHS in Britain, universal suffrage, and the New Deal in the US resulted in the wonderfully progressive, generally cheerful and happy-go-lucky tradition of liberal democracy that we (i.e. someone who owns or has access to a PC and an Internet connection, is housed, and has time to spare to read weblogs – OK: that’s someone in one of the aforementioned progressive liberal democracies, or China…) enjoy living with today.

Many libertarians, however, felt that all this social reform meant that more and more power was being taken back by the state (e.g. ID cards, control orders, evil instruments of the unholy like mobile telephone masts and speed cameras etc) and sometimes expressed themselves rather badly.

The problem was that the libertarian system had been shown not to work. People needed the state for things like welfare, healthcare, education, crime fighting, and defence. Advances in technology seemed to offer a solution, though: if you can develop an anti-aging technology, it would remove much of the need for a public healthcare organisation at a stroke (lol), and with viable nanotechnology-based “autodocs” we could…

And on and on and on… The point is that extropians imagine that “magic technology” can cure all of society’s ills (including, apparently, the need for a state). I don’t buy this personally. It’s a utopian dream for sure, and maybe one day we could all live long, happy and fulfilling lives by virtue of our wondrous sufficiently advanced technology. But first: who makes the nanotech? Second: who makes the AI? Third: what about the commons? Fourth: anarchy doesn’t work, sunshine (we had anarchy way back in 4000 BC, and now we don’t – I wonder why?).

You could argue that the last two solve themselves: commons are replaced by magic technology, real estate by constant expansion out into space in home-grown diamond starships, courts and justice systems can be bought and sold (mmm, really?), and anyway we’ll-all-be-uplifted-superintelligences-living-in-a-computer-so-we-won’t-disagree-about-anything-anyway (more cynical noises – we’ll leave out the posthumans ’cause let’s face it: we don’t know nuffink about ’em).

The point is that although technology can empower the individual (I’m writing a blog here, after all…), it can’t (yet) solve every damn problem we have. I think we still need a democratic state structure (possibly modelled on the Swiss version of nearly direct democracy, rather than our own FPTP representative version – direct democracy is an interesting topic, and one that could be pursued on a much larger basis now that communications technology and connectivity are what they are in the developed world).

If the admirable aims of transhumanists are to be realised, I think it will only be through cooperation and integration into a broader, global society. I don’t mean globalisation; I mean something more like this. The fact is that even if I had a PhD in “the application of artificial stem cells using Darwinian algorithms to solve problems in a surgical context”, I would probably not be able to design the chips in the computer that ran the algorithm, or have enough expertise in polymer chemistry to make the fabric of the swivel chair at my workstation, or know how to hunt a seal, for that matter.

Twenty-odd years ago a certain British politician was purported to have said that there “is no such thing as society”, and today we are still living with the consequences of her philosophy. Technology can’t replace society, because technology is society, and society is people. Ergo, technology is people.

Society is as much a technology as anything else, as is any structure of the mind, like language, which helped our genetic ancestors reach our level of intelligence. Evolution is a dynamic process, after all, even if it is way outside our aforementioned perception of change, and it would seem that the advent of tool use, language and fire all contributed to what we are. As we continue this ongoing process it accelerates, as we adopt more and more effective methods of amplifying our actions and our ideas. Thinkers like Ray Kurzweil reckon that once we’ve got this AI thing cracked, the next stage of the evolution of intelligence will be attached to the ever-increasing rate of improvement in ordered information (as in: information with a purpose – the better the order, the better it can fulfil its purpose, which can be sentient, intelligent life).

It is frightening to consider how helpless I would be as an individual in an unknown and hostile environment, having to fend for myself against nature and all her ills. But fortunately for me I live within a web of devices, both large and small, tangible and intangible, that ensure I am well fed, looked after, educated and reasonably happy. The transhumanists are right: technology will allow us to transcend the poorer aspects of our nature and become better than we are, we know this because to a large extent it already has.

Sunday, April 23, 2006

Second Person Narratives

At the moment I’m writing an essay entitled Language and Occupation as part of a dry run for my AS-level revision (translation: I was looking for somewhere to start revising English, which is impossible, so I defaulted to doing my homework…), and just before I got to the interesting bit about J. L. Austin’s speech acts I got bored and wandered off into blogland.

Charles Stross’ latest piece clubbed me over the eyes – he was talking about something very close to the rambling discussion of speech acts I was trying to bolt together. He discusses how the first, second and third persons are used (or aren’t used) in fiction. Generally we write about something in the third person (he/she/it). If you take it that reading is the closest thing to telepathy we’ve managed to develop, Stross describes the third-person version of telepathy as

…an omniscient telepathy cam weaving among the actors like an attention-deficient mosquito, landing to suck a moment's thought here then buzzing across the room to slurp a transient meme there…

Generally speaking, the second person is rarely used in fiction (one exception that comes to mind, because I reread it recently, is Strata by Terry Pratchett, in which Pratchett uses second-person imperatives: “Consider Kin Arad, now inspecting outline designs for the TY-archipelago…”). Stross proposes to do just that, though. He explores what the concept of blogging might have meant for someone ten or more years ago:

Obviously, you know what it feels like to read a blog. But cast your mind back in time ten years —, no, make that fifteen — to a time before you encountered the net and before blogs had been invented. Try to imagine yourself as an aspiring SF writer who's read about this internet thingy, and about some experimental hypertext tools (from Xanadu and Hypercard to Hyper-G by way of Gopherspace and WAIS, with a side-order of this funny compromise thing some guy with a double-barreled name is tinkering with at CERN). As this aspiring SF writer, you've decided to write a novel set in 2006, a novel in which this internet thingy your tech-head friends keep gassing about in the pub is everywhere. And you start trying to work out just what that might mean. You've heard about email, and that intuitively makes sense. You've possibly heard of AOL or CompuServe or CIX and, if you move in academic circles, of USENET, and the idea of a bunch of people talking on a multi-user bulletin board isn't that strange. And there'll be some kind of easy-to-use hypertext system that lets ordinary folks add data to it.

But what are the ordinary folks going to add to this hypothetical global hypertext thing? What are they going to talk about? How are they going to use it and what's it going to feel like?

Asking these questions, your traditional, instinctive, bad-SF approach is to explain everything in nauseating detail: "Johnny sat down at the HyperTerminal and typed in his password. The computer verified his identity and let him in, throwing up a picture of the InterWeb. Johnny thought for a moment: where do I want to go today? The answer was obvious. Like all other communications media, the InterWeb had only really taken off once it was adopted by the porn industry as a replacement for Betamax tapes; now, finding anything useful in it was like walking down a strip mall full of flashing red neon signs and questionable window displays. But Johnny wanted to research his dissertation topic. So he typed in the address locator of Google, a popular information clearinghouse that scanned the rest of the InterWeb daily and indexed it, allowing keyword searches." (And so on.)

A more sophisticated approach that increasingly became the norm in more literary SF over the past couple of decades is to show. "Johnny picked up his laptop and logged on. Windows opened on its desktop, pop-up ads flashing garish offers of hardcore porn at him. Annoyed, he brought up a browser and headed to a search site to continue researching his dissertation." This mode is a whole lot less clunky, but it's got a crippling handicap: the author has to make the leap from technical description ("typing his password into the HyperTerminal's keyboard") to action ("he logged in") in a manner that is comprehensible to the reader. Because, let's face it, if you've never seen a computer the second version of this story is a whole lot less accessible than the first. Early SF was seen by its authors and their self-ghettoized readers as a didactic, educational medium exposing them to new ideas about technology and the way we might live. You could show the first version to a 1930s reader and they'd be able to follow the plot: the second remix is incomprehensible, because the referents for the action simply aren't there ("laptop", "logged on", "browser", "search site").

Sorry for pasting that great slab of text down there, but it is an excellent comment on how SF tends to be written, especially the long, protracted infodumps that are necessary to introduce the unfamiliar reader to a new concept.

One of the best authors I know at avoiding this mistake is Ken MacLeod. In his early books (The Stone Canal and The Star Fraction, from the Fall Revolution sequence) MacLeod casually drops references both to advanced technology and to events in real history – the result is books that merit rereading after a couple of months, or several hours of googling and Wikipedia exploring; their density and originality are extraordinary.

Well I gotta get back to my essay. Such is life.

Tuesday, April 18, 2006

Wikipedia

Comparing Wikipedia to the Encyclopaedia Britannica and other much more established works, as well as to other online encyclopaedias (encyclopaediae?), seems to be in vogue at the moment, with reviews and articles in Focus Magazine, The Guardian, The Gadget Show, and The Register.

Speaking as a student and a regular user of Wikipedia I can name three qualities that make it extremely worthwhile compared to other publications:

It is free.

It is detailed.

It is comprehensive and massive.

Although there have been many criticisms of the project, and recently a number of important people have left the organisation, I haven’t seen much evidence of the graffiti and vandalism that have earned Wikipedia criticism. I can only assume that this is because of the tireless efforts of many nameless volunteers who constantly monitor and repair such instances.

I do sometimes come across labels like “The Neutrality of this Article is Disputed”, which is quite reassuring: usually authoritative publications will not concede any bias or possibility of prejudice on the part of their writers, and it is refreshing to find a source that admits its own flaws.

However, like all utopian dreams, this one shows quite a few blemishes when the project is put into practice. It is sad that people are vandalising and abusing Wikipedia. Jason Scott explains why Wikipedia has started to judder recently here at archive.org.

Fortunately, although Wikipedia may be consigned to some kind of back seat, there is a future for the model: Wikipedia co-founder Larry Sanger has set up The Digital Universe:

"What makes people enthusiastic about contributing to Wikipedia is not that anyone can participate, it's that it's easy for the people who do to participate, and that they get instant feedback from the community," he says. "Those features that make Wikipedia compelling can be replicated in a system that is managed by experts. The whole idea is to teach experts the Wikipedia magic."

His conception seems to be pretty close to an ideal for an online encyclopaedia, carrying forward the best aspects of Wikipedia but putting it into the hands of people “qualified” to do the job. It seems a bit sad that the anarchic nature of Wikipedia hasn’t played out as well as it might have done, and all because of the most irritating and immature elements of society (e.g. Radio One…), but I’m reassured that there will be a replacement if Wikipedia does go the way of the dodo, which it might still manage to avoid.

Friday, April 14, 2006

Some Comments on Ray Kurzweil's Recent Book

Recently I read a fabulous book called The Singularity is Near: When Humans Transcend Biology by someone called Ray Kurzweil. I would advise everyone to read this book. It is very good. It contains many good ideas, and the central idea is absolutely extraordinary.

I have discovered an example of Kurzweil’s description of technological trends. He claims in the book that there will be a period of immense and profound change in the early-to-mid twenty-first century, revolving around a series of revolutions in our understanding of genetics and biology, nanotechnology, and artificial intelligence.

The overall thesis proposed in the book is fascinating, and is reinforced by a number of smaller theories and pieces of evidence. However, I’ll leave the extraordinary idea of the technological singularity for another piece of text and concentrate on another idea suggested by Kurzweil in The Singularity is Near. This idea, as with many of Kurzweil’s notions, is based on observation of technological trends. Kurzweil claims that any technology will typically follow seven stages, which are all part of an overall trend. Following is a short summary of Kurzweil’s seven stages:

The Precursor Stage: The prerequisites of a technology exist and some individuals speculate that the technology may one day exist. Leonardo da Vinci drew pictures of aeroplanes and automobiles, but this cannot be said to be the same as “inventing” these devices any more than the ancient Greeks who wrote the myths of Icarus can be said to have invented the aeroplane.

Invention: An inventor blends curiosity, scientific skills, determination and other talents to create the first actual working example of a particular technology.

Development: This stage is often more essential to the success of a particular technology than the stage of invention. At this stage the technology will be restricted to a few careful guardians. The equivalent of this stage in the development of writing would have been the scribes of Ancient Egypt, and in powered heavier-than-air flight this would have been the period with a few pioneers like the Wright brothers and Louis Bleriot carrying the technology forward in the early twentieth century. Often the ultimate result of this period is something like mass-production, which allows the technology to progress to stage 4.

Maturity: Whilst continuing to develop, the technology becomes established, accepted and “normal”. It is at this point that some people begin to assume that the technology (a) is perfect and (b) will last forever.

The Stage of False Pretenders: This is an interesting period in the history of a technology. A new technology emerges which claims to completely replace the older technology, and the enthusiasts backing the new technology predict a quick victory. However, although the new technology has some benefits, it is deficient in other ways. An example from music-storage formats is the audio cassette tape. This emerged as a “replacement” for older vinyl discs in the sixties and seventies; however, when the weaknesses of the format became apparent it was itself rapidly superseded by digital compact disks (the spelling of “disc” with a k seems more appropriate in this context). Kurzweil’s remaining two stages, Obsolescence and Antiquity, chart the defeated technology’s gradual decline into disuse.

As I mentioned before, these seven stages are an elaboration of an overall trend.

Technologies go from being expensive, not working very well and being limited to a small elite to being relatively cheap, working fairly well and being quite widely available to being almost free, working extremely well and being available to everyone (with exceptions, of course, see below). It is in the field of information technologies that these trends are most noticeable because of the ongoing very high rate of change in that area.

Here is the example of this trend: in 1981 the Osborne Computer Corporation released the world’s first “portable” micro-computer, which cost $1795. It had the following hardware features:

Dual 5¼-inch floppy disk drives

4 MHz Z80 CPU

65 kilobytes main memory

Fold-down keyboard doubling as the computer case’s lid

5-inch, 52 character × 24 line monochrome CRT display

Parallel printer port

Serial port for use with external modems or serial printers

And, as you can see from the image on this device's Wikipedia entry, it can hardly be described as portable. It was heavy, ugly, and difficult to carry around. Now, in 2006, we have the wonderful $100 laptop being developed by the One Laptop per Child organisation.

This is a plan to distribute these extremely durable and low-cost laptops to children in third-world countries to bring them the benefits of information technology and the resultant enhancements in education. The latest prototypes (with prospective prices a shade over $110 per unit) have the following hardware features:

366 MHz AMD CPU with 0.25 watt power consumption

SVGA 7” LCD screen with colour and black-and-white modes (for ebooks)

128 megabytes of DRAM

512 megabytes of flash memory

Wireless networking using an “extended range” 802.11b wireless chipset (2 Mbit/second, to minimise power consumption)

Alphanumeric keyboard, and a long touchpad for handwriting lessons

Built-in stereo speakers and microphone

3 external USB ports

A hand-crank generator!
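The jump between these two machines is a nice illustration of Kurzweil’s overall trend. Here is a rough back-of-the-envelope sketch, using only the list prices and memory figures quoted above (not adjusted for inflation, so the real improvement is somewhat larger):

```python
import math

# Figures from the two spec lists above (not adjusted for inflation).
osborne_ram_kb = 65            # Osborne 1, 1981
osborne_price_usd = 1795
olpc_ram_kb = 128 * 1024       # $100 laptop prototype DRAM, 2006
olpc_price_usd = 110

# Memory per dollar at each date, and the overall improvement factor.
kb_per_dollar_1981 = osborne_ram_kb / osborne_price_usd
kb_per_dollar_2006 = olpc_ram_kb / olpc_price_usd
improvement = kb_per_dollar_2006 / kb_per_dollar_1981

# If the improvement were a steady exponential, how often did it double?
years = 2006 - 1981
doubling_time_years = years / math.log2(improvement)

print(f"RAM per dollar improved roughly {improvement:,.0f}-fold in {years} years,")
print(f"a doubling time of about {doubling_time_years:.1f} years")
```

Roughly a thirty-thousand-fold improvement in memory per dollar, doubling every twenty months or so – just the sort of smooth exponential Kurzweil’s thesis leans on.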

To return to Kurzweil’s seven stages: here we can clearly see an example of technology simultaneously improving, becoming cheaper and becoming available to many more people. Critics of the $100 laptop scheme include Bill Gates, who I feel rather missed the point with his comments when questioned about the $100 laptop programme at the introduction of the Ultra Mobile PC in March:

"If you are going to go have people share the computer, get a broadband connection and have somebody there who can help support the user, geez, get a decent computer where you can actually read the text and you're not sitting there cranking the thing while you're trying to type."

This isn’t a consumer venture like the Ultra Mobile PC. This is a boot-strapping charity venture. I agree that delivering a top-of-the-line dual-core monstrosity running the (doubtless sublime and doubtless expensive) Windows Vista operating system (instead of the free and open source Linux-based Red Hat OS of the $100 laptop) with a wireless 10 Mbps connection covering the entire African continent and a personal guide for every single village in the region would be preferable to what is being suggested, but we don’t all have Mr Gates’ billions to spend. The idea is one sub-$100 laptop for every child.

Further to Mr Gates’ comments: charging $60+ for an operating system means any scheme to provide cheap computing facilities to the developing world is likely to grind to a halt if it intends to use Microsoft Windows™. Other slightly more pertinent criticisms of the project include the possibility of pollutants produced by the disposal of some of the components. This is a genuine worry, and not just one that applies to these laptops, but to all electronic goods. Other critics suggest slightly different approaches, a blogger writes:

Sell the $100 laptop in open market and use royalty to fund free laptops to poor children: I don't understand why OLPC doesn't want to sell in open markets, and why the manufacturing contract has to be exclusive to specific manufacturer(s). By doing this, OLPC is not unleashing the power of the markets. Such a sound concept as $100 laptop, when complemented by the market, will work exponentially well. I suggest a system where the design is made close to open source, and any manufacturer can use the design, and they can make improvements. However, the manufacturers should agree to submit any design or function improvements to the MediaLabs, in return for the original design. The MediaLabs should collect royalty as a percentage of sales, and use it to fund free or subsidized laptops for children of poor countries.” [sic]

This makes some interesting points. I suppose that something like this may well happen in the near future if the scheme is to be pursued (coming to a PC World near you…). The whole idea reminds me of the start of the excellent science fiction novel Singularity Sky by Charles Stross, when mobile telephones rain down on the inhabitants of a repressive regime. In this case it is poverty that is being combated.

This project, aimed at increasing the information-processing and educational facilities in poor regions, is one of many projects that will extend the reach of civilization and progress into developing countries. To read more about the One Laptop per Child organisation, visit their website at http://laptop.org/. To read more about Ray Kurzweil’s seven stages of technology, visit his website: www.kurzweilAI.net

Some News

Since I started blogging a few weeks ago I’ve set up several different blogs. From now on I’m just going to use this one. This is why there is so much stuff published just today. Please enjoy.

The Meme of the Meme

A "meme" is a unit of cultural information. As with many ideas there are several different interpretations of exactly what a meme is, but my personal interpretation goes like this:

My understanding of the idea of the meme is based on my understanding of what constitutes "life".

"Life" consists of a pattern of information that is so structured that it tends to replicate (make copies of itself) and in doing so sometimes changes (mutates). Usually when the pattern changes it does so in a way that makes it less likely that the pattern will succeed in replicating, however sometimes a particular change will occur that is "beneficial" to the pattern, that is, the altered pattern is more likely to replicate than the unaltered pattern.
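This replicate-mutate-select loop is easy to watch in action. Here is a toy simulation of my own devising (purely illustrative): the “patterns” are bit strings, and a pattern’s chance of replicating each generation is simply the fraction of 1-bits it carries.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def fitness(pattern):
    # A pattern's probability of successfully replicating this generation.
    return sum(pattern) / len(pattern)

def mutate(pattern, rate=0.05):
    # Copying is imperfect: each bit flips with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in pattern]

# Start with 50 random 16-bit patterns (average fitness around 0.5).
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(50)]

for generation in range(100):
    next_gen = []
    while len(next_gen) < 50:
        parent = random.choice(population)
        # Fitter patterns replicate more often; most mutations hurt,
        # but the occasional beneficial one spreads.
        if random.random() < fitness(parent):
            next_gen.append(mutate(parent))
    population = next_gen

average = sum(fitness(p) for p in population) / len(population)
print(f"average fitness after 100 generations: {average:.2f}")
```

Run it and the average fitness climbs well above the random-start value of 0.5, then settles where selection pressure balances the steady drag of mutation – the same tug-of-war described above.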

Philosophers who thought about this definition of life noticed that this behaviour was not limited to biological matter, but was also displayed in other media. One example of another medium in which patterns of information replicate is the information stored in a computer system.

Computer viruses are so called because they exhibit similar characteristics to biological viruses. They are essentially patterns of information that tend to create copies of themselves, and spread through all of the space available to them.

Another medium in which information is altered (or processed) and travels around is the human mind. The human mind processes enormous amounts of information, in the form of music, the written word, symbols, spoken language, images, noises etc.

As I understand it, a "meme" is the smallest unit of this sort of information. This is where it becomes difficult to give the word a precise definition. If we are to take the meme in its purest sense, as an elaboration of an analogy taken from the world of biology, then a meme is equivalent to a gene. A gene is a unit of genetic information, and a meme is a unit of "cultural" information.

There is a problem with this. A "gene" is a discrete unit of data encoded as a sequence of chemicals. No one seems very clear as to whether a meme is the label for some as-yet-undiscovered unit of human experience ("experions" ;-)) or simply a label for a "mind virus", the sense in which it is most often used.

"I can't get "The 5th of Beethoven" by the Electric Light Orchestra out of my head!" said Bert.
"That's because you have caught a meme!" replied Alec. "A meme is like a computer virus, but for someone's brain."

Another similar problem is with the definition of "culture". Stating what is culture and what isn't is controversial at the best of times. Comparing memes to genes means we have to compare culture to biology (now I come to think of it - what is biology exactly?).

So to summarise: a "meme" is a self-propagating unit of cultural information that may take the form of an idea, concept, song, phrase, habit, mood or something else. No one has decided exactly what a meme is. However I suspect that a fairly good example of a meme is the idea of the meme itself. By writing this I am reinforcing the meme, and I am helping it spread.

As with genetics, there are some memes that are beneficial (using the word in the woolly sense of part gene/part mind-virus) and there are others that are not. Religions comprise a collection of memes that are generally quite successful in the sense that memes of certain religions are very common in several cultures and often form a central part of the culture.

An example of this is morality. Morality is the ideology of the Christian religion, and has taken a central role in the traditions of Western democratic, progressive, liberal cultures like that which is the main culture in the UK or France.

However, as with genetics, the driving force of memes is their own self-propagation. A meme doesn't necessarily have to benefit the people it uses as its "vector", it only has to ensure that the individuals it infects/occupies/uses survive long enough to spread the meme to several other people, and that the individuals spread it as effectively as possible.

As always, all the self-referential corkscrew thinking involved in thinking about memes leaves me quite tired. There are many questions surrounding the somewhat ambiguous concept of the meme, and no one seems very sure what the answers might be.

Anyway, if you want to read a clearer description of the meme and what a meme is I suggest you go to en.wikipedia.org/wiki/Meme

Guardian

In the technology supplement of yesterday’s Guardian there was an article criticising Wikipedia and Google News:

Away from the hurly-burly of Wikipedia, even current events can seem oddly remote when viewed online. Google News, for example, employs computer algorithms similar to those used in spam filters to identify and present the news. In looking for similarities, the news is homogenised and breaking stories fail to rise to prominence.

For the veteran researcher Daniel Brandt, who taught CIA whistleblower Phillip Agee how to use computers, much of what a human editor provides is lost. “What’s gone is any sense of ‘a scoop’ or ‘an important development’, or ‘new information that puts a new slant on an ongoing story’. There’s no authority, no perspective and no sense of historical continuity. It’s a dumbing-down process,” says Texas-based Brandt.

Mm. I’m not so sure about this opinion. I’d say that most of what is published and broadcast through conventional news-organs like newspapers, TV news and radio news is pretty “dumbed down” already. Further, more often than not editors (especially in popular tabloid newspapers) can actually bring prejudice and spin to stories. Although many of the criticisms of Wikipedia and Google News are fair enough – Jimmy Wales himself has said that many mature articles are “horrific crap” – I have to point out that there are plenty of criticisms you can level at more conventional media as well as “new media”.
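For what it’s worth, the kind of statistical story-matching the article alludes to is not mysterious. Here is a naive sketch of similarity-based grouping – my own toy illustration with made-up headlines, certainly not Google’s actual (unpublished) algorithm:

```python
import math
from collections import Counter

def vectorise(text):
    # Bag-of-words: a headline becomes a word-frequency vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-frequency vectors.
    dot = sum(a[word] * b[word] for word in set(a) & set(b))
    norm_a = math.sqrt(sum(count ** 2 for count in a.values()))
    norm_b = math.sqrt(sum(count ** 2 for count in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

headlines = [
    "nuclear power stations approved for britain",
    "britain approves new nuclear power plants",
    "wikipedia accuracy compared to britannica",
]

# Group any pair of headlines whose similarity clears a threshold.
vectors = [vectorise(h) for h in headlines]
for i in range(len(vectors)):
    for j in range(i + 1, len(vectors)):
        if cosine(vectors[i], vectors[j]) > 0.3:
            print(f"grouped: {headlines[i]!r} / {headlines[j]!r}")
```

The two nuclear headlines get grouped while the Wikipedia one stands alone – and you can see how Brandt’s “scoop” gets lost: a genuinely new story has no near-duplicates to cluster with, so ranking by similarity never surfaces it.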

In an only slightly related topic: I find it immensely annoying when the phrase “so called” is used by newsreaders. It is meant to imply that the nickname the ’reader is about to use was not made up by their own organisation, but by some other source in the media, and has simply been, you know, ‘around’. So instead of looking like populist buzzword-jockeys, the news organ appears as the objective news-vendor it purports to be.

Even worse is the use of the term “…dubbed by the media…”, which again is intended to imply objectivity but could just as well mean “…we’ve all decided to call it something catchy with fewer than three syllables so the GU can stretch it through their four-amp brains…” Call it an overreaction if you will, but I’ve always found it intensely annoying.