Tuesday, April 25, 2006

Transhumanism

In English today we discussed ICT and language, and the point came up that we are all interdependent when it comes to technology. Even someone who knows how to program a computer doesn’t necessarily know how the PC actually works. Computer technology and its close affiliate, information technology, have spawned a whole raft of ideas and concepts that would be unfamiliar to anyone from before about the 1920s, and with the rapid development in these fields, techniques and concepts can become irrelevant and whole new problems can crop up overnight (being flippant here, but you know what I mean).

I’m prepared to bet £300 that if you took any group of 100 people from a progressive Western democracy and dumped them on a completely untamed and uncivilized planet (as they do in Strata) with only the most basic tools (i.e. axe, hammer, shears – this is cheating a bit anyway), most of them would be dead within a month, and those that survived would probably not continue to do so for very long. (In Iron Sunrise by Charles Stross, by contrast, the posthuman AI-run-amok deposits people on barely terraformed planets with cornucopia engines and suchlike.)

The point is that you can’t sit down in a jungle and make a 3 GHz Intel Celeron processor using “the knowledge of the woods” or some such garbage. Every achievement in every field of human endeavour sits at the top of a massive, broad pyramid of ideas founded on things like subsistence agriculture.

In his recent book Collapse: How Societies Choose to Fail or Succeed, Jared Diamond (such a brilliant name!) discusses how social trends during times of economic success affect the rise and fall of civilizations (there’s a whole lot more as well, and it looks to be a fascinating book…). Exactly how durable and resilient our own society actually is remains a point of considerable debate in some quarters.

What we can justifiably assume from human nature is that when things are going well, people will tend to assume they will stay that way forever. Do I need to back this assertion up? Probably, but I can’t be bothered. Look at the dot-com bubble. Look at the whole stock market, for Chrissakes. People expect things to stay pretty much as they always have been, and act surprised when things change. As Ray Kurzweil explains in The Singularity is Near, people have difficulty absorbing trends that take more than five years or so to become apparent, because our resolution of events tends towards the second–minute–hour level, with perhaps the top end of perception being in the region of a year (speculation, people, speculation).
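
Kurzweil’s point is easier to see with a toy calculation. A minimal sketch (my own illustration, not from the book): assume some capability doubles every two years, then compare how much change you notice over a one-year window versus a twenty-year one.

```python
# Why exponential trends surprise us: a quantity doubling every two
# years looks almost flat over a one-year window, but transforms
# completely over twenty years.

DOUBLING_PERIOD_YEARS = 2.0

def growth_factor(years: float) -> float:
    """How much a doubling-every-two-years quantity grows over `years`."""
    return 2.0 ** (years / DOUBLING_PERIOD_YEARS)

for window in (1, 5, 10, 20):
    print(f"over {window:2d} years: x{growth_factor(window):,.1f}")

# over  1 years: x1.4      <- barely noticeable
# over 20 years: x1,024.0  <- a different world
```

Within the window we can actually perceive, the curve is indistinguishable from “things staying pretty much the same”.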

www.cyborgdemocracy.net is a blog and website that advocates democratic transhumanism. Go see them for their explanation of what a transhumanist is, what a democratic transhumanist is, and what a libertarian transhumanist (or extropian) is. The origins of the extropian movement are complex (i.e. I can’t be bothered to describe them in detail and would rather make a crass generalisation from limited knowledge, which is much more fun than being professionally journalistic ;-)). As far as I can work out, a sentiment among Americans represented by the ideals of libertarianism – which advocates limiting the control of the state over the individual, rampant free-market capitalism, a night-watchman state, and the abolition of hierarchy – collided with the creation of the Internet and simultaneous progress in fields like AI and biotechnology (e.g. the Human Genome Project), and the apparently imminent arrival of nanotechnology.

Libertarians who lived in the time of Thomas Carlyle were allowed to run rampant in Britain between about 1830 and 1860, and the night-watchman state that was instituted for much of that time simply didn’t work. Free markets were revealed to be just as capable of crushing people underfoot as dictatorships and kingdoms. The wealthy couldn’t be relied upon to be charitable enough to help those unable to help themselves, and the interdependence of people in society was revealed: no one could pull anywhere unless they pulled together. As a result of the backlash against this state of affairs you get Karl Marx, Engels, socialism and a fifty-year war that almost destroyed the planet. However, social reforms like the NHS in Britain, universal suffrage, and the New Deal in the US resulted in the wonderfully progressive, generally cheerful and happy-go-lucky tradition of liberal democracy that we (i.e. anyone who owns or has access to a PC and an Internet connection, is housed, and has time to spare to read weblogs – OK: someone in one of the aforementioned progressive liberal democracies, or China…) enjoy living with today.

Many libertarians, however, felt that all this social reform meant that more and more power was being taken back by the state (e.g. ID cards, control orders, and evil instruments of the unholy like mobile telephone masts and speed cameras), and sometimes expressed themselves rather badly.

The problem was that the libertarian system had been shown not to work. People needed the state for things like welfare, healthcare, education, crime fighting, and defence. Advances in technology seemed to offer a solution, though: if you can develop an anti-ageing technology, it removes much of the need for a public healthcare organisation at a stroke (lol), and with viable nanotechnology-based “autodocs” we could…

And on and on and on… The point is that extropians imagine that “magic technology” can cure all of society’s ills (including, apparently, the need for a state). I don’t buy this personally. It’s a utopian dream for sure, and maybe one day we could all live long, happy and fulfilling lives by virtue of our wondrous sufficiently advanced technology. But first: who makes the nanotech? Second: who makes the AI? Third: what about the commons? Fourth: anarchy doesn’t work, sunshine (we had anarchy way back in 4000 BC, and now we don’t – I wonder why?).

You could argue that the last two solve themselves: commons are replaced by magic technology, real estate by constant expansion out into space in home-grown diamond starships, courts and justice systems can be bought and sold (mmm, really?), and anyway we’ll-all-be-uplifted-superintelligences-living-in-a-computer-so-we-won’t-disagree-about-anything-anyway (more cynical noises – we’ll leave out the posthumans ’cause let’s face it: we don’t know nuffink about ’em).

The point is that although technology can empower the individual (I’m writing a blog here, after all…), it can’t (yet) solve every damn problem we have. I think we still need a democratic state structure – possibly modelled on the Swiss version of nearly direct democracy rather than our own first-past-the-post representative version. Direct democracy is an interesting topic, and one that could be pursued on a much larger scale now that communications technology and connectivity are what they are in the developed world.

If the admirable aims of transhumanists are to be realised, I think it will only be through cooperation and integration into a broader, global society. I don’t mean globalisation; I mean something more like this. The fact is that even if I had a PhD in “the application of artificial stem cells using Darwinian algorithms to solve problems in a surgical context”, I would probably not be able to design the chips for the computer that ran the algorithm, or have enough expertise in polymer chemistry to make the fabric of the swivel chair at my workstation, or know how to hunt a seal, for that matter.

Twenty-odd years ago a certain British politician was purported to have said that there “is no such thing as society”, and today we are still living with the consequences of her philosophy. Technology can’t replace society, because technology is society, and society is people. Ergo, technology is people.

Society is as much a technology as anything else, as is any structure of the mind, like language, which helped our genetic ancestors reach our level of intelligence. Evolution is a dynamic process, after all, even if it operates far outside our aforementioned perception of change, and it would seem that the advent of tool use, language and fire all contributed to what we are. As we continue this ongoing process, it accelerates, because we keep adopting more and more effective methods of amplifying our actions and our ideas. Thinkers like Ray Kurzweil reckon that once we’ve got this AI thing cracked, the next stage of the evolution of intelligence will be attached to the ever-accelerating rate of improvement in ordered information (as in information with a purpose: the better the order, the better it can fulfil that purpose, which can be sentient, intelligent life).

It is frightening to consider how helpless I would be as an individual in an unknown and hostile environment, having to fend for myself against nature and all her ills. But fortunately for me I live within a web of devices, both large and small, tangible and intangible, that ensures I am well fed, looked after, educated and reasonably happy. The transhumanists are right: technology will allow us to transcend the poorer aspects of our nature and become better than we are. We know this because, to a large extent, it already has.

Sunday, April 23, 2006

Second Person Narratives

At the moment I’m writing an essay entitled Language and Occupation as part of a dry run for my AS-level revision (translation: I was looking for somewhere to start revising English, which is impossible, so I defaulted to doing my homework…) and just before I got to the interesting bit about J. L. Austin’s speech acts I got bored and wandered off into blogland.

Charles Stross’ latest piece clubbed me over the eyes – he was talking about something very close to the rambling discussion of speech acts I was trying to bolt together. He discusses how the first, second and third person are used (or aren’t used) in fiction. Generally we write about something in the third person (he/she/it). If you take it that reading is the closest thing to telepathy we’ve managed to develop, Stross describes the third-person version of telepathy as

…an omniscient telepathy cam weaving among the actors like an attention-deficient mosquito, landing to suck a moment's thought here then buzzing across the room to slurp a transient meme there…

Generally speaking, the second person is rarely used in fiction (one exception that comes to mind, because I reread it recently, is Strata by Terry Pratchett, in which Pratchett uses second-person imperatives: “Consider Kin Arad, now inspecting outline designs for the TY-archipelago…”). Stross proposes to do just that, though. He explores what the concept of blogging might have meant for someone ten or more years ago:

Obviously, you know what it feels like to read a blog. But cast your mind back in time ten years — no, make that fifteen — to a time before you encountered the net and before blogs had been invented. Try to imagine yourself as an aspiring SF writer who's read about this internet thingy, and about some experimental hypertext tools (from Xanadu and Hypercard to Hyper-G by way of Gopherspace and WAIS, with a side-order of this funny compromise thing some guy with a double-barreled name is tinkering with at CERN). As this aspiring SF writer, you've decided to write a novel set in 2006, a novel in which this internet thingy your tech-head friends keep gassing about in the pub is everywhere. And you start trying to work out just what that might mean. You've heard about email, and that intuitively makes sense. You've possibly heard of AOL or CompuServe or CIX and, if you move in academic circles, of USENET, and the idea of a bunch of people talking on a multi-user bulletin board isn't that strange. And there'll be some kind of easy-to-use hypertext system that lets ordinary folks add data to it.

But what are the ordinary folks going to add to this hypothetical global hypertext thing? What are they going to talk about? How are they going to use it and what's it going to feel like?

Asking these questions, your traditional, instinctive, bad-SF approach is to explain everything in nauseating detail: "Johnny sat down at the HyperTerminal and typed in his password. The computer verified his identity and let him in, throwing up a picture of the InterWeb. Johnny thought for a moment: where do I want to go today? The answer was obvious. Like all other communications media, the InterWeb had only really taken off once it was adopted by the porn industry as a replacement for Betamax tapes; now, finding anything useful in it was like walking down a strip mall full of flashing red neon signs and questionable window displays. But Johnny wanted to research his dissertation topic. So he typed in the address locator of Google, a popular information clearinghouse that scanned the rest of the InterWeb daily and indexed it, allowing keyword searches." (And so on.)

A more sophisticated approach that increasingly became the norm in more literary SF over the past couple of decades is to show. "Johnny picked up his laptop and logged on. Windows opened on its desktop, pop-up ads flashing garish offers of hardcore porn at him. Annoyed, he brought up a browser and headed to a search site to continue researching his dissertation." This mode is a whole lot less clunky, but it's got a crippling handicap: the author has to make the leap from technical description ("typing his password into the HyperTerminal's keyboard") to action ("he logged in") in a manner that is comprehensible to the reader. Because, let's face it, if you've never seen a computer the second version of this story is a whole lot less accessible than the first. Early SF was seen by its authors and their self-ghettoized readers as a didactic, educational medium exposing them to new ideas about technology and the way we might live. You could show the first version to a 1930s reader and they'd be able to follow the plot: the second remix is incomprehensible, because the referents for the action simply aren't there ("laptop", "logged on", "browser", "search site").

Sorry for dumping all that down there, but it is an excellent comment on how SF tends to be written, especially the long, protracted infodumps that are necessary to introduce the ignorant reader to a new concept.

One of the best authors I know of when it comes to avoiding this mistake is Ken MacLeod. In his early books (The Star Fraction and The Stone Canal, from the Fall Revolution sequence) MacLeod casually drops references both to advanced technology and to events that have taken place in real history – the result is a pair of books that merit rereading after a couple of months, or several hours of googling and Wikipedia exploring, and whose density and originality are extraordinary.

Well I gotta get back to my essay. Such is life.

Tuesday, April 18, 2006

Wikipedia

Comparing Wikipedia to the Encyclopaedia Britannica and other much more established works, as well as to other online encyclopaedias (encyclopaediae?), seems to be in vogue at the moment, with reviews and articles in Focus Magazine, The Guardian, The Gadget Show, and The Register.

Speaking as a student and a regular user of Wikipedia, I can name three qualities that make it extremely worthwhile compared to other publications:

It is free.

It is detailed.

It is comprehensive and massive.

Although there have been many criticisms of the project, and recently a number of important people have left the organisation, I haven’t seen much evidence of the graffiti and vandalism that have earned Wikipedia criticism. I can only assume that this is because of the tireless efforts of many nameless volunteers who constantly monitor and repair such instances.

I do sometimes come across labels like “The Neutrality of this Article is Disputed”, which is quite reassuring: usually authoritative publications will not concede any bias or possibility of prejudice on the part of their writers, and it is refreshing to find a source that admits its own flaws.

However like all utopian dreams there are generally quite a few blemishes when the project is put into practice. It is sad that people are vandalising and abusing Wikipedia. Jason Scott explains why Wikipedia has started to judder recently here at archive.org.

Fortunately, even if Wikipedia is consigned to some kind of back seat, there is a future for the model: Wikipedia co-founder Larry Sanger has set up The Digital Universe:

"What makes people enthusiastic about contributing to Wikipedia is not that anyone can participate, it's that it's easy for the people who do to participate, and that they get instant feedback from in the community," he says. "Those features that make Wikipedia compelling can be replicated in a system that is managed by experts. The whole idea is to teach experts the Wikipedia magic."

His conception seems to be pretty close to an ideal for an online encyclopaedia, carrying forward the best aspects of Wikipedia but putting it into the hands of people “qualified” to do the job. It seems a bit sad that the anarchic nature of Wikipedia hasn’t played out as well as it might have done, and all because of the most irritating and immature elements of society (e.g. Radio One…), but I’m reassured that there will be a replacement if Wikipedia does go the way of the dodo, which it might still manage to avoid.

Friday, April 14, 2006

Some Comments on Ray Kurzweil's Recent Book

Recently I read a fabulous book called The Singularity is Near: When Humans Transcend Biology by someone called Ray Kurzweil. I would advise everyone to read it: it contains many good ideas, and its central idea is absolutely extraordinary.

I have since spotted an example of the technological trends Kurzweil describes. He claims that there will be a period of immense and profound change in the early-to-mid twenty-first century, revolving around a series of revolutions in our understanding of genetics and biology, nanotechnology and artificial intelligence.

The overall thesis proposed in the book is fascinating, and is reinforced by a number of smaller theories and pieces of evidence. However, I’ll leave the extraordinary idea of the technological singularity for another piece of text and concentrate on another idea suggested by Kurzweil in The Singularity is Near. This idea, like many of Kurzweil’s notions, is based on observation of technological trends. Kurzweil claims that any technology typically follows seven stages, which are all part of an overall trend. Following is a short summary of those seven stages:

The Precursor Stage: The prerequisites of a technology exist, and some individuals speculate that the technology may one day exist. Leonardo da Vinci drew pictures of aeroplanes and automobiles, but this cannot be said to be the same as “inventing” these devices, any more than the ancient Greeks who told the myth of Icarus can be said to have invented the aeroplane.

Invention: An inventor blends curiosity, scientific skills, determination and other talents to create the first actual working example of a particular technology.

Development: This stage is often more essential to the success of a particular technology than the stage of invention. At this stage the technology is restricted to a few careful guardians. The equivalent of this stage in the development of writing would have been the scribes of Ancient Egypt; in powered heavier-than-air flight it would have been the early twentieth century, when a few pioneers like the Wright brothers and Louis Blériot carried the technology forward. Often the ultimate result of this period is something like mass production, which allows the technology to progress to stage four.

Maturity: Whilst continuing to develop, the technology becomes established, accepted and “normal”. It is at this point that some people begin to assume the technology is A: perfect and B: going to last forever.

The Stage of False Pretenders: This is an interesting period in the history of a technology. A new technology emerges which claims to completely replace the older one, and the enthusiasts backing the new technology predict a quick victory. However, although the new technology has some benefits, it is deficient in other ways. An example from music-storage formats is the audio cassette tape. This emerged as a “replacement” for older vinyl discs in the sixties and seventies; however, when the weaknesses of the format became apparent it was itself rapidly superseded by digital compact disks (the spelling of “disc” with a k seems more appropriate in this context). Kurzweil rounds the seven off with the stages of Obsolescence and Antiquity, in which the technology is finally and genuinely superseded, then lingers on as a historical curiosity.

As I mentioned before, these seven stages are an elaboration of an overall trend.

Technologies go from being expensive, not working very well, and being limited to a small elite; to being relatively cheap, working fairly well, and being quite widely available; to being almost free, working extremely well, and being available to everyone (with exceptions, of course – see below). It is in the field of information technologies that these trends are most noticeable, because of the ongoing very high rate of change in that area.

Here is the example of this trend: in 1981 the Osborne Computer Corporation released the world’s first “portable” micro-computer, which cost $1,795. It had the following hardware features:

Dual 5¼-inch floppy disk drives
4 MHz Z80 CPU
64 kilobytes of main memory
Fold-down keyboard doubling as the computer case’s lid
5-inch, 52 character × 24 line monochrome CRT display
Parallel printer port
Serial port for use with external modems or serial printers

And, as you can see from the image on this device’s Wikipedia entry, it can hardly be described as portable. It was heavy, ugly, and difficult to carry around. Now, in 2006, we have the wonderful $100 laptop being developed by the One Laptop per Child organisation.

This is a plan to distribute these extremely durable and low-cost laptops to children in third-world countries, to bring them the benefits of information technology and the resultant enhancements in education. The latest prototypes (with prospective prices a shade over $110 per unit) have the following hardware features:

366 MHz AMD CPU with 0.25 watt power consumption
SVGA 7-inch LCD screen with colour and black-and-white modes (for ebooks)
128 megabytes of DRAM
512 megabytes of flash memory
Wireless networking using an “extended range” 802.11b wireless chipset (2 Mbit/second, to minimise power consumption)
Alphanumeric keyboard, and a long touchpad for handwriting lessons
Built-in stereo speakers and microphone
3 external USB ports
A hand-crank generator!
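
Put those two spec sheets side by side and the trend almost computes itself. Here is a rough back-of-the-envelope comparison (my own arithmetic, using the approximate figures above, ignoring inflation, and treating megahertz and megabytes as crude proxies for usefulness):

```python
# Rough price-performance comparison: Osborne 1 (1981) vs the $100
# laptop prototype (2006), using the approximate figures quoted above.
# Inflation is ignored; this is illustration, not rigour.

osborne = {"price_usd": 1795, "cpu_mhz": 4,   "ram_mb": 64 / 1024}
olpc    = {"price_usd": 110,  "cpu_mhz": 366, "ram_mb": 128}

for name, machine in (("Osborne 1", osborne), ("OLPC", olpc)):
    per_mhz = machine["price_usd"] / machine["cpu_mhz"]
    per_mb = machine["price_usd"] / machine["ram_mb"]
    print(f"{name:9s}: ${per_mhz:,.2f} per MHz, ${per_mb:,.2f} per MB of RAM")

# Osborne 1: $448.75 per MHz, $28,720.00 per MB of RAM
# OLPC     : $0.30 per MHz, $0.86 per MB of RAM
```

That is roughly a 1,500-fold drop in the price of a megahertz and a 30,000-fold drop in the price of a megabyte in twenty-five years – and the 2006 machine is the one you can crank by hand.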

To return to Kurzweil’s seven stages: here we can clearly see an example of a technology simultaneously improving, becoming cheaper, and becoming available to many more people. Critics of the $100 laptop scheme include Bill Gates, who I feel rather missed the point when questioned about the programme at the introduction of the Ultra Mobile PC in March:

"If you are going to go have people share the computer, get a broadband connection and have somebody there who can help support the user, geez, get a decent computer where you can actually read the text and you're not sitting there cranking the thing while you're trying to type."

This isn’t a consumer venture like the Ultra Mobile PC. This is a boot-strapping charity venture. I agree that delivering a top-of-the-line dual-core monstrosity running the (doubtless sublime and doubtless expensive) Windows Vista operating system (instead of the free and open source Linux-based Red Hat OS of the $100 laptop) with a wireless 10 Mbps connection covering the entire African continent and a personal guide for every single village in the region would be preferable to what is being suggested, but we don’t all have Mr Gates’ billions to spend. The idea is one sub-$100 laptop for every child.

Further to Mr Gates’ comments: charging $60+ for an operating system means any scheme to provide cheap computing facilities to the developing world is likely to grind to a halt if it intends to use Microsoft Windows™. Other, slightly more pertinent criticisms of the project include the possibility of pollutants produced by the disposal of some of the components. This is a genuine worry, and not just one that applies to these laptops, but to all electronic goods. Other critics suggest slightly different approaches; one blogger writes:

“Sell the $100 laptop in open market and use royalty to fund free laptops to poor children: I don't understand why OLPC doesn't want to sell in open markets, and why the manufacturing contract has to be exclusive to specific manufacturer(s). By doing this, OLPC is not unleashing the power of the markets. Such a sound concept as $100 laptop, when complemented by the market, will work exponentially well. I suggest a system where the design is made close to open source, and any manufacturer can use the design, and they can make improvements. However, the manufacturers should agree to submit any design or function improvements to the MediaLabs, in return for the original design. The MediaLabs should collect royalty as a percentage of sales, and use it to fund free or subsidized laptops for children of poor countries.” [sic]

This makes some interesting points. I suppose that something like this may well happen in the near future if the scheme is to be pursued (coming to a PC World near you…). The whole idea reminds me of the start of the excellent science fiction novel Singularity Sky by Charles Stross, when mobile telephones rain down on the inhabitants of a repressive regime. In this case it is poverty that is being combated.

This project, aimed at increasing the information-processing and educational facilities available in poor regions, is one of many that will extend the reach of civilization and progress into developing countries. To read more about the One Laptop per Child organisation, visit their website at http://laptop.org/. To read more about Ray Kurzweil’s seven stages of technology, visit his website: www.kurzweilAI.net

Some News

Since I started blogging a few weeks ago I’ve set up several different blogs. From now on I’m just going to use this one. This is why there is so much stuff published just today. Please enjoy.

The Meme of the Meme

A "meme" is a unit of cultural information. As with many ideas there are several different interpretations of exactly what a meme is, but my personal interpretation goes like this:

My understanding of the idea of the meme is based on my understanding of what constitutes "life".

"Life" consists of a pattern of information that is so structured that it tends to replicate (make copies of itself) and in doing so sometimes changes (mutates). Usually when the pattern changes it does so in a way that makes it less likely that the pattern will succeed in replicating, however sometimes a particular change will occur that is "beneficial" to the pattern, that is, the altered pattern is more likely to replicate than the unaltered pattern.

Philosophers who thought about this definition of life noticed that this behaviour is not limited to arrangements of matter, but also shows up in other media. One example of another medium in which patterns of information replicate is the information stored in a computer system.

Computer viruses are so called because they exhibit similar characteristics to biological viruses. They are essentially patterns of information that tend to create copies of themselves and spread through whatever space is available to them.
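
The benign extreme of this is the quine: a program whose only output is a copy of its own source – a self-replicating pattern with the spreading machinery removed. A classic two-statement example in Python:

```python
# A self-reproducing pattern: the two statements below print an exact
# copy of themselves (these comment lines excluded). The string holds a
# template of the whole pattern, with a slot for itself.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

A virus is essentially this trick, plus the machinery for injecting the copies somewhere new.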

Another medium in which information is altered (or processed) and travels around is the human mind. The human mind processes enormous amounts of information, in the form of music, the written word, symbols, spoken language, images, noises and so on.

As I understand it, a "meme" is the smallest unit of this sort of information. This is where it becomes difficult to give the word a precise definition. If we are to take the meme in its purest sense, as an elaboration of an analogy taken from the world of biology, then a meme is equivalent to a gene. A gene is a unit of genetic information, and a meme is a unit of "cultural" information.

There is a problem with this. A "gene" is a discrete unit of data encoded as a sequence of chemicals. No one seems very clear as to whether a meme is the label for some as-yet-undiscovered unit of human experience ("experions" ;-)) or simply a label for a "mind virus", which is how the word is often used.

"I can't get "The 5th of Beethoven" by the Electric Light Orchestra out of my head!" said Bert.
"That's because you have caught a meme!" replied Alec. "A meme is like a computer virus, but for someone's brain."

Another similar problem is with the definition of "culture". Stating what is culture and what isn't is controversial at the best of times. Comparing memes to genes means we have to compare culture to biology (now I come to think of it - what is biology exactly?).

So to summarise: a "meme" is a self-propagating unit of cultural information that may take the form of an idea, concept, song, phrase, habit, mood or something else. No one has decided exactly what a meme is. However I suspect that a fairly good example of a meme is the idea of the meme itself. By writing this I am reinforcing the meme, and I am helping it spread.

As with genetics, there are some memes that are beneficial (using the word in the woolly sense of part gene/part mind-virus) and there are others that are not. Religions comprise collections of memes that are generally quite successful, in the sense that the memes of certain religions are very common in several cultures and often form a central part of those cultures.

An example of this is morality: Christian moral teaching is a collection of memes that has taken a central role in the traditions of Western democratic, progressive, liberal cultures like those of the UK and France.

However, as with genes, the driving force of memes is their own self-propagation. A meme doesn't necessarily have to benefit the people it uses as its "vectors"; it only has to ensure that the individuals it infects/occupies/uses survive long enough to spread it to several other people, and that they spread it as effectively as possible.
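
You can watch this logic play out in a crude model (my own toy, with entirely made-up numbers): a meme that harms its hosts can still out-spread a benign one, provided it transmits fast enough before its hosts drop out.

```python
# Toy meme-spread model: each active host passes the meme to
# `spread_rate` new hosts per step, and stops spreading it with
# probability `dropout` (boredom, embarrassment, or worse). A harmful
# meme can still win if transmissibility outpaces the damage it does.

def hosts_reached(steps, spread_rate, dropout, seed_hosts=1.0):
    active = total = seed_hosts
    for _ in range(steps):
        new = active * spread_rate
        total += new
        active = (active + new) * (1.0 - dropout)
    return total

benign = hosts_reached(steps=10, spread_rate=0.5, dropout=0.05)
harmful = hosts_reached(steps=10, spread_rate=1.5, dropout=0.40)
print(f"benign meme reached  ~{benign:,.0f} hosts")
print(f"harmful meme reached ~{harmful:,.0f} hosts")  # the nastier meme wins
```

The numbers mean nothing in themselves; the point is that the selection pressure is on spreading, not on being good for us.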

As always, the self-referential corkscrew thinking involved in contemplating memes leaves me quite tired. There are many questions surrounding the somewhat ambiguous concept of the meme, and no one seems very sure what the answers might be.

Anyway, if you want to read a clearer description of the meme and what a meme is I suggest you go to www.en.wikipedia.org/wiki/Meme

Guardian

In the technology supplement of yesterday’s Guardian there was an article criticising Wikipedia and Google News:

Away from the hurly-burly of Wikipedia, even current events can seem oddly remote when viewed online. Google News, for example, employs computer algorithms similar to those used in spam filters to identify and present the news. In looking for similarities, the news is homogenised and breaking stories fail to rise to prominence.

For the veteran researcher Daniel Brandt, who taught CIA whistleblower Philip Agee how to use computers, much of what a human editor provides is lost. “What’s gone is any sense of ‘a scoop’ or ‘an important development’, or new information that puts a new slant on an ongoing story. There’s no authority, no perspective and no sense of historical continuity. It’s a dumbing-down process,” says Texas-based Brandt.
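
For what it’s worth, the “similarity” machinery being described here isn’t mysterious. Here is a minimal sketch (my guess at the general approach, certainly not Google’s actual algorithm): treat each story as a bag of words, and group stories whose cosine similarity crosses a threshold. You can see how a genuine one-off scoop ends up in a cluster of one, with nothing to boost its prominence.

```python
import math
from collections import Counter

# Minimal sketch of similarity-based story grouping (a guess at the
# general approach, NOT Google News's actual algorithm): each story is
# a bag of words; stories join a cluster if they are similar enough to
# its first member.

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def cluster(stories, threshold=0.5):
    bags = [Counter(story.lower().split()) for story in stories]
    clusters = []  # each cluster is a list of story indices
    for i, bag in enumerate(bags):
        for c in clusters:
            if cosine(bag, bags[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

stories = [
    "minister resigns over expenses scandal",
    "expenses scandal forces minister to resign",
    "new species of beetle discovered in borneo",
]
print(cluster(stories))  # [[0, 1], [2]] -- the scoop sits alone
```

Everything that makes ten outlets report the same thing pushes a story up; everything unique about a story pushes it down, which is more or less Brandt’s complaint.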

Mm. I’m not so sure about this opinion. I’d say that most of what is published and broadcast through conventional news-organs like newspapers, TV news and radio news is pretty “dumbed down” already. Further, more often than not editors (especially in popular tabloid newspapers) can actually bring prejudice and spin to stories. Although many of the criticisms of Wikipedia and Google News are fair enough – Jimmy Wales himself has said that many mature articles are “horrific crap” – I have to point out that there are plenty of criticisms you can level at more conventional media as well as “new media”.

In an only slightly related topic: I find it immensely annoying when the phrase “so called” is used by newsreaders. It is meant to imply that the nickname the newsreader is about to use was not made up by some other source in the media, but has simply been, you know, ‘around’. That way, instead of looking like populist buzzword-jockeys, the news organ appears as the objective news-vendor it purports to be.

Even worse is the use of the term “…dubbed by the media…”, which again is intended to imply objectivity but could just as well mean “…we’ve all decided to call it something catchy with less than three syllables so the GU can stretch it through their four-amp brains…” Call it an overreaction if you will, but I’ve always found it intensely annoying.