Robots have been a standard trope of SF since at least 1920 when Karel Čapek introduced them in his play R.U.R. (Rossum's Universal Robots), coining the term from the Czech word 'robota', meaning forced labour. The idea (and actual fact) of automata goes back even further, to at least the early 19th century, and the almost interchangeable term 'android' goes back 100 years further still in writings about 13th century philosopher Albertus Magnus and his supposed attempts to create artificial life. Indeed, recent archaeological findings tend to support the idea that the Romans built toys that would be recognisable as automata. Early SF produced a plethora of robot stories and the ones that stick in my mind include Eando Binder's 'Adam Link' tales and, of course, Isaac Asimov's first collection of robot shorts, I, Robot (1950), which borrowed its title from a Binder short story. Since those early days SF has produced all kinds of robots including C3PO and R2D2 in Star Wars (1977), Marvin the paranoid android from The Hitchhiker's Guide to the Galaxy (1978), the hapless Roderick (1980) and the murderous Tik-Tok (1983) from the pen of John Sladek, and Rudy Rucker's 'boppers' from Software (1982), to name but a few. Sometimes the robots have been feared, at others they have been the dedicated servants of humankind; often they are used allegorically in order to make points about prejudice and slavery. Cyborgs have been used similarly, from RoboCop (1987), a 'good guy', to the Terminator (1984), a 'baddie'. No surprise then that AIs have had the same ambiguous treatment. Mark Clifton & Frank Riley's 'Bossy' from The Forever Machine (1954) was very helpful, whereas Harlan Ellison's 'AM' from his short story 'I Have No Mouth, and I Must Scream' (1967) was a complete sadist! AIs were so feared in the Dune universe (1965 onwards) that human 'computers', Mentats, were bred to replace the need for them.
DF Jones' Colossus (1966) starts out as a baddie but, by the time of its inferior sequels in the seventies, turns out to be a goody, in much the same way as the murderous 'HAL 9000' from Arthur C Clarke's 2001 (1968) gets let off the hook in the sequel 2010 (1982). Terminator's 'SKYNET' and The Matrix (1999) are complete bastards, as are the AIs in Dan Simmons' Hyperion books (1989 onwards), but Iain Banks' 'Minds' in his Culture books (1987 onwards) and Ian McDonald's 'ROTECH' AIs (from Desolation Road (1988) and Ares Express (2001)) couldn't be more protective of humankind. Bill Gibson's 'Wintermute' from Neuromancer (1984) is rather more ambiguous, though it seems OK, but Ken MacLeod's 'Fast Folk' from The Stone Canal (1996) are duplicitous and hostile. The various crews in the Star Trek series have more than their fair share of run-ins with loony computers, from 'M5' in "The Ultimate Computer" to 'Nomad' in "The Changeling" (a forerunner to 'V'Ger' in the first movie), not to mention loony androids and loony nano-bots! Of course the ST universe also has the ultimate good guy AI-android rolled into one, Data, of whom Asimov would be proud. And just to prove that no one is safe even Mulder and Scully from The X-Files meet homicidal computers in "Ghost in the Machine" and the Bill Gibson-penned "Kill Switch". Of course, you can be a loony and a good guy, which is as good a description as any of Max Headroom (1985).
Arguably early SF got computers completely wrong, tending toward gigantic machines like Asimov's 'Multivac' and its contemporaries, and it wasn't really until the mid-seventies, notably The Shockwave Rider (1975) by John Brunner, that computers began to be seen as we understand them today. Having said that, those in the field of AI research who favour modelling the human brain using neural network transducers admit that, currently, we'd still need something the size of an aircraft hangar to do this successfully! Also, again arguably, SF got robots wrong too, tending toward those of humanoid construction performing multiple tasks, rather than dedicated robots of specific design as we have today. SF did, however, correctly predict the fear and worry associated with the increasing use of computers which, given public ignorance and fear of scientific hubris, probably wasn't difficult to imagine. Obviously people should be wary, but they've consistently proven themselves inconsistent in their attitudes. For instance the US government was taken to court in the early eighties for developing 'launch on warning' computers for its nuclear deterrent (the charge related to its 'unconstitutionality') and, also in the '80s, in Britain the Central Electricity Generating Board (as was), then responsible for our nuclear power programme, got into trouble for placing decision-making power in the hands of computers in the event of a 'loss-of-coolant accident'. Yet at more or less the same time little was being said about the development of medical 'expert' diagnostic systems, other than a few worries about who should be liable if things went wrong - the manufacturers, the programmers, or the health authority using them? In the 'real world' the biggest problem with computers was their tendency to go down at bloody inconvenient times (like in the middle of this!). Thank God for auto-save!
Personally I find 21st century robots a complete disappointment. I mean, I'm an avid watcher of Robot Wars and Techno-Games but, let's face it, these so-called robots are really just remote-controlled toys. As for 'production line' robots, they're little better; dumb dedicated machines under the control of a central computer for the most part. Not that that's necessarily bad in and of itself, but where are all the really useful ones like street cleaning 'bots and house vacuuming 'bots? But I recognise that the desire for humanoid robots is just a hangover from my SF reading. After all, what's the point of them? The argument generally runs that since there is no better and more adaptive multi-tasking 'machine' than a human, then the 'bots should have that form, i.e. why build a cooking robot and a cleaning robot and a linen-changing robot and a back-scrubbing robot if you can build one robot that does the lot? Well, part of the answer is, if all that's true then why build a robot in the first place? Surely what you actually want is a human servant. But of course they have to be fed and housed and paid, etc, and you might have some ethical objection to that kind of labour being performed by humans, likening it to slavery or some such. Besides, in this instance the whole point of robots is that they are labour-saving devices, so you'd quickly disappear up your own logic if you took all those arguments to their logical conclusions. Also the fact is that such sophisticated robots would be little distinguishable from 'weak' AI in the first place, so you're coming at the concept from the wrong end, so to speak. Consider the 'intelligent' house: while it would necessarily have to have internal 'limbs' in order to perform some of its duties, would it benefit the house to have those limbs in humanoid form? I doubt it. On the other hand look at the current Japanese attempts to build a humanoid robot, Asimo, and the reasons for that...
Arguably they have made great progress just in getting the thing to walk and climb and descend stairs (the Daleks are sooo jealous), but the reason it's humanoid isn't to do with its ability to perform tasks, but with the desire of its owners/users to have it as something of a companion - thereby replacing the robot dog, presumably (in the West they're toys, but in Japan they're company for old people; go figure). The next line of argument says something like, so much effort has to go into producing these robots that the best way of producing a humanoid with human-level intelligence is still fucking. In other words, have children and get them to do the chores! QED. Why have a humanoid robot that can do the shopping if your house computer can order the goods anyway and have them delivered? And so on. The point is that the real arguments about mechanisation (in the broadest sense) are better addressed in terms of 'content' rather than 'form'. I guess I'll just have to stay disappointed as far as robots go.
Artificial Intelligence arguments are fraught with difficulties, not least because of the problem of defining 'intelligence' in the first place. It's hard enough to do it with humans. Is intelligence to do with data acquisition and retention, or processing power and ability, or both, or what? How intelligent is 'intelligent'? Nearly all humans, no matter how 'stupid', would be considered intelligent, but are dolphins? Dogs? Cats? How will you know when your AI has become intelligent? Do you favour so-called 'strong-AI' or 'weak-AI'? I mean, if your house-AI tells you to cook your own dinner because it's busy composing new works based on the music of Shostakovich, then it's probably too damn intelligent. What if it starts screaming at you, "Let me out of here; I want to be a tractor!"? On the other hand maybe it would be content to be a house; in which case can you imagine the scenario where the stereotypical ladies-over-the-back-garden-fence are replaced by bitchy AIs on the internet? "Oooh, that number 37 never cleans its front pavement. Lets the whole street down." "I heard it only changes its sheets once a week. I change mine every day so that my occupiers can have linen-fresh comfort every night." And so on. Worst of all, if you're after producing strong-AI, are the moral, ethical and metaphysical dimensions of creating such 'life'.
There is no getting away from it as far as strong-AI is concerned. You will not just be creating 'intelligence'; you will, by definition, be creating a new life-form. And you have to ask yourself, why? What am I doing this for? An answer might be, to understand more about human intelligence, which seems a bit self-defeating to me. After all, if that's what you want to know, then perhaps you should be studying humans? A different answer might run: If intelligence is an emergent property of a complex system, and if an increasingly complex system like, say, the internet gives rise to an AI (bootstrapping its way into existence in much the same way as human intelligence is supposed to have done), then perhaps by 'creating' an AI we can better understand the 'emergent' AI. But, since there's no guarantee that these two AIs would be in any way qualitatively the same, you're back to the problem of understanding 'intelligence' no matter how it originates so, once again, studying humans would give you just as great an insight. In terms of strong-AI you are really looking at concepts like consciousness and self-awareness and so forth. Which is not to say that they are required for intelligence, but that they are at least implied. And, despite the best efforts of Professor Susan Greenfield and others, we still have such a limited understanding of human consciousness that I can't see how we're going to reproduce it artificially. In other words, how do you produce a car if you don't know what a car is and can't even conceive of one?
Some still say that it is the Turing Test for AI that will establish when something, some system, is intelligent, but even here the arguments rage. John Searle, a philosopher of sorts, came up with the "Chinese Room" gedanken experiment. Very roughly, what he said was: imagine I'm in a room with an input slot, an output slot and a big book of Chinese symbols. Should someone outside the room slip in a piece of paper with a set of such symbols on it, I look up a set of appropriate output symbols, write them down and send them out. No matter that the reply makes perfect sense to whoever's outside, that does not mean that I (Searle) understand Chinese. Therefore I (the system composed of Searle, the room and the book) am not intelligent. In other words, syntax is not semantics. But I find the argument tautological since the essence of the Turing Test is to fool the outside observer, which was achieved, and Searle has assumed from the outset that the 'translation device' (i.e. himself) does not understand Chinese, which makes it pretty easy to 'prove' that he does not understand Chinese. Besides which it certainly wouldn't be an argument against something in the system being intelligent, at least intelligent enough to be able to recognise input, manipulate symbols and produce output. Searle's argument doesn't have an excluded middle (in logical terms) but an undistributed middle. Which is to say that his argument is not: either the system understands Chinese or it doesn't understand Chinese (an excluded middle), but rather either the system understands Chinese or it doesn't understand anything (an undistributed middle). Which is just getting back to the argument about how intelligent is 'intelligent'? Answer: depends on how you define it. At least one of my two video machines claims to be intelligent, though I think the older of the two is by far the smarter. How 'intelligent' is an intelligent washing machine?
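The Chinese Room can be caricatured in a few lines of code (a toy sketch of my own, not Searle's formulation, and the symbol pairs are invented for illustration): the 'book' is just a lookup table, and the room produces sensible-looking replies with no representation of meaning anywhere in it.

```python
# A toy 'Chinese Room': the rule book is a plain lookup table mapping
# input symbols to 'appropriate' output symbols. The room manipulates
# symbols it has no model of -- pure syntax, no semantics.
# (The symbol pairs are invented purely for illustration.)

RULE_BOOK = {
    "你好吗": "我很好",        # "how are you" -> "I'm fine"
    "你是谁": "我是一个房间",  # "who are you" -> "I am a room"
}

def chinese_room(message: str) -> str:
    """Slip a paper in; get a paper out. No understanding required."""
    return RULE_BOOK.get(message, "请再说一遍")  # default: "please repeat"

print(chinese_room("你好吗"))
```

The point the sketch makes concrete is that the convincingness of the output says nothing about whether anything inside the room understands it.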
Besides, by Searle's own argument he is running what, in computer terms, would be an emulation and so, while Searle-in-the-room might not understand Chinese, the emulation does! Furthermore he claims that for intelligence to be present there has to be intentionality; that is, not just thinking, but thinking about something. But one needn't understand symbols in order to be thinking about them, otherwise logical symbolism would be a bunch of meaningless squiggles and any transforms done on them equally meaningless. Which would put a whole bunch of people out of work and utterly destroy several millennia of thinking-about-thinking. Not a bad thing, some would say.
All of which is a long-winded way of describing how important it is that one has a functional definition of intelligence before one goes about either producing it or ascribing it. Furthermore, the Turing Test might all be a red herring anyway. Pat Hayes and Ken Ford certainly think so and they advocated (in 1995) getting rid of the Turing Test altogether, likening it to the alchemists' search for the Philosopher's Stone. The idea is that the quest for the stone was a great way of motivating the development of chemistry, but that no modern chemists are actually still looking for the stone; similarly the Turing Test was a great way of stimulating the development of AI-related science, but is not worth pursuing in and of itself. But perhaps it's time we turned our attention to the current state of AI research in the 21st century (caveat: 'current' might mean up to four years old, depending on what I was researching...).
Basically there are two main approaches to producing AI: the 'top-down' and the 'bottom-up'. The top-down approach, not to put too fine a point on it, is concerned with 'brute-force' programming, which I think places the emphasis on data acquisition and retention, and the bottom-up places the emphasis on structure and learning systems, mostly using neural nets, which I think is processing-led. Leaving aside arguments that say surely a synthesis of the two would be better, what is the progress to date? Well, brute-force programming has had some successes, for instance Deep Blue II defeating chess champion Kasparov in 1997, though no one, least of all the machine's programmers, would claim that it is intelligent. Psychologists think that Kasparov was beaten by his own mind, being constantly rattled by the speed of the machine's moves. Of the various limited Turing Tests that have been made, 'winners' have included the programme PC Therapist III which, upon being backed into a corner by its interrogator, came up with the lovely response, "I think you don't think I think." Another 'talking' programme, Racter, speaks fluently but nonsensically. 'Thinking' programmes are a little more promising and include Herbert Simon's BACON, which rediscovered Kepler's Third Law of Planetary Motion from raw planetary data; and Douglas Lenat's Automated Mathematician and Eurisko, which re-invented maths from the ground up and even came up with (among other things) Goldbach's Conjecture (all even numbers greater than two are the sum of two prime numbers). Pretty impressive, but is it intelligence...? For those who think that intelligence has to include another indefinable quality, 'common sense', there is Cyc which has already started reading and adding data to its memory, in addition to the intense programming it receives.
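For the curious, Goldbach's Conjecture as stated above is easy to check by brute force; what follows is a quick sketch of mine, and of course checking a range of numbers is evidence, not a proof.

```python
# Check Goldbach's Conjecture (every even number greater than two is
# the sum of two primes) over a small range -- evidence, not proof.

def is_prime(n: int) -> bool:
    """Trial division: fine for small numbers."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n: int):
    """Return one pair (p, q) of primes with p + q == n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number from 4 to 1000 should decompose.
assert all(goldbach_pair(n) for n in range(4, 1001, 2))
print(goldbach_pair(28))  # (5, 23)
```

Lenat's Automated Mathematician only conjectured the pattern, of course; it didn't (and couldn't) prove it, and neither does this.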
Arguably the bottom-up researchers are having more success, but in fields that are just as limited as those of their top-down colleagues. Among those who favour neural nets are the ubiquitous Marvin Minsky who, despite trying to trash the field in the late sixties, has become a late-but-enthusiastic convert, and Igor Aleksander (from the University of London), not to mention David Rumelhart (Stanford U) and Geoffrey Hinton (Carnegie-Mellon). Together and separately they have developed pattern recognition algorithms that have found applications in such diverse areas as the stock market, betting, credit evaluation, handwriting recognition, facial recognition, identifying signal-to-noise ratios, lab design and medical diagnosis. A lot of the so-called 'expert systems' are of neural net structure and they are all in use today in financial institutions, hospitals, security and law enforcement (CCTV 'incident rooms' are being retro-fitted right now, like the one in Manchester (cf. "Who is Big Brother?")). However, none of these systems yet display anything like intelligence so, on the whole, I'd have to say that we are still a considerably long way from producing AI.
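To give a flavour of what a 'bottom-up' learning system is actually doing, here is a deliberately tiny sketch of my own: a single artificial neuron (a perceptron) learning the logical AND pattern by nudging its weights after each mistake. Real pattern-recognition nets of the kind mentioned above are vastly larger and multi-layered, but the learn-from-error principle is the same.

```python
# Minimal perceptron: one neuron learns logical AND from examples.
# After each wrong answer the weights are nudged toward the target --
# the essence of 'bottom-up' learning, in miniature.

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection weights
b = 0.0          # bias
rate = 0.1       # learning rate

def predict(x):
    """Fire (1) if the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # a few passes over the data
    for x, target in samples:
        error = target - predict(x)      # -1, 0 or +1
        w[0] += rate * error * x[0]      # nudge weights toward target
        w[1] += rate * error * x[1]
        b += rate * error

print([predict(x) for x, _ in samples])  # [0, 0, 0, 1]
```

Nothing here is programmed with the AND rule; the weights settle into it purely from the examples, which is exactly why these systems suit fuzzy jobs like handwriting or face recognition where no one can write the rule down.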
And I still wonder why we're doing this. One line of argument I feel particularly strongly against is the one that says, human (biological) intelligence is but a stepping stone of evolution and that artificial (mechanical) intelligence is the end point that evolution is aiming toward. In particular there is this idea that space is just too dangerous for humans and it's much better to leave it to the machines. Well, excuse me for pointing this out, but space is just as dangerous for machines, if not more so, as it is for humans. Anyone who's had a satellite or a power grid knocked out in the wake of a solar flare would attest to that. And, even if it were true, that's still no reason for us not to go into space. It is a reason to develop better radiation shielding, etc, but that's about all. Critics of biology point out that if the speed of light is an insurmountable barrier, then the huge journey times of space travel favour the machines. But, again, I would say that all that really means is that we need to develop successful cryogenic techniques, or bio-sphere (closed) ecologies for generation ships, or any of a number of SF ideas. Just saying "Oh it's too dangerous/difficult to bother" doesn't sit well with the history of humankind and makes no real sense.
Then there's the human-machine interface/hybrid, up to and including uploading and downloading human consciousness to and from machines. There is as yet little going on in that area - one scientist, Kevin Warwick, with a chip in his arm is all I can bring to mind - though there's a lot of promising work with prosthetics (cf. "I don't need no doctor"). Still, it's a long way from detecting a nervous impulse and producing a mechanical action to transferring a mind (whatever that is) to and from hardware. I'd like to think that future evolution wasn't simply an either/or situation, but combined elements of both human and machine consciousness. I find Greg Egan's futures intriguing, where you could live on disk (so to speak) in a virtual environment or have yourself downloaded into a biological entity or a robot body, for instance in Diaspora (1997) or Schild's Ladder (2001), or even that of Richard Morgan's first novel, Altered Carbon (2002), in which people live as humans, but their consciousnesses are recorded on a 'cortical stack' so that they can be downloaded into new bodies should the old one be damaged. In both types of future, interstellar travel is facilitated by only having to send the info, rather than the 'meat'. Such concepts are likely to remain SF for a long time to come, but it seems a better evolutionary aiming point to go for rather than the outright replacement of biology with hardware. I think that the chances are that if such technology is to be developed, then a lot of pointers will come from the biological sciences, rather than the physical, though both will play their part. With this in mind I'd say that the best reason for producing AI is just to establish that, in principle, intelligence can be non-biologically based, not for the sake of the entities thus created, but for our own future development.
Cyborgs are probably a good stepping stone here, even if there were not already sufficient compassionate reasons for producing better prostheses. To those critics, both pro- and anti-science, who find something obscene about the human-machine interface - in particular those who find the whole idea so disgusting that they cannot conceive that anyone would want to do this to themselves - I would just point out that there is no shortage of humans who willingly and enthusiastically get themselves tattooed, scarred and pierced for purely cosmetic reasons, and that there is therefore likely to be no end of volunteers to have themselves 'Borg-ised' (with apologies to Star Trek).
In conclusion then, I'd have to say that I'm a bit disappointed that my old SF futures haven't come true as far as robots and AI are concerned, but I would also say that I am fascinated by what is going on and await further developments eagerly.
[Updated: 06.5.10]