Wednesday, November 23, 2011

The Inevitability of A.I.


What most of sci-fi (The Matrix, for instance) calls ‘A.I.’ or ‘Artificial Intelligence’ is referred to in Patriots as ‘Simulated Intelligence’. (The conceptual difference is detailed in the chapter The Cheesemaker’s Dilemma.) For this essay, I’ll use the more common and familiar A.I. designation. Keep in mind, too, that what Asimov and others referred to as ‘robotic’ behavior boils down to A.I.


Why A.I. is inevitable

A.I. will be the ultimate expression of our insatiable appetite for information technology. In 2011, as Patriots is being written, we live in an era of dependence on information processing (computers), communications, and technology generally. That dependence grows daily. But in earlier times, when this dependence could only be foreseen, it was greatly feared and resisted on (purportedly) moral grounds. It was during such times that Isaac Asimov wrote his Three Laws of Robotics.


Asimov’s three (or four?) laws…

Condensed from Wikipedia:
The Three Laws of Robotics were introduced by sci-fi author Isaac Asimov in his 1942 short story ‘Runaround’. The laws are:  
• First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
• Second Law: A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law. 
• Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Another law was later added, and numbered as the ‘zeroth law’:
• Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm. 
The Three Laws (and the ‘zeroth’) have pervaded science fiction and are referred to in many books, films, and other media. 


…don’t actually work

The Wiki piece cited previously notes this caveat:
It is recognized that [The Three Laws] are inadequate to constrain the behavior of robots, but it is hoped that the basic premise underlying them, to prevent harm to humans, will ensure that robots are acceptable to the general public.
In other words: Asimov’s laws, in practice, wouldn’t actually work. That’s worth examining. If Asimov didn’t think his laws were a practical means of actually governing A.I. behavior, why did he create them? Again, from the Wiki article:
Before Asimov began writing, the majority of artificial intelligence in fiction followed the Frankenstein pattern. Asimov found this unbearably tedious:  
“... one of the stock plots of science fiction was ... robots were created and destroyed by their creator. Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings? With all this in mind I began, in 1940, to write robot stories of my own – but robot stories of a new variety. Never, never, was one of my robots to turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust.” — Isaac Asimov, 1964.
Asimov was looking for a way out of the Frankenstein writers’ ghetto. He knew that the march of technology was inexorable, and that belief systems (see Chapter 15: The God That Failed) evolve. But he also knew that he could not merely wait for the natural progression of change - as a practical matter, he needed to write his stories as soon as possible. His solution was to replace the Faust/Frankenstein belief structure with a new one, as embodied in the Three Laws. He instinctively knew that other writers, as anxious to explore new ground as he was, would help him spread the word. 


How the Three Laws outlived their usefulness

Frankenstein themes still exist today, but they have lost most of their moral weight and exist largely for their entertainment value. When the machines turn against men in R.U.R. or Metropolis, it is a demonstration of technology as a Faustian bargain. But when this happens in The Matrix, it is well-understood that - despite our ‘differences’ - man and machine must inevitably co-exist. In reality, technology run amok (whether it’s in the form of rolling blackouts or the Challenger disaster or the Titanic or whatever) is usually attributable to the failings of one man or a group of men, rather than mankind in general. 
Certain kinds of technology, however, are still reserved as ‘the province of God’ (though acknowledgement of an actual deity is usually avoided). Jurassic Park is an example: Apparently the creation of life (or re-creation of life, in this case) remains morally out-of-bounds for some reason. Likewise, genetic manipulation of humans, as portrayed in Splice, remains verboten in pop fiction.
Artificial Intelligence, on the other hand, has lost much of its power to shock, because we now live with it and benefit from it. It pervades our lives. The fear is no longer that we could become dependent on it - that ship has sailed. The modern concern is that, now that we do depend on it, it might fail.


Why the Three Laws wouldn’t work in the real world

Leaving aside the fact that Asimov’s laws were never anything more than a logical-sounding plot device, there are a number of fundamental reasons why they are not practically applicable:
• Written in an era of computer programming, Asimov’s laws assume (wrongly) that A.I. will be achieved that way. 
‘Patriots’ posits that self-awareness is an integral part of artificial (which it distinguishes as ‘simulated’) intelligence, and that this cannot be achieved through a means as crude as ‘computer programming’. Furthermore, these thinking machines will not resemble what we call ‘computers’ at all. (Asimov himself saw this coming, and called his devices ‘positronic brains’ rather than computers, although he implied - somewhat contradictorily - that they would be programmed in some way. Most writers and pundits, however, remain wedded to ‘computers’ as the repositories of A.I.) A toy sketch of what this kind of rule-based ‘programming’ would look like - and why it is so crude - follows this list.
• Any laws - of man or machine - can be gotten around. There are always loopholes.
A number of sci-fi tales have been written specifically to demonstrate that Asimov’s laws can be (and sometimes should be) circumvented.
• The Laws gained acceptance not because they were true, but because of a paradox whose (inevitable) resolution we could not accept.
To achieve a true artificial intelligence, we must create a self-aware entity. The Three Laws presume that such an entity can be treated like any other personal property.
The paradox is that any self-aware entity that is treated as personal property is, in fact, a slave. The unpalatable (but necessary) resolution, explored in the ‘Patriots’ series, is that all slaves - ‘living’ or not - must be set free. In the end it is man’s willingness to accept slavery that is immoral, not technology itself.
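As promised above, here is a toy sketch - my own illustration in Python, not anything from ‘Patriots’ or from Asimov - of the Three Laws encoded the way the ‘computer programming’ school imagines them: as an ordered series of rule checks. Every name in it is hypothetical. The priority ordering is trivial to write down; the predicates it depends on are not.

# A toy, rule-based encoding of the Three Laws (all names hypothetical).
# Checks run in priority order; any failed check vetoes the action.

def permitted(action, world):
    """Return True if `action` survives the Three Laws, checked in priority order."""
    # First Law: no injuring a human, and no allowing harm through inaction.
    if would_harm_human(action, world):
        return False
    if is_inaction(action) and human_harm_is_preventable(world):
        return False
    # Second Law: obey human orders unless obedience conflicts with the First Law.
    if disobeys_orders(action, world):
        return False
    # Third Law: self-preservation, but only after the first two are satisfied.
    if endangers_self(action, world) and safer_alternative_exists(action, world):
        return False
    return True

# Every predicate below is a stub - and that is the point. The 'laws' are easy
# to order; the questions they call on ('would this harm a human?') are precisely
# what no one knows how to program.
def would_harm_human(action, world): ...
def is_inaction(action): ...
def human_harm_is_preventable(world): ...
def disobeys_orders(action, world): ...
def endangers_self(action, world): ...
def safer_alternative_exists(action, world): ...

Even as a sketch, notice how much weight those undefined predicates carry: a loophole in any one of them is a loophole in the ‘law’ itself.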


‘Accelerating AI’

As ‘Patriots’ was being written, a popular paper on AI written by John O. McGinnis of Northwestern University was making its way around the Internet. 
Mr. McGinnis is not yet ready to grapple with the fact that an Artificial Intelligence will inevitably also be a self-aware intelligence, and therefore - if we treat AI tomorrow the way we treat a laptop computer today - a slave. Or perhaps he does not believe that this is in fact the case. (I’m guessing that he has just not looked all the way down this particular rabbit hole.)
In any event, he seems to be fighting the last war - Asimov’s War - against the dwindling number of people who still believe AI must inevitably become a malevolent Frankenstein’s Monster. To his credit, he does not invoke Asimov or his overworked Three Laws as a cure. Unfortunately, he does not offer anything better. Instead, he pitches adherence to a ‘Friendly AI’. What this amounts to, apparently, is somehow ensuring that our creation, er, likes us. Or maybe not that, exactly, because as a definition McGinnis cites the Singularity Institute, which summarizes the goals of friendly AI as seeking the elimination of “involuntary pain, death, coercion, and stupidity.” In other words, AI may be friendly to humans but it is definitely UNfriendly to their failings. 
To fulfill its mission of stamping out stupidity, AI will no doubt simply follow the conveniently-placed arrows on the millions of “I’m With Stupid” shirts now in circulation to locate the very wellsprings of all human error. Having easily identified this neatly-labeled problem, the Singularity people assure us that AI will eliminate it as its first order of business. You and I and other not-stupid people (you know, the folks we approve of) have naught to fear from this - we will just sit back and watch it happen. Following that triumph, AI can surely stamp out the reasons behind the human aptitude for causing needless death, pain, and coercion as well. Again, not to worry: Someone else - someone you don’t much like - is to blame.
Of course, if this is honestly along the lines of what the Singularity people believe, the very first source of stupidity AI must seek out and destroy is the Singularity Institute itself.
Mr. McGinnis cannot see the elephant in the room. (Not to single him out - apparently no one else can, either, at this early stage of the game.) His response to the question of how to prevent AI from doing evil is basically a slogan, like Google’s ‘Don’t Be Evil’. (The L&M Cigarette Company used to have a slogan, too: ‘Just what the doctor ordered’. They used it until it sounded baldly ludicrous, and Google’s line is probably destined for a similar fate. So much for the effectiveness of slogans as solutions.)
Mr. McGinnis makes a sound point, though, when he says that AI must be developed because we must have it. (He might as well have added that AI ‘must’ be developed because it WILL be, because marketplace forces are moving to ensure that this will happen regardless of what policies we enact or fail to enact. AI, as per the title of this essay, is inevitable.) 
He makes another sound point when he tells us:
“…confusing the proposition that AI may soon gain human capabilities with the proposition that AI may soon partake of human nature is the single greatest systemic mistake made in thinking about computational intelligence—an error that science fiction has perpetuated.”
Quite right. Just because AI can reason does not mean it is susceptible to our human (animal) failings, such as greed, vanity, jealousy, deceit… well, it’s a long list. In fact, it’s most likely that this is NOT the case. Technology-gone-bad is just a convenient way for sci-fi writers to make their living, and it has been for quite some time now. Stories such as Blade Runner are written to satisfy the demands of drama and the marketplace for fiction. Hi-tech villains sell, just like Nazi villains do, but that doesn’t mean they’re plausible in real life. No matter how cool the concept sounds, rest assured, they never really Saved Hitler’s Brain.
However, Mr. McGinnis fails to think this all the way through. True, AI is unlikely to go all SkyNet on us and decide to wipe out humanity. That gun sitting on the table won’t go out and look for someone to kill, either. But if someone comes along who wants to get some killing done, that gun could come in mighty handy.
Because as we have always known, Pogo was right: We have met the enemy, and he is us.


from The Patriots of Mars [Postscripts & Essays]
