Indistinguishable From Magic

AI Fiction and Reality

When I was a budding megageek of maybe ten I wanted to learn to write programs on our family computer in BASIC. We’d played a little with computer programming in school, and I’d been taken with the way arcane written instructions would somehow manage to generate something interesting on the green monochrome of the chunky Apple IIs. I read through books of simple programs but they weren’t designed to make much sense to a fourth-grader. Still, sometimes particular lines jumped out because they were mostly written in plain English:

60 REM CHANGE THE SCREEN COLOR

While others baffled me with a jumble of letters and mathematics:

70 Z = (RND(X + 1) / 10) MOD Y

Despite my confusion, I brought the books home and typed it all in exactly as it appeared, and it would work. The screen changed color, or the little ASCII dude would run away from his ASCII dog. Easy.

Naturally, I proclaimed myself an expert. I figured I could extrapolate and write my own programs, leaving out the impenetrable symbolic manipulation. It didn’t make sense. Who needed it? That REM command sounded important and clever. The program did whatever it said. REM must be some sort of communication protocol or conduit, enabling the programmer to tell the computer what to do directly, without the fooling around. It was like I was having a solid man-to-man with my computer. We’d be on the level. We’d talk in plain English. I liked the whole idea of REM. So I’d work only with that. I tried:

10 REM DRAW A BOX

It was deviously simple. How could it fail? I ran it. No errors, but nothing happened, either. What the? What was the problem? Was I simply failing to properly utilize the REM language? I tried various combinations of words in hopes of communicating with my stubborn IBM. “DRAW A SQUARE” (alternate wording). “PLEASE DRAW A SQUARE” (adding politeness). “DRAW A SQUARE OR I FIND A MAGNET” (adding threats). “DIE COMPUTER DIE” (maybe I should play with people instead). Yet I could not salvage success. I was not to comprehend the mysterious REM language.

Arthur C. Clarke wrote that any sufficiently advanced technology is indistinguishable from magic. It was some years later that I realized what had gone wrong. The secret of REM that I labored to decipher? In BASIC, REM is the command that instructs the computer to ignore the rest of the line, allowing for programmer comments. Most computer languages have a syntax for this. It lets programmers describe what a chunk of code is doing. Helps with debugging later. Functionally, REM actually prevents the computer from reading text as instructions. All the math and symbols were, of course, what was actually running the program.
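For the record, the same idea shows up in just about every modern language. Here’s a minimal sketch in Python (my own example, nothing from those old books): the comment line is ignored entirely, and it’s the unglamorous code underneath that actually draws the box.

# draw a box   <- like my REM incantations, this line does precisely nothing
size = 5
for row in range(size):
    if row in (0, size - 1):
        print("*" * size)                     # solid top and bottom edges
    else:
        print("*" + " " * (size - 2) + "*")   # hollow middle rows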

So I was stupid. There was no special REM language to talk to my computer. Well, how should I have known? I only had the equivalent of a fourth-grade education. But then, I was in the fourth grade.



A ten-year-old human may make the mistake of imagining his computer to be the product of magic or superscience. But at least he is capable of making such a conceptual leap.

True artificial intelligence, not the smoke-and-mirrors REM kind, can’t compete. However, artificial intelligence in film often does make these leaps, out of writer ignorance or for the sake of entertainment or imagination. (Screenwriters would be under a lot less pressure if their audience consisted solely of ten-year-old boys.) The realistic problem of achieving AI is a frequent victim of snappy screenplays. Robots just work; computers are truly magic. Androids learn to cry. Jeff Goldblum effortlessly hacks into a UFO’s mainframe, apparently through its unsecured wireless network. But then, this is one of the reasons we need film. AI is a fantastic concept that, at present, does not exist. In fiction we can ignore the fact that human intelligence and AI are two fundamentally different things.

Human cognition depends upon electrically charged neurons, about 100 billion of which make up a brain. Each functions like a small processor. Imagination helps us envision solutions to problems that we’ve never actually encountered before. Sounds like a pretty good deal until you try calculating a square root. The point is, it’s not all about pure processing power. Our brains didn’t evolve to handle everything. We can function based on the cumulative knowledge of a lifetime of stored memories. Stoves are hot, look both ways before crossing the street, love hurts like a steel-toed kick to the shin. Soft human brains retain memories through association. Learn something enough times in enough different ways and it will eventually sink in. Hear a bell every time dinner’s ready and pretty soon you’re hungry whenever the cat plays with that damn jingle toy.

Computers don’t have the luxury of highly associative memory. Everything must be hardcoded. Information has to be painstakingly sorted and searched through to find that one little bit of interest. Luckily computers work fast.



Checkers is old hat. It’s a game for a wristwatch or an Atari. It may even be solved.

If a game is solved, the outcome of every possible position under perfect play is known. In the strongest case, the machine holds all possible game states in memory, so no matter how the pieces are arranged, at any point in the game, it can look that state up and make the provably best move. Chinook, the Deep Blue of checkers, is close. It can occasionally guarantee a win as soon as the fifth move.
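For a small enough game, “solving” it is something you can sketch in a few lines. The toy below (my own illustration, with nothing to do with Chinook’s actual internals) solves a Nim-style counting game, twenty-one stones, take one to three per turn, last stone wins, by working out the best move for every possible state ahead of time. Once that table exists, playing perfectly is nothing but a lookup.

def solve(total=21, max_take=3):
    """For every state (stones remaining), record whether the player to move
    can force a win and which move does it. This table *is* the solved game."""
    best = {0: (False, None)}               # no stones left: the player to move has lost
    for stones in range(1, total + 1):
        best[stones] = (False, None)
        for take in range(1, min(max_take, stones) + 1):
            opponent_wins, _ = best[stones - take]
            if not opponent_wins:           # leave the opponent in a losing state
                best[stones] = (True, take)
                break
    return best

table = solve()

# With the whole game in memory, "playing" is just a lookup:
stones = 17
win, move = table[stones]
print(f"{stones} stones: {'winning' if win else 'losing'} position, take {move}")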

Other games are more complex. Chess has something like 10^41 possible states (that’s a one followed by 41 zeroes). Even though modern computing can’t approach numbers like that, dedicated state-of-the-art chess machines like Deep Blue still have massive catalogs of states and games and can therefore find a move worthy of a grandmaster in about the time it takes you to blink. Today even algorithms running on standard PCs can compete against the best human players in the world.

None of this is free-wheeling associative memory; it’s all a problem of search. Deep Blue doesn’t get a feeling about a good move the way Garry Kasparov does. From indescribable associations carved somewhere in his memory, Kasparov just knows that his knight had better leave the queen alone, no matter how tempting it seems at first. Meanwhile, Deep Blue feverishly rifles through hundreds of millions of positions per second to judge the move most likely to lead to victory. And its success is still largely in the hands of programmers. Since there isn’t remotely enough storage space for all those possible states, playing the game is really a matter of deciding how to conduct the search for moves efficiently.
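Scaled all the way down to tic-tac-toe, the shape of that search looks something like the sketch below (a bare-bones illustration under my own simplifications; Deep Blue’s real search added alpha-beta pruning, handcrafted evaluation, opening books, and custom hardware). The machine tries every legal move, asks what the opponent’s best reply would be, and keeps whichever move guarantees the best outcome.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return "X" or "O" if someone has three in a row, else None.
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def search(board, player):
    """Best (score, move) for `player`: +1 is a forced win, 0 a draw, -1 a loss."""
    opponent = "O" if player == "X" else "X"
    if winner(board) == opponent:
        return -1, None                      # the previous move already ended the game
    moves = [i for i, square in enumerate(board) if square == " "]
    if not moves:
        return 0, None                       # board full: draw
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player                    # try the move...
        score, _ = search(board, opponent)   # ...see the opponent's best reply...
        board[m] = " "                       # ...and take it back
        if -score > best_score:
            best_score, best_move = -score, m
    return best_score, best_move

position = list("X O  O  X")                 # a made-up mid-game position, X to move
print(search(position, "X"))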

Imagine if you had to go through this crap every time you did anything. If you weren’t programmed cleverly enough, just leaving the room would require some thought. You might first consider the window (nope, eight-floor drop, possible death), then the closet (nope, crammed with moldy Archie comics) before considering and ultimately making a decision in favor of the door.



The fictional idea of AI truly catching up to human intelligence, and even winning, has been around a while. Stanley Kubrick famously took it on in 2001: A Space Odyssey, my favorite both for style and because it took a serious look at how AI would interact with humans as we continued to evolve. Still, the film has a tough time shaking the idea that AI might be anything more than a threat. HAL, the film’s unblinking automated antagonist, dominates the third act, going from 21st-century wondercomputer to homicidal maniac in, what, an hour? Kubrick infamously fails to treat the audience to much in the way of explanation. The otherwise superfluous sequel, 2010, attempts to supply a literal reason for HAL’s, er, calculation error. We’re informed that HAL was unfairly asked to do something very human: lie. Some crewmembers had specialized knowledge about the mission, while others were not permitted to know the details. HAL misinterpreted his duty to keep their secrets and instead reached the logical conclusion that the humans could find out restricted information and subsequently make errors per their smushy biological programming, jeopardizing the mission. Therefore, wiping out the crew with a combination of uncooperative airlocks and oxygen deprivation seemed like the right solution. It’s easy to keep secrets from the crew when they’re being flung into space by renegade EVA pods.

That more or less covered it for audiences who weren’t comfortable with Kubrick’s original ambiguity. But it’s not the only way to look at the film. The Cold War explanation would be that computers are mysterious behemoths and you get what you deserve if you trust a roomful of vacuum tubes more than a government pencil, slide rule, and sheaf of NASA-watermarked graph paper. Science fiction film nerds like me take it even further, generating stuff about mankind needing to grow beyond its technological dependencies to evolve.

Produced about the same time as 2010, and also exploring the directions AI might go, WarGames simply assumed it was inevitable that all computers were connected and inching toward intelligence. They would naturally spawn reasoning faculties advanced enough to consider initiating a nuclear war, only to stop once they understood the valuable life lesson of the futility of tic-tac-toe.



Robots have been the more frequent vehicle for dramatic interpretation of AI. They don’t need the pure processing power of HAL to overcome confinement to a computer. Mobile humanoids can even be played by actual human actors or jive-talkin’ CGI sidekicks. Regardless of form, science fiction artificial intelligence generally comes in one of three varieties: hilariously quaint, dispassionately violent, or morally complex human replica.

The first type is the domain of an antiquated age — or modern humor. Classic science fiction is rife with clones of Forbidden Planet’s Robby, his clunky frame and ostensibly important twirling machinery and blinking lights representing the time period’s technological ideals and fears. They might move slowly, but few robots of the time were anywhere near a modest size. Gort from The Day the Earth Stood Still silently towers over the hapless humans, who are powerless to do anything about it while his human-like (and by extension, friendlier) counterpart Klaatu spends the film negotiating for peace. But a few decades passed, people landed on the moon, and computers became small enough to fit inside small buildings. Fears about slow giant robots went the way of propeller beanies.

Today, as one of the four pillars of clichéd internet humor, people often find old-school robots either entertaining or furiously annoying, depending perhaps on their general level of tolerance towards internet culture. (I personally retain some marginal interest in ninjas, am indifferent towards pirates, and have long since stopped caring about monkeys. But I still think robots are funny.) Perhaps the apex of the idea, Matt Groening’s brilliant Futurama generates substantial mileage from robot humor. Finally given the regular time slot and support that FOX never bothered to provide, Futurama has proven so popular on late-night Adult Swim repeats that the series is going back into production on Comedy Central. For jerks like me and my robot-loving brethren who refuse to let this phenomenon die, it’s confirmation that robot humor is still appreciated even while remaining ubiquitous in Flash animation and ironic avatars.



When given the luxury of a personality, these classic artificial life forms were sometimes regarded as friendly and serviceable to mankind, while other times they were simply unstoppable killing machines. As real-life technology evolved, the helpful innocence of Robby seemed less realistic than, strangely enough, homicidal time-traveling cyborgs with muted Austrian accents. By the mid-eighties, computer technology was evolving so rapidly that really anything seemed feasible eventually.

The killer robot concept reached full fruition in The Terminator. Humans are efficiently pursued and destroyed, with no chance to make use of their ability to weasel out of things. Even HAL had seemed reasonable. He had, in fact, already outreasoned us and logically concluded that human death was the best way to accomplish the mission. But the Terminator ignored reason altogether in favor of fulfilling its order to hunt down specified targets. As a cyborg, it becomes much more the stuff of nightmares and horror cliché: deliberate, plodding, insatiable. Thematically, the Terminator isn’t fundamentally different from, say, a killer shark or a zombie. Either way, we’re the helpless prey, victimized by predators that just keep coming and coming.

Even when AI isn’t so merciless, it might just be unable to adequately factor in the value of a life. The coldly efficient Borg from Star Trek serves as an example. Individuals are just something to assimilate to make the hive stronger. Similarly, machines from The Matrix (and the host of novels it borrows heavily from, notably William Gibson’s Neuromancer) have harvested human bodies as power sources. At least in that case people get to live out imaginary existences in the framework of the Matrix, working for large, faceless corporations and plodding through their lonely lives just the way they would had the machines not taken over.

The films also touch on an interesting corollary to the idea of an artificial intelligence being the dominant force in human lives. We’re told that early versions of the Matrix were designed to make everyone’s life perfectly wonderful and happy. (How kind of the machines to be considerate of our feelings! I guess we can buy into this idea if we’re already accepting the concept that the machines would bother giving us anything.) But those early versions failed. The details of why are left unclear, but it’s easy to imagine a world without problems collapsing in on itself. It would be like living in a giant Bret Easton Ellis novel: there simply wouldn’t be enough drugs for everyone. People need things to worry and complain about, or they’ll make up fictional problems to compensate. Give us traffic. Give us taxes. Give us frustrating decisions about color-coordinating our bathrooms. Give us murderous robots.



As human replicas, robots can provide a crowbar-to-the-skull obvious metaphor to explore just what it means to be alive, anyway. Star Trek: The Next Generation simply swapped one creature struggling with his human side for another, replacing the original series’ Vulcan Mr. Spock with the android Data. Stanley Kubrick’s death short-circuited the great potential of A.I. Artificial Intelligence (released in 2001), one of the few attempts in recent memory to draw on the substantive literature on the subject (in this case, Brian Aldiss’ short story “Supertoys Last All Summer Long”). Given his pioneering work on 2001 and generally more dispassionate eye, it’s reasonable to assume Kubrick would have explored the topic more thoughtfully and objectively than Spielberg’s precocious kid story did.

Even more unfortunately, one of the great thought experiments on robot intelligence, Isaac Asimov’s classic collection of stories, I, Robot, was adapted, only to be hopelessly Will Smithified, shredded beyond any resemblance to its original form. Asimov’s original work portrays some of the natural consequences of achieving true artificial intelligence. In particular, he stresses what happens when robots are bound by unbreakable behavioral laws: the famous three Laws of Robotics (a robot may not harm a human; it must obey human orders unless that conflicts with the first law; and it must protect its own existence unless that conflicts with the first two). Not all cases fit nicely into the constructs of these three laws. For example, if a robot is ordered to do something dangerous, the order should override its self-preservation protocol. But how should the robot react as conditions change? Turns out that this works a bit differently in the real world (fictional or not) than it does on a computer screen, where infinite loops and memory overruns merely result in frozen web browsers and blue screens of death. Can people really trust robots not to harm them, just because they’re programmed to be good? A subtle nervous fear permeates most of Asimov’s portraits of AI.
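Just to make that ordering concrete, here’s a toy sketch (mine, not Asimov’s, and nothing a positronic brain would recognize) of a robot deciding whether to carry out an order by checking the Laws in priority: the order overrides self-preservation, but the moment circumstances shift and a human would be harmed, the First Law vetoes everything.

def obey(order, *, would_harm_human, would_destroy_robot):
    """Decide whether to carry out `order`, checking the Laws in priority order."""
    if would_harm_human:       # First Law: never injure a human -- outranks everything
        return False
    # Second Law says obey the order, even though the Third Law
    # (self-preservation, i.e. would_destroy_robot) would rather not.
    return True

# Ordered into the reactor room: dangerous to the robot, so the order wins...
print(obey("inspect the reactor core", would_harm_human=False, would_destroy_robot=True))
# ...until conditions change and a human ends up in harm's way.
print(obey("inspect the reactor core", would_harm_human=True, would_destroy_robot=True))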

Despite the natural ability of the medium to create suspense by capitalizing on anxiety, the film version doesn’t make much of an effort to worry about many of these natural paradoxes, trading thoughtful exchanges for witty one-liners and a bunch of endless action sequences. The Laws of Robotics are dutifully repeated for nerds’ sake, though essentially unexplored. But it’s hardly a new idea that major Hollywood studios are willing to subject most creative, if intellectual, endeavors to a soul-crushing death by focus group. Perhaps it’s best if my other favorite science fiction literature stays hidden after all. I pray no one ever greenlights a series of shitty buddy cop flicks loosely based on Asimov’s other robot novels.

The hallmark film on “human envy” is still Blade Runner. Nearing twenty-five years since release, it remains stark, fascinating, and fantastic. It manages to take on both real and artificial perspectives on intelligence. Is it right for humans to construct androids (known as replicants in the film) just for our service and pleasure, knowing they will be given just enough emotion to feel pain and solitude? How can replicants be expected to placidly accept their dreary existence, knowing they have a finite lifespan and can outsmart their creators? What does experience really mean for creatures with a limited life, whether those lives are a hundred years, or just four? Depending on which version of Blade Runner you watch, you can even get a different take on the subject. Ridley Scott’s director’s cut erases all voice-over narration, which in turn gives the film the twist of a new ending.


Why is there so often a gap between reality and fiction? The idea comes up frequently in science fiction because it has so many applications. AI creates conflict, drama, thrills, even comedy. And in general, it’s interesting to think about. It’s an altogether different form of life, often more intelligent than its monkeyish creators and free of their physical weaknesses. So with all the examples, shouldn’t we expect more refined portrayals?

While it’s a natural theme for science fiction, especially if you prefer the term “speculative” fiction, true AI is just as bogged down with the complexities of real life. It’s not actually all that funny, frightening, or human in its current form. Drama rarely succeeds on a computer screen. A truly stunning AI breakthrough might come in the form of a vastly more efficient search algorithm. (Not exactly a traditional model for suspense.) Who wants to watch a greasy guy dramatically write LISP code when we could bend the rules a little and let him just talk to the machine? What if the machine could also be portrayed by a Blade Runner-era Sean Young? Or, whatever, even Rutger Hauer.

AI, like a lot of other sciences (and life in general), is a lot more fun when you exclude some of the facts. Real-life science is complicated, time-consuming, and fraught with missteps and resets. Let’s say we want to write a story about clones, and let’s say we want them to attack something, and let’s further say we don’t have Ewan McGregor and Samuel L. Jackson to prop up our hopeless dialogue, so we’re going to rely on good science to carry the story. If cloning is portrayed realistically, it’s about as exciting as watching someone write their dissertation. Instead of watching the fully aware duplicates stroll out of the magic cloning booth, we watch as the cloned embryo inexorably becomes a fetus, then continues on its vengeful path towards birth, education, indoctrination, and maturity. The twenty or so years this takes ought to give all of the characters plenty of time to plan their next move.

Not that it can’t work as a vehicle for serious writers who have something to say about the human condition. You really need not look past Blade Runner to find a good example. But drama requires a reduction in boundary conditions to cut any story down to the meat of its basic conflict. The slow struggle to achieve genuine artificial intelligence makes for a better documentary than entertainment, so it’s probably for the best that AI is made “easy” in fiction. If we simply assume AI exists and can interact with it, we can get on with the story and explore the consequences. Some willing suspension of disbelief is implied, too, or we’re never going to get the chance to ponder the meaning of our actions, and how they might lead to homicidal robots from the future.

Nevertheless, lazy writing will always stand on the shoulders of clichéd giants. In the same way that it’s easy to make a character cool by jamming a cigarette in his mouth, or making him smart with one raised eyebrow and a Michael Bay-style sharp focus change, it’s easy to squeeze some drama out of AI going berserk merely because we expect it to. Since when are movies about drama anyway? It helps that robots and computers are also easily made awesome.



It’s difficult to say which path AI is going to actually take in realspace. Most fiction presents it as dangerous, even to the point of being a likely cause of our extinction, but artificial intelligence gone berserk is just one way to kill all humans. In an AI class I took in graduate school, my professor honestly sounded a bit disappointed that his field was no longer considered the most likely cause of man-made catastrophe (nanotechnology and biological warfare are now considered far more dangerous). Not that it won’t provide a valuable assist. Vernor Vinge describes the concept of “the singularity,” the point at which superhuman intelligence arises (most likely pure AI, though he leaves room for a biotechnological route). After this point, the “human era” ends. Things will change too fast for us to comprehend, rendering extrapolation of future events impossible and abandoning us in the wake of a superior form of life. Vinge considers this inevitable in some form (best case: something like humans perfectly in control of their technology) and due within a few decades.

The history of AI is still too brief to make much in the way of genuine prediction. Moore’s Law has essentially held true (from the prediction made by Intel co-founder Gordon Moore in 1965 that processor complexity, and hence computing power, would double at a regular rate, every 18 months or two years, depending on which version of the quote you believe). So while we can presume that technological capability will continue to increase, something as nebulous as artificial intelligence isn’t hindered merely by processing power, but by conceptual understanding. Vinge’s theory is a possible, but not definite, extension of Moore’s Law. But considering that my office computer will crash at the slightest threat of an important deadline, I find it profoundly difficult to believe the machines will eventually get themselves together enough to rise up and kill me without the mind of a supervillain behind them. (Erase my bank account or get me fired, maybe.) While logic circuitry has a natural advantage in rote computation and in predictable but complex games, it remains a clumsy, inefficient way to acquire knowledge. Even the best machine learners are outsmarted by children. In experiments, researchers have attempted to instill all manner of knowledge into an infantile computer brain, only to watch it try to build a tower out of blocks by starting at the top. Perhaps there’s no substitute for experience ... or a million years of evolution.