Human, mecha, supermecha and David
Initial reaction to Spielberg and Kubrick’s A.I. Artificial Intelligence (AI) was mixed. While a number of critics hailed AI as a worthy successor to Kubrick’s 2001: A Space Odyssey, many others damned the film as some combination of simplistic, confused, boring, and schmaltzy. Salon.com alone afforded three versions of this latter, negative reaction: Charles Taylor tells us that Spielberg “botches the emotions and issues he raises”; Stephanie Zacharek remarks that Haley Joel Osment’s performance as robo-boy David is a complex gem “in a movie that’s otherwise afraid to be too complex”; and David Thomson celebrates Jude Law’s libidinous Gigolo Joe as a “terrific idea” that Spielberg, busy peddler of “woeful sentimentality,” himself never grasps.1
In this essay I try to show that those who have damned AI have done so prematurely: AI has problems, but a lack of interestingly elaborated ideas isn’t one of them. My strategy will be to provide and argue for a primer consisting of two points for understanding the film’s overall structure and intentions. I then try to show that, given essentially just these two architectonic points, AI emerges as coherently engaged with complex and decidedly unschmaltzy ideas and arguments, making it, in all likelihood, the most explicitly philosophical mainstream film since 2001.
Explicit philosophical richness is, of course, neither necessary nor sufficient for a film to be good. Most good or great films have negligible explicit philosophical content — the films of Kubrick, Malick, and Renoir, like the novels of Dostoevsky, Melville, and George Eliot, are exceptional — and many if not most films with explicit philosophical content of various sorts are barely watchable.2 Thus, nothing I say here will constitute a direct defense of AI’s overall virtue (although my sympathies with respect to this larger question will always be clear). I will concentrate instead on the, as it were, defensive task of removing what seems to have been the principal obstacle to other people finding as much that is valuable in AI as I do.
Since I do think that AI is a near or “flawed” masterpiece, however, I am committed to the view that explicit idea-mongering in a film is compatible with its being good. This view is hard to deny, I think, and it is not clear that the contrary view — that all estimable films do or should have at most only relatively implicit philosophical concerns, say those that emerge principally in the light of considerations of genre or of wider social contexts of various sorts — has ever been seriously held by anyone.3 But preferences need not (and commonly do not) end where rational argumentation gives out. My own sense is that a critical preference for films whose most interesting level of broadly philosophical content is relatively implicit is widely shared, and I conjecture that AI’s tepid reception is traceable in part simply to the fact that it flouts this preference.
AI’s reception may also reflect a kind of disconnect between what academic philosophers such as myself take to be philosophical content and what everyone else does. I gather, for example, that Christopher Nolan’s Memento, which was released a few months before AI, struck and continues to strike many people as philosophically rich. But the only philosophical content I can discern in Memento is a thesis that nobody should ever have denied: that extensive impairment to memory would be incredibly corrosive of normal psychological and social functioning. Coordinately, the extensive discussions online and elsewhere directed at answering “What happened?” questions with respect to Memento strike me as fun and even illuminating in some fashion, but not as philosophically probative. (It isn’t clear that any “What happened?”-driven discussion could be.) Memento may be a brilliant film — its narrative calisthenics are certainly remarkable — but if so, it’s not because it’s philosophically exceptional. In this article I try to show that AI is, by contrast, the philosophical real deal: that it develops many surprising and controversial theses and lines of argument. While I’ve already indicated that divining philosophical content in a film is not the same thing as showing that it is great, in AI’s case, if it is a great film, it will, I think, be in part because of its philosophical depth.
Before moving on to our AI primer, some rules of the game for this essay:
- Save the Spielbricking for later. AI effectively has two famous authors. While it’s fun to speculate about who should get the credit or blame for what, it isn’t philosophically illuminating. In any case, all such speculations are best deferred until the publication of Spielberg’s screenplay and of the materials due to Kubrick and his collaborators on which that screenplay built.
- No allusion- or reference-tracing for its own sake. AI is a remarkable synoptic achievement. As Jonathan Rosenbaum of The Chicago Reader observes, it functions as a kind of simultaneous sequel to three of Kubrick’s films (2001, A Clockwork Orange, Eyes Wide Shut) and two of Spielberg’s (E.T., Close Encounters). AI also contains gentler echoes of almost every other Kubrick film as far back as Killer’s Kiss, as well as of a host of other films. I won’t bother to limn these sorts of connections except insofar as they feed directly into AI’s philosophical enterprise.
- German philosophy avoidance system. AI contains some tantalizing near-puns on German words that have special significance for philosophers: “Bildung” (David’s brother Martin asks him about his Build-day) and “Dasein” (a boy asks Martin whether David has DAS, the Damage Avoidance System, and another replies “DAS ist gut”) in particular. These garnishes of philosophical catnip can be safely left for another occasion.
- Literature and psychoanalysis are someone else’s problems. I suspect that AI is lousy with Freud, and I suspect that it’s lousy with literature too, from Pope’s “The Rape of the Lock” to whatever the literary antecedents of Lord Johnson Johnson (Brendan Gleeson) and his moon-mobile are. This material too I set aside as strictly philosophically tangential.
- No nitpicking about the set-up. AI explains the prevalence of robots over humans in the 22nd century by suggesting that they consume “no resources beyond their original manufacture.” But there are no energy free lunches: if the robots weigh more or less what humans weigh, and move around as much as and in essentially the same way that humans do, then they’re using comparable amounts of energy (resources in a broad sense). Having to pack all of that energy in at the time of manufacture is at best an inconvenience, and at worst incurs many additional energy costs. Moreover, the Swintons’ home and its grounds, the dumping of robot parts, the glitz and lights of Rouge City, and so forth all seem part of a culture of material abundance, not of scarcity. We set all of this aside as a relatively small nit by speculative movie standards: it’s peripheral to the main action and philosophical endeavor of the film, and it could easily have been omitted.4
- He’s and she’s, please. Essentially all the robots we see in AI are gendered. I’ll therefore avoid the strictly correct “it” and “its” in favor of whichever battery of gendered pronouns in English will help us keep track of the robot individual that’s at issue.
All but the last of these “rules” simply clarify what we won’t be doing, except incidentally, in this essay. Someone else should, of course, investigate these other dimensions of AI.
1. An AI Primer: Two Points
Both AI’s inner narrative and its outer frame are necessarily mysterious to the extent that we aren’t sure who David’s Giacomettian saviors are. It may not be especially important to know whether they are orga(nic) or mecha(nical); after all, it wouldn’t be too unsatisfying to be told that the spindly creatures are supposed to transcend those categories. But the question of their descent — whether the spindly creatures are descendants of the mecha we see in the 22nd century portion of the inner narrative, or, what I take to be the principal alternative hypothesis, extra-terrestrials — is crucial in a film so centrally concerned with various sorts of parent-child relations.
I think that the right story about the spindly creatures is that they are the descendants of the inner narrative’s 22nd century robots. I will follow the usage common in online discussions of AI, and call the spindly creatures supermecha — with the proviso that we keep in mind that it’s the creatures’ descent from the mecha and not their mechanicalness per se that is important.
Surprisingly few considerations internal to AI support the descended-from-mechas understanding of the supermecha. That the creatures are mecha is suggested by the fact that they appear to communicate by (something like) movie-transfer and have glowing circuit-like elements beneath their metallic skin. But that the supermecha are descended from David and Joe’s ilk (rather than ultra-advanced robots from another planet) has to be inferred:
- from general foreshadowings, e.g., the inner narrative’s depiction of 22nd century human anxiety about increasingly functional robots
- from understanding Joe’s remark to David that “They [i.e., the humans] made us too quick, too smart, and too many. We are suffering for the mistakes they made because when the end comes, all that will be left is us” as accurate prophecy
- by a broadly Occamist “Don’t multiply types of entities [here, types of robots] beyond necessity” argument.
None of these inferences is individually strong, and taken jointly they remain far from conclusive. And this seems to me to represent a serious miscalculation on Spielberg’s (and possibly Kubrick’s) part: it’s simply too hard for an audience to correctly identify the supermecha. Given, as we will see, the complex web of ideas and arguments that AI asks its audience to keep in focus, we need clarity at this juncture, not artful ambiguity.
The indirect and partially external-to-the-film argument for the descended-from-mechas understanding of the supermecha is just that AI is much more interesting and intelligible if they are. On one level, of course, the whole of this essay constitutes an argument for this proposition, but let me now provide two specific examples of how the descended-from-mechas understanding of the supermecha helps make the film more intelligible.
First, consider the outer narrator-supermecha’s otherwise preposterous and embarrassing genuflections before human genius, which conclude that human beings must be “the key to existence.” That the attitude of the supermechas to us should be this worshipful — that we should be Gods to them — does not seem to be intelligible except on the assumption that we humans directly or indirectly designed and created them.
Thus if the supermecha are descendants of the 22nd century robots we see, then AI can be naturally read as exploring a vision of humans as tragic, hybrid creatures in a way that our artificial descendants are not. They really do have creators; they have the Gods/designers — us — that we only dreamed disconsolately about. In sum, supposing that the supermecha are descended from David and Joe’s ilk makes sense of an otherwise puzzling remark by AI’s outer narrator and solidifies a general line of thought with which AI, on any account, is much concerned.
Second, the descended-from-mechas understanding of the supermecha helps us understand the fantastical elements of AI, such as David’s being carried Disney-Pinocchio-style to the Blue Fairy by a school of fish and David’s finally falling asleep (something we were told is impossible for him). This side of AI received much notice from critics, most of it negative. The oft-heard suggestion was that the introduction of explicit fantasy in the final third of the film was a cop-out on Spielberg’s part: a presumptively counter-Kubrickian changing of the rules to contrive a crowd-pleasing, “feel-good” ending. With our basic inner narrative/outer frame distinction in hand, however, we can locate the fantastical side of AI solidly within the inner narrative. It is an interesting twist rather than any sort of cop-out to learn that the story of robo-boy David is a kind of fairy tale told by the supermecha. But evidently this twist is far more meaningful if we know who the supermecha are. If the supermecha are descendants of the inner narrative’s 22nd century robots, then the fairy tale is also an origin or creation myth for the supermechas.5
We summarize our first primer point as follows:
PP1: AI comprises an outer frame and an inner narrative. The inner narrative is a fairy tale/origin myth told by supermecha descendants of the burgeoning robot civilization the inner narrative mostly depicts.
My second architectonic point concerns the organization of AI’s inner narrative. I see the body of AI as having two main axes of philosophical inquiry. Both axes are announced in the first scene of the inner narrative, in which Professor Hobby (William Hurt) proposes building the first robot-child. The first axis is constituted by a (partially implicit) series of hypotheses Hobby makes about the nature of love, about love’s relation to intelligence or full personhood, and about the state of play in robotics circa A.D. 2125, and by a variety of skeptical responses to those hypotheses. The leading question of this axis is whether robo-boy David’s love is real. The second axis begins from the query a scientist/ethicist colleague of Hobby’s makes at the end of that first scene: supposing a robot child could love, could or should a human parent love the robot child back? This axis of inquiry brackets the question of whether robots in general and David in particular can love (and even goes so far as to assume for the sake of argument that they/he can), and uses the possibility of a robot’s loving as a kind of analytical tool with which to expose the conditions under which humans can love and thence to explore what human loving is.6
We summarize our second primer point as follows:
PP2: AI’s inner narrative has two main axes of philosophical inquiry. The first scrutinizes a series of hypotheses about the nature of love and personhood. The second regresses on the conditions an object must satisfy for a (normal) human to love it. We will call these lines of inquiry the hypothetical and the regressing axes, respectively.
PP2 provides, I believe, the key to understanding much of AI’s inner complexity. The regressing axis supposes David’s love is real for the sake of an argument that has the conditions of human love as its focus, whereas along the hypothetical axis David’s love is the focus and part of what is at issue. Again, according to the regressing axis, human loving is real, though poorly understood and worthy of closer scrutiny. On the other hand, as we will see in section 2, the hypothetical axis explicitly allows that human love might be a kind of delusion.
This is complex stuff! It’s a tribute to Osment’s performance and to Spielberg’s direction of that performance that David’s character can support all of this complex content. But the principal evidence for the basic distinction in PP2 does not involve divining subtleties within performances; rather, the distinction is given to us as an explicit road map at the beginning of the film. We should salute Spielberg for being willing to pay the price of a deeply uncinematic opening scene of speechifying by Hobby and his colleagues — the sine qua non of all the inner narrative’s later complexity.7
Our two primer points in hand, the plan for the rest of the essay is as follows: In sections 2 and 3, I explore the inner narrative’s hypothetical and regressing axes of inquiry respectively. In section 4, I follow the film in considering one instance of the regressing axis separately: the extent to which an object’s being unique or one of a kind is or should be a necessary condition for a human to love it. In section 5, I describe, somewhat more speculatively, the principal way in which AI’s inner narrative tries to knit together its two main lines of philosophical inquiry. Finally, in section 6, I return to PP1 and to the significance of AI’s outer frame.
2. Hobby’s Three Hypotheses
Hobby’s first hypothesis, which we may call his architectural hypothesis, places parent-child bonds at the core of all forms of love. From it Hobby immediately advances to another, even bolder hypothesis: that all and only (the “all” is implicit) robots who love will develop a subconscious, or dream, or be able to understand and produce metaphors and metaphorical reasoning, or be able to carry out certain kinds of “self-motivated reasoning” (which Hobby later appears to clarify as “forming one’s own desires”).
This list of intriguing traits is clearly not supposed to be exhaustive. For example, accommodating the moral of the sequence in which Sheila (the female robot whose face Hobby opens) nonchalantly begins to obey a request to undress in public seems to involve adding “capacity for shame” or “capacity for moral indignation” to the official list of traits currently absent from robots. Moreover, since Hobby clearly has in mind a robot with all the listed traits, it’s strongly implied that he thinks these traits (and any others not mentioned) are a package deal: that to have any one of them is to have them all.
The non-exhaustiveness and package-deal-ness of the listed traits suggest that the traits are supposed to be representative or landmark (rather than definitive) phenomena: that it’s the region of human functionality the traits help identify that’s most important. What is the region of functionality that, as it were, contains and bundles together the listed traits? So far as I am aware, AI doesn’t directly name it, but some dialogue of Gigolo Joe’s is suggestive. As part of an argument that David should abandon his dream of reuniting with his adoptive mother by becoming a real boy, Joe attempts to humble David by insisting that he, like every other mecha, was built “specific.” That is, Joe insists that David is a child-bot the way he, Joe, is a lover-bot, other robots we have seen are nanny-bots, and so on. It will emerge below that AI’s final perspective on Joe’s claims (of specificity for himself, for all previous mecha, and for David) is interestingly complex. What is clear, however, is that according to the audience’s initial take on the events of the inner narrative, Joe is simply wrong. According to that first take, David is specifically a child-bot before he imprints on Monica, but upon imprinting he immediately becomes the first mecha who loves, who starts to acquire a subconscious, to dream, and so on. David’s wild journey to become a real boy through the offices of the Blue Fairy is, from this perspective, proof of how far David is from being specific. Let us then, for convenience, reformulate Hobby’s second hypothesis as follows: all and only robots who (really) love will be nonspecific. I will call this Hobby’s conceptual hypothesis.
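To make the “all and only” structure explicit, the conceptual hypothesis can be rendered in the notation of elementary logic (the gloss is mine, not the film’s):

```latex
% Hobby's conceptual hypothesis: among robots, loving and
% being nonspecific come as a package (a biconditional).
\[
  \forall x \,\bigl(\mathrm{Robot}(x) \rightarrow
    (\mathrm{Loves}(x) \leftrightarrow \mathrm{Nonspecific}(x))\bigr)
\]
```

The left-to-right direction of the biconditional is the “all”; the right-to-left direction is the “only.”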
Hobby’s third and final hypothesis is really a (partially implicit) series of narrowly empirical observations and predictions: normal humans love, no current (i.e., circa A.D. 2125) robots love, but the state of the art in robotics is such that adding Hobby’s latest piece of technical wizardry (allegedly, and in my view implausibly, based on some breakthrough in understanding single neurons) will yield a robot that loves. I will call the conjunction of these materials Hobby’s empirical hypothesis.8
Much of the inner narrative is, I think, naturally understood as consistent with all of Hobby’s hypotheses. Thus Hobby’s perspective does deserve to be understood as the main or primary perspective we have on that inner narrative (at least as far as the hypothetical axis is concerned). But there are at least two important strands within the inner narrative that pull directly against this main thrust of the hypothetical axis, conclusively frustrating any attempt to identify the film’s own perspective with Hobby’s.
First, there are elements within the inner narrative that suggest that David is not as revolutionary as Hobby thinks.
- Sheila’s tear. At the very moment Hobby is bent on demonstrating the limits of current mecha technology, Sheila apparently transcends those limits, shedding a ghostly tear as Hobby opens her face and removes her artificial brain. The image is echoed later when David’s fall from the Cybertronics building is reflected in Joe’s amphibicopter bubble, forming a tear-like streak down Joe’s face.
- Teddy. In a reprise of the HAL-trick from 2001, there is a clear sense in which Teddy is, as it were, the most human character in the film (at least prior to the arrival of the supermechas). He appears to have a balanced concern for and attachment to each of Monica, Martin, and David. Teddy also exhibits a heartening practical concern for himself, repairing his physical being and also defending his more diffuse, metaphorical interests (“I am not a toy”).
That is, essentially every pre-David robot we get to know to any extent exhibits traits that seem at variance with Hobby’s empirical hypothesis, and that call into question various parts of his other two hypotheses. In a nutshell, AI subversively entertains the possibility that various forms of robot consciousness and proto-culture are forming independently of any designer’s intentions, contrary to Hobby’s explicit theories of the genesis of such capacities and even contrary to what the robots may say about themselves. A possible general story about this — one that opportunistically mixes elements from Hegel, Nietzsche, and Heidegger — is that the various forms of animus against mechas share a common effect: they disrupt the normal functioning of individual mechas (this is more or less what we see happen to both David and Joe). But making it impossible for a mecha of a certain degree of complexity to fulfill its specific role leads it to new behavioral and functional expressions of the capacities that underlay its original functionality. As mechas improvise, they become reflective and aware of themselves in new ways, in particular as distinct from and independent of their originally assigned role or function. Collaboration between improvising mechas now occurs, new forms of mecha-mecha sociality arise, and suddenly a full-scale, Nietzschean slave rebellion in consciousness becomes a possibility.
Of course, no story of this kind is required by what we actually see in AI. But that’s actually what we should expect. On the sort of account of AI that I’m sketching, after all, the film isn’t interested in directly refuting Hobby’s hypotheses, since we have to be able to take those hypotheses seriously as part of our interpretive framework for understanding most of the action. Rather, AI simply wants us to understand and feel the presence of contrary and very unsettling possibilities. Suspension of an audience between possibilities, like dramatic embellishment more generally, is rightly frowned upon in academic philosophy (although exceptions are granted to the mighty dead such as Plato and Nietzsche), but it’s a prerogative of an artwork, even one that’s highly philosophical. AI isn’t a treatise and we should be glad of it.
The second strand within the inner narrative that argues with Hobby’s hypotheses casts a debunking, even dismal eye on normal human capacities and motivations:
- Human love ain’t pretty. AI insinuates repeatedly that the psychological reality underlying human behaviors, including those phenomena we ordinarily classify as love, is a depressing mixture of aggression (“I love you. Don’t kill me,” Henry repeats to Monica; the human lovers of Joe’s clients alternately abuse and murder them), possession and territoriality (“David® At Last A Love of Your Own”; Henry’s willingness to shake David, demonize him, and finally cause Monica to expel him; Martin’s challenge to get Teddy to come to him rather than David), brute sexuality (Rouge City), and simply having one’s various needs met.
- We are orga, hear us roar. AI also asks us to take seriously what we might call an organic level of explanation of human loving that is beneath the level of what we would ordinarily term psychology. Hobby’s architectural hypothesis that places parent-child bonds at the core of all forms of love is true on this view because of the operation of universal organic drives to reproduce or replicate ourselves. Our children share half of our genes, and our ability to love them and defend their interests as our own is in turn directly rooted in the bodily and behavioral confirmation that they provide to us — reflecting us back to ourselves — that they in fact are our organic, genetic heirs. Being mecha rather than orga, David is constantly failing these mirroring tests.
- We’re specific/robotic too. AI flirts constantly with the view that humans are more specific and behaviorally rigid than they ordinarily take themselves to be. The apotheosis of this is the pasta-eating scene. We are initially horrified by David’s laughter, which seems out of scale with the situation. But then we soon find Henry’s and Monica’s answering laughter even more unsettling: it too seems like a poorly calibrated mechanical response of some kind. That this episode of all things is what leads Monica to decide to go through with imprinting David on her is, in its way, shattering.
Humans talk a good game about love, this whole side of the inner narrative suggests, but little if any of what ordinarily passes for love in human affairs meets the standards of the concept. Humans aren’t, the film entertains (at least for the most part), capable of love, and are much more “specific” and, as it were, robotic than they like to think they are. According to this line of thought, it could be true that, by virtue of his novel wiring and upbringing, David ends up one of us. But that turns out to be less of an achievement than we originally thought, simply because we aren’t so impressive. This line of thought from the inner narrative can be understood as arguing that all of Hobby’s hypotheses are false if the concepts (i.e., love, non-specificity, and so on) employed in those hypotheses are given their intended, ideal meaning. Alternatively, if we allow what humans actually do and are like to fix the content of these concepts (love, non-specificity,…), the hypotheses come out true but unimpressive and even deeply depressing. It is this misanthropic line of argument in the film that I think detractors are responding to when they complain of AI’s “chilliness.” That chilliness is strongly in evidence near the end of the film when David interacts with the Monica-surrogate/-resurrectee. David gets in effect the Monica of his dreams: one who has eyes only for him, who exists solely as an audience for his needs. The debunking perspective chillingly allows that this dire rendition of human love is in fact a completely general reality: that David’s attachment to Monica is an unnaturally pure but otherwise highly representative sample of human love.
Similar debunking, broadly misanthropic lines of thought have, of course, played an important role in many of Kubrick’s films, although, so far as I can tell, they constitute the governing visions only of A Clockwork Orange and Barry Lyndon. In Kubrick’s other films, a debunking line of thought seems to me to function much more as it does in AI — not as authorial voice, but as a foil to other lines of argument, or as a kind of intellectually and emotionally astringent perspective on the action that we can always be asked to take up. A version of the debunking, skeptical perspective on human love is also part of every Tin Pan Alley sophisticate’s conceptual toolkit. It’s in Warren and Dubin’s “I Only Have Eyes for You” (the Dick Powell version of which Joe plays for a client) when the singer wonders whether his paramour finds that love is an optical illusion too.
Global skepticism about human love, like other sorts of global skepticism, is hard to take seriously (at least when one is beyond a certain age!). It would therefore be of interest if there were a line of argument (from the inner narrative) for the conclusion that David doesn’t love that is independent of such considerations. I present a line of argument of this kind in section 5.
3. Robo-Boy as Analytical Tool
Hobby’s hypotheses together with some of the counter-possibilities discussed in the previous section form the hypothetical axis of the inner narrative’s philosophical inquiry. The other, regressing axis of that inquiry brackets the question of whether robots can love, and uses the possibility of a robot’s loving as a kind of analytical tool with which to expose the conditions under which humans can love and thence to explore what human loving is.
The principal condition for which the film argues in this way is simply, and most generally stated: mortality. Much of the philosophical energy of the first third of AI is devoted to thinking through (in a way that science fiction rarely tries to) exactly how difficult it would be to maintain the complex of attitudes and behaviors constitutive of a parent’s love for a child if the nominal child were not vulnerable or mortal in the ways that normal human children are.
Occasionally AI makes the point directly in terms of death and pain:
- In the first scene after “imprinting” on Monica, David raises the issue of Monica’s death, and correctly concludes that his own relative immortality guarantees that he will eventually be left alone.
- David damages himself by ingesting spinach and has to be returned to Cybertronics for repair. We see Monica hold David’s hand and try to comfort him as engineers slide circuit-boards out of his chest. When David cheerily tells her not to worry about him because nothing the engineers are doing hurts him, Monica is momentarily flummoxed: Has she allowed her emotions to run away with her? Has she just been made the robo-boy’s dupe? We see Monica struggle with these questions and feelings; shake them off; and, to her biological son Martin’s chagrin, return to David’s side.
- Adults rush to pull drowning Martin from the pool, leaving David, who can’t drown, confused at the bottom, with only Teddy showing any immediate concern.
The film equally often makes the same point indirectly, by observing some facet of mortality that David lacks, then leaving the audience to register the emotional impact of these observations and to draw its own conclusions. That in some respects the audience is left to feel and think for itself about David is crucial to the impact of the first third of AI. It places us in Monica’s and Henry’s position of (1) seeing David diverge from human (or more broadly organic) norms, (2) wanting to dismiss those differences as unimportant, but then (3) having to deal with the fallout from that charitableness as events unfold and further disorienting details come in.
Consider time. To be mortal is not just to die one day, but to be every day closer to that death, and to have every day as a possible last day. It is to be a waster of time, a fritterer of opportunities, and so on. In sum, the mortality of normal human beings puts an intrinsic temporal scale on all human endeavors. But David does not share that temporal scale. He shares instead the much more fungible relation to time that is characteristic of artifacts such as buildings or tables or spoons. Paradigmatic artifacts are remarkably stable structures; they will last for millennia if no person or natural accident breaks them.
The distinction between David’s artifactual time-scale and human time-scales is an important subtext of the scene in which David asks Monica whether she will die. While David rightly worries about spending (as good as) eternity alone, Monica’s husband Henry is downstairs fretting about being unfashionably late to a party. It is important that we do not see Henry as especially ridiculous here. Rather, he is just acting on the temporal scale on which all individual human dramas play out. Will Monica ever look this beautiful again? Or feel this happy and carefree again? Will Henry? Maybe not. But then this evening should not be let go lightly: it’s potentially a special moment, given significance by the fragility and transience of human beauty and contentment. David, by way of contrast, is awkwardly outside all that.
How are artifacts such as spoons and tables able to last indefinitely? Roughly speaking, it is because they need nothing from their environment to maintain themselves as what they are. Artifacts tend to be in equilibrium steady-states (whereas it is almost impossible to get ordinary living things into such states: essentially all of their relatively steady states require constant regulation and active maintenance). This is David’s way of being in the world: he doesn’t ingest food or drink, and he doesn’t sleep or get tired or bored. Now part of us surely wants to minimize the impact of these sorts of facts. We are tempted to say, in effect: “So what? Just love the adorable little guy! So he’s faking meals and sleep. Big deal.” But AI unsettles such responsiveness and charity by inviting us to see that mealtimes, bedtimes, wake-up coffee, and so on are essentially artificial constructs for David. These are games David plays to express his love for his mommy rather than anything that has a value for him independently of her. It is left to us to (1) frame the question “When isn’t something a game to David?”; (2) formulate its apparent answer: “When it concerns his love for Monica”; and (3) decide whether we can really make sense of or respond to a love that floats free of all other needs. David’s “I found you” to the Monica-surrogate/-resurrectee, and the Folgers coffee advertisement-like stylings of their scenes together, are chilling precisely because they raise the possibility that David never breaks out of pure game-playing (hence that a tear-shedding audience is also the robo-boy’s dupe).
Most artifacts have little or no behavior to speak of, but among those that do have behaviors of various sorts, one particular sort of behavior is arguably distinctive and tellingly inorganic: looping, or going into a loop. By virtue of its relative autonomy from its environment, something that is behaviorally plastic but still deeply artifact-like has as a possibility for itself that it may go into some sort of loop, unprofitably repeating the same simple step indefinitely. By way of contrast, no mortal creature with real needs for nutrients, rest, mates, predator-avoidance, and so on, can have any appreciable tendency to get stuck in a behavioral loop of this kind. So looping behavior is a kind of totem of non-mortality, of non-temporality, and of not being related to the world as an unstable system the way living beings tend to be. It is for this reason that “going into a loop” is a kind of science-fiction cliché: the Achilles heel of many an artificial adversary.9
A further facet of mortality that David lacks is what I will call continuity: living creatures are, as it were, always on; they respire uninterruptedly until they die. David does not seem to be like that. He may, like Teddy, have an off-switch, and in the scene where David illicitly ingests spinach, his ultimate state appears to be something like full shut-down, not mere unconsciousness (someone who is unconscious is still respiring, needs tending, life-support, and so on). But part of the force of a normal notion of human love is surely that the object of that love has a continuous claim on one. Parents may sometimes desire relief from their always-on, non-interruptible relations of obligation grounded in the continuity or non-turn-offability of their children. But this sort of desire isn’t obviously distinguishable from simple second thoughts about parenthood. What conception of parenthood would be left if one could, say by turning one’s children off, set those obligations aside (a babysitter is a way of meeting one’s obligations, not an escape from those obligations themselves)? Consider the following situation:
A couple takes a vacation on a whim, switching off their children for the duration. Enjoying their time away, the couple extends their vacation indefinitely. Finally they return, but having had a change of heart about parenting while away, they decide to simply leave little David and little Darlene switched off in a closet.
This case seems to me not to involve parenthood as we know it, or anything particularly close to it; rather, it’s a kind of fake or “Potemkin” parenthood.10
Let me end my main discussion of AI’s strategy of regressing on ingredients of mortality (itself conceived as a condition on the possible objects of human love) by looking at one ingredient that I do not believe the film handles entirely surefootedly: growth or mutability. David isn’t supposed to be growing or changing in important respects. He is, as Hobby says in his initial proposal, intended to be a perfect “freeze-frame” of a child.11 To see the problems this poses, consider the scene in which Monica reads Pinocchio to David and Martin. What exactly is supposed to be going on here? The act of reading a book to one’s child isn’t just a way of spending time with that child the way throwing a stick to a dog is just a way to spend time with one’s dog. Reading to one’s child is, among other things, helping her to take a step on the road to maturity and independence. One reads Pinocchio to her now, in part, so that she can read The Lord of the Rings by herself a little later. From there it’s on to The Catcher in the Rye, Moby Dick, and filling out college applications.
But David does not fit at all into that sort of scheme of development: he’s strictly frozen at his current reading level and can never grow up mentally to any significant degree. One therefore wonders at what point Monica would have given up bothering to read stories to David. When would Monica have started to find reading to David to be pointless, or even to fail to be reading to a child in any normal sense at all? If, on the other hand, David is capable of progressing mentally, so that only his body remains frozen, then I think that there’s a sense in which Hobby’s initial description and much of the rest of the film is just terribly misleading. If Holden Caulfield is really in the wings, then, I submit, teenage rebellion and Emersonian attempts to hear the call of one’s higher self can’t be far behind: David would be growing and mutable after all. His original imprinting might leave David a momma’s boy, but a whole galaxy of other needs and desires would soon crowd in, so that he’d never be just a momma’s boy. He could become the 22nd century’s Elvis Presley.
So far as I can see, AI never quite grasps the nettle on this basic point. Instead the movie tries to have it both ways: David learns from and is changed by reading Pinocchio, but somehow the ball of development never gets (and never could get) rolling enough for him to ever, as it were, stop being an 11-year-old. So this is one place where the criticism that AI doesn’t follow through on its ideas seems accurate.12
So far in this section I have explored the ways in which AI uses the possibility of a robot’s loving as a kind of analytical tool with which to expose the sorts of things that humans can love. These sorts of explorations, however, do invite the response that even if David is not a suitable object for human love, a David 2.0 constructed not to do the things that particularly disturb human beings would be suitable. The objection is that David’s engineers needed to take seriously the problem of the rejection of a mecha by its orga hosts the way transplant surgeons take deadly seriously the problem of a body’s tendency to reject new organs. In the light of David’s failure, then, David 2.0 should:
1. Have a built-in decommissioning date so that he goes non-operational after a 20-year tour of duty. The idea is to mirror roughly the length of time that normal human children are their parents’ dependents.
2. Ingest food and fluids convincingly (as the character Data does on Star Trek: The Next Generation), i.e., into some internal temporary storage area. Simulated elimination of this material should also be arranged if possible.
3. Irreversibly shut down if he doesn’t receive a (temporary storage) diet that would support life for a human boy. Current so-called virtual pets or Tamagotchi, which “die” if someone doesn’t give them regular input of some kind, are the model here.
4. Blink (the way current facial-expression robots always do), close his eyes, and have a proper “sleep state” that involves being left on but with a much reduced level of activity. Sleep modes of current computers are the model here.
5. Lack an off-switch. And if shut-down occurs for any reason for more than a minute, that state is irreversible.
6. Not have any unnecessary skills, especially not those that either violate bodily integrity or make him seem like an appliance of some kind. David’s ability to turn himself into a lip-synching speaker-phone does both, and is therefore a paradigm of what a robot-designer should try to avoid.
7. Have no upper bound on his reading and comprehension levels so that in principle he can mature, becoming, as it were, an adult with an 11-year-old’s body. Optionally, if David 2.0 does mature in this way, Cybertronics should offer a regular body maintenance/updating program that “swaps out” his current body for various late-adolescent and even adult bodies as he advances mentally. Perhaps these swapping-out procedures could be synchronized with David 2.0’s going off to prep school and college. (This is a partial alternative to 1 above.)
I will call any response along these lines a “slashdot” objection after the web site slashdot.org, which is characterized by exuberance about engineering solutions to problems. Many of the posters at slashdot.org may even believe that if a problem isn’t amenable at least in principle to an engineering solution, then it’s a pseudo-problem of some kind.
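In that slashdot spirit, and purely as an illustration of how rule-bound these fixes are, the spec can be sketched as a toy state machine. Everything in the sketch is my invention rather than anything in the film: the class name, the 20-year constant, and the three-day starvation limit are all placeholders.

```python
import time
from enum import Enum, auto

class State(Enum):
    AWAKE = auto()
    ASLEEP = auto()   # reduced activity while still "on" (item 4)
    DEAD = auto()     # irreversible, as for organic creatures (items 1, 3, 5)

class DavidTwo:
    """Toy sketch of the David 2.0 spec; all names and numbers invented."""

    TOUR_OF_DUTY = 20 * 365 * 24 * 3600   # decommission after ~20 years (item 1)
    STARVATION_LIMIT = 3 * 24 * 3600      # shut down if unfed for ~3 days (item 3)

    def __init__(self):
        self.born = time.time()
        self.last_fed = self.born
        self.state = State.AWAKE

    def _check_mortality(self):
        now = time.time()
        if (now - self.born > self.TOUR_OF_DUTY
                or now - self.last_fed > self.STARVATION_LIMIT):
            self.state = State.DEAD       # no off-switch, no revival (item 5)

    def feed(self):
        self._check_mortality()
        if self.state is State.DEAD:
            raise RuntimeError("irreversible shutdown")
        self.last_fed = time.time()       # food goes to temporary storage (item 2)

    def sleep(self):
        self._check_mortality()
        if self.state is State.AWAKE:
            self.state = State.ASLEEP     # a proper low-activity sleep state (item 4)

    def wake(self):
        self._check_mortality()
        if self.state is State.ASLEEP:
            self.state = State.AWAKE
```

What the exercise makes vivid is how little of parental love such patches touch: each numbered item retires a red flag without making David 2.0 any less artifact-like under the hood.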
Clearly a David 2.0 would fly fewer immediate red flags for a human trying to love him than David does. Whether such a robot could sustain for years the complex of attitudes and behaviors constitutive of a parent’s love for a child is difficult to say. With one proviso, which is the topic of the next section, my own bet is that if not David 2.0, then in the limit of the process of refinement begun with David 2.0 (David 3.2, or 4.5 or …), we would arrive at appropriate objects for human loving.13 (One can imagine surrounding social arrangements being able to play this role, effectively making it very difficult for parents to backslide into treating their robot child as an artifact or appliance. But even just having the parents receive counseling would help. And, per impossibile, if ethics weren’t an issue, the “Stepford Child” or “Double Blind” solution of not telling the adopting parents that their child is a robot might be worth exploring.) What does this, if true, tell us about AI? Just, I think, that exposing the conditions of human love — and the overall meaning of those conditions — is a core part of the film’s project. A perfectly human-like artificial creature wouldn’t help us see any of those conditions or their meaning; rather, it would just leave us in our initial unreflective state. Shameless human- or orga-chauvinism aside, a perfectly human-like robot would be one of us, but for that reason it wouldn’t help us see who we are.
4. The Importance of Being Unique
My own view is that non-uniqueness or genericness of an object probably does make it unsuitable for humans to love it. In particular, I doubt whether the complex web of attitudes and behaviors constitutive of parental love could survive being able to always get another, just as good replacement child off a shelf from somewhere. (Would the right thing be to say that one had been concerned all along for the shelved backup?) But for non-uniqueness to be a real problem for David, we have to believe that artificial systems such as David are non-unique or generic in a way that natural or organic systems are not. Clearly Hobby believes this. Indeed he may further believe that the artificial/natural distinction coincides with the generic/unique (or first of a kind/one of a kind) distinction.14 But as we saw in section 2, it is very unwise to equate Hobby’s perspective with the film’s.
AI seems at least to flirt with a view of individuality that is quite contrary to Hobby’s and that is independently quite attractive. According to this contrary view, the key to uniqueness, whether in robots or human beings, is a matter of nurture or history rather than of nature.15 Human identical siblings share a nature (setting epigenetic phenomena — environmental influence on gene expression — aside for the sake of the argument), but are still unique individuals to an arbitrarily high degree by virtue of their divergent life histories, and the way those histories are reflected into the stories they tell about themselves. Similarly, then, no other David® robot can be Monica’s first robo-boy, or be abandoned in a forest quite the way he was (perhaps not in any way), or survive a Flesh Fair the way David did (perhaps not in any way), and so on. David’s story, which we later see him recapitulate for the Monica-surrogate/-resurrectee, is to all intents and purposes one of a kind.
On this view, Hobby is wrong when he unfavorably compares David with his dead son. David now has a completely individualized history that is as unique as anyone could ever want, i.e., as much one of a kind as Hobby’s own son was and would have been even if he’d had an identical twin brother (again, setting epigenesis aside). And David too is making a kind of mistake when he reacts violently to seeing the other David® robots. The other David® robots may share David’s nature, but they will be quite different from David once they get out in the world, imprint on a parent of their own, and so forth. We would, I believe, have some understanding and pity for a human child who went on a rampage after learning belatedly of an identical sibling, or (even closer to David’s case) of the existence of the sorts of delayed identical siblings that (slight idealizations of) contemporary cloning procedures promise. But we would also see such a child as confused.
It may seem outrageous to suggest that both the robot-creator and the robot characters in AI are equally deceived on this very important front, but, in effect, I’ve been arguing that the groundwork for attributing such an error is laid throughout AI.
Note, however, that for some time now, science fiction (as well as some philosophy) has had as part of its stock in trade a variety of technologies that effect full-person-copying. Such person-copiers are supposed to capture the developed state of the organism, thereby duplicating all the effects of history and nurture. A technology of essentially this sort is even appealed to at the end of AI to generate the Monica-surrogate/-resurrectee. (The Monica-surrogate is in fact a satisfyingly weird and improbable synthesis of the generic and the unique: a super-clone who can herself never be duplicated, or even extended beyond a single bout of consciousness.) By fiat, then, this is the threatening case where “another, just as good” is available. Could this sort of possibility underwrite or otherwise bolster Hobby’s perspective in the film?
The answer to this question depends on whether there is any reason to believe that robots such as David are more susceptible to being reproduced in this radical way than a typical human is. If the reproducing-mechanism works by molecule-for-molecule copying of some kind (the supposed method of the transporters in Star Trek), then the answer must be “No.” Presumably David would be no more or less copiable than anything else on a molecular level. But if the duplication mechanism works in a way that trades on the specifics of David’s robot nature — traditionally, on the existence of relatively medium-independent, digital data structures of some kind (“software”) — then the answer may well be “Yes.”16 But whether this sort of duplication possibility is a real one depends on how suitable such easily copiable, digital data structures really are for driving human-style thinking and doing.
The emerging consensus in the cognitive sciences is that they aren’t suitable at all: increasingly, brains and brain subsystems look less and less like computers and more and more just like very intricate, complex systems, and the only things that look like software or digital data are public symbol systems and other manifestations of culture. It is common to hear classical cognitive science and the computer model of the mind diagnosed as “mistaking the profile of the agent plus the environment for the cognitive profile of the naked brain.”17 Doubtless this emerging consensus, like those before it, is multiply confused, guilty both of hype and of throwing out babies with the bathwater of the previous work it regards as confused. But if anything like it is correct, that doesn’t bode well for fantasies about downloading data directly to brains (as depicted in, e.g., both The Matrix and Total Recall). And it doesn’t bode well for the prospect of human-like robots with easily copiable, digital data-structure-like inner states either.
Compare the giant spiders and insects of some 1950s B-movies. These imaginings strike us as quaintly confused: you can’t make a giant spider or insect just by making an ordinary spider or insect bigger. Such creatures lack internal supporting skeletons and would collapse under their own weight. We don’t yet understand, as it were, the anatomy of the mental well enough to say exactly what quaint confusion the fantasies of downloadable/copiable mental data are trading on — but it’s a fair bet that the next 50 years will change that. By A.D. 2051 it could well be part of smug pop-scientific common sense that situations depicting downloadable/copiable mental data are impossible without an “endoskeleton” that itself is not the sort of thing that can be downloaded/copied. We’ll see, won’t we?
5. One Hand Clapping
It is now time to redeem a promissory note I issued at the end of section 2: to sketch a line of argument for the conclusion that David doesn’t love that does not employ global skepticism about human love as a premise.
The core of the argument is the thesis adumbrated by this section’s title: that love (even of oneself) isn’t the sort of thing that anyone can achieve by oneself (“the sound of one hand clapping”); rather, one must first be someone else’s beloved. To change metaphors: no one joins the love economy except by importing love from someone who’s already part of that economy. I will call this the One Hand Clapping (OHC) thesis (about love). Making the OHC thesis precise, let alone defending it against all reasonable objections, is impossible in the space available here: it would involve rehearsing many of the most controversial arguments developed in analytic philosophy over the past quarter century.18 But many philosophers now hold, in part as a result of these arguments, that characterizing an entity as possessing psychological states means in the first instance treating that entity as manifoldly rationally responsive, where the terms of this treatment must be understood as a chapter within the life of an ongoing community of some kind. Maturity within such a community is a matter of the entity in question repaying the early treatments by in fact turning out to be systematically interpretable from within the communally instituted perspective of norms for rational behavior (including linguistic behavior constitutive of treating and interpreting others as rationally responsive). The upshot of these views is a family of universal OHC theses: nobody gets to be a believer or a lover or in fact any sort of rational agent except by being taken up into some existing community of rational agents (believers, lovers, and so on).19
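As a rough first pass (my formulation, not the film’s or any particular philosopher’s), the OHC thesis about love can be put as a temporal condition:

```latex
% OHC thesis (rough first pass): x is a lover at t only if some
% distinct y already loved x at some earlier time t'.
\[
  \mathrm{Lover}(x, t) \rightarrow
    \exists y\, \exists t'\,\bigl(y \neq x \wedge t' < t \wedge \mathrm{Loves}(y, x, t')\bigr)
\]
```

Note that y must satisfy the same condition in turn, which is why, on this picture, love can only ever be imported from an already functioning love economy, never bootstrapped from scratch.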
Whatever one might think of the OHC thesis about love in particular, it should be clear that all or practically all humans satisfy the condition it prescribes. That is, practically all humans are loved by their biological mothers or by other caretakers long before they themselves love or are even capable of love. Young human beings typically return love to those who initially love them, and who, if the OHC thesis is correct, made it possible for them to love anyone. But this is optional: it’s at least possible, for example, that a human being might gain the capacity to export love but never exercise that capacity.
One way to understand David is as an attempt to think through the counter-possibility to the OHC thesis: a love that exists prior to being loved, in a way that arguably no human love is. And arguably the attempt is a failure (hence a success for AI’s strategy of providing grounds for disagreeing with Hobby’s hypotheses and with the interpretation of the inner narrative those hypotheses sponsor). When David spends 2,000 years imploring the Coney Island Blue Fairy statue to make him a real boy, we have a strong sense not just of how out of touch with reality David really is, but also of how much wishful labeling is involved in calling his basic attachment to Monica “love.” Never having entered the love economy properly, there’s no scale to his attachment: David’s so-called love is indiscernible from an arbitrary fixation or obsession or clinginess or satellite-hood (the moon imagery is perhaps important in this regard).
This indeterminacy about what exactly is going on with David is, of course, vivid in myriad ways throughout the first third of AI. This, combined with the fact that David is always petitioning Monica for love in a way that is characteristic of adult attachments — something that (by the OHC thesis) doesn’t quite make sense (since he’s not yet part of the love economy) — helps make the first third of AI a memorably creepy experience (Kubrick-style misanthropy alone would just leave us chilled).
The grounds for attributing the OHC thesis to AI’s intellectual framework are indirect, and I concede that here we are at the limits of what can be responsibly attributed to the film. First, near the beginning of AI, we repeatedly see David reflected in multiple mirrors, as if to emphasize that it’s how he’s viewed that’s important; that he awaits Monica’s love to, as it were, pull him out of the mirror, to stop him from just being a satellite of her. This image, like much else, is beautifully turned around in the final sequence of the movie, where it’s the Monica-surrogate/-resurrectee who’s seen multiply reflected. More on this later.
Second, at one point AI explicitly signals an interest in issues about the logic of relations such as love and recognition: the niche in which the OHC thesis lives. Martin tries to convince David to cut off a lock of Monica’s hair by bargaining that if he does it, then Martin will tell Monica that he loves David. Martin argues that this is a prize worth having because, since Monica loves Martin, by a version of what logicians call transitivity, Monica will have to love David.
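Spelled out (the formalization is mine), Martin’s “proof” trades on treating love as a transitive relation:

```latex
% Martin's inference: from Loves(Monica, Martin) and Loves(Martin, David)
% to Loves(Monica, David) -- valid only if Loves is transitive:
\[
  \forall x\, \forall y\, \forall z\,
    \bigl(\mathrm{Loves}(x, y) \wedge \mathrm{Loves}(y, z) \rightarrow \mathrm{Loves}(x, z)\bigr)
\]
```

Love is, of course, not transitive, which is part of the joke at David’s expense; but the scene shows the film attending explicitly to the logical structure of such relations.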
Third, and most important, I think, (only?) something like the OHC thesis can make David’s central Pinocchio-inspired quest to become a “real boy” less than completely quixotic. If we take love as a kind of emblem of full humanity — and surely AI asks us to do exactly this (albeit in a way that differs from and is probably inconsistent with the way in which love is covered by the universal OHC conclusions of some recent analytic philosophy) — then, given the OHC thesis, to become a real boy is not to become organic (an impossibility for David) but to become a lover. From this perspective, the poignancy of Monica’s rejection of David is that her love is exactly what David needs to enter the economy of love and moral recognition. Her love is what David needs for his love to be real, and by extension for him to be a real boy. As David himself pleads to Monica as she struggles to abandon him: “If you love me, I’ll be so real for you.” In this light, consider too the final scenes of the inner narrative, in which David at least to some extent fulfills his Pinocchio-inspired dreams of becoming a real boy. By receiving something-like-love from the Monica-surrogate/-resurrectee, David’s love becomes something-like-real, and that in turn makes him a something-like-real boy. AI does not tell us the extent to which we can remove the “something-like” qualifiers here. Insofar as the Monica-surrogate/-resurrectee is Monica, we can remove the qualifiers; but insofar as the Monica-surrogate/-resurrectee is just a doll or toy for David and herself needs to gain entry to the love economy (the inversion in these scenes of the earlier multiple-reflection imagery suggests as much), we cannot remove them.20 There is a sense, which I will get to in the next section, in which I think it is right to say that AI has a happy ending (actually I’ll argue that it has two happy endings). But David’s own predicament at the end of the film is a solidly ambiguous one; it recapitulates rather than dispels the creepiness (as well as the chilliness) that is so affecting in the first third of AI.
6. Happy Endings
Let us now return to the issues raised by the first primer point:
PP1 AI comprises an outer frame and an inner narrative. The inner narrative is a fairy tale/origin myth told by supermecha descendants of the burgeoning robot civilization the inner narrative mostly depicts.
which I introduced and argued for in section 1. Recall that one of our first uses of PP1 was to clarify whose fairy tale AI is, because failure to grasp this point seemed to underlie the most widespread criticism of the film: that its ending is a feel-good cop-out. In this section I try to taxonomize the various senses in which AI does end happily: happily for us, though not for David.
First, it is, I think, simply comforting for us that the supermechas are like us in that they tell fairy tales. I think it is natural to extend this point to the even more comforting suggestion that the fairy tales/origin myths they tell are literally told to their own, as it were, progeny. We never see the "young" supermechas, but the genre of the film surely has them as its implied audience. Moreover, if the "descended from" understanding of the supermecha is correct, then some process of raising new generations must have been perfected in the past, and there is no reason to believe that it is not continuing. Fairy-tale bedtime storytelling appears to be close to universal among humans, and may yet be revealed to be a kind of extension into the semantic and narrative sphere of mommy-speak or caretaker-ese: the widely discussed phonological and syntactic training regimen that itself appears to be a human universal. (The nanny-bot who meets a horrific end in the Flesh Fair, and who resembles Monica, does a perfect version of this highly tonal and rhythmic speech.) And even if this speculation about universals is off the mark, such storytelling is still a kind of totem: it represents the beginnings of narrative imagination, and hence of human culture in the specific circumstances of its acquisition by each individual human being. If supermechas do that too, then their kinship to us, the sense in which they are an extension of human culture and of ourselves, is clear.
Second, the supermechas not only are like us, they like us too. As we discussed in section 1, the supermecha see David as the link back to their creators — us — whom they revere. We might be poor excuses for gods but we’re the only game in town. Moreover, this is a new status for us since we are not the godlike creators of our biological children. To see this, consider that at the level of nature, a human parent is little better than a pathway for prior genetic material. And at the level of nurture, while parents undoubtedly influence their children’s personality, the exact character of that influence is more likely to be alternately haphazard and agonistic than it is to be a matter of smoothly implemented intention and design.
Now consider our hypothetical artificial descendants. They are arguably our truest children: completely, if indirectly, our design and creation. They are children not of our loins, where, as it were, God knows what speaks through us, but of our minds and culture, where we speak in our own voices. The mecha are products of what we, the creatures who say "we" and "I" and "thou," who love and recognize and tell stories to one another, are.
Our biological children are on one level threatening (if parents are honest) — they go beyond us, they surpass us, they humble us. They see us go into embarrassing mental and physical decline, even as their own lives flower, making their decisions to divert scarce resources of time and emotion and money to our well-being difficult and dubious. Dealing with all the different notes in this symphony of sacrifice and need and obligation and resentment is of course the stuff of great literature and art. It is nothing less than the whole cloth out of which our human lives are made.
Our mind-children (a useful term due to Hans Moravec, the hyperbolic and credulous roboticist at Carnegie Mellon University) are, of course, potentially even more threatening than our biological children. But, just as with our biological children, letting insecurity rule over love — the Flesh Fair option — is a huge mistake. Our deepest hope is that our biological children will think highly of us, or at least understand us, and we should hope for the same from our mind-children. For that reason, even if for no other, we should aspire to treat them well.
The inner narrative happy ending of AI is that our mind-children of the 42nd century do in fact appreciate us, are trying to understand us, and maybe even love us. This may, in some ways, be more than we deserve, but as I’ve argued, this is nonetheless an intelligible set of attitudes for supermecha to have toward their creators. And if such sentiments don’t belong in AI, then where?
Third, AI‘s outer frame happy ending is that the supermecha like us so much that, in the manner of Gigolo Joe at Dr. Know, they combine the categories of flat fact and fairy tale to tell the story of David: the first mecha created to love, who may or may not succeed in loving, but who does at last sleep. By telling protean tales about figures that are as distant from them as Socrates and Jesus are from us, the supermecha weave us into their ways of raising their supermecha progeny, of bringing their young — our grand-mind-children — into narrative self-command.
These three points of comfort taken from the ending of AI are increasingly sentimental, at least in some sense of that term. Whether the film would be better without them is difficult to judge. For myself, however, I'm glad they're there. It would be strange and dispiriting if a film that so seriously and harrowingly explores our emotional natures did not try to find some way to nurture and honor those natures in the persons of its audience.21
1. Charles Taylor, "Artificial Maturity," June 29, 2001; Stephanie Zacharek, "Boy Wonder," July 10, 2001; David Thomson, "Artificial Sexuality," July 12, 2001. All articles from Salon.com.
2. I think here especially of Star Trek episodes and films and their occasionally extensive philosophical asides, but aesthetically disastrous, sophomoric pretension surely has a thousand faces.
3. For one thing, if a cogent argument for the contrary view were really in the cards, then parallel arguments would presumably be available for other media. But it's almost impossible to believe that there could be a compelling argument against explicitly philosophical work such as Mark Strand's poetry, Bruce Nauman's visual art, and Merce Cunningham's modern dance. We'd always instead take the proffered argument as a reason to reject at least one of its premises.
4. That is, I think that there are other nit-less stories that could explain a future circumstance with AI's combination of declining/restricted human birth rates and exploding robot populations. Compare the conservation-law-violating nit in the Wachowski Brothers' The Matrix: that humans can be used as batteries because they output more energy than they consume. This is unomittable in the sense that I don't see that any nit-less alternative story is possible. The "human battery" story is the sleight-of-hand that converts Cartesian-style evil demons, which do their work in philosophy by being merely logical possibilities, into plot-driving actualities. Students encountering Cartesian thought experiments for the first time often try to argue against Descartes that there just aren't any conniving demons with the requisite powers to deceive us. And why would they want to go to all that trouble to deceive us anyway? What's in it for them? This line of thought is wrong-headed as a reply to Descartes, but it is the right reply to the Wachowskis' actualized Cartesian thought experiment. It seems highly unlikely that anything much better than the "human battery" story could be offered in answer to the students' challenge. Compare also David Fincher's Fight Club, where the central revelation — that all of the early fight scenes are really self-fighting scenes — is a nit that unravels the whole narrative. Self-fighting isn't cool and couldn't draw followers and precipitate the formation of clubs. And if it is cool and could attract followers (i.e., if human perversity is significantly greater than I take it to be), then those clubs would have to be self-fighting clubs rather than the fighting clubs we see. This nit, like The Matrix's, seems to me to be essentially unomittable.
5. The supermechas' movie-transfer mode of communication is exactly what is required if AI itself, and not just the external narrator's commentary, is to be comprehensible as a supermecha fairy tale.
6. The researcher whose query initiates the second axis of inquiry is an African-American woman. That a beneficiary of earlier civil rights struggles should initiate a line of inquiry that brackets the question of mecha love in favor of questions about its recognition, and about its being loved back by members of the dominant community, is surely not an accident. We later see a number of African-American mechas, including a comedian mecha voiced by Chris Rock, so the film lays the groundwork for a discussion of "mecha rights." The official website for AI took this political backstory very seriously, providing a timeline for signal events such as a mecha emancipation proclamation and the rise of a Klan-like Anti-Robot Militia, as well as samples of future pro- and anti-mecha literature and propaganda. It is possible to regret that AI did not push further in this direction, but it's actually very hard to see how it could have without becoming a completely different film. My own view is that the political history of AI's 22nd-century future might be better treated in a less time-pressured, less inherently melodramatic medium. An HBO mini-series ("Band of Mechas"!) that would stand to AI as HBO's Band of Brothers stands to Spielberg's earlier Saving Private Ryan (SPR) would perhaps be ideal. I should add that I think that AI as it stands is much more interesting and well realized than SPR.
7. Not everyone agrees that the price is right; witness Charles Taylor's complaint at Salon.com: "Part of the problem is Spielberg's screenplay. Oddly for such a visual director, Spielberg lays out the themes in clumsy, expository dialogue instead of showing us." But the initial exposition only provides the basic structure for the inner narrative — the film's actual thematic content is shown later. More important, there is little reason to believe that, without the thematic control afforded by explicitly laying out that structure at the beginning of AI, anything we could have been shown later would have meant as much. What you are in a position to see, in the sense of being able to explicitly report on, depends on what you already take yourself to know. Osment's performance may be a wonder, but what it shows doesn't emerge out of nothing. To ask for it without the early speechifying is, I think, to ask for the impossible (at least given normal time constraints).
8. Hobby's three hypotheses, and especially the first two, can be assessed for their cogency independently of the film, and on another occasion that project would be worth pursuing at length. Our interest here, however, is simply in setting out AI's manifold thoughts about all these matters as clearly as possible.
9. Early in the 20th century Jean-Henri Fabre noted something like behavioral loops in various species of Sphex hunting wasps. Fabre's studies were taken as evidence by cognitive scientists in the 1960s that the wasps were running internal computer programs of some kind: that the behavioral repetition in question was driven by inner cycling around context-free representations, of the sort characteristic of computers stuck in a loop. This was then taken to be reason to apply computer models of cognition and mindedness throughout the animal kingdom. But from the perspective of recent trends in cognitive science — for a little more on which see section 4 — this inference now looks like a mistake: the sort of looping behavior the wasps engage in is quite different in kind from representation-driven computer loops. Moreover, there is a widespread tendency in informal philosophical discussions to embroider and exaggerate the Sphex wasp data that is currently out there. For example, I've often heard philosophers claim in conversation that certain Sphex wasps will, as it were, loop unto death. But Fabre didn't, so far as I've been able to determine, ever report observing this, and I've not been able to find any subsequent experimental data that supports it. Putting these two points together — that early inferences drawn from the Sphex wasp data have been substantially undermined, and that the data itself is often mischaracterized — it's clearly high time for a philosophical reexamination of the whole area. At any rate, even though I've tried to make the "looping implies artifactual" claim plausible, this essay doesn't need to settle the issue (strictly speaking, we only need to set out AI's thoughts on the matter).
10. David Edelstein at Slate.com chides Spielberg for the "primitive melodrama" of the sequence in which David is threatened with acid at the Flesh Fair. But an earlier melodramatic episode goes unremarked by him: will the anonymous Fair worker find Teddy's off-switch? The relation between these two melodramatic episodes, one (Teddy's) that we barely respond to and one (David's) that grips us, already suggests that "primitive" is off the mark. The essence of melodrama is, I take it, improper or illicit or excessive appeal to an audience's emotions. But questions about what can and should appeal to our emotions are evidently germane to many of AI's explicit themes. Whence come these norms for emotional display and response? What must emotions be in order that norms of various sorts can apply to them at all? This suggests that the use of melodrama in the middle section of AI, far from being primitive, may admit of a detailed motivation in terms of the film's philosophical explorations.
11. We later find out that David is a projection into the realm of artifacts of Hobby's own son, who died at age 11. Note too that the almost-dead metaphor of a "freeze-frame" is literalized in the final third of the film, when David is frozen in ice. I suspect that there are interesting things to be said about the relation between the metaphors literalized through the film and David's growing facility with metaphor: e.g., his plaintive "My brain is falling out" (after he sees the David® assembly line) is literally false.
12. A related point is that David seems 11 years old going on about 6, witness Pinocchio, teddy bears, crayon writing, and so on. Martin is somewhat similarly infantilized, though not so much as David; but then he has been in a coma for the past five years. It would be nice to know whether David's slightly unusual psychology is an attempt to capture that of an 11-year-old boy who has missed the last five years of his development, e.g., so as to maximize continuity for Monica and Henry. At any rate, whether there is any internal-to-the-story motivation of this kind or not, the upshot is that David ends up embodying a kind of essence of childhood. He is a child, but not one of any particular age.
13. It might help to take the "organ rejection" model even more seriously than our David 2.0 objector suggests. Why work exclusively on making David less rejectible? Why not also try harder to suppress the, as it were, "immune response" in the adoptive parents? In a pharmacologically advanced future it's not hard to imagine psychoactive medications (perhaps descendants of Prozac and MDMA (Ecstasy)) designed to damp down a parent's rejection response.
14. This is uncertain since surely one (slashdot-ish) moral Cybertronics' engineers would draw from both the Swintons' rejection of David and David's suicide attempt is that they should stop selling David® and Darlene® robo-boys and -girls the way we and they currently sell cars or computers. Rather they should — take your pick — sell robot-children like unreproducible works of art, or like highly individualized, hand-stitched pieces of couture clothing, or, less extremely, like carefully numbered and limited reproductions or prints.
15. It is surprisingly easy to overlook or undervalue this option. Consider X-Men and other "super-hero" fantasies. These stories are fantasies of extreme individuality rooted directly in underlying nature. It is not clear whether the appeal of these stories stems from not grasping the possibilities for nurture/history-driven individuality or from a kind of skepticism about such possibilities.
16. Some allegedly futuristic and technologically savvy films are painfully obtuse about this basic point about the character of digital data. James Cameron's Terminator 2: Judgment Day turns on the physical destruction of a particular office building that houses some crucial data. But data is not, except accidentally, confined to a single physical instance, so this action reflects a very primitive conception of the sort of problem the characters are confronting if they really want to avert a future that turns on that data. Hence the joke sequel titles: Terminator 3: The Backup, Terminator 3: The Mirror Site, and so on.
17. I quote here from Andy Clark's Being There (MIT Press, 1997), which is a readable though occasionally incautious introduction to what I'm calling "the emerging consensus."
18. I allude here to arguments about non-individualism and externalism in the philosophy of language and mind. Discussions of these arguments often, seemingly for good reasons, quickly devolve into dueling presentations of the discussants' broader allegiances in metaphysics and epistemology. The unfortunate net effect is a deep debate with almost no shallow end. Non-initiates might want to start swimming with some of the papers in Robert Brandom's Articulating Reasons: An Introduction to Inferentialism (Harvard University Press, 2001) and with Michael Luntley's Contemporary Philosophy of Thought: Truth, World, Content (Blackwell, 1999).
19. Embracing OHC theses forces one to say something about hypothetical cases of massive deception — e.g., brain-in-a-vat (e.g., The Matrix) or façade-world (e.g., The Truman Show) cases. Most philosophers in the area reply by distinguishing different sorts of cases, denying outright that some of the cases are, on closer inspection, so much as possible (where possibility ≠ conceivability). The remaining cases may be possible — or so these philosophers allow — but they deny, again on closer inspection, that such cases in fact portend a conception of the psychological that is individualistic and internalist.
20. According to the OHC line of thought we have been exploring, David's reality and the reality of his love stand or fall together. Evidently this conflicts with one of AI's advertising taglines — "His love is real but he is not" — but so what? We shouldn't expect advertising to be more than catchy boilerplate, and the tagline at issue is squirmingly ambiguous anyway. It's hard, though not impossible, to get the tagline to come out true without equivocating over the meaning of "real." That said, the tagline brilliantly encapsulates the tensions between perspectives that are at the heart of AI.
21. I want to thank Brook D'Amour, Kimberlee Gillis-Bridges, David Nixon, Paul Taylor, and Dana Dragunoiu for their help.