Bright Lights Film Journal

A Cinema of Satyagraha? A Client-Centered Approach to Film Spectatorship

El Topo

“In our devotion to realism-as-catharsis, we’ve become so obsessed with psychologizing fictional characters that we forget we are the ones who need humanizing.”

Beneath El Topo’s (1971) sprawled detritus of hippie psychedelics, spaghetti western affectation, Buddhist parables, Sufistic posturing, and egotist-auteurist proclamations lurks a grandly neoprimitive overhaul of the Epic of Gilgamesh. Just as Jodorowsky’s hubristic, messianic gunfighter stumbles into a Buddhist epiphany and, in the film’s second half, attempts a life of Christian meekness before ultimately succumbing to his old, murderous habits and extinguishing himself in a self-immolatory sacrifice, so does the Sumerian king, in the Akkadian version of the epic, embark on an escalating series of heroic murders only to overstep his demigodlike bounds, become chastened by the gods, and ultimately discover that the secret of immortality, even when revealed to him by Utnapishtim, is as ephemeral as life itself. The troublesome ending of Gilgamesh, in which a mischievous snake dines upon Gilgamesh’s flower of immortality while he obliviously bathes, is usually interpreted by scholars — I am thinking of Herbert Mason in particular — as a pedantic reminder that one must accept the mundane limitations of life and of oneself. Gilgamesh, unable to deliver Enkidu from an underworld inhabited by cruel monsters half-man, half-bird, is humbled into materiality and becomes a better king at the expense of Enkidu’s love and any intimations of immortality.

But the proposed analogy between the film and the Sumerian epic falters, of course, because near El Topo’s end Jodorowsky’s hero abandons his repentant, clownish ways as a mime to gun down sadistic townspeople for whom epiphanic humility is either meaningless or risible. Only after climactically stiffening into the generic role of the avenging gunfighter can he, in the film’s denouement, relax into the self-immolatory pose adopted by protestors of the Vietnam War. But we cannot know if his suicide is a Buddhistic act of forfeiture or an existentialistic act of autonomy, for the chaos of Jodorowsky’s vaunted, noncommittal eclecticism moots any difference.

Jodorowsky thus perverts and even negates Gilgamesh (a source he vaguely acknowledges in his companion text El Topo: A Book of the Film) by claiming that the violent avenger, once chastened by either fate or circumstance, can justifiably return to violence after forsaking violence’s self-interestedness. Jodorowsky’s mock-Buddhistic conclusion is the fraudulent product of a philosophical smorgasbord: he wants egoistically to satisfy himself with violent catharsis and then, in a composite act of auteurism and superegoism, shame himself for doing so. But why must the superegoistic development of the character demand a return to violence? Why must he repeat the process of self-deprecation in the film’s denouement? Why does El Topo lack faith in his initial humbling (at the hands of the fourth desert master, who selflessly commits suicide rather than egoistically duel with him)? If we believe Herbert Mason’s reading of Gilgamesh — in which even a raping demigod requires no second revelation to accept an ultimate humility — why should El Topo’s suicide, mandated by a temporary slip into violence, be preferable to his politically powerless life as a clown?

The obvious answer is that El Topo, for all its internationalist grotesquerie and faux pacifism, is still generically a Western, and is in thrall to the genre’s ruggedly individualistic solutions. More simply, it’s a movie, that sacred form wherein bloody retribution is less a character-driven choice than a culturally irresistible mandate. Even fashionable pretensions to Buddhism can’t holster the audience’s desire for bloodshed, for we lack the courage to civilly disobey the vengeful God of cinema, unloving as it is. Even those among us who withhold their love cannot avoid becoming contaminated. If the masses occasionally decline to buy tickets, they are just apathetic, not dissenters; if intellectuals abstain on elitist or moral grounds, they become nevertheless ensnared in the conspiracy when iconic images of shunned cinematic machismo return in newspaper ads, Times Square billboards, news reportage, and so forth. Short of hermitage, Oedipal blindness, or unironically adopting Plato’s moral arguments in Book X of The Republic, we all become poisoned.

Our love of violent retribution remains rooted in the mythology of catharsis, even though Aristotle’s fragmentary Poetics, in its present state, is more a series of notes than a finished, coherent thesis.1 This is not to say that contemporary audiences respond to pity, fear, and suffering in accordance with Aristotelian dogma or that they argue, as scholars had long done, whether the motives of katharsis (purgation) are didactic or merely aesthetic, and whether catharsis’ psychic effects are cumulative or evanescent. What matters today, rather, is that catharsis (which Aristotle himself left ill-defined) can be reduced conveniently to violent effects that emphasize suffering — for Aristotle the least important ingredient of tragedy2 — at the expense of the pathos of hamartia. The high calling of tragedy has been reduced to its symptom, violence, represented in a sadistic pleasure-principle — just as, in a more sociological way, contemporary American politics have supplanted coherent ideology with populist righteous indignation. We thus arrive at the ritual cleansing of the finale of Taxi Driver (1976), in which catharsis is redefined as the Roman Catholic penitence and suffering with which Scorsese is preoccupied. Recently I had a young Korean student of fifteen (mis)define catharsis as “sadism” — I am still unsure whether this was a common error in translation or a revealing cross-cultural insight.

The mock-cathartic ritual must be repeated, each time with diminishing effect; the ritual cleanses only the pity and fear created by the narrative itself, not the daily pities and fears we always carry with us. We hope in vain that each catharsis might erase some of life’s true fears, yet know that we are trapped in a futile and self-reflexive pattern. We are reminded of Constantin Constantius, the hero of Kierkegaard’s Repetition: “Just as [the ancient Greeks] taught that all knowledge is recollection, thus will modern philosophy teach that life itself is a repetition,”3 he opines in the novella’s opening. Who but a Kierkegaard hero could see circular powerlessness as the necessary key to existence?

Generic film narratives, obsessed with their ritualistic mandate for vengeance, cannot emancipate us any more than film heroes would show mercy to villains or themselves violate the equation between masculinity and violence. Admittedly, I here think of violence as a singular crime. As I grow older, I become less interested in philosophic or categorical distinctions between ethical and merely aesthetic violence, and more apt to see it as the indivisible noun it was for Tolstoy, Gandhi, Parshvanath, and all others who see in aggression a terminal adolescence or, as Isaac Asimov did, a failure of the imagination. Hannah Arendt’s short monograph On Violence, an attempt to axiomatically categorize varieties of social and individual might and coercion, is of little use to me. If, as Arendt suggests, “strength is individualistic and force is corporate,” and “[v]iolence . . . is distinguished by its instrumental character” and is “phenomenologically . . . closest to strength, since the implements of violence, like all other tools, are designed and used for the purpose of multiplying natural strength until, in the last stage of their development, they can substitute for it,”4 I can only nod blandly and rejoin with a jaded “So what?” Because “strength” can signify either a quality (in itself) or a quantity (of something else), it is of little use in measuring what may be an objective phenomenon that, once organized, naturally accumulates into force and measures only what is already forceful.

The pacifist-artists of the twentieth century, from Jean Renoir, René Clément, and Kon Ichikawa, to Erich Maria Remarque, W. Somerset Maugham, and W. H. Auden, to Frank Bridge, Karl Amadeus Hartmann, and Michael Tippett (imprisoned for conscientious objection during WWII), have not bothered with taxonomies either. Rejecting that Homeric ambiguity which discovers aching glory in Pyrrhic victories, such pacifists say more or less the same thing — their thesis shocks only for its obviousness. The shock, certainly, is experienced mainly by retrograde institutions and governments: Paths of Glory (1957) was banned in France for twenty years, and the Japanese government that had commissioned Britten’s Sinfonia da Requiem (1940) rejected the work because its Christian pacifism contradicted the empire’s imminent plans for war. America’s media culture is now ethically split down the middle, with the mainstream marching forward with ever greater paeans to generic militarism and the American left reduced to film festival environmentalism and niche sensitivities. As Hollywood produces far fewer films than it did in the ’40s and ’50s, each more expensive than the last, regression into violent genre types becomes an economic necessity.

When the gangster of the new American cinema (i.e., post-Scorsese) exchanged the Western’s pretended moralism for pretended urban realism, the resulting absence of pathos could yield only cooler style and craftsmanship, and ultimately the better technologies that buy them. The storytelling cult of Scorsese and his sycophants gave us no philosophers, of course — their original textbooks were Howard Hawks and Raoul Walsh, and subsequent generations of screenwriters have somehow become obsessed with hitmen, petty gods of life and death who pretend that existentialism flows both capriciously and mechanically from the barrel of a gun. The amoral romance of the gangster may seduce more than the immoral Air Force pilot who carpet-bombs racial minorities with impunity, but the grasping propagandist Sylvester Stallone, aging into unplanned obsolescence, nevertheless returns with a sojourn into the sentimental narcissism of The Expendables (2010). Perhaps when Stallone finally dies, the rest of us can go on living; perhaps, too, an English translator will eventually render the title of Sun Tzu’s classic text correctly, not as The Art of War but as the less romanticized and less Americanized The Method of War.5

The neoromanticism of the defunct John Woo school was a temporary diversion that did little to erode the triumphant realism of Scorsese-ism, whose exotically revolting character types are but sociological symptoms, ethnographic specimens all the more entertaining for their blamelessness. We become the de facto guilty party, desiring in our bourgeois criminal fantasies, more amorphous than any economic syndicate, the freedom to act immorally and have the ready-made sociological excuse to do so. Eventually realism bores, just as a surfeit of gore numbs; an increase of stylization fails to redeem us, however, especially when the style only transforms violence into a prettily mannerist ritual. We soon feel nothing but an absence of feeling, the relentless pursuit of guiltless excitation making us indistinguishable from sociopaths. Yet we must blame not only the narcissistic producer of art but the weary and inevitable accumulation of experience: if Peter Jackson’s Braindead (1992) entertained in our youth, Tommy Wirkola’s Dead Snow (2009) only deadens nearly a generation later. Bloody joy, in adulthood, becomes horribly naive, a phantom sentiment. Perhaps we should be oddly thankful for the didacticism of Romero’s Survival of the Dead (2009), whose denouement reveals that even zombies need to learn forgiveness.

No longer claiming the unpretentious delight it enjoyed in the exploitation era of the 1970s, generic violence has become the modish gruel of the Korean gangster, the yakuza punk, the American serial killer stewing in some metallic lair, all of them confusing cut-rate alienation with a catch-all postmodernism. Exemplary here are the new Korean cinema’s endless gangland rituals — say, those of Kim Ji-woon’s A Bittersweet Life (2005) — whose admixture of film-festival burlesque and cool stoicism bids farewell to Hong Kong’s old neoromanticism and ushers in a “higher” calling of the genre film as aesthetic commodity. I don’t mean that A Bittersweet Life’s endless violence is ritualistic, for that is unimportant. Rather, watching the film becomes a ritualistic education in what cinema itself currently is and can be, for film violence remains the measure of cinema as long as our desires, ambitions, fantasies, rebellions, and so forth are coded adolescently and/or sadomasochistically as catharsis-through-revenge. The audience’s masochistic, passive ceremonies will not prevent film critics from harping on the cinema’s “pleasure,” but to claim pleasure as film’s prime engine is now ludicrous. In their obsessive quest to aestheticize realism, filmmakers have finally revealed the utter joylessness of sadism, a truth not only of the Saw mill and so much dross like Turistas (2006), but of alleged “art” in the manner of The Three Burials of Melquiades Estrada (2005).

How sad that Eisenstein, Benjamin, and practically all of the interwar intelligentsia truly believed that montage’s dialectical violence could liberate us from fascism. On some level the Marxists knew that montage could be sold off like any other tool, but its power to shock space and retard Hegelian historicism had yet to be spoiled by a culture industry intent on making the spatiotemporal cut into a mechanistic lifestyle, not a disorienting revelation. It would be decades before the Marxist tensions of montage — whose “intellectual” clashes even Eisenstein never really worked out — were undone and refashioned as the homologous glue of the clockwork music video, puerile car chase, and screeching taco ad. But we should not conservatively confuse our inoculation against montage with a full desensitization to violence. We, not entirely lost souls, are still bothered by battered babies’ brains, charity-driven images of sub-Saharan famine, and terrorists’ YouTube decapitations. But the shock-cut of film horror no longer shocks because splicing no longer disorients — our attention-deficit culture has become a priori a cross-section of psychic and emotional splicings. The shock-cut, at best, reveals alienation as a permanently and insurmountably absurd condition, as do the abrupt juxtapositions that link the blackout sketches of a Seinfeld episode. It is no accident, then, that recent horror filmmakers have favored nervous mise-en-scenes (e.g., Rob Zombie’s 2007 Halloween remake) to redefine horror as an inescapable temporality from which a declassed montage can no longer deliver its victims. What remains nonetheless is a studied and newly dulled aesthetic of violence, the impressionist’s l’art pour l’art de-intellectualized and reformatted as the techno-realist’s violence-as-lifestyle. We arrive at the death of drama, for amorality, by definition, cannot be dramatic.

The exercise of violence, like Camus’ mandate for suicide, may be necessary to propose freedom, but violence in mainstream cinema is less a free psychic act than an exertion made by the pre-psychological forms of film genres themselves. When American films advertise themselves as “new ruminations” on violence — as did Eastwood’s Unforgiven (1992) or Spielberg’s Munich (2005) — they become self-congratulatory events in ethical exceptionalism. Auteurs — perhaps rightfully — applaud themselves for being superior to the genres they engage, while newspaper critics applaud themselves for recognizing the necessity of the auteur’s self-congratulation.

The unthinking moralism that the auteur’s thinking ethics undo was changed only slightly by postwar disaffection and the “morally ambiguous” heroes of noir and Hollywood’s “adult” Western of the 1950s — heroes who retain their superior morality only because they refuse to kill in cold blood. How convenient, then, that most villains, falling far short of true manhood, must sneakily draw a pistol after they’ve been subdued, reducing the hero’s superiority to the fallacy of a fated genre. At other times opportune predestination leaves intact the hero’s goodness. Such is the despondent anticlimax of Marathon Man (1976), as Laurence Olivier’s Nazi falls on his own sword and spares Dustin Hoffman the responsibility of a cold-blooded murder he is incapable of carrying through. Likewise, a film as supposedly raw as Stuart Gordon’s Stuck (2007) spares poor, eviscerated Stephen Rea the duty of killing the hit-and-run driver who leaves him endlessly bleeding in her windshield and imprisoned in her garage, as she finally ignites herself in fated clumsiness. Our ability to revel in sadism only by denying its existential responsibility has a long history: in one of English literature’s most infamous stage directions, D’Amville, the titular hero of Tourneur’s The Atheist’s Tragedy, climactically trips and strikes out his own brains on an axe, a happy accident that, four centuries later, freed goodly Hollywood cowboys from understanding what murder really means.

The gracefully heroic killer of Gilgameshian lore finds a modern and only slightly secularized kin in Elizabethan drama’s Machiavellian scourge, appointed by god or fate to take the necessary journey of the damned. Arising in a time when countries and standard laws were still inchoate, the Elizabethan scourge, as Gamini Salgado has noted, created an ethos of personal revenge that only later would become the retributive justice of the legalized state. Providing audiences an attractively rationalized way to reconcile an El Topo-like desire for both anarchism and sanctimony, the scourge’s vicarious thrills nevertheless outweigh whatever moralistic conclusions are intended to rein in catharses and help audiences come to their senses (has anyone ever rooted for the Earl of Richmond?). Once asked to fill in at a local university for an English instructor who had gone momentarily insane, I replaced Wuthering Heights on her reading list with Marlowe’s The Jew of Malta, knowing that adolescents (at least the males) would identify more with outcast Barabas’ attempted annihilation of the world than with fortuitous love and impracticable romantic striving. (I was kindly reinforcing their prejudices, no more, but since they were remedial students, revenge seemed a convenient “remedy.”) The students’ responses confirmed surveys from the early days of pay-cable demonstrating that post-prandial audiences readily preferred Death Wish II (1982) to the prestigious Chariots of Fire (1981). Today, Chariots is condemned to the dustbin of jejune 1980s humanism, while Charles Bronson, resurrected in sequel after sequel, assumes a Christlike if wrinkled immortality.

It is not violence that attracts us, really, but the charisma of its perpetrator. As charisma in Hollywood historically resided not only in character types but in the celestial personalities of a star system, transcendence became bureaucratized. Repudiating violence, therefore, also entails repudiating beauty, especially when beauty is culturally conditioned to look violent, whether cloaked in the metallic blues and grays of a sniping video game or presented in the daylight terms of the Jacobean drama’s “wild justice” (to use Francis Bacon’s term). But in today’s Hollywood, beauty is becoming easier to disbelieve: if seamless CGI technology renders violence gorgeous and painterly, the gracelessness of contemporary actors reveals the seams afresh. When the straining and effortful John Travolta is intended to substitute for charismatic Robert Shaw in the 2009 remake of The Taking of Pelham One Two Three, the stoical rationality of the original character devolves into infantile hamminess. Histrionic villainy is effective only when it slips effortlessly into rarefied moments of understatement and restrained naturalism that place bombast in relief; even Tommy Lee Jones, among our most tiresomely bombastic actors, knows when to let down his guard and allow some residual humanity to infect his supermasculine posturing. Even then, self-conscious acts of relief are no substitute for preternatural grace. Stand-up comics stopped doing impressions not simply because impressions are déclassé, but because there are currently no actors characteristic enough to be imitable or who, in their endless interchangeability, can impress us.

Whether Tourneur’s D’Amville or Charles Bronson’s wearied vigilante, the scourge possesses an amorality rooted in predetermined lovelessness. Slave to his archetype, he knows murder not as existential obligation but as the inverse of a received moral order. Psychologizing and humanizing the scourge might revise him generically, as in Unforgiven (1992), whose title announces the scourge’s soullessness. Yet Eastwood’s William Munny is human enough for the audience to “understand” as an example of social psychology, willing to die responsibly, if need be.6 We distinguish him from Gene Hackman’s sadistic sheriff and Richard Harris’s homicidal English Bob only because, after the final massacre, Munny spurns the mythologizing attentions of Bob’s obsequious biographer, Beauchamp, and thus attempts to repudiate both the thesis of The Man Who Shot Liberty Valance (1962) — wherein John Wayne is all too willing to mythologize Jimmy Stewart’s perceived heroism — and El Topo’s ongoing paradox of bloodlust and penance.

In Unforgiven Eastwood rebukes not only an amoral culture industry but his own High Plains Drifter (1973), perhaps the first mainstream Hollywood film to posit a truly amoral hero. Neither a figure of Ayn Randian superheroism nor of post-Peckinpah disillusionment, Eastwood’s ghostly avenger is almost pure id, a spectral rapist whose literal inhumanity ultimately excuses him. He delights in deriding and exploiting hypocritical churchfolk, and in the film’s climax is perfectly willing to let most of them be massacred for their pusillanimity and incompetence. The weathered apparition has some heart, though, for he appoints the local dwarf sheriff and mayor and exhorts Native American children to confiscate dry goods from a racist shopkeep — he is, oddly enough, a politically correct nihilist. If Unforgiven ultimately mythologizes the act of demythologizing, heroizes the act of de-heroizing, and commits other meta-crimes because its theme (anti-heroism) clashes with its form (it’s a movie), it is a relatively sane move nonetheless — saner, too, than the conclusion of Gran Torino (2008), which sees a tortured Eastwood sacrificing himself Christlike for a career of grinning, bellicose sins.

Nevertheless, without such conscientious mea culpas, we are left with the thesis of, say, Lethal Weapon (1987), which suggests that our generic Caucasian hero-killer is, at heart, a suicidal psychotic who longs for a bourgeois stability (here, that of his earthy African American partner) as unachievable as the scourge’s triumph over moral banality. We are otherwise left thus — unless we refuse totally the violence in film and the violence of a film culture predicated on ungenerous archetypes and their ungenerous identifications. Where, then, is the escape? In the end, as we cope individualistically with inevitable violences of content and form, we retain securely in the backs of our minds that which cannot be co-opted precisely because it lacks value as either plastic commodity or conditioning tool: peacefulness.

If bedrock peacefulness is to be the tool of resistance, our purgation of catharsis, we might choose from Thoreau’s individualistic civil disobedience, Gandhi’s anti-individualistic satyagraha, the Buddhist’s holistic renunciation of the mundane, or the Jain’s imaginary, anthropomorphic oversensitivity, whose refined veganism refuses to injure even mushrooms. For now, understand that peacefulness is not charitableness, consternation over endangered birds, the purification of water tables, the nutrification of refugees, or the neutralization of other symptoms of modern decay. Nor can we follow the fallacious golden rule, for that wrongly assumes that one doesn’t hate oneself. Hillel’s formulation — “what is hateful to you, do not do to your neighbor” — is more pragmatically framed in terms of pain rather than narcissism, but again naively trusts subjectivity.

Peacefulness is not the annihilation of violence — an act which, as satyagraha emphasizes, often mandates violence itself — but the ignoring of those ideas that inspire violence. To do this, we must minimize an entire history of argumentation about the inevitability of natural violence, from Freudian thanatos to the third essay of Nietzsche’s The Genealogy of Morals to everyday thirstings for revenge so mechanistically ingrained that the most minor social slight accrues a seemingly biological imperative to kill. We must begin with forgiveness — a difficult task because it is seen as a sign of weakness and because Christianity gave it a bad name. But the finale of Girish Karnad’s Utsav (1984), wherein the besieged heroine extends a forgiving hand to her longstanding yet pitifully human, now genuflecting tormentor, moves more than almost anything in twelve decades of American cinema. The conclusion of Blade Runner (1982), too, grows richer with age: how rare and non-anticlimactic it now seems for Rutger Hauer’s villain to forgive Harrison Ford’s hero, an act whose wise melancholia rivals that of the final movement of Tchaikovsky’s Pathétique.

Indeed, films that intentionally disappoint our bloodlust are the ones that stand out in our memories as the years pass, when we, long past adolescence, are no longer shackled to the mandates of genre or violence itself.7 In the documentary Men Who Made the Movies: Sam Fuller (2002), director Richard Schickel makes much of the “subversion” of Fuller’s Forty Guns (1957), wherein one gunfighter is so paralyzed with fear that his duelling opponent can calmly walk up to him and punch him out, presumably because the coward is unworthy of murder. Though a step in the right direction, this scene only adjusts the rules of genre without redefining masculinity — no violence is done to the genre itself. For its bloodlessness Peckinpah’s The Ballad of Cable Hogue (1970) — in which only an iguana explodes in slow-motion — is actually more radical than The Wild Bunch (1969). A nominal outlaw but ultimately a humorist and only reluctantly a man of violence, Jason Robards’ “Hogue” has the option of sparing pathetic Strother Martin at the film’s conclusion; the film, in presenting an empathic Western hero who leaves vengeance deliberately unfulfilled, imagines a far more mature vision of manhood, even if modernity’s overpowering automobile leaves him in the dust. In his stoic self-control Hogue finds the (temporary) freedom that eludes Warren Oates’ “Bennie” in Bring Me the Head of Alfredo Garcia (1974), who, Peckinpah believed, could have escaped death at the hands of Emilio Fernandez’s henchmen had he only laid down his pistol and walked away, against all expectations.8 But the tragic hero can’t walk away, and the “if” is revealed as a lie, a false existentialism dangled deceptively. The vestige of freedom, we now know, is found not in tragedy but in tragicomedy.

Decades earlier John Ford subverted genre solutions by ending She Wore a Yellow Ribbon (1949) with a comic barroom brawl instead of the expected Indian slaughter, thereby upending not only Western convention but the racist assumptions of the day and the militaristic ethos John Wayne would continue to mount until the apologia of The Shootist (1976), Wayne’s Gran Torino. Before his death-bed recantation was The Cowboys (1972), vilified then for heroizing eleven-year-old killers when American teenagers were being conscripted to fight an already anachronistic Asiatic crusade. The greatest offense of The Cowboys is not that it is a bildungsroman of pastoral revenge, but John Williams’ obnoxious score during the children’s massacre of Bruce Dern’s band of rustlers — its Sousa-like pomp is as propagandistic as any Maoist trumpery extolling the anti-Confucian vanguard. It can’t be a coincidence that heroism, in any culture, always sounds asinine.

Today, the expressly pacifist action hero is an American taboo: Steven Seagal’s mainstream career died when he appended to the final bloodbath of On Deadly Ground (1994) an edifying slideshow on the rape of wilderness regions and the need for provident husbandry. No matter how savage Seagal has been to illiterate rednecks for the previous ninety minutes, his concern for nature, ironically, betrays the “naturalness” of the action hero, who stands for right, unthinking action, not organic sensitivities. Americans cannot tolerate “qualified” action in the way that Chinese audiences might accept the Buddhist martial artists of King Hu’s A Touch of Zen (1971), Liu Chia-liang’s Eight Diagram Pole Fighter (1983), or Brandy Yuen’s Master of Zen (1994), wherein heroes will righteously but not lethally do battle. Zen restraint finds an unexpectedly humorous counterpart in the more whimsical jidai-geki, as when Mifune breaks the arms of an unruly gang in Red Beard (1965) only to muse, “I’ve gone too far . . . a doctor mustn’t do these things!” Likewise, what is remarkable about Shintaro Katsu’s “Zatoichi” isn’t the chambara violence he inflicts but the pent-up violence he, in his cantankerous and Taoistic restraint, continuously holds in reserve.

Even if one is brave enough to admire Seagal’s murderous pacifism unashamedly, Seagal carried forth the charismatic Billy Jack (1971) archetype without the commune that features in its hippie predecessor; indeed, no American filmmaker today — even Michael Moore — would advance agrarian anarcho-syndicalism as an answer to American ignorance. We forget that in the early 1970s even the lowliest genre films were wringing their hands over the ethics of violence. In The Deadly Trackers (1973), a crude Western begun and then abandoned by Samuel Fuller, a deliberately introspective Richard Harris plays a pacifist sheriff who forbids his son even a toy gun. One “can’t reason with a gun,” he tells the boy. When the boy’s head is crushed underfoot by bandits escaping his town, however, he soon changes his mind, though he later strikes a remorseful pose on a riverbank, examining his bloodstained clothes before becoming physically ill. In The Peace Killers (1971), the leader of a hippie commune exhorts his followers to Gandhian nonviolence — which, unlike Thoreau’s civil disobedience, prohibits self-defense — when they are beset by marauding bikers: “We can’t become like them . . . Everyone wants to be peaceful inside . . . all it takes is for someone to bring it out in them.” Yet the commune leader (who looks conspicuously like Jesus), walking plaintively through windswept meadows, questions his motives as a treacly pacifist song is heard on the soundtrack — “Is it time for defending? . . . Is there anything worth a fight?” When the climactic biker massacre ensues, he watches from afar but eventually intervenes to save the heroine from a second rape — yet it is she who prevents him from murdering the villain, lest he undo his ideology. In that long-lost era, even exploitation sought pacifism, and Straw Dogs (1971) was the exception. Even the crime thriller self-consciously imploded into revisionism, most somberly in Peter Yates’ The Friends of Eddie Coyle (1973). After Robert Mitchum is blandly shot in the head by the titular friends, any notion of revenge becomes deferred beyond the edges of the screen when the final credits abruptly roll. Minimalism becomes antiromanticism, then didacticism.

This is not to say that Hollywood, in the militant phases of WWII and the Reagan era, gave rise only to propaganda9 or that pacifism existed only during a brief historical window. Yet mainstream Hollywood was unable to truly embrace pacifism even in a social-conscience — or social-science — experiment like Wellman’s The Ox-Bow Incident (1943), as dramatically correct a story as Hollywood could mount during the war. The film is based on an entirely false premise: its ostensible point about civilized justice could only be well taken if Dana Andrews were rightfully accused of cattle rustling and were nevertheless afforded the humanity of due process. Sympathizing with Andrews as an innocent is far too easy; unfortunately, conventional cinematic morality assumes audiences will only be receptive to social critique if society is entirely wrong and the underdog entirely right. Nuance has no place in allegory. The film attempts a resonating denouement when the judgmental Major (Frank Conroy) commits suicide after his son chastises him for pitilessly hanging innocent Andrews. But we can only assume that had Andrews been guilty, the Major’s encouragement of ochlocracy would have been socially and morally appropriate and hardly the cause for suicide or even self-reflection. The conclusion of Fritz Lang’s Fury (1936) — still among Hollywood’s best social-conscience films of the Depression — disappoints similarly when, in its final scene, Spencer Tracy forgives before a judge the angry mob that tried to kill him, not because they deserve forgiveness, but because his bloodthirstiness is negated by the love of a chaste woman.10

If mob rule was once to be avoided not for inherent wickedness but for potentially inconvenient outcomes, the combination of militarism and capitalism, capable of transforming Sun Tzu into a self-help pamphlet, now ensures that the costs of violence remain invisible. Not only are the Iraqi and Afghan wars censored from American television; our decadence makes the notion of sacrifice so repulsive that we forget it was formerly considered patriotic to laud commercial films about costly wars and pathetic cripples, from blinded John Garfield in Pride of the Marines (1945) to hook-wielding Harold Russell in The Best Years of Our Lives (1946). Surprisingly, the CGI gore of today’s bloated spectacle only heightens, by contrast, the material, studio-bound plight of Garfield, eventually alone in a machine gun nest in Pride of the Marines, facing horde after horde of cornily demonized Nipponese for over twenty minutes. When a grenade-bearing Japanese soldier sneaks up to his nest, infernal gore is needless: the bloodless hole that appears in Garfield’s fallen comrade’s helmet sufficiently negates the myth of charismatic heroism.

Even when filmic heroisms are deflated, we rarely see the hero as truly human because few actors who take on generically violent roles are capable of replicating humanity. I can think of only one scene in a genre film in which the hero looks convincingly (admittedly a subjective term) depressed: the moment in which Rod Steiger’s Mexican revolutionary in Duck, You Sucker (1971) discovers his entire clan massacred in a cave, heralded by Morricone’s lachrymose woodwinds. Leone’s camera begins with Steiger despairing on a stone, his eyes sagging, then zooms out to a long shot of his drooping pear-shaped body, and then back into a medium close-up, whereupon he tearfully rips the crucifix from his neck and glowers up at heaven. It is not only his facial expressions that convince, however; as his rotundity sags, proud shoulders depress, and previously incessant vigor petrifies, his whole method-acting body produces one of the most natural “looks” in acting I have ever seen, and in a performance often criticized for hamminess. While contemporary Hollywood actors communicate only through facial tics articulated in close-up (generally all they can do), Steiger uses his entire frame almost motionlessly during the long take. The distance of Leone’s camera allows the impact of the massacre to register so slowly that Steiger’s bodily actions seem as though they are happening and have already happened, as if his emotive profundity were transcending tense itself. In his petrified countenance we recognize a burgeoning desire for revenge indistinguishable from the abyssal need for unfated suicide.

This is a very difficult look to fake — those who have contemplated suicide will see in it the colossal stoniness that occurs when soulful fury fossilizes, when one feels at once ageless and infantile. Nevertheless, Steiger’s truthfulness is still illusory. In our devotion to realism-as-catharsis, we’ve become so obsessed with psychologizing fictional characters that we forget we are the ones who need humanizing — and the relationship between cinema and audience is far more coercive than that between analyst and neurotic. Readers of Aristotle’s Poetics long questioned whether the cathartic ritual exorcises the tragic elements of that moment (in which one emerges no better or worse than when one entered the theater) or, as Leon Golden has suggested, cleanses other vices already in one along with the fear and pity aroused in the momentary drama (allowing one to emerge cumulatively better with each tragedy viewed). Following Jakob Bernays’ Grundzüge der verlorenen Abhandlung des Aristoteles über die Wirkung der Tragödie (1857) and the mid-19th-century obsession with pathologizing social problems, catharsis has often been regarded as a psychic therapy for an unnamed disease (unless life itself was supposed to be the disease purged). But as Stephen Halliwell has argued, “Katharsis does not stand for a notion of pure outlet or emotional release, still less a discharge of pathological emotions,”11 and instead advances the recognition of fear and pity as a (perhaps naïve) means of cultivating and sensitizing the polis. That the pity and fear of catharsis are part and parcel of a rational, pleasurable dramatic experience does not mean the object of purgation must be irrational.

If hamartia truly exists, it does so not as a character flaw in the dramatic hero but as a flaw in the hero-worshipping audience — our impossible desire to overcome a perpetual sense of isolation and unite with the desired (in this case, dramatized or iconic) object. We arrive here at the jaded tragedy of the human condition: distance from the other is both a prerequisite for self-understanding and an existential chasm to be bridged — yet if bridged, one destroys oneself in the process of unification. In his essay “On Lonesomeness,” the sociologist Herman Schmalenbach (a student of Georg Simmel) provides a description of the individual’s alienated subjectivity that could well apply to the spectator’s impossible and fantastic wish-fulfillment union with the charismatic hero:

We refer to the subjectivism of the sophists. Gorgias makes the extraordinary statement that there is no object of knowledge independent of the subject, and if there were one, it would definitely not be knowable [or] could not be communicated to another. . . . The soul is absolutely closed to external things as well as to other individuals. It has, as in Leibniz, no windows and no exits or entrances — no doors.12

If Schmalenbach ultimately finds the root of lonesomeness in a Calvinist distance from God (the sort of distance Leonard Bernstein would rather conveniently ameliorate in his Kaddish Symphony), we find it today in our inability to penetrate or break from the cinematic gods that shape our cultural identifications.

Because the soul is by definition insular, we struggle in vain to connect with the other; recognizing this futility, we sublimate our desires through what Freud called the “mild narcotic” of art, hoping that filmic projections can raze the barrier by erecting a screen. The images the screen projects, however, are themselves windows as sealed as the soul. As addictive, coarse, and unenlightening as cut-rate vodka, catharsis becomes the degraded criminal enterprise of the culture industry. But it is not merely generic content that degrades catharsis; in a medium that increasingly champions hectic montage over mise-en-scene, the contemplative and introspective stasis — the freedom to actually act — afforded by the classical stage becomes fragmented on film into a mere highlight reel of gestures and microcosmic moments. It is no accident that films framed by a clinical psychiatric setting — or those that, like Bergman’s Scenes from a Marriage (1973), have characters engage in autotherapy — attempt the greatest return to the cathartic stage, for depression both forces sustained introspection and warps it to a point where catharsis becomes confused with medical panacea.

Whether the pathological understanding of catharsis is merely a relic of the 19th-century German bourgeoisie or possibly holds pragmatic applications today is debatable. My own futile and slavish experience with psychiatry and pill-popping confirms well enough Adorno’s argument that “the psycho-analyst’s wisdom finally becomes what the Fascist unconscious of the horror magazines takes it for: a technique by which one particular racket among others binds suffering and helpless people irrevocably to itself in order to command and exploit them.”13 The Sopranos invented a rather canny solution to this dilemma. In the final episodes, psychiatrist Dr. Melfi finally wises up to what she had been warned of in the series’ first season — that exploitative Tony Soprano is only using her expertise to sharpen his own rationalizations and skills as a con man. The psychiatric illusion is inverted and catharsis thwarted: her commodified “racket” of $300/hour becomes no match for a true racketeer, whose illiterate cunning and vulgarity easily outwit her bourgeois good intentions, doctoral degree, and authoritative office furniture.

What dooms Tony Soprano’s therapy sessions is not so much his sociopathy as the formal rules of serial television, which demand that most scenes conform to approximately equal lengths and carry approximately the same moral weight. Only on those precious occasions when the camera doesn’t cut away reflexively and lingers for a few seconds longer than we expect of serial television does something begin to happen. Yet a few extra seconds are insufficient — the violent splice aborts psychoanalytic evolution, rescues the viewer from moral responsibility, and conveniently delivers Tony Soprano to the unclinical (if not wholly unselfconscious) narrative to which he preternaturally belongs. The splice of convenience is a moral crime emblematic of serial television but obviously not particular to it. For this reason Woody Allen’s attempts at the Bergmanesque look and feel introspective but come across as superficial glosses; though Allen’s camera remains theatrically static, his editing scheme in Hannah and Her Sisters (1986) and Crimes and Misdemeanors (1989), for instance, is still in the comic mode, with a punchline cut after each revelation or accusation.14 Allen’s dramatic scenes are too often like well-written bad paragraphs: they handle a single idea competently and cleverly and then quickly move on.

If the confessional voice should speak, it must speak at cleansing, unedited length, as it does in the soul-baring two-person dialogues of Bergman’s Face to Face (1976) and Scenes from a Marriage, or even in Jack Nicholson’s opening static monologue in The King of Marvin Gardens (1972). The HBO series In Treatment15 nobly attempts to rescue the therapeutic art of dialogue, entrapping the viewer in the clinical discursive space mostly without the intrusion of extradiegetic music (occasionally, mournful celli and piano creep in) and orchestrating it only through simple, angle/reverse-angle editing structures. Ironically, In Treatment’s weekly vicarious therapies are more comforting than the ritual of enduring an actual — i.e., expensive — psychiatry session. Transforming the modern confessional voice of Plath into a “psychiatric chamber drama,” this new television subgenre16 returns to a pre-postmodern belief in barely mediated language otherwise seemingly absent in current film culture (contrarily, the Dardennes’ Rosetta (1999) and The Son (2002) take an opposing view of humanism, believing in the verisimilitudinous power of visuality over dialogue). No longer is psychoanalysis demonized as capitalistic voodoo, nor do we require the tonic quixoticism of They Might Be Giants (1971), in which George C. Scott’s delusional Sherlock Holmes overcomes and “cures” the excessive rationalism of Joanne Woodward’s meddling psychiatrist.

The tortured therapist played by Gabriel Byrne in In Treatment is as humanized and vulnerable as his patients, his heroic empathy incapable of compensating for his ineffectuality. If the often coercive psychiatrist and film medium both inflict violence on the unwitting, here the racketeer is properly exposed as yet another neurotic while the medium, in turn, becomes uncharacteristically empathetic. In the context of modern American psychiatry, this move is not at all hypocritical. For Carl Rogers, the foremost proponent of empathetic psychology, the psychiatrist must reveal himself as completely human and actively demystify his authority to effect breakthroughs — exactly what the film culture predicated on non-empathic character identifications does not and will not do, as it insensitively exploits the fear and pity of the audience’s inner world to reproduce mechanical, vicarious effects that only masquerade as catharsis. Though In Treatment never approximates Rogerian therapy, its attempt to represent uninterrupted dialogic empathy onscreen comes as close as American cinema has ever come to renouncing one-dimensional heroism, acknowledging the neurotic limitations of both audience-analysand and filmmaker-analyst, and bridging the lonesome gap between vindictive yet helpless self and supremely violent other.

Rogers’ attempt to empathically reflect the analysand’s inner world had been documented in Everett Shostrom’s Three Approaches to Psychotherapy (1965), which includes footage of one of Rogers’ “unconditionally positive” therapy sessions. In his human person, Rogers must be what cinema cannot: authentic, transparent, and willing to admit limitations, lest he become a closet neurotic inflicting his neuroses on a confessed neurotic. In the film, Rogers counsels a divorcee, Gloria, who is adjusting to single life, new romances, and the reactions of her nine-year-old daughter. Though Gloria continually asks Rogers for approbation of her sexual life (“I don’t know where to go . . . I am disappointed in myself”), he deftly refuses to give a coercive opinion, wanting her to arrive at self-created answers instead. Rogers’ role of non-judgmental empath is impossibly difficult to play — his space of unconditional positive regard (“love” would be too subjective) ideally presents the analysand as the primary performer and the analyst as both open, transparent window and reflective mirror, not as arbiter or guru. Most difficult is the notion of absolute transparency, which not only mandates a rejection of bourgeois privacy but also assumes an extraordinary linguistic competence. When Rogers devised his empathic talk in the 1960s, he was initially dealing with literate middle-class neurotics proficient at extended and articulate discourse (Rogers’ later attempts to clinically empathize with schizophrenics proved linguistically impossible). The current independent film culture of “dealing-with-young-adulthood-realism” casts off — or is more likely incapable of — stylized, polysyllabic dialogism, whether spiritual à la Bergman or bourgeois à la Mankiewicz. In Treatment’s competently penetrating realism seems the best American filmmakers can muster at the moment.

Though in my own life psychiatry has been an exorbitant failure, I found myself watching In Treatment religiously, believing in the mimetic and condensed representation of the clinical dialectic more than in the poorly scripted real thing. The frequent failures and dramatic frustrations in the series are ironically liberating — once we reject the notion that catharses must arrive at half-hour intervals, once we realize that, corrupted by the culture industry’s manufactured desires, catharsis now stands between our social selves and authentic selves and is no longer a doorway between them, we are free again to embrace what we might be.

In his anthology On Becoming a Person, Rogers describes the experience of a client who continually puts up a false front and employs subterfuge to deal with a complex, distressingly non-cathartic world. At one point, the client attempts to demolish his façade and begin a new life of transparency. He describes this process at length:

As I look at it now, I was peeling off layer after layer of defenses . . . I didn’t know what was at the bottom and I was very much afraid to find out, but I had to keep on trying. At first I felt there was nothing within me — just a great emptiness where I needed and wanted a solid core. Then I began to feel I was facing a solid brick wall, too high to get over and too thick to get through. One day the wall became translucent, rather than solid. After this, the wall seemed to disappear, but beyond it I discovered a dam holding back violent, churning waters. I felt as if I were holding back the force of these waters and if I opened even a tiny hole I and all about me would be destroyed in the ensuing torrent of feelings represented by the water. Finally I could stand the strain no longer and I let go. All I did, actually, was to succumb to complete and utter self-pity, then hate, then love. After this experience, I felt as if I had leaped a brink and was safely on the other side, though still tottering a bit on the edge. I don’t know what I was searching for or where I was going, but I felt then as I have always felt whenever I really lived, that I was moving forward.17

We recognize easily his terrible and liberating transformation. Is the “nothing” he first felt after removing his defenses not the nakedness beneath a culture of accumulated violence? Is the solid, insurmountable brick wall he faces not at once the subject’s closed soul and a postindustrial culture of illusions to which only the privileged few have access? Is the brave succumbing to self-pity not the empathy the mass media continually deny us? Is not the final peril the inward journey toward peace that cinema, in its timeless and trivial demand for vengeance, forever insists is an impossible dream?

  1. We will limit ourselves here to wholly human vengeance and not that directed against or perpetrated by whales, birds, ghosts, zombies, and/or other human projections of the ineffable.
  2. Aristotle’s downgrading of suffering distinguishes the Poetics from the East Asian tragic tradition, which foregrounds suffering and makes pathos indivisible from it. Mizoguchi’s Princess Yang Kwei Fei (1955) and The Life of Oharu (1952) are obvious examples, as are Chikamatsu’s love-suicide plays.
  3. Kierkegaard, Søren. Repetition and Philosophical Crumbs. M. G. Piety, trans. Oxford: Oxford University Press, 2009, p. 3.
  4. Arendt, Hannah. On Violence. New York: Harcourt, Brace, and World, Inc., 1969, pp. 43-44.
  5. The Chinese fa (in the work’s Mandarin title, Bing fa) is most literally rendered as “method” or “strategy.” While the translation of “art” is not wholly wrong, it does distort Sun Tzu’s more rational meaning and reflect Western romanticism.
  6. The film’s moralizing about being non-moralistic was also a canny response to conservative American congressmen who, at the close of the first Bush era, were attempting to rally the conservative base by raising stale and censorious concerns about film violence.
  7. Reportedly, Johnnie To Kei-fung considered ending his semi-parodic A Hero Never Dies (1998) with the heroes arriving for a violent party and the villains failing to show up. Unfortunately, he opted for a conventional shootout ending.
  8. Garner Simmons’ Peckinpah: A Portrait in Montage contains a number of interviews that describe Peckinpah’s feelings about the endings of The Wild Bunch and Alfredo Garcia; in both cases he felt his heroes climactically could walk away free men if they so chose. That Peckinpah ultimately respected fatalistic genre rules obviously speaks to his romanticism.
  9. I should give the American military some credit for developing nonlethal weaponry that not only spares lives but curtails soldiers’ bloodthirst. In recent years, the military has experimented with odor bombs and paralyzing sound waves that rattle around an enemy’s brain like a ping-pong ball. No permanent harm is done — except to heroism, presumably.
  10. Still, Lang objected strenuously to the fade-out kiss between Tracy and the chaste girl, an imposition by studio bosses.
  11. Halliwell, Stephen. Aristotle’s Poetics. Chicago: University of Chicago Press, 1986, p. 90.
  12. Schmalenbach, Herman. “On Lonesomeness.” Human Experience and Cultural History. Ed. and trans. G. Luschen and G. Stone. Chicago: University of Chicago Press, 1977, p. 141.
  13. Adorno, Theodor. Minima Moralia. Trans. E. F. N. Jephcott. London: Verso, 1974, p. 64.
  14. The editing of Interiors (1978) and Stardust Memories (1980) is admittedly more patient.
  15. I have not seen the Israeli series Be’Tipul, on which it is based.
  16. The HBO dramas seem intent on therapy; the psychiatric reconciliation sessions of the prison drama Oz are another example. Reconciliations in Oz generally fail for the same reason self-exploration in The Sopranos fails — they are the edited victims of compressed storytelling.
  17. Rogers, Carl. On Becoming a Person. Boston: Houghton Mifflin Co., 1961, pp. 110-11.