In a dispatch from the Vancouver International Film Festival, film scholar Kristin Thompson relates craft tips from screenwriter Terry Rossio (Aladdin, Pirates of the Caribbean):
Rossio is a big advocate of succinctly creating a strong visual sense in each scene. Even on his computer desktop, he comes up with a distinctive icon for each folder: a Rubik’s cube for “Screenwriting,” a little gramophone for “Music,” and so on.
For Rossio, each scene should consist of:
Key moment (character revelations, reversals, etc.)
Throw (i.e., the setup for the next scene)
I love the mention of a desktop. What a good metaphor: each scene should have its own icon.
Also, he takes the old question of whether writing can be taught, and its implication that if not, you must be born with it, and turns it on its head:
He feels that it is probably impossible to teach screenwriting: “No, the better question is, can writing be learned?” Yes, but people must teach themselves.
Ready for the best distillation of screenwriting manuals, and the most concise critique of them, that I have ever seen? In a short slide deck, Eric Hoyt, Assistant Professor of Media & Cultural Studies at the University of Wisconsin-Madison, compares the prescriptive three-act structure that Hollywood hopefuls are told to follow with the four-act structure that film scholar Kristin Thompson uses to describe the commonalities of dozens of well-crafted films.
I am thinking of a writing machine that would bring to the page all those things that we are accustomed to consider as the most jealously guarded attributes of our psychological life, of our daily experience, our unpredictable changes of mood and inner elations, despairs and moments of illumination. What are these if not so many linguistic “fields,” for which we might well succeed in establishing the vocabulary, grammar, syntax, and properties of permutation?
But one consequence is common to Goldsmith’s and Calvino’s visions: the death of the traditional author. Calvino again:
Literature as I knew it was a constant series of attempts to make one word stay put after another by following certain definite rules; or, more often, rules that were neither definite nor definable, but that might be extracted from a series of examples, or rules made up for the occasion—that is to say, derived from the rules followed by other writers. […] A writing machine that has been fed an instruction appropriate to the case could also devise an exact and unmistakable “personality” of an author, or else it could be adjusted in such a way as to evolve or change “personality” with each work it composes. Writers, as they have always been up to now, are already writing machines; or at least they are when things are going well.
Calvino is content with his thought experiment: he says that “it would not be worth the trouble of constructing such a complicated machine” as he describes. But one wonders whether artificial intelligence might progress to such a place. We are already trying to write under the influence of data. People decry that as forcing the writer into a well-traveled rut, but add a little AI and writing might become more adventurous:
…given that developments in cybernetics lean toward machines capable of learning, of changing their own programs, of developing their own sensibilities and their own needs, nothing prevents us from foreseeing a literature machine that at a certain point feels unsatisfied with its own traditionalism and starts to propose new ways of writing, turning its own codes completely upside down.
Calvino perhaps understates the difficulty of getting machines to ‘develop their own sensibilities’ (he was writing in 1967), but still—imagine what might be added to Goldsmith’s vision of fashioning new art from existing literary material if that material began to have even a rudimentary mind of its own, if it danced with us a little bit, like sculpting with living clay.
These debates about the bounds of fair use will always be important, but they obscure a very unfair dynamic that is squeezing artists — and turning the web into a battleground between humans and machines. The trouble is that in many cases today, there’s no human artist, writer, or editor creating what we see on the web. Some algorithm assembled the photos and it’s enjoying a nice little loophole. The machines sail on past the rules about copyright because the law lets those companies blame any infringement on the chaos of the internet. It’s a system that’s tilting the tables against any of the human artists who write, edit, or illustrate.
The automated machines have me and the photographers beat. Aggregators — whether listmakers, search engines, online curation boards, content farms, and other sites — can scrape them from the web and claim that posting these images is fair use. (BuzzFeed claims that what it does is “transformative,” allowing them to call their lists a new creation.)
We already know these companies make a profit on the ads. But what we don’t know is that the algorithms they use are acting less and less like a card catalog for the web and more and more like an author. In other words, the machine isn’t just a dumb hunk of silicon: It’s a living creator. It’s less like a dull machine and more like a fully functional, content-producing Terminator.
A grad student pursuing his doctorate in composition at Harvard University, Oberholtzer applies the techniques of electronic music to compose works meant to be played by human orchestras. Instead of just stringing note after note, Oberholtzer uses a series of custom tools to translate a nebulous musical intention into a human-readable score. He does this by trying to define in words what the finished piece will sound like.
“When I want to capture some new music concept or idea, I’ll usually write a tool first, then think about it a lot and work it into a piece,” says Oberholtzer. “These tools are kind of like meta-instruments, and I can even write tools on top of tools, giving me a wider palette.”
For Oberholtzer, this seems like a perfectly natural way to write music. “All art is a kind of curatorship. You work through all these possibilities mentally, and then in the end, you try to reproduce the one you’ve decided upon. There’s no difference for me. My computer isn’t writing my music for me. It’s just handling the version control.”
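Oberholtzer’s actual tools aren’t described in any detail, but the idea of a “meta-instrument,” and of tools built on top of tools, can be sketched in a few lines. Everything here (the function names, the contour representation) is invented for illustration, not taken from his work:

```python
# Toy illustration of a "meta-instrument": a tool that turns a rough musical
# intention (a contour of semitone steps) into concrete pitches, plus a second
# tool built on top of the first that renders a human-readable phrase.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def contour_to_pitches(contour, start=60):
    """Turn a contour of semitone steps (e.g. [+2, +2, -1]) into MIDI pitches."""
    pitches = [start]
    for step in contour:
        pitches.append(pitches[-1] + step)
    return pitches

def pitch_to_name(pitch):
    """Render a MIDI pitch number as a note name with octave (60 -> 'C4')."""
    return f"{NOTE_NAMES[pitch % 12]}{pitch // 12 - 1}"

def phrase(contour, start=60):
    """A tool built on top of a tool: contour in, readable phrase out."""
    return [pitch_to_name(p) for p in contour_to_pitches(contour, start)]

# A rising-then-falling gesture becomes a concrete, human-readable phrase.
print(phrase([2, 2, 1, -5]))  # ['C4', 'D4', 'E4', 'F4', 'C4']
```

The point of the sketch is the layering: once `contour_to_pitches` exists, `phrase` can be written against it rather than against raw notes, which is what widens the palette.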
Peter Wayner in the first quote above saw an Intellectual Property Terminator because Google et al. were crawling and remixing the web without benefiting the original artists. But this composer demonstrates that involving machines with art isn’t always an insidious corporate IP grab. What could be accomplished if machines collaborating in art were brought to a larger scale while still remaining un-corporate, under the direct oversight of artists?
It’s already happening. The documentary CLOUDS (not released yet, preview below) takes a look at artists who are using code to make art, and in some cases sharing code in order to create stuff they couldn’t create working in isolation.
Imagine that. Invent a cool new art tool? Put it on GitHub and watch your friends make it better.
For all the talk of how transformative the internet is for artists and creatives, I feel like we’re at the beginning. Most of the energy has been poured into new distribution mechanisms for existing forms: ebooks with faux page-turn animations, videos and pictures that strive for a lean-back theater or museum experience. We have barely begun to scratch the surface of the unique opportunities enabled by computing and networks.
I’ve been noticing lately some interesting experiments in interactive storytelling. Take a look:
Black Crown is an online interactive story that The Verge calls “a strange blend of interactive fiction and a classic choose-your-own-adventure novel.” They also use the term “game.” Interesting that this thing is coming out of Random House UK — publishers can be a progressive bunch.
Haunting Melissa is a horror movie that is doled out bit by bit through an iOS app. But the interesting thing to me isn’t the serialized form or the App Store distribution; it’s the fact that the narrative can change a little in the rewatching, as TechCrunch explains: “To keep viewers hooked on ‘Haunting Melissa’ even though there isn’t a regular schedule, the creators developed technology to allow them to add dynamic story elements to each chapter. In other words, if viewers re-watch a chapter, they see or hear different things that add new layers to the narrative and help set the atmosphere of the ghost story.”
A March Story was a serialized story that used suggestions from the twitterverse about how to fill in certain details, ranging from names to whole sentences, and those details helped guide the evolution of the story. Like Mad Libs, narrative edition.
These examples, in which the reader/viewer is given some measure of control, remind me of the debate about whether games can ever be Art, or whether their interactive nature disqualifies them. Roger Ebert thought that games could not be art even in principle, and one of his reasons was the importance of a single, intentional narrative. Here he is summarizing a debate with filmmaker and game creator Clive Barker:
“I think that Roger Ebert’s problem is that he thinks you can’t have art if there is that amount of malleability in the narrative. In other words, Shakespeare could not have written ‘Romeo and Juliet’ as a game because it could have had a happy ending, you know? If only she hadn’t taken the damn poison. If only he’d have gotten there quicker.”
Well, yes, that is what I think. There was actually a time in history when a version of Romeo and Juliet was performed with a happy ending, and I can’t begin to tell you how much that depressed audiences.
Barker: “Let’s invent a world where the player gets to go through every emotional journey available. That is art. Offering that to people is art.”
Ebert: “If you can go through ‘every emotional journey available,’ doesn’t that devalue each and every one of them? Art seeks to lead you to an inevitable conclusion, not a smorgasbord of choices.”
Gamers also recognize the question of control as essential, and in fact they deride games whose heavy-handed storytelling leads you along “on rails,” without letting you move where you like. Games shouldn’t be like a novel or a movie. But, they say, that doesn’t mean they are not art; they are just a new art form.
I thought about those works of Art that had moved me most deeply. I found most of them had one thing in common: Through them I was able to learn more about the experiences, thoughts and feelings of other people. My empathy was engaged. I could use such lessons to apply to myself and my relationships with others. They could instruct me about life, love, disease and death, principles and morality, humor and tragedy. They might make my life more deep, full and rewarding.
Not a bad definition, I thought. But I was unable to say how music or abstract art could perform those functions, and yet they were Art. Even narrative art didn’t qualify, because I hardly look at paintings for their messages. It’s not what it’s about, but how it’s about it. As Archibald MacLeish wrote: A poem should not mean, but be.
Ebert never settles on a definition of art, but note that the attempts above have to do with the things that art does to him: moves him, instructs him, engages his empathy. Of course no one could accuse Ebert of having been a passive consumer of art, but he seems to presume that one lets art work on you first, and then you emerge (from the movie theater, the gallery, the pages of a novel) primed to respond. I wonder if gamers have the same assumption, or if they see the action/reaction stages of art experience as much more collapsed.
Truth be told, I’m much more on Ebert’s side: I prefer to sit back and focus and let the author/director/artist guide me somewhere, show me something, and let me see out of someone else’s eyes. But I’m not ready to proclaim games and other interactive experience outside the realm of art on principle. I’m wondering, What would a more sophisticated interactive storytelling experience be like? How could we progress beyond a choose-your-own-adventure-type set of a few possible outcomes, paying only lip service to viewer/player empowerment, and still maintain a story that was coherent: ends that paid off beginnings, characters with satisfying arcs of development, etc.
The only way I can think of, and perhaps it wouldn’t work anyway, is to give the viewer/player more control over the story than they currently have in any existing narrative game. Recall the Kuleshov experiment, in which viewers were shown the same few film clips, but in different orders: they constructed a narrative about who was looking at whom, and what they were thinking and feeling. Maybe for narrative storytelling to embrace interactivity fully, rather than providing an experience “on rails” or a weak “choose your own adventure” among four or five plot threads, the author has to become a generator of raw story material that the viewer/player assembles into a story.
Would that mean that for it to be great art the viewer as well as the creator would have to be a great artist? Maybe not. We are all storytelling creatures, making sense of our own identities and those of everyone around us by taking some details, discarding others, and assembling them in a coherent way, constantly, quickly, and often unconsciously. Given the best ingredients and the best tools for splicing, the evolution of interactive storytelling might mean the “author” provides potential story material and the viewer/player provides the plot.
This recalls Kenneth Goldsmith’s argument in Uncreative Writing about the promotion of the author from one who writes to one who controls, curates, programs, and remixes literary material. In this vision, the artist is promoted from storyteller to story-material generator, or perhaps more catchily “worldbuilder,” creating the material and the tools for someone on the other end to splice together.
I don’t know yet what any of that would actually look like, but we’re not going to find out by debating what’s art and what’s not. We’ll get there by experimenting and then debating about what’s good and what’s not.
The other day I was listening to a very good episode of On The Media but it reminded me of a troubling linguistic trend. I’m about to rant a little. Indulge me.
The word “content” bothers me when used to refer to art or journalism or other creative expression. I don’t hate it, but it gives me that same sort of feeling of transient disgust that you get walking down the street and getting a whiff of sewer gas. I suppose it makes sense — in a time when books and movies and games are all enjoyed through a few consolidated, multi-purpose screens, and as the boundaries between media blur, and as those bits of media all share shelf space in the same virtual stores — it makes sense to come up with a catch-all word. Fine. I would prefer simply “media,” but I can hold my nose and use “content.”
What really rankles me is the verb that gets matched with “content.” Because once you have “content,” once you reach that level of generalization, there is only one thing you can do with it: consume it. It used to be that you would read a book, or watch a movie, or flip through a magazine, or listen to music. Now all those things fall under the category of “content consumption.” Yes, that’s the phrase everybody, at least in the media and tech worlds, seems to have settled on. I’m not going to link to examples. Google it if you must.
Others have noted that the word “consumption” as applied to “content” is awkward and imprecise. But it’s worse than that. The problem with “consumption” is that it takes actions like reading and watching and playing — actions that, though sedentary, are mentally active — and renders them entirely passive, painting us as consumers sucking at the teat of iTunes rather than collaborators in meaning, integrators of experience. Customers rather than an audience.
Maybe this was inevitable as our information economy matures: “Content” is America’s future stock-in-trade, and the formerly cloistered and subtle language of ideas had no chance once capitalism moved in.
I’m not saying that art and commerce weren’t related before. A Van Gogh picture was worth a lot of money long before “content consumption” reared its head — but that was a valuation of the object itself, not of one’s engagement with it. (Yes, you buy a museum ticket to see a Van Gogh, but museums aren’t selling transcendental aesthetic experiences at $20 a pop.)
“Content consumption” joins ad services’ habit of “selling eyeballs” in tallying our time and mental activity and processing it into a crude fuel to turn the wheels of commerce.
I’ve been reading Uncreative Writing by writer and UbuWeb founder Kenneth Goldsmith, who thinks that writing is ripe for a revolution. He thinks of language not solely as semantic content but also as raw material — material that can be transformed by computers, or written from scratch by computers, sometimes even meant to be read only by other computers. In effect, writers get abstracted (promoted?) a level, from generating language to managing its creation and manipulation.
It has me wondering what this would mean for stories. Goldsmith’s narrative examples involve the appropriation of existing narrative material. What would it look like to not make up a story but to manage a machine generating a story?
The only example I’ve found is A Ship Adrift. Perfectly, the credits say that “a ship adrift is a thing ‘by’ james bridle.” Bridle explains on his blog that he began by creating a system to read information from a weather station:
A Ship Adrift takes the data from that weather station and applies it to an imaginary airship piloted by a lost, mad AI autopilot. The ship is drifting because the pilot is mad or the pilot is mad because the ship is drifting; it doesn’t really matter.
If the wind whips eastwards across the roof of the Southbank centre at 5mph, then the Ship Adrift floats five miles to the East. […]
As the Ship drifts, it looks around itself. It doesn’t know where it is, but it is listening. It’s listening out for tweets and foursquare check-ins and posts on dating sites and geotagged Wikipedia articles and it is remembering them and it is trying to make something out of them. It is trying to understand.
The ship is lost, and I don’t know where it’s going. I don’t know what it’s going to learn, but I want to work with it to tell some stories. I want to build a system for cooperating with software and chance.
The result is, to my eye, gibberish. But it’s gibberish on a timeline, written by hundreds of people and amalgamated through a partnership of man and machine. And if you look at the text in the context of the wandering little dot representing the ship, it’s even a little poignant. Perhaps the beginning of something.
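Bridle hasn’t published the internals, but the drift mechanic he describes (wind data moving an imaginary ship, and nearby found text gathered and stitched together) can be sketched roughly. All names and formulas here are my own guesses, not Bridle’s code:

```python
import math

def drift(lat, lon, wind_speed_mph, wind_bearing_deg, hours=1.0):
    """Move the ship the distance the wind travelled, in the wind's direction.

    Bearing is compass-style: 0 = north, 90 = east. One degree of latitude
    is roughly 69 miles; a degree of longitude shrinks with latitude.
    """
    miles = wind_speed_mph * hours
    rad = math.radians(wind_bearing_deg)
    d_lat = (miles * math.cos(rad)) / 69.0
    d_lon = (miles * math.sin(rad)) / (69.0 * math.cos(math.radians(lat)))
    return lat + d_lat, lon + d_lon

def amalgamate(fragments, limit=3):
    """Stitch a few fragments of found text into one utterance for the ship."""
    return " / ".join(fragments[:limit])

# A 5 mph easterly wind over the Southbank Centre drifts the ship about five
# miles east, where it might overhear new tweets and check-ins to amalgamate.
lat, lon = drift(51.5056, -0.1166, wind_speed_mph=5, wind_bearing_deg=90)
```

Gibberish would still come out the other end, of course; the point is only that the authorship sits in the design of the system, not in any individual sentence.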
This Wired article on the return of the show Arrested Development on Netflix is worth reading in its entirety, but I want to pull out these interesting tidbits about narrative form:
Arrested Development is exploring the more playful, outré structural possibilities offered by the new platform on Netflix. Each episode will cover events from a different character’s point of view, like a comedic Rashomon. There will be moments and Easter eggs that will make sense only in retrospect. There will be a suggested viewing sequence, but it will be possible—even rewarding—to watch out of sequence. Cross describes the new structure as being “like if you could mash up a Venn diagram with a nautilus shell. And then put that inside a Möbius strip.”
The new Arrested Development is not just a seven-hour movie. It’s something new—a collection of episodes released altogether that can be remixed and recombined and that gain something from each juxtaposition. Right now that’s a framework only Netflix can offer. Asked what the show would have been like had Showtime won its bid, Hurwitz says, “I know that storytelling-wise, saner ideas might have prevailed.”
House of Cards was an earlier Netflix effort to put its own money into a show and release the entire season at once, and it justifiably got a lot of press, but it was basically a seven-hour movie. From a storytelling standpoint this might be the bigger deal.
Since my previous post, in which I responded to Richard Brody’s idea that codified story development processes were turning movies into formulaic gruel, I’ve been mulling over different ways to describe story structure. I think it’s helpful to divide these descriptions into three types.
Universals

The literary scholar Jonathan Gottschall argues that stories share a universal deep structure:

As the linguist Noam Chomsky showed, all human languages share some basic structural similarities — a universal grammar. So too, I argue, with story. No matter how far we travel back into literary history, and no matter how deep we plunge into the jungles and badlands of world folklore, we always find the same astonishing things: their stories are just like ours. There is a universal grammar in world fiction, a deep pattern of heroes confronting trouble and struggling to overcome. (p. 55)
This grammar is versatile. Gottschall compares it to the human face: the same deep structural pattern the world over, but enabling great variety. It is composed of the structural feature of conflict — the pattern of “complication, crisis, and resolution” — as well as a handful of universal themes about sex and love, death, the desire for power, etc.
The argument recalls Joseph Campbell’s “monomyth,” in which the hero faces a conflict that requires separation from his home, initiation into an unknown world, and a return with special knowledge or powers. Campbell gets much more specific than Gottschall, but the claim to universality is the same, and Gottschall, consciously or not, even echoes the title of Campbell’s most famous book, The Hero with a Thousand Faces, when he describes the universal grammar using the metaphor of the human face.
Often when you hear about Campbell today, it’s in the context of advice from a screenwriting “guru” about how to structure your script. But in those instances, the monomyth has been turned into something more like one of the two categories below.
Norms

I’m borrowing the term “norm” from the film scholar David Bordwell. Norms aren’t instructions; they are standardized sets of options.
In Hollywood film style, for instance, there are several ways to make spatial arrangement stress an important point: instead of a cut-in to a close-up, we can get a track-in, or a shift in lighting, or a character’s movement into the foreground. A norm is usefully considered as what semiologists call a paradigm—a bounded set of alternatives which at some level serve equivalent functions. (Narration in the Fiction Film, p. 151)
That example is a norm of style, but there are also norms of story construction, and this strikes me as a concept that can be applied to any medium.
Here are some story, plot, and narration norms Bordwell identifies for classical cinema:
stories present psychologically defined individuals who struggle to solve a clear-cut problem
plots end with a decisive victory or defeat
causality is the prime unifying principle, and space and time are arranged to emphasize cause and effect
narration tends to be omniscient, highly communicative, and only moderately self-conscious (i.e., the film is free to move around in the fictional world and tell us more than any single character knows, and a film rarely makes a show of the fact that it is a film)
These norms are not universal. Different norms take hold in different parts of the world and in different times. Bordwell identifies other storytelling traditions that are more episodic and elliptical, in which the goal is vague and the end inconclusive.
Rules

I reserved the word “rules” for this third type because, unlike the two categories above, they are meant to be prescriptive rather than descriptive. Rules are most often used by writing instructors, whereas the other two categories seem to be the province of academics.
Rules are instructions for fulfilling norms. They tend to range from the specific to the crazily specific. I’m reading a book on screenwriting called Save the Cat by Blake Snyder, in which he says:
…the trick is to create heroes who:
Offer the most conflict in that situation
Have the longest way to go emotionally and…
Are the most demographically pleasing!
Those first two rules are ways to fulfill the classical norms of a psychologically defined character with a clear goal. (The third rule is unabashed commercialism.)
He goes on from there to identify which pages certain plot points should happen on: the catalyst on page 12, act break on page 25, b-story starting at 30…
It’s easy to sniff at this stuff from an academic perch, but I’m writing a screenplay right now and it’s been useful.
I’m tempted to always put the word “rules” in quotation marks because I don’t think there are inviolable storytelling rules, but I’ll resist because I think that’s generally understood among writers, as evidenced by the popular saying that you should ‘know the rules so you can know when to break them.’
The Goldilocks approach
I’m feeling somewhat Goldilocks about my list: “rules” feel restrictive and prescriptive. Universals are also problematic: too broad for practical use, and there will always be outlying examples that cause trouble for academics. Gottschall is forced to write off Finnegans Wake and Gertrude Stein’s experiments in writing stories in which “nothing much happens” as failures.
Norms, though, feel just right. I see them being more useful more often to scholars, but I also think practitioners of story (writers, filmmakers, game designers, etc.) would get something out of knowing where the “rules” come from — that they are tried and true ways of fulfilling certain cultural norms, and perhaps sometimes universals, and not the invention of self-appointed experts.
Norms’ usefulness to storytellers might increase in the future as the boundaries between media dissolve: knowing to put the inciting incident on page 12 of a screenplay doesn’t do much good when you’re trying to craft a transmedia experience, but staying focused on presenting a story space with clear cause and effect and highly communicative narration would be essential.
Brody refers to “a crisis that is endemic to the modern cinema, that is, in fact, one of the strange, unintended consequences of cinematic modernity: the very notion of ‘storytelling’ and the obsession with characters and whether they’re admirable or likable.”
The problem, he says, is not so much with classical cinema’s relying on narrative, but that story should be “a basis, not a goal […] merely a starting point for a significant work, not a result.” This is worse than simply a missed opportunity: the focus on story not only precludes invention in other aspects of cinema, but saps the energy of story itself. The rules around storytelling have become so stringently normalized that the movies become “a delivery system for a uniform set of emotional juicings, and the result, whether for C.G.I. or for live-action films, is a sort of cyborg cinema, a prefabricated simulacrum of experience and emotion that feels like the nexus of pornography and propaganda.”
That’s my best try at summarizing an argument that I find unconvincing. (I have skipped some speculation about how this state of affairs came to be and what the popularity of such content means for society.) So we have two related problems: an allegedly restrictive list of storytelling “rules” resulting in the cinematic equivalent of fast food, and a focus on those rules that precludes invention in other aspects of cinema. Neither is a real problem.
The rules he cites as examples (which the author of the list, Emma Coats, calls “story basics” rather than rules) are not at all restrictive: things like making sure the stakes for the main character are high, and giving us a reason to side with him or her. If these rules are ruinously restrictive, then we are writing off all of classical cinema.
There may be an argument to be made that these broad rules are being followed in narrow, unimaginative ways, that a good portion of films being released are formulaic. But I don’t see how this is a problem specific to “modern cinema” unless by modern Brody means since the 1910s. Really inventive films have always been rare.
As to the idea that formulaic films evoke a mere simulacrum of emotion in the audience, the equivalent of audio-visual fast food, Brody expands on that point later in the piece to say that the emotion he primarily refers to is the false relief of the collective feeling of isolation we moderns carry around with us. It’s an interesting and somewhat baffling hypothesis I would like to see supported.
The main problem here is that Brody confuses a focus on story and character with a myopic focus on script. He hints at this when he complains “[W]hen the script has been built with such solidity and has become an object of obsession for a year or more, it becomes not the springboard for filmmaking but its objective, and constrains the filmmaker to be its illustrator.” No argument here. If a director does nothing but cover the script, the movie will probably limp along. But script and story are not the same thing.
Brody thinks that a strong filmmaker approaches a script with an attitude of “creative destruction.” But a director’s inventiveness must be organized around something — a principle or skeleton that ensures a cohesive whole. The theorist Roman Jakobson called this thing “the dominant.” In classical cinema, the dominant is the narrative. (There are filmmaking traditions outside Hollywood — and mostly outside the West entirely — for which the story is not the dominant. But I don’t think Brody’s aim is to get us to write off all of Western cinema.)
Think of a formally daring filmmaker like Welles. His unusual camera positions don’t just look cool — they make a character big or small at certain points in the plot. His lighting evokes moods that echo characterization. His decisions serve the story. A focus on story does not preclude inventiveness; it enables its success by providing both a grounding and an opportunity for embellishment. The key is not to deviate from the script but to go above and beyond it.