Questions to ask yourself as you consider interactivity in storytelling
A. O. Scott recently articulated a trend that a lot of people sense: the distance between movies, TV shows, web series, and video games is growing smaller and smaller:
Equally hard to refute is the idea that we are approaching a horizon of video convergence, in which all those screens will be equal and interchangeable and the distinctions between the stuff that’s shown on each one won’t seem as consequential as it does now. We still tend to take for granted that a cable drama, a network sitcom, a feature film, a web video and a first-person combat game are fundamentally different creatures, but they might really be diverse species within a single genus, their variations ultimately less important than what they have in common. They are all moving pictures, after all, and as our means of access to them proliferate and recombine, those old categories are likely to feel increasingly arbitrary and obsolete. The infrastructure of a multiplatform future is before us, and resistance to it can look like an especially tiresome kind of sentimentality.
Spoiler alert: Scott doesn’t think cinema is going away. But convergence doesn’t have to mean a complete dissolution of boundaries between media, and it is a useful context in which to think about the connections between these forms.
The common element linking films, TV, video games, etc., is often thought to be story. And that’s a useful vector from which to approach the question of convergence, but perhaps not the most fundamental: films can be non-narrative, and non-narrative video games exist as well.
A candidate for a true shared foundation comes from a recent essay by designer Frank Chimero. In “What Screens Want,” he starts with what is often heralded as the beginning of cinema and instead posits it as being, more generally, the beginning of an aesthetic of screens:
Muybridge’s crazy horse experiment eventually led us to the familiar glow of the screen. If you’re like me, and consider Muybridge’s work as one of the main inroads to the creation of screens, it becomes apparent that web and interaction design are just as much children of filmmaking as they are of graphic design. Maybe even more so. After all, we both work on screens, and manage time, movement, and most importantly, change.
Rather than pixels or celluloid, Chimero sees this capacity for movement as the most salient material quality of screens, one that binds together the art forms presented on them:
Just like any material, screens have affordances. Much like wood, I believe screens have grain: a certain way they’ve grown and matured that describes how they want to be treated. The grain is what gives the material its identity and tells you the best way to use it. […] the grain of screens is something I call flux […] Flux is the capacity for change.
Cinema and graphical software are blood relatives. And Chimero paints film as the father, mentioning that software developers would do well to adopt some of the language we use to understand the cinema.
This isn’t a new idea: there are experiments at the intersection of software and the traditional arts both old and recent. But with the film and video game industries on a collision course, and high-definition interactive screens in a majority of pockets, this is not going to remain solely an avant-garde concern for much longer.
But I want to get to this topic of theme […] That sense of, Rian’s movies certainly, and I think the movies that I’m proudest of that I’ve worked on, there’s this kind of fractal quality to it. They’re thematically whole enough that you could take any one scene from them and cut it out and like put it in nice fertile soil and it would grow into a shape of that movie.
Like genetically it’s all part of one consistent thing. And that’s a thing I definitely find in your films is that they’re all of one piece and there’s a central idea, a central thematic idea that is whole. And I find it very hard to start writing until I kind of know what that is. If I don’t have some touchstone to go back to, like this is what the movie feels like, this is what the movie is, it’s very hard to do that.
— John August, transcribed from Scriptnotes podcast 115
There is an aesthetic crisis in writing, which is this: how do we write emotionally of scenes involving computers? How do we make concrete, or at least reconstructable in the minds of our readers, the terrible, true passions that cross telephony lines? Right now my field must tackle describing a world where falling in love, going to war and filling out tax forms looks the same; it looks like typing.
It’s the same problem filmmakers have with hackers – during the height of their drama, they sit there, inert, typing. This is why fiction keeps inventing high drama metaphors of traditional physical life for the shared internal life of the net, ala The Matrix and Snow Crash.
In a dispatch from the Vancouver International Film Festival, film scholar Kristin Thompson relates craft tips from screenwriter Terry Rossio (Aladdin, Pirates of the Caribbean):
Rossio is a big advocate of succinctly creating a strong visual sense in each scene. Even on Rossio’s desktop he comes up with a distinctive icon for each folder (see top): a Rubik’s cube for “Screenwriting,” a little gramophone for “Music,” and so on.
For Rossio, each scene should consist of:
- Opening image
- Key moment (character revelations, reversals, etc.)
- Throw (i.e., the setup for the next scene)
I love the mention of a desktop. What a good metaphor: each scene should have its icon.
Also, he takes the old question of whether writing can be taught (and the implication that if not, you must be born with it) and turns it on its head:
He feels that it is probably impossible to teach screenwriting: “No, the better question is, can writing be learned?” Yes, but people must teach themselves.
Ready for the best distillation of screenwriting manuals and the most concise critique of them that I have ever seen? In a short slide deck, Eric Hoyt, Assistant Professor of Media & Cultural Studies at the University of Wisconsin-Madison, compares the prescriptive three act structure that Hollywood hopefuls are told to follow with the four act structure that film scholar Kristin Thompson uses to describe the commonalities of dozens of well-crafted films.
I am thinking of a writing machine that would bring to the page all those things that we are accustomed to consider as the most jealously guarded attributes of our psychological life, of our daily experience, our unpredictable changes of mood and inner elations, despairs and moments of illumination. What are these if not so many linguistic “fields,” for which we might well succeed in establishing the vocabulary, grammar, syntax, and properties of permutation?
But one consequence is common to Goldsmith’s and Calvino’s visions: the death of the traditional author. Calvino again:
Literature as I knew it was a constant series of attempts to make one word stay put after another by following certain definite rules; or, more often, rules that were neither definite nor definable, but that might be extracted from a series of examples, or rules made up for the occasion—that is to say, derived from the rules followed by other writers. […] A writing machine that has been fed an instruction appropriate to the case could also devise an exact and unmistakable “personality” of an author, or else it could be adjusted in such a way as to evolve or change “personality” with each work it composes. Writers, as they have always been up to now, are already writing machines; or at least they are when things are going well.
Calvino is content with his thought experiment: he says that “it would not be worth the trouble of constructing such a complicated machine” as he describes. But one wonders whether artificial intelligence might progress to such a place. We are already trying to write under the influence of data. People decry that as forcing the writer into a well-traveled rut, but add a little AI and writing might become more adventurous:
…given that developments in cybernetics lean toward machines capable of learning, of changing their own programs, of developing their own sensibilities and their own needs, nothing prevents us from foreseeing a literature machine that at a certain point feels unsatisfied with its own traditionalism and starts to propose new ways of writing, turning its own codes completely upside down.
Calvino perhaps understates the difficulty of getting machines to ‘develop their own sensibilities’ (he was writing in 1967), but still: imagine what might be added to Goldsmith’s vision of fashioning new art from existing literary material if that material began to have even a rudimentary mind of its own, if it danced with us a little bit, like sculpting with living clay.
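Calvino’s writing machine sounds exotic, but its most rudimentary form, a machine that extracts rules from a series of examples and then recombines them, is easy to sketch. Here is a toy illustration of my own (not anything Calvino or Goldsmith describes): a word-level Markov chain that learns which word tends to follow which in some sample text, then “writes” by following the rules it extracted.

```python
import random
from collections import defaultdict

def learn_rules(text):
    """Extract crude 'rules' from examples: which words follow which."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def write(follows, start, length=12, seed=0):
    """'Compose' by repeatedly choosing a word the learned rules allow."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # no rule for this word: the machine falls silent
            break
        out.append(rng.choice(options))
    return " ".join(out)

sample = ("the machine writes and the writer reads and "
          "the reader writes and the machine reads")
rules = learn_rules(sample)
print(write(rules, "the"))
```

A toy like this has no sensibility of its own, of course; it only restates its examples, which is precisely the “well-traveled rut” Calvino imagined a more capable machine overturning.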
These debates about the bounds of fair use will always be important, but they obscure a very unfair dynamic that is squeezing artists — and turning the web into a battleground between humans and machines. The trouble is that in many cases today, there’s no human artist, writer, or editor creating what we see on the web. Some algorithm assembled the photos and it’s enjoying a nice little loophole. The machines sail on past the rules about copyright because the law lets those companies blame any infringement on the chaos of the internet. It’s a system that’s tilting the tables against any of the human artists who write, edit, or illustrate.
The automated machines have me and the photographers beat. Aggregators — whether listmakers, search engines, online curation boards, content farms, and other sites — can scrape them from the web and claim that posting these images is fair use. (BuzzFeed claims that what it does is “transformative,” allowing them to call their lists a new creation.)
We already know these companies make a profit on the ads. But what we don’t know is that the algorithms they use are acting less and less like a card catalog for the web and more and more like an author. In other words, the machine isn’t just a dumb hunk of silicon: It’s a living creator. It’s less like a dull machine and more like a fully functional, content-producing Terminator.
Looking at what these machine-authors are doing, you might call it remixing or you might call it plagiarism. I guess it depends on what is being fed into the machine, and perhaps on how close the human oversight is. But it would be a mistake to think that remixing or plagiarizing is all machines are good for. John Brownlee describes the role of computers in avant-garde music composition:
A grad student pursuing his doctorate in composition at Harvard University, Oberholtzer applies the techniques of electronic music to compose works meant to be played by human orchestras. Instead of just stringing note after note, Oberholtzer uses a series of custom tools to translate a nebulous musical intention into a human-readable score. He does this by trying to define in words what the finished piece will sound like.
“When I want to capture some new music concept or idea, I’ll usually write a tool first, then think about it a lot and work it into a piece,” says Oberholtzer. “These tools are kind of like meta-instruments, and I can even write tools on top of tools, giving me a wider palette.”
For Oberholtzer, this seems like a perfectly natural way to write music. “All art is a kind of curatorship. You work through all these possibilities mentally, and then in the end, you try to reproduce the one you’ve decided upon. There’s no difference for me. My computer isn’t writing my music for me. It’s just handling the version control.”
Peter Wayner, in the first quote above, saw an intellectual-property Terminator because Google et al. were crawling and remixing the web without benefiting the original artists. But this composer demonstrates that involving machines in art isn’t always about an insidious corporate IP grab. Imagine what could be accomplished if machine collaboration in art were brought to a larger scale while still remaining un-corporate, under the direct oversight of artists.
It’s already happening. The documentary CLOUDS (not yet released) takes a look at artists who are using code to make art, and in some cases sharing code in order to create things they couldn’t create working in isolation.
Imagine that. Invent a cool new art tool? Put it on GitHub and watch your friends make it better.
For all the talk of how transformative the internet is for artists and creatives, I feel like we’re still at the beginning. Most of the energy has been poured into new distribution mechanisms for existing forms: ebooks with faux page-turn animations, videos and pictures that strive for a lean-back theater or museum experience. We have barely begun to scratch the surface of the unique opportunities enabled by computing and networks.
I’ve been noticing lately some interesting experiments in interactive storytelling. Take a look:
- Black Crown is an online interactive story that The Verge calls “a strange blend of interactive fiction and a classic choose-your-own-adventure novel.” They also use the term “game.” Interesting that this thing is coming out of Random House UK — publishers can be a progressive bunch.
- Haunting Melissa is a horror movie that is doled out bit by bit through an iOS app. But the interesting thing to me isn’t the serialized form or the App Store distribution, it’s the fact that the narrative can change a little bit in the rewatching, as TechCrunch explains: “To keep viewers hooked on “Haunting Melissa” even though there isn’t a regular schedule, the creators developed technology to allow them to add dynamic story elements to each chapter. In other words, if viewers re-watch a chapter, they see or hear different things that add new layers to the narrative and help set the atmosphere of the ghost story.”
- A March Story was a serialized story that used suggestions from the twitterverse about how to fill in certain details, ranging from names to whole sentences, and those details helped guide the evolution of the story. Like Mad Libs, narrative edition.
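The “Mad Libs, narrative edition” mechanic in that last example is simple enough to sketch. Here is a minimal, hypothetical version in Python (the template, field names, and suggestions are my own invention, not taken from A March Story): the author writes a scene with named blanks, and audience suggestions fill them in.

```python
import string

def fill_template(template, suggestions):
    """Fill an author's template with audience-supplied details.
    Blanks are named placeholders like {hero} or {weather}."""
    fields = [f for _, f, _, _ in string.Formatter().parse(template) if f]
    missing = [f for f in fields if f not in suggestions]
    if missing:
        raise ValueError(f"still collecting suggestions for: {missing}")
    return template.format(**suggestions)

template = "On a {weather} morning, {hero} decided to {action}."
audience = {"weather": "foggy", "hero": "Ms. Ardent", "action": "leave town"}
print(fill_template(template, audience))
# → On a foggy morning, Ms. Ardent decided to leave town.
```

The interesting design question is how much structure the author keeps: in this sketch the sentence shapes are fixed and only the details are crowdsourced, which is roughly the division of labor the twitterverse experiment describes.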
These examples, in which the reader/viewer is given some measure of control, remind me of the debate about whether games can ever be Art, or whether their interactive nature disqualifies them. Roger Ebert thought that games could not be art even in principle, and one of his reasons was the importance of a single, intentional narrative. Here he is summarizing a debate with filmmaker and game creator Clive Barker:
“I think that Roger Ebert’s problem is that he thinks you can’t have art if there is that amount of malleability in the narrative. In other words, Shakespeare could not have written ‘Romeo and Juliet’ as a game because it could have had a happy ending, you know? If only she hadn’t taken the damn poison. If only he’d have gotten there quicker.”
Well, yes, that is what I think. There was actually a time in history when a version of Romeo and Juliet was performed with a happy ending, and I can’t begin to tell you how much that depressed audiences.
Barker: “Let’s invent a world where the player gets to go through every emotional journey available. That is art. Offering that to people is art.”
Ebert: “If you can go through ‘every emotional journey available,’ doesn’t that devalue each and every one of them? Art seeks to lead you to an inevitable conclusion, not a smorgasbord of choices.”
Gamers also recognize the question of control as essential, and in fact deride games whose heavy-handed storytelling leads you along “on rails,” unable to move where you like. Games shouldn’t be like a novel or a movie. But, they say, that doesn’t mean they are not art; they are just a new art form.
I thought about those works of Art that had moved me most deeply. I found most of them had one thing in common: Through them I was able to learn more about the experiences, thoughts and feelings of other people. My empathy was engaged. I could use such lessons to apply to myself and my relationships with others. They could instruct me about life, love, disease and death, principles and morality, humor and tragedy. They might make my life more deep, full and rewarding.
Not a bad definition, I thought. But I was unable to say how music or abstract art could perform those functions, and yet they were Art. Even narrative art didn’t qualify, because I hardly look at paintings for their messages. It’s not what it’s about, but how it’s about it. As Archibald MacLeish wrote: A poem should not mean, but be.
Ebert never settles on a definition of art, but note that the attempts above have to do with the things that art does to him: moves him, instructs him, engages his empathy. Of course no one could accuse Ebert of having been a passive consumer of art, but he seems to presume that one lets art work on you first, and then you emerge (from the movie theater, the gallery, the pages of a novel) primed to respond. I wonder if gamers have the same assumption, or if they see the action/reaction stages of art experience as much more collapsed.
Truth be told, I’m much more on Ebert’s side: I prefer to sit back and focus, to let the author/director/artist guide me somewhere, show me something, and let me see out of someone else’s eyes. But I’m not ready to proclaim games and other interactive experiences outside the realm of art on principle. I’m wondering: what would a more sophisticated interactive storytelling experience be like? How could we progress beyond a choose-your-own-adventure-style set of a few possible outcomes, which pays only lip service to viewer/player empowerment, and still maintain a story that is coherent: ends that pay off beginnings, characters with satisfying arcs of development, and so on?
The only way I can think of, and perhaps it wouldn’t work anyway, is to give the viewer/player more control over the story than they currently have in any existing narrative game. Recall the Kuleshov experiment, in which viewers were shown the same few film clips, but in different orders: they constructed a narrative about who was looking at whom, and what they were thinking and feeling. Maybe for narrative storytelling to embrace interactivity fully, rather than providing an experience “on rails” or a weak choose-your-own-adventure among four or five plot threads, the author has to become a generator of raw story material that the viewer/player assembles into a story.
Would that mean that for it to be great art the viewer as well as the creator would have to be a great artist? Maybe not. We are all storytelling creatures, making sense of our own identities and those of everyone around us by taking some details, discarding others, and assembling them into something coherent, constantly, quickly, and often unconsciously. Given the best ingredients and the best tools for splicing, the evolution of interactive storytelling might mean the “author” provides potential story material and the viewer/player provides the plot.
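To make the author-as-material-generator idea concrete, here is a toy sketch of my own (the fragments are invented, and this is an illustration, not a proposal for real tooling): the author supplies scene fragments, and the viewer, not the author, chooses the montage, Kuleshov-style, so different orderings yield different stories from identical material.

```python
def assemble(fragments, ordering):
    """Splice author-supplied fragments in the viewer's chosen order."""
    return " ".join(fragments[i] for i in ordering)

# The author provides raw story material...
fragments = [
    "She looks out the window.",
    "A letter lies unopened on the table.",
    "Somewhere, a door slams.",
]

# ...and the viewer supplies the plot by choosing the sequence.
print(assemble(fragments, [1, 0, 2]))  # letter first: she waits in dread
print(assemble(fragments, [2, 0, 1]))  # slam first: she surveys the aftermath
```

Even with three fragments there are six possible cuts, which is the whole point: the meaning lives in the splice, and the splice belongs to the viewer.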
This recalls, in a way, Kenneth Goldsmith’s argument in Uncreative Writing about the promotion of the author from one who writes to one who controls, curates, programs, and remixes literary material. In this vision, the artist is promoted from storyteller to story-material generator, or perhaps more catchily “worldbuilder,” creating the material and the tools for someone on the other end to splice together.
I don’t know yet what any of that would actually look like, but we’re not going to find out by debating what’s art and what’s not. We’ll get there by experimenting and then debating about what’s good and what’s not.
The other day I was listening to a very good episode of On The Media but it reminded me of a troubling linguistic trend. I’m about to rant a little. Indulge me.
The word “content” bothers me when used to refer to art or journalism or other creative expression. I don’t hate it, but it gives me that same sort of feeling of transient disgust that you get walking down the street and getting a whiff of sewer gas. I suppose it makes sense — in a time when books and movies and games are all enjoyed through a few consolidated, multi-purpose screens, and as the boundaries between media blur, and as those bits of media all share shelf space in the same virtual stores — it makes sense to come up with a catch-all word. Fine. I would prefer simply “media,” but I can hold my nose and use “content.”
What really rankles me is the verb that gets matched with “content.” Because once you have “content,” once you reach that level of generalization, there is only one thing you can do with it: consume it. It used to be that you would read a book, or watch a movie, or flip through a magazine, or listen to music. Now all those things fall under the category of “content consumption.” Yes, that’s the phrase everybody, at least in the media and tech worlds, seems to have settled on. I’m not going to link to examples. Google it if you must.
Others have noted that the word “consumption” as applied to “content” is awkward and imprecise. But it’s worse than that. The problem with “consumption” is that it takes actions like reading and watching and playing, actions that though sedentary are mentally active, and reduces them to something entirely passive, painting us as consumers sucking at the teat of iTunes rather than as collaborators in meaning, integrators of experience. Customers rather than an audience.
Maybe this was inevitable as our information economy matures: “Content” is America’s future stock-in-trade, and the formerly cloistered and subtle language of ideas had no chance once capitalism moved in.
I’m not saying that art and commerce weren’t related before. A Van Gogh picture was worth a lot of money long before “content consumption” reared its head — but that was a valuation of the object itself, not of one’s engagement with it. (Yes, you buy a museum ticket to see a Van Gogh, but museums aren’t selling transcendental aesthetic experiences at $20 a pop.)
“Content consumption” joins ad services’ habit of “selling eyeballs” in tallying our time and mental activity and processing it into a crude fuel to turn the wheels of commerce.
Anyway, thanks for consuming my blog.