From “The Dinner Party” by Joshua Ferris:
On occasion, the two women went to lunch and she came home offended by some pettiness. And he would say, “Why do this to yourself?” He wanted to keep her from being hurt. He also wanted his wife and her friend to drift apart so that he never had to sit through another dinner party with the friend and her husband. But after a few months the rift would inevitably heal and the friendship return to good standing. He couldn’t blame her. They went back a long way and you get only so many old friends.
He leaped four hours ahead of himself. He ruminated on the evening in future retrospect and recalled every gesture, every word. He walked back to the kitchen and stood with a new drink in front of the fridge, out of the way. “I can’t do it,” he said.
Did you catch that? A new drink. Ferris could have had another paragraph or two there, with beautiful and clever language explaining that our narrator had started drinking two hours ago, was on his third, and liked to pair his dry reds with cutting loquaciousness.
Instead what we get is the word “new” and everything else left inferred. Blink and you miss it. The economy is so extreme that it’s a signal: Slow down or you’re not going to get it all.
It’s also a particularly subtle sort of indirect discourse: the narrator is being kind to himself by not dwelling on the decision to have another, and the pouring, and the sipping. Our narrator would just as soon elide all mention of it—Ferris sneaks in “new.”
What Ferris didn’t write speaks a lot louder than anything he could have written.
Robin Sloan’s latest experiment is a set of AI-generated fantasy texts, each one unique and snail-mailed to the people whose Mad Libs-style prompts helped create it. Sloan had this to say about influencing the AI, the language model GPT-2, to make its output more interesting:
look closely at the final prompt. Notice the empty string at the end:
`prompt "The world was quiet.", 1, ""`
As I was fiddling with these prompts, my friend Dan proposed an idea: what if the text that GPT-2 received and the text the reader read were sometimes different? In the case above, what’s happening is that GPT-2 is seeing the line “the world was quiet,” which will influence the text it generates; however, “the world was quiet” is not being shown to the reader. The reader is instead seeing… nothing. An empty string. So the reader sees only GPT-2’s response to “the world was quiet,” which in practice goes something like
> No fires burned, and no lamps were lit.
> Every so often, a breeze would rustle the trees and make them shimmer.
> For a few moments, he thought he heard the distant sound of an ancient love song.
I think that’s really lovely! There’s no need to preface those lines with “the world was quiet”; they communicate that on their own. This technique of showing text to GPT-2 that you conceal from the reader is a sneaky way of telling the system what you want. It’s the hidden agenda, the moon behind the clouds. I think it’s potentially very powerful, but/and I’ve only scratched the surface here.
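The mechanics of the trick are simple enough to sketch. Here is a minimal illustration in Python, with a stand-in function in place of a real call to GPT-2; the names (`generate`, `render_passage`) and the tuple shape are hypothetical, not Sloan's actual code:

```python
def generate(prompt: str) -> str:
    """Stand-in for a language model call (e.g. GPT-2), which would
    normally return the prompt followed by a generated continuation.
    Here we fake one continuation so the sketch runs on its own."""
    continuations = {
        "The world was quiet.": (
            "The world was quiet. No fires burned, and no lamps were lit."
        ),
    }
    return continuations.get(prompt, prompt + " ...")


def render_passage(prompt: str, shown: str) -> str:
    """The model sees `prompt`; the reader sees `shown` plus the
    continuation. The hidden prompt is stripped from the output."""
    full = generate(prompt)
    continuation = full[len(prompt):].lstrip()
    if shown:
        return shown + " " + continuation
    return continuation


# Mirrors Sloan's example: prompt "The world was quiet.", 1, ""
# The empty string means the reader sees only the response.
print(render_passage("The world was quiet.", ""))
# prints: No fires burned, and no lamps were lit.
```

The key move is that `prompt` and `shown` are independent: the hidden text steers the model's tone and content, while the reader encounters only the response, left to infer the premise on their own.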
Hard-coding subtlety—a useful skill when you’re writing a writer.