Thanks to what Wired published last week, you can now read a piece of fiction written, in part, by a non-human.
Last week, science fiction author Stephen Marche collaborated with researchers Adam Hammond and Julian Brooke for a feature story/experiment for Wired magazine entitled “What Happens When an Algorithm Helps Write Science Fiction.”
For the experiment, the researchers built a database from Marche’s 50 favorite works of sci-fi and used it as a point of comparison for Marche’s own writing. They then designed SciFiQ, a special algorithm-based program that guided Marche’s writing process via a set of 14 writing rules.
Basically, SciFiQ, a non-human algorithm, acted as Stephen Marche’s writing tutor.
In Marche’s own words, which themselves read like something out of a sci-fi story, this is how the program worked:
“When I typed in a word or phrase and it was more than a little different than what SciFiQ had in mind, the interface would light up red or purple. When I fixed the offending word or phrase, the interface would turn green.”
You can follow the link to read the resulting story. It’s hard for me to judge the content, since I’m only a little familiar with the sci-fi tradition. The basic plot follows a group of scientists who are viewing Earth from afar and making big, sciencey predictions about the world; I got bored and started reading the footnotes instead. These footnotes explain the guidance SciFiQ provided.
The important insight, for now, is that sci-fi has its own set of patterns that algorithms can apparently pick up on and turn into rules, even in really high-quality sci-fi (the algorithm ran on data from Ray Bradbury, Ursula K. Le Guin, and Philip K. Dick, certainly no slouches). This is one way that robots can get involved in human storytelling: by analyzing patterns in texts. Some of these patterns are grammatical, and some of them are thematic.
So what does a computer think good (sci-fi) storytelling looks like?
The rules that SciFiQ created for Marche range from surprising to obvious to outright hilarious.
For instance, Rule #5 was a theme rule that dictated that “part of the action should unfold at night during an intense storm.” This, paired with a grammatical rule that directed Marche to add a certain number of adverbs in order to meet a “per 100 words” quota, produced this doozy of a sentence:
“The lightning from the storm was continuous enough that the room needed no other illumination, and Anne’s skin tingled furtively with the electricity in the air.” Yikes.
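The Wired piece doesn’t publish SciFiQ’s code, so as a toy illustration only, here is what a per-100-words adverb quota with red/green feedback might look like. The “-ly” heuristic, the target rate of 3 adverbs per 100 words, and the tolerance are my assumptions, not SciFiQ’s:

```python
def adverb_rate(text: str) -> float:
    """Rough adverbs per 100 words, using the '-ly' ending as a crude proxy."""
    words = text.lower().split()
    if not words:
        return 0.0
    cleaned = [w.strip(".,;:!?\"'") for w in words]
    adverbs = [w for w in cleaned if w.endswith("ly") and len(w) > 3]
    return 100 * len(adverbs) / len(words)

def feedback(text: str, target: float = 3.0, tolerance: float = 1.0) -> str:
    """Mimic the red/green interface: 'green' when near the target rate."""
    return "green" if abs(adverb_rate(text) - target) <= tolerance else "red"

sentence = ("The lightning from the storm was continuous enough that the "
            "room needed no other illumination, and Anne's skin tingled "
            "furtively with the electricity in the air.")
print(round(adverb_rate(sentence), 1), feedback(sentence))  # → 3.8 green
```

With one adverb (“furtively”) in 26 words, the sentence above sits close to the hypothetical quota, which is roughly the kind of nudge that produced it in the first place.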
On the other hand, Rule #10 provides a very important insight into affective prose. In it, SciFiQ instructs the author not just to “include descriptions of intense physical sensations” (obvious), but also to “name the bodily organs that perceive these sensations.” This is a very good tip, one that relates to a writing principle I feel strongly about.
Even more surprising was Rule #11: “engage the sublime.” “The sublime” is a state that even a great writer might struggle to grapple with. How did SciFiQ determine what it was, let alone that it was important to a good story?
Well, luckily for us, it interprets its own rule in the next sentence, which directs the author to “Consider using the following words: vast, gigantic, strange, radiance, mystery, brilliance, fantastic, and spooky.” Apparently, using key adjectives lifted out of a database of historical sci-fi, the writing-tutor algorithm picked up on a common move that authors use to evoke a certain human emotion, and, more than that, deciphered those words’ tonal importance.
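The article doesn’t say exactly how Hammond and Brooke derived that word list, but one standard way to surface words like “vast” and “radiance” is keyword extraction by relative frequency: compare how often each word appears in a sci-fi corpus versus an everyday reference corpus, and keep the most over-represented ones. A minimal sketch, with made-up miniature corpora standing in for the real database:

```python
from collections import Counter

def keywords(target: str, reference: str, top: int = 3) -> list[str]:
    """Words most over-represented in `target` relative to `reference`."""
    t = Counter(target.lower().split())
    r = Counter(reference.lower().split())
    t_total, r_total = sum(t.values()), sum(r.values())
    # Add-one smoothing so words absent from the reference still score.
    ratio = {w: (t[w] / t_total) / ((r[w] + 1) / (r_total + 1)) for w in t}
    return sorted(ratio, key=ratio.get, reverse=True)[:top]

scifi = "the vast strange ship drifted through the vast radiance of space"
everyday = "the car drifted through the traffic on the way to the office"
print(keywords(scifi, everyday))
```

Common words like “the” score low because they are frequent in both corpora, while corpus-specific words like “vast” float to the top, which is plausibly how a sublime-flavored vocabulary list falls out of a pile of classic sci-fi.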
Some observations, re: robot storytelling
- Robots apparently have no problem with syntax and grammar.
This is true even of high-level grammar concerns, the type that legendary New Yorker writer John McPhee gets at with his Kedit program. How many adverbs is too many? How many words per line will exhaust a reader? Which words should only be used occasionally, based on historical use?
In other words, there is nothing especially human about writing grammatically clean sentences.
- Robots detect patterns (rules) very well, well enough, in fact, to track the way we communicate deep emotion.
Thematic and syntactical tics are something that an algorithm can reproduce and explain with ease. Human writers often fall into such tics when trying to express difficult emotional realities. Sense of dread? Night rain, a little lightning. Angst? Character in a dirty coat smoking a cigarette. Etc.
It is important to point out, though, that this pattern detection is fully dependent on past writing (classic sci-fi, in this case). That means SciFiQ cannot sense any of this on its own; it’s just reporting on what it’s seen (and it’s seen a lot!).
Even if SciFiQ tried to produce its own writing based on the patterns it found in its database, it could only produce an altered imitation.
How is human creativity different?
In (at least) two important ways:
- Human beings can react differently to common patterns or rules
This is my favorite observation: humans can learn rules and then break them in surprising ways. Think about what irony is: you set up a scene on a dark and stormy night and send your villain through it, and instead of it being pivotal and serious, a raccoon jumps out of a garbage can and the villain craps his pants. The next scene shows the villain buying briefs at the Dollar Store. The next scene shows him ironing his clothes.
This is narrative irony. The writer knows the rule, uses the rule to set up the expectation, then uses that expectation to create humorous surprise.
- For human writers, language is more than just math.
This is for all the grammar freaks out there: STOP OBSESSING OVER THE TYPES OF THINGS ROBOTS OBSESS OVER.
I have enough English majors on my Facebook feed to understand how passionately people can feel about the Oxford comma. But guess what? Whatever you think about it, there’s nothing uniquely human about a rule that says you should add a comma before the conjunction in a list of three or more items.
What might be particularly human about grammar concerns is the wisdom writers earn from their experience with words. Things like rhythm and sensation, the feel of particular sounds, the way punctuation affects the mood of a line: these sorts of insights are completely absent from an algorithm’s analysis.
The math of language is something that, while important, you will one day outsource to your writing helper robot.
- Originality seems to be exclusive to human creatives.
Robots cannot imagine a future way forward. At least not yet.
How is it that humans can do this? This is a remarkable feat for the human spirit.
Robots are rule followers. You know who aren’t very good rule followers? Little humans. Then big humans get to them, and teach them to be quiet, obey, do what they’re told. Then they become robots.
But artists have the ability to redeem that special human quality, that ability to recreate the boundaries. Exceptional writers familiarize themselves with the patterns and rules so that they can improvise, transgress, reorder. By doing this, they force the structure of rules to change.
I cannot stress this enough: humans, at their best, relate imaginatively to the rules of the world, in a way that algorithms cannot. If they commit to developing the skills needed to do this, they can completely reimagine the present for the sake of an unpredictable (and exciting) future.
An algorithm is a rule, and robots thrive as servants to the black-and-white lines they draw across the world.
But what if the story qualities we obsess over are easily imitated by an algorithm? Would that affect what you thought of the story? What if it turned out that Nicholas Sparks or J.K. Rowling were actually computer programs? Would you still be inspired?
Comment below: What would you think if you found out that a story or some other work of art that you fell in love with was created by a robot?
Also, come back Thursday for another post on automation and the future of creative writing!