In November, around 100 coders and creative writers are expected to take on the National Novel Generation Month (NaNoGenMo) challenge, building computer programs, specialized artificial intelligence, and other digital tools capable of generating 50,000-word novels. According to one survey of 352 AI researchers, AI will be writing bestselling novels 20 years from now, and NaNoGenMo offers a very early glimpse of that future.

NaNoGenMo began in 2013, when author and creative programmer Darius Kazemi saw thousands of human writers drafting 50,000-word manuscripts for November’s National Novel Writing Month (NaNoWriMo) challenge and instead invited his Twitter readers to “spend the month writing code that generates a 50k word novel.” Between 100 and 200 people take part each year, a tiny crew compared to the more than 798,000 active NaNoWriMo writers. Nevertheless, according to NaNoGenMo participant Zach Whalen, the coding marathon has generated around 400 completed novels and nearly 45 million words.

“Seeing people in NaNoGenMo use AI and scripting in creative ways has been really illuminating,” said author Janelle Shane, who has participated twice. In November, she will release You Look Like a Thing and I Love You, a nonfiction book from Little, Brown’s Voracious imprint describing her AI writing experiments. For NaNoGenMo 2018, she shared a collection of Dungeons & Dragons character descriptions created by her homemade AI. “AI is going to be used as an increasingly sophisticated tool,” Shane said. “But if you give 10 artists the same tool, they’ll come up with 10 very different things. NaNoGenMo highlights this ingenuity and all the different things we can draw out of the same base model.”

Other highlights of the 2018 marathon included The Valley Girl of Oz, Bjork Bjork Bjork (L. Frank Baum’s novel The Emerald City of Oz completely rewritten with the help of two custom translation programs loaded with Valley Girl slang and the gibberish of the Muppets’ Swedish Chef), and The League of Extraordinarily Dull Gentlemen (a novel created by a text generator programmed with a more sophisticated narrative sense).

While novel-generation tools have evolved dramatically since 2013, coherence and readability remain formidable challenges for NaNoGenMo participants. No one knows that better than Kazemi, who has created what he calls “a small army” of bots that post computer-generated texts on social media. “It’s easy to generate words that keep someone’s attention for up to 500 or even 1,000 words,” Kazemi said. “But once you get past 1,000 words, it’s very difficult to keep a reader’s attention.”

A Powerful New AI

NaNoGenMo participants use a variety of programming languages and styles to generate their novels, but some completed projects have depended on “language models”: specialized artificial intelligence that learns how to write by training on a body of text supplied by the programmer.
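The underlying idea doesn’t require a neural network at all. As a minimal sketch, not drawn from any particular NaNoGenMo entry, here is a word-level Markov chain in Python, one of the simplest language models: it records which words follow which in a training text, then samples new text from those statistics (the file name corpus.txt is a placeholder).

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each run of `order` consecutive words to the words observed after it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        model[prefix].append(words[i + order])
    return model

def generate(model, order=2, n_words=200):
    """Walk the chain: repeatedly sample a word that followed the current prefix."""
    prefix = random.choice(list(model))
    output = list(prefix)
    while len(output) < n_words:
        followers = model.get(tuple(output[-order:]))
        if not followers:  # dead end: this prefix only appeared at the end of the text
            break
        output.append(random.choice(followers))
    return " ".join(output)

corpus = open("corpus.txt", encoding="utf-8").read()  # any training text
print(generate(build_model(corpus)))
```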

Back in February, California-based AI company OpenAI unveiled GPT-2, a superpowered language model trained on eight million web pages. This new AI has an uncanny ability to imitate human writing by predicting what should come next in a given writing sample. When PW fed the previous two sentences into a publicly available version of the GPT-2 model, the model delivered an eerily accurate computer-generated response: “This was an interesting idea: a machine learning system that could easily learn by reading books and using its massive data to write its own books.”
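PW’s experiment can be approximated with the small GPT-2 checkpoint that OpenAI did release publicly. The sketch below uses Hugging Face’s transformers library as one convenient interface; the article doesn’t say which tool PW actually used, so the tooling here is an assumption, not a reconstruction.

```python
# pip install transformers torch
from transformers import pipeline

# "gpt2" is the small checkpoint OpenAI released publicly, not the withheld full model.
generator = pipeline("text-generation", model="gpt2")

prompt = ("This new AI has an uncanny ability to imitate human writing by "
          "predicting what should come next in a given writing sample.")

result = generator(prompt, max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])  # prints the prompt plus the model's continuation
```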

When the creators of GPT-2 gave the model a single sentence referencing J.R.R. Tolkien’s Lord of the Rings series, the machine generated a few paragraphs of Tolkien-esque prose, capturing a measure of the author’s unique world and sensibility. The company has released smaller versions of the language model for the public to use and modify, but it has not shared the complete model, fearing abuse: its creators worry that GPT-2 could be used to create fake or deceptive articles online, supercharge the production and quality of spam, and churn out misleading content in mass quantities.

Robin Sloan, author of Mr. Penumbra’s 24-Hour Bookstore and Sourdough, has spent two years training language models and runs a version of GPT-2 on two GPUs he purchased from a Bitcoin miner. The author uses this extra computing power to “fine-tune” the language model that OpenAI created. Sloan recently trained a GPT-2 model on 100 different “great and dorky” fantasy novels, and then generated 1,000 original and personalized fantasy quest stories for his newsletter readers. “I had to essentially transform this language model into a generic fantasy writer,” he said. “It produced all these really weird, good, and fantastical stories.”
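Sloan’s exact setup isn’t described, but “fine-tuning” generally means resuming training of a released checkpoint on a new corpus until the model absorbs its style. Below is a hedged sketch using Hugging Face’s transformers library; the file name fantasy_corpus.txt and the hyperparameters are illustrative assumptions, not Sloan’s actual configuration.

```python
# pip install transformers torch
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# fantasy_corpus.txt stands in for the training novels, chunked into 512-token blocks.
dataset = TextDataset(tokenizer=tokenizer, file_path="fantasy_corpus.txt", block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  # causal LM, not masked

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-fantasy",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
trainer.save_model("gpt2-fantasy")
```

After training, sampling from the gpt2-fantasy folder with the same text-generation pipeline shown earlier would produce the “generic fantasy writer” effect Sloan describes.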

The Copyright Question

AI language models will likely raise legal issues for publishers. For instance, what if a programmer trained a language model on the collected works of J.K. Rowling, creating an AI capable of writing in the style of the author?

“There is no hard and fast clarity about the extent to which copyright interacts with using copyrighted works as input data for machine learning,” said Public Knowledge policy counsel Meredith Rose. “Generally, folks consider this to be a fair use. But it’s technically uncharted territory. You don’t want to draw the line such that no one can ever take their influence from the works of another author.”

Rose highlighted another legal question that could surface as AI language models develop more powerful capabilities: Who owns the copyright on an AI-generated novel? “There’s a doctrine in copyright law called work-for-hire,” Rose said, explaining a possible legal answer to this as-yet-unlitigated question. “If I write something in the course of my employment as a part of my contract, then copyright law treats my employer as the author, not me.”

These questions don’t seem pressing given the current output of language models, but some feel that publishers should start paying attention. “If I was Stephen King or his publisher, I would be looking at licensing my backlist to an AI text generator model right now, in order to own this space before anyone else,” wrote Joanna Penn, an author who has followed the evolution of GPT-2 closely, on her blog. She imagined future opportunities for publishers to train AI on an even larger body of texts. “If you are a publisher with prescriptive guidelines and a deep genre backlist, you could train a GPT-2 model,” she told PW, imagining a language model that imitates an entire publishing imprint’s particular style. “You would need to be a very big publisher, and you would need programmers and millions of data points.”

Kazemi places great hope in these opportunities. “If you’re interested in the future of writing, this is definitely worth keeping an eye on,” he said. “It’s like being a photographer at the advent of Photoshop, because it could be game-changing for the medium.”