"Until very recently, I think it's fair to say, literary translation has kind of been in denial," said moderator Duncan Large, executive director of the British Centre for Literary Translation, in introducing the "AI and Literary Translation" panel at this year's London Book Fair on March 13. "Literary translators have long used computers for basic assistance, for example, in the form of online dictionaries, but they have also long been resistant to the idea that machine translation or even computer-assisted translation tools can have any significant role to play in literary translation."

That changed significantly, Large explained, in November 2022, when the process of artificial intelligence–related automation, "and the anxieties associated with it," was accelerated further by the release of ChatGPT and, in subsequent months and years, other "so-called generative artificial intelligence programs."

A wide range of feelings about AI exists, Large noted, from "intense public interest, and perhaps excitement over the opportunities presented by the systems," to the more defensive crouch taken by creative professionals, who "have been understandably more circumspect given the long-term threat to jobs and the short-term threats to IP that AI systems represent," he said. Still, whether these tools can adequately parse the subtleties of a text in its original language to produce a translation that goes beyond the most basic replacement of words and clauses remains very much in question.

That question has dogged technophiles and technophobes alike in the literary translation business since the advent of neural machine translation (NMT) in 2016, when, Large said, "serviceable, automated machine translation for literary texts could at least be envisaged." But as James Hadley, an assistant professor of literary translation at Trinity College Dublin, put it, with NMT "we were still very much limited in terms of style. If we wanted to just translate a sentence and end up with some kinds of outputs, that was not particularly difficult. But if we wanted to produce or reproduce someone's particular use of verbs, or use of nouns, it was very, very difficult for a neural machine."

Now, Hadley said, following the release of ChatGPT, "we've seen, every month really, another large language model system come out, and some of them are free." This is germane, he noted wryly, because "literary translators are also not known for being incredibly wealthy. When researching these tools, and how we could make them useful for translators, we have to start thinking immediately of the price point."

We have to be careful not to use human language when we're talking about machines, and machine language when we're talking about humans.

The speed of progress in this space, Hadley noted, is extraordinary, in part because of the "scale of training data that goes into training an LLM" compared with that of an NMT system, making the former much more flexible in capability. "When you ask Google Translate to translate something, you just give it the text—'the cat sat on the mat,' or something—and then it gives you the answer," Hadley explained. "It's very much a 'what you put in, you get out' system." That's not the case with an LLM.

"You could say all sorts of interesting things like, 'translate this in the style of...' and then name your famous author," Hadley said. "And the machine can then, because it often has training data based on that famous author, interpret the text and then translate it in a way that reflects the style—and I don't mean an author in the source language, but an author in the target language. For example, if I'm translating into English, I could say, 'translate in the style of Terry Pratchett,' even though the source text wasn't quite like Terry Pratchett."

The possibilities are heady, if not downright head-spinning. But in some cases, as Nicola Solomon, CEO of the U.K.'s Society of Authors (SA), made clear, they're also deeply concerning.

Offering some "sneak previews" of findings from the SA's recently conducted survey of nearly 800 illustrators, translators, and writers, Solomon said that "almost eight in 10 translators—and, actually, also illustrators—believe that generative AI will negatively impact future income from their creative work, with the same concern expressed by only around six in 10 writers of fiction and nonfiction." Nearly nine out of every 10 respondents, she continued, believe that generative AI will replace jobs and opportunities in creative professions.

"Are they deluded, as other people are trying to say here, that this is anti-progress panic brought on by the Daily Mail?" she asked. "Well, no, because we asked about what's happening now: a quarter of illustrators and a third of translators say they've already lost work due to generative AI, and over four in 10 translators say income from their work has decreased in value."

Much of the issue, Solomon pointed out, stems from the widespread use of copyrighted work in developing these tools. But Solomon also cautioned the audience, when referring to LLMs and the like, to be careful with the bedrock of the book business: words.

"These machines cannot be trained because they are machines. They copy things. And they copy things in order to develop the machines," she said. "We have to be careful not to use human language when we're talking about machines, and machine language when we're talking about humans."