In Literary Theory for Robots (Norton, Feb.), English professor Tenen delves into the history of text-generating technology.
How did you come to study literary machines?
My start with computers came in the early 1990s with the desktop publishing revolution. I got a job with a little press out of Birmingham, Mich., that published books for cardiologists. When I graduated college with a degree in comparative literature, I was hired by Microsoft to do websites. I’ve always had this engagement with text and technology.
How much of a threat do you think chatbots pose to professional writers?
I don’t see them as a threat. My research shows that we’ve always written with the help of technology. Going back to the 15th century, we’ve had this anxiety—what will “smart” tools do to our intellect? But we’re constantly using such tools. A dictionary is a sort of augmentation of the intellect. We can go back even further to the written alphabet. That’s a technology that makes thinking social. Technological advances mean that writing and creativity change, and the possibilities expand.
Does using AI for writing and editing change how we value human agency and authorship?
Agency begins with the passive voice, but the metaphors we use for AI obscure this. AI is a collective, not some guy with a mustache, not God with long hair, but many different, distinct technologies created by a team. We say, “AI learned how to write a poem.” Well, that obscures the complexity of the actual process, which is part creative, part political. We’re talking about power, about who is doing what to whom. For this reason, I think agency involves not mystifying the technology by personifying and allegorizing it, but instead analyzing the metaphor and asking which people, corporations, and state actors are involved. Imagine if you did this labor attribution with a poem or book and reconstructed all the voices that went into every sentence. What you would see is that there’s some creative direction happening, that there’s a team of people shaping the final product. ChatGPT, spellchecker, Microsoft Word, and whatever else are all helping from afar, collaborating. It’s a team effort, even when you’re sitting alone.
Do you think students should be allowed to use ChatGPT for writing papers?
As with any technology, there’s a way to use ChatGPT that diminishes the intellect and makes you lazier. We as educators need to teach our students how to use ChatGPT in a way that improves their critical thinking and research skills. If you take something from a book like Save the Cat, which is full of templates on how to write a bestselling screenplay, and submit it, it’s not going to be very good. You have to know how to use books like that, to transcend the tool. That will happen with ChatGPT once we learn how to use it fluently.