Ask any author or book publisher today what their main concerns are with AI and they’re likely to talk first about copyright. Chief among their concerns: that AI is being trained with copyrighted works—including thousands of books—without permission or compensation. In short, many authors and publishers believe they are being ripped off. Fair enough.

Authors and publishers also have shared concerns about the outputs from AI services. For one, many have heard that AI services can reproduce works verbatim. And on another front, they’re worried that AI services—built in part with unlicensed books and articles—will be used to generate competing works that will flood the marketplace and squeeze them out. Fair enough, as well.

But it strikes me too that many authors and publishers expressing such concerns are doing so because they’re following the headlines. By my count there have been more than a dozen copyright-related class actions filed against AI companies so far. Some of the lawsuits feature well-known named plaintiffs, such as Michael Chabon, Sarah Silverman, and the New York Times. And they can present some frightening scenarios. In its lawsuit against OpenAI and Microsoft, for example, lawyers for the Times suggest that the “business models that supported quality journalism have collapsed” and raise the specter of AI creating a journalistic “vacuum” that “no computer or artificial intelligence can fill.”

For sure, the legal disputes around the development of AI are serious business. But when thinking of these suits, an old adage comes to mind: this isn’t about justice, this is about the law. And the legal questions swirling around AI in this moment are less about what’s broadly fair and more specifically about fair use—which, to many authors and publishers, is only sometimes fair.

The complexities of copyright law are vast. The litigation and the subsequent appeals will likely take months, possibly years. And even then, this current wave of lawsuits will probably yield complicated outcomes. My point: you can’t count on the courts to deliver practical guidance when it comes to AI.

Rather, in this moment, it is important for authors and publishers to look beyond their fears, and to explore the technology for themselves. For example, the Times in its lawsuit claims that it was able to repeatedly induce ChatGPT to regurgitate large chunks of verbatim text from its articles. OpenAI responded with a blog post calling AI regurgitation a rare bug, and suggested that the Times used a series of manipulated prompts to get its desired outcome.

Why rely on these competing accounts when you can easily see for yourself? Go ahead, try to get ChatGPT to regurgitate even a paragraph of your book. You will almost certainly fail; verbatim reproduction is highly uncommon. In fact, in November, a federal judge rejected the legal claims presented by a group of authors in one lawsuit, noting that the text generated by Meta’s Llama AI neither copied nor resembled text from their books.


In the face of such rapidly advancing technology (and in an era of declining author incomes), it’s understandable that many authors and publishers feel their backs are against the wall. Should they embrace the concerns they see in today’s headlines? Or should they seek technological enlightenment and embrace AI’s potentially bright future? As both an author and a (sometimes) publisher, I face the same questions.

So, what will I do? Proceed, with caution. As an author (of nonfiction), I intend to explore and embrace AI as a tool—from ideation through final text, including research help, translation, even audiobook creation. And as a publisher, I’ll happily work with authors who use AI—but I’ll expect them to be open and honest in describing exactly what role AI played in the creation of their manuscripts.

I read with interest this month about Japanese author Rie Kudan, winner of the Akutagawa Prize, one of Japan’s most respected literary awards, given for the best work of fiction by a promising new writer. Kudan admitted she’d used ChatGPT to write about 5% of her book, The Tokyo Tower of Sympathy, and said she planned to continue to use AI because it let her creativity “express itself to the fullest.” As CNN reported, the Akutagawa awards committee called her book “practically flawless.”

I believe that uses of AI like Kudan’s will become more common over time, and that many of the concerns we’re facing today will become moot. But until that time comes, we can all learn by engaging directly with AI technology, even as we monitor the broader concerns around its development with interest, and anxiety.

Thad McIlroy is an electronic publishing analyst and author, based on the West Coast and at his website, The Future of Publishing.