The Supreme Court's 9-0 ruling this week limiting copyright liability for internet service providers (ISPs) turned on a single, deceptively simple question: Did the company intend for its service to be used to illegally download copyrighted music? For Cox Communications, the answer was “no.” As a result, the court found that Sony and other music labels could not hold Cox liable for copyright infringement, allowing the company to walk away from a potential $1 billion judgment.
But the court's reasoning may have introduced a new pressure point in the parallel battle that publishers and authors are fighting with the AI companies over copyright infringement.
Writing for a unanimous court, Justice Clarence Thomas held that an ISP is liable for copyright infringement only if its service was designed for illegal activity or it actively induced infringement.
"A company is not liable as a copyright infringer for merely providing a service to the general public with knowledge that it will be used by some to infringe copyrights," Thomas wrote. Cox, the court found, did neither.
The question publishers and their attorneys may now reasonably ask is whether the same logic can apply to large language models.
Unlike an ISP, which provides a neutral conduit for internet traffic, AI language models are trained specifically to generate prose, poetry, and dialogue stylistically and substantively indistinguishable from the work of human authors. That capability is not an incidental byproduct; it is, by design, the product.
If a Cox subscriber used broadband to pirate a novel, Cox did not build its network to enable that outcome. When a user prompts an AI model to write in the style of Cormac McCarthy or generate a sonnet that reads like Shakespeare, the system was built explicitly to fulfill that request.
Under Justice Thomas's framework, that distinction could matter enormously. The question is whether training on copyrighted text and optimizing for human-quality creative output constitutes, in legal terms, a service "tailored for" infringement or, at minimum, one that induces it.
Several major AI copyright cases now working their way through the federal courts, including suits brought by the Authors Guild and individual authors against OpenAI, Meta, and others, have focused primarily on training data and whether ingesting copyrighted books to build these models constitutes infringement. But the Cox ruling suggests a second front may be opening.
If intent is the operative standard, plaintiffs may argue that the very goal of generative AI, producing text indistinguishable from the work of the authors an LLM was trained on, constitutes the kind of purposeful inducement the Court said tips the scales toward liability.
Book publishers, who have watched the AI copyright litigation with mounting concern while continuing to negotiate licensing deals with AI companies, have so far been more cautious than music industry plaintiffs. The music industry's aggressive, and ultimately costly, campaign against Cox yielded a fractured result: a billion-dollar jury verdict that was vacated, years of litigation, and a Supreme Court ruling that narrowed rather than expanded the labels' leverage. Publishers may be watching that outcome carefully.
What the Cox ruling clarified, at minimum, is that copyright enforcement in the digital age will hinge not just on what was copied, but on what the copying was for. For AI companies whose core commercial proposition is generating human-quality creative work, that is a standard worth watching.