The U.K. government announced Wednesday that it is dropping its previously preferred approach of a broad copyright exception for AI training—a policy that would have effectively given AI developers a free pass to use virtually any copyrighted material to train their systems, unless a rights holder specifically told them not to.

In other words, under the scrapped "opt-out" proposal, AI companies could take what they wanted by default; the burden fell on authors, publishers, and other creators to object.

The alternative—an "opt-in" model, in which AI developers must obtain affirmative permission before using protected content—is widely considered more equitable for rights holders and easier for them to enforce. Publishing and creative industry groups welcomed the reversal, but several warned that other potentially damaging policy avenues remain open.

The "opt-out" proposal was rejected by 97% of the 11,520 respondents to a government survey launched at the end of 2024. The government said it will now gather further evidence on how copyright law is affecting AI development and deployment, and will consider other policy approaches before introducing any legislative changes. It outlined four priority areas to cover: Digital Replicas, AI Labeling, Creator Control and Transparency. It also said it would establish a working group for “Independent Creatives.”

Dan Conway, CEO of the U.K. Publishers Association (PA), called Wednesday's announcement "a significant moment in cementing the government's reset on copyright and AI policy," but cautioned against treating it as a resolution.

"Not all potentially damaging avenues have been closed down," he said. Conway warned specifically that alternative exception models, including those for science and research, must also be ruled out. "These exceptions have the potential to be even more damaging than the copyright exception initially proposed and are unjustifiable in the context of an established, growing AI licensing market," he said.

Conway said the announcement's "significant positives" include a focus on transparency and on AI labeling to address what he described as "an increasingly polluted online retail space."

Anna Ganley, CEO of the Society of Authors, described the decision as "a hard-won moment for authors and creators" but warned that a true resolution is needed, and fast.

"The pace of progress needs to match the excessive speed at which AI is developing and encroaching on creative industries," Ganley said. "Each day that the uncertainty continues is a risk to author incomes. Failure to act without further delay will unquestionably have a catastrophic and irreversible impact on all U.K. authors."

Just last week, the Society of Authors launched its own version of the "Human Authored" program, a project first established in the U.S. by the Authors Guild that certifies books and other works were written by humans, not AI.

Representatives of the U.K. creative industries are convening an emergency summit on March 30 to discuss Wednesday's announcement and next steps.

All this news comes against a backdrop of significant legal uncertainty for U.K. rights holders. U.K. copyright law does not extend extraterritorially: a domestic infringement claim applies only where the infringing act occurs within the U.K. That distinction proved critical in the Getty Images v. Stability AI litigation, in which parts of the U.K. claim were dismissed because the AI training occurred offshore.

In practice, U.K. rights holders pursuing litigation against U.S.-based AI companies must generally assert rights under U.S. law, aided by the Berne Convention, and register their works with the U.S. Copyright Office. The complexity of that path is precisely why many in the industry argue that clear legislation is a more reliable long-term solution than litigation.