It wasn’t long ago that most people scoffed at artificial intelligence. It seemed ridiculous that a computer could ever duplicate, or even credibly approximate, what a human being can do, especially creating brand-new content from scratch. But that’s exactly what a particular class of AI apps, called generative AI, can do. Suddenly, it seems as if AI may really mean “author (or artist) invisible.”

Generative AI is already good enough for use in certain contexts. In journalism, for example, it is used to write routine, often somewhat formulaic articles, sometimes based on human-created templates, on subjects like finance (reporting notable results from public companies) or real estate (reporting on recent deals). Such straightforward uses seem perfectly reasonable.

The recent experimental release by OpenAI of a chatbot called ChatGPT has taken things to the next level, however. It has prompted a flurry of interest in both mainstream and social media. While not always perfect, it’s shockingly good.

As an example, I recently complimented Brendan Quinn, the managing director of the IPTC, the technical standards organization for the global news media, on the excellent job he had done writing a blog post that he ran by me for review. It was engagingly written, informative, well organized, and spot-on with regard to the content. His response: “Don’t compliment me. ChatGPT wrote that. I just added some formatting and links, and a bit of extra information that I forgot to put in the prompt.”

This is both very exciting and very scary. I’m an experienced writer and editor; nothing about that blog post made me doubt that Brendan had written it. It was a useful time-saver for Brendan. It will also be useful to a student needing to write a paper for a class, on which, ahem, they will be graded. One university professor reported that, of all the essays submitted for a recent assignment, the one that was easily the best in the class turned out to have been written by ChatGPT.

How can a publisher know how much of a book has actually been written by the author who claims to have written it? (In case you’re wondering, no, ChatGPT didn’t write this column.) And ChatGPT is just one of many such generative AI apps for text in development.

A time to act

In the long view, does it matter that computers can create content that can’t be distinguished from human-created content? Isn’t that just progress? Google itself put a few competitors out of business because it developed something new and really good, after all. Most of us depend on it and are happy to have it. Should we be worried that computers can now create content and images? Doesn’t that sound pretty useful?

You bet we should be worried. I can give you an example that will drive the point home: deepfakes. Deepfakes are images or audio that, most notoriously, show a famous person doing or saying something they never actually did or said. The person’s likeness or voice has been grafted onto somebody else’s image or voice so convincingly that you can’t tell it’s not who it purports to be.

The software used to do this is readily available and widely used. There are legitimate uses; for example, a fake person can be created for an advertisement to save the cost of a human actor or model. One of the leading image-creation apps, DALL-E 2, also happens to be from OpenAI, the developer of ChatGPT. It can create images of people out of thin air that cannot be distinguished from images of real people. This is called synthetic media, and it, too, is shockingly good. You have probably seen such images without realizing they’re fake. Many more apps for creating synthetic images are in development.

But for publishers, and their customers, it’s even more insidious. How can you trust that the content you’re reading or the images you’re seeing have been created by the people you think created them, or haven’t been manipulated or altered in ways you can’t detect?

The counterfeiting question

Here’s another example, one that has nothing to do with AI or synthetic media but is actually a more urgent issue for publishers: counterfeiting. It is a crisis that many commercial publishers, including the very biggest in the world, are facing today: “counterfeit” publishers representing themselves as real publishers and selling their books online for lower prices than those offered by the real publishers.

This is particularly damaging because those fake publishers tend to rise to the top of the results on retailers’ platforms: they sell the books at lower prices, so they get more action. Some of the books are pirated and of lower quality (though buyers can’t tell that until they receive the books); some are identical to the publishers’ versions, so buyers may not even know there was anything illegal about the transactions. An industry colleague of mine, who works for a large commercial publisher, mentioned recently that some of its books have had no sales on a certain retail platform because all of the sales of those books went to counterfeit publishers. All of the sales.

This is a crisis of authenticity, and of provenance. Is the author who they say they are? Is the content what the author created in the first place? Is the version I’m buying the legitimate one from the legitimate publisher? Has this image or video been manipulated? Is the result legitimate or not? Cropping an image is probably okay (though removing relevant context can be misleading); making Joe Schmo look like Tom Cruise is not okay. Making Nancy Pelosi look and sound drunk is not okay.

Progress is being made

In addressing the issues of authenticity and provenance, the good news is that significant work is being done and real progress is being made. Because the most critical problem is deepfakes in news, the work has been driven largely, but not exclusively, by the news media. Most of the focus so far has been on image authenticity, but the work is intended to apply to any medium, whether text, images, video, or audio.

What is being developed is basically a “certificate of authenticity”: tamper-proof or tamper-evident metadata that can confirm who created a media asset, who altered it over time, how it was altered, and whether the entity providing it is legitimate. This metadata is embedded in the content itself, and there are systems that enable recipients to access it, document the asset’s provenance (what has been done to it over time, and by whom), and validate, or invalidate, its authenticity.
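For readers who want a concrete picture of how such a “certificate of authenticity” works, here is a toy sketch in Python, using the widely available cryptography library. It only illustrates the general idea: hash the asset, attach a provenance record, sign the bundle, and let a recipient detect any later change. The field names and signing scheme are my own simplifications, not the actual C2PA manifest format.

# Toy illustration of tamper-evident provenance metadata.
# Not the C2PA format: field names and structure are invented for clarity.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_claim(asset_bytes, metadata, private_key):
    # Bind the provenance record to a hash of the asset, then sign the bundle.
    payload = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "provenance": metadata,  # e.g., creator, tools used, edits applied
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": private_key.sign(serialized).hex()}

def verify_claim(asset_bytes, claim, public_key):
    # Valid only if the signature checks out and the asset itself is unchanged.
    serialized = json.dumps(claim["payload"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(claim["signature"]), serialized)
    except InvalidSignature:
        return False
    return claim["payload"]["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()

# Sign an image's bytes plus a provenance record, then verify it.
key = Ed25519PrivateKey.generate()
image = b"...image bytes..."
claim = make_claim(image, {"creator": "Example News Desk", "edits": ["crop"]}, key)
print(verify_claim(image, claim, key.public_key()))               # True
print(verify_claim(image + b"altered", claim, key.public_key()))  # False

Real systems go well beyond this sketch: the signed metadata travels embedded in the file itself, and the signature chains back to a certificate identifying who signed it, which is what lets a recipient judge whether the entity providing the asset is legitimate.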

This work is a notable example of industry collaboration and cooperation. It was clear from the outset that no one commercial entity could own the solution; solutions need to be open, freely available, standardized, and global. Three new organizations in particular are doing key work to make this happen.

The Coalition for Content Provenance and Authenticity (C2PA) is developing the technical standards that underpin this work. As documented on the C2PA website, “C2PA is a Joint Development Foundation project, formed through an alliance between Adobe, Arm, Intel, Microsoft and Truepic. C2PA unifies the efforts of the Adobe-led Content Authenticity Initiative (CAI) which focuses on systems to provide context and history for digital media, and Project Origin, a Microsoft- and BBC-led initiative that tackles disinformation in the digital news ecosystem.” Version 1.0 of the specification was released in February 2022, enabling content producers to “digitally sign” metadata using C2PA “assertions”: statements documenting the authenticity and provenance of a media asset. Based on the W3C Verifiable Credentials standard, the specification is now at version 1.2 and already enjoys broad support, including in the widely used Adobe Photoshop, where it is called Content Credentials.

The Content Authenticity Initiative (CAI), founded by Adobe in 2019 in collaboration with Twitter and the New York Times with an initial focus on images and video, is now, according to its website (contentauthenticity.org), “a group of hundreds of creators, technologists, journalists, activists, and leaders who seek to address misinformation and content authenticity at scale.” As CAI’s Verify website states, “Content credentials are the history and identity data attached to images. With Verify, you can view this data when a creator or producer has attached it to an image to understand more about what’s been done to it, where it’s been, and who’s responsible. Content credentials are public and tamper-evident, and can include info like edits and activity, assets used, identity info, and more.”

In contrast, Project Origin—led by the BBC, CBC/Radio-Canada, Microsoft, and the New York Times—is news and information oriented. Per its website, it is developing “a framework for an engineering approach, initially focusing on video, images, and audio. The technical approach and standards aim to offer publishers a way to maintain the integrity of their content in a complex media ecosystem. The methods, we hope, will allow social platforms to be sure they are publishing content that has originated with the named publishers—a key in the fight against the imposter content and disinformation” and “help shield the public against the rising danger of manipulated media and ‘deep fakes,’ by offering tools [again, based on the C2PA spec] that can be used to better understand the disinformation they are being served and help them to maintain their confidence in the integrity of media content from trusted organisations.”

The progress on these initiatives has been very rapid. I’m encouraged by how well these three organizations, and the member organizations behind them, are collaborating for the common good, creating an open ecosystem to guard against disinformation, deepfakes, fake news, and counterfeit sellers in a globally standardized, noncommercial way.

Bill Kasdorf is principal at Kasdorf & Associates, a consultancy focusing on accessibility, information architecture, and editorial workflows.