I’ve been a witness to—and participant in—the evolution of publishing technology for over four decades, and in all that time, I have not seen a change as consequential as what I see coming today. (Don’t panic; this is good news.)

Not that those earlier developments weren’t game changers; they were. I set metal type by hand in college. I rubbed on a lot of rub-on letters (anybody remember Letraset?). I lived through phototypesetting, digital imaging, word processing machines, computer-driven typesetting, desktop publishing, and the web. PostScript begat PDF. SGML begat XML and HTML. The Tower of Babel of early e-readers led to ePub. Now we’re working on Web Publications.

I remember meeting Jeff Bezos at a Library of Congress conference where we were both speaking, back when he was talking about this crazy idea he had for selling books over the internet.

Four decades of pretty constant, and pretty dramatic, change.

So it is with some authority that I can make what may strike you as an audacious, or at least overblown, claim: we are on the verge of a golden age of publishing technology. Please note the word verge. We’re not there yet, but we’re close.

And why do I say “golden”? Because we have the opportunity to publish better than ever: better workflows, better products, better user experiences. Yes, including print, but with the opportunity to produce, from a single workflow, whatever formats best suit our content and our markets, and to get our content to our customers however they want.

Since this is the inaugural column that PW has asked my Publishing Technology Partners and me to write, let me give you some examples that touch on what each of the four of us—Thad McIlroy, Bill Rosenblatt, Bill Trippe, and me—tend to focus on and will be writing about in the months ahead.

I’ll start with Thad, our metadata and e-commerce specialist. “Publishers can do so much more with metadata than they’re doing,” he says, “and they can sell more books if they just give it a bit more effort.” The opportunity lies in going beyond ONIX. Okay, I realize that not all publishers distribute their metadata as ONIX yet—some recipients don’t even accept it. That’s job one: do it.

“Obviously, you need to get ONIX right,” Thad says. “Realize how much more power you can get with ONIX 3.0. Don’t just plug in the obvious fields; understand how ONIX can be optimized. Even just doing a better job with keywords will get you more sales on Amazon.”
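For readers who want to see how small a lift this can be, here’s a minimal sketch in Python of adding a keyword Subject composite to an ONIX 3.0 record. The scheme code (“20” for keywords, from ONIX code list 27) is standard, and EDItEUR recommends semicolon-separated keywords in a single heading; the keywords themselves are purely illustrative.

```python
# A minimal sketch: appending a keyword Subject composite to an ONIX 3.0
# DescriptiveDetail block. Scheme "20" means Keywords (ONIX code list 27).
import xml.etree.ElementTree as ET

def add_keywords(descriptive_detail: ET.Element, keywords: list[str]) -> None:
    """Append a keyword Subject composite to a DescriptiveDetail element."""
    subject = ET.SubElement(descriptive_detail, "Subject")
    ET.SubElement(subject, "SubjectSchemeIdentifier").text = "20"  # Keywords
    # Keywords go into one SubjectHeadingText, semicolon-separated.
    ET.SubElement(subject, "SubjectHeadingText").text = "; ".join(keywords)

descriptive_detail = ET.Element("DescriptiveDetail")
add_keywords(
    descriptive_detail,
    ["regency romance", "enemies to lovers", "historical fiction"],  # illustrative only
)
print(ET.tostring(descriptive_detail, encoding="unicode"))
```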

Thad gave one of his clients this metadata challenge: pick 50 deep-backlist books that still show signs of life—a few sales every month—and go all-out optimizing the ONIX for them. He predicted a notable increase in sales of those books within six months. It worked. The average increase was 20%—and three of the titles more than doubled in sales. Imagine what that could do for a full list!

Bill Trippe points out that there are so many more opportunities to raise the profiles of books and authors, and to make them more discoverable, by thinking more broadly about metadata. “I’ve been experimenting with creating Wikidata records for books and authors,” he says. “Bingo: the Knowledge Panels on the right in Google search results appear in searches for those books and authors. Ideally, every search for a book or author should result in a Knowledge Panel, which will lead to further discovery—related titles, other books by that author, and so forth.”
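If you want to try Bill’s experiment yourself, step one is simply checking whether a title or author already has a Wikidata item. Here’s a minimal sketch using Wikidata’s public search API; actually creating or enriching records happens through separate editing tools (QuickStatements is a popular one), and the author name below is just an example.

```python
# A minimal sketch: looking up candidate Wikidata items for a book title
# or author name via the public wbsearchentities API.
import requests

def find_wikidata_items(name: str) -> list[dict]:
    """Return candidate Wikidata items matching a title or author name."""
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbsearchentities",
            "search": name,
            "language": "en",
            "type": "item",
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {
            "id": hit["id"],
            "label": hit.get("label"),
            "description": hit.get("description"),
        }
        for hit in resp.json().get("search", [])
    ]

for item in find_wikidata_items("Ursula K. Le Guin"):  # example query
    print(item["id"], "|", item["label"], "|", item["description"])
```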

Thad and Bill recommend a whole constellation of potential enrichments: identifiers and subject metadata in a variety of forms. Thanks to today’s rich and ubiquitous web and standards landscape, the opportunities to drive more sales are great.

Another big opportunity is to get more return out of assets. Bill Rosenblatt does a lot of work on rights. “Rights are a mess right now,” he says. “There are systems out there that can help—machine-readable rights expressions have been around for over 20 years, and I’m working with the Book Industry Study Group to promote standards. The bottleneck is that publishers can’t pin down the business case for adopting solutions and standards so that senior management starts paying attention and making investments. Plus, many publishers are leaving money on the table by not being able to monetize assets in a more granular way. This is especially important to educational publishers. They have moved way beyond print books—though, of course, they still print books, too. Many educational publishers, especially in higher education, have incredibly rich multimedia and interactive resources, as well as images and textual components, that have value outside the publication they were originally created for.”

Bill’s also active in other forms of media, such as music and video. That may sound unrelated to most publishers, but stop and think how audio and video have exploded in the past few years, mainly thanks to the smartphone and the web. “It’s not about whether publishers or authors need YouTube strategies,” he says. “There are lessons that book publishing can learn from the music industry and Hollywood. This is something that a surprising number of people in book publishing, particularly trade publishing, refuse to credit. And they aren’t just positive lessons—‘Do this because record labels did it.’ They are also negative lessons—‘Don’t do this, because record labels did it and look where it got them.’ ” Stay tuned for a future column on this from Bill.

Pretty much everything this column has mentioned thus far is made possible by standards, and those standards are increasingly converging around the web. That means systems and workflows are becoming more interoperable and agile. That’s what’s leading to what I’m characterizing as the coming golden age of publishing technology. No more towers of Babel. No more “I spent good money on this so now I’m stuck with it.”

Most of my work is in the areas of information architecture, workflow, and accessibility. Those areas used to be (and too often still are) a hodgepodge of proprietary solutions. There’s typically no consistency in how we mark up content as it moves through editorial, production, and dissemination workflows. Value is lost at every hand-off. There’s enormous redundancy, inefficiency, and what I call “loopy QC”: check for errors, hand them off to somebody to fix, check whether they got it right, find new errors, rinse, and repeat. We still have mostly Whac-a-Mole workflows.

Standards are helping to fix this. It’s possible now to capture intelligence in content way up front, at the authoring and editing stages, and to keep it all the way through to consumption while enhancing it along the way. Because of standards, this doesn’t require everybody to use the same tools. Authoring is still mainly done in Word (Word files are XML now); editing can be done in Word or online using any number of technologies with plug-ins or open source solutions, locally or in the cloud; typesetting is still mainly done in InDesign for trade (scholarly content more often uses high-end batch composition), but publishers such as Hachette Book Group are now typesetting most of their books in HTML and CSS—the same technologies every website uses.
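That parenthetical about Word deserves a moment, because it underpins the whole single-workflow story. A .docx file is literally a ZIP archive of XML. Here’s a minimal sketch that cracks one open and pulls the paragraph text back out; the filename is hypothetical, and a real pipeline would map Word styles to semantic markup rather than discard them.

```python
# A minimal sketch showing that a .docx is a ZIP of XML: the body text
# lives in word/document.xml as WordprocessingML paragraphs.
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def docx_paragraphs(path: str) -> list[str]:
    """Extract plain paragraph text from a .docx file."""
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("word/document.xml"))
    paragraphs = []
    for para in root.iter(f"{W}p"):  # each w:p is a paragraph
        text = "".join(t.text or "" for t in para.iter(f"{W}t"))  # w:t text runs
        if text:
            paragraphs.append(text)
    return paragraphs

for line in docx_paragraphs("manuscript.docx"):  # hypothetical filename
    print(line)
```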

It’s the convergence around what is called the Open Web Platform that has made so many of these tools and technologies interoperable. And interoperability is what is taking so much friction out of the process. It’s possible now to design a workflow that enables content to flow smoothly from stage to stage—optimized for whatever a given publisher publishes, and whatever complement of tools and services best suits that publisher—and to produce all the formats users want today: print, online, and e-books.

The proliferation of options can be intimidating. But to quote Bill Trippe, who does a lot of systems design and implementation, “It’s actually the convergence on standards, and especially web standards, that makes it possible to put together a rich suite of tools optimized to solve specific publishing problems. It lets you build a toolchain from interoperable components. Not everybody has to use the same tools, and when better tools and technologies come along, or new formats emerge, you no longer have to throw out everything and start over.”

Best of all, those formats can be much more easily optimized to make them more accessible to those with print disabilities such as blindness, low vision, and dyslexia. Accessibility is an issue that gives most publishers—at least those that think about it at all—feelings of helplessness and guilt. They realize that publications should be accessible, but they have no idea how to accomplish that, and they’re convinced they couldn’t afford to make theirs accessible anyway.

The good news: today’s accessibility requirements and specifications are based on the technologies we’re already using—primarily HTML and ePub. Most publishers are closer than they realize to making their publications accessible. We’re on the verge of getting this right. No, it’s not trivial, and most publishers haven’t finished the job, but today’s publishing technology ecosystem makes it more achievable than ever before.
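To make that concrete, here’s a minimal sketch of one of the most basic accessibility checks: finding images in an ePub that lack alt text. (An ePub is essentially zipped HTML, which is why this takes so little code.) For real conformance checking, use a dedicated tool such as the DAISY Consortium’s Ace checker; the filename below is just an example.

```python
# A minimal sketch: flag <img> elements in an ePub's XHTML files that
# have no alt attribute at all. (alt="" is valid for decorative images.)
import zipfile
import xml.etree.ElementTree as ET

XHTML = "{http://www.w3.org/1999/xhtml}"

def images_missing_alt(epub_path: str) -> list[tuple[str, str]]:
    """Return (file, image source) pairs for images with no alt attribute."""
    problems = []
    with zipfile.ZipFile(epub_path) as zf:
        for name in zf.namelist():
            if not name.endswith((".xhtml", ".html")):
                continue
            root = ET.fromstring(zf.read(name))
            for img in root.iter(f"{XHTML}img"):
                if img.get("alt") is None:
                    problems.append((name, img.get("src", "?")))
    return problems

for fname, src in images_missing_alt("title.epub"):  # hypothetical filename
    print(f"{fname}: image {src} has no alt text")
```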

We’ll get into all of the topics and technologies I’ve mentioned here, and many more, in subsequent columns. The coming golden age of publishing technology presents opportunities you may not even have thought about—opportunities to publish better, more efficiently, and more profitably.

Bill Kasdorf is principal at Kasdorf & Associates, a consultancy focusing on accessibility, information architecture, and editorial and production workflows. He is a founding partner of Publishing Technology Partners.

Editor’s note: This is the first of a monthly series of columns by the members of Publishing Technology Partners, a consultancy offering guidance to publishers and those who provide services or technology to publishers.