The demise of print has been famously and erroneously predicted for years. In the early 1990s, the CEO of a major professional publisher announced that “print is dead.” To his credit, he publicly recanted that statement several years later. Despite the incredible advances in digital technology and new opportunities for selling e-products, print sales have remained the bread and butter of almost all publishers.

Print, however, is not the solid foundation it used to be. In recent years, most sectors have seen an overall decline in print sales, and though we are likely to see some variations over time, the overall downward trend does not seem likely to be reversed.

The promise of increasing revenue from electronic products remains strong but somewhat elusive. There are many intriguing notions of what will be the next big thing, but the truth is that no one knows for certain. However, there is one strategy that is critical—being prepared for all eventualities.

The key is content. Ensuring safe storage and easy access to content is an obvious first step. In the past, publishers have been known to be too casual about the preservation of files. In many cases, publishers paid almost no attention to archival storage, instead depending on third-party typesetters and printers to maintain their intellectual property. Even now, some smaller and midsize publishers are guilty of such dangerous practices.

Larger publishers have known for some years the importance of maintaining an archival system. In some cases, unfortunately, their files are a relative hodgepodge, with no automated or systematized method of retrieval. The larger the collection of old titles, the more problematic this becomes. Finding the correct format of the correct title is one challenge; ensuring that the file is still in existence is another.

Many major publishers have tended to adopt some sort of product-data repository (PDR) or content-management system (CMS). Whether internally developed or vendor supported, such systems become the foundation for all existing and future uses of content.

In any case, the publisher needs a complete and fully searchable system for finished product files. Such a system is essential in responding to an emerging and evolving market. Whether creating the 2.0 version of an existing e-product, developing a new online site, or merely converting to a format required by a promising aggregator, the publisher needs to be quick. Such projects can no longer take months or even years to complete.

Verifying File Consistency

Not only does a publisher need easy access to files, but those files need to be in a consistent, ready-to-use condition. Publishers often believe that they are maintaining a single version of their content and are surprised and frustrated to find they are not. A familiar mistake derives from the common practice of using multiple third-party vendors for the production of frontlist titles. It is not unusual to ask vendors to provide final files in the various formats needed for each e-product title. There is nothing wrong with this practice except that the publisher needs to be aware that every vendor will develop each format to its particular set of specs. As a result, a publisher using seven different vendors is likely to end up with seven separate interpretations of the same digital format.

Some publishers may argue that this problem does not apply to them because they provide their own detailed specifications for each format to every vendor. That may be true, but it underestimates just how easily inconsistencies can be introduced. Content needs to be treated as a valuable asset. No CFO would argue that since everyone is subject to financial regulations, there is never a need for an audit.

A number of publishers have now built in a quality-assurance (QA) step, or a review-and-correct step, that occurs prior to archiving files. This QA process may be an internal exercise or a task assigned to a third-party vendor. In either case, the objective is to identify any discrepancies from the publisher’s own specs and then to return the file to the original vendor for correction. So when dealing with multiple vendors, the best practice is to take Ronald Reagan’s famous advice to “trust but verify.” That simple measure for ensuring consistent quality and structure is key to a publisher’s readiness for any eventuality.
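To make the QA idea concrete, here is a minimal sketch of one automated check a publisher might run before archiving: confirming that a vendor-delivered EPUB carries the metadata fields the publisher’s spec requires. The required-field list and file names are illustrative assumptions, not any particular publisher’s spec.

```python
# Hypothetical QA check: scan an EPUB's package (OPF) file for required
# Dublin Core metadata fields before the file is accepted into the archive.
import zipfile
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"
# Example spec only; a real publisher's requirements would be broader.
REQUIRED = ["title", "identifier", "language", "publisher"]

def missing_metadata(epub_path):
    """Return the required Dublin Core fields absent from the EPUB's OPF."""
    with zipfile.ZipFile(epub_path) as z:
        # An EPUB is a zip archive; the .opf file holds its metadata.
        opf_name = next(n for n in z.namelist() if n.endswith(".opf"))
        root = ET.fromstring(z.read(opf_name))
    present = {el.tag.replace(DC, "")
               for el in root.iter() if el.tag.startswith(DC)}
    return [field for field in REQUIRED if field not in present]
```

A file with a non-empty result would be returned to the originating vendor for correction rather than archived.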

Choosing a System

At this point the discussion circles back to a topic raised earlier: what kind of system best suits the publisher’s needs to manage content?

Basically, there is the need for an interactive system that warehouses finished goods along with their associated metadata. A repository containing both content and its associated metadata will be called, for the purposes of this paper, a product data repository, or PDR. Note: the term repository is used here with some hesitation; there are unfortunate connotations associated with it. Webster’s includes a definition of repository as “a burial vault or tomb,” and it is just that notion publishers need to avoid. A good way to think of a PDR is not as some static storage area but as an active digital warehouse for managing content and data that are vital to the business.

Typically, a PDR will provide an overall structure for storage, as well as features for search and retrieval, utilizing associated metadata. Content stored in a PDR will normally be in the form of finished goods and not accessed by any dynamic delivery system, such as a Web server.

A PDR is often a simple relational database, providing sophisticated search and retrieval while remaining user-friendly. It can be internally developed or adapted from a third-party system to the publisher’s particular needs. The economic advantage here is that such a system is relatively inexpensive and quick to deploy. The PDR is an assembly of content. It is not possible to semantically search across an entire library of digital content in a PDR. One can, however, prepare content at a very granular level for future use. With digital-object identifiers (DOIs) stored as metadata, a publisher can tag individual chapters, or even objects within those chapters, for a time when they might be accessed dynamically.
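The relational-database notion above can be sketched in a few lines. This is a minimal illustration using SQLite; the table and column names, the ISBN, and the DOI are invented for the example, not a standard PDR schema.

```python
# Minimal sketch of a PDR: each row ties an archived finished-goods file
# to a title, a format, and a DOI, so a single chapter can be retrieved later.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE assets (
        isbn    TEXT,          -- parent title
        chapter INTEGER,       -- NULL when the row is the whole book
        format  TEXT,          -- e.g. 'epub', 'pdf', 'xml'
        doi     TEXT UNIQUE,   -- digital-object identifier for granular access
        path    TEXT           -- location of the archived file
    )""")
conn.execute(
    "INSERT INTO assets VALUES (?, ?, ?, ?, ?)",
    ("9780000000001", 3, "xml", "10.9999/demo.ch3", "/archive/demo/ch3.xml"),
)

# Retrieval by DOI: locate the stored file for one chapter.
row = conn.execute(
    "SELECT path FROM assets WHERE doi = ?", ("10.9999/demo.ch3",)
).fetchone()
```

The point of the DOI column is exactly the granularity described above: the metadata is in place today, so the chapter can be served dynamically whenever the market demands it.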

Perhaps the most important question for publishers to ask is whether a PDR meets their most immediate business needs. Often, those business needs are tied to storing, tracking, accessing, and retrieving finished goods and content quickly and reliably. Such modest needs can be thought of metaphorically as a question of what mode of transportation the trip requires. If the publisher is going from New York to Boston and is not planning to go to battle along the way, then perhaps a Honda is the best bet. If it serves the publisher’s most urgent needs, then a PDR may be the cheapest, quickest means to an end.

On the other hand, while a CMS can replicate the functions of a PDR, its power lies in the structured assembly of content, allowing the publisher to perform a semantic search across an entire library and dynamically retrieve and repurpose discrete elements. It is key to interactivity between user and content.

It is probably true that most large publishers are going to need a CMS eventually. What is less recognized is that publishers rarely need such a powerful tool when managing content at the book, article, or even chapter level. The market is tempting us with possibilities, but there are presently not that many real-life commercial demands requiring CMS functionality. The PDR fits most present-day commercial needs. However, as Web-based content-delivery platforms take hold, we are seeing a steady migration toward workflows in which digital content takes precedence over print. And with the advent of digital-first authoring environments (Inkling, Habitat, Metrodigi’s Chaucer), the CMS becomes inevitable.

A CMS is not going to be an out-of-the-box solution. A publisher’s particular business rules and workflow-management needs will require months of development. In order to function as intended, the CMS will require consistent tagging across all content. Nor is a CMS inherently user-friendly. If everyday users are going to easily retrieve and validate downloads from a CMS, they will either need to be familiar with a query language, or the system will need an interpretive layer to accommodate them in a more user-friendly mode.

Going back to the comparison of the PDR to a Honda, the CMS is something of a tank. These systems are incredibly dense and powerful. The dynamic interactive repurposing of content at a granular level is the metaphorical equivalent of going into battle: it requires a tank. It is complicated, expensive, and time-consuming to build, but it is the right vehicle for the job. It is not, however, the right vehicle for driving from New York to Boston under typical circumstances.

A PDR is easily ported to a CMS when the time comes. The two systems fit hand in glove. For most publishers’ immediate needs, the answer may be to start with a PDR and let the market dictate when a CMS has become necessary.

Where Are We, and Where Do We Go from Here?

Publishers are facing a number of potential pitfalls going forward. Some of these are related to an uncertain market and evolving technology, but some are of our own making. It is a challenge to reconcile our own business strategies with quickly changing and complicated technologies. And there is always the chance of making costly mistakes.

Publishers appear to be operating in a market shifting between print and electronic. While the fortunes of print continue to decline, the promise of electronic-product revenue has yet to be fully realized. There is confidence that the digitization of content is going to eventually recoup lost print revenues, but there is not yet any clear vision of what the market is going to demand going forward.

The best strategy is to be prepared for any eventuality. That means ensuring that archival content is consistent, well structured, and easily retrievable, and that new titles are properly vetted for quality and consistency before being archived. It also means not letting content structure be defined by full-service suppliers. A system for managing content is essential; what form that takes is dependent on each publisher’s business needs.

Here in 2016 there is a good deal more certainty about the digital market for content than there was a decade ago. Still, a number of markets are far from mature. What is certain is that the bold executive decision isn’t about choosing one format over another. It’s not about predicting what markets will do. It’s about being prepared to execute and deliver content to whatever market presents itself.

Read the full white paper this article is based on.

Rick Beardsley is the former v-p of production at Taylor & Francis.

The Technology-Publishing Connection

This article is the first in a print and webinar series presented by CodeMantra on how publishers can best use technology to expand their businesses. The series will feature four print articles and four free webinars.

The first free webinar, called Technology Simplified, is set for October 5 at 1 p.m. EDT. The panel includes Michael McGinniss, head of digital transformation at Accenture; Samantha Cohen, v-p, design and digital content development at Simon & Schuster; and Randi Park, publishing officer/publishing and knowledge at World Bank Group.

The second webinar, the Importance of Metadata, will take place October 12. Other installments set for later this year are Working Together and Book Apps vs. E-books.