There is a growing crisis in the academic monograph marketplace. College and university libraries are experiencing budget cuts; there are too many presses publishing too many titles; there’s growing pressure to figure out open access (OA) solutions, particularly in the face of the outrageous Research Works Act; and, aside from crossover or trade titles from the larger presses like Oxford, there is a sense that the barely adequate supply of funding will soon start to slide off a cliff.

Organizations are rallying to devise solutions, however. Libraries, presses, and scholars are pressing forward with several interesting proposals (Clifford Lynch of CNI wrote a prescient overview of the options in 2010). And I just attended an investigatory meeting, held at the Radcliffe Institute at Harvard, for one of the most promising new efforts: the development of a Global Library Consortium (GLC), the brainchild of Frances Pinter, former publisher of Bloomsbury Academic in the UK and founder of EIFL, an international library consortium of consortia supporting greater access to information.

The meeting was hosted by Robert Darnton of the Harvard University Libraries, and a wide range of academic press directors attended (Yale, Harvard, Oxford, and MIT, among others), as well as library leaders (including from the Univ. of Michigan, Yale, Harvard, and MIT). Pinter’s proposal is worth considering in depth because its creativity sheds light on many of the issues confronting academic publishing (an early description can be found in a YouTube video).

The GLC is loosely modeled on an existing academic venture called SCOAP3, which was born out of the small, highly internetworked high-energy physics (HEP) community, a field served by only six key peer-reviewed journals. In SCOAP3, the funding agencies supporting HEP work joined together with the libraries that purchased its print journal editions to cover publication costs. In return, publishers made the electronic versions of their journals free to read online. Each SCOAP3 partner finances its contribution by canceling journal subscriptions.

The GLC proposal would operate on a similar basis, with libraries pooling together into a membership coalition that purchases the rights to titles offered by participating publishers. Those books would then be made available on an open access basis, perhaps under Creative Commons license terms. Libraries would place bids for each offered title into a pool, much the way Groupon works; if there were sufficient interest to hit the price trigger point, the publisher would release the title into the open access pool, with costs apportioned among participating institutions. Once made open access, titles would be publicly readable through a web browser interface, but downloadable PDFs or EPUBs would be freely available only to GLC members.
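To make the bidding mechanics concrete, here is a minimal sketch, in Python, of how such a Groupon-style threshold round might work. The class and method names, the pro-rata cost split, and all dollar figures are my own illustrative assumptions, not details of Pinter’s proposal.

```python
from dataclasses import dataclass, field

@dataclass
class TitleOffer:
    """A title a publisher offers for open-access buy-out (hypothetical model)."""
    title: str
    trigger_price: float  # publisher's asking buy-out price
    pledges: dict = field(default_factory=dict)  # library name -> pledged amount

    def pledge(self, library: str, amount: float) -> None:
        """Record (or top up) a library's bid for this title."""
        self.pledges[library] = self.pledges.get(library, 0.0) + amount

    def is_triggered(self) -> bool:
        """True once total pledges reach the publisher's trigger price."""
        return sum(self.pledges.values()) >= self.trigger_price

    def apportion_costs(self) -> dict:
        """Scale pledges down pro rata so members pay exactly the trigger price."""
        total = sum(self.pledges.values())
        if total < self.trigger_price:
            raise ValueError("trigger price not reached; title stays closed")
        scale = self.trigger_price / total
        return {lib: amt * scale for lib, amt in self.pledges.items()}

offer = TitleOffer("Example Monograph", trigger_price=12_000.0)
offer.pledge("Univ A Library", 5_000.0)
offer.pledge("Univ B Library", 4_500.0)
offer.pledge("Univ C Library", 3_500.0)
if offer.is_triggered():
    print(offer.apportion_costs())  # each library pays a pro-rata share
```

In this toy version, over-subscription simply lowers everyone’s share; a real system would also have to decide what happens to under-subscribed titles and whether bids carry over between rounds.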

The GLC proposal offers a number of very significant advantages. Primarily, it would stabilize the scholarly monograph market by compensating publishers for the fixed costs of producing the first copy of each title. It retains a measure of competition by specifying that the more attractive book delivery formats (PDF, EPUB) are sold commercially outside of the GLC membership. And it reduces press overhead by partially releasing marketing and sales staff from the vagaries of having to sell to an unknown number of university library buyers. But the good comes with some bad, or, rather, with challenges.

Challenges

One concern is the lack of incentive for publishers to make their top-selling titles available to a consortium-based system. For titles that cross over into trade, or that are popular for course adoption, delivery through this kind of platform may represent an unacceptable loss of revenue. This deficiency is shared by the proprietary e-book platforms managed by Johns Hopkins' Project Muse (UPCC), Oxford (Oxford Scholarship Online), and Cambridge (Cambridge Books Online). Indeed, participation in these platforms was conditioned on the ability of publishers to withhold portions of their lists, and the resulting coverage gaps render the platforms less desirable to the library collectives whose memberships sustain them.

The more general statement of this problem is that it is very difficult to estimate accurately the revenue potential of e-books, even when amortized across a pool of titles. Any system of rights buy-outs requires publishers to evaluate their lists as a financial portfolio, ideally with input gleaned on a title-by-title basis, to optimize decisions about which sales and distribution channels their books should be placed in. Eric Hellman’s UnGlue.It shares this problem, although his project is aided by an initial focus on end-of-commercial-life books, where the stakes are smaller. Unfortunately, publishers have developed very few analytics to assist them with this estimation.
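The calculation such analytics would need to support is simple in form, even if the inputs are hard to come by: compare a title’s expected stream of commercial sales against a one-time buy-out payment. Here is a toy sketch in which every number, the decaying sales forecast, and the discount rate are invented purely for illustration:

```python
def expected_commercial_revenue(unit_price: float,
                                yearly_sales_forecast: list[float],
                                discount_rate: float = 0.08) -> float:
    """Net present value of forecast unit sales (all inputs are guesses)."""
    return sum(unit_price * units / (1 + discount_rate) ** year
               for year, units in enumerate(yearly_sales_forecast, start=1))

# Hypothetical backlist monograph: modest sales, decaying over five years.
npv = expected_commercial_revenue(unit_price=45.0,
                                  yearly_sales_forecast=[220, 140, 90, 55, 30])
buyout_offer = 15_000.0
print(f"NPV of forecast sales: ${npv:,.0f}")
print("Accept buy-out" if buyout_offer >= npv else "Keep in commercial channel")
```

The hard part, of course, is not the arithmetic but the forecast itself, which is precisely the analytic capacity publishers currently lack.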

Fundamentally, an open access system implies a redirection of revenue, with a possible diminution of gross income. The hoped-for savings in operating costs may offset a reduced per-unit revenue stream, but there remains a significant amount of unpredictability in monograph uptake. The ability to easily manipulate pricing levels for digital books, as in Amazon’s Kindle Daily Deal program, can greatly enhance the visibility of selected titles that might otherwise remain invisible to their buying public. Greater pricing control thus encourages greater conservatism about releasing titles into subsidy-carrying channels, because retail marketing trials can spur revenue that would otherwise go unrealized.

In the GLC model, where publishers are encouraged to estimate their per-title buy-out costs, there may be an impetus to simplify pricing by establishing the kind of pricing tiers that were prescribed by the Google Book Search (GBS) settlement proposals. In that case, books would be assigned to various pricing “buckets,” and their sales volumes analyzed to optimize revenue across each publisher’s participating titles. However, it was noted in the Radcliffe meeting that this approach triggered a Department of Justice antitrust concern about price collusion among authors and publishers. Consequently, any pre-agreed pricing algorithms would have to be carefully vetted for antitrust issues.
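The flavor of such a bucket-assignment algorithm is easy to sketch: given some estimate of demand at each fixed price point, place each title in whichever tier maximizes expected revenue. The tier prices and the linear demand curve below are entirely my own assumptions, and, as the DOJ concern suggests, coordinating such tiers across publishers is exactly what would invite antitrust scrutiny:

```python
PRICE_TIERS = [1.99, 4.99, 9.99, 14.99, 29.99]  # hypothetical buckets

def best_tier(demand_at_price) -> tuple[float, float]:
    """Pick the tier maximizing price * estimated_units for one title.

    demand_at_price: callable mapping a price to estimated unit sales.
    """
    revenues = {p: p * demand_at_price(p) for p in PRICE_TIERS}
    price = max(revenues, key=revenues.get)
    return price, revenues[price]

# Toy demand curve: sales fall off linearly as price rises (pure invention).
def toy_demand(price: float) -> float:
    return max(0.0, 1_000 - 40 * price)

price, revenue = best_tier(toy_demand)
print(f"Assign to ${price} bucket; expected revenue ${revenue:,.0f}")
```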

Fixed price determination, even if premised on a one-time acquisition, also has the unsurprising but unpleasant effect of reducing competitive pressures in the book marketplace. A great many monographs published today by university presses find very small audiences, partly as a result of library budgets squeezed by the costs of electronic journal and database licensing. Many libraries are backing away from full-list university press acquisition policies, and a growing number are banding together in shared print and e-acquisition agreements that spread purchasing costs across multiple institutions. Perversely, instead of responding to this decrease in demand by trimming their lists, many UPs remain bound up in the academic promotion and tenure process, which for humanities and social science disciplines often requires a published monograph derived from one’s doctoral thesis or dissertation.

The growing interest in open peer review and open access publishing for both monographs and serials suggests, however, that a great deal of existing scholarly output could be redirected into more informal, web-based platforms. From self-published material deposited in university repositories to peer-review platforms backed by scholarly associations, there is a growing number of ways for web-based systems to publish materials that do not need to be “rolled up” into more formal manifestations, such as traditionally published monographs or journal articles. A considerable amount of potential savings lurks here.

There are technical issues with membership-based access models as well, the most insidious being the “free-rider” challenge posed by institutions that benefit from shared arrangements without helping to subsidize their operations. In the GLC model, there would be free, global online access to browser-based e-books. Although this mode does not offer a downloadable, standalone PDF or EPUB, it is likely to become an increasingly acceptable form of access as e-reading-capable tablets attain greater penetration in the consumer market; a Pew Internet Research Project survey recently observed a doubling of tablet ownership over the 2011-12 holidays. Should browser access become good enough on its own, the incentive to pay for membership would erode, and an alternative membership tier model would have to be proposed, presumably based less on content type and more on functionality, such as enhanced recommendation or social features to support pedagogy and research.

Enhanced features become feasible if the content acquired through GLC is aggregated into a common search and discovery interface. Unfortunately, the current proposal is limited to a bibliographic metadata-based discovery tool, similar to a traditional library catalog. The resulting distribution of content presents problems for book discovery because readers are highly unlikely to know the publishing party responsible for a given title, and quite reasonably make the assumption that they should have the ability to conduct full-text search—a capacity already provided by several large book databases, including the Internet Archive’s OpenLibrary, HathiTrust, and Google Books. Although retaining control over title access may be reassuring to the presses, it is also likely to introduce a smorgasbord of descriptive metadata of varying completeness and accuracy, potentially frustrating the utility of the system.

The lack of full-text search is recapitulated at a higher level among the current academic e-book platform providers: UPCC, OSO, CBO, and JSTOR. Since content is distributed across these platforms, with modest random duplication of titles, a user can only conduct a comprehensive search at the level of the least common denominator, which is metadata. There is no way for a general reader to know a priori whether a title can be found in one subscription database or another. This is a significant failing that has been too little discussed, and its presence suggests that most attention has been paid, not surprisingly, to publishers and the libraries that buy their books, rather than to users. I described this problem to an individual who is familiar with research libraries, and his comment was, glibly, "They should just let Google do this."

Worthy Pursuit?

It should be no surprise that a book publishing market that has developed over decades finds it mind-numbingly difficult to extricate itself from a tangle of dependencies and networked relations that no longer serve any functional need. One hopes that the next confluence of market arrangements among libraries, publishers, authors, and distributors will be looser and more flexible.

Even in noting these challenges, I want to reiterate my belief that the Pinter proposal is a marked advance. It is clever and compelling, with significant merit, and it should be pursued. Most of the difficulties it faces are addressable, and certainly no solution can be comprehensive. More fundamentally, the highest goal for systems like the GLC is to provide a bridge from the past into the future, keeping scholarly publishing alive long enough for deeper innovation to take hold and manifest itself in new and sustainable endeavors.