The devil, as they say, is in the details. When I think about why publishers don't pay closer attention to their metadata, I suspect it's because we "experts" generalize the benefits. Publishers get tired of being told to provide good metadata without a clear payoff. Show me some money, I hear them saying.
That’s why I was energized by a recent presentation by Joshua Tallent, director of sales and education at Firebrand Technologies. The presentation was part of BookNet Canada’s annual Tech Forum Conference, held online earlier this year. (Many of the presentation videos are on YouTube, with slides available for download.)
I first learned about Tallent’s presentation from publishing guru Jane Friedman. She told me, “While I feel like book publishing has been having this ‘importance of metadata’ discussion for nearly a decade now, I still find many people who don’t even know what the term means. For something so fundamental to book discovery and sales, I find this surprising.” Indeed.
One cautionary note: these results are specific to Tallent’s reported research. Further testing could produce different results; he was not seeking to criticize any particular company.
Different resellers, different data
The world would be a better place if all online sales outlets followed similar metadata procedures, but such is not the case. Not only do certain resellers take longer than others to process metadata, but each cherry-picks the data it churns through and displays. For example, Tallent found that Amazon can ingest new metadata within 24 hours, while Barnes & Noble takes up to a week. On the other hand, B&N offered more "care and concern" with the data received and displays it with a clean design and minimal junk on the page. (Amazon pages, as is widely acknowledged, are a dog's breakfast of ads and recommendations, though clearly this hasn't hampered Amazon's success as a reseller.) B&N is notable also in that it displays a book's table of contents and an excerpt as separate fields, which most other resellers will not display at all. Both are valuable aids to book discovery.
Keywords continue to demand close attention from publishers. Most resellers ignore keywords, but Amazon pays close attention to them both for search and for determining related products. A publisher that gets keywords right will sell more books.
Much has been written about optimal practices for keyword creation. Tallent focused on the controversial topic of how many keywords should be submitted for optimal sales results. Amazon’s rules have changed repeatedly over the years, but currently it says that for books 210 bytes is the maximum (which translates roughly into 210 characters). Keywords beyond 210 bytes will be ignored, and if a publisher dares to submit 2,000 bytes, all of its keywords will be ignored.
In reality, however, it appears that Amazon’s system indexes more words than that, though exactly how many more is uncertain. Tallent says that a prudent approach is to provide Amazon with more than just 210 bytes, up to but under its stated 2,000-byte limit. But dozens of keywords are not needed, particularly if publishers avoid repetition. Amazon can combine individual words into distinct phrases, so publishers don’t have to. Tallent’s example was that “Japanese cooking” and “Japanese ingredients” can be expressed just as “Japanese cooking ingredients,” and Amazon will figure things out.
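Since Amazon's limit is expressed in bytes rather than characters, the two counts diverge as soon as keywords contain accented or non-Latin characters. As a minimal sketch (the 210-byte figure comes from Tallent's talk as reported above; the function and variable names here are illustrative), a publisher could check a keyword string like this:

```python
# Sketch: checking a keyword string against Amazon's stated 210-byte limit.
# The limit value is taken from the article; the "soft" indexing behavior
# beyond 210 bytes that Tallent describes is not modeled here.

AMAZON_KEYWORD_BYTE_LIMIT = 210  # per the talk; subject to change by Amazon

def keyword_byte_length(keywords: str) -> int:
    """Return the UTF-8 byte length of a keyword string.

    For plain ASCII text, bytes equal characters, which is why the
    210-byte limit "translates roughly into 210 characters." Accented
    or non-Latin characters take 2-4 bytes each in UTF-8, so the byte
    count can exceed the character count.
    """
    return len(keywords.encode("utf-8"))

ascii_keywords = "Japanese cooking ingredients sushi knife skills"
accented_keywords = "crème brûlée pâtisserie dessert recipes"

# ASCII: byte length equals character length
print(keyword_byte_length(ascii_keywords), len(ascii_keywords))
# Accented: byte length exceeds character length
print(keyword_byte_length(accented_keywords), len(accented_keywords))
print(keyword_byte_length(ascii_keywords) <= AMAZON_KEYWORD_BYTE_LIMIT)
```

The practical point is simply to count bytes, not characters, when trimming a keyword field toward the limit.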
Amazon and Google
One of the more subtle charts that Tallent included suggested two competing ideas that publishers need to get their heads around. On the one hand, an increasing proportion of buyers start product searches directly on Amazon—41%, according to the chart. If a search for a book starts on Amazon, rarely will the purchase take place elsewhere. On the other hand, 44% of product searches still begin on Google, and so metadata needs to be optimized for Google’s SEO proclivities, which are not the same as Amazon’s. (For example, Google couldn’t care less about BISAC codes, retail prices, or product dimensions.)
Tallent also argues it’s necessary to closely monitor and then refresh metadata—errors creep in, from multiple sources, too easily and too frequently not to. Publishers should take a proactive approach with their data. There is evidence that Amazon rewards frequent data updates with favored search rankings.
What's my takeaway here? Metadata demands more attention than filling in the forms and hoping everything will work. Publishers must ensure that the staff members they task with the metadata role are able to think deeply about their work and to keep up with the latest developments and best practices via blogs, webinars, and the BISG metadata committee.
Thad McIlroy is an electronic publishing analyst and author, based on the West Coast and at his website, The Future of Publishing. He is a founding partner of Publishing Technology Partners.