Hugo/Nebula Contenders and Popularity, December 2015
It’s the end of the month, so let’s check in on Goodreads and Amazon popularity (as measured by number of ratings) for various Hugo and Nebula contenders. This is one of many measures I look at when predicting the Hugo and Nebula nominees.
As I’ve said before, this data is interesting but not necessarily predictive for the Hugos and the Nebulas. The number of Goodreads and Amazon ratings doesn’t measure popularity in any absolute sense; it measures popularity with the Goodreads and Amazon crowds, which may or may not sync up well with Hugo or Nebula voters. Since we have no real access to sales numbers to measure books actually sold, this is about the best we can do. Historically, being popular hasn’t helped much for the Nebulas. For the Hugos, it matters more, but only when that popularity is combined with a strong critical response and past Hugo history.
I’m slowly migrating all my data over to Google Sheets and the cloud, so that you can look at and process the data any way you want. Here’s the link.
Table #1: Popularity of Hugo/Nebula Contenders on Goodreads, December 2015
Table #2: Popularity of Hugo/Nebula Contenders on Amazon, December 2015
It’s interesting how static these charts are; no one moved up or down more than 2 spots from November to December. I also track some books that aren’t contenders (Armada, for instance) to give us a reference point.
What does this mean for the Hugos? Well, Uprooted and Seveneves are hugely popular books this year, with 4 or 5 times more ratings than other award contenders like The Fifth Season or Ancillary Mercy. Even though a book like Seveneves may prove divisive (plenty of people either love it or hate it), the sheer number of readers may translate into more voters. Remember, you can’t vote against a book in the nomination stage: all that matters is how many people like a book, not how many hate it. The reverse can be true on the final ballot. The huge number of ratings for Novik and Stephenson is why I’ll have them very high in my initial Hugo predictions.
On the flip side, a book like Karen Memory is languishing with only 1,500 Goodreads ratings and 75 Amazon ratings, even though it came out in February. I don’t think that’s enough readers to drive Bear to a Hugo nomination in a competitive year, but only time will tell. I often use these popularity charts to distinguish between similar books: if Dickinson, Cho, Liu, Jemisin, and Novik all vaguely fall under the category of “experimental fantasy,” I’ll pick Novik/Jemisin over Liu/Cho/Dickinson based on their apparent popularity, on the theory that more readers = more votes. Hopefully, once I have several years of data, I can find a more solid correlation, although one certainly isn’t visible yet.
Lastly, it’s fascinating how different the Amazon rankings are from the Goodreads ones. Why does Goodreads like Armada more than Seveneves? A book like A Long Time Until Now does terribly on Goodreads but well on Amazon (#12 on my Amazon chart, #28 on my Goodreads chart). A Darker Shade of Magic is loved on Goodreads but middle-of-the-pack on Amazon. This goes to show how fundamentally different these audiences are. We shouldn’t trust either one. Instead, I boost a book’s chances when it places high across many of my different lists: if Uprooted is #2 on my Goodreads list, #3 on my Amazon list, #1 on the SFWA list, #1 on the Goodreads vote, #7 on my Mainstream Critics list, #1 on my SFF Critics list, etc., shouldn’t I predict it near the top? Throw in past Hugo/Nebula history, and that’s how the Chaos Horizon logic works (a toy sketch of the idea follows); make of it what you will.
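To make that concrete, here’s a toy sketch of the cross-list idea in Python, using Uprooted’s placements from above. Averaging ranks with equal weights is just one simple way to combine the lists, not my exact formula:

```python
# Toy sketch: combine a book's placement across several lists by averaging
# its ranks. A lower average means a stronger across-the-board showing.
# (Equal weighting is an assumption; some lists surely matter more than others.)

# Uprooted's placements on the lists mentioned above
uprooted_ranks = {
    "Goodreads popularity": 2,
    "Amazon popularity": 3,
    "SFWA list": 1,
    "Goodreads vote": 1,
    "Mainstream Critics": 7,
    "SFF Critics": 1,
}

average_rank = sum(uprooted_ranks.values()) / len(uprooted_ranks)
print(f"Uprooted averages #{average_rank:.1f} across {len(uprooted_ranks)} lists")
# -> Uprooted averages #2.5 across 6 lists
```

The nice property here is that a book that’s merely popular (high on Amazon/Goodreads but absent from the critics’ lists), or merely critically loved, gets dragged down by its worst placements, which is roughly the behavior I want.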
Next month (let’s say mid-January) I’ll look at the average rating score for each of these texts. Those scores don’t change much over time, so it hasn’t been worth tracking them month to month. I’ve also not found any correlation between rating scores and award chances.
Let’s finish with a threat: I’ve gathered enough lists, and 2016 is almost upon us, so I’ll make my first Nebula and Hugo predictions tomorrow!
You may want to take a look at the Author Earnings Report base data. They have captured several snapshots of Amazon this year (and snapshots of other stores/countries once), and they’re able to make what seems to be a fairly accurate guess of how many books are being sold (by correlating the different lists with each other and with data points where some authors tell them how many books they sold on day X when their rating was Y).
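(For the curious, here’s a minimal sketch of that kind of rank-to-sales estimation. The power-law form and the calibration numbers are assumptions for illustration, not Author Earnings’ actual model:)

```python
import math

# Hypothetical calibration points: (Amazon sales rank, reported daily sales)
# from authors who shared their numbers. These values are invented.
calibration = [(100, 900), (1_000, 120), (10_000, 15)]

# Fit sales = a * rank^b via least squares in log-log space.
n = len(calibration)
xs = [math.log(rank) for rank, _ in calibration]
ys = [math.log(sales) for _, sales in calibration]
x_mean = sum(xs) / n
y_mean = sum(ys) / n
b = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
a = math.exp(y_mean - b * x_mean)

# Estimate daily sales for a book sitting at rank 5,000
rank = 5_000
print(f"Estimated daily sales at rank {rank}: {a * rank ** b:.0f}")
```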
Is the “66” for Somewhither in both tables correct, or does something need to be fixed? Not that I’m much of a copyeditor, just that the other day you mentioned Wright’s book had the most recs on SP4 so it is a natural object of curiosity.
My error. The Amazon number is 59. Must have subconsciously wanted the 6666!
And the chart is fixed! Great catch, and thanks for the free copy-edit.
I’m sort of surprised by how much better Seveneves is doing than Aurora and also how badly The Dark Forest is doing relative to The Three-Body Problem.
As for SP4, I have made some recommendations there myself. How does one know what the top recommended books are? It seems like one big long comment thread without much quantification.
Speaking of quantification, how do you snag the data from Goodreads and Amazon into your spreadsheet? I hope that you are not doing data entry by hand? That would imply a level of dedication (and “free” time) that makes me green with envy!
Very true. I’m not seeing anywhere near the same kind of buzz/endorsement for The Dark Forest as for The Three-Body Problem. The Three-Body Problem really picked up in late January/February, though, as more people got a chance to read it.
As for the last two questions . . . Both the SP4 tabulation and the Goodreads/Amazon data-entry were done by hand by me. No easy way to pull the data that I know of. And it’s not necessarily “free time,” more my insomnia that has me doing dull mechanical tasks at 3:00 AM to try and get myself sleepy!