Hugo/Nebula Contenders and Popularity, December 2015
It’s the end of the month, so let’s check in on Goodreads and Amazon popularity (as measured by number of ratings) for various Hugo and Nebula contenders. This is one of many different measures I look at when predicting the Hugo and Nebula nominees.
As I’ve said before, this data is interesting but not necessarily predictive for the Hugos and the Nebulas. Goodreads and Amazon rating counts don’t measure popularity in general; they measure popularity with the Goodreads and Amazon crowds, which may or may not be well synced up with Hugo or Nebula voters. We have no real access to sales numbers to actually measure books sold, so this is about the best we can do. Historically, being popular hasn’t helped much for the Nebulas. For the Hugos, it matters more, but only when that popularity is combined with strong critical response and past Hugo history.
I’m slowly migrating all my data over to Google Sheets and the cloud, so that you can look at and process the data any way you want. Here’s the link.
Table #1: Popularity of Hugo/Nebula Contenders on Goodreads, December 2015
It’s interesting how static these charts are; no one really moved up or down more than 2 spots from November to December. I also track some books that aren’t contenders (Armada, for instance), just to give us some reference.
What does this mean for the Hugos? Well, Uprooted and Seveneves are hugely popular books this year, with 4 or 5 times more ratings than other award contenders like The Fifth Season or Ancillary Mercy. Even though a book like Seveneves may prove divisive (lots of people either love it or hate it), the sheer number of readers may translate into more voters. Remember, you can’t vote against a book in the nomination stage: all that matters is how many people like a book, not how many hate it. The reverse can be true on the final ballot. The huge number of ratings for Novik and Stephenson is why I’ll have them very high in my initial Hugo predictions.
On the flip side, a book like Karen Memory is languishing with only 1,500 Goodreads ratings / 75 Amazon ratings even though it came out in February. I don’t think that’s enough readers to drive Bear to a Hugo nomination in a competitive year, but only time will tell. I often use these popularity charts to distinguish between similar books. If Dickinson, Cho, Liu, Jemisin, and Novik all vaguely fall under the category of “experimental fantasy,” I’ll pick Novik/Jemisin over Liu/Cho/Dickinson based on their apparent popularity, using the theory that more readers = more votes. Hopefully once I have several years of data I can find a more solid correlation, although one certainly isn’t visible yet.
Lastly, it’s fascinating how different the Amazon rankings are from the Goodreads ones. Why does Goodreads like Armada more than Seveneves? A book like A Long Time Until Now does terribly on Goodreads but well on Amazon (#12 on my Amazon chart, #28 on my Goodreads chart). A Darker Shade of Magic is loved on Goodreads but middle-of-the-pack on Amazon. This goes to show how fundamentally different these audiences are. We shouldn’t trust either. Instead, I boost a book’s chances when it’s high across many of my different lists: if Uprooted is #2 on my Goodreads list, #3 on my Amazon list, #1 on the SFWA list, #1 on the Goodreads vote, #7 on my Mainstream Critics list, #1 on my SFF Critics list, etc., shouldn’t I predict it near the top? Throw in past Hugo/Nebula history, and that’s how the Chaos Horizon logic works; make what you will of it.
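To make the “high across many lists” idea concrete, here’s a minimal sketch of one way to combine list positions: average a book’s rank across the lists and sort by that average. This is an illustration only, not Chaos Horizon’s actual formula; Uprooted’s positions come from the post, while the other books’ ranks are hypothetical placeholders.

```python
# Illustrative sketch: combine a book's positions across several lists by
# averaging its ranks. Lower average rank = stronger cross-audience consensus.
# Only Uprooted's ranks are from the post; the rest are hypothetical.
from statistics import mean

# Rank on: Goodreads, Amazon, SFWA, Goodreads vote, Mainstream Critics, SFF Critics
ranks = {
    "Uprooted":         [2, 3, 1, 1, 7, 1],  # positions quoted in the post
    "Seveneves":        [1, 1, 5, 2, 3, 6],  # hypothetical
    "The Fifth Season": [8, 9, 2, 4, 5, 2],  # hypothetical
}

# Sort books by their average rank across all lists.
consensus = sorted(ranks, key=lambda book: mean(ranks[book]))
for book in consensus:
    print(f"{book}: average rank {mean(ranks[book]):.1f}")
```

A straight average weights every list equally; a real model would likely weight lists by how well they’ve historically matched the nominees, and fold in past Hugo/Nebula history on top.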
Next month (let’s say mid-January) I’ll look to see what the rating score is for each of these texts. Those scores don’t change much over time, so it hasn’t been worth tracking them month to month. I’ve also not found any correlation between the rating score and award chances.
Let’s finish with a threat: I’ve gathered enough lists, and 2016 is almost upon us, so I’ll make my first Nebula and Hugo predictions tomorrow!