Predicting how the “Sad Puppy” voters will nominate in 2016 is the most speculative part of all. The Sad Puppies drastically changed their approach this year, moving from a recommended slate to a crowd-sourced list, and it’s an open question how that change will impact the Hugo nominations.
What we do know, though, is that last nomination season the Sad Puppies were able to drive between 100 and 200 votes to the Hugos in most categories, and their numbers likely grew in the final voting stage; I estimated 450 by then. All those voters are eligible to nominate again. If you figure the Sad Puppies doubled from the 2015 nomination stage, they’d be able to bring 200-400 votes to the table. Then again, their votes might be diffused over the longer list; some Sad Puppies might abandon the list completely; some might become Rabid Puppies; and so forth into confusion.
When you do predictive modelling, almost nothing good comes from showing how the sausage is made. Most modelling hides behind the mathematics (statistics forces you to make all sorts of assumptions as well; they’re just buried in the formulas, such as “I assume the responses are distributed along a normal curve”) or black-boxes the whole thing, since people only care about the results. Black boxing is probably the smart move, as it prevents criticism. Chaos Horizon doesn’t work that way.
So, I need some sort of decay curve for the 10 Sad Puppy recommendations to run through my model. What I decided to go with is treating the Sad Puppy list as a poll showing the relative popularity of the novels; that worked pretty well in predicting the Nebulas. Here’s that chart, listing how many votes each Sad Puppy pick received, as well as its percentage relative to the top vote-getter.
| Novel | Author | Votes | % of Top |
|---|---|---|---|
| Somewhither | John C. Wright | 25 | 100% |
| Honor At Stake | Declan Finn | 24 | 96% |
| The Cinder Spires: The Aeronaut’s Windlass | Jim Butcher | 21 | 84% |
| A Long Time Until Now | Michael Z. Williamson | 17 | 68% |
| Son of the Black Sword | Larry Correia | 15 | 60% |
| Strands of Sorrow | John Ringo | 15 | 60% |
| The Discworld | Terry Pratchett | 11 | 44% |
| Ancillary Mercy | Ann Leckie | 9 | 36% |
What this says is that for every 100 votes the Sad Puppies generate for John C. Wright, they’ll generate 36 votes for Ann Leckie. I know that stat is suspect: not everyone who voted on the Sad Puppy list was a Sad Puppy, and the numbers are so small that a small group of fans could easily boost one book up the list. Still, this gives us something to work with. What I’ll do is plug this into my chart of 40%, 60%, and 80% turnout using the 450 Sad Puppy estimate to come up with:
| Novel | 40% | 60% | 80% |
|---|---|---|---|
| The Fifth Season | | | |
| The Aeronaut’s Windlass | 151 | 227 | 302 |
| Agent of the Imperium | | | |
| Honor At Stake | 173 | 259 | 346 |
| A Long Time Until Now | 122 | 184 | 245 |
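The chart above reduces to one multiplication. Here’s a minimal sketch of it in Python: pool size times turnout times each book’s share of the top vote-getter. The book names and shares come from the chart; the use of Python’s `round` is my own assumption about how the figures were rounded.

```python
# Sketch of the Sad Puppy estimate: 450-voter pool, three turnout
# scenarios, each book weighted by its share of the top vote-getter.
SAD_PUPPY_POOL = 450  # estimated max Sad Puppy voters from 2015

# relative popularity: list votes divided by the top book's 25 votes
relative_share = {
    "Somewhither": 25 / 25,
    "Honor At Stake": 24 / 25,
    "The Aeronaut's Windlass": 21 / 25,
    "A Long Time Until Now": 17 / 25,
}

def estimate(book, turnout):
    """Estimated nomination votes = pool * turnout * relative share."""
    return round(SAD_PUPPY_POOL * turnout * relative_share[book])

for book in relative_share:
    print(book, [estimate(book, t) for t in (0.4, 0.6, 0.8)])
```

Running this reproduces the chart’s rows (e.g. The Aeronaut’s Windlass at 151 / 227 / 302).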
Does this make any sense? I’m sure many will answer no. But look closely: could the remnants of the Sad Puppies, however they’re affected by the new list, generate 150-300 votes for Jim Butcher this year? I find it hard to believe that they couldn’t produce that number. Remember, Butcher got 387 votes last year in the nomination stage. Some of that was Rabid Puppies (maybe up to 200), but where did the rest come from? And will all the Sad Puppy votes for Butcher vanish in just a year?
How about that Somewhither number: is it too big? It could also model some Sad Puppies being swayed over to the Rabid Puppy side, as could the Seveneves number. The Novik and Leckie numbers could represent the opposite: Sad Puppies who joined in 2015 and are now drifting toward more mainstream picks. I think I’d go conservative with this, staying in the 40% band to model the dispersion effect.
So now I have predictions for each of the 3 groups. If I combine those, I get 27 different models. Each model may be flawed in itself (overestimating or underestimating a group), but when we start looking at trends that emerge across multiple models, that’s where this project has been heading. In predictive modelling, normally you make the computers do this and you hide all the messy assumptions behind a cool glossy surface. Then you say “As a result of 1,000 computer simulations, we determined that the Warriors will win 57% of the time.” For the record, the Chaos Horizon model now says the Warriors will win 100% of the time and that Steph Curry will be nominated for Best Related Work.
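The “27 models” arithmetic is just the cross-product of the three turnout bands across the three groups. A quick sketch, with the group names as used in this series:

```python
# Three turnout scenarios for each of three voting groups
# gives 3 ** 3 = 27 combined models.
from itertools import product

turnouts = (0.4, 0.6, 0.8)
groups = ("Rabid Puppies", "Sad Puppies", "Typical Voters")

models = list(product(turnouts, repeat=len(groups)))
print(len(models))  # 27
```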
We could go on and do 100 more models based on different assumptions and see if trends keep emerging. This kind of prediction is messy, unsatisfying, and flawed, and the more you understand the nuts and bolts behind it, the more it makes you doubt predictive modelling at all. Of course, the only thing worse would be if predictive modelling were 100% (or even 90% or 80%) accurate: then we’d actually know the future. Come to think of it, wouldn’t that make for a good SF series . . . Better get Isaac Asimov on the phone. Maybe I should argue that this series is eligible for the “Best SF Story of 2017” Hugo.
Tomorrow we’ll start combining the models and see if anything useful emerges.
This is where things get messy, perhaps to the point of incoherence. I estimated 5,000 voters in last year’s Hugos who seemed not to be associated with the Sad or Rabid Puppies: some Hugo voters of past years, some who joined to vote No Award against the Puppies, some who joined just to vote, some who maybe joined to participate in the controversy, and some who joined for unknown reasons. We don’t have much past data on this group, so how can we calculate how they’re likely to vote in 2016?
To be honest, we probably can’t, not with any certainty. What I want to achieve is just getting in the ballpark: producing a low and a high estimate that we can compare to the Rabid Puppies estimate. That’ll at least tell us something. You know me: I always like numbers, no matter how rough they are.
What we do have is data from past Hugo years, and if we assume that voting patterns won’t be wildly different from previous years, we at least have a place to start. So, the first thing I’ll do is look at previous voting percentages in the Hugos. This chart shows what percentage of the vote the #1, #2, etc. novel receives in a typical Hugo year. For these purposes, I averaged the voting patterns from 2010-2013 (the 4 prior years with no Puppy influence, drawn from the Hugo voting packets).
Table 1: Voting Percentages in Hugo Best Novel Category
This means that the most popular Hugo book, in any given year, gets around 17% of the vote, with a range of 15%-20% in the years I looked at. So if 4,000 people vote in 2016, we might estimate the top book as getting between 600-800 votes. If 3,000 people vote, that drops us down to an estimate of 450-600 votes.
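That range calculation can be sketched in a couple of lines. The 15%-20% band is from the 2010-2013 packets as described above; the function name is mine.

```python
# Estimate the vote range for the #1 book: historically it takes
# 15%-20% of the nominating vote (17% average, 2010-2013 packets).
def top_book_range(total_voters, low=0.15, high=0.20):
    """Return (low, high) vote estimates for the most popular book."""
    return round(total_voters * low), round(total_voters * high)

print(top_book_range(4000))  # (600, 800)
print(top_book_range(3000))  # (450, 600)
```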
Does this estimate tell us anything, or is it just useless fantasizing? I can see people arguing either way. What this does is narrow the range down to something somewhat sensible. We’re not predicting Ann Leckie is going to get 2000 votes for Best Novel. We’re not predicting she’s going to get 100. I could predict 450-800 and then match that against the 220-440 Rabid Puppies prediction. That would tell me Leckie seems like a likely nominee.
We can, of course, destroy this prediction by making different assumptions. I could assume that the new voters to the Hugos won’t vote in anything like typical patterns, i.e. that they are complete unknowns. Maybe they’ll vote Leckie at a 75% rate. Maybe they’ll vote her 0%. Those extremes grate against my thought patterns. If you know Chaos Horizon, I tend to choose something in the middle based on last year’s data. That’s a predictive choice I make; you might want to make other ones.
I believe voting patterns will be closer to traditional patterns than totally different from them. You may believe otherwise, and then you’ll need to come up with your own estimate. If I’m off, how far off am I, and in what direction? Too low by 100-200 votes? Too high by 100-200 votes? And if I’m off by only that much, is the outcome of this prediction affected?
So, this says . . . if these 5,000 vote along similar lines to past Hugo voters, and we imagine three turnout scenarios, where do we end up?
Let’s not drop in book titles yet; let’s just multiply Table 1 by three different turnout scenarios for our 5,000 2015 voters (40%, 60%, and 80%):
Table 2: Estimated Votes in 2016 Hugo Best Novel Category Based on Prior Voting Patterns
| # of Typical Voters | 2000 | 3000 | 4000 |
What I’m interested in is whether or not these numbers beat the Rabid Puppy numbers from the last post. Even if we assume a robust Rabid Puppy turnout creating 440 votes, we have 1 novel above that in the 60% scenario and 3 in the 80% scenario. Even if you pump the Rabid Puppy number up to 500, we still have at least 1 novel above it in both the 60% and 80% scenarios. If we lower the Rabid Puppy vote to a more modest 400, we wind up with 2 above in the 60% and 4 in the 80%. This is the piece of data I want: in most turnout scenarios, a few “typical” books beat the Rabid Puppies, but not all. I’d estimate we’re in store for a mixed ballot. Only in very modest turnout scenarios (40%) do the Rabid Puppies sweep Best Novel.
We do need to factor the Sad Puppies in (next post), but the numbers don’t suggest to me that the Rabid Puppies will manage 5 picks on the final Hugo ballot. The numbers also don’t suggest that the Rabid Puppies will wind up with 0 picks.
Two questions remain for this post. First, what will the turnout be? I don’t know how many of those 5,000 will vote. I know passions are high, so I assume the turnout will be high. Then again, this stage is difficult to vote in: you have to read a bunch of novels, remember when the ballot is due, realize you’re eligible to nominate, etc. To my eye, somewhere between 50% and 75% seems about right. I’m going to pick a conservative 60% just to have something to work with. Since I included all three bands, you’re free to pick anything according to your tastes and your own sense of what will happen.
Next, what is Novel #1 going to be? Novel #2? Novel #3? I’m just going to use my SFF Critics Best-of list to fill in here. I definitely think the top of that list (Leckie, Jemisin, Novik) is what this group will be voting for. I’ll preserve ties, so if we go to the Top 12 contenders I’m tracking for this prediction, here’s where they show up:
1. Ancillary Mercy (estimate 17.73% of vote)
1. Uprooted (estimate 17.73% of vote)
3. The Fifth Season (estimate 13.06% of vote)
3. Aurora (estimate 13.06% of vote)
10. Seveneves (estimate 6.54% of vote)
Golden Son and The Aeronaut’s Windlass did show up on the list, so I’m going to give them 1% of the vote: something, but not a lot. This is in line with Butcher’s totals from before the Sad Puppies; he was only getting a few votes in the 2009 nomination data, the year WorldCon showed us how many votes everyone got.
Could those be off significantly? Sure they could. That’s why it’s an estimate! Multiply those out in the scenarios, and you get this chart:
| Novel | 40% (2000) | 60% (3000) | 80% (4000) |
|---|---|---|---|
| The Fifth Season | 261 | 392 | 522 |
| The Aeronaut’s Windlass | 20 | 30 | 40 |
| Agent of the Imperium | | | |
| Honor At Stake | | | |
| A Long Time Until Now | | | |
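The typical-voter chart is the same multiplication as before, now with vote shares instead of slate discipline. A minimal sketch, assuming the percentages from the estimates above (the 1% floor for Butcher is the judgment call described earlier):

```python
# Typical-voter estimate: 5,000-voter pool, three turnout scenarios,
# each book weighted by its assumed share of the vote.
TYPICAL_POOL = 5000

vote_share = {
    "Ancillary Mercy": 0.1773,
    "Uprooted": 0.1773,
    "The Fifth Season": 0.1306,
    "Aurora": 0.1306,
    "Seveneves": 0.0654,
    "The Aeronaut's Windlass": 0.01,  # token share for list stragglers
}

def typical_votes(book, turnout):
    """Estimated votes = pool * turnout * assumed vote share."""
    return round(TYPICAL_POOL * turnout * vote_share[book])

for book in vote_share:
    print(book, [typical_votes(book, t) for t in (0.4, 0.6, 0.8)])
```

This reproduces the chart’s filled-in rows, e.g. The Fifth Season at 261 / 392 / 522.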
Leckie and Novik get lots of votes, probably enough to beat the Rabid Puppies without any help. Given that Leckie pulled in 15.3% of the vote in 2015 and 23.1% in 2014, wouldn’t her vote percentage be somewhere in that ballpark for 2016? The average of those two is 19.2%, and I predicted her at 17.73% in the chart above. Estimating is a different act than logical proof, and one that is ultimately settled by the event: in a few weeks we’ll know the ballot, and I’ll either be right or wrong. Jemisin and Robinson get votes and will be competitive. Stephenson is lower down, but he also appears on the Rabid and Sad lists, which jumps him over Jemisin’s and Robinson’s totals and onto the ballot.
I know this will be the most disliked of the predictions. You know my theory at Chaos Horizon: any estimate gives you a place to start thinking. Even if you vehemently disagree with my logic, you now have something to contest. What makes more sense? If you apply those estimates, what ballot do you come up with? So argue away!
Tomorrow the Sad Puppies, and then we combine the three charts to get a bunch of different scenarios. If we see patterns across those scenarios, that’s the prediction.
Let’s start with the most controversial group, the Rabid Puppies. Vox Day posted a “list” on his website; how will this affect the Hugos?
I estimated the Rabid Puppies at around 550 strong in the 2015 Final Hugo vote. I feel solid about that number; I estimated it from the 586 people who voted Vox Day #1 for Best Editor, Short Form. Vox Day leapt up to 900 by the end of the voting, and that extra 400 is how I estimated the low range of the Sad Puppies.
If the Rabid Puppies had around 550 votes in 2015, how many will they bring to 2016? Since all those who voted in 2015 can nominate in 2016, I imagine it will be a big number. Even so, I can’t imagine 100% carrying over: the nomination stage is simply less interesting, less publicized, and more difficult to vote in. Let’s imagine three scenarios: an 80% scenario, a 60% scenario, and a 40% scenario. I think 80% is the most likely; this is the group most invested in impacting the Hugos and the most likely to team up again. And since they don’t have to pay an entry fee to participate in the nomination stage . . .
I also think this group will have solid slate discipline, voting the list as Vox Day published it. If you want to factor in some slate decay, I’d do so for lesser-known books like Agent of the Imperium. I won’t bother with any decay in the model. With that in mind, here are my three scenarios in the following chart:
| Novel (550 max Rabid Puppies) | 40% | 60% | 80% |
|---|---|---|---|
| The Fifth Season | | | |
| The Aeronaut’s Windlass | 220 | 330 | 440 |
| Agent of the Imperium | 220 | 330 | 440 |
| Honor At Stake | | | |
| A Long Time Until Now | | | |
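With full slate discipline and no decay, the Rabid Puppy chart reduces to a single line of arithmetic. A sketch, under those stated assumptions:

```python
# Rabid Puppy slate model: every listed book gets pool * turnout votes,
# assuming full slate discipline and zero decay.
RABID_POOL = 550  # estimated max Rabid Puppy voters from 2015

def rabid_votes(turnout):
    return round(RABID_POOL * turnout)

print([rabid_votes(t) for t in (0.4, 0.6, 0.8)])  # [220, 330, 440]
```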
A pretty simple model and not terribly informative so far. What you’ll glean from this is that the Rabid Puppies are likely to deliver a large block of votes to the works on their list. When we combine this chart with the estimated chart from the Typical vote and the Sad Puppy vote, that’s when we’ll be in business.
The core question is whether or not this block will be larger than other voting groups. In more lightly voted categories like Best Related Work or categories where the vote is more dispersed like Best Short Story, 400 votes is likely enough to sweep all or most of the ballot. Think about Best Related Work: the highest non-Puppy pick last year managed only around 100 votes. The top non-Puppy short story only managed 76 votes last year. Even if you triple those this year, you’re still well under 400 votes. In a more popular category like Best Novel or Best Dramatic Work, I expect the impact to be substantial but not sweeping. Perhaps 3 out of 5? 2 out of 5?
In 2015, the Rabid Puppies placed 4 out of their 5 picks on the initial Hugo ballot (Correia and Kloos declined, leaving them with only 2 spots). They were this successful partially due to their overlap with the Sad Puppies on those 4 choices. This year, the overlap is less (only 3), so I expect the effect to be less. Even with first mover advantage—remember, the Puppies took the 2015 ballot largely by surprise—Ann Leckie still had enough votes to break up the Puppy sweep in 2015. I fully expect some non-Puppy novels to show up on the final ballot.
How does this number compare to last year’s nomination vote? My best estimate of the Rabid Puppy 2015 nomination vote comes from the Rabid Puppy pick that placed #9, Brad Torgersen’s The Chaplain’s War, with 196 votes. Now, Torgersen could have received a number of votes outside the Rabid Puppy process, but other solo Rabid Puppy picks, like the John C. Wright novellas, earned in the range of 150 votes. This year’s estimate would double to triple that vote. Is this reasonable? I’ll leave that in your hands. Has the year-long controversy, with thousands of blog posts, increased the Rabid Puppies to the range of 400-500 votes? Controversy tends to drive strong reactions on both sides. Or is there a top limit to Rabid Puppy support, and how would you calculate that? Is 200-400 votes enough to sweep a lot of categories, or will the typical vote also triple, making this year much more competitive? Since the Rabid Puppies overlap with the Sad Puppies on several picks, are those novels a sure thing? What band do you expect the Rabid Puppies to be in: 40%, 60%, 80%, or something else?
Tomorrow, I’ll wade into the typical vote. Be warned!
Time to do what Chaos Horizon does: break out some numerical estimates for the 2016 Hugo Awards. Over the next several posts, I’m going to try to estimate how many votes the different voting groups in the 2016 Hugos are likely to generate under a number of different scenarios. We can then combine them to come up with my prediction, which I’ll post April 1st, the day after Hugo voting closes.
I’m going to start with my estimates from the end of the 2015 Hugo season using the final vote statistics. Here’s what I estimated back then:
Core Rabid Puppies: 550-525
Core Sad Puppies: 500-400
Sad Puppy leaning Neutrals: 800-400 (capable of voting a Puppy pick #1)
True Neutrals: 1000-600 (may have voted one or two Puppies; didn’t vote in all categories; No Awarded all picks, Puppy and non-Puppy alike)
Primarily No Awarders But Considered a Puppy Pick above No Award: 1000
Absolute No Awarders: 2500
I think those numbers are at least in the ballpark and give us a place to start modelling. Since you can’t vote against a pick in the nomination stage, we don’t need to know the difference between “No Awarders” and other more traditional Hugo voters. I’m going to combine all the non-Puppy voters into one big group, called the “Typical Voters.” I’ll initially assume that they’ll vote in similar patterns to past Hugo seasons before the Puppies. I’ll have more to say about that assumption later on.
Here are the numbers I’ll be using; you may wish to adjust them up or down depending on your read of last year.
Rabid Puppies: 550
Sad Puppies: 450
Due to a quirk in Hugo voting rules, everyone who voted in 2015 is eligible to nominate in 2016. Note those are the max raw numbers, not how many votes each group is likely to generate. I don’t think everyone will vote in 2016, but due to the high passions surrounding the 2015 season, I expect we’ll get a high turnout. I’m going to model the three groups at 40%, 60%, and 80% turnout. By using data from past voting patterns, specifically what percentage the various choices for each group received in past Hugo nominations, we might be able to ballpark which books will make the ballot. We can pull this data for the typical voters from the past Hugo packets. Remember, I even estimated what the “decay” percentages were for both the Rabid and Sad Puppies.
There are a lot of shifting variables and unknowns here, so I don’t know if we can land at something reasonable. So, to estimate (as an example) the #3 pick from the Typical voters, I’ll need to do the following:
Typical Pick #3 estimated total: 5000 * estimated turnout * average % of the #3 pick
So, if you estimate 60% turnout and used past Hugo data to see that #3 pick averages a 13% showing, you’d get 390 votes. Now, there’s plenty to critique here: maybe turnout will be higher or lower. Maybe this year’s patterns won’t follow previous years. Maybe I don’t have the right books in the right slots. Still, I always find any estimate more interesting than no estimate. If you don’t, Chaos Horizon probably isn’t the website for you!
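The general formula above can be written as a one-line function. The name `slot_estimate` is mine; the worked example is the one from the text (5,000 typical voters, 60% turnout, a #3 slot averaging 13%):

```python
# General estimate: group pool size * turnout * the slot's historical
# average share of the nominating vote.
def slot_estimate(pool, turnout, slot_share):
    """Return the estimated nomination votes for one slot, rounded."""
    return round(pool * turnout * slot_share)

print(slot_estimate(5000, 0.60, 0.13))  # 390
```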
The first thing I need is a list of books to try to model. By taking the top 5 novels from the Sad Puppies, the Rabid Puppies, and my SFF Critics list, we get a total of 12 likely Hugo contenders. Note there is overlap, including lower down on the lists; that’ll be accounted for in my estimates:
Ancillary Mercy, Ann Leckie (SFF Critics)
Uprooted, Naomi Novik (SFF Critics, Sad Puppies)
The Fifth Season, N.K. Jemisin (SFF Critics)
Aurora, Kim Stanley Robinson (SFF Critics)
Sorcerer to the Crown, Zen Cho (SFF Critics)
Seveneves, Neal Stephenson (Rabid Puppies, Sad Puppies)
Golden Son, Pierce Brown (Rabid Puppies)
Somewhither, John C. Wright (Rabid Puppies, Sad Puppies)
The Cinder Spires: The Aeronaut’s Windlass, Jim Butcher (Rabid Puppies, Sad Puppies)
Agent of the Imperium, Marc Miller (Rabid Puppies)
Honor At Stake, Declan Finn (Sad Puppies)
A Long Time Until Now, Michael Z Williamson (Sad Puppies)
Let’s call this the list of “possible contenders.” Sure, a different novel may sneak up on us, but I find it unlikely in a year as hotly contested as this. If you don’t show up near the top of a best-of list, recommendation list, or slate, how are you going to beat the books that do? What’s your path to accumulating votes?
Let’s stop here for today; no need to overwhelm each post with data. I’ve got a couple of questions for my readers: are there any other “major” novels I should try to estimate? I could see someone arguing that Aurora isn’t a top pick and that the spot should go to The Water Knife or The Just City (i.e. novels by former Hugo winners), but changing the name of the novel won’t change the estimate for that slot; it’s still “Typical Slot #4.” Same thing with Sorcerer to the Crown: maybe The Sorcerer of the Wildeeps should be in that spot. The important thing is to have on the list any novels that might overlap between the groups. But I’d be interested to hear if you think there’s another big contender, and why; I can add a few more novels to the list pretty easily. Otherwise, we’ll dig into the Rabid Puppy vote tomorrow.
To supplement the mainstream’s view of SFF, I also collate 10 different lists by SFF critics. Rules are the same: appear on a list, get 1 point.
For this list, I’ve been looking for SFF critics who are likely to reflect the tastes of the Hugo award voters. That way, my list will be as predictive as possible. I’m currently using some of the biggest SFF review websites, under the theory that they’re so widely read they’ll reflect broad voting tastes. These were Tor.com, the Barnes and Noble SF Blog, and io9.com.
For the other 7 sources on my list, I included semiprozines, fanzines, and podcasts that have recently been nominated for the Hugo award. The theory here is that if these websites/magazines were well enough liked to get Hugo noms, they likely reflect the tastes of the Hugo audience. Ergo, collating them will be predictive. This year, I used the magazines Locus Magazine and Strange Horizons, the fan websites Book Smugglers, Elitist Book Reviews, and Nerds of a Feather (to replace the closing Dribble of Ink; Nerds didn’t get a Hugo nom last year, but was close, and I need another website), and fancasts Coode Street Podcast and SF Signal Podcast.
Here are the results (and a link to the spreadsheet):
1. Ancillary Mercy, Leckie, Ann: on 8 lists
1. Uprooted, Novik, Naomi: on 8 lists
3. The Fifth Season, Jemisin, N.K.: on 7 lists
3. Aurora, Robinson, Kim Stanley: on 7 lists
5. Sorcerer to the Crown, Cho, Zen: on 6 lists
5. Dark Orbit, Gilman, Carolyn Ives: on 6 lists
7. The Sorcerer of the Wildeeps, Wilson, Kai Ashante: on 5 lists
7. A Darker Shade of Magic, Schwab, V.E.: on 5 lists
7. The Just City/Philosopher Kings, Walton, Jo: on 5 lists
10. Karen Memory, Bear, Elizabeth: on 4 lists
10. The Traitor Baru Cormorant, Dickinson, Seth: on 4 lists
10. Europe at Midnight, Hutchison, Dave: on 4 lists
10. Archivist Wasp, Kornher-Stace, Nicole: on 4 lists
10. The Grace of Kings, Liu, Ken: on 4 lists
10. Luna: New Moon, McDonald, Ian: on 4 lists
10. Seveneves, Stephenson, Neal: on 4 lists
10. Radiance, Valente, Catherynne: on 4 lists
This list was much more top-heavy than the mainstream list. The top 4 novels of Ancillary Mercy, Uprooted, The Fifth Season, and Aurora were pretty much the consensus of critics in 2015; almost everyone mentioned them in glowing terms. In an ordinary Hugo year, uninflected by Sad or Rabid Puppies, I think Leckie, Jemisin, Novik, and Robinson would be good bets to make the final Hugo ballot. I’d round that out with Seveneves from slightly lower down on the list, based on Neal Stephenson’s strong Hugo history, including nominations for similarly long books like Cryptonomicon and Anathem, as well as a win for The Diamond Age. Familiarity with the voting audience always helps, as does Seveneves’ popularity.
The books in the 5-15 range make for an interesting and varied bunch. You see a lot of more unusual fantasy in that part of the list, from Zen Cho to Kai Ashante Wilson to Elizabeth Bear to Seth Dickinson to Ken Liu. All those texts will likely split the vote with each other, preventing one from emerging from the crush. Dave Hutchinson is absolutely adored by European critical voices; if the Hugos were taking place overseas, as they will next year, he’d be a good bet. The Europe series hasn’t made much of an impact in the United States, though, and that dooms his chances for the Hugo this year. Dark Orbit did very well, but with more obvious SF choices like Leckie, Robinson, and Stephenson in play, I think Gilman will get lost in the Hugo shuffle. It might be a strong contender for some of the smaller SFF awards like the Clarke or the Campbell Memorial.
Some of our Nebula nominees didn’t fare particularly well with SFF critics. None of our SFF critics recommended Barsk: The Elephant’s Graveyard, possibly due to how late it came out in the year (end of December). Charles Gannon also had no recommendations from these 10 critical lists. Fran Wilde’s Updraft only had 2. Tough to see any of these making the final Hugo ballot, particularly in a competitive year.
Normally, I’d combine this list with past Hugo history and overall popularity to make my predictions. This year, I’ll need to balance those 3 factors with the Rabid Puppy and Sad Puppy recommendations to come up with something credible. Not taking the Puppies into account, I’d have Leckie / Jemisin / Novik / Robinson / Stephenson as my Top 5, with Bacigalupi / Walton / Liu / Cho / Dickinson / Bear / McDonald / Wilson following. Brandon Sanderson is also popular enough to be in the 8-12 range, and The Expanse TV show might help James S.A. Corey push back up into that territory. You’d figure Cixin Liu would get some votes for The Dark Forest, being last year’s winner and all.
Some final notes on the lists: to collate them, I try to include everything identified as “Best of the Year” in each post. Some of these lists are very long (20-40 items), so sometimes I think not making a list means more than making it. “Honorable Mentions” don’t count, and I make no effort to verify eligibility or genre. I also don’t track “Best First Novel” lists. Many of these lists are themselves collections, gathering the opinions of 5, 10, or more critics; sometimes a book will be mentioned 3 or 4 times in a single post. I limit points to one per post. Once again, the spreadsheet is here.
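The collation rules described in this post (one point per source list, repeat mentions within a post counted once) can be sketched as follows. The list contents here are invented examples, not the actual source lists:

```python
# Meta-list collation: +1 point per source list a book appears on;
# duplicate mentions within a single list/post count only once.
from collections import Counter

def collate(source_lists):
    """Return a Counter mapping book -> number of lists it appears on."""
    scores = Counter()
    for books in source_lists:
        for book in set(books):  # dedupe repeat mentions in one post
            scores[book] += 1
    return scores

lists = [
    ["Ancillary Mercy", "Uprooted"],
    ["Uprooted", "Uprooted", "Aurora"],  # repeat mention counts once
    ["Ancillary Mercy"],
]
print(collate(lists).most_common())
```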
It’s Spring Break for me, so I’ve got a chance to wrap up some of my “lists of lists.” The first we’ll look at is my Best of 2015 Mainstream Meta-List. This list collates 20+ “Best of 2015” lists by mainstream outlets such as the NY Times, Amazon, Goodreads, Entertainment Weekly, and so on.
The collation works in a simple fashion: appear on a list, get 1 point. I then add up the points from all 20 lists. Results are below. I tried to use the same sources as last year so we can make meaningful year-to-year comparisons. Here’s what I said last year about why I stopped at 20 lists:
I’m stopping at 20 for a couple reasons. First, this is a sample, not a comprehensive study. Second, I don’t think I’m adding much new information: each new “Best of List” is repeating the same books over and over again, so I think we’ve triangulated into what the mainstream believes are the best SFF books of the year. Lastly, I’ve got other things to look at—I don’t want to spend too much time getting caught up with what the mainstream thinks.
These mainstream websites tend to be very mainstream (duh) in their tastes. They don’t dip very deeply into SFF waters, often choosing the biggest names and buzziest books. So this list gives us a very incomplete picture of who might win the Nebula or Hugo. Last year, David Mitchell’s The Bone Clocks topped this list and didn’t even manage a Hugo or Nebula nomination. Our eventual Nebula winner was #3 (Annihilation), and the Hugo winner (The Three-Body Problem) didn’t make the Top 20. What this list does, though, is give us a picture of which Science Fiction and Fantasy works broke through into mainstream culture.
So here’s the top of the list, with all works that received at least 3 votes. The whole list is available at this link.
1. The Water Knife, Bacigalupi, Paolo: on 7 lists
1. The Fifth Season, Jemisin, N.K.: on 7 lists
1. Seveneves, Stephenson, Neal: on 7 lists
4. Ancillary Mercy, Leckie, Ann: on 6 lists
4. Aurora, Robinson, Kim Stanley: on 6 lists
6. Golden Son, Brown, Pierce: on 5 lists
6. The Buried Giant, Ishiguro, Kazuo: on 5 lists
8. Three Moments of an Explosion, Mieville, China: on 4 lists
8. Slade House, Mitchell, David: on 4 lists
8. Uprooted, Novik, Naomi: on 4 lists
10. The Aeronaut’s Windlass, Butcher, Jim: on 3 lists
10. Sorcerer to the Crown, Cho, Zen: on 3 lists
10. The Traitor Baru Cormorant, Dickinson, Seth: on 3 lists
10. Get in Trouble, Link, Kelly: on 3 lists
10. The Dark Forest, Liu, Cixin: on 3 lists
10. The Shepherd’s Crown, Pratchett, Terry: on 3 lists
What do we learn? From the point of view of mainstream outlets, there was no breakthrough SFF novel in 2015. Compare that to 2014, where two novels appeared on more than 10 lists:
1. The Bone Clocks, David Mitchell: on 13 lists
2. The Martian, Andy Weir: on 10 lists
What we have this year is a lot of works essentially tied. It’s no surprise to see Bacigalupi, Stephenson, Leckie, and Robinson at the top of the list. Those are some of the obvious choices for mainstream critics who don’t know that much about SF. Jemisin had a very good year with The Fifth Season, getting more mainstream acclaim than ever before. That might bode well for her Hugo chances. Remember, she’s already grabbed a Nebula nom.
Ishiguro and Mitchell are your standard literary interlopers. Golden Son did very well and Pierce Brown could be an author to keep an eye on. Red Rising didn’t make the Top 15 in Hugo nomination voting last year, though. Uprooted is lower down than I thought it would be; I figured “Fairy Tale” would be an easy sell to the mainstream. Live and learn. That might be an important piece of info as we weigh the relative chances of Uprooted versus The Fifth Season.
For me, my main take-away is that there is no “consensus” book in 2015. There isn’t one novel that blew everyone away and that is going to steamroll to the Hugo and Nebula like American Gods. Instead, we’ll have a very close and competitive year.
How predictive will this list be for the Hugos and Nebulas? Well, 3 of the Top 8 were Nebula nominees. Not terrible. In a normal Hugo year, I could easily imagine a ballot of Bacigalupi / Stephenson / Leckie / Jemisin / Novik. This year with all the slate activity, we’ll likely get something very different. If the ballot winds up as Butcher / Stephenson / Leckie / Jemisin / Wright (as a guess), that will still be 3 out of the 5. Pretty pretty good.
Some final notes on sources: I try to include a wide variety of venues. Some lists are flat-out “Best of the Year” with no genre specifications. Some are “Best SF and F” of the year. Some are short, 5 or so items. Some are long, giving 20-30 books. Some are “Holiday Guides” that are de facto Best of 2015 lists. Here are the sources with links: Amazon, Washington Post, Publishers Weekly, Goodreads Choice Science Fiction, Goodreads Choice Fantasy, Entertainment Weekly, NY Times, NPR, The Guardian, Buzzfeed, Chicago Tribune, Kirkus, Library Journal, Christian Science Monitor, Huffington Post, Slate, LA Times, A.V. Club, Wall Street Journal, SF Chronicle, and Audible.
Time to check in with my annual SFF Award Meta-List, where I keep track of 14 different Science Fiction and Fantasy awards. I track the nominees and the winners, and then add those all up to see which book was the most popular on the awards circuit. I find this a great measure of whether or not there is a “consensus” book of the year. That happened just a few years ago with Ancillary Justice: it had 8 nominations going into the Hugos and Nebulas, and thus was a pretty obvious Hugo/Nebula winner.
There are a lot of SFF awards. I’ve chosen ones that are “best novel” awards. Some are restricted by genre (either Fantasy or SF), some by format (the Philip K. Dick is paperback only). Some are juried; some are membership votes; the Gemmell is an open internet vote. By combining all of these, I think we get a very broad view of the field. One note: I don’t include awards that are “First Novels” only, as I feel that is too restrictive. This includes awards like the Crawford or the Locus First Novel category.
Not every year produces a consensus book. On last year’s list, Cixin Liu’s The Three-Body Problem led with 5 nominations. It won only once, but that was the Hugo. The eventual Nebula winner, Jeff VanderMeer’s Annihilation, tied for second with 4 total noms and that one Nebula win. So even in a year where no one reached a huge number, the list was still fairly predictive.
In 2016, 4 different awards have already announced their nominees: the Philip K. Dick, the British Science Fiction Association Awards (BSFA), the Kitschies, and the Nebulas. Not a lot so far, but has anyone emerged as an early leader? Here’s the list of everyone who has gotten more than one nomination:
Europe at Midnight, Dave Hutchinson (2 nominations, Kitschies, BSFA)
The Fifth Season, N.K. Jemisin (2 nominations, Nebulas, Kitschies)
Hutchinson’s Europe series has been very well-received across the pond, but it hasn’t had much impact here in the US. Since both the BSFA and the Kitschies are British awards, I think this is Hutchinson’s moment in the sun at the top of this list. The Fifth Season seems to be doing slightly better than Uprooted so far. Possible predictor for the Nebula? Probably too early to use this list for much.
Here’s the whole list on Google Docs. I’ll update it as more nominations come out. So far, 21 different novels have received nominations, meaning this might be a very scattered year.