2016 Nebula Winners Announced: Novik Wins Best Novel
The SFWA announced the Nebula winners this weekend:
Novel Winner: Uprooted, Naomi Novik (Del Rey)
Other nominees:
Raising Caine, Charles E. Gannon (Baen)
The Fifth Season, N.K. Jemisin (Orbit US; Orbit UK)
Ancillary Mercy, Ann Leckie (Orbit US; Orbit UK)
The Grace of Kings, Ken Liu (Saga)
Barsk: The Elephants’ Graveyard, Lawrence M. Schoen (Tor)
Updraft, Fran Wilde (Tor)
Novik wins for Uprooted, with its fairy-tale feel. This year, popularity seems to have won out. Compare Novik’s number of ratings to Jemisin’s and Leckie’s as of today, 5/16/16:
Title | Goodreads | Amazon
Uprooted | 38,266 | 1,154
Ancillary Mercy | 9,782 | 205
The Fifth Season | 5,658 | 120
In fact, Uprooted is arguably the most popular science fiction or fantasy book of last year. I can’t think of a single book that has more Goodreads ratings this year; it just passed Armada this past month. You can check my list, which only tracks until March 31st (I only use it for predicting nominees, not winners, although maybe I should start using it for both!). Seveneves, Armada, and The Aeronaut’s Windlass still beat Novik out on Amazon, though. Novik being so much more popular than anyone else seems to have given her the edge: more readers, more potential voters, even in the relatively small pool of the SFWA.
This makes Uprooted a prohibitive Hugo favorite. When a Nebula winner is up for the Hugo, it almost always wins. Sadly, my Nebula prediction formula isn’t working very well; I’ll have to tweak it this summer to take raw popularity more into account.
Congrats to Novik!
UPDATE 5/16/16: Here’s some historical data on the Best Novel winners from the SFWA recommended reading list. Eventual winner is in orange, nominees in green.
This year, Novik won even though she was much lower on the list, in position #4. She was beaten in the recs by Gannon, Schoen, and Wilde. I think each of those books had a very strong nomination support group that didn’t translate to the larger voting audience. Any thoughts on why this data wasn’t predictive? Here are this year’s SFWA recommendations, which correlated perfectly with the nominees but not the winner. The far-left column is the number of recs.
Recs | Title | Author | Publisher | Date
35 | Barsk: The Elephants’ Graveyard | Schoen, Lawrence M. | Tor Books | 12/2015
33 | Raising Caine | Gannon, Charles E. | Baen | 7/2015
29 | Updraft | Wilde, Fran | Tor Books | 9/2015
25 | Uprooted | Novik, Naomi | Del Rey | 5/2015
22 | The Grace of Kings | Liu, Ken | Saga Press | 4/2015
21 | Ancillary Mercy | Leckie, Ann | Orbit | 10/2015
19 | The Fifth Season | Jemisin, N. K. | Orbit | 8/2015
18 | Beasts of Tabat | Rambo, Cat | WordFire Press | 4/2015
18 | Karen Memory | Bear, Elizabeth | Tor Books | 2/2015
Interesting thing: in 4 of the 6 categories (novelette, short story, Bradbury, Norton), the winner was #1 on the SFWA reading list, but the novel and novella winners were #4 (of 7) and #5 (of 6). Any thoughts on this?
One possible explanation I thought of: perhaps consensus forms sooner in the shorter fiction categories and dramatic presentation, but takes more time in the longer ones. (Sadly, they’ve taken down previous years’ data, so I can’t do more checking.)
Let me add the chart to the post—I ran just that analysis earlier in the year.
I think the Novel category in particular gets more votes, and thus the initial polling is inaccurate due to the sample size. A small group of Schoen/Gannon fans can drive a nomination, not a win. Take a look at the four years of data I have and see what you think.
Yeah the sample size argument is convincing, especially given Uprooted’s insanely high popularity.
Looking at previous years, the only similar popularity outlier I can find is Neil Gaiman’s The Ocean at the End of the Lane – but he withdrew his Hugo nomination and made it clear that he wanted to step aside and let other books win, so that would have affected the Nebula voters too.
The list of Nebula winners is interesting enough that I’ve gotten interested in the process. The trend is toward all-female winners, which suggests the women members are voting for women’s works and the men are out to lunch. If you assume this, then you can cross off Gannon, Liu, and Schoen. That makes it a contest between Jemisin, Leckie, Wilde, and Novik, or really between Wilde and Novik, which (without the men) would be #1 and #2 on the reading list. At that point, your prediction of which would win was correct.
I’m sure that would make for an excellent headline, but the simple fact is that Uprooted is by far the most popular fantasy novel of the year. (It has double the Goodreads ratings of Sanderson’s Shadows of Self, which is quite remarkable.) That makes your conclusions seem extremely speculative and unlikely.
I’m looking at the Nebula results as a whole in offering this observation. VanderMeer has been the only male winner out of the 16 fiction-category winners (novel, novella, novelette, short story) in the last three years. All 15 of the other winners have been women. I think any attempt to predict the winners should take this trend into account.
I think this is an interesting comment. I don’t think this is happening just yet.
But it raises the question of whether SFWA and publishing in general are undergoing “feminization” (where patriarchy labels a traditionally male-dominated field as feminine and proceeds to devalue that field) rather than moving toward equality. Generally (correct me if I’m wrong), feminization happens when women make up more than 60% of the field, though there is some debate about the precise number.
Which leads to the question of the gender balance of the voters and whether it impacts what wins. If fewer men are participating in the Nebulas, then it could be on the path to feminization. But I think as long as critically acclaimed and popular stories are winning, this isn’t happening.
In the Best Novel category, Jeff VanderMeer won just last year and Kim Stanley Robinson three years ago; not much of a trend in this category.
Hey, Lela. I don’t think there’s enough data to make that claim yet. There’s one chance in 8 that a randomly selected group of 4 people will all be of one gender.
These selections aren’t random, but are based on the preferences of a self-selecting population of voters. I’m just suggesting that Brandon might consider the effects of gender in making up his model. He’s already noted the strong predictive quality of the SFWA reading list, and the last time there was an equitable gender breakdown in the final fiction winners was for 2012.
But it seems odd that a 1/8 probability event is happening repeatedly (winners of the same gender). But I agree with Lela that the selections aren’t random. To me the evidence is compelling that gender plays a role in the probability of winning a Nebula award.
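As a side note, the one-in-8 figure itself is easy to verify: assuming each of 4 winners is independently equally likely to be of either gender (an assumption for illustration only, not a claim about the actual voter pool), P(all four match) = 2 × (1/2)^4 = 1/8. A quick sketch in Python, with a Monte Carlo sanity check:

```python
import random

# Exact probability that 4 independent 50/50 draws all land on the
# same gender: P(all M) + P(all F) = (1/2)**4 + (1/2)**4 = 1/8
exact = 2 * (0.5 ** 4)
print(exact)  # 0.125

# Monte Carlo check: simulate 4 winners per year, count the years
# where the set of genders collapses to a single value.
random.seed(1)
trials = 100_000
all_same = sum(
    1 for _ in range(trials)
    if len({random.choice("MF") for _ in range(4)}) == 1
)
print(all_same / trials)  # ~0.125
```

Of course, as noted above, real award results aren’t independent coin flips, so this only bounds how surprised we should be by a single year, not by a multi-year trend.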
I’m interested by Brandon’s statement that he thinks Novik is now the prohibitive favorite to win the Hugo. Can we see the stats on that? I had no intention of reading UPROOTED, but now that it has won I have it on my Amazon wish list and will probably read it before the end of the summer.
You gave the exact reason why Uprooted is now the favorite: people see that it won the Nebula and then bump it up their reading lists. More readers = more votes. Here’s the post where I crunched the Hugo/Nebula convergence numbers. In recent years, when the Nebula winner has had a chance to win the Hugo, it usually has; only 2312 bucked the odds, and that’s because Scalzi turned down the Nebula nomination for Redshirts.
This year I think Uprooted is even more of a favorite than that due to the Rabid Puppies campaign. Since the Hugo voting system is designed to produce a consensus winner, isn’t Uprooted the only consensus book left? Some voters will No Award Butcher and Stephenson for appearing on the Rabid Puppies slate; some (most?) Rabid Puppies will, based on their history, No Award Leckie and Jemisin. This is the exact scenario that let The Three-Body Problem beat The Goblin Emperor last year. Throw in Uprooted‘s massive popularity and critical acclaim, and it’s hard to see a path to victory for anyone else.
This is a bit off-topic, but personally I’m curious whether there’s a correlation between the presence of a full novel (rather than an excerpt) in the Hugo Voter Packet, and the voting results.
I think there are quite a lot of people who refuse, on principle, to assess a work’s quality based solely on an excerpt (even if the excerpts are sometimes quite sizable).
I dunno that it has much effect. I can generally tell from the excerpt whether I like the book or not. A hundred pages allows you to assess the characters, the writing style, pacing, etc. If I really like what I’ve read, then I’ll buy the book to finish it. If I’m not hooked that far in, I’m not going to vote for it anyway.
Interestingly, the reviews seem to be putting Seveneves in the top spot.