SFWA 2015 Nebula Recommended Reading List: Analysis and Prediction
In a somewhat surprising move, the Science Fiction and Fantasy Writers of America (SFWA) decided to make their annual “Nebula Recommended Reading List” available to the public. I say this is surprising because the Nebulas have historically been a closed shop. They don’t share voting numbers with the public, only the list of nominees and winners. Here’s the Press Release, and a link to the list pages themselves.
This list gathers the various SFWA members' (i.e. Nebula voters') recommendations as to the best of the year. They also included the number of recommendations for each work. They even threw in the data from 2014! All of this is stunning because it gives us an enormous amount of information to predict the Nebulas. In fact, this is the best data I've ever had at Chaos Horizon!
To use this information to predict the upcoming Nebulas, I’ll need to look to see whether the 2014 recommendations were correlated to the eventual Nebula nominees and winners. Sneak preview: they are, to the tune of 84% accuracy! Winning was a little dicier, with only 50% accuracy from the Suggested Reading, although the two other winners placed #2 on their respective Suggested Reading list. Why the SFWA would want to give so much information away is beyond me.
Once we know the correlation, I can make a common Chaos Horizon assumption (i.e. in the absence of better data, what happened last year is likely to happen again next year), and use that 2014 correlation to predict what will happen in 2015. I’m not claiming that this list is causal; I’ll discuss some possibilities below. What I will claim is that if the 2014 list predicted the Nebulas with 84% accuracy, you better take that into account for 2015!
So, let’s dig in. I’m interested in the Novel, Novella, Novelette, and Short Story categories. The SFWA also gives a movie/TV award named the Bradbury and a Young Adult award named the Norton; those seem to work a little differently, so analyze them on your own.
What I’m going to do is see what correlation exists between the Top 6 works from the “Suggested Reading List” and the Nebula nominees. Simple language: do the works show up on both lists?
Table 1: Correlation Between Top 6 (and Ties) of the 2014 Nebula Suggested Reading List and the Eventual 2014 Nebula Nominees
Novel: 4 out of 6, 66.7%
Novella: 6 out of 6, 100%
Novelette: 5 out of 6, 83.3%
Short Story: 6 out of 7, 85.7%
Total: 21/25, 84%
If Chaos Horizon ever manages to be 84% accurate, I’m packing it in. Those are staggering correlation numbers. With only a few exceptions here and there, the top 6 works from the 2014 Suggested Reading List were the eventual Nebula nominees. Will the same happen in 2015? I wouldn’t bet against it.
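The overlap numbers in Table 1 amount to a simple set intersection. Here's a minimal sketch of that calculation; the titles are placeholders (not the real works), and the shape mirrors the Short Story category, where 6 of the top 7 suggested works became nominees:

```python
# Rough sketch of the Table 1 overlap check. Titles are placeholders,
# not the actual 2014 works; only the 6-of-7 shape matches the post.
def overlap_rate(suggested_top, nominees):
    """Fraction of the suggested top slots that became Nebula nominees."""
    hits = set(suggested_top) & set(nominees)
    return len(hits) / len(suggested_top)

suggested = ["A", "B", "C", "D", "E", "F", "G"]  # top 7 (6 plus a tie)
nominees = ["A", "B", "C", "D", "E", "F", "X"]   # 6 overlap, 1 miss
rate = overlap_rate(suggested, nominees)
print(f"{rate:.1%}")  # 85.7%
```

Summing hits and slots across all four categories the same way gives the 21/25 = 84% total.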
In Novel, our outlier with only 2/3 accuracy, the Top 3 books from the Top 10 list got nominated, and the lowest nominees were two books tied for #7. Here's the 2014 Suggested Reading List, Novel category; the number at the start of each line is the number of recommendations.
32 Annihilation VanderMeer, Jeff FSG 2 / 2014 (nominee, winner)
23 Trial By Fire Gannon, Charles Baen 8 / 2014 (nominee)
22 The Goblin Emperor Addison, Katherine Tor 4 / 2014 (nominee)
19 Afterparty Gregory, Daryl Tor 4 / 2014
17 My Real Children Walton, Jo Tor 5 / 2014
16 Ancillary Sword Leckie, Ann Orbit 10 / 2014 (nominee)
15 Coming Home McDevitt, Jack Ace 11 / 2014 (nominee)
15 The Three-Body Problem Liu, Cixin Tor Books 11 / 2014 (nominee)
9 Lagoon Okorafor, Nnedi Hodder & Stou… 4 / 2014
8 American Craftsmen Doyle, Tom Tor 5 / 2014
Gregory got a Novella nomination for We Are All Completely Fine; perhaps people didn't want to nominate him twice. I can't account for why Walton didn't make it, but the difference between 15 and 17 votes is pretty small. The McDevitt and Liu also came out in November and maybe didn't have enough time to pick up votes on this list.
Let’s not get too overwhelmed. Perhaps 2014 was an odd year, and there are normally greater differences? We’re also talking about some pretty fine numerical differences in categories like Short Story, where the difference between being in the Top 6 and outside that is 1 vote.
Nonetheless, based on 1 year of data, using the Suggested Reading List to predict the Nebulas seems very viable. It even seems to work to predict the winner: Annihilation easily won the Suggested Reading list and went on to win the Nebula.
There are a couple explanations I can think of as to why these lists are so correlated:
1. The Suggested Reading List is Causal: Under this theory, you'd claim that the Suggested Reading List exerts such a gravitational force on the Nebulas that it essentially forms the final nominations, i.e. it works like a slate. Once you get the ball rolling on these recommendations, more people read those works, which leads to more recommendations, which leads to more votes, etc. While the suggested list isn't 100% accurate, 84% is pretty substantial. This analysis would seemingly open the door to manipulation: if someone could logroll a specific author up into the Top 3 or 4, would they receive a nomination? I don't know.
2. The Suggested Reading List Operates as an Accurate Poll: Perhaps we shouldn't think of the list as causal, but more from a polling perspective. There were 372 total votes on the 2014 Novel list. Let's say your average recommender recommended 3 books; that's mostly a number I pulled out of the air, but it does sync up with how many books the average Hugo voter nominates. I'm sure some people recommended 1, some people recommended 5 or 6. The average has got to be somewhere in the middle. Using an average of 3, that would mean around 124 people participated in building the Suggested Reading List. Is that a substantial snapshot of who actually nominates? The SFWA says they have 1800 members, but how many vote in the Nebula nomination stage? Half? A quarter?
Let's say it's half (which I think is way too high), and that you accept my 124 number. That means the Suggested Reading list is effectively polling (124/900) = 13.8% of the eventual nominators. That's a solid N. Now, since this isn't a random poll and I had to estimate voting pool size, etc., we can't calculate things like margin of error with any accuracy. This is also a self-selected poll (i.e. people choose to participate), and, as such, the most passionate voters will be the ones participating.
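The arithmetic behind that estimate is short enough to write down. Every number here is a guess from the paragraphs above (the recs-per-member average and the turnout rate are mine, not official SFWA figures):

```python
# Back-of-the-envelope poll coverage, using the post's assumed numbers.
total_votes = 372      # total recommendations on the 2014 Novel list
recs_per_member = 3    # guessed average recommendations per participant
sfwa_members = 1800    # reported SFWA membership
turnout = 0.5          # guessed share who nominate (likely too high)

voters_polled = total_votes / recs_per_member   # about 124 participants
nominators = sfwa_members * turnout             # 900 assumed nominators
coverage = voters_polled / nominators           # roughly 0.138
print(f"{voters_polled:.0f} participants, {coverage:.1%} of nominators")
```

Shrink the assumed turnout and the coverage percentage only grows, which is the point: even under conservative guesses, this is a big sample.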
But it does seem, based on one year's data, that these suggesters have their finger on the pulse of the Nebula voters. Perhaps the 15% (or 25%, or 35%, whatever the actual number is) just accurately reflects how these voters think.
3. The Number of Nominators is Small: This is another interpretation you could put on these numbers. We've never known how many votes it takes to get a Nebula nomination, but maybe the number of votes from the Suggested List is all that it takes (or close to it). If you go back to 2012 with the Hugos (well before recent controversies/slates set in), it only took between 36 and 72 votes to grab a Short Story Hugo nomination. If the Nebulas have half as many voters as the Hugos (again, we don't know), the 11-17 votes indicated by the Suggested Reading list may have been in the ballpark of what it takes to grab a nom.
What It All Means: We don't need to fully understand how the list impacts the Nebulas. In fact, Chaos Horizon is more interested in predicting what will happen than explaining why things happen. The Suggested List is published, and that data goes into the black box of the Nebula nominators' minds. Something happens in there, then the list of nominees pops out. If the data going in consistently matches the data coming out to the tune of 84%, who cares what happens in the black box?
For Chaos Horizon, it means that to predict the Nebulas, I should use the Suggested Reading List because it has an 84% success rate. Even if that drops to 75%, it’s still stellar.
So, we could take the 2015 suggestions and predict the eventual Nebula nominees right now. If we check out the total number of votes this year, we're already at 337—not quite the 372 of last year, so I'd still expect some movement, particularly for works that came out late in the year (the Gannon and McDevitt would be prime examples of works I'd expect to rise). Making the list public, which will bring more scrutiny, might also change the way people vote.
Here’s the current Top 10 for the Novel:
20 Uprooted Novik, Naomi Del Rey 5 / 2015
17 The Grace of Kings Liu, Ken Saga Press 4 / 2015
16 Karen Memory Bear, Elizabeth Tor Books 2 / 2015
13 The Traitor Baru Cormorant Dickinson, Seth Tor Books 9 / 2015
11 Ancillary Mercy Leckie, Ann Orbit 10 / 2015
10 Updraft Wilde, Fran Tor Books 9 / 2015
9 Beasts of Tabat Rambo, Cat WordFire Press 4 / 2015
9 Last First Snow Gladstone, Max Tor Books 7 / 2015
9 The Fifth Season Jemisin, N. K. Orbit 8 / 2015
8 Sorcerer to the Crown Cho, Zen Ace
Updraft has more votes over in the Norton as YA novel (11), so that might eventually show up in that category, not this one. Cat Rambo is the current President of the SFWA, so there might be some conflict of interest issues in accepting a Nebula nomination. Given Jemisin’s prior Nebula nominations in this category, I’d give her the edge over Gladstone in making it into the Top 6. So that would make my current prediction:
Uprooted, Naomi Novik
The Grace of Kings, Ken Liu
Karen Memory, Elizabeth Bear
The Traitor Baru Cormorant, Seth Dickinson
Ancillary Mercy, Ann Leckie
The Fifth Season, N.K. Jemisin
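Mechanically, that prediction is just "top 6 by recommendation count" after the two manual adjustments argued above (Updraft likely moving to the Norton, and Rambo's conflict of interest). Here's a sketch; the numeric tiebreak bonus for Jemisin is my own crude stand-in for "past Nebula performance," not anything from the SFWA data:

```python
# Baseline prediction: top 6 by recommendation count from the 2015
# Novel list, minus two manual exclusions, with a crude tiebreak
# bonus standing in for prior-nominee momentum (my assumption).
counts = {
    "Uprooted": 20, "The Grace of Kings": 17, "Karen Memory": 16,
    "The Traitor Baru Cormorant": 13, "Ancillary Mercy": 11,
    "Updraft": 10, "Beasts of Tabat": 9, "Last First Snow": 9,
    "The Fifth Season": 9, "Sorcerer to the Crown": 8,
}
excluded = {"Updraft", "Beasts of Tabat"}  # Norton move; SFWA President
bonus = {"The Fifth Season": 0.5}          # breaks the 3-way tie at 9

ranked = sorted(
    (title for title in counts if title not in excluded),
    key=lambda title: counts[title] + bonus.get(title, 0),
    reverse=True,
)
prediction = ranked[:6]
print(prediction)
```

Everything above the tie falls out automatically; the only judgment calls are the exclusions and the tiebreak.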
I suspect either Gannon or McDevitt has a strong chance to creep up into the Top 6 once the fans of those authors begin suggesting their works. I'd kick Dickinson out first, because I think Leckie and Jemisin will rise based on past Nebula performance. Gannon had 23 votes last year and only has 2 votes this year. Don't count out Bacigalupi, who is tied for 10th with 8 votes, or Robinson, at #15 with 6 votes; both have strong recent Nebula history. Strangely, The Dark Forest by Cixin Liu doesn't show up on the list at all.
Last year, Annihilation was the clear winner of the Suggested Reading list and then won the Nebula. I’m not willing to crown Uprooted yet because Novik is only 3 votes ahead of Liu, but I think she’s still the clear favorite to win at this point.
The SFWA has been updating the list constantly. I’ll keep my eye on it, and see what we can learn. There’s lots of other analysis that could be run with this data (gender breakdowns, genre breakdowns, ethnic/race breakdowns, etc.) that would tell us a great deal about the Nebulas.
If the list above turns out to be the eventual Nebula nominees, or even 5/6 of them, the SFWA is going to have to seriously consider whether or not they want to release so much info so early.
So, what do you think? Does this list give away too much information? Should the SFWA take it down? Or should we just be happy we have such good data?