Let me finalize my 2016 Hugo Best Novel prediction:
- Uprooted by Naomi Novik
- The Fifth Season by N.K. Jemisin
- Ancillary Mercy by Ann Leckie
- Seveneves by Neal Stephenson
- The Aeronaut’s Windlass by Jim Butcher
Remember, Chaos Horizon doesn’t predict what I want to happen, but rather what I think will happen based on my analysis and understanding of past Hugo trends.
First off, those past trends have been shot full of holes in recent years. The Puppy controversies have fundamentally transformed the voting pool of the Hugos, meaning that past trends might not apply given how much the voting pool has changed. New voters have come in with the Puppies; new voters have come in to contest the Puppies; some of those voters might have stayed, some might have dropped out. Some voters are voting No Award out of principle; some aren’t. How exactly you balance all of that is going to be largely speculative, maybe to the point that no predictions are meaningful. That’s why we’re called “Chaos Horizon” here!
However, I think the potential Kingmaker effect, when combined with past Hugo trends and the popularity of Uprooted, makes Novik a reasonable favorite. Novik has already won the Nebula and the Locus Fantasy (beating Jemisin twice); her book is a stand-alone, making it feel more complete than the Jemisin, Leckie, or Butcher entries; and Novik, along with Stephenson, is more popular than the other nominees. In the past, these have all been characteristics of the Hugo winner.
In the past few years, I’ve developed a mathematical formula to help me predict the Hugos. The formula won’t be accurate this year because of the Rabid Puppy voters, but here’s what it came up with:
| Novel | Formula Win % |
| --- | --- |
| The Fifth Season | 17.1% |
| The Cinder Spires | 15.3% |
The formula obviously doesn’t take pro- or anti-Puppy sentiment into account. Uprooted is a big favorite because of Novik’s Nebula win this year. The Nebula has been the best predictor of the Hugo over the last decade: in 5 of the last 10 years, the Nebula winner has gone on to win the Hugo. The stats are actually better than that. In 2006, 2007, 2009, and 2015, the Nebula winner was not even nominated for the Hugo. The only time a Nebula winner has lost the Hugo in the final voting round was when Redshirts beat 2312 in 2013. So in 5 of the last 6 years when the Nebula winner had a chance to win the Hugo, it did. Those are nice odds.
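For readers who want to check the arithmetic, here’s a quick back-of-the-envelope sketch of the Nebula-as-predictor stats described above (the yearly counts are taken straight from the text; this is not part of my formula):

```python
# Nebula winner as Hugo predictor, last 10 years (per the counts above).
years = 10
nebula_winner_won_hugo = 5
nebula_winner_not_on_hugo_ballot = 4  # 2006, 2007, 2009, 2015

raw_rate = nebula_winner_won_hugo / years               # 5/10 = 50%
contested = years - nebula_winner_not_on_hugo_ballot    # 6 years with a real head-to-head
conditional_rate = nebula_winner_won_hugo / contested   # 5/6, about 83%

print(f"Raw rate: {raw_rate:.0%}; conditional on being nominated: {conditional_rate:.0%}")
```

The conditional number is the one that matters for Novik: given that Uprooted is on the Hugo ballot, past Nebula winners in that position have won about 83% of the time.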
My formula is not designed to accurately predict second place. With that in mind, I think Jemisin is too low. Leckie won too recently with Ancillary Justice to seem to have a chance to win again, and Seveneves is pretty divisive. One reason my formula fails is because it currently doesn’t punish books for being sequels. Leckie should be lower for that reason. It’s something I’ll factor in next year. Stephenson will be lower because some people will vote him “No Award” for appearing on the Rabid Puppy list. So this pushes Jemisin up two spots.
However, Jemisin is lower down in the formula because she doesn’t have a history of winning major awards, unlike Stephenson and Leckie. Check out Jemisin’s sfadb page. She’s been nominated for 12 major awards and hasn’t won any. Not good odds. That’s what my formula is picking up on. My formula has trouble gauging changes in sentiment. I think most readers believe The Fifth Season is better than Jemisin’s earlier works, but I have trouble quantifying that.
What my numbers give is a percentage chance to win based on the patterns of past Hugo votes, combining several indicators using a Linear Opinion Pool prediction model. Prediction is different from statistical analysis: different statisticians would build different models based on different assumptions. You should never treat a prediction (either on Chaos Horizon or in something like American elections) the same way you would treat a statistical analysis; they are guided by different logical methods. Someone who disagrees with one of my assumptions would come up with a different prediction. Fair enough. This is all just for fun! You can trace back through the model using some of these posts: Hugo Indicators, and my series of Nebula model posts beginning here. The Hugo model uses the same math but different data.
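For the curious, a Linear Opinion Pool is just a weighted average of the probabilities each indicator produces. Here’s a minimal sketch; the indicator names, weights, and probabilities below are invented for illustration and are not my actual model numbers:

```python
# Linear Opinion Pool: pool several probability estimates via a weighted average.

def linear_opinion_pool(estimates, weights):
    """estimates: list of dicts mapping candidate -> probability.
    weights: one weight per estimate; normalized inside the function."""
    total = sum(weights)
    pooled = {}
    for est, w in zip(estimates, weights):
        for book, p in est.items():
            pooled[book] = pooled.get(book, 0.0) + (w / total) * p
    return pooled

# Two hypothetical indicators: same-year awards history and popularity.
awards_indicator = {"Uprooted": 0.50, "The Fifth Season": 0.30, "Seveneves": 0.20}
popularity_indicator = {"Uprooted": 0.40, "The Fifth Season": 0.20, "Seveneves": 0.40}

# Weight the awards indicator twice as heavily as popularity.
pooled = linear_opinion_pool([awards_indicator, popularity_indicator], [2, 1])
for book, p in sorted(pooled.items(), key=lambda kv: -kv[1]):
    print(f"{book}: {p:.1%}")
```

Because each indicator’s probabilities sum to 1 and the weights are normalized, the pooled probabilities also sum to 1, which keeps the final percentages interpretable.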
Let’s look at this with some other data. Here’s the head to head popularity comparison of our five Hugo finalists, based on the number of ratings at Goodreads and Amazon.
| Novel | Goodreads Ratings | Amazon Ratings |
| --- | --- | --- |
| The Aeronaut’s Windlass | 18,249 | 1,285 |
| The Fifth Season | 7,676 | 184 |
These aren’t perfect samples, as neither Goodreads nor Amazon is 100% reflective of the Hugo voting audience, nor has the Hugo Awards always correlated with popularity. Still, it gives us another interesting perspective.
Jemisin does not break out of the bubble in ways that Novik and Stephenson do. These aren’t small differences, either: Uprooted is 5x more popular on Goodreads and 7x on Amazon than The Fifth Season. I put stock in that—the more people read your book, the more there are to vote for it. While the Hugo voting audience is a subset of all readers, popularity matters.
Two other notes: it’s fascinating how different Amazon and Goodreads are. Novik outpaces Stephenson on Goodreads but gets crushed on Amazon. Different audiences, different reading habits. The question for Chaos Horizon is which one better correlates with the Hugo winner? Second, Butcher may be a very popular Urban Fantasy writer with Dresden, but he’s only a moderately popular Fantasy writer.
So, all told, Novik has big advantages in popularity and same-year awards (having won the Nebula already). None of Jemisin, Leckie, or Stephenson managed to do better than Novik in critical acclaim or award nominations. Stephenson and Leckie do beat Novik in past awards history. When we factor in the possible Kingmaker effect from the Rabid Puppies, Novik is a clear favorite.
It would take a lot of change in the voting pool to overcome Novik’s seeming advantages. I wouldn’t count it out completely—these past two years have been so volatile that anything can happen.
Last question—where will No Award place? Last year, voters chose to place Jim Butcher and Kevin J. Anderson below No Award, likely as punishment for appearing on the Puppy slates. Will it happen again this year? I have a hard time seeing Stephenson getting No Awarded, Puppy appearance or not. He’s been a well-liked Hugo writer for a long time, and he may well have scored a nomination without Puppy help. I think Stephenson will beat No Award.
That leaves Butcher. He was No Awarded in 2015 by 2674 to 2000 votes, so a No Award margin of 674. That’s a pretty substantial number. If we go back to 2014, when Larry Correia’s Warbound was the first Puppy pick to make it to the Best Novel category, he beat No Award by a 1161 to 1052 margin. So that means the “No Awarders” are picking up steam. At Chaos Horizon, I go with the past year’s results to predict the future unless there’s some compelling data to suggest otherwise. So I’ll predict that Butcher will lose to No Award in 2016 just as he did in 2015.
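The margin arithmetic above, spelled out as a trivial sketch using the vote totals quoted in the text (positive margin means the author beat No Award):

```python
# Puppy Best Novel picks vs. No Award, from the final-vote totals quoted above.
# Format: year -> (author, author_votes, no_award_votes)
matchups = {
    2014: ("Correia", 1161, 1052),   # Warbound beat No Award
    2015: ("Butcher", 2000, 2674),   # Butcher lost to No Award
}

margins = {year: author_votes - no_award_votes
           for year, (author, author_votes, no_award_votes) in matchups.items()}

for year in sorted(margins):
    author = matchups[year][0]
    m = margins[year]
    verdict = "beat" if m > 0 else "lost to"
    print(f"{year}: {author} {verdict} No Award by {abs(m)} votes")
```

The swing from +109 to -674 in one year is the "picking up steam" trend the prediction leans on.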
So, what do you think? Are we in for a Best Novel surprise, or will Novik walk away with the crown?
Let’s start with the most controversial group, the Rabid Puppies. Vox Day posted a “list” on his website; how will this affect the Hugos?
I estimated the Rabid Puppies at around 550 strong in the 2015 Final Hugo vote. I feel solid about that number; I estimated it from the 586 people who voted Vox Day #1 for Best Editor, Short Form. Vox Day leapt up to 900 by the end of the voting, and that extra 400 is how I estimated the low range of the Sad Puppies.
If the Rabid Puppies had around 550 votes in 2015, how many will they bring to 2016? Since all those who voted in 2015 can nominate in 2016, I imagine it will be a big number. Even so, I can’t imagine carrying 100% over—the nomination stage is simply less interesting, less publicized, and more difficult to vote in. Let’s imagine three scenarios: an 80% scenario, a 60% scenario, and a 40% scenario. I think 80% is the most likely; this is the group most invested in impacting the Hugos and the most likely to team up again. And since they don’t have to pay an entry fee to participate in the nomination stage . . .
I also think this group will have solid slate discipline, voting the list as Vox Day published it. If you want to factor in some slate decay, I’d do so for lesser known books like Agent of the Imperium. I won’t bother with any decay in the model. With that in mind, here are my three scenarios in the following chart:
| Book (550 max Rabid Puppies) | 40% turnout | 60% turnout | 80% turnout |
| --- | --- | --- | --- |
| The Fifth Season | | | |
| The Aeronaut’s Windlass | 220 | 330 | 440 |
| Agent of the Imperium | 220 | 330 | 440 |
| Honor At Stake | | | |
| A Long Time Until Now | | | |
A pretty simple model and not terribly informative so far. What you’ll glean from this is that the Rabid Puppies are likely to deliver a large block of votes to the works on their list. When we combine this chart with the estimated chart from the Typical vote and the Sad Puppy vote, that’s when we’ll be in business.
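The per-pick arithmetic behind the chart above is just the 550-voter ceiling multiplied by each turnout scenario, with full slate discipline and no decay. A sketch (only the two Rabid picks with numbers in the chart are shown):

```python
# Rabid Puppy scenario votes: 550 max voters, full slate discipline, no decay,
# so every Rabid Puppy pick gets the same block at each turnout level.
MAX_RABID = 550
turnout_scenarios = [0.40, 0.60, 0.80]

rabid_picks = ["The Aeronaut's Windlass", "Agent of the Imperium"]

for book in rabid_picks:
    votes = [round(MAX_RABID * t) for t in turnout_scenarios]
    print(book, votes)
```

Adding slate decay would mean multiplying each book’s row by a per-book retention factor; with decay switched off, every row is identical.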
The core question is whether or not this block will be larger than other voting groups. In more lightly voted categories like Best Related Work or categories where the vote is more dispersed like Best Short Story, 400 votes is likely enough to sweep all or most of the ballot. Think about Best Related Work: the highest non-Puppy pick last year managed only around 100 votes. The top non-Puppy short story only managed 76 votes last year. Even if you triple those this year, you’re still well under 400 votes. In a more popular category like Best Novel or Best Dramatic Work, I expect the impact to be substantial but not sweeping. Perhaps 3 out of 5? 2 out of 5?
In 2015, the Rabid Puppies placed 4 out of their 5 picks on the initial Hugo ballot (Correia and Kloos declined, leaving them with only 2 spots). They were this successful partially due to their overlap with the Sad Puppies on those 4 choices. This year, the overlap is less (only 3), so I expect the effect to be less. Even with first mover advantage—remember, the Puppies took the 2015 ballot largely by surprise—Ann Leckie still had enough votes to break up the Puppy sweep in 2015. I fully expect some non-Puppy novels to show up on the final ballot.
How does this number compare to last year’s nomination vote? My best estimate of the Rabid Puppy 2015 nomination vote comes from the Rabid Puppy pick that placed #9, Brad Torgersen’s The Chaplain’s War with 196 votes. Now, Torgersen could have received a number of votes outside the Rabid Puppy process, but other solo Rabid Puppy picks like the John C. Wright novellas earned in the range of 150 votes. This year’s estimate would double to triple that vote. Is this reasonable? I’ll leave that in your hands. Has the year-long controversy, with thousands of blog posts, increased the Rabid Puppies to the range of 400-500 votes? Controversy tends to drive strong reactions on both sides. Or is there a top limit to Rabid Puppy support? How would you calculate that? Are these roughly 200-400 votes enough to sweep a lot of categories? Or will the typical vote also triple, making this year much more competitive? Since the Rabid Puppies overlap with the Sad Puppies on several picks, are those novels a sure thing? What band do you expect the Rabid Puppies to be in: 40%, 60%, 80%, or something else?
Tomorrow, I’ll wade into the typical vote. Be warned!
Time to do what Chaos Horizon does: break out some numerical estimates for the 2016 Hugo Awards. Over the next several posts, I’m going to try to estimate how many votes the different voting groups in the 2016 Hugos are likely to generate under a number of different scenarios. We can then combine them to come up with my prediction, which I’ll post April 1st, the day after Hugo voting closes.
I’m going to start with my estimates from the end of the 2015 Hugo season using the final vote statistics. Here’s what I estimated back then:
Core Rabid Puppies: 550-525
Core Sad Puppies: 500-400
Sad Puppy leaning Neutrals: 800-400 (capable of voting a Puppy pick #1)
True Neutrals: 1000-600 (may have voted one or two Puppies; didn’t vote in all categories; No Awarded picks, Puppy and non-Puppy alike)
Primarily No Awarders But Considered a Puppy Pick above No Award: 1000
Absolute No Awarders: 2500
I think those numbers are at least in the ballpark and give us a place to start modelling. Since you can’t vote against a pick in the nomination stage, we don’t need to know the difference between “No Awarders” and other more traditional Hugo voters. I’m going to combine all the non-Puppy voters into one big group, called the “Typical Voters.” I’ll initially assume that they’ll vote in similar patterns to past Hugo seasons before the Puppies. I’ll have more to say about that assumption later on.
Here are the numbers I’ll be using; you may wish to adjust them up or down depending on your thoughts from last year.
Rabid Puppies: 550
Sad Puppies: 450
Typical Voters: 5000
Due to a quirk in Hugo voting rules, everyone who voted in 2015 is eligible to nominate in 2016. Note those are the max raw numbers, not how many votes each group is likely to generate. I don’t think everyone will vote in 2016, but due to the high passions surrounding the 2015 season, I expect we’ll get a high turnout. I’m going to model the three groups at 40%, 60%, and 80% turnout. By using data from past voting patterns, specifically what percentage the various choices for each group received in past Hugo nominations, we might be able to ballpark which books will make the ballot. We can pull this data for the typical voters from the past Hugo packets. Remember, I even estimated what the “decay” percentages were for both the Rabid and Sad Puppies.
There are a lot of shifting variables and unknowns here, so I don’t know if we can land at something reasonable. So, to estimate (as an example) the #3 pick from the Typical voters, I’ll need to do the following:
Typical Pick #3 estimated total: 5000 * estimated turnout * average % of the #3 pick
So, if you estimate 60% turnout and used past Hugo data to see that #3 pick averages a 13% showing, you’d get 390 votes. Now, there’s plenty to critique here: maybe turnout will be higher or lower. Maybe this year’s patterns won’t follow previous years. Maybe I don’t have the right books in the right slots. Still, I always find any estimate more interesting than no estimate. If you don’t, Chaos Horizon probably isn’t the website for you!
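The worked example above, as code (the 60% turnout and 13% historical share are the example numbers from the text, not fitted values):

```python
# Estimate for a Typical-voter pick:
# pool size * estimated turnout * the pick's average historical share.
TYPICAL_POOL = 5000

def pick_estimate(pool, turnout, historical_share):
    return pool * turnout * historical_share

votes = pick_estimate(TYPICAL_POOL, turnout=0.60, historical_share=0.13)
print(f"Estimated Typical Pick #3 votes: {votes:.0f}")
```

Swapping in different turnout or share assumptions is the whole game here; the structure of the estimate stays the same.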
The first thing I need is a list of books to model. Taking the top 5 novels from the Sad Puppies, the Rabid Puppies, and my SFF Critics list gives a total of 12 likely Hugo contenders. Note there is overlap, including overlap lower down on the lists; that will be accounted for in my estimates:
Ancillary Mercy, Ann Leckie (SFF Critics)
Uprooted, Naomi Novik (SFF Critics, Sad Puppies)
The Fifth Season, N.K. Jemisin (SFF Critics)
Aurora, Kim Stanley Robinson (SFF Critics)
Sorcerer to the Crown, Zen Cho (SFF Critics)
Seveneves, Neal Stephenson (Rabid Puppies, Sad Puppies)
Golden Son, Pierce Brown (Rabid Puppies)
Somewhither, John C. Wright (Rabid Puppies, Sad Puppies)
The Cinder Spires: The Aeronaut’s Windlass, Jim Butcher (Rabid Puppies, Sad Puppies)
Agent of the Imperium, Marc Miller (Rabid Puppies)
Honor At Stake, Declan Finn (Sad Puppies)
A Long Time Until Now, Michael Z Williamson (Sad Puppies)
Let’s call this the list of “possible contenders.” Sure, a different novel may sneak up on us—but I find it unlikely in a year as hotly contested as this one. If you don’t show up near the top of a best-of list, a recommendation list, or a slate, how are you going to beat the books that do? What’s your path to accumulating votes?
Let’s stop here for today—no need to overwhelm each post with data. I’ve got a couple of questions for my readers: are there any other “major” novels I should try to estimate? I could see someone thinking that Aurora isn’t a top pick and that its spot should go to The Water Knife or The Just City (i.e. novels by former Hugo winners), but changing the name of the novel won’t change the estimate for that slot. It’s still “Typical Slot #4.” Same thing with Sorcerer to the Crown: maybe it should be Sorcerer of the Wildeeps in that spot. The important thing to have on the list is any novel that might overlap between the groups. But I’d be interested to hear if you think there’s another big contender, and why. I can add a few more novels to the list pretty easily. Otherwise, we’ll dig into the Rabid Puppy vote tomorrow.
The 2016 Hugo Best Novel is going to be extremely unpredictable. We know that it’s going to attract enormous attention—just think of how many posts were published about the 2015 Hugos—and that it’s going to be controversial.
The difficulty in predicting the 2016 Hugo lies in how little information we have: how big will the Rabid Puppies vote be? How will the Sad Puppies 4 operate? How much will the rest of the Hugo vote increase? Will other Hugo voters change their voting habits to stop a Puppy sweep? Will specific authors turn down endorsements and/or nominations? Earlier, I anticipated a year-to-year nominating vote increase of at least 1.8x, and that could wind up much higher depending on how broadly discussed the nominations are. The kind of predictive methods I use at Chaos Horizon (data-mining) react to such massive changes very poorly. As such, my goal is to begin developing a broad picture and then refine that as more data becomes available.
So, while I listed my prediction in order from #1-#15, I think any of the works from #1-#10 have a strong chance of grabbing an eventual nomination. Remember, I predict what I think is likely to happen, not what should happen, and that my predictions are based on past Hugo patterns and a variety of data lists I collate and track. Opinions are mine alone, and this should be used as a starting place for discussion, nothing more. Have fun with the chaos!
Anyone can vote in the Hugo awards, provided you pay the supporting membership fee ($50 this year, I believe). EDIT 1/1/16: Remember, anyone who was a member of last year’s WorldCon (Sasquan) can also vote in this year’s nomination stage. So that means everyone who was part of last year’s kerfuffle has another vote. You do have to join this year’s WorldCon to vote in the final stage, however.
Last year, the nominations came out on April 4, 2015. The Hugos nominate 5 works per category unless there are ties.
1. Uprooted, Naomi Novik: Novik and Stephenson are pretty interchangeable at the top. These books are just so much more popular than every other contender this year that it’s hard to picture them not grabbing nominations. Novik has a prior Hugo nomination, a front-running Nebula status, and strong placement on whatever popular votes we see out there, including the Sad Puppies themselves. Combine all of that overlapping support, and I think Novik’s fairy-tale inflected Fantasy novel has a strong chance of getting nominated (and eventually winning) this year’s Hugo.
2. Seveneves, Neal Stephenson: Stephenson is another author who does well across all sectors of the Hugo voters. Prior nominations for massive books like Anathem and Cryptonomicon show that Hugo voters aren’t turned away by Stephenson’s length or complexity. The Hugo still leans towards Science Fiction, and this was one of the biggest SF books of the year. It shows up well on a variety of lists, including Sad Puppies 4, and that broad support should drive it to a nomination. There is some dislike of this book out there (it splits into two very different parts), but dislike doesn’t really impact the nomination stage, only the final vote.
3. Rabid/Sad Puppy Overlap Nominee: Before Correia and Kloos declined their nominations in 2015, the Sad/Rabid overlaps (i.e. books that appeared on both lists) took 4 of the top 5 Hugo slots. While we won’t know what these overlaps will be until the Rabid Puppies announce their slate, we can predict that they’ll grab several top spots. Based on my early Sad Puppy census, I’m currently thinking this overlap could be something like Jim Butcher’s Aeronaut’s Windlass, John C. Wright’s Somewhither, or Michael Z. Williamson’s A Long Time Until Now. Of those three, Butcher would place highest because of his massive popularity. More popularity = more potential voters.
4. Ancillary Mercy, Ann Leckie: Leckie broke up the Puppy sweep last year with the middle volume of her well-liked trilogy; this final volume was received as a fitting end to a series that has already won a Hugo and a Nebula. This series is one of the most talked-about (and most-nominated) SF publications of recent years.
5. Rabid/Sad Puppy Overlap Nominee: The less popular/mainstream book that the Rabid/Sad Puppies overlap on could land here. A John C. Wright or Michael Z. Williamson just has so many fewer readers (and thus fewer votes) than a Butcher. Based on last year’s numbers, you would still anticipate a Top #5 placement for this overlap, although we won’t know the exact numbers/impact of the Sad Puppies and Rabid Puppies until the nominations come out.
6. The Fifth Season, N.K. Jemisin: The 2015 Hugo was very close. Spots #3-#11 were separated by only 100 votes. If we have 1000+ new voters, any of these #3-#11 spots could shuffle. I have Jemisin high because of the strong critical reception of this book, her previous Hugo and Nebula nominations, her likely Nebula nomination this year, and her increased visibility in the field (she now has a regular NY Times Book Review column). The Fifth Season also fits the mold of The Goblin Emperor, as a sort of twist/revisioning of secondary world fantasy. The fantasy side of the Hugos has been driving quite a few nominations/wins lately: think about The Graveyard Book, Jonathan Strange & Mr Norrell, or Among Others.
7. Rabid Puppy Nominee: This is a wildcard. Last year, when the Sad/Rabid puppies separated, they fell below 3 non-Puppy picks (Leckie, Addison, and Liu). Would the same happen this year? I’ve got no idea or even suggestion of what this book might be; we’ll have to wait and see. This would be the truest measure/test of the Rabid Puppies voting strength. Even a slight rise of the Rabid Puppy numbers could push this up 2, 3, or more slots. Depending on how often the Rabid/Sad Puppies overlap, you may have to add more Rabid Puppy nominee slots in at about this point.
8. Sad Puppy Nominee: The longer SP4 list will dilute their vote somewhat, so I expect their solo picks to place below the Rabid Puppies. In the similar spot last year, they clocked in with 199 votes for Trial by Fire, although Gannon’s vote total was doubtless helped by his Nebula nomination.
9. Aurora, Kim Stanley Robinson: The next three are basically interchangeable in this prediction, all belonging to the category of SF books by past Hugo winners. Aurora is a tale of a multi-generational ship and planetary colonization, and is almost the opposite of Seveneves in terms of its approach, characterization, and philosophy. SF voters looking for an alternative to Stephenson—or even just a book to round out their ballots—might go in this direction.
10. The Dark Forest, Cixin Liu: Normally last year’s Hugo winner would be higher, but I’m not seeing the buzz for Liu you would expect. Cixin Liu himself commented on Chinese voters potentially driving this book to a nomination by saying, “That’s the best way to destroy The Three-Body Trilogy. And not just this sci-fi work, but also the reputation of Chinese sci-fi fans. The entire number of voters for the Hugo Awards is only around 5,000. That means it is easily influenced by malicious voting. Organizing 2,000 people to each spend $14 is not hard, but I am strongly against such misbehavior. If that really does happen, I will follow the example of Marko Kloos, who withdrew from the shortlist after discovering the ‘Rabid Puppies’ had asked voters to support him.”
11. The Water Knife, Paolo Bacigalupi: Bacigalupi is under the radar going into the 2016 awards season, but The Water Knife was a well-reviewed SF novel, his first since the Hugo and Nebula winning The Windup Girl, with many of the same eco-SF themes Bacigalupi is acclaimed for. Can it cut through the noise of this year’s Hugo controversies? If this shows up on a lot of the other awards, it could move up the Hugo list.
12. Nebula Nominee (Grace of Kings by Ken Liu, The Traitor Baru Cormorant by Seth Dickinson, Karen Memory by Elizabeth Bear, etc.): The Nebulas have exerted considerable influence on the Hugos over the past few years. The increased visibility of the Nebula nominees can springboard a book to a Hugo nomination; this seemed to help both The Goblin Emperor and The Three-Body Problem last year. I’ll keep an eye on who gets Nebula noms, and then boost them in my Hugo predictions.
13. The Shepherd’s Crown, Terry Pratchett: Pratchett is going to be a sentimental favorite going into 2016. I think some people will try to nominate Discworld as a whole, which will split the Pratchett vote. Even if Pratchett is nominated, I suspect his estate would turn it down, following the precedent established by Pratchett turning down his Hugo nomination for Going Postal.
14. Nemesis Games, James S.A. Corey: I may be too high with this, but I think The Expanse TV series is going to revitalize Corey’s Hugo chances over time. The big impact may be felt next year, particularly if we have Hugo rule changes.
15. The Just City, Jo Walton: Walton’s a stealth candidate—she missed last year’s ballot by only 90 votes, and The Just City is a little more accessible and well-liked than My Real Children. Walton still has a lot of good will (and readers!) as a result of the Hugo and Nebula winning Among Others. I don’t expect a nom, but it should get some votes.
Scalzi’s not on the list because of this post saying he’s sitting out the 2015 awards. Brandon Sanderson just missed because Shadows of Self is #2 in a series; he’s an author who could greatly benefit from Hugo rule changes (huge fanbase). A Darker Shade of Magic by V.E. Schwab has a huge Goodreads following but isn’t showing up as popular in other places. Mira Grant had a run of numerous Best Novel noms earlier this decade, so she might be hanging around the Top #15. Her current series isn’t as popular as her earlier zombie series, though. Charles Stross tends to get nominated for his SF, not The Laundry Files, so that’s why he isn’t in the Top #15 for The Annihilation Score. Anyone else who seems an obvious contender that I missed?
Also remember that January is very early. The Three-Body Problem and Ancillary Justice, the last two Hugo winners, just started picking up steam about now. As we see more year-end lists and the beginning of the 2016 award nominations, the picture should snap into sharper focus. I’ll update my prediction on the first of the month in February, March, and April.
Well, it’s the new year, so time to roll up our sleeves and get started. Let’s begin with my first 2016 Nebula prediction. Remember, I try to predict what will happen, based on past evidence and patterns in the Nebulas and various lists and data from this year, rather than what should happen. These are my opinions, so they have no particular authority, and I always think Chaos Horizon is best used in conjunction with other opinions and websites on the internet.
Predicting the Nebulas this year was made much easier since the Science Fiction and Fantasy Writers of America (SFWA), the group that administers the Nebulas, made their “Recommended Reading List” public. Remember, the Nebulas are a vote of SFWA members; by making their recommendations public, we get a good idea of the direction these awards are leaning. Last year, the final Recommended List correctly predicted 4/6 of the final nominees (the other two nominees were in spots #7 and #8). Since Chaos Horizon always uses the past year as a guide for the next year’s prediction, I predict something similar will happen this year.
If you look at the SFWA list as of right now (January 1), we can see that the top of the list is too heavily slanted towards Fantasy when compared to Nebula history. 5 of the top 6 are Fantasy novels (Leckie being the only SF), as are 8 of the top 10. I suspect one or two of the SF novels will creep up the list over time. Right now, I’m looking at a gang of four: a novel by a past Nebula winner (Aurora by Kim Stanley Robinson (tied at #11) or The Water Knife by Paolo Bacigalupi (also tied at #11)), Thunderbird by perennial Nebula favorite Jack McDevitt, or Raising Caine by Charles Gannon, #3 in a series that garnered Nebula noms in 2013 and 2014. One or two of these books making the final ballot would create a more balanced Fantasy/SF ratio.
The Nebulas nominate 6 novels in the category.
Here’s my initial prediction, as of January 1, 2016:
1. Uprooted, Naomi Novik: Novik has almost every metric going for her: good sales, good placement on year-end lists, strong fan response. She has no Nebula history (0 nominations), although she did grab a Hugo Best Novel nomination back in 2007 for Temeraire. I’ve got this #1 because I see it as the “buzziest” book of the year; it’s also #1 on the SFWA recommendations. Why second-guess the data?
2. Ancillary Mercy, Ann Leckie: Leckie is coming off of two straight Nebula nominations for this series, including her win for Ancillary Justice in 2014. I don’t expect anything to change this year; the final volume was well-received as a fitting conclusion to this trilogy. As of January 1, 2016, she’s #6 on the SFWA recommended list.
3. Grace of Kings, Ken Liu: Liu has been a recent Nebula darling: 7 short fiction nominations since 2012. This is his first novel, and since the Nebula audience is already very familiar with his short fiction from prior nominations, that brings a lot of eyeballs to the text. In Chaos Horizon predictions, eyeballs = possible voters. It’s also #2 on the SFWA Nebula recommendations list, and he scored a Best Novel nomination last year for translating Cixin Liu’s Three-Body Problem.
4. The Fifth Season, N.K. Jemisin: Jemisin has three prior best novel Nebula noms in 2011, 2012, and 2013, which is every year she’s been eligible for the novel category (she’s published 5 novels, but some years she published more than one novel). She’s at 8th on the recommended list, but with that strong Nebula history, I think she’s a good bet for a nomination this year.
5. Aurora, Kim Stanley Robinson: Robinson has been a perennial Nebula favorite (12 total nominations, 3 wins, including Best Novel wins for 2312 in 2013 and Red Mars back in 1994). Even though he’s tied at #14 on the SFWA list, this is the kind of Hard SF novel that appeals to the SF wing of the Nebulas; that group has always had enough votes to put 1-2 books on every Nebula ballot.
6. Karen Memory, Elizabeth Bear: I’m less certain about the Bear. Her high placement on the SFWA list (#3), as well as the generally positive reception of the book, would seem to stand her in good stead. In the negative column, she has 0 total Nebula nominations ever, and Karen Memory doesn’t perform particularly well in popularity metrics. The 19th-century steampunk setting might be a challenge for some voters as well. I think any of the texts from 4-10 on my list has a real chance of making it this year.
7. Thunderbird, Jack McDevitt: The first rule of Nebula prognostication: you never count Jack McDevitt out. 12 Best Novel Nebula nominations, including 9 out of the past 12 years! This book is from one of his less popular series, and it came out very late in the year (December 1, 2015); otherwise, I’d have him higher.
8. The Water Knife, Paolo Bacigalupi: If Aurora doesn’t make it, this book is the other logical choice for a SF novel from a recent winner. Bacigalupi roared to huge Nebula and Hugo success with The Windup Girl back in 2010, and this is his first proper “adult” SF novel since then. 5 years is an eternity in these awards—has his popularity cooled off? Or will he return to the ranks of the nominees?
9. The Traitor Baru Cormorant, Seth Dickinson: This placed #5 on the SFWA recommended list, so why do I have it so low? Genre, genre, genre: I can’t predict a Nebula with 5 or 6 fantasy novels in it, and I think Dickinson has to be slotted behind the other more obvious fantasy contenders. Keep an eye to see if this picks up steam in January.
10. Raising Caine, Charles Gannon: I place a lot of stock in Gannon’s two previous nominations in 2014 and 2015 for books from this series. He’s currently only at 4 votes in the SFWA list (versus 23 last year). Is this an indication of poor reception of Raising Caine or am I looking at the list too early? If that number increases, expect him to rise in my prediction.
11. Updraft, Fran Wilde: Though it’s currently #4 on the SFWA list, I think this is more likely to get a nomination for the Andre Norton (the Young Adult category, where it sits at #1 in the recommendations). While nothing prevents a novel from getting both a Nebula and a Norton nomination, I don’t see nominators voting for the same book in 2 different categories.
12. The Dark Forest, Cixin Liu: You’d think the sequel to last year’s Hugo winner and Nebula nominee would be higher in the recommended list, but The Dark Forest currently doesn’t make the SFWA recommended list at all. I don’t know how to explain that (maybe Ken Liu, who translated The Three-Body Problem but not this volume, was the name that brought the Nebula voters?), but you’ve got to go by the stats. Last year’s Hugo win and Nebula nom should at least keep it in the mix.
13. Barsk: The Elephant’s Graveyard, Lawrence Schoen: The surprise of this list, this places an impressive 7th on the SFWA list. It just came out December 29th, 2015; I think that’s too late for a Nebula book to pick up steam with the rest of the SFWA voters who don’t have access to early copies.
14. Seveneves, Neal Stephenson: You’d think Stephenson would be neck-and-neck with the Robinson and the Bacigalupi, but the Nebulas have never liked Stephenson much. He only has 1 nomination, back in 1997 for The Diamond Age, and zero wins. If the Nebulas ignored Snow Crash, Cryptonomicon, and Anathem, why would you predict this? It’s tied for #16 on the current recommendations.
15. Sorcerer to the Crown, Zen Cho: If one of the fantasy novels higher on the list falters, Cho’s book could stand poised to take its place. Somewhat similar in setting to the well-liked Hugo/Nebula winning Jonathan Strange & Mr Norrell, this seems to hit some marks that previous Nebula voters have liked.
So, there’s my initial Top 15 Nebula list! Remember, this is a starting place, not the finishing place, and these awards can be very dynamic between January and February, with lots of shifts as books pick up steam. So, what do you think? Did I miss any obvious contenders? Think someone should be higher or lower? Argue away in the comments, and happy predicting!
Here we go . . . the official Chaos Horizon Nebula prediction for 2015!
Disclaimer: Chaos Horizon uses data-mining techniques to try to predict the Hugo and Nebula awards. While the model is explained in depth (this is a good post to start with) on my site, the basics are that I look for past patterns in the awards and then use those to predict future behavior.
Chaos Horizon predictions are not based on my personal readings or opinions of the books. There are flaws with this model, as there are with any model. Data-mining will miss sudden changes in the field, and it does not do a good job of taking into account the passion of individual readers. So take Chaos Horizon lightly, as an interesting mathematical perspective on the awards, and supplement my analysis with the many other discussions available on the web.
Lastly, Chaos Horizon predicts who is most likely to win based on past awards, not who “should” win in a more general sense.
1. Ann Leckie, Ancillary Sword: 19.4%
2. Katherine Addison, The Goblin Emperor: 19.2%
3. Cixin Liu and Ken Liu (translator), The Three-Body Problem: 17.7%
4. Jeff VanderMeer, Annihilation: 16.8%
5. Jack McDevitt, Coming Home: 16.5%
6. Charles Gannon, Trial by Fire: 10.4%
The margin is incredibly small this year, indicating a very close race. Last year, Leckie had a 5% lead on Gaiman and an impressive 14% lead over third-place Hild in the model. This year, Leckie has a scant 0.2% lead on Addison, and the top 5 candidates are all within a few percentage points of each other. I think that’s an accurate assessment of this year’s Nebula: there is no breakaway winner. You’ve got a very close race that’s going to come down to just a few voters. A lot of this is going to swing on whether or not voters want to give Leckie a second award in two years, or whether they prefer fantasy to science fiction (Addison would win in that case), or how receptive they are to Chinese-language science-fiction, or if they see Annihilation as SF and complete enough to win, etc.
Let’s break down each of these by author to see the strengths and weaknesses of their candidacies.
Ancillary Sword: Leckie’s sequel to her Hugo and Nebula winning Ancillary Justice avoided the sophomore jinx. While perhaps less inventive and exciting than Ancillary Justice, many reviewers and commenters noted that it was a better overall novel, with stronger characterization and writing. Ancillary Sword showed up on almost every year-end list and has already received the British Science Fiction Award. This candidacy is complicated, though, by the rareness of winning back-to-back Nebulas. She would join Samuel R. Delany, Frederik Pohl, and Orson Scott Card as the only back-to-back winners. Given how early Leckie is in her career (this is only her second novel), are SFWA voters ready to make that leap? Leckie is also competing against 4 other SF novels: it’s possible she could split the vote with someone like Cixin Liu, leaving the road open for Addison to win.
Still, Leckie is the safe choice this year. Due to all the attention and praise heaped on Ancillary Justice, Ancillary Sword was widely read and reviewed. More readers = more voters, even in the small pool of SFWA authors. People who are only now getting to The Three-Body Problem may have read Ancillary Sword months ago. I don’t think you can overlook the impact of this year’s Hugo controversy on the Nebulas: SFWA authors are just as involved in all those discussions, and giving Leckie two awards in a row may seem like a safe and stable choice amidst all the internet furor. If Ancillary Justice was a consensus choice last year, Ancillary Sword might be the compromise choice this year.
The Goblin Emperor: My model likes Addison’s novel because it’s the only fantasy novel in the bunch. If there is even a small pool of SFWA voters (5% or so) who only vote for fantasy, Addison has a real shot here. The Goblin Emperor also has had a great year: solid placement on year-end lists, a Hugo nomination, and very enthusiastic fan-reception. Of the six Nebula nominees this year, it’s the most different in terms of its approach to genre (along with Annihilation, I guess), giving a very non-standard take on the fantasy novel. The Nebula has liked those kinds of experiments recently. The more you think about it, the more you can talk yourself into an Addison win.
The Three-Body Problem: The wild-card of the bunch, and the one my model has the hardest time dealing with. This came out very late in the year—November—and that prevented it from making as many year-end lists as other books. Secondly, how are SFWA voters going to treat a Chinese-language novel? Do they stress the A (America) in SFWA? Or do they embrace SF as a world genre? The Nebula Best Novel has never gone to a foreign-language novel before. Will it start now?
Lastly, do SFWA voters treat the novel as co-authored by Ken Liu (he translated the book), who is well known and well liked by the SFWA audience? Ken Liu is actually up for a Nebula this year in the Novella category for “The Regular.” I ended up (for the purposes of the model) treating Cixin Liu’s novel as co-authored by Ken Liu. Since Ken Liu was out promoting the novel heavily, Cixin Liu didn’t get the reception of a new author. I think many readers came into The Three-Body Problem because of Ken Liu’s reputation. If I hadn’t done that, this novel would drop a percentage point in the prediction, from 3rd to 5th place.
The Three-Body Problem hasn’t always received the best reviews. Check out the fairly tepid take on the novel published this week by Strange Horizons. Liu is writing in the tradition of Arthur C. Clarke and other early SF writers, where character is not the emphasis of the book. If you’re expecting to be deeply engaged by the characters of The Three-Body Problem, you won’t like the novel. Given that the Nebula has been leaning literary over the past few years, does that doom its chances? Or will the inventive world-building and crazy science of the book push it to victory? This is the novel I feel most uncertain about.
Annihilation: I had VanderMeer’s incredibly well-received start to his Southern Reach trilogy as the frontrunner for most of the year. However, VanderMeer has been hurt by his lack of other SF awards this season: no Hugo, and he’s only made the Campbell out of all the other awards. I think this reflects some of the difficulty of Annihilation. It’s a novel that draws on weird fiction, environmental fiction, and science fiction, and readers may be having difficulty placing it in terms of genre. Add in that it is very short (I believe it would be the shortest Nebula winner if it wins) and clearly the first part of something bigger, is it stand-alone enough to win? The formula doesn’t think so, but formulas can be wrong. I wouldn’t be stunned by a VanderMeer win, but it seems a little unlikely at this point.
Coming Home: Ah, McDevitt. The ghost of the Nebula Best Novel category: he’s back for his 12th nomination. He’s only won once, but could it happen again? There’s a core of SFWA voters who must love Jack McDevitt. If the vote ends up getting split between everyone else, could they drive McDevitt to another victory? It’s happened once already, in 2007 with Seeker. I don’t see it happening, but stranger things have gone down in the Nebula.
Trial by Fire: The model hates Charles Gannon. He actually did well last year, placing 3rd in the Nebula voting according to my sources. Still, this is the sequel to that book, and sequels tend to move down in the voting. Gannon’s lack of critical acclaim and lack of Hugo success kill him in the model.
Remember, the model is a work in progress. This is only my second year trying to do this. The more data I collect, and the more we see how individual Nebula and Hugos go, the better the model will get. As such, just treat the model as a “for fun” thing. Don’t bet your house on it!
So, what do you think? Another win for Leckie? A fantasy win for Addison? A late tail-wind win for Liu?
One last little housekeeping post before I post my prediction later today. Here are the 10 indicators I settled on using:
Indicator #1: Author has previously been nominated for a Nebula (78.57%)
Indicator #2: Author has previously been nominated for a Hugo (71.43%)
Indicator #3: Author has previously won a Nebula for Best Novel (42.86%)
Indicator #4: Has received at least 10 combined Hugo + Nebula noms (50.00%)
Indicator #5: Novel is science fiction (71.43%)
Indicator #6: Places on the Locus Recommended Reading List (92.86%)
Indicator #7: Places in the Goodreads Best of the Year Vote (100.00%)
Indicator #8: Places in the Top 10 on the Chaos Horizon SFF Critics Meta-List (100.00%)
Indicator #9: Receives a same-year Hugo nomination (64.29%)
Indicator #10: Nominated for at least one other major SFF award (71.43%)
I reworded Indicator #4 to make the math a little clearer. Otherwise, these are the same as in my Indicator posts, which you can get to by clicking on each link.
If you want to see how the model is built, check out the “Building the Model” posts.
I’ve tossed around including a “Is not a sequel” indicator, but that would take some tinkering, and I don’t like to tinker at this point in the process.
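As an aside, the indicator percentages above can be read as historical hit rates. Here is a minimal sketch of how such a rate might be computed, assuming (consistent with the since-2001 data set of 14 winners, where 78.57% would be 11 of 14) that each percentage is simply the fraction of past winners satisfying the indicator. The winner records below are toy placeholders, not real award data:

```python
# Hypothetical sketch: an indicator's percentage as the fraction of past
# winners that satisfied it. The records below are illustrative only.

def hit_rate(past_winners, indicator):
    """Fraction of past winners for which the indicator holds."""
    hits = sum(1 for w in past_winners if indicator(w))
    return hits / len(past_winners)

# Toy data: did each past winner have a prior Nebula nomination?
past_winners = [
    {"title": "Winner A", "prior_nebula_nom": True},
    {"title": "Winner B", "prior_nebula_nom": True},
    {"title": "Winner C", "prior_nebula_nom": False},
]

rate = hit_rate(past_winners, lambda w: w["prior_nebula_nom"])
print(f"{rate:.2%}")  # 66.67% for this toy data
```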
The Indicators are then weighted according to how well they’ve worked in the past. Here are the weights I’ve used this year:
Indicator #1: 8.07%
Indicator #2: 8.65%
Indicator #3: 13.78%
Indicator #4: 11.93%
Indicator #5: 10.66%
Indicator #6: 7.98%
Indicator #7: 7.80%
Indicator #8: 4.24%
Indicator #9: 16.54%
Indicator #10: 10.34%
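For concreteness, here is one way the weights above could be combined into a prediction. The aggregation step isn’t spelled out in this post, so this is an assumption on my part: each nominee scores the sum of the weights of the indicators it satisfies, and the raw scores are then normalized into percentages across the field. The nominee profiles below are hypothetical:

```python
# Assumed aggregation: score = sum of weights for satisfied indicators,
# then normalize scores across nominees into percentages.

WEIGHTS = {1: 0.0807, 2: 0.0865, 3: 0.1378, 4: 0.1193, 5: 0.1066,
           6: 0.0798, 7: 0.0780, 8: 0.0424, 9: 0.1654, 10: 0.1034}

def score(indicators_met):
    """Sum of weights for the indicators a nominee satisfies."""
    return sum(WEIGHTS[i] for i in indicators_met)

def normalize(scores):
    """Convert raw scores into percentages that sum to 100."""
    total = sum(scores.values())
    return {name: 100 * s / total for name, s in scores.items()}

# Hypothetical indicator profiles for two nominees.
raw = {
    "Nominee A": score({1, 2, 5, 6, 7, 9}),
    "Nominee B": score({1, 6, 7}),
}
print(normalize(raw))
```

Under this scheme, a book that hit every indicator would score roughly 1.0 before normalization, and the final percentages depend on how strong the rest of the field is, which matches the closely bunched numbers the model produces.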
Lots of math, I know, but I’m going to post the prediction shortly!
Here are the last two indicators currently in my Nebula formula. These ones try to chart how well a book is doing in the current awards season, based on the assumption that if you are able to get nominated for one award, you’re more likely to win another. Note that it’s nominations that seem to correlate, not necessarily wins. Many of the other SFF awards are juried, so winning them isn’t as good a measure of broad support as the popular votes the Hugo and Nebula use. Nominations raise your profile and get your book buzzed about, which helps pull in those votes. If something gets nominated 4-5 times, it becomes the “must-read” of the year, and that leads to wins.
Indicator #9: Receives a same-year Hugo nomination (64.29%)
Indicator #10: Nominated for at least one other major SFF award (71.43%)
I track things like the Philip K. Dick, the British Science Fiction Award, the Tiptree, the Arthur C. Clarke, the Campbell, and the Prometheus. Interestingly, the major fantasy awards—the World Fantasy Award, the British Fantasy Award—don’t come out until later in the year. This places someone like Addison at a disadvantage in these measures. We need an early in the year fantasy award!
In recent years, the Nebula has been feeding into the Hugo and vice-versa. Since the same awards are talked about so much in the same places, getting a Nebula nom raises your Hugo profile, which in turn feeds back and shapes the conversation about the Nebulas. If everyone on the internet is discussing Addison, Leckie, and Liu, someone like VanderMeer or Gannon can fall through the cracks. More exposure = more chances of winning.
So, how do things look this year?
The star by Leckie’s name means she won the BSFA this year. 2015 is very different than 2014: at this time last year, Ancillary Justice was clearly dominating, having already picked up nominations for the Clarke, Campbell, BSFA, Tiptree, and Dick. She’d go on to win the Clarke, BSFA, Hugo, and Nebula.
This year there isn’t a consensus book powering to all the awards. I thought VanderMeer would garner more attention, but he missed a Philip K. Dick Award nomination, and I figured the Clarke would have been sympathetic to him as well. Those are real storm clouds for Annihilation’s Nebula chances. Maybe the book was too short or too incomplete for readers. Ancillary Sword isn’t repeating Leckie’s 2014 dominance, but it has already won the BSFA. Liu has some momentum beginning to build for him, while Gannon and McDevitt are languishing.
So those are the 10 factors I’m currently weighting in my Nebula prediction. I’ve been tossing around the idea of adding a few more (publication date, sequel, book length), but I might wait until next year to factor them in. I’d like to factor in something about popularity but I haven’t found any means of doing that yet.
What’s left? Well, we have to weight each of these Indicators, and once I do that, I can run the numbers to see who leads the model!
These indicators try to wrestle with the idea of critical and reader reception by charting how the Nebula nominees do on year-end lists. While these indicators are evolving as I put together my “Best of Lists”, these are some of our best measures of critical and reader response, which directly correlate to who wins the awards.
Right now, I’m using a variety of lists: the Locus Recommended Reading List (which has included the winner 13 out of the past 14 years, with The Quantum Rose being the lone exception), the Goodreads Best of the Year Vote (more populist, but it has at least listed the winner in the Top 20 in each of the 4 years it’s been fully running, so that’s at least promising), and then a very lightly weighted version of my SFF Critics Meta-List. With a few more years of data, I’ll split this into a “Hugo” list and a “Nebula” list, and we should have some neatly correlated data. Until then, one nice thing about my model is that it allows me to decrease the weights of Indicators I’m testing out. The Meta-List will probably only account for 2-3% of the total formula, with the Goodreads list at around 5% and the Locus at around 9%. I can’t calculate the weights until I go through all the indicators.
Indicator #6: Places on the Locus Recommended Reading List (92.86%)
Indicator #7: Places in the Goodreads Best of the Year Vote (100.00%)
Indicator #8: Places in the Top 10 on the Chaos Horizon SFF Critics Meta-List (100.00%)
There are separate Fantasy and SF Goodreads lists, hence the SF and F indicators. These are fairly bulky lists (the Locus runs to 40+ titles, the Goodreads about the same), so it isn’t too hard to place on one of them. If you don’t, that’s a real indicator that your book isn’t popular enough (or popular enough in the right places) to win a mainstream award. So these indicators punish books that don’t make the lists more than they help those that do, if that makes any sense.
Results are as expected: Gannon and McDevitt suffer in these measures a great deal. Their books did not garner the same kind of broad critical/popular acclaim that other authors did. Cixin Liu missing the Goodreads vote might be surprising, but The Three-Body Problem came out very late in the year (November), and didn’t have time to pick up steam for a December vote. This is something to keep your eye on: did Liu come out too late in the year to pick up momentum for the Nebulas? If The Three-Body Problem ends up losing, I might add a “When did this come out?” Indicator for the 2016 Nebula model. Alternatively, these lists may have mismeasured Liu’s book because of its late arrival, in which case they would need to be weighted more lightly.
The good thing about the formula is that the more data we have, the more we can correct things. Either way Chaos wins!
One of my simplest indicators:
Indicator #5: Novel is science fiction (71.43%)
The Nebula—just look at that name—still has a heavy bias towards SF books, even if this has been loosening in recent years. See my Genre Report for the full stats. In its 33-year history, only 7 fantasy novels have taken home the award. Chaos Horizon only uses data since 2001 in my predictions, but we’re still only looking at 4 of the last 14 winners being fantasy.
How do this year’s nominees stack up?
Table 3: Genre of 2015 Nebula Award Nominees
Obviously, it’s a heavy SF year, with 5 of the 6 Nebula nominees being SF novels. There were plenty of Nebula-worthy fantasy books to choose from, including something like City of Stairs, but the SFWA voters went traditional this year. I think Annihilation could be considered a “borderline” or “cross-genre” novel, although I see most people classifying it as Science Fiction.
Ironically, all of this actually helps Addison’s chances with the formula. Think about it logically: fantasy fans only have 1 book to vote for, while SF fans are split amongst 5 choices. The formula won’t give Addison a huge boost (the probability chart works out to 28.57% for Addison, 14.29% for everyone else), but it’s the one part of the formula where she does better than everyone else.
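Those two numbers fall out of simple arithmetic, assuming the genre indicator hands each genre its historical share of winners since 2001 (4 of 14 fantasy, i.e. 28.57%) and splits that share evenly among the nominees in that genre:

```python
# Genre shares from the since-2001 data: 4 of 14 winners were fantasy.
fantasy_share = 4 / 14   # 28.57% of winners were fantasy
sf_share = 10 / 14       # 71.43% were science fiction

nominees = {"The Goblin Emperor": "fantasy",
            "Ancillary Sword": "sf",
            "The Three-Body Problem": "sf",
            "Annihilation": "sf",
            "Coming Home": "sf",
            "Trial by Fire": "sf"}

n_fantasy = sum(1 for g in nominees.values() if g == "fantasy")
n_sf = len(nominees) - n_fantasy

# Each genre's share is divided evenly among its nominees.
for title, genre in nominees.items():
    share = fantasy_share / n_fantasy if genre == "fantasy" else sf_share / n_sf
    print(f"{title}: {share:.2%}")
```

With 1 fantasy nominee and 5 SF nominees, Addison keeps the whole 28.57% fantasy share while each SF book gets 71.43% / 5 = 14.29%, exactly the figures above.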
Next time, we’ll get into the indicators for critical reception.