2016 Hugos: Some Initial Stat Analysis

So they put the stats up already.

It’s actually a pretty easy analysis this year to see what the Rabid Puppy numbers were in the Nomination and Final stage. We can use Vox Day himself to do that: on the Rabid Puppies slate, he included himself in the “Best Editor Long Form” category and then later, in his final vote post, suggested himself as #1 in that category. Given the controversy surrounding him, I think we can safely assume that almost all the votes for him were from the Rabid Puppies.

So, here’s where he landed:

Nomination stage:

801 Toni Weisskopf 45.41%
465 Anne Sowards 26.36% *
461 Jim Minz 26.13%
437 Vox Day 24.77%
395 Mike Braff 22.39% **
302 Sheila Gilbert 17.12%
287 Liz Gorinsky 16.27%

So that means 437 votes for the Rabid Puppies in the nomination stage. This is in line with other obvious Puppy picks: 533 for Somewhither by John C. Wright (novels always pick up more votes), 433 votes for Stephen King’s “Obits” (King is a major author, but no one thinks of him for a Hugo), 387 for “Space Raptor Butt Invasion,” 482-384 in the Puppy-swept Best Related Work category, 398 for the Castalia House blog, and so forth. That’s a stable enough range for me to say that the Rabid Puppy strength in the nomination stage was 384-533, with around 440 right in the middle.
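
As a sanity check, the range estimate is just the spread and average of the slate-pick counts quoted above. A minimal sketch (using only the numbers from this post):

```python
# Nomination-stage vote counts for obvious Rabid Puppy picks,
# as quoted in the post above. The Best Related Work category
# contributes its high and low sweep counts.
slate_counts = {
    "Vox Day (Best Editor Long Form)": 437,
    "Somewhither (Best Novel)": 533,
    "Obits (Novelette)": 433,
    "Space Raptor Butt Invasion (Short Story)": 387,
    "Best Related Work (high)": 482,
    "Best Related Work (low)": 384,
    "Castalia House blog": 398,
}

counts = sorted(slate_counts.values())
low, high = counts[0], counts[-1]
midpoint = sum(counts) / len(counts)

print(f"Range: {low}-{high}, mean ~{midpoint:.0f}")
```

The mean lands right around 440, which is why that figure feels like a safe center for the estimate.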

So what happened? The Rabid Puppy vote collapsed between the Nomination and Final Voting stages. This most likely happened because you could nominate in 2016 for free (provided you paid in 2015), but to vote in the final stage in 2016, you had to pay again. Here are the stats from the first round of Best Editor Long Form:

Vox Day 165

It’s highly unusual to get 437 votes in the nomination stage and then collapse to 165 in the more popular, more heavily voted final stage. That 165 represents the most “Rabid” of the Rabid Puppies; some of the other Rabid Puppy picks did considerably better in the first round of voting: “Space Raptor Butt Invasion” got 392 votes in the first round of Short Story, for instance. It’s hard to know what exactly to chalk that up to at this point: people enjoying the joke, or a broader pool of Rabid Puppy-associated voters who didn’t want to vote Vox Day #1 in Long Form Editor.

My initial conclusion then would be around 440 Rabid Puppies in the nomination stage but less than 200 in the final voting stage. Thoughts? Seeing something I’m not?

Jemisin Wins Hugo Award for Best Novel

At WorldCon, they just announced that N.K. Jemisin won the Hugo for Best Novel for The Fifth Season this year! Congratulations to her in what was a hard-fought and controversial year.

Jemisin’s win is something of a surprise. We’ll have the stats soon, but Uprooted had a lot of the traditional markers going for it: it won the Nebula, it won the Locus Fantasy, it was more popular than The Fifth Season on Goodreads and Amazon. Jemisin’s win goes to show how unpredictable the Hugos have become: the influx of new voters, along with the high sentiments regarding the Puppy controversy, has basically shot past Hugo patterns all to hell. After ten or so years when the Nebula winner pretty regularly won the Hugo, this may mark a new era in Hugo history. It certainly makes for a more dynamic award, even if it plunges Chaos Horizon and my predictions into, well, chaos.

In the other categories, things played out largely as expected. In many cases, there was only one non-Puppy choice; that choice usually won. The two exceptions were in the Campbell, where Andy Weir beat Alyssa Wong, and in Novelette, where Hao Jingfang beat Brooke Bolander. Both Weir and Jingfang were very mainstream picks, however, and could have made the ballot without any Rabid support. In general, the Hugo voters didn’t “No Award” Hugo categories just because of overlap with the Rabid Puppies. Neither Gaiman nor File 770 was punished for appearing on the slate. When categories were given “No Award” (Best Related Work, Fancast), it was because none of the nominees overlapped with typical Hugo picks.

As the dust clears and the stats come out, I’ll continue to do some more analysis. Whatever we can say about 2016, we know that 2017 will be even more unpredictable. We’ve got ballot rule changes, more high passions and controversies, and a new set of books to ponder over!

Final 2016 Hugo Best Novel Prediction

Let me finalize my 2016 Hugo Best Novel prediction:

  1. Uprooted by Naomi Novik
  2. The Fifth Season by N.K. Jemisin
  3. Ancillary Mercy by Ann Leckie
  4. Seveneves by Neal Stephenson
  5. The Aeronaut’s Windlass by Jim Butcher

Remember, Chaos Horizon doesn’t predict what I want to happen, but rather what I think will happen based on my analysis and understanding of past Hugo trends.

First off, those past trends have been shot full of holes in recent years. The Puppy controversies have fundamentally transformed the voting pool of the Hugos, meaning that past trends might not apply given how much the pool has changed. New voters have come in with the Puppies; new voters have come in to contest the Puppies; some of those voters might have stayed, some might have dropped out. Some voters are voting No Award out of principle; some aren’t. How exactly you balance all of that is going to be largely speculative, maybe to the point that no predictions are meaningful. That’s why we’re called “Chaos Horizon” here!

However, I think the potential Kingmaker effect, when combined with past Hugo trends and the popularity of Uprooted, makes Novik a reasonable favorite. Novik has already won the Nebula and the Locus Fantasy (beating Jemisin twice); her book is a stand-alone, making it feel more complete than the Jemisin, Leckie, or Butcher; and Novik, along with Stephenson, is more popular than the other nominees. In the past, these have all been characteristics of the Hugo winner.

In the past few years, I’ve developed a mathematical formula to help me predict the Hugos. The formula won’t be accurate this year because of the Rabid Puppy voters, but here’s what it came up with:

Uprooted 27.1%
Ancillary Mercy 20.5%
Seveneves 20.0%
The Fifth Season 17.1%
The Cinder Spires 15.3%

The formula obviously doesn’t take pro- or anti-Puppy sentiment into account. Uprooted is a big favorite because of Novik’s Nebula win this year. The Nebula has been the best predictor of the Hugo in the last decade: in 5 of the last 10 years, the Nebula winner has gone on to win the Hugo. The stats are actually better than that: in 2006, 2007, 2009, and 2015, the Nebula winner was not even nominated for the Hugo. So the only time a Nebula winner has lost the Hugo in the final voting round was when Redshirts beat 2312 in 2013. In other words, in 5 of the 6 times the Nebula winner had a chance to win the Hugo, it did. Those are nice odds.
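
The Nebula-to-Hugo odds here are a simple conditional probability, easy to verify from the counts in this paragraph:

```python
# Years 2006-2015: did the Nebula Best Novel winner also win the Hugo?
# Per the post: in 2006, 2007, 2009, and 2015 the Nebula winner
# wasn't even on the Hugo ballot, and the only head-to-head loss
# was 2312 losing to Redshirts in 2013.
years = 10
nebula_winner_won_hugo = 5
nebula_winner_not_on_ballot = 4

raw_rate = nebula_winner_won_hugo / years                 # 5/10
chances = years - nebula_winner_not_on_ballot             # 6 head-to-head years
conditional_rate = nebula_winner_won_hugo / chances       # 5/6

print(f"Raw: {raw_rate:.0%}, conditional on being on the ballot: {conditional_rate:.0%}")
```

The jump from 50% to roughly 83% is the whole argument: conditioning on the Nebula winner actually making the Hugo ballot makes the predictor look much stronger.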

My formula is not designed to predict second place accurately. With that in mind, I think Jemisin is too low. Leckie won too recently with Ancillary Justice to have much chance of winning again, and Seveneves is pretty divisive. One reason my formula falls short is that it currently doesn’t penalize books for being sequels; Leckie should be lower for that reason. It’s something I’ll factor in next year. Stephenson will also place lower because some people will rank him below “No Award” for appearing on the Rabid Puppy list. All of this pushes Jemisin up two spots.

However, Jemisin is lower down in the formula because she doesn’t have a history of winning major awards, unlike Stephenson and Leckie. Check out Jemisin’s sfadb page. She’s been nominated for 12 major awards and hasn’t won any. Not good odds. That’s what my formula is picking up on. My formula has trouble gauging changes in sentiment. I think most readers believe The Fifth Season is better than Jemisin’s earlier works, but I have trouble quantifying that.

What my numbers give is a percentage to win based on the patterns of past Hugo votes, based on my analysis and combination of those formulas using a Linear Opinion Pool prediction model. Prediction is different than statistical analysis: different statisticians would build different models based on different assumptions. You should never treat a prediction (either on Chaos Horizon or in something like American elections) the same way you would treat a statistical analysis; they are guided by different logical methods. Someone who disagrees with one of my assumptions would come up with a different prediction. Fair enough. This is all just for fun! You can trace back through the model using some of these posts: Hugo Indicators, and my series of Nebula model posts beginning here. The Hugo model uses the same math but different data.

Let’s look at this with some other data. Here’s the head to head popularity comparison of our five Hugo finalists, based on the number of ratings at Goodreads and Amazon.

Goodreads Amazon
Uprooted 41,174 1,332
Seveneves 35,428 2,487
The Aeronaut’s Windlass 18,249 1,285
Ancillary Mercy 11,698 247
The Fifth Season 7,676 184

These aren’t perfect samples, as neither Goodreads nor Amazon is 100% reflective of the Hugo voting audience, nor has the Hugo Awards always correlated with popularity. Still, it gives us another interesting perspective.

Jemisin does not break out of the bubble in ways that Novik and Stephenson do. These aren’t small differences, either: Uprooted is 5x more popular on Goodreads and 7x on Amazon than The Fifth Season. I put stock in that—the more people read your book, the more there are to vote for it. While the Hugo voting audience is a subset of all readers, popularity matters.

Two other notes: it’s fascinating how different Amazon and Goodreads are. Novik outpaces Stephenson on Goodreads but gets crushed on Amazon. Different audiences, different reading habits. The question for Chaos Horizon is which one better correlates with the Hugo winner? Second, Butcher may be a very popular Urban Fantasy writer with Dresden, but he’s only a moderately popular Fantasy writer.

So, all told, Novik has big advantages in popularity and same-year awards (having already won the Nebula). None of Jemisin, Leckie, or Stephenson managed to do better than Novik in critical acclaim or award nominations, though Stephenson and Leckie do beat Novik in past awards history. When we factor in the possible Kingmaker effect from the Rabid Puppies, Novik is a clear favorite.

It would take a lot of change in the voting pool to overcome Novik’s seeming advantages. I wouldn’t count it out completely—these past two years have been so volatile that anything can happen.

Last question—where will No Award place? Last year, voters chose to place Jim Butcher and Kevin J. Anderson below No Award, likely as punishment for appearing on the Puppy slates. Will it happen again this year? I have a hard time seeing Stephenson getting No Awarded, Puppy appearance or not. He’s been a well-liked Hugo writer for a long time, and he may well have scored a nomination without Puppy help. I think Stephenson will beat No Award.

That leaves Butcher. He was No Awarded in 2015 by 2674 to 2000 votes, so a No Award margin of 674. That’s a pretty substantial number. If we go back to 2014, when Larry Correia’s Warbound was the first Puppy pick to make it to the Best Novel category, he beat No Award by a 1161 to 1052 margin. So that means the “No Awarders” are picking up steam. At Chaos Horizon, I go with the past year’s results to predict the future unless there’s some compelling data to suggest otherwise. So I’ll predict that Butcher will lose to No Award in 2016 just as he did in 2015.

So, what do you think? Are we in for a Best Novel surprise, or will Novik walk away with the crown?

The Kingmaker Effect and the 2016 Hugos

As we accelerate towards the announcement of the 2016 Hugo winners, it’s time to think once again about the Kingmaker effect and the Hugo awards. This is going to be essential both for 2016 and moving forward into 2017, even if the Hugo voting changes pass. The “E Pluribus Hugo” proposal only addresses the nominating stage, leaving plenty of room for kingmaker effects in the final voting stage.

The long and short of it is that a dedicated block of voters can change the outcome by voting for what would normally be the #2 or even #3 place finisher, pushing them into the winner’s circle by overcoming the “organic” winner. Let’s define margin of victory as how many votes there wound up being between the winner and the second place finisher. You can pull this information off of the Hugo voting packets. Basically, this number tells us how many votes you would need to change the outcome of the Hugo. If The Goblin Emperor beat The Three-Body Problem by 300 votes, you’d need a block of at least 300 voters to come in and vote for Cixin Liu to change the outcome (in my opinion, this is pretty much what happened last year):

Other initial Best Novel analysis: Goblin Emperor lost the Best Novel to Three-Body Problem by 200 votes. Since there seem to have been at least 500 Rabid Puppy voters who followed VD’s suggestion to vote Liu first, this means Liu won because of the Rabid Puppies. Take that as you will.

Here’s the data from 2010-2014. I left off last year because the Puppy campaigns changed the results so profoundly:

Margin of Victory  2010  2011  2012  2013  2014
Novel                 0    26   161   213   644
Novella              11   244    35   158    83
Novelette             3    97   116   109   460
Short Story         167   194   210   232   307
Related Work         60    53   163     3    84

With a couple of exceptions (Leckie dominating the 2014 Novel race with Ancillary Justice, and Mary Robinette Kowal winning the Novelette category in 2014 after she was disqualified a year earlier for the same story), a block vote of 300 would almost always have been enough to sway the outcome. In some years, you’d only need a handful of votes. The 0 value in 2010 is the tie between Bacigalupi and Mieville. It wouldn’t have taken much to push Feed over Blackout/All Clear in 2012, and only a little more to elevate 2312 over Redshirts in 2013. Even without deeply impacting the nominating stage, a block vote can fundamentally change who wins the Hugo award.
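
Using the 2010-2014 margins above, a quick script can count how often a hypothetical block of 300 voters would have exceeded the margin of victory:

```python
# Margins of victory from the 2010-2014 table above, by category.
margins = {
    "Novel":        [0, 26, 161, 213, 644],
    "Novella":      [11, 244, 35, 158, 83],
    "Novelette":    [3, 97, 116, 109, 460],
    "Short Story":  [167, 194, 210, 232, 307],
    "Related Work": [60, 53, 163, 3, 84],
}

BLOCK = 300  # hypothetical block-vote size

# A block larger than the margin could, in principle, flip the winner.
flippable = sum(1 for ms in margins.values() for m in ms if m < BLOCK)
total = sum(len(ms) for ms in margins.values())

print(f"A block of {BLOCK} covers {flippable} of {total} category-years")
```

Only three category-years (the 2014 Novel and Novelette races, and the 2014 Short Story race) had margins a 300-vote block couldn’t cover, which is what “almost always enough” means here.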

So, are we in for any kingmaker scenarios in the fiction categories this year?

Best Novel: I don’t think we’re in a kingmaker situation here, although I do think the Puppy block vote makes Uprooted an almost sure winner. A refresher of where we’re at:

Uprooted by Naomi Novik, The Fifth Season by N.K. Jemisin, and Ancillary Mercy by Ann Leckie all made the ballot “organically,” i.e. without appearing on the Rabid Puppy list.

Seveneves by Neal Stephenson and The Cinder Spires by Jim Butcher also made the final ballot. Both appeared on the Rabid Puppy list. Prior to the Puppies, Jim Butcher had never been nominated for a Hugo Best Novel, and past Hugo voting packets show him receiving very few votes. I often refer back to the 2009 nominating data, where Butcher received only 6 votes for Small Favor, one of his Dresden novels. If that number seems shockingly low for so popular a writer, remember that Butcher is associated with Urban Fantasy, a sub-genre that has not historically been part of the Hugos.

Stephenson is the most complex situation. He has received Hugo nominations three times before without any Puppy help: for Anathem in 2009, for Cryptonomicon in 2000, and for The Diamond Age in 1996. Diamond Age went on to win the Hugo that year. So while the Puppies certainly helped, we won’t know whether or not Stephenson would have received a nomination on his own until the final data comes out. A few more bits of data: Stephenson received 93 nominating votes in 2009, second most to Little Brother by Cory Doctorow. In the final ballot, Anathem took second, losing to The Graveyard Book by 120 votes, 477 to 357. If Seveneves performs similarly, it could have come down to how many people voted Seveneves as “No Award” based solely on its appearance on the Rabid Puppies list.

However, that’s all a moot point. On the final Rabid Puppy Hugo ballot, Vox Day put Uprooted above Seveneves (Novik/Stephenson/Butcher/No Award was the exact order). That will pretty much clinch the race for Uprooted, based on this logic:

  1. Uprooted was already very likely to finish either #1 or #2 in the Hugo voting, based on Novik’s strong performance in winning the Nebula, the Locus Fantasy Award, and grabbing nominations in the World Fantasy Award and British Fantasy. She has also done very well with SF Critics and Mainstream Critics, all of which are good indicators of Hugo success. She’s sold a ton of copies (46,000+ ratings on Goodreads, for instance). The closest competitor seems to be The Fifth Season, but Jemisin has already lost the Nebula and Locus Fantasy votes to Novik. As such, I think Uprooted was likely to win the Hugo without any help from the Puppies.
  2. The Rabid Puppies were at least 200 strong in the nominating stage, possibly higher. They might be anywhere from 200-500+ in the final voting stage (the final voting always brings more people to the table). Let’s use a very conservative 300.
  3. 300 additional votes for Uprooted at #1 will be enough to cover any potential margin of victory that Jemisin, Stephenson, or Leckie might have had without the Rabid Puppies. Let’s say Jemisin squeaked out an “organic” victory of 100 votes; once the Rabid Puppies are tallied, that swings the outcome back to Novik. You’d have to predict a scenario where Jemisin (or Leckie) would beat Novik by a number greater than the total number of Rabid Puppies. That’s only happened once in the last 5 years, when Ancillary Justice was a consensus book against a weaker field. So could it be Leckie again? I don’t think so; she’s already won a Hugo for this series, and I don’t think voters are ready to give her a second. Even if she squeaked out an organic win, I can’t see it being by a 300-vote margin. Butcher will attract tons of No Award votes, so he’s not even in the conversation.

So that leaves Uprooted as the only novel that seems to have a chance of winning the Hugos. What other book has a path to victory? You’d have to predict a huge “organic” win for either Jemisin or Leckie, and that just doesn’t seem likely. We’ll find out shortly!

Best Novella, Best Novelette, Best Short Story:

In each of these categories, the Rabid Puppies swept 4 out of 5 positions. This means that the non-slate story is the prohibitive favorite, based on how many people voted slated works No Award last year. If there’s any drama, it might be in Best Novella. Nnedi Okorafor’s Binti, the Nebula winner, is the non-slate work. Lois McMaster Bujold’s Penric’s Demon, from the same universe as her Hugo-winning Paladin of Souls, is the #1 Rabid Puppy pick. How many people will No Award Bujold based solely on her appearing on the Rabid Puppies slate? Let’s say Binti wins by an organic margin of 200 (before factoring in the Rabid vote) and the Rabids are 400 strong. It would take only 200 voters “No Award”ing Penric’s Demon to keep Binti the winner. I expect that to happen, but this will be some great data to sort through once the packets are released.
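
The Binti scenario is back-of-envelope arithmetic (the margin and slate sizes are hypothetical, as stated above), but it’s worth writing out explicitly:

```python
# Hypothetical numbers from the scenario above: Binti wins the
# "organic" vote by 200, and 400 Rabid Puppy voters put
# Penric's Demon first.
organic_margin = 200   # Binti's lead before slate votes are counted
rabid_votes = 400      # slate votes for Penric's Demon

# Simplification used in the post: each voter who ranks Penric's
# Demon below No Award effectively cancels one slate vote.
no_award_votes_needed = rabid_votes - organic_margin

print(f"Binti holds on if at least {no_award_votes_needed} voters "
      f"rank Penric's Demon below No Award")
```

This is a deliberate simplification of the actual instant-runoff count, but it captures why a modest number of No Award ballots can neutralize a much larger slate.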

I don’t see anything preventing “And You Shall Know Her by the Trail of Dead” or “Cat Pictures Please” from winning the Novelette and Short Story categories. Stephen King’s huge popularity will be blunted by his not being primarily associated with Science Fiction or Fantasy. “Folding Beijing” might be competitive, but the Rabid Puppies put it lower on their list, minimizing its chances.

So, what do you think? Will there be any kingmaker effects this year? Or will the Hugo fiction categories play out pretty much as they would have without the Rabid Puppies?

Updating the 2016 Awards Meta-List

I’m back from vacation—always important to leave the internet behind for a while. When else am I actually going to do my reading? I had a grand old time touring New Mexico, Colorado, and Oregon.

Quite a bit happened in the last few months. Chaos Horizon will spend this week catching up, and then make my 2016 Hugo prediction once the Hugo voting closes at the end of July. I’ll also have my too-early 2017 Hugo and Nebula lists up soon. Beware!

A lot of other SFF nominations and awards have been handed out in the past few weeks. These are good indications of who will win the eventual Hugo: every award nomination raises visibility, and the awards decided by voting are often good predictors of who will win the Hugo. Lastly, the full range of SFF awards gives us a better sense of what the “major” books of the year are than the Hugo or Nebula alone. Since each award is idiosyncratic, a book that emerges across all 14 is doing something right.

Here’s the top of the list, and the full list is linked here. Total number of nominations is on the far left.

5 The Fifth Season Jemisin, N.K.
5 Uprooted Novik, Naomi
4 Europe at Midnight Hutchinson, Dave
4 Seveneves Stephenson, Neal
3 Ancillary Mercy Leckie, Ann
2 The House of Shattered Wings Bodard, Aliette de
2 Apex Naam, Ramez
2 A Borrowed Man Wolfe, Gene
2 Luna: New Moon McDonald, Ian
2 The Thing Itself Roberts, Adam
2 The Book of Phoenix Okorafor, Nnedi
2 The Water Knife Bacigalupi, Paolo
2 Aurora Robinson, Kim Stanley

No dominant book this year. At the top of the list are the Hugo nominees, with Europe at Midnight swapped out for the Jim Butcher novel. Butcher has no nominations other than his Hugo, and since The Cinder Spires is a fantasy novel, it was always a more likely candidate for these awards than his urban fantasy Dresden series.

On to the top contenders:

Since we last checked, Uprooted picked up wins in the Nebula and the Locus Fantasy, as well as two more nominations, in the British Fantasy and World Fantasy awards. She’s now beaten Jemisin head to head in two voted awards (the Locus Fantasy and Nebula). While neither of those perfectly mirrors the Hugo voting audience, I place a lot of stock in those past wins heading into the Hugo.

While Jemisin has 5 nominations, she has zero wins so far. The other Hugo nominees have all managed at least one: Seveneves in the libertarian Prometheus (not a good indicator of future Hugo success), Ancillary Mercy in the Locus SF, and Uprooted in the Nebula and Locus Fantasy.

Hutchinson does well in the British awards (Clarke, BSFA, Kitschies) and poorly in the American ones (only managing a Campbell nomination). This shows, to me at least, a divide between European and American SF readerships. Since the Hugos are in Finland next year, will we see a very different set of Hugo nominees? I don’t think Hutchinson has a novel out in 2016, but it’s something to keep an eye on.

Otherwise, no other books are really emerging as “consensus” books that the Hugos missed. All the awards have announced their nominees, and about half have given out their awards. Aurora did more poorly than I would have expected given Robinson’s reputation. Same with The Water Knife; I expected Bacigalupi’s follow-up to The Windup Girl to garner more attention. Maybe 5 years is too long between novels? Who knows.

Interestingly, of the 7 Nebula nominees, 4 (Schoen, Wilde, Gannon, and Ken Liu) didn’t receive any other nominations in the 14 awards I track. A big surprise for me was Cixin Liu’s The Dark Forest, which had 5 nominations last year (Hugo winner, Nebula, Locus SF, Prometheus, Campbell), and got 0 this year.

Anything else useful to be learned from the list this year?


2016 Nebula Winners Announced: Novik Wins Best Novel

The SFWA announced the Nebula winners this weekend:

Novel Winner: Uprooted, Naomi Novik (Del Rey)

Other nominees:
Raising Caine, Charles E. Gannon (Baen)
The Fifth Season, N.K. Jemisin (Orbit US; Orbit UK)
Ancillary Mercy, Ann Leckie (Orbit US; Orbit UK)
The Grace of Kings, Ken Liu (Saga)
Barsk: The Elephants’ Graveyard, Lawrence M. Schoen (Tor)
Updraft, Fran Wilde (Tor)

Novik wins for her fairy-taleish feeling Uprooted. This year, popularity seems to have won out. Compare Novik’s number of ratings to Jemisin’s and Leckie’s as of today, 5/16/16:

Goodreads Amazon
Uprooted 38,266 1,154
Ancillary Mercy 9,782 205
The Fifth Season 5,658 120

In fact, Uprooted is about the most popular Science Fiction or Fantasy book of last year. I can’t think of a single book that has more Goodreads ratings this year; it just passed Armada this past month. You can check my list, which only tracks until March 31st (I only use it for predicting nominees, not winners, although maybe I should start using it for both!). Seveneves, Armada, and The Aeronaut’s Windlass still beat Novik out on Amazon, though. Novik being so much more popular than anyone else seems to have given her the edge: more readers, more potential voters, even in the relatively small pool of the SFWA.

This makes Uprooted a prohibitive Hugo favorite. When a Nebula winner is up for the Hugo, it almost always wins. Sadly, my Nebula prediction formula isn’t working very well; I’ll have to tweak it this summer to take raw popularity more into account.

Congrats to Novik!

UPDATE 5/16/16: Here’s some historical data on the Best Novel winners from the SFWA recommended reading list. Eventual winner is in orange, nominees in green.


This year, Novik won even though she was much lower down on the list, in position #4. She was beaten in the recs by Gannon, Schoen, and Wilde. I think each of those books had a very strong nomination support group that didn’t translate to the larger voting audience. Any thoughts on why this data wasn’t predictive? Here are this year’s SFWA recommendations, which correlate perfectly with the nominees but not the winner. The far-left column is the number of recs.

35 Barsk: The Elephants’ G… Schoen, Lawrence M. Tor Books 12 / 2015
33 Raising Caine Gannon, Charles E. Baen 7 / 2015
29 Updraft Wilde, Fran Tor Books 9 / 2015
25 Uprooted Novik, Naomi Del Rey 5 / 2015
22 The Grace of Kings Liu, Ken Saga Press 4 / 2015
21 Ancillary Mercy Leckie, Ann Orbit 10 / 2015
19 The Fifth Season Jemisin, N. K. Orbit 8 / 2015
18 Beasts of Tabat Rambo, Cat WordFire Press 4 / 2015
18 Karen Memory Bear, Elizabeth Tor Books 2 / 2015

2016 Nebula Prediction

I’ve spun my creaky model around and around, and here is my prediction for the Nebulas Best Novel category, taking place this weekend:

N.K. Jemisin, The Fifth Season: 22.5%
Ann Leckie, Ancillary Mercy: 22%
Naomi Novik, Uprooted: 14.7%
Ken Liu, The Grace of Kings: 13.3%
Lawrence Schoen, Barsk: 10.7%
Charles Gannon, Raising Caine: 9.5%
Fran Wilde, Updraft: 7.3%

Remember, Chaos Horizon is a grand (and perhaps failed!) experiment to see if we can predict the Nebulas and Hugos using publicly available data. To predict the Nebulas, I’m currently using 10 “Indicators” of past Best Novel winners. I’ve listed them at the bottom of this post, and I suggest you dig into the previous year’s prediction post to see how I’m building the model. If you travel down that hole, I suggest you bring plenty of coffee.

Simply put, though, I treat a bunch of indicators as competing experts (one person says the blue horse always wins! another says that when it’s rainy, green horses win!) and combine those expert voices to come up with a single number. While my model gives Jemisin a very slight edge this year, anyone can win the Nebula Best Novel award, and over the years just about everyone has. We’ve had some real curveballs in this category in the last 15 years, and if you bet money on this award, you’d lose. What I suggest is treating the list as a starting point for further thought and discussion . . . Why would Jemisin be in the lead? What about The Fifth Season seems to make it a front-runner?
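
For the curious, a Linear Opinion Pool is just a weighted average of the experts’ probability distributions. Here’s a minimal sketch; the weights, indicator names, and per-indicator probabilities below are invented for illustration, not Chaos Horizon’s actual values:

```python
# Linear Opinion Pool: pooled probability = weighted average of each
# expert's probability distribution over the nominees.
nominees = ["Jemisin", "Leckie", "Novik"]

experts = [
    # (weight, probability distribution over nominees) -- all made up
    (0.5, [0.50, 0.30, 0.20]),   # e.g. a "past Nebula history" indicator
    (0.3, [0.20, 0.50, 0.30]),   # e.g. a "critical acclaim" indicator
    (0.2, [0.10, 0.20, 0.70]),   # e.g. a "popularity" indicator
]

pooled = [
    sum(weight * dist[i] for weight, dist in experts)
    for i in range(len(nominees))
]

for name, p in zip(nominees, pooled):
    print(f"{name}: {p:.1%}")
```

As long as the weights sum to 1 and each expert’s distribution sums to 1, the pooled numbers are themselves a valid probability distribution, which is why the final prediction can be read as “percentage to win.”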

This year, Jemisin does very well because of her impressive Hugo and Nebula history (6 prior nominations), her sterling placement on year-end lists, her nominations for the Hugo, Locus Fantasy, and Kitschies, and the fact that this is the first novel in a series. Jemisin is very familiar to the Nebula audience and critically acclaimed. That’s a recipe for winning. The Nebulas tend to go to first books in a series (think Ancillary Justice or Annihilation from the past two years), so if Jemisin doesn’t win for Book #1 of The Broken Earth series, it could be quite a while before she has a viable chance to win again. Could SFWA voters give Book #3 the win instead? Sure, but that hasn’t happened in the past. I tend not to look much at content (there are plenty of other websites for that), but The Fifth Season does have some of the more experimental/literary prose Nebula voters have liked recently. Parts of it are in the second person, for instance. This book would fit pretty well with the Leckie and VanderMeer wins.

Leckie is probably too high in my formula, not because SFWA voters don’t like Leckie, but because Ancillary Justice won just 2 years ago. Do the SFWA voters really want to give Leckie another award for the same series so soon? Aside from that wrinkle, Ancillary Mercy has everything going for it: critical acclaim, award nominations, etc. A decade from now, I expect Leckie to have won the Nebula at least once more . . . but not until she publishes a new series.

I think Uprooted has a real shot. This is actually a great test-case year, allowing us to see what SFWA voters value most. Past Nebula history and familiarity? That helps Jemisin and Liu; Novik has 0 prior Nebula noms. Popularity? That helps Novik: stroll over to Amazon or Goodreads, and you can see that Uprooted has 4-5 times more ratings than Jemisin or Leckie. In the past, though, the SFWA hasn’t much cared about mainstream popularity. If Uprooted wins, I’ll need to recalibrate my formula to take popularity more into account.

Ken Liu will be familiar to the Nebula audience: he’s already won a Nebula in short fiction. My formula dings him because he didn’t show up on year-end lists or in the other awards. Same for Updraft, although we’re lacking any Nebula history for Wilde.

Gannon is the new Jack McDevitt—and McDevitt got nominated a bunch of times and then won. So it’s not out of the realm of reason for Gannon to win this year: the other books split the vote, etc. Still, it’s hard to imagine voters jumping on to Book #3 of a series if Books #1 and #2 couldn’t win.

That leaves Schoen—a true wild card. Schoen had the most votes on the SFWA recommended reading list, and we don’t yet know how much that matters. If Schoen wins, I’ll have to completely rejigger my formula. Things are getting a little creaky as is, and it’s probably time to go back and rebuild the model for Year #4.

Always remember the Nebula is an unpredictable award: The Quantum Rose won over A Storm of Swords. Who saw that coming? That’s why everyone gets a decent chance in my formula: no one dips below 5%.

Lastly, remember Chaos Horizon is just for fun, a chance to look at some predictions and think about who is likely to win. A different statistician would build a different model, and there’s no problem with that—statistics can’t predict the future. Instead, they help us to think about events that haven’t happened yet. That’s just one of many possible engagements with the awards. Good luck to all the Nebula nominees, and enjoy the ceremonies this weekend!

Indicator #1: Author has previously been nominated for a Nebula (80%)
Indicator #2: Author has previously been nominated for a Hugo (73.33%)
Indicator #3: Has received at least 10 combined Hugo + Nebula noms (46.67%)
Indicator #4: Novel is science fiction (73.33%)
Indicator #5: Places on the Locus Recommended Reading List (93.33%)
Indicator #6: Places in the Goodreads Best of the Year Vote (100.00%)
Indicator #7: Places in the Top 10 on the Chaos Horizon SFF Critics Meta-List (100.00%)
Indicator #8: Receives a same-year Hugo nomination (60%)
Indicator #9: Nominated for at least one other major SFF award (73.33%)
Indicator #10: Is the first novel of a series or standalone. (80%)

The percentage afterward tracks the data from 2001-2015 (when available), so it reads that 80% of the time, the eventual winner had previously been nominated for a Nebula, etc.
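
One naive way to use these indicators (a simplification of the actual Linear Opinion Pool model) is to credit each nominee with the historical hit rate of every indicator it satisfies. The hit rates come from the list above; which indicators a given book satisfies is invented here for illustration:

```python
# Historical hit rates for Indicators #1-#10, from the list above
# (fraction of 2001-2015 winners that satisfied each indicator).
hit_rates = [0.80, 0.7333, 0.4667, 0.7333, 0.9333,
             1.00, 1.00, 0.60, 0.7333, 0.80]

def score(satisfied):
    """satisfied: list of 10 booleans, one per indicator.

    Returns the sum of hit rates for the indicators the book
    satisfies -- a crude 'how much does this look like a past
    winner' number, not a probability."""
    return sum(rate for rate, s in zip(hit_rates, satisfied) if s)

# A hypothetical book satisfying everything except #3 and #4:
book = [True, True, False, False] + [True] * 6
print(f"score = {score(book):.2f} of {sum(hit_rates):.2f} possible")
```

To turn scores like these into the win percentages shown in the prediction, you’d still need to normalize across the nominee pool and weight the indicators, which is where the pooling model comes in.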

Checking in with the 2016 Awards Meta-List

For this Meta-List, I track 15 of the biggest SFF awards. Since each award has its own methodologies, biases, and blind spots, this gives us more of a 10,000 foot view of the field, to see if there are any consensus books emerging.

As of early May we have nominees for 10 of the 15 awards. I track the following awards: Clarke, British Fantasy, British SF, Campbell, Compton Crook, Gemmell, Hugo, Kitschies, Locus SF, Locus Fantasy, Nebula, Dick, Prometheus, Tiptree, World Fantasy. I ignore the first novel awards.

Here’s the current results:

4 nominations: The Fifth Season, Jemisin, N.K.
3 nominations: Europe at Midnight, Hutchinson, Dave
3 nominations: Ancillary Mercy, Leckie, Anne
3 nominations: Uprooted, Novik, Naomi
3 nominations: Seveneves, Stephenson, Neal
2 nominations: The House of Shattered Wings, Bodard, Aliette de
2 nominations: Apex, Naam, Ramez
2 nominations: A Borrowed Man, Wolfe, Gene

Everyone else has 1.
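Mechanically, the meta-list is just a cross-award tally. A minimal sketch of that bookkeeping (the shortlists below are abbreviated for illustration—the real list covers all 15 awards):

```python
from collections import Counter

# Abbreviated shortlists for illustration only; the real meta-list
# tracks nominees across all 15 awards named above.
shortlists = {
    "Hugo": ["The Fifth Season", "Ancillary Mercy", "Uprooted", "Seveneves"],
    "Nebula": ["The Fifth Season", "Ancillary Mercy", "Uprooted"],
    "Kitschies": ["The Fifth Season", "Europe at Midnight"],
    "Locus Fantasy": ["The Fifth Season", "Uprooted"],
}

# Count how many shortlists each book appears on.
tally = Counter(book for books in shortlists.values() for book in books)
for book, n in tally.most_common():
    print(n, "nominations:", book)
```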

As you can see, the top of that list correlates very well with the Hugo awards. Dave Hutchinson is very well-liked by the British-based awards and largely ignored by the American ones. With nominations in the Hugos, Nebulas, Kitschies, and Locus Fantasy, Jemisin is leading the way this year. Does this make her a favorite for the Nebulas this weekend? Or is she so neck and neck with Leckie and Novik that we don’t learn anything from this list?

No book has really broken out of the pack, like when Ancillary Justice took a huge lead a few years ago. I think we’ll have a close season with different books winning the different awards.

Here’s the whole spreadsheet, with links to every award I track.

Analyzing the 2016 Hugo Noms, Part 1

No use putting this off any longer. I was hoping we’d see some more leaked information/numbers, but we’re stuck with pretty minimal information this year. Here we go . . .

Where We’re At: Yesterday, the 2016 Hugo Nominations came out. Once again, the Rabid Puppies dominated the awards, grabbing over 75% of the available Hugo nomination slots.

If you’re here for the quick drive-by (Chaos Horizon is not a good website for the casual Hugo fan), I’m estimating the Rabid Puppies at 300 this year, with a broader range of 250-370. Much lower than that, and they couldn’t have swept any categories; much higher, and more categories would have been swept. Given this year’s turnout, 300 seems about the number that gets you these results. Calculations below. Be warned!

EDIT 4/28/2016: Sources are telling me that there were indeed withdrawals in several categories. This greatly muddies the upper limit of the Rabid Puppy vote. As such, I think the 250-370 should be read as the lower limit of the Rabid Puppy vote, with the upper limit being roughly 100 higher than that. I did some quick calculations for the upper RP limit using the Best Novel category, assuming Jemisin got 10%, 15%, or 20% of the vote. We know she beat John C. Wright’s Somewhither. That gives upper limits of 335, 481, and 615. I think 481 is a good middle-of-the-road estimate. Remember that Best Novel numbers are always inflated because more people vote in this category than any other, so a big RP number in Best Novel doesn’t necessarily carry over to all categories.

So revised RP estimate: 250-480. If there were many withdrawals, push to the high end (or beyond) of that range. Fewer withdrawals, low end. Perhaps the people who withdrew will come forward in the next few days and this will allow us to be more precise. If those withdrawals are made public, please post them in the comments for me.

EDIT 4/28/2016: Over at Rocket Stack Rank, Greg has done his own Hugo analysis, using a different set of assumptions. While I assume a linear increase of “organic” voters (non-Puppy voters), he uses a “power law” distribution. Most simply put, it’s the difference between fitting a line or a curve to the available data. I go with the line because of the low amount of data we have, but Greg is certainly right that the curve is the way to go if you trust the amount of data you have.

Using his method, Greg comes up with a lower Rabid Puppy number (around 200), but that’s also accompanied by a lower number of “organic” voters than my method estimates. Go over and take a look at his estimate. It’s a great example of how different statistical assumptions can yield substantially different results. I’ll leave it up to you to decide which estimate you think is better. I personally love that we now have multiple estimates using different approaches. It really broadens our understanding of this whole process. Now we need someone to come along and do a Bayesian analysis!

The Estimate: This year, MidAmeriCon II released minimal information at this stage. They’re not obligated to release any, so I guess we should be happy with what we got. Last year, we got the range of votes, which allowed us to estimate how strong the slate effect was. This year, we only have the list of nominees and the total votes per category. Is that enough to make any estimates?

Here on Chaos Horizon, I work with what I have. I think we can piece together an estimate using the following information:

  1. The Rabid Puppies swept some but not all of the categories. That’s a very valuable piece of information: it means the Rabid Puppies are strong, but not strong enough to dominate everything. With careful attention, we should be able to find the line (or at least the vicinity of the line).
  2. Zooming in more closely, the Rabid Puppies swept the following categories: Short Story, Related Work, Graphic Story, Professional Artist, Fanzine. Because of this, we know that the Rabid Puppies had to beat whatever the #1 non-Rabid Puppy pick was in those categories.
  3. The Rabid Puppies took 4/5 slots in Novella, Novelette, Semiprozine, Fan Writer, Fan Artist, Fan Cast, and Campbell. This means that, in those categories, the #1 non-Rabid Puppy pick had to be larger than the Rabid Puppy slate number.

With that information, if I could estimate how many votes the #1 non-Rabid Puppy pick likely received, I could estimate the Rabid vote. Couldn’t I use the historical data—the average percentage that the #1 pick has received in past years—to come up with this estimate?

One potential wrench: what if people withdrew from nominations? There’s no way to know this, and that would screw the numbers up substantially. However, with more than 10 categories to work with, we can only hope this didn’t happen in all of them. If you believe at least one person withdrew in Novelette, Semiprozine, Fan Writer, Fan Artist, Fan Cast, and Campbell, add roughly 100 to my Rabid Puppy estimate, putting it around 400. There’s also the question of Sad Puppy influence, which I’ll tackle in a later post.

Or, to write it out: In the swept categories, Rabid Puppy Number (x) is likely greater than the Non-Rabid voters (Total – x) * the average percentage of the #1 work from previous years.

In the 4/5 categories, the Rabid Puppy number (x) is likely less than the Non-Rabid voters (Total – x) * the average percentage of the #1 work from previous years.

While that won’t be 100% accurate, as the #1 work gets a range of numbers, it’s going to give us something to start with. Here’s the actual formula for calculating the Rabid Puppy lower limit in swept categories using this logic:

x > (Total – x) * #1%
x > #1% * Total – #1% * x
x + #1% * x > #1% * Total
(1 + #1%)x > #1% * Total
x > (#1% * Total) / (1 + #1%)
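That last line translates directly into a one-line function (a sketch—the function and variable names are mine):

```python
def rp_floor(total_votes, top_pct):
    """Minimum slate size x needed to beat the expected #1 organic pick.

    Solves x > (total_votes - x) * top_pct for x,
    i.e. x > top_pct * total_votes / (1 + top_pct).
    """
    return top_pct * total_votes / (1 + top_pct)

# 2016 Short Story: 2451 ballots, historical #1 average of ~14.03%
print(rp_floor(2451, 0.140275))  # ~301.5, so ~302 slate votes to sweep
```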

So, quick chart: we need the #1%, the average percentage of the vote the #1 work (i.e. the highest-placing non-RP work) gets, in all categories that were either swept or went 4/5 Rabid. I’ll use the 4/5 Rabid categories in a second to establish an upper limit.

Off to the Hugo stats to create the chart. I used data from 2010-2013, giving me 4 years. I didn’t use 2014 and 2015 because the Sad Puppies and Rabid Puppies changed the data sets by their campaigns. I didn’t use 2009 data because the WorldCon didn’t format it conveniently that year, so it is much harder to pull the percentages off. I don’t have infinite time to work on this stuff. :). I also had to toss out Fan Cast because it’s such a new category.

Chart #1: Percentage the #1 Hugo Nominee Received 2010-2013

Category 2013 2012 2011 2010 Average High Low Range
Short Story 16.2% 12.3% 14.0% 13.7% 14.0% 16.2% 12.3% 3.9%
Related Work 15.4% 11.1% 18.4% 21.6% 16.6% 21.6% 11.1% 10.5%
Graphic Story 29.7% 17.4% 22.3% 19.0% 22.1% 29.7% 17.4% 12.3%
Professional Artist 23.9% 40.1% 26.9% 33.6% 31.1% 40.1% 23.9% 16.2%
Fanzine 26.9% 25.2% 20.3% 16.1% 22.1% 26.9% 16.1% 10.8%
Novella 17.6% 24.8% 35.1% 21.1% 24.7% 35.1% 17.6% 17.6%
Novelette 14.5% 12.1% 11.3% 12.9% 12.7% 14.5% 11.3% 3.2%
Semiprozine 42.6% 29.3% 37.8% 32.4% 35.5% 42.6% 32.4% 13.3%
Fan Writer 23.9% 21.7% 21.7% 13.8% 20.3% 23.9% 13.8% 10.1%
Fan Artist 16.7% 22.6% 26.1% 20.6% 21.5% 26.1% 16.7% 9.4%
Campbell 18.7% 13.1% 20.3% 16.0% 17.0% 20.3% 13.1% 7.2%

Notice that far-right column of “range”: that’s the difference between the high and low in that 4-year period. This big range is going to introduce a lot of statistical noise into the calculations: if I estimate Best Related Work to get 16.6%, I’d be off by as much as 5% in some years. I could try to offset this with fancier stat tools, but 4 data points will produce a garbage standard deviation, so I won’t use that. On 300 votes, this 5% error would throw a +/- halo of 15 votes. Significant but not overwhelming.

Okay, now that I have this data, let’s use it to calculate the lower limit of Rabid Puppies:

Chart 2: Calculating Min Rabid Puppy Number from 2016 Swept Categories

Swept Category Total Votes #1 % Min RP
Short Story 2451 0.140275 301.52
Related Work 2080 0.166225 296.47
Graphic Story 1838 0.2211 332.8
Professional Artist 1481 0.310975 351.31
Fanzine 1455 0.22125 263.6
Average 309.14

Okay, what the hell does this chart say? The Short Story category had 2451 voters this year. In past years, the #1 non-Rabid pick grabbed about 14% of the vote. To beat that 14%, there needed to be at least 302 Rabid Puppy voters: 302 Rabid votes leaves (2451 – 302) = 2149 non-Rabid votes, and 14% of 2149 is 301 votes. Thus, the Rabid Puppies would edge out the top non-Rabid work by a single vote.

Now, surely that number isn’t 100% accurate. Maybe the top short story this year got 18% of the vote. Maybe it got 12%. But 300 seems about the line here–if the Rabid Puppies numbered fewer than that, you wouldn’t expect them to sweep.

Keep in mind, this chart just gives us a minimum. Now, let’s do the other limit, using the categories where the Puppies took 4/5. This is uglier, I’m warning you:

Chart 3: Calculating Max Rabid Puppy Number from 2016 4/5 Categories

4/5 Category Total Votes #1 % Max RP
Novella 2416 0.246575 477.89
Novelette 1975 0.12665 222.02
Semiprozine 1457 0.35505 381.76
Fan Writer 1568 0.20265 264.21
Fan Artist 1073 0.2151 189.95
Campbell 1922 0.170125 279.44

Ugh. Disaster befalls Chaos Horizon. This number should be higher than the last one, creating a nice range. Oh, the failed dreams. This chart is full of outliers, ranging from that huge 477 in Novella to that paltry 190 in Fan Artist. Did someone withdraw from the Fan Artist category, skewing the numbers? If I take that out, it bumps the average up to 325, which fixes my problem. Of course, if I dump the low outlier, I should dump the high outlier, which puts us back in the same fix.
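To make that outlier problem concrete, here’s the arithmetic in Python, using the upper-limit numbers from Chart 3:

```python
# Upper-limit estimates copied from Chart 3 (same formula as the
# lower limit, applied to the categories the Puppies took 4/5).
max_rp = {
    "Novella": 477.89,
    "Novelette": 222.02,
    "Semiprozine": 381.76,
    "Fan Writer": 264.21,
    "Fan Artist": 189.95,
    "Campbell": 279.44,
}

mean_all = sum(max_rp.values()) / len(max_rp)                 # ~302.5

# Dropping the low outlier (Fan Artist) bumps the average to ~325...
no_low = {k: v for k, v in max_rp.items() if k != "Fan Artist"}
mean_no_low = sum(no_low.values()) / len(no_low)              # ~325.1

# ...but dropping the high outlier (Novella) too pulls it back down.
trimmed = {k: v for k, v in no_low.items() if k != "Novella"}
mean_trimmed = sum(trimmed.values()) / len(trimmed)           # ~286.9
```

With only six data points, which outliers you trim swings the estimate by nearly 40 votes either way—hence the wide 250-370 range.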

A couple conclusions: the fact that both calculations turned up the 300 number is actually pretty remarkable. We could conclude that this is just about the line: if the Rabid Puppies are much stronger than 300 (say 350), they should have swept more categories. If they’re much weaker (250), they shouldn’t have swept any. 300 is the sweet spot to be competitive in most of these categories, with the statistical noise of any given year pushing some works over, some works not.

It also really, really looks like Novelette and Fan Artist should have been swept. Withdrawals?

To wrap up my estimate, I took the further step of using the 4-year high % and the 4-year low % (i.e. I deliberately min/maxed to model more centralized and less centralized results). You can find that calculation in this 2016 Hugo Nom Calcs spreadsheet. This gives us the range of 250-370 I mentioned earlier in the post. I’d keep in mind that the raw number of Rabid Puppies might be higher than that—this is just the slate effect they generated. It may be that some Rabid Puppies didn’t vote in all categories, didn’t vote for all the recommended works, etc.

There are lots of factors that could skew my calculation: perhaps voters spread the vote out more rather than consolidating it. Perhaps the opposite happened, with voters taking extra care to centralize their vote. Either might throw the estimate off by 50 or even 100.

Does around 300 make sense? That’s a good middle ground number that could dominate much of the voting in downballot categories but would be incapable of sweeping popular categories like Novel or Dramatic Work. I took my best shot, wrong as it may be. I don’t think we’ll do much better with our limited data—got any better ideas on how to calculate this?

2016 Hugo Finalists Announced

The 2016 Hugo Finalists have been announced. Press release here.

The Best Novel category played out in this fashion:

Ancillary Mercy, Ann Leckie
The Cinder Spires: Aeronaut’s Windlass, Jim Butcher
The Fifth Season, N.K. Jemisin
Seveneves, Neal Stephenson
Uprooted, Naomi Novik

I got 4/5 right here on Chaos Horizon, and Jemisin was the novel I had as #6 on my prediction. I’ll take that in an unpredictable and chaotic year. I also estimated 3620 votes, and the category had 3695 votes, so at least that part was close!

Jemisin making the list means that a surge of extra Hugo voters broke in her direction, pushing her over the combined weight of the Rabid and Sad Puppy vote behind Wright’s Somewhither. Given how well the Rabid Puppies performed elsewhere, that means Jemisin performed very well. The Fifth Season also outperformed Jemisin’s previous novels (a weakness of how I model on Chaos Horizon), which may speak to her chances of winning either the Hugo or the Nebula.

The Rabid and Sad Puppies are primarily responsible for the Butcher nomination, and doubtless pushed Stephenson up higher, although Stephenson had a good shot of making it normally. Uprooted appeared high on the Sad Puppy list, and likely picked up voters from that area.

With the exception of Butcher, that looks pretty similar to what I would have predicted the Hugos to be without the Rabid/Sad Puppies. That is certainly not the case lower down the ballot: categories like Best Short Story, Best Related Work, and Best Graphic Story were swept by the Rabid Puppies, and Best Novella and Best Novelette were almost swept. I’ll do some more careful analysis over the next few days, but the main reason this happened is that the large number of voters in Best Novel did not carry over to those categories. We had 3695 Best Novel ballots, but only 2451 Short Story ballots and 2080 Best Related Work ballots. Those missing 1000+ voters are the difference between a sweep and a mixed ballot.

My initial thought is that Uprooted will win, as it’s the only novel that seems acceptable to all camps. The typical voters will shoot down the Butcher; the Rabid Puppies will shoot down the Leckie and Jemisin. That leaves Stephenson or Novik. We’ll need to track the dialogue around the Stephenson nomination; if it is deemed a “Rabid Puppy” pick and thus No Awarded, that would seem to clear the path for Novik to win. It’ll take some time for me to sort through the numbers, though.

Perhaps the most interesting categories are Best Novella and Best Novelette, which had 4/5 Rabid Puppy near-sweeps, with the remaining slot going to the #1 story on the Sad Puppy list. While those stories—“Binti” and “And You Shall Know Her By The Trail Of Dead”—doubtless picked up support from other quarters (they were Nebula nominees, after all), this shows the Sad Puppies had a noticeable effect on the Hugos. I’ll give some thought to what that means and report back to you!

As I suspected, the “overlaps” did very well: if you appeared on multiple lists (Rabid + Sad Puppies, or Nebula nominee + Sad Puppies), you made the ballot. That may be the key to unraveling the fiction ballots: the Rabid Puppies won unless a work appeared on both the Nebula and Sad Puppy lists. That makes for an odd alliance, with the Sad Puppies possibly being the swing vote against a total Rabid Puppy sweep.

More analysis to come!
