Archive | 2016 Hugo Award

Final 2016 SFF Awards Meta-List

As I wrap up my analysis from last year, let’s look at my final 2016 SFF Awards Meta-List, now with all winners marked. This covers books published in 2015 that got award nominations in 2016. For this list, which gives a good 10,000-foot view of the Science Fiction and Fantasy Awards, I track 14 different awards to see who got nominated and who won. Here’s the top of the list, with all the books that got more than 2 nominations:

Nominations | Title | Author | Wins
5 | The Fifth Season | Jemisin, N.K. | 1
5 | Uprooted | Novik, Naomi | 3
4 | Europe at Midnight | Hutchinson, Dave | 0
4 | Seveneves | Stephenson, Neal | 1
3 | Ancillary Mercy | Leckie, Ann | 1

Jemisin’s The Fifth Season and Novik’s Uprooted finished atop the list with 5 nominations each, although Novik grabbed 3 wins (Nebula, British Fantasy, Locus Fantasy) to Jemisin’s one (Hugo). Seveneves won the Prometheus, and Ancillary Mercy won the Locus SF. A wide range of books won SF awards this year, including lesser-known works such as The Chimes by Anna Smaill (World Fantasy), Radiomen by Eleanor Lerman (Campbell), and Lizard Radio by Pat Schmatz (Tiptree). I’ll also note that this list correlated with 4/5 of the Hugo nominees, with only Hutchinson missing out.

I think a list like this gives us a good place to start thinking about the 2017 SFF Awards season. Since the SFF voting public doesn’t change massively from year to year, it tends to repeat its picks. For 2016, Jemisin is back with The Obelisk Gate, a sequel to The Fifth Season; I expect that to be a stalwart on the 2017 awards circuit, probably matching the number of noms and wins of The Fifth Season. Novik published League of Dragons in 2016, the final book of her nine-novel Temeraire sequence. Books that are #9 in a series rarely get SFF award nominations, although she may be a possibility in the new Best Series Hugo.

Leckie and Stephenson didn’t publish books last year, which opens up some spots in the awards. Leckie in particular has grabbed a host of nominations in these 14 awards over the past 3 years: 16 nominations and 9 wins by my count. That’s a big vacuum to fill: who’s going to step up and grab these spots?

Dave Hutchinson is an interesting possibility for the Hugo this year. Europe in Winter, the third volume in his Fractured Europe series, just came out on November 3. Hutchinson is not particularly well known here in the United States, but he’s racked up 2 Best Novel nominations for the Clarke (a British award), 3 for the British Science Fiction Award (obviously British), and 2 for the Campbell (a more literary American SF award). Could the Hugos being held in Europe this year—and presumably more British voters making the trip to Finland than Americans—result in a European bounce? London in 2014 didn’t produce much of a boon for European writers, but Glasgow in 2005 resulted in an all British/Scottish final ballot. The new Hugo voting rules will prevent a 2005-style sweep, but they could also help push a British or maybe even Finnish author onto the ballot. Hutchinson might also be competitive in the Best Series category, although I think Charles Stross and his well-liked Laundry Files might be the better bet there, given that he’s already won the Hugo for Best Novella 3 times for works from that series.

Looking further down the list, no one from last year’s nominees really jumps out as a major contender for 2017. Amazingly, The Dark Forest by Cixin Liu didn’t get a single SFF award nomination last year despite The Three-Body Problem winning the Hugo the year before, which probably speaks poorly to Death’s End’s chances. Ken Liu only got the Nebula nomination for The Grace of Kings, so he might be a contender in that category again. Becky Chambers got only one nomination for The Long Way to a Small, Angry Planet, but that was an indie-published book that came out in several different formats over several different years; her sequel A Closed and Common Orbit has none of those publication issues and may grab some nominations.

All in all, we’re going to be looking at some new faces for 2017. I’ll start hacking away with some preliminary lists of contenders for the 2017 Hugo and Nebula later this week.

2016 Hugos: Some Initial Stat Analysis

So they put the stats up already.

It’s actually pretty easy this year to analyze what the Rabid Puppy numbers were in the nomination and final stages. We can use Vox Day himself to do that: on the Rabid Puppies slate, he included himself in the “Best Editor Long Form” category, and then later, in his final vote post, suggested himself as #1 in that category. Given the controversy surrounding him, I think we can safely assume that almost all the votes for him were from the Rabid Puppies.

So, here’s where he landed:

Nomination stage:

801 Toni Weisskopf 45.41%
465 Anne Sowards 26.36% *
461 Jim Minz 26.13%
437 Vox Day 24.77%
395 Mike Braff 22.39% **
302 Sheila Gilbert 17.12%
287 Liz Gorinsky 16.27%

So that means 437 votes for the Rabid Puppies in the nomination stage. This is in line with other obvious Puppy picks: 533 for Somewhither by John C. Wright (novels always pick up more votes), 433 votes for Stephen King’s “Obits” (King is a major author, but no one thinks of him for a Hugo), 387 for “Space Raptor Butt Invasion,” 482-384 across the Puppy-swept Best Related Work category, 398 for the Castalia House blog, and so forth. That’s a stable enough range for me to say that the Rabid Puppy strength in the nomination stage was 384-533, with around 440 right in the middle.

So what happened? The Rabid Puppy vote collapsed from the nomination to the final voting stage. This most likely happened because you could nominate in 2016 for free (provided you had paid in 2015), but to vote in the final stage in 2016, you had to pay again. Here are the stats from the first round of Best Editor Long Form:

Vox Day 165

It’s highly unusual to get 437 votes in the nomination stage and then collapse to 165 in the more popular, higher-turnout final stage. That 165 represents the most “Rabid” of the Rabid Puppies; some of the other Rabid Puppy picks did considerably better in the first round of voting: “Space Raptor Butt Invasion” got 392 votes in the first round of Short Story, for instance. It’s hard to know exactly what to chalk that up to at this point—people enjoying the joke, or a broader pool of Rabid Puppy-associated voters who didn’t want to vote Vox Day #1 in Best Editor Long Form.

My initial conclusion then would be around 440 Rabid Puppies in the nomination stage but less than 200 in the final voting stage. Thoughts? Seeing something I’m not?

Jemisin Wins Hugo Award for Best Novel

At WorldCon, they just announced that N.K. Jemisin won the Hugo for Best Novel for The Fifth Season this year! Congratulations to her in what was a hard-fought and controversial year.

Jemisin’s win is something of a surprise. We’ll have the stats soon, but Uprooted had a lot of the traditional markers going for it: it won the Nebula, it won the Locus Fantasy, and it was more popular than The Fifth Season on Goodreads and Amazon. Jemisin’s win goes to show how unpredictable the Hugos have become—the influx of new voters, along with the high sentiments regarding the Puppy controversy, has basically shot past Hugo patterns all to hell. After ten or so years when the Nebula winner pretty regularly won the Hugo, this may mark a new era in Hugo history. It certainly makes for a more dynamic award, even if it plunges Chaos Horizon, and my predictions, into, well, chaos.

In the other categories, things played out largely as expected. In many cases, there was only one non-Puppy choice; that usually won. The two exceptions were in the Campbell, where Andy Weir beat Alyssa Wong, and in Novelette, where Hao Jingfang beat Brooke Bolander. Both Weir and Jingfang were very mainstream picks, however, and could have made the ballot without any Rabid support. In general, the Hugo voters didn’t “No Award” Hugo categories just because of overlap with the Rabid Puppies. Neither Gaiman nor File 770 was punished for appearing on the slate. When categories were given “No Award” (Best Related Work, Fancast), it was because none of the nominees overlapped with typical Hugo picks.

As the dust clears and the stats come out, I’ll continue to do some more analysis. Whatever we can say about 2016, we know that 2017 will be even more unpredictable. We’ve got ballot rule changes, more high passions and controversies, and a new set of books to ponder over!

Final 2016 Hugo Best Novel Prediction

Let me finalize my 2016 Hugo Best Novel Prediction:

  1. Uprooted by Naomi Novik
  2. The Fifth Season by N.K. Jemisin
  3. Ancillary Mercy by Ann Leckie
  4. Seveneves by Neal Stephenson
  5. The Aeronaut’s Windlass by Jim Butcher

Remember, Chaos Horizon doesn’t predict what I want to happen, but rather what I think will happen based on my analysis and understanding of past Hugo trends.

First off, those past trends have been shot full of holes in recent years. The Puppy controversies have fundamentally transformed the voting pool of the Hugos, meaning that past trends might not apply given how much the voters have changed. New voters have come in with the Puppies; new voters have come in to contest the Puppies; some of those voters might have stayed, some might have dropped out. Some voters are voting No Award out of principle; some aren’t. How exactly you balance all of that is going to be largely speculative, maybe to the point that no predictions are meaningful. That’s why we’re called “Chaos Horizon” here!

However, I think the potential Kingmaker effect, when combined with past Hugo trends and the popularity of Uprooted, makes Novik a reasonable favorite. Novik has already won the Nebula and the Locus Fantasy (beating Jemisin twice); her book is a stand-alone, making it feel more complete than the Jemisin, Leckie, or Butcher; and Novik, along with Stephenson, is more popular than the other nominees. In the past, these have all been characteristics of the Hugo winner.

In the past few years, I’ve developed a mathematical formula to help me predict the Hugos. The formula won’t be accurate this year because of the Rabid Puppy voters, but here’s what it came up with:

Uprooted 27.1%
Ancillary Mercy 20.5%
Seveneves 20.0%
The Fifth Season 17.1%
The Cinder Spires 15.3%

The formula obviously doesn’t take pro- or anti-Puppy sentiment into account. Uprooted is a big favorite because of its Nebula win this year. The Nebula has been the best predictor of the Hugo in the last decade: in 5 of the last 10 years, the Nebula winner has gone on to win the Hugo. The stats are actually better than that: in 2006, 2007, 2009, and 2015, the Nebula winner was not even nominated for the Hugo. So the only time a Nebula winner has lost the Hugo in the final voting round was when Redshirts beat 2312 in 2013. In other words, in 5 of the 6 recent years when the Nebula winner had a chance to win the Hugo, it did. Those are nice odds.

My formula is not designed to accurately predict second place. With that in mind, I think Jemisin is too low. Leckie won too recently with Ancillary Justice to seem to have a chance to win again, and Seveneves is pretty divisive. One reason my formula falls short is that it currently doesn’t punish books for being sequels; Leckie should be lower for that reason. It’s something I’ll factor in next year. Stephenson will be lower because some people will vote him “No Award” for appearing on the Rabid Puppy list. So this pushes Jemisin up two spots.

However, Jemisin is lower down in the formula because she doesn’t have a history of winning major awards, unlike Stephenson and Leckie. Check out Jemisin’s sfadb page. She’s been nominated for 12 major awards and hasn’t won any. Not good odds. That’s what my formula is picking up on. My formula has trouble gauging changes in sentiment. I think most readers believe The Fifth Season is better than Jemisin’s earlier works, but I have trouble quantifying that.

What my numbers give is a percentage chance to win based on the patterns of past Hugo votes, combining several indicators into a Linear Opinion Pool prediction model. Prediction is different from statistical analysis: different statisticians would build different models based on different assumptions. You should never treat a prediction (either on Chaos Horizon or in something like American elections) the same way you would treat a statistical analysis; they are guided by different logical methods. Someone who disagrees with one of my assumptions would come up with a different prediction. Fair enough. This is all just for fun! You can trace back through the model using some of these posts: Hugo Indicators, and my series of Nebula model posts beginning here. The Hugo model uses the same math but different data.
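If you want to see the mechanics, here’s a minimal sketch of a Linear Opinion Pool: each indicator acts as an “expert” assigning a probability distribution over the finalists, and the final number is a weighted average of those distributions. The indicator names, weights, and per-book probabilities below are hypothetical placeholders, not my actual inputs.

```python
# Minimal Linear Opinion Pool sketch. The indicators, weights, and per-book
# probabilities are hypothetical placeholders, not the actual Chaos Horizon inputs.

finalists = ["Uprooted", "Ancillary Mercy", "Seveneves",
             "The Fifth Season", "The Aeronaut's Windlass"]

# Each indicator ("expert") is a probability distribution over the finalists.
indicator_probs = {
    "same_year_awards":   [0.45, 0.15, 0.10, 0.20, 0.10],
    "popularity":         [0.30, 0.10, 0.35, 0.05, 0.20],
    "past_award_history": [0.15, 0.35, 0.30, 0.10, 0.10],
}

# Relative trust in each indicator; the weights must sum to 1.
weights = {"same_year_awards": 0.5, "popularity": 0.3, "past_award_history": 0.2}

# Pooled probability = weighted average across indicators.
pooled = [
    sum(weights[name] * probs[i] for name, probs in indicator_probs.items())
    for i in range(len(finalists))
]

for book, p in sorted(zip(finalists, pooled), key=lambda t: -t[1]):
    print(f"{book}: {p:.1%}")
```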

Let’s look at this with some other data. Here’s the head to head popularity comparison of our five Hugo finalists, based on the number of ratings at Goodreads and Amazon.

Title | Goodreads | Amazon
Uprooted | 41,174 | 1,332
Seveneves | 35,428 | 2,487
The Aeronaut’s Windlass | 18,249 | 1,285
Ancillary Mercy | 11,698 | 247
The Fifth Season | 7,676 | 184

These aren’t perfect samples, as neither Goodreads nor Amazon is 100% reflective of the Hugo voting audience, nor has the Hugo Awards always correlated with popularity. Still, it gives us another interesting perspective.

Jemisin does not break out of the bubble in ways that Novik and Stephenson do. These aren’t small differences, either: Uprooted is 5x more popular on Goodreads and 7x on Amazon than The Fifth Season. I put stock in that—the more people read your book, the more there are to vote for it. While the Hugo voting audience is a subset of all readers, popularity matters.

Two other notes: it’s fascinating how different Amazon and Goodreads are. Novik outpaces Stephenson on Goodreads but gets crushed on Amazon. Different audiences, different reading habits. The question for Chaos Horizon is which one better correlates with the Hugo winner? Second, Butcher may be a very popular Urban Fantasy writer with Dresden, but he’s only a moderately popular Fantasy writer.

So, all told, Novik has big advantages in popularity and same-year awards (having won the Nebula already). None of Jemisin, Leckie, or Stephenson managed to do better than Novik in critical acclaim or award nominations. Stephenson and Leckie do beat Novik in past awards history. When we factor in the possible Kingmaker effect from the Rabid Puppies, Novik is a clear favorite.

It would take a lot of change in the voting pool to overcome Novik’s seeming advantages. I wouldn’t count it out completely—these past two years have been so volatile that anything can happen.

Last question—where will No Award place? Last year, voters chose to place Jim Butcher and Kevin J. Anderson below No Award, likely as punishment for appearing on the Puppy slates. Will it happen again this year? I have a hard time seeing Stephenson getting No Awarded, Puppy appearance or not. He’s been a well-liked Hugo writer for a long time, and he may well have scored a nomination without Puppy help. I think Stephenson will beat No Award.

That leaves Butcher. He was No Awarded in 2015 by 2674 to 2000 votes, so a No Award margin of 674. That’s a pretty substantial number. If we go back to 2014, when Larry Correia’s Warbound was the first Puppy pick to make it to the Best Novel category, he beat No Award by a 1161 to 1052 margin. So that means the “No Awarders” are picking up steam. At Chaos Horizon, I go with the past year’s results to predict the future unless there’s some compelling data to suggest otherwise. So I’ll predict that Butcher will lose to No Award in 2016 just as he did in 2015.

So, what do you think? Are we in for a Best Novel surprise, or will Novik walk away with the crown?

The Kingmaker Effect and the 2016 Hugos

As we accelerate towards the announcement of the 2016 Hugo winner, it’s time to think once again about the Kingmaker effect and the Hugo awards. This is going to be essential both for 2016 and moving forward into 2017, even if the Hugo voting changes pass. “E Pluribus Hugo” only addresses the nominating stage, leaving plenty of room for kingmaker effects in the final voting stage.

The long and short of it is that a dedicated block of voters can change the outcome by voting for what would normally be the #2 or even #3 place finisher, pushing them into the winner’s circle by overcoming the “organic” winner. Let’s define margin of victory as the number of votes separating the winner and the second-place finisher. You can pull this information out of the Hugo voting packets. Basically, this number tells us how many votes you would need to change the outcome of the Hugo. If The Goblin Emperor had beaten The Three-Body Problem by 300 votes, you’d need a block of at least 300 voters to come in and vote for Cixin Liu to change the outcome (in my opinion, this is pretty much what happened last year):

Other initial Best Novel analysis: Goblin Emperor lost the Best Novel to Three-Body Problem by 200 votes. Since there seem to have been at least 500 Rabid Puppy voters who followed VD’s suggestion to vote Liu first, this means Liu won because of the Rabid Puppies. Take that as you will.

Here’s the data from 2010-2014. I left off last year because the Puppy campaigns changed the results so profoundly:

Margin of Victory | 2010 | 2011 | 2012 | 2013 | 2014
Novel | 0 | 26 | 161 | 213 | 644
Novella | 11 | 244 | 35 | 158 | 83
Novelette | 3 | 97 | 116 | 109 | 460
Short Story | 167 | 194 | 210 | 232 | 307
Related Work | 60 | 53 | 163 | 3 | 84

With a couple of exceptions—Leckie dominating the 2014 novel race with Ancillary Justice, and Mary Robinette Kowal winning the Novelette category in 2014 after she was disqualified a year earlier for the same story—a block vote of 300 would almost always be enough to sway the outcome. In some years, you’d only need a handful of votes. The 0 value in 2010 is the tie between Bacigalupi and Mieville. It wouldn’t have taken much to push Feed over Blackout/All Clear in 2011, and only a little more to elevate 2312 over Redshirts in 2013. Even without deeply impacting the nominating stage, a block vote can fundamentally change who wins the Hugo award.
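To make that arithmetic explicit, here’s a tiny sketch of the kingmaker check against the margins above. It treats the final round as a simple two-way count and ignores the instant-runoff redistribution, so treat it as a first approximation only.

```python
# Kingmaker arithmetic in its simplest form: a block voting the organic
# runner-up first flips the result exactly when the block is larger than the
# organic margin of victory. (This ignores the redistribution rounds of the
# actual instant-runoff count, so it's only a first approximation.)

def flips(organic_margin: int, block_size: int) -> bool:
    """True if a block behind the runner-up overturns the organic winner."""
    return block_size > organic_margin

# Best Novel margins of victory from the table above, against a
# hypothetical 300-vote block:
for year, margin in [(2010, 0), (2011, 26), (2012, 161), (2013, 213), (2014, 644)]:
    print(year, "flipped" if flips(margin, 300) else "held")
```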

So, are we in for any kingmaker scenarios in the fiction categories this year?

Best Novel: I don’t think we’re in a kingmaker situation here, although I do think the Puppy block vote makes Uprooted an almost sure winner. A refresher of where we’re at:

Uprooted by Naomi Novik, The Fifth Season by N.K. Jemisin, and Ancillary Mercy by Ann Leckie all made the ballot “organically,” i.e. without appearing on the Rabid Puppy list.

Seveneves by Neal Stephenson and The Cinder Spires by Jim Butcher also made the final ballot. Both appeared on the Rabid Puppy list. Prior to the Puppies, Jim Butcher had never been nominated for a Hugo Best Novel, and past Hugo voting packets show him receiving very few votes. I often refer back to the 2009 nominating data, where Butcher received only 6 votes for Small Favor, one of his Dresden novels. If that number seems shockingly low for so popular a writer, remember that Butcher is associated with Urban Fantasy, a sub-genre that has not historically been part of the Hugos.

Stephenson’s is the most complex situation. He has received Hugo nominations three times before without any Puppy help: for Anathem in 2009, for Cryptonomicon in 2000, and for The Diamond Age in 1996. The Diamond Age went on to win the Hugo that year. So while the Puppies certainly helped, we won’t know whether or not Stephenson would have received a nomination on his own until the final data comes out. A few more bits of data: Stephenson received 93 nominating votes in 2009, second most to Little Brother by Cory Doctorow. In the final ballot, Anathem took second, losing to The Graveyard Book by 120 votes, 477 to 357. If Seveneves performs similarly, the outcome could come down to how many people vote Seveneves below “No Award” based solely on its appearance on the Rabid Puppies list.

However, that’s all a moot point. On the final Rabid Puppy Hugo ballot, Vox Day put Uprooted above Seveneves (Novik/Stephenson/Butcher/No Award was the exact order). That will pretty much clinch the race for Uprooted, based on this logic:

  1. Uprooted was already very likely to finish either #1 or #2 in the Hugo voting, based on Novik’s strong performance in winning the Nebula, the Locus Fantasy Award, and grabbing nominations in the World Fantasy Award and British Fantasy. She has also done very well with SF Critics and Mainstream Critics, all of which are good indicators of Hugo success. She’s sold a ton of copies (46,000+ ratings on Goodreads, for instance). The closest competitor seems to be The Fifth Season, but Jemisin has already lost the Nebula and Locus Fantasy votes to Novik. As such, I think Uprooted was likely to win the Hugo without any help from the Puppies.
  2. The Rabid Puppies were at least 200 strong in the nominating stage, possibly higher. They might be anywhere from 200-500+ in the final voting stage (the final voting always brings more people to the table). Let’s use a very conservative 300.
  3. 300 additional votes for Uprooted at #1 will be enough to cover any potential margin of victory that Jemisin, Stephenson, or Leckie might have had without the Rabid Puppies. Let’s say Jemisin squeaked out an “organic” victory of 100 votes; once the Rabid Puppies are tallied, that swings the outcome back to Novik. You’d have to predict a scenario where Jemisin (or Leckie) would beat Novik by a number greater than the total number of Rabid Puppies. That’s only happened once in the last 5 years, when Ancillary Justice was a consensus book against a weaker field. So could it be Leckie again? I don’t think so; she’s already won a Hugo for this series and I don’t think voters are ready to give her a second. Even if she squeaked out an organic win, I can’t see it being by a 300-vote margin. Butcher will attract tons of No Award votes, so he’s not even in the conversation.

So that leaves Uprooted as the only novel that seems to have a chance of winning the Hugos. What other book has a path to victory? You’d have to predict a huge “organic” win for either Jemisin or Leckie, and that just doesn’t seem likely. We’ll find out shortly!

Best Novella, Best Novelette, Best Short Story:

In each of these categories, the Rabid Puppies swept 4 out of 5 positions. This means that the non-slate story is the prohibitive favorite, based on how many people voted slated works No Award last year. If there’s any drama, it might be in Best Novella. Nnedi Okorafor’s Binti, the Nebula winner, is the non-slate work. Lois McMaster Bujold’s Penric’s Demon, from the same universe as her Hugo-winning Paladin of Souls, is the #1 Rabid Puppy pick. How many people will No Award Bujold based solely on her appearing on the Rabid Puppies slate? Let’s say Binti wins by an organic margin of 200 (before factoring in the Rabid vote) and the Rabids are 400. It would take only 200 voters “No Award”ing Penric’s Demon to keep Binti the winner. I expect that to happen, but this will be some great data to sort through once the packets are released.

I don’t see anything preventing “And You Shall Know Her by the Trail of Dead” or “Cat Pictures Please” from winning the Novelette and Short Story category. Stephen King’s huge popularity will be blunted by his not being primarily associated with Science Fiction or Fantasy. “Folding Beijing” might be competitive, but the Rabid Puppies put it lower on their list, minimizing its chances.

So, what do you think? Will there be any kingmaker effects this year? Or will the Hugo fiction categories play out pretty much as they would have without the Rabid Puppies?

Analyzing the 2016 Hugo Noms, Part 1

No use putting this off any longer. I was hoping we’d see some more leaked information/numbers, but we’re stuck with pretty minimal information this year. Here we go . . .

Where We’re At: Yesterday, the 2016 Hugo Nominations came out. Once again, the Rabid Puppies dominated the awards, grabbing over 75% of the available Hugo nomination slots.

If you’re here for the quick drive-by (Chaos Horizon is not a good website for the casual Hugo fan), I’m estimating the Rabid Puppies at 300 this year, with a broader range of 250-370. Lower than that, you can’t sweep categories. Higher, more would have been swept. Given this year’s turnout, 300 seems about the number that gets you these results. Calculations below. Be warned!

EDIT 4/28/2016: Sources are telling me that there were indeed withdrawals in several categories. This greatly muddies the upper limit of the Rabid Puppy vote. As such, I think the 250-370 should be read as the lower limit of the Rabid Puppy vote, with the upper limit being somewhere around 100 higher than that. I did some quick calculations for the upper RP limit using the Best Novel category, assuming Jemisin got 10%, 15%, or 20% of the vote. We know she beat John C. Wright’s Somewhither. That gives upper limits of 335, 481, and 615. I think 481 is a good middle-of-the-road estimate. Remember that Best Novel numbers are always inflated because more people vote in this category than any other, so a big RP number in Best Novel doesn’t necessarily carry over to all categories.
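Here’s that upper-limit arithmetic sketched out. If Jemisin took share p of the non-RP ballots and still beat Wright’s slate total x, then p * (total – x) > x, which rearranges to x < p * total / (1 + p). This is just a sketch, using the 3695 Best Novel nominating ballots:

```python
# Upper-limit sketch for the Rabid Puppy (RP) vote x in Best Novel.
# If Jemisin got share p of the non-RP ballots and still beat Wright's slate
# total x, then p * (3695 - x) > x, i.e. x < p * 3695 / (1 + p).

total = 3695  # Best Novel nominating ballots in 2016

for p in (0.10, 0.15, 0.20):
    limit = int(p * total / (1 + p))  # floor, since x must fall below the bound
    print(f"Jemisin at {p:.0%} of the organic vote: RP upper limit = {limit}")
```

That reproduces the 335 / 481 / 615 numbers above.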

So revised RP estimate: 250-480. If there were many withdrawals, push to the high end (or beyond) of that range. Fewer withdrawals, low end. Perhaps the people who withdrew will come forward in the next few days and this will allow us to be more precise. If those withdrawals are made public, please post them in the comments for me.

EDIT 4/28/2016: Over at Rocket Stack Rank, Greg has done his own Hugo analysis, using a different set of assumptions. While I assume a linear increase of “organic” voters (non-Puppy voters), he uses a “power law” distribution. Most simply put, it’s the difference between fitting a line or a curve to the available data. I go with the line because of the low amount of data we have, but Greg is certainly right that the curve is the way to go if you trust the amount of data you have.

Using his method, Greg comes up with a lower Rabid Puppy number (around 200), but that’s also accompanied by a lower number of “organic” voters than my method estimates. Go over and take a look at his estimate. It’s a great example of how different statistical assumptions can yield substantially different results. I’ll leave it up to you to decide which estimate you think is better. I personally love that we now have multiple estimates using different approaches. It really broadens our understanding of this whole process. Now we need someone to come along and do a Bayesian analysis!

The Estimate: This year, MidAmeriCon II released minimal information at this stage. They’re not obligated to release any, so I guess we should be happy with what we got. Last year, we got the range of votes, which allowed us to estimate how strong the slate effect was. This year, we only have the list of nominees and the number of ballots per category. Is that enough to make any estimates?

Here on Chaos Horizon, I work with what I have. I think we can piece together an estimate using the following information:

  1. The Rabid Puppies swept some but not all of the categories. That’s a very valuable piece of information: it means the Rabid Puppies are strong, but not strong enough to dominate everything. With careful attention, we should be able to find the line (or at least the vicinity of the line).
  2. Zooming in more closely, the Rabid Puppies swept the following categories: Short Story, Related Work, Graphic Story, Professional Artist, Fanzine. Because of this, we know that the Rabid Puppies had to beat whatever the #1 non-Rabid Puppy pick was in those categories.
  3. The Rabid Puppies took 4/5 slots in Novella, Novelette, Semiprozine, Fan Writer, Fan Artist, Fan Cast, and Campbell. This means that, in those categories, the #1 non-Rabid Puppy pick had to be larger than the Rabid Puppy slate number.

With that information, if I could just figure out how many votes the #1 non-Rabid Puppy pick likely received, I could estimate the Rabid vote. And couldn’t I use the historical data—the average percentage that the #1 pick has received in past years—to come up with this estimate?

One potential wrench: what if people withdrew from nominations? There’s no way to know this, and that would screw the numbers up substantially. However, with more than 10 categories to work with, we can only hope this didn’t happen in all of them. If you believe at least one person withdrew in each of Novelette, Semiprozine, Fan Writer, Fan Artist, Fan Cast, and Campbell, add 100 to my Rabid Puppy estimate, bringing it to 400. There’s also the question of Sad Puppy influence, which I’ll tackle in a later post.

Or, to write it out: In the swept categories, Rabid Puppy Number (x) is likely greater than the Non-Rabid voters (Total – x) * the average percentage of the #1 work from previous years.

In the 4/5 categories, the Rabid Puppy number (x) is likely less than the Non-Rabid voters (Total – x) * the average percentage of the #1 work from previous years.

While that won’t be 100% accurate, as the #1 work gets a range of numbers, it’s going to give us something to start with. Here’s the actual formula for calculating the Rabid Puppy lower limit in swept categories using this logic:

x > (Total – x) * #1%
x > #1% * Total – #1% * x
x + #1% * x > #1% * Total
(1 + #1%)x > #1% * Total
x > (#1% * Total) / (1 + #1%)

So, quick chart: we need the #1%, the average percent of the vote that the #1 work (i.e., the highest-placing non-RP work) gets, in all categories that were either swept or went 4/5. I’ll use the 4/5 Rabid categories in a second to establish an upper limit.

Off to the Hugo stats to create the chart. I used data from 2010-2013, giving me 4 years. I didn’t use 2014 and 2015 because the Sad Puppies and Rabid Puppies changed the data sets by their campaigns. I didn’t use 2009 data because the WorldCon didn’t format it conveniently that year, so it is much harder to pull the percentages off. I don’t have infinite time to work on this stuff. :). I also had to toss out Fan Cast because it’s such a new category.

Chart #1: Percentage the #1 Hugo Nominee Received 2010-2013

Category | 2013 | 2012 | 2011 | 2010 | Average | High | Low | Range
Short Story | 16.2% | 12.3% | 14.0% | 13.7% | 14.0% | 16.2% | 12.3% | 3.9%
Related Work | 15.4% | 11.1% | 18.4% | 21.6% | 16.6% | 21.6% | 11.1% | 10.5%
Graphic Story | 29.7% | 17.4% | 22.3% | 19.0% | 22.1% | 29.7% | 17.4% | 12.3%
Professional Artist | 23.9% | 40.1% | 26.9% | 33.6% | 31.1% | 40.1% | 23.9% | 16.2%
Fanzine | 26.9% | 25.2% | 20.3% | 16.1% | 22.1% | 26.9% | 16.1% | 10.8%
Novella | 17.6% | 24.8% | 35.1% | 21.1% | 24.7% | 35.1% | 17.6% | 17.6%
Novelette | 14.5% | 12.1% | 11.3% | 12.9% | 12.7% | 14.5% | 11.3% | 3.2%
Semiprozine | 42.6% | 29.3% | 37.8% | 32.4% | 35.5% | 42.6% | 32.4% | 13.3%
Fan Writer | 23.9% | 21.7% | 21.7% | 13.8% | 20.3% | 23.9% | 13.8% | 10.1%
Fan Artist | 16.7% | 22.6% | 26.1% | 20.6% | 21.5% | 26.1% | 16.7% | 9.4%
Campbell | 18.7% | 13.1% | 20.3% | 16.0% | 17.0% | 20.3% | 13.1% | 7.2%

Notice that far-right column of “Range”: that’s the difference between the high and low in that 4-year period. This big range is going to introduce a lot of statistical noise into the calculations: if I estimate Best Related Work to get 16.6%, I could be off by as much as 5% in some years. I could try to offset this with fancier stat tools, but 4 data points will produce a garbage standard deviation, so I won’t use that. On 300 votes, this 5% error would throw a +/- halo of 15 votes. Significant but not overwhelming.

Okay, now that I have this data, let’s use it to calculate the lower limit of Rabid Puppies:

Chart 2: Calculating Min Rabid Puppy Number from 2016 Swept Categories

Swept Category | Total Votes | #1 % | Min RP
Short Story | 2451 | 0.140275 | 301.52
Related Work | 2080 | 0.166225 | 296.47
Graphic Story | 1838 | 0.2211 | 332.80
Professional Artist | 1481 | 0.310975 | 351.31
Fanzine | 1455 | 0.22125 | 263.60
Average | | | 309.14

Okay, what the hell does this chart say? The Short Story category had 2451 voters this year. In past years, the #1 pick grabbed about 14% of the vote. To beat that 14%, there needed to be at least 302 Rabid Puppy voters. With that number, you get 302 Rabid votes and (2451 – 302) = 2149 non-Rabid votes, which at 14% comes to 301 votes. Thus, the Rabid Puppies would beat all the non-Rabid votes by 1 vote.
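Here’s that calculation as a quick script; it reproduces Chart 2 from the 2016 ballot totals and the 2010-2013 average #1 shares in Chart 1:

```python
# Lower-bound sketch for the RP vote in the swept categories (Chart 2).
# To sweep, the slate total x must beat the likely #1 organic pick:
# x > (total - x) * p, which rearranges to x > p * total / (1 + p).

swept = {  # category: (total 2016 ballots, average #1 share from Chart 1)
    "Short Story":         (2451, 0.140275),
    "Related Work":        (2080, 0.166225),
    "Graphic Story":       (1838, 0.2211),
    "Professional Artist": (1481, 0.310975),
    "Fanzine":             (1455, 0.22125),
}

bounds = {cat: p * total / (1 + p) for cat, (total, p) in swept.items()}
for cat, b in bounds.items():
    print(f"{cat}: min RP = {b:.2f}")
print(f"Average: {sum(bounds.values()) / len(bounds):.2f}")
```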

Now, surely that number isn’t 100% accurate. Maybe the top short story this year got 18% of the vote. Maybe it got 12%. But 300 seems about the line here: if the Rabid Puppies were lower than that, you wouldn’t expect them to sweep.

Keep in mind, this chart just gives us a minimum. Now, let’s do the other limit, using the categories where the Puppies took 4/5. This is uglier, I’m warning you:

Chart 3: Calculating Max Rabid Puppy Number from 2016 4/5 Categories

4/5 Category | Total Votes | #1 % | Max RP
Novella | 2416 | 0.246575 | 477.89
Novelette | 1975 | 0.12665 | 222.02
Semiprozine | 1457 | 0.35505 | 381.76
Fan Writer | 1568 | 0.20265 | 264.21
Fan Artist | 1073 | 0.2151 | 189.95
Campbell | 1922 | 0.170125 | 279.44
Average | | | 302.54

Ugh. Disaster befalls Chaos Horizon. This number should be higher than the last one, creating a nice range. Oh, the failed dreams. This chart is full of outliers, ranging from that huge 477 in Novella to that paltry 190 in Fan Artist. Did someone withdraw from the Fan Artist category, skewing the numbers? If I take that out, it bumps the average up to 325, which fixes my problem. Of course, if I dump the low outlier, I should dump the high outlier, which puts us back in the same fix.

A couple conclusions: the fact that both calculations turned up the 300 number is actually pretty remarkable. We could conclude that this is just about the line: if the Rabid Puppies are much stronger than 300 (say 350), they should have swept more categories. If they’re much weaker (250), they shouldn’t have swept any. 300 is the sweet spot to be competitive in most of these categories, with the statistical noise of any given year pushing some works over, some works not.

It also really, really looks like Novelette and Fan Artist should have been swept. Withdrawals?

To wrap up my estimate, I took the further step of using the 4-year high % and the 4-year low % (i.e. I deliberately min/maxed to model more centralized and less centralized results). You can find that calculation in this 2016 Hugo Nom Calcs sheet. This gives us the range of 250-370 I mentioned earlier in the post. I’d keep in mind that the raw number of Rabid Puppies might be higher than that—this is just the slate effect they generated. It may be that some Rabid Puppies didn’t vote in all categories, didn’t vote for all the recommended works, etc.

There are lots of factors that could skew my calculation: perhaps voters spread the vote out more rather than consolidating it. Perhaps the opposite happened, with voters taking extra care to centralize their vote. Either might throw the estimate off by 50 or even 100.

Does around 300 make sense? That’s a good middle ground number that could dominate much of the voting in downballot categories but would be incapable of sweeping popular categories like Novel or Dramatic Work. I took my best shot, wrong as it may be. I don’t think we’ll do much better with our limited data—got any better ideas on how to calculate this?

2016 Hugo Finalists Announced

The 2016 Hugo Finalists have been announced. Press release here.

The best novel category played out in this fashion:

Ancillary Mercy, Ann Leckie
The Cinder Spires: Aeronaut’s Windlass, Jim Butcher
The Fifth Season, N.K. Jemisin
Seveneves, Neal Stephenson
Uprooted, Naomi Novik

I got 4/5 right here on Chaos Horizon, and Jemisin was the novel I had as #6 on my prediction. I’ll take that in an unpredictable and chaotic year. I also estimated 3620 votes, and the category had 3695 votes, so at least that part was close!

Jemisin making the list means that a surge of extra Hugo voters broke in her direction, pushing her over the combined weight of the Rabid and Sad Puppy vote and Wright’s Somewhither. Given how well the Rabid Puppies performed elsewhere, that means Jemisin performed very well. The Fifth Season also outperformed Jemisin’s previous novels (a weakness of how I model on Chaos Horizon), which may speak to her chances of winning either the Hugo or the Nebula.

The Rabid and Sad Puppies are primarily responsible for the Butcher nomination, and doubtless pushed Stephenson up higher, although Stephenson had a good shot of making it normally. Uprooted appeared high on the Sad Puppy list, and likely picked up voters from that area.

With the exception of Butcher, that looks pretty similar to what I would have predicted the Hugos to be without the Rabid/Sad Puppies. That is certainly not the case lower down the ballot: categories like Best Short Story, Best Related Work, and Best Graphic Story were swept by the Rabid Puppies, and Best Novella and Best Novelette were almost swept. I’ll do some more careful analysis over the next few days, but the main reason this happened is that the large number of voters in Best Novel did not carry over to those categories. We had 3695 Best Novel ballots, but only 2451 Short Story ballots and 2080 Best Related Work ballots. Those missing 1000 voters are the difference between a sweep and a mixed ballot.

My initial thought is that Uprooted will win, as it’s the only novel that seems acceptable to all camps. The typical voters will shoot down the Butcher; the Rabid Puppies will shoot down the Leckie and Jemisin. That leaves Stephenson or Novik. We’ll need to track the dialogue around the Stephenson nomination; if it is deemed a “Rabid Puppy” pick and thus No Awarded, that would seem to clear the path for Novik to win. It’ll take some time for me to sort through the numbers, though.

Perhaps the most interesting categories are Best Novella and Best Novelette, which had 4/5 Rabid Puppy sweeps, with the other slot going to the #1 story on the Sad Puppy list. While those stories—“Binti” and “And You Shall Know Her By The Trail Of Dead”—doubtless picked up support from other quarters (they were Nebula nominees, after all), that shows the Sad Puppies had a noticeable effect on the Hugos. I’ll give some thought to what that means and report back to you!

As I suspected, the “overlaps” did very well: if you appeared on multiple lists (Rabid + Sad Puppies), or were both a Nebula nominee and a Sad Puppy pick, you made the ballot. That may be the key to unraveling the fiction ballots: the Rabid Puppies won unless a work appeared on both the Nebula and Sad Puppy lists. That makes for an odd alliance, with the Sad Puppies possibly being the swing vote against a total Rabid Puppy sweep.

More analysis to come!

MidAmeriCon II Announces Record Number of Hugo Nominating Ballots

MidAmeriCon II issued a press release, tipping their hat as to the number of Hugo nominating ballots:

Kansas City, Missouri, USA – MidAmeriCon II, the 74th World Science Fiction Convention (Worldcon), is delighted to announce that the finalists for the 2016 Hugo Awards, 2016 John W. Campbell Award for Best New Writer, and the 1941 Retro Hugo Awards will be announced on Tuesday, April 26. We are also proud to announce that this year’s number of nomination ballots set a new record.

Science fiction fans around the world will be able to follow the announcement live via MidAmeriCon II’s social media, and celebrate the authors, editors, artists, and works that have been selected as the best of 2015. The finalists will be released category by category, starting at Noon CDT (1 p.m. EDT, 10 a.m. PDT, 6 p.m. London, 7 p.m. Western Europe), through the convention’s Facebook page (www.facebook.com/MidAmeriCon2/) and Twitter feed @MidAmeriCon2

The announcement will begin with the 1941 Retro Hugo Awards then continue with the 2016 Hugo Awards and Campbell Award.  The full list of finalists will be made available on the MidAmeriCon II website directly after the completion of the live announcement, and will also be distributed as a press release to all MidAmeriCon II press contacts.

Over 4,000 nominating ballots were received for the 2016 Hugo Awards, nearly doubling the previous record of 2,122 ballots set last year by Sasquan, the 73rd Worldcon held in Spokane, WA.

The final ballot to select this year’s winners will open in mid-May, 2016, and will be open to all Attending, Young Adult, and Supporting members of MidAmeriCon II. The winners will be announced on Saturday, August 20, at the MidAmeriCon II Hugo Awards Ceremony.

The Hugo Awards are the premier award in the science fiction genre, honoring science fiction literature and media as well as the genre’s fans. The Awards were first presented at the 1953 World Science Fiction Convention in Philadelphia (Philcon II), and they have continued to honor science fiction and fantasy notables for well over 60 years.

For additional information, contact us at press@midamericon2.org.  ENDS

4,000 votes is a huge number, even bigger than I estimated. Nothing drives interest like controversy! Since most of us think the Rabid/Sad influence caps out around 750 votes, that high turnout number pushes us toward a mixed ballot in many categories. I wouldn’t anticipate a full sweep like last year.

We’re done with predicting though, and now we’re just waiting! Just a week or two before we know.

Estimating the 2016 Hugo Nominations, Part 5

Let’s wrap this torturous series of posts up with a few final things.

Over the last few days, I’ve built a series of models to predict the 2016 Hugos based on a number of assumptions. Those assumptions are that voters will vote for the 2016 Hugos in patterns similar to last year’s. That’s an easy assumption to knock, but it gives us a place to start thinking and debating. Here are those posts with estimates: Introduction, Post 2 (Rabid Puppies), Post 3 (Typical Voters), Post 4 (Sad Puppies). I view Chaos Horizon more as a thought experiment (can we get anywhere with this kind of thinking?) than as some definitive fount of Hugo knowledge. The goal of any prediction is to be correct, not elegant.

By breaking these out into three groups and three turnout scenarios (40%, 60%, 80%), I produced 27 different models. To conclude, we can look to see if certain books show up in a lot of models, and then I’ll make that my prediction.

To view the models or create your own, use this Google Worksheet. Instructions are included in the worksheet, but you can cut and paste the data to create your own prediction.

Let’s look at one likely scenario: 80% Rabid Puppy vote, 60% Typical Vote, and 40% Sad Puppy vote. This represents the organization and high turnout of the Rabid Puppies, moderate enthusiasm from the more typical (or new) Hugo voters, and then lower turnout because of the way the Sad Puppy list was built. Here’s what you end up with:

Novel | Rabid Vote | Sad Vote | Typical Vote | Totals
Vote per Group | 440 | 180 | 3000 | 3620
Seveneves | 440 | 108 | 196 | 744
Uprooted | 0 | 144 | 532 | 676
The Aeronaut’s Windlass | 440 | 151 | 30 | 621
Somewhither | 440 | 180 | 0 | 620
Ancillary Mercy | 0 | 65 | 532 | 597
Golden Son | 440 | 0 | 30 | 470
Agent of the Imperium | 440 | 0 | 0 | 440
The Fifth Season | 0 | 0 | 392 | 392
Aurora | 0 | 0 | 392 | 392
Honor At Stake | 0 | 173 | 0 | 173
A Long Time Until Now | 0 | 122 | 0 | 122
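Here’s a small script that does the combination for this scenario; the per-book numbers are the bloc estimates from the table above, with 0 standing in for the blanks. Swapping in other bloc totals generates the other scenarios.

```python
# Combine the three voting blocs for the 80% Rabid / 40% Sad / 60% Typical
# scenario. Per-book counts are the estimates from the table above; 0 means
# the bloc isn't expected to nominate the book.

votes = {  # title: (rabid, sad, typical)
    "Seveneves":               (440, 108, 196),
    "Uprooted":                (0,   144, 532),
    "The Aeronaut's Windlass": (440, 151, 30),
    "Somewhither":             (440, 180, 0),
    "Ancillary Mercy":         (0,   65,  532),
    "Golden Son":              (440, 0,   30),
    "Agent of the Imperium":   (440, 0,   0),
    "The Fifth Season":        (0,   0,   392),
    "Aurora":                  (0,   0,   392),
    "Honor At Stake":          (0,   173, 0),
    "A Long Time Until Now":   (0,   122, 0),
}

totals = sorted(((sum(v), title) for title, v in votes.items()), reverse=True)
for total, title in totals[:5]:  # the five nomination slots
    print(f"{title}: {total}")
```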

So that makes the official 2016 Chaos Horizon Hugo prediction as follows:
Seveneves, Neal Stephenson
Uprooted, Naomi Novik
The Aeronaut’s Windlass, Jim Butcher
Ancillary Mercy, Ann Leckie
Somewhither, John C. Wright

Seveneves makes it in every scenario because it receives votes from all 3 groups. Now, my assumptions could be wrong—perhaps some voters are so angry that Seveneves appeared on the Rabid Puppy list that they won’t nominate it at all. However, even a modest showing for Seveneves among typical voters gets it on the ballot. Remember, Anathem, a similarly complex SF novel by Stephenson, made the ballot just a few years ago.

Uprooted does well in my model because it’s one of the most popular SFF books of the year (as evidenced by its Nebula nomination, its appearance on year-end lists, and its popularity on Amazon and Goodreads), and it picks up votes from both the Typical voters and the Sad Puppies (it’s #4 on their list). This might be the major effect of the Sad Puppies in 2016: to act as a kind of swing vote when things are close, as they’ll likely be between Novik, Jemisin, and Leckie.

Then we have the two other books that overlap between the Sad and Rabid Puppies. You can think of this in two ways: two separate groups voting for these texts, or some Sad Puppies converting to Rabid Puppies. The statistical results are the same. I’d be a little cautious about the John C. Wright. While it placed #1 on the Sad Puppies list, was this placement inflated by passionate Wright fans? Compared to Butcher’s massive popularity, Wright is a fairly niche author. If Puppy support is weaker than predicted, I’d drop the Wright out and replace it with Jemisin’s book. That’s the slot I’m watching closely when nominations come out. I do have Jemisin down as a real possibility; it seems like a lot of readers think The Fifth Season is her best book. I may be underestimating Jemisin based on past performance. The modelling I use is prone to that problem, of leaning on historical data even when conditions on the ground have changed. Every model has its flaws.

Then we have Leckie. Lost in the Hugo controversies is the fact that the Ancillary novels have been some of the best received, reviewed, and rewarded SF novels of the millennium. Take a look at SFADB to see just how well these books have done: 12 major award nominations, 5 major wins, including prior Hugos and Nebulas. If one book is likely to break up a Rabid sweep, this is it.

Of course, things can also go the other way—I may be under-predicting the Rabid Puppies, and if I’m off by around 100 votes, that would push Leckie out and Pierce Brown up.

So that’s it! I’ll update my Hugo prediction page tomorrow when I get a chance. Does a ballot of Stephenson / Novik / Butcher / Wright / Leckie make sense? Is Jemisin or Brown next in line after that? Are there other books that could be major contenders that I’m not seeing?

Predict away!

Estimating the 2016 Hugo Nominations, Part 4

Predicting how the “Sad Puppy” voters are going to nominate in 2016 is the most speculative part of all. The Sad Puppies drastically changed their approach, moving from a recommended slate to a crowd-sourced list. It’s an open question how that change will impact the Hugo nominations.

What we do know, though, is that last nomination season the Sad Puppies were able to drive between 100 and 200 votes to the Hugos in most categories, and their numbers likely grew in the final voting stage; I estimated 450. All those voters are eligible to nominate again; if you figure the Sad Puppies doubled from the nomination stage in 2015 to now, they’d be able to bring 200-400 votes to the table. Then again, their votes might be diffused over the longer list; some Sad Puppies might abandon the list completely; some Sad Puppies might become Rabid Puppies, and so forth into confusion.

When you do predictive modelling, almost nothing good comes from showing how the sausage is made. Most modelling hides behind the mathematics (statistical mathematics forces you to make all sorts of assumptions as well; they’re just buried in the formulas, such as “I assume the responses are distributed along a normal curve”) or black-boxes the whole thing, since people only care about the results. Black-boxing is probably the smart move, as it prevents criticism. Chaos Horizon doesn’t work that way.

So, I need some sort of decay curve for the 10 Sad Puppy recommendations to run through my model. What I decided to go with is treating the Sad Puppy list as a poll showing the relative popularity of the novels. That worked pretty well in predicting the Nebulas. Here’s that chart, listing how many votes each Sad Puppy pick received, as well as its relative % compared to the top vote-getter.

Title | Author | Votes | % of Top
Somewhither | John C. Wright | 25 | 100%
Honor At Stake | Declan Finn | 24 | 96%
The Cinder Spires: The Aeronaut’s Windlass | Jim Butcher | 21 | 84%
Uprooted | Naomi Novik | 20 | 80%
A Long Time Until Now | Michael Z. Williamson | 17 | 68%
Seveneves | Neal Stephenson | 15 | 60%
Son of the Black Sword | Larry Correia | 15 | 60%
Strands of Sorrow | John Ringo | 15 | 60%
Nethereal | Brian Niemeier | 13 | 52%
The Discworld | Terry Pratchett | 11 | 44%
Ancillary Mercy | Ann Leckie | 9 | 36%

What this says is that for every 100 votes the Sad Puppies generate for John C. Wright, they’ll generate 36 votes for Ann Leckie. I know that stat is suspect because not everyone who voted on the Sad Puppy list was a Sad Puppy, and because the numbers are so small that it was easy for one person to get boosted up the list by a small group of fans. Still, this gives us something. What I’ll do is plug this into my chart of 40%, 60%, and 80% turnout using the 450 Sad Puppy estimate to come up with:

Sad Puppies
Scenario | 40% | 60% | 80%
Voters | 180 | 270 | 360
Ancillary Mercy | 65 | 97 | 130
Uprooted | 144 | 216 | 288
The Fifth Season | 0 | 0 | 0
Aurora | 0 | 0 | 0
Seveneves | 108 | 162 | 216
Golden Son | 0 | 0 | 0
Somewhither | 180 | 270 | 360
The Aeronaut’s Windlass | 151 | 227 | 302
Agent of the Imperium | 0 | 0 | 0
Honor At Stake | 173 | 259 | 346
A Long Time Until Now | 122 | 184 | 245
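Here’s the decay-curve calculation as a short script; it reproduces the chart above from the raw poll counts (only the books with Sad Puppy poll votes are included):

```python
# Decay-curve sketch: each book's share of the top vote-getter's poll count,
# multiplied by the assumed Sad Puppy turnout for each scenario.

poll = {  # raw votes from the Sad Puppy list
    "Somewhither": 25,
    "Honor At Stake": 24,
    "The Aeronaut's Windlass": 21,
    "Uprooted": 20,
    "A Long Time Until Now": 17,
    "Seveneves": 15,
    "Ancillary Mercy": 9,
}

top = max(poll.values())  # Somewhither's 25 votes = 100%
for label, voters in [("40%", 180), ("60%", 270), ("80%", 360)]:
    print(f"-- {label} turnout ({voters} voters) --")
    for title, v in poll.items():
        print(f"{title}: {round(voters * v / top)}")
```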

Does this make any sense? I’m sure many will answer no. But look closely: could the remnants of the Sad Puppies, no matter how they’re impacted by the new list format, generate 150-300 votes for Jim Butcher this year? I find it hard to believe that they couldn’t produce that number. Remember, Butcher got 387 votes last year in the nomination stage. Some of that was Rabid Puppies (maybe up to 200), but where did the rest come from? And will all the Sad Puppy votes for Butcher vanish in just a year?

How about that Somewhither number—is it too big? This could also model some Sad Puppies being swayed over to the Rabid Puppy side, as would the Seveneves number. The Novik and Leckie numbers could represent the opposite happening: Sad Puppies who joined in 2015 and are now drifting over to more mainstream picks and choices. I think I’d go conservative with this, staying in the 40% band to model the dispersion effect.

So now I have predictions for each of the 3 groups. If I combine those, I get 27 different models. Each model may be flawed in itself (overestimating or underestimating a group), but when we start looking at trends that emerge across multiple models, that’s where this project has been heading. In predictive modelling, normally you make the computers do this and you hide all the messy assumptions behind a cool glossy surface. Then you say “As a result of 1,000 computer simulations, we determined that the Warriors will win 57% of the time.” For the record, the Chaos Horizon model now says the Warriors will win 100% of the time and that Steph Curry will be nominated for Best Related Work.

We could go on and do 100 more models based on different assumptions and see if trends keep emerging. This kind of prediction is messy, unsatisfying, and flawed, and the more you actually understand the nuts and bolts behind it, the more it makes you doubt predictive modelling at all. Of course, the only thing worse would be if predictive modelling was 100% (or even 90% or 80%) accurate. Then we’d know the future with 100% accuracy. Come to think of it, wouldn’t that make for a good SF series . . . Better get Isaac Asimov on the phone. Maybe I should argue that this series is eligible for “The Best SF Story of 2017” Hugo.

Tomorrow we’ll start combining the models and see if anything useful emerges.
