Archive | 2015 Hugo Award

2015 Hugo Analysis: Nominating Stats, Part 2

There are a few last pieces of data I want easy access to before I move on from 2015. It’s been quite a year, and I think 2016 is going to be even more chaotic.

Not only do we have the specter of further Hugo voting kerfuffles, but 2015 is one of the stronger award years in recent memory. Of the last 7 Hugo Best Novel winners, 5 published new novels in 2015. Bacigalupi, Walton, Scalzi, Cixin Liu, and Leckie are all back at the party with well-received works—and that’s not including past Hugo heavy-hitters like Neal Stephenson (I think Seveneves will be a major player) and Kim Stanley Robinson (Aurora likely for a Nebula nom?).

That’s 7 books already . . . and you have to think Pratchett will grab plenty of sentimental votes for the final Discworld novel. Then we’ve got a whole range of other authors with competitive novels in 2015: George R.R. Martin’s A Knight of the Seven Kingdoms (I don’t know whether this is eligible or not as a novel, but if it is, watch out), Ken Liu’s debut Grace of Kings, Brandon Sanderson’s Shadows of Self (without the Puppies, he was awfully close to breaking through for a nom this year; see the stats below), N.K. Jemisin’s The Fifth Season (1 Hugo Best novel nom and 3 Nebula Best novel noms for Jemisin in the last 5 years), the list goes on and on. And I haven’t even mentioned potential breakout novels this year, like Naomi Novik’s Uprooted. Throw in the chaos of various Puppy picks, and you’ve got a very murky year coming up. I better roll up my sleeves and get to work!

So there are two sets of numbers I want to look at today: sweep margin in various categories and the Best Novel nominations with the Puppy picks removed. Both will give us some good insight into what might happen next year. Once again, I’m using the official Hugo stats for this info, which can be accessed here: 2015HugoStatistics.

If we look at categories where the Puppies, Sad and Rabid combined, had at least 5 picks, we can calculate something I’m calling the “sweep margin.” Basically, we subtract the highest non-Puppy pick’s nominating number from the Puppy #5 number. This tells us how close the category came to getting swept. If the number is greater than 0, that means the category was swept, and we know how many votes it would have taken the non-Puppies to break up that sweep. If the number is less than zero, that means the category wasn’t swept. Note that I’m not taking withdrawals into account here; I just want to look at the raw numbers. Here’s the table:
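The calculation is simple enough to sketch in a few lines of Python. This is a minimal illustration using the 2015 Short Story tallies quoted in these posts (all seven Puppy numbers and the Vernon number appear below):

```python
# Minimal sketch of the "sweep margin": the Puppy #5 tally minus the
# highest non-Puppy tally. Positive means the category was swept.

def sweep_margin(puppy_votes, top_non_puppy):
    puppy_5th = sorted(puppy_votes, reverse=True)[4]  # 5th-highest Puppy pick
    return puppy_5th - top_non_puppy

# Short Story, 2015: the seven Puppy tallies (#5 works out to "Turncoat"
# at 162) vs. the top non-Puppy story, "Jackalope Wives," at 76.
print(sweep_margin([230, 226, 185, 184, 162, 151, 132], 76))  # -> 86
```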

Table #1: Sweep Margin for 2015 Hugos, Selected Categories

Sweep Margins

A little hard to parse, I know. Let’s think about Novella first: this chart tells us that Patrick Rothfuss’s The Slow Regard of Silent Things just missed the ballot by a mere 21 votes. So the Puppy sweep in Novella wasn’t particularly dominant; just a few more voters and one non-Puppy would have made it in. Does that mean a category like Novella might not be swept next year?

Look at another category, though, like Short Story. Since there are more short stories published in any given year, the vote is usually more spread out, and this time the Puppies were much more dominant. They actually had seven stories that placed above every non-Puppy pick. Two eventually declined, Bellet and Grey, but I’m not tracking that in this study. If we look at the #5 Puppy story, “Turncoat” by Steve Rzasa, it got 162 votes. The highest non-Puppy story, “Jackalope Wives” by Ursula Vernon, only got 76 votes. That’s a hefty margin of 86 votes—Vernon would have had to double her vote total to make it into the field (before Bellet and Grey declined, but you can’t count on people declining). Possible? Sure. I think the Hugo nomination vote can double next year—but the Puppy vote will also increase. Depending on Puppy strategy, it’s very possible that Short Story or Best Related Work will be swept next year.

I’m most interested in Best Novel here at Chaos Horizon; it’s the area of SFF I know best, and what I’m most interested in reading and predicting. Despite all the Puppy picks, Leckie was safely in the 2015 field: she placed 3rd in the raw nomination stats, above 5 different Puppy picks. Even Addison’s The Goblin Emperor (256 votes) and Cixin Liu’s The Three-Body Problem (210) beat two Puppy picks, Trial by Fire (199) and The Chaplain’s War (196). Those two picks, Gannon for Sad and Torgersen for Rabid, are examples of the Sad and Rabid picks not overlapping. The Best Novel category received enough attention, and enough votes, and was centralized enough, that the non-Puppy voters were able to overcome the Puppy votes when they didn’t team up. I think that’s a key piece of evidence for next year’s prediction: when the Sad and Rabid picks overlap, they’ll be very strong contenders. If they’re separate, I don’t think they’ll be able to beat the motivated Hugo nominators. We’ll see, of course.

That leaves out one obvious point—if the Sad/Rabid picks overlap with something already in the mainstream, that will definitely boost the Sad/Rabid chances.

Last thing is to look at the nomination stats with the Puppies taken out. I need these for my records, because they’ll allow me to do year-to-year comparisons for authors. What I’m going to do is subtract all the Puppy picks and then recalculate the percentages. Skin Game got 387 votes, so I’m just going to brute force subtract 387 from the 1827 total ballots and recalculate. Were all of Butcher’s votes from the Sad Puppies? Probably not, but Butcher doesn’t have a strong history of vote-getting in the Hugos. By erasing those 387 votes, I’ll restore the percentages to what they might have looked like otherwise, which will help for my year-to-year tracking. I like to ask questions like: is Sanderson getting more or less popular?
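As a sketch of that adjustment (assuming, as above, that all 387 Skin Game ballots were Puppy ballots):

```python
# Strip the estimated Puppy ballots out of the 2015 Best Novel
# nominating pool, then recompute a work's percentage.

total_ballots = 1827        # 2015 Best Novel nominating ballots
puppy_ballots = 387         # Skin Game's tally, treated as all-Puppy

adjusted_total = total_ballots - puppy_ballots  # 1440 non-Puppy ballots

def adjusted_pct(votes):
    return round(100 * votes / adjusted_total, 1)

# The Goblin Emperor's 256 nominations as a share of non-Puppy ballots:
print(adjusted_pct(256))  # -> 17.8
```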

Here’s that chart:

Table 2: Estimated Hugo Best Novel Nomination %s without Puppy Votes

Best Novel Nominating %

Fun little note: they misspelled Katherine Addison’s name as Katherine Anderson in the Hugo nominating stats. It’s incredibly hard to enter all the data correctly when you’ve got so many data-points! EDIT 9/13/15: The error has been fixed! Check the comments for the full story.

Those percentages make a lot of sense. Leckie grabbed 23.1% for Ancillary Justice in 2015, and I don’t think the sequel was quite as popular.

A couple things of note: Weir probably would have been ineligible, and many Hugo voters knew that. If that wasn’t the case, I expect he would have easily made the top 5. VanderMeer is much lower than I would have expected. Walton did very well (8th) for what was an experimental novel; that means The Just City might have a shot this year. Sanderson in 7th place has been moving steadily up in these Hugo nomination stats; he managed only 4.2% for Steelheart last year. Will his return to his very popular Mistborn universe be enough? I’m still going to predict him just outside the Top 5, but it looks like it’s just a matter of time. Okorafor is actually eligible again in 2016 (Lagoon only just came out in the United States). There’s also a lot of experimental-ish fantasy on the list (Addison, Bennett, Hurley); that might speak well of Novik or Bear’s chances in 2016.

Well, that brings to an end my 2015 Hugo analysis! It’s been quite a year. I’m going to spend the next several weeks doing Review Round-Ups of the big contenders for 2016, and I should have my first 2016 Hugo/Nebula predictions in early October.

2015 Hugo Analysis: Nominating Stats, Part 1

Time to dig into the nomination stats. Since Chaos Horizon is a website dedicated to award predictions, this is data we really need—2015 is going to be our best model for 2016, after all.

Let’s tackle this in a methodical and organized fashion. The 2015 nominating stats are included as part of the 2015 Hugo packet, easily available at the Hugo website or right here: 2015HugoStatistics. The first thing we can do is go back to the Sad Puppy and Rabid Puppy slates and see how many votes each of those texts got. I’ve divided this into three lists: joint Sad/Rabid selections, Sad selections, and Rabid selections.

Joint Sad and Rabid Picks, Number of Nominations in 2015 Hugos:
Novel
263 The Dark Between the Stars
387 Skin Game
372 Monster Hunter Nemesis
270 Lines of Departure

Novella
292 Flow
338 One Bright Star to Guide Them
338 Big Boys Don’t Cry

Novelette
259 The Journeyman
248 The Triple Sun
267 Championship B’tok
266 Ashes to Ashes

Short Story
230 Goodnight Stars
184 On a Spiritual Plain
226 Totaled

Best Related
206 Letters from Gardner
273 Transhuman and Subhuman
254 The Hot Equations
236 Wisdom from my Internet
265 Why Science is Never Settled

Best Graphic Story
201 Reduce Reuse Reanimate

Dramatic Long
314 Lego Movie
769 Guardians of the Galaxy
489 Interstellar
170 The Maze Runner

Dramatic Short
169 Grimm “Once We Were Gods”
170 The Flash “Pilot”

Editor Long
368 Toni Weisskopf
276 Jim Minz
238 Anne Sowards
292 Sheila Gilbert

Editor Short
236 Jennifer Brozek
217 Bryan Thomas Schmidt
279 Mike Resnick
228 Edmund Schubert

Professional Artist
173 Carter Reid
160 Jon Eno
188 Alan Pollack
181 Nick Greenwood

Semiprozine
229 InterGalactic Medicine Show

Fanzine
181 Tangent
208 Elitist Book Reviews
187 Revenge of Hump Day

Fancast
179 Sci Phi Show
158 Dungeon Crawlers Radio
169 Adventures in SF Publishing

Fan Writer
150 Matthew Surridge
156 Jeffro Johnson
175 Amanda Green
201 Cedar Sanderson

Campbell
229 Jason Cordova
224 Kary English
219 Eric S. Raymond

If we toss out the Best Dramatic, Long Form as an outlier (the stat numbers are way high, indicating that far more than just the Rabid + Sad Puppies voted for Guardians of the Galaxy, as anyone would predict), we wind up with the following range:

387-150. That takes us from the most popular pick to the least popular choice (Skin Game by Jim Butcher in Novel, down to Matthew Surridge in Fan Writer). That’s the “effective joint Sad/Rabid Puppy vote,” or how many votes the Puppies delivered to the 2015 Hugo nomination process. That wide range reflects two things: the lack of popularity of categories like Fan Writer, and lack of slate discipline (not every Puppy voter voted for all the works on the slate). To illustrate how some people didn’t follow the slate, look at Best Novel:

387 Skin Game
372 Monster Hunter Nemesis
270 Lines of Departure
263 The Dark Between the Stars

All four are joint Rabid/Sad picks, but Skin Game and Monster Hunter Nemesis grabbed 100 more votes than the Kloos or Anderson books. That means at least 25% of these voters were picking and choosing from the slate, not voting it straight down the line.

A couple of numbers to parse: how do we know Skin Game (or any other nominee) didn’t pick up some non-Puppy voters? We don’t know that for sure, but we can look at the 2009 Hugo nominating stats for reference. That’s the last year where they released the complete list of everyone who got a vote. Small Favor, Butcher’s Dresden Files #10, only got 6 votes that year. Now, this year’s pool is bigger, and maybe people liked Skin Game more, but that looks like a relatively trivial number to me. Your mileage may vary.

On the flip side, how do we know that every Puppy voter voted for Skin Game? Again, we don’t know for sure—there could have been 500 Sad Puppies, and only 80% of them voted for Butcher. In this case, I don’t think it matters. We’re looking at “effective” strength: this is how many votes the Puppies actually delivered in the categories, not a potential estimate of their max number. The actual number of votes is what is useful in my predictions.

Conclusion: So, Chaos Horizon is concluding that the effective Sad/Rabid combined block vote was 387-150, with sharp decay by both the popularity of the chosen work and the popularity of the category. I think that number can explain some of the vitriol in the field: of the 387 people who voted for Skin Game, at least 200 of them didn’t vote all the way to the bottom of the slate. More people only voted part of the slate than voted the whole thing—thus opening up the door for all kinds of online arguments as to exactly how “slate”-like this whole thing was. Expect those to continue as we move into Sad Puppies 4.

On to the Sad Puppy picks. When all was said and done, the Sad Puppies only had a few picks that were not mirrored by the Rabid Puppies (8, in fact), so we’ll learn far less here.

Sad Puppy Picks, Number of Nominations in 2015 Hugos:
Novel
199 Trial by Fire

Short Story
132 A Single Samurai
185 Tuesdays With Molakesh the Destroyer

Dramatic Short
41 Adventure Time “The Prince Who Wanted Everything”
n/a Regular Show “Saving Time” (didn’t make the top 15)

Semiprozine
111 Abyss & Apex
100 Andromeda Spaceways In-Flight Magazine

Fan Writer
132 Dave Freer

If we toss out the “Dramatic Short” category as an obvious outlier (the Sad Puppy voters didn’t seem to like picking cartoon shows in that category, as “Regular Show” didn’t even make the top 15), we wind up with this as a range:

199-100. I think the Trial by Fire number (at 199) is a little inflated; Gannon did grab a Nebula nom for this series in both 2014 and 2015, and I expect he picked up a fair amount of votes outside the Puppy blocks. That 185 number for “Molakesh” might be the more solid estimate of the max Sad Puppy core; that story is from Fireside Fiction, a rather obscure venue. Neither Andromeda Spaceways nor Abyss & Apex placed in the Top 15 in the 2014 Hugos, and the cutoff there was a mere 10 votes, so I think we can attribute the lion’s share of those votes to the Sad Puppies.

Conclusion: We only have 8 data points here, but we’ve got a 199-100 range, with the top end only happening in popular categories (Novel, Short Story). That’s a 50% difference from the highest voted to the lowest voted, perhaps suggesting that only 50% of the Sad Puppy voters voted straight down the slate. You could get that number even lower, though, if you counted the television shows that not even the Sad Puppies voted for.

Rabid Puppy Picks, Number of Nominations in 2015 Hugos:
Novel
196 The Chaplain’s War

Novella
172 The Plural of Helen of Troy
145 Pale Realms of Shade

Novelette
165 Yes, Virginia, There is a Santa Claus

Short Story
162 Turncoat
151 The Parliament of Beasts and Birds

Dramatic Long
100 Coherence

Dramatic Short
141 Game of Thrones “The Mountain and the Viper”
86 Supernatural “Dog Dean Afternoon”

Editor Long
166 Vox Day

Editor Short
162 Vox Day

Professional Artist
118 Kirk DouPonce

Fanzine
119 Black Gate

Fan Writer
66 Daniel Enness

Campbell
143 Rolf Nelson

A couple interesting outliers here. The Dramatic Television category seems strange; you’d have to imagine that more people voted for Game of Thrones than just the Rabid Puppies, and Supernatural only picked up a scant 86 votes. Even the Rabid Puppies didn’t follow VD’s instructions in Fan Writer, only voting 66 times for Daniel Enness. I think the most sensible explanation is that Rabid Puppy voters didn’t follow the recommended picks in these categories. If you get rid of those 3 outliers, you end up with a very tight grouping of:

196-100. The Torgersen number is probably inflated by Sad Puppies; even though he didn’t include himself on his own list, I can imagine some Sad Puppies coming over to vote for him. He’d also had a prior Hugo nomination outside the Puppy process. The tightly grouped Vox Day numbers (166 and 162) might be an equally sensible top number for the Rabid Puppies group. We’re only talking about a 20-30 vote difference, though, and we’d be splitting hairs. I’m a stat site, though, so if you want to split hairs, go ahead!

Conclusion: 196-100 seems safe, and not even the Rabid Puppies had perfect slate discipline. This surprised me, although I could probably be persuaded there was a core group of 166 (Vox Day’s editor nom) to 119 (the Fanzine/Professional Artist number) of Rabid Puppies that did stick pretty closely.

So that leaves us:

Nomination Estimates, Sad, Rabid, and Joint Puppy Picks (percentage calculated using 1595 total nominating ballots):
Joint: 387-150; 24.2% – 9.4%
Sad Puppy: 199-100; 12.5% – 6.3%
Rabid Puppy: 196-100; 12.2% – 6.3%

Let’s double-check the math. If we add the Rabid and Sad picks together, we wind up with 395-200. The joint range is 387-150. Obviously, that top number looks great; those 8 extra votes would seem to fall within the margin of other votes Skin Game is likely to have picked up. 200 and 150 are quite a bit farther apart, but this might reflect the limited data set we have for Sad Puppy picks alone (8 data points) and Rabid Puppy picks alone (15) compared to joint Sad/Rabid picks (52). Some of the joint picks may have been unappealing to both the Sad and Rabid voters, as well as being in categories with low voter turnout (Fan Writer, Fancast, etc.). Take a look at this chart, showing how quickly the various voting groups decayed (excluding Best Drama, for the reasons stated above):

Chart 1 Nomination Study

The chart just lines up the most popular pick to the least popular pick to take a look at the decay curve. Rabid and Joint alike fell off very quickly and then evened out. I think that reflects how much more popular the Best Novel category is than the rest of the Hugos. In 2015, it pulled in almost twice as many votes as the other fiction categories. Sad Puppies fell quickly the whole way down, but I don’t know if that reflects a greater variance amongst Sad Puppy voters or just a lack of data.
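The range cross-check from the double-check paragraph above is easy to verify directly:

```python
# Sum the separate Sad and Rabid ranges and compare to the joint range.

sad   = (199, 100)   # (high, low)
rabid = (196, 100)
joint = (387, 150)

summed = (sad[0] + rabid[0], sad[1] + rabid[1])
print(summed)                # -> (395, 200)
print(summed[0] - joint[0])  # top-end gap: 8 votes
print(summed[1] - joint[1])  # bottom-end gap: 50 votes
```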

What does this all mean? That’s the big question. What it means for Chaos Horizon is that I can use these ranges and totals as I put together my 2016 prediction. The max number of nominating ballots was in Best Novel, where 1595 were cast; 5950 voted in the Hugo finals, an increase of almost 3.75x. According to my previous analysis, here’s my final Puppy estimates:

Core Rabid Puppies: 550-525 (9.2% – 8.9%, using 5950 total votes for percentage)
Core Sad Puppies: 500-400 (8.4% – 6.7%, using 5950 total votes for percentage)
There are also some Puppy inclined neutrals; I’m not including them, because I don’t know if they’ll follow the Puppies into the nomination stage.

Those percentages are a little down from the nominating ballot, but not aggressively so. That’s what you would expect: the Puppies had the advantage of surprise in the nomination stage, while the push-back against them came in the final balloting. Much of the growth in the final ballot was from people wanting to vote specifically against the slates.

Boil this all down, and we now have a set of numbers to use in future predictions. In my next nominating analysis, I’ll be looking at how big the sweeps were for each category. With that data in place, I can then predict whether or not there will be sweeps (or in which categories) in 2016.

2015 Hugo Analysis: Category Participation

My posts have been slow these last weeks—my university is starting its Fall semester, and I’ve been getting my classes up and running. I’m teaching Kurt Vonnegut’s Slaughterhouse-Five, Nathaniel Hawthorne’s The Scarlet Letter, and Mark Twain’s Adventures of Tom Sawyer across my three classes this week. Strange combination. Toni Morrison, Edith Wharton, and Benjamin Franklin next week!

Today, I want to look at category participation: how many people voted in each of the Hugo categories. Historically, there has been a very sharp drop off from the Best Novel category (which has around 85% participation) to the less popular categories like Fan Writer, Fanzine, and the Editor Categories (usually around the 40%-45% range). The stat we’re looking at today is what percentage of people who turned in a ballot voted for that particular category.

There are a lot of categories in the Hugos, and it’s unlikely that every fan engages in every category with the same intensity. The stats show that, but the 2015 controversy changed some of those patterns in interesting ways.

Let’s take a look at a big table of data from 2011-2015. I pulled the numbers directly from the last five years of Hugo packets, and what my table shows is the number of total ballots and the number of votes in each category. Divide those by each other, and you get the percentage participation in each category. Notice how skewed 2015 is compared to the other numbers; we had a total change in voting patterns this year. Click the table to make it larger:

Table 1: Participation in Final Ballot Hugo Categories, 2011-2015

Hugo Participation, 2011-2015

A lot of numbers, I know. Let’s look at that visually:

Percentage of Voters

That’s a very revealing chart. Ignore the top turquoise line for the moment; that’s 2015. The other four lines represent 2011-2014, and they’re pretty consistent with each other. Participation across categories declines until the Dramatic Presentations, then it declines again, then spikes at Best Professional Artist (who knew), before declining again. Best Fancast began in 2012, messing up the end of the 2011 line in the chart.

Historically, the ballot plunges from 85%-90% for Novel down to 40%. Even the major fiction categories (Novella, Novelette, and Short Story) manage only about 75% participation. Those declines are relatively consistent year to year, with some variation depending on how appealing the category is in any given cycle.

Now 2015: that line is totally inconsistent with the previous 4 years. Previously ignored categories like Editor grabbed an increase of 30 points—there’s your visual representation of how the Puppy kerfuffle drove votes. Thousands of voters voted in categories they would have previously ignored. I imagine this increase is due to both sides of the controversy, as various voters are trying to make their point. Still, 80% participation in a category like Editor, Short or Long Form is highly unusual for the Hugos. Even the Best Novel had a staggering 95% participation rate, up from a prior 4 year average of 87.4%.

Not every category benefitted in the same way. Let’s see if we can’t chart that increase:

Table 2: Increases of 2015 Hugo Participation Over 2011-2014 Participation Averages
Table 2 Increases

The last three columns are key. I averaged the 2011-2014 stats, and then looked to see how much they increased in 2015. If you take that absolute value (i.e. 80% to 90% is a 10 point increase), you can then calculate the percentage increase (divide it by the average value). That shows us which categories had the biggest relative boosts. Best Novel only increased slightly. Categories like Editor, Short, Editor, Long, and Fan Writer had huge relative boosts. The categories with little controversy, such as Fan Artist, didn’t enjoy the boosts other categories saw. A visual glance:
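Those two columns reduce to a couple of lines of arithmetic. The 45% to 80% example below is illustrative, not a figure from the table:

```python
# Absolute (point) increase vs. relative (%) increase over the
# 2011-2014 average participation.

def boosts(avg_2011_14, pct_2015):
    absolute = pct_2015 - avg_2011_14          # increase in points
    relative = 100 * absolute / avg_2011_14    # increase relative to the average
    return absolute, relative

abs_pts, rel_pct = boosts(45.0, 80.0)
print(abs_pts)            # -> 35.0 points
print(round(rel_pct, 1))  # -> 77.8 percent relative increase
```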

Chart 2 Percentage Increase

So, what did we learn from all this? That there was a Hugo controversy in 2015, and that it drove huge increases in participation to categories that had previously been ignored. I think we knew that already, but it’s always good to have the data.

Here’s my Excel file with the numbers and the charts: Participation Study. All data on Chaos Horizon is open, and feel free to use it in any way you wish. Please provide a link back to this post if you do. Otherwise, you might want to check out my similar “Hugo Nomination Participation” study, which looks at the same data but in the nominating stage. It’s linked under the “Reports” tab.

Up next: the 2015 nominating numbers!

2015 Hugo Analysis: Best Novel

The controversial 2015 Hugos have been given, and now it’s time to sift through the stats to see what we can learn. A couple preliminaries before we dive into the Best Novel category:

Here are the stats themselves: 2015HugoStatistics. I believe they are going to release the anonymous ballots at a later date so people can run simulations on them, but I don’t know when or how they’re going to do that.

If you’re not familiar with the tabulating process, this post from Staffer’s Book Review does a good job explaining the basics.

I know emotions are highly charged around the 2015 Hugos, but Chaos Horizon is a stat-driven site, and I try to hit the middle of the road in my estimates. This angers some commentators, who want me to minimize or maximize my estimates to bolster/attack a certain side. Chaos Horizon is not the frontline of the Hugo wars; we’re the room in the back where they do the autopsies. Feel free to question/interrogate the numbers as vigorously as you want, but keep the sniping and arguing about politics to a minimum.

Lastly, the analysis can only analyze behavior, not intent. If someone votes Butcher / Anderson one-two in the Best Novel, that’s behaving like a Sad Puppy. If you follow Vox Day’s suggestions, that’s behaving like a Rabid Puppy. If you vote all the Puppy picks below “No Award,” that’s behaving like a No Awarder. I can’t tell you why an individual voter did those things. I think we all know there’s a range of reasons someone might vote that way, and it’s never fun to get lumped into a group. That’s what voting does, though; it turns the individual into a list of numbers on the page. We’ll never be able to precisely “know” the reasons behind these numbers; the best we can do is have some rough estimates. Depending on your tolerance, you may want to add a “haze” around each of my estimates of 10-25%.

So, let’s get started. Here are my rough estimates from earlier in the week:

Core Rabid Puppies: 550-525
Core Sad Puppies: 500-400
Neutrals: 1400 (voted some Puppies, but not all)
Primarily No Awarders But Considered a Puppy Pick: 1000
Absolute No Awarders: 2500

That “Neutrals” number is the most unsatisfying. Let’s see if we can’t refine it today.

So let’s look at the initial pass of the Best Novel Results in the Race for Position 1:
1691: Three-Body Problem
1515: Goblin Emperor
1054: Ancillary Sword
874: Skin Game
268: No Award
251: Dark Between the Stars

5653 total votes. A couple immediate facts: 5950 voted for the Hugos in total, meaning 300 people sat out the Best Novel category. 95% of voters voted here. Around 3% voted No Award over everything, which is in line with past years. That means 8% checked out of this category totally, or around 550 voters. Those are the true neutrals, I guess.
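Those first-pass figures check out if you add them up (a quick verification in Python, using the six tallies listed above):

```python
# Sum the Race for Position #1 tallies and compare to total Hugo voters.

first_pass = {
    "Three-Body Problem": 1691,
    "Goblin Emperor": 1515,
    "Ancillary Sword": 1054,
    "Skin Game": 874,
    "No Award": 268,
    "Dark Between the Stars": 251,
}

total = sum(first_pass.values())
print(total)                         # -> 5653 ballots in Best Novel
print(round(100 * total / 5950, 1))  # -> 95.0 percent participation
```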

Vox Day, leader of the Rabid Puppies, suggested Three-Body Problem for first place. Is that number 525? Let’s confirm right now. Since Three-Body Problem won, it was eliminated immediately for the race for position #2. That means it was crossed off the ballot, and all of those first place votes went to the second name on the list. Since VD suggested Butcher, we should see an increase of around 500.

What do we see? 1264 Butcher votes. Subtract the first pass Butcher total 874, and we get 390 votes. That’s a little lower than expected, but not horribly so. It may be that not all the RP followed VD’s first place suggestion, or that some didn’t follow his second place selection. Anderson picked up (314-251) 63 votes from the Cixin Liu voters, which puts us back to around 450 Puppies who voted for Liu in round #1, then moved to a Puppy pick in Round #2.
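The transfer arithmetic above, spelled out (all numbers are from the official stats quoted in this post):

```python
# When a finalist is eliminated, another finalist's gain between passes
# equals the number of ballots that transferred to it.

butcher_round1 = 874    # Skin Game, Race for Position #1
butcher_round2 = 1264   # Skin Game after Three-Body Problem is crossed off

liu_to_butcher = butcher_round2 - butcher_round1  # 390 Liu-first ballots
liu_to_anderson = 314 - 251                       # 63 went to Anderson instead

# Liu-first ballots that moved to a Puppy pick in round #2:
print(liu_to_butcher + liu_to_anderson)  # -> 453
```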

So I think we can estimate that at least 450 Puppy voters voted for Liu #1 in the Best Novel category. Since that novel beat Goblin Emperor by 200 for Best Novel, we can reaffirm that it was the Puppy voters who drove Liu to the win.

Now, let’s estimate the total number of Puppy voters. 874 voted for Skin Game #1, followed by 251 for Dark Between the Stars. That totals 1125, which we can think of as a rough estimate of the “maximum Sad Puppy vote,” or the number of people who could look past the controversy to vote a Sad Puppy pick #1. That group will consist of core Sad Puppies (those who voted all the Puppy picks above No Award) and neutrals who liked Anderson/Butcher better than any of the non-Puppy picks. Remember this 1125 doesn’t include the 450 Rabid Puppies who voted for Liu #1.

So we can say that we have a maximum Puppy vote of 1575, including Rabid Puppies, Sad Puppies, and Puppy-leaning neutrals. I think we can confirm that number by looking at some other categories. Take the max number of people in the swept categories (Novella, Short Story, etc.) who had a preference above “No Award.” I find that by looking at the Race for Position #5. Those numbers are 1476 (Novella), 1885 (Short Story), 1527 (Editor Short), 1769 (Editor Long), 1624 (Best Related Work). While there is some variation in those categories, I think 1600 looks like a good central estimate, perhaps with a +/- 200 around it. Again, these are rough, but I think they better reflect the data than my initial estimate.
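Averaging the five “Race for Position #5” totals gives roughly that central estimate:

```python
# Central estimate of the maximum Puppy-or-Puppy-leaning vote, taken as
# the mean of the Race for Position #5 totals in the swept categories.

race_for_5 = {
    "Novella": 1476,
    "Short Story": 1885,
    "Editor Short": 1527,
    "Editor Long": 1769,
    "Best Related Work": 1624,
}

mean = sum(race_for_5.values()) / len(race_for_5)
print(round(mean))  # -> 1656, in line with the 1600 +/- 200 estimate
```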

So what does that mean? It means I should refine my neutral number:

Core Rabid Puppies: 550-525
Core Sad Puppies: 500-400
Sad Puppy leaning Neutrals: 800-400 (capable of voting a Puppy pick #1)
True Neutrals: 1000-600 (may have voted one or two Puppies; didn’t vote in all categories; No Awarded all picks, Puppy and Non-Alike)
Primarily No Awarders But Considered a Puppy Pick above No Award: 1000
Absolute No Awarders: 2500

That looks like a slightly more reasonable and refined range.

So let’s look at what happened after the first pass. Dark Between the Stars was eliminated, yielding the following numbers:
1727 (+36): Three-Body Problem
1544 (+29): Goblin Emperor
1082 (+28): Ancillary Sword
1004 (+130): Skin Game
271 (+3): No Award

The + number in parenthesis is the number of votes added in this pass.

Roughly half of KJA’s vote went to Butcher, and the other half equally split to the three other candidates. I think that’s a great indication that half the Anderson vote was core Sad Puppy, half was neutral.

No Award was eliminated next.

1758 (+31): Three-Body Problem
1565 (+21): Goblin Emperor
1117 (+35): Ancillary Sword
1013 (+9): Skin Game

These are voters who had No Award on their ballot and then some works listed after it. A lot of these No Award ballots (175) went to No Preference, indicating those voters didn’t have anything listed after No Award.

Now it gets interesting. Skin Game gets cut next, and those 1013 votes are redistributed. Let me add No Preference in as well:

2162 (+404): Three-Body Problem
1814 (+249): Goblin Emperor
1240 (+123): Ancillary Sword
437 (+237): No Preference

We had a huge surge to Three-Body Problem from the Skin Game voters, with half of those (the core Sad Puppies) choosing Liu over Leckie or Addison. Don’t forget those 237 who checked out completely, moving from Preference to No Preference. That’s a pretty compelling stat: 641 Butcher voters wanted Liu or nobody. If you’re looking for a core number on the Puppy side who were “protesting” the more mainstream Hugo picks, this might be it.

Next, Leckie goes by the wayside. If my analysis is correct, more voters should go to Addison than Liu.

2649 (+487): Three-Body Problem
2449 (+635): Goblin Emperor
555 (+118): No Preference

That’s the first time Addison has picked up more votes in a round than Liu. This lets us really see how different the tastes of the Leckie supporters are from those of the Butcher supporters. If those 400+ Rabid Puppy votes weren’t bolstering Liu’s totals, Addison would have won.

We don’t get as much data from the other passes because an author quickly gathered more than 50% of the vote, making a second pass unnecessary. From Round #2, I can tell you that of The Dark Between the Stars’ 314 votes, more than half (169) went over to Butcher when he was eliminated. When Skin Game’s 1448 votes are redistributed, a massive 860 votes go to Goblin Emperor and a mere 226 to Leckie. 860-226 is more than 600, and that might be the Vox Day Rabid Puppy block in action.

In the Race for Position #4, No Award grabbed 2674 votes (very close to my core No Award vote of 2500), with Skin Game at 2000 and Dark at 592. That means Butcher had a theoretical max of 2592 votes (if everyone who voted Anderson had also listed Butcher below him on those ballots). That’s the closest a Puppy pick outside of Drama got to not getting No Awarded. That’s pretty close!

If we go back to my stats, does it work? 2500 No Awarders, 1800-1300 Rabid, Sad, and Puppy-leaners, plus around another 2000-1500 neutrals, some of whom sat out this round? If we subtract the 700 who didn’t have an opinion in this round from the neutrals, that leaves us with 1300-800. Did Butcher pick up close to 700 votes from that group?

It’s unusual to No Award a Novel in the Hugos. Even last year, Correia squeaked by, with a vote of 1161 to 1052, and Butcher is an order of magnitude more famous/well-liked than Correia. The vote here was a potential 2674-2592 against Butcher, meaning that the Anti-Puppy picked up (2674-1052) 1622 votes from 2014 to 2015, and the Puppy vote 1431 votes. That’s an almost 50/50 split in who was added this year. Of course, there are a lot of variables, including Butcher’s massive popularity—and we’re not talking about Butcher wining, put simply placing above No Award.

Still, a lot of numbers to chew on. Everyone take a look at the Best Novel stats, chew on them, and let’s see what else we can learn!

2015 Hugo Stats: Initial Analysis

Some preliminary numbers from the stats:

There were 5,950 total voters.

The Rabid Puppies look to be a little less than 10% of that. Compare Vox's recommendations in swept categories to the first-round results (i.e., those who probably followed the suggestions):
Best Novella (Wright, “One Bright Star”): 556 (took fourth with 1050 eventual votes)
Best Short Story (Rzasa, “Turncoat”): 525 (took fourth with 1064 eventual votes)
Best Related Work (“The Hot Equations”): 595 (took second with 973 votes)
Best Editor, Short Form (Vox Day): 586 votes (took fifth with 900 votes)
Best Editor, Long Form (Toni Weisskopf): 1216 (obviously more people voted for her)
For reference:
Campbell (Eric Raymond): 489 votes (eventually took fourth with 748 votes)

I think those numbers are pretty clear: 556, 525, 595, 586. That's the Rabid Puppy range, or at least those who closely followed VD's suggestions. Should we call it 550-525? (Raymond was close to that range, but fewer people make it to the bottom of the ballot to vote in the Campbell.) I think the number who voted VD for Best Editor is probably closest to the actual number.

Initial Rabid Puppy Estimate: 550-525
That makes around 10% of the total vote, which is in line with what I expected.
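As a quick check on that range, here's the raw spread of those first-pass counts (numbers as quoted above; the 550-525 call in the text narrows this raw spread by judgment):

```python
# First-pass counts for Vox Day's picks in the swept categories (from the post).
firsts = {"Best Novella": 556, "Best Short Story": 525,
          "Best Related Work": 595, "Best Editor, Short Form": 586}
lo, hi = min(firsts.values()), max(firsts.values())
print(f"raw Rabid Puppy spread: {lo}-{hi}")  # raw Rabid Puppy spread: 525-595
```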

I think we can use these same numbers to grab a "Sad Puppy" initial estimate, or at least the most hardcore Sad Puppy supporters (who voted all the Rabid/Sad Puppy picks above No Award). If we look at the second set of numbers I gave, that's 1050, 1064, 973, 900. Even Vox Day picked up over 300 more votes above No Award. Is it fair to say those are the Sad Puppies? We'd get 1064-900 for the total Puppy vote. It looks to me like 500-400 Sad Puppies. I want to be looser here, because maybe some Rabid Puppies didn't follow the VD suggestions, and maybe some other voters drifted in and voted for these texts.

Initial Sad Puppy Estimate: 500-400
We’ll have to refine that number over the next few days.

We can use the same swept categories to estimate the "No Awarders": the people who voted for No Award over every Puppy pick. Here are those numbers:

Best Novella: 3459 No Awarders
Best Short Story: 3053 No Awarders
Best Related Work: 3259 No Awarders
Best Editor, Short Form: 2672 No Awarders
Best Editor, Long Form: 2496 No Awarders

One more interesting number:
Best Novelette: 1732 No Awarders (remember, Heuvelt was a non-Puppy pick! In the final pass, he beat No Award 2618 to 2078).

Those 3459, 3053, and 3259 numbers are pretty close. That seems to be the max No Award number: people who couldn't stand any Puppy pick. When there were more palatable choices, such as in the Editor awards, No Award was still picking up 2600-2500 votes. In a category that was almost swept, the number was closer to 2000. So I'm calling the No Awarders at 3450-2500. That's a huge number, over 50% of the total pool.

I’m stunned at the 2500 No Awarders in the Editor categories; there were some mainstream, decent editors on that list. If 2500 people were voting No Award on that, that’s out of principle. So here’s how I’m estimating:

Initial Estimate of No Awarders Who Voted No Award out of Principle: 2500.
Initial Estimate of No Awarders Who At Least Considered Voting for a Puppy Pick But Eventually Didn’t: 1000.

Those numbers will clearly need some work. So that leaves us:
Core Rabid Puppies: 550-525
Core Sad Puppies: 500-400
Absolute No Awarders: 2500
Primarily No Awarders But Considered a Puppy Pick: 1000
That sums to roughly 4550 voters at the top of those ranges. We had 5950, so I think the remaining 1400 or so were the true "Neutrals" or the "voted some Puppies but not all."
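The accounting behind that residual can be sketched directly, using the ranges quoted above against the 5,950 total:

```python
TOTAL = 5950  # total 2015 Hugo voters
blocks = {  # (low, high) estimates from the post
    "Core Rabid Puppies": (525, 550),
    "Core Sad Puppies": (400, 500),
    "Absolute No Awarders": (2500, 2500),
    "No Awarders who considered a Puppy pick": (1000, 1000),
}
low = sum(a for a, _ in blocks.values())
high = sum(b for _, b in blocks.values())
print(low, high)                  # 4425 4550
print(TOTAL - high, TOTAL - low)  # residual "Neutrals": 1400 1525
```

The residual band of roughly 1400-1525 is where the "Neutrals" estimate comes from.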

UPDATE, 8/25/15: By looking closely at the Best Novel category, I’ve updated my estimate, breaking the Neutrals into two categories:

Core Rabid Puppies: 550-525
Core Sad Puppies: 500-400
Sad Puppy leaning Neutrals: 800-400 (capable of voting a Puppy pick #1)
True Neutrals: 1000-600 (may have voted one or two Puppies; didn’t vote in all categories; No Awarded all picks, Puppy and Non-Alike)
Primarily No Awarders But Considered a Puppy Pick above No Award: 1000
Absolute No Awarders: 2500

END UPDATE, 8/25/15

Some percentages (estimates, not precise):
No Awarders: 3500 / 5950 = 59%
Neutrals: 1400 / 5950 = 24%
Rabid Puppies = 10%
Sad Puppies = 9%

What the Best Novel category would have looked like with No Puppy votes:
Ancillary Sword, Ann Leckie
The Goblin Emperor, Katherine Addison
The Three Body Problem, Cixin Liu
Lock In, John Scalzi
City of Stairs, Robert Jackson Bennett

Other initial Best Novel analysis: Goblin Emperor lost the Best Novel to Three-Body Problem by 200 votes. Since there seem to have been at least 500 Rabid Puppy voters who followed VD’s suggestion to vote Liu first, this means Liu won because of the Rabid Puppies. Take that as you will.

I’m going to get some sleep. I’m tired, so I’m sure I slipped at least once on one of these numbers. Too much data! Happy analyzing!

2015 Hugo Results

The controversial 2015 Hugo season finally came to an end with tonight's awards ceremony. It was an interesting (if predictable) result, and over the next few days I'll be breaking down the stats.

The full winners list is here. Cixin Liu’s The Three-Body Problem won the Best Novel award. I’m looking forward to breaking down those numbers; there’s a possibility of a swing vote in regard to this novel. We’ve never had a translated novel win before, and I don’t think we’ve ever had a novel win from the lowest nominated position.

For the other categories: No Award won in every category swept by the Puppies (Related Work, Short Form Editor, Long Form Editor, Short Story, and Novella). This indicates that the “No Awarders” were a strong majority of 2015 Hugo voters. We don’t know why they voted “No Award” (whether on principle or because they read and disliked the nominated works/editors), and we’re never likely to know that. I assume it was a mix, but it’d be interesting to know the exact ratio. Interestingly, No Award didn’t bleed over into a category like Best Novelette, where the only non-puppy pick was Heuvelt’s “The Day the World Turned Upside Down.” There had been some online chatter of not giving Heuvelt a Hugo “by default”; the stats should tell us whether or not this was a close race.

With the Heuvelt and Liu wins, and Helsinki winning the WorldCon bid for 2017, this is the first time there’s actually a “World” in WorldCon.

What Chaos Horizon will do now is wait for the data set to come out and begin breaking things down. Good night, and blog with you tomorrow!

Gearing Up for Hugo 2015 Analysis

The Hugo Awards are almost upon us! These will be given out tonight, but I’m more interested in the numbers the Hugos will release alongside those. Over the next week, Chaos Horizon will be doing what Chaos Horizon does, digging into those to find trends, data, and info.

It should be a truly interesting analysis this year. At this point, we have no real idea about the numerical strength of the Sad Puppies, the Rabid Puppies, or the No Awarders. Once we begin putting some of those together, we’ll have a much better sense of the current shape of the field.

It’s going to take a while to sift through the data. Here’s my early game-plan; I’m laying this out there for a full critique from anyone who wants to comment. We need good numbers, no matter your position on any of the 2015 Hugo controversies.

Here’s what I think we can do:

1. Estimate the number of Rabid Puppies: Since Vox Day, the leader of the Rabid Puppies, posted Hugo voting recommendations, we can use those to estimate the Rabid Puppy numbers. In particular, I’ll be looking at the “first pass” numbers for a few swept categories, namely Best Editor, Short Form and the Campbell to come up with my initial estimate.

Here’s my methodology and chain of assumptions: Vox recommended himself for Short Form Editor. I’m making the assumption that only hardcore Rabid Puppy supporters are going to follow that. Given how controversial Vox currently is and how niche Vox’s editing is, I find it hard to believe you would support Vox Day for Short Form Editor if you aren’t a Rabid Puppy supporter. We have to make some assumptions here; this seems the safest to me. Contest in the comments if you wish.

With that number in place, I'm going to compare it to the Campbell, where Vox recommended Eric S. Raymond, often known as ESR. ESR is best known as an open source software advocate, and he has a very popular blog. He is not well-known as a SF writer, having only published one story in a Castalia House publication (Vox Day's house). Again, the connection to Vox Day means that probably—and this is an estimate, not a fact—primarily Rabid Puppy supporters are voting for ESR. If the ESR vote number is close to the Vox number from Short Form Editor, that'll be some good confirmation. If it's not, I'll rethink my assumptions.

I’ll then compare this number to Vox’s other recommendations in the other categories, particularly those that are controversial. Some categories won’t tell us anything; Vox recommended The Three-Body Problem in Best Novel and Guardians of the Galaxy in Best Long Form Dramatic; both of those will attract lots of non-Puppy support. In other categories, such as John C. Wright’s “One Bright Star to Guide Them” in Novella or “No Award” in Graphic Novel, we have a narrower field of support. If these numbers are close to each other (let’s say within 50), I’m confident in calling that the Rabid Puppy range. I should be able to double-check to see how many votes move from Vox Day’s recommended first choice to second choice in the voting.

2. Estimate the No Awarders: By looking at how many people voted “No Award” for their first choice in the swept categories (Novella, Novelette, Short Story, Related Work, Short Form Editor, Long Form Editor), we can get an easy initial estimate of how many people voted “No Award” over every Puppy pick.

3. Compare the first pass No Awarders to the final pass No Awarders: This will give us a good estimate of the number of people who gave the Puppy ballots a chance. So if 300 voted No Award over every choice, but by the time we get to the 4th pass 1000 people voted No Award, I can produce an estimate of roughly 700 for "voted at least one Puppy pick." This will be most useful in swept categories, and will allow me to come up with what I'm calling the "Neutrals."

4. Estimate Sad Puppies numbers: This is actually the hardest number to estimate. In theory if I have the Rabid Puppies, the No Awarders, and the Neutrals, everyone else is the Sad Puppies? This would be the group of people who didn’t follow Vox Day’s recommendations but still voted every Sad/Rabid Puppy pick above No Award. We’ll have some contamination by people who just liked that individual story, but if we had a broad group from 5-6 categories, the estimate should be decent. If you’ve got a better way of estimating this, let me know.

5. See if the Rabid Puppies impacted the Best Novel: If we take the final margin of victory in Best Novel and compare it to the Rabid Puppy estimate, we’ll know whether or not they were the swing vote for Best Novel.

So that’s the initial outline. Everything I do here on Chaos Horizon is open, so let me know what you think of the methodology. Once we sort through the final numbers, I’ll go back and start working on nomination numbers.

TL;DR: So here’s the basic initial approach. I’m going to break down the Hugo 2015 voters into four categories:
1. Rabid Puppies: People who followed Vox Day’s Hugo voting recommendations.
2. No Awarders: People who voted No Award over every Rabid/Sad Puppy pick.
3. Neutrals: People who voted at least one Puppy pick above No Award.
4. Sad Puppies: People who voted all Rabid/Sad Puppy picks above No Award, but didn’t follow Vox Day’s recommendations.

I’ll primarily be using the swept or nearly swept categories to do this.

Not perfect, I know, but it should give us something. Comments? Suggestions? Mathematical or analytical tricks I missed?

2015 Hugo Awards: The Kingmaker Scenario and Margin of Victory

As I finish up my investigation and predictions of the 2015 Best Novel Hugo Award, we need to seriously think about a possible kingmaker scenario. A term borrowed from game theory (which in turn borrowed it from royal politics), Wikipedia gives us a good working definition: "A kingmaker scenario, in a game of three or more players, is an endgame situation where a player unable to win has the capacity to determine which player among others is the winner. Said player is referred to as the kingmaker or spoiler." I know we shouldn't trust Wikipedia, but this is a well-established theoretical concept, and that's as good an intro as any I could easily find.

In a normal Hugo year, it's hard to be a kingmaker. While it's easy to boost the #2 choice to the #1 place, you'd have to guess at who would be #1 and #2. It would be very hard to boost a #5 work to #1—but let's imagine a year where you only have three viable candidates. How hard would it be to boost #3 into the #1 position? That's exactly what we have in the Best Novel this year: two Puppy candidates, three "regular" candidates, and a tailor-made kingmaker situation.

For the Hugos, you can consider the “players” the supporters of the 5 Hugo nominees and the “endgame” as the final vote. It would appear—and I stress that we don’t know this for sure—that Katherine Addison, Cixin Liu, and Ann Leckie are the only viable players for the 2015 Hugo. Debate that if you will, but I think that the negative reaction to the Sad/Rabid Puppy campaigns and the use of “No Award” make it nearly impossible for Butcher or Anderson to win. I also think “No Award” isn’t viable in the Best Novel category, due to the strong support that Addison, Liu, and Leckie have received.

In a kingmaker scenario, the Butcher/Anderson voters would have the power to choose between Addison, Liu, and Leckie as the winner of Hugo. Even a smaller set of the Butcher/Anderson voters—like the Sad or Rabid Puppies—could operate as kingmaker. Of course, they would have to be organized, have to have sufficient numbers (see below for the estimate), and choose to execute a kingmaker move.

Interestingly, the Hugo voting rules make it incredibly easy to deploy a kingmaker effect. Since the Hugos operate via instant-runoff voting, you just vote for your favorites (even if they don't have a chance), then vote for who you want to win (your kingmaker effect), and then you leave off anyone you want to lose. The instant-runoff vote will maximize your impact for you. In a first-past-the-post voting system, a kingmaker would have to be more organized (you have to leave off your favorite and vote only as a kingmaker), making it less likely to work.
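To make that mechanic concrete, here's a minimal instant-runoff sketch with an invented electorate (candidates A, B, and the block's no-hope favorite X are hypothetical, not 2015 nominees): the block ranks its own favorite first, its kingmaker pick second, and leaves the disfavored finalist off entirely.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: drop the lowest first-choice candidate each round
    until someone holds a strict majority of the remaining ballots."""
    ballots = [list(b) for b in ballots if b]
    while True:
        tally = Counter(b[0] for b in ballots)
        total = sum(tally.values())
        top, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return top
        loser = min(tally, key=tally.get)  # eliminate last place
        ballots = [[c for c in b if c != loser] for b in ballots]
        ballots = [b for b in ballots if b]  # drop exhausted ballots

# 50 voters prefer A, 48 prefer B; a 10-vote block votes X first,
# then its kingmaker pick, and omits the other finalist.
base = [["A"]] * 50 + [["B"]] * 48
print(irv_winner(base + [["X", "B"]] * 10))  # B  (block crowns B)
print(irv_winner(base + [["X", "A"]] * 10))  # A  (block crowns A)
```

The block never sacrifices its own favorite: X is eliminated first either way, and the block's second choice decides the winner.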

So, what kind of numbers do we need? The math gets a little tricky, but if we look at the Hugo data from 2011-2014, we can get a rough sense in the main fiction categories. I left off 2010 because that’s the year Mieville and Bacigalupi tied, which messes up the stats a little. Still, that shows how close these votes can be. A single kingmaker voting for Mieville would have pushed the balance!

Here’s the data separating first place from second place, in the final pass of the instant run-off voting:

Table 1: Margin of Victory in the Hugo Fiction Categories, 2011-2014
First to Second Place Margin of Victory

Throw out Leckie’s domination last year, and you see that in most Hugo years, it would takes only a modest number of organized voters (between 100-200) to boost the #2 novel to the #1 position. As I said before, because of the way Instant Run-Off voting works, all the campaign would need to do is leave that #1 novel off the ballot.

It’s interesting how up and down these numbers are. Sometimes a kingmaker effect is easy, sometimes hard. I’m surprised that Short Story had such large margins of victory in recent years. Also pay careful attention to 2014: that’s the first year we had a big boost in Hugo voting numbers. Since we might double that this year, we could be in a situation in 2015 where margin of victory is so large that no kingmaker effect is possible. Ancillary Justice had a very unusual year last year, though, sweeping all the major awards. We don’t have that kind of dominance in 2015.

With somewhat higher block voting numbers, you could exert more control over the ballot. I took a look at the difference between 2nd and 3rd place in the next-to-last run-off (that's what you'd need to keep the #2 work from reaching the final stage). If you add these numbers to the previous chart, that's a rough estimate of what it might take to get work #3 to win the Hugo. This isn't 100% accurate (we can't do that math, or at least I wasn't able to figure out a way to do it), but it's a good eyeball test.

Table 2: Margin of Victory of 2nd Place over 3rd Place in the Hugo Fiction Categories, 2011-2014
Second to Third Place Margin of Victory

Again, widely variable by year and category, with some remarkably close and others far apart. Still, as a rough estimate, I think it's revealing. So let's say you wanted 2312 by Kim Stanley Robinson to beat Redshirts in 2013. It would have taken around 219 votes (213+6) to do so. It would have taken only 213 votes to give Lois McMaster Bujold's Captain Vorpatril's Alliance the win.

Would an organized Hugo campaign want to do something like that? I’ll let you decide.

So, what does that mean for this year? Keep in mind that we’re due for perhaps a doubling of the Hugo vote, which could double margin of victory, making a kingmaker effect harder to pull off. Second, the vote between Liu, Addison, and Leckie is a 3-way, not a 5-way battle, and that may further concentrate votes. But . . . I have a feeling we’re in for a close contest, which might offset the increased number of voters.

I’ve previously estimated the Sad/Rabid Puppy campaigns at around 300-400. However, that was only for the nominating stage; we don’t know what that number will be in the final vote. Doubled? 1.5 times higher? What’s the ratio of Sad/Rabid Puppies? Will those groups chose to pull a kingmaker? The Sad Puppies, to the best of my knowledge, have made no move to suggest how their group should vote. The Rabid Puppies, on the other hand, seem to have made just such a move, given Vox Day’s Best Novel post. Note that this is an exact kingmaker play: Leckie is left off the ballot completely. The result will be to push the award to either Liu or Addison if the Rabid Puppy vote is larger than Leckie’s (possible) margin of victory. If Addison were to be in a close race with Liu, this would also give Liu the win (if the Rabid Puppy block size is larger than Addison’s margin of victory). So, that leaves two unknowns: how big is the Rabid Puppy block, and will they stick together? I could make guesses, but I try to avoid sheer guesswork on Chaos Horizon.

The Hugo is remarkably amenable to kingmaker scenarios. In fact, you could argue kingmaker scenarios are more favorable to block voting groups than outright sweeps, as they allow the block to exert more control over the final result. In a sweep, No Award is an option; in a kingmaker scenario, it really isn't. This is an issue that any proposed changes to the Hugo voting rules should keep in mind; you could implement a change that makes kingmaker scenarios even easier to set up. The best way to defuse kingmaker scenarios is to have more viable players in the game, but, even then, it's easy to make sure one player doesn't win if you control a block vote of any size.

So, in conclusion, in close years and in close categories, a kingmaker effect is remarkably easy to deploy. It is even easier to deploy if a campaign places 1 or 2 of their works on the ballot. In fact, if someone was trying to dominate the Hugos, that's the situation you would probably want: 2 of your picks on the ballot, 3 other picks, and then you could choose among the three other works to see who would win. You get to make your point with your picks and then shape the outcome of the award. There's also relatively little risk in deploying it: you don't hurt your own choices' chances. On the other hand, we don't know if the increased number of Hugo voters will make margins of victory so large that the kingmaker effect doesn't come into play. Stay tuned to Chaos Horizon; I'll run the numbers when they're published after the Hugos, and see how things worked out.

Final 2015 Hugo Prediction and Problems

No use putting this off any longer! The situation isn't going to become any clearer or easier. Here's the official Chaos Horizon mathematical model for the 2015 Hugos:

Ann Leckie, Ancillary Sword: 25.7% chance to win
Cixin Liu, The Three-Body Problem: 22.4% chance to win
Katherine Addison, The Goblin Emperor: 21.1% chance to win
Jim Butcher, Skin Game: 18.1% chance to win
Kevin J. Anderson, The Dark Between the Stars: 12.7% chance to win

Unfortunately, the model depends on the idea that 2015 Hugo voters will vote like the Hugo voters of the past 15 years. Of course, that's not going to happen: too many new voters have come into the process this year for the model to be reliable. But I'll get to that in a second. I also didn't factor in "No Award" sentiment (the model can't handle that). So think of this as a raw snapshot that'll need to be corrected by an analysis of what actually happens this year.

Leckie emerges as a close winner this year. Ancillary Sword has everything going for it: Leckie has a strong Hugo history (winning last year), it was universally praised by critics, it won the Locus SF vote, it won the British SF award. My model doesn’t currently punish texts for being sequels; if Leckie loses this year, that’s something I’ll factor in for next year.

Cixin Liu does second best, and that’s partly due to Ken Liu’s influence. Ken Liu is a well-known quantity to Hugo voters, and that helps what would otherwise be a debut novel in the formula. Addison is also right in the mix. Think about last year’s prediction, which gave Leckie a 33.6% chance and Stross only 24.9%. Here, the difference is only 4.6%. Keep that in mind as the analysis continues: it’s likely a close year. That makes any potential swing votes (such as Rabid Puppies . . .) incredibly important.

The two Puppy picks, Butcher and Anderson, do not do well in the formula. The formula elevates works with strong Hugo history and works that do well in the current awards season; neither book has those credentials. Even without the “No Award” sentiment floating around the SFF blogosphere, these would have had slim chances. Butcher had a slight chance if the community had decided Dresden as a whole was worthy, but that hasn’t (at least to my ear) been the chatter on SFF websites. Being so late in the series also hurts its odds. Still, Butcher is popular enough to bring in casual fans; if everyone who attends WorldCon votes, he could do surprisingly well.

Here’s what the formula is based on:

Indicator #1: Nominee has previously been nominated for a Hugo award. (73.3%)
Indicator #2: Nominee has previously been nominated for a Nebula award (prior to this year). (73.3%)
Indicator #3: Novel won a same year Nebula award. (87.5%)
Indicator #4: Nominated novel is science fiction. (53.3%)
Indicator #5: The nominated novel wins one of the main Locus Awards categories. (53.3%)
Indicator #6: Nominee places in the Goodreads Choice Awards (100%)
Indicator #7: Nominated for at least one other major award (80%)
Indicator #8: Nominee highly regarded by critics, as judged by Critics Meta-List. (86.7%)

If you want to run down the rabbit hole of how things work, check out my Nebula methodology posts; the Hugo methodology is the same, just with different data.
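The actual Chaos Horizon formula lives in those methodology posts, not here. Purely as an illustrative stand-in (not the author's real weighting), one naive way to combine indicator hit rates is to average the historical rates of whichever indicators a nominee satisfies:

```python
# Historical hit rates for the eight indicators listed above (percent).
RATES = {1: 73.3, 2: 73.3, 3: 87.5, 4: 53.3,
         5: 53.3, 6: 100.0, 7: 80.0, 8: 86.7}

def naive_score(indicators_hit):
    """Average the hit rates of the satisfied indicators; 0 if none hit.
    A toy stand-in for the real model, for illustration only."""
    hits = [RATES[i] for i in indicators_hit]
    return sum(hits) / len(hits) if hits else 0.0

# e.g. a hypothetical nominee satisfying indicators 1, 5, and 8:
print(round(naive_score([1, 5, 8]), 1))  # 71.1
```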


Let me repeat. THE CHAOS HORIZON MODEL IS NOT RELIABLE FOR 2015. It may still work, but that’d just be luck. Too much has changed in the last six months. The Sad/Rabid Puppy controversy has led to a huge surge in the Supporting Memberships of WorldCon. According to their own webpage, the membership of Sasquan as of June 30, 2015 is:

Total: 9776
Attending: 3945
Supporting: 5410
(there are also other categories like Children I’m not listing)

Compare that to the LonCon (the 2014 Hugo location) numbers at the equivalent time (June 30, 2014):

Total: 8518
Attending: 5457
Supporting: 2768

Over 1200 more members for Sasquan—and Spokane is not the same attraction as London. In fact, the number that really matters here is "Supporting Memberships." I assume most new people bought those for the express purpose of voting in the Hugos. We're looking at a difference of 5410-2768=2642 potential voters! 3587 people voted in the 2014 Hugos; we could be looking at a voter total of over 6000 in 2015.

Chaos Horizon works by the premise that WorldCon voters will vote in the ways they have in the past. Since we may have over 2500 new voters, we have no data on how they’ll vote. Are they here just to vote against the Puppies? What does that mean for Liu, Leckie, and Addison? I suspect this might help Leckie since she won last year; it’s easy to vote for the familiar. But maybe these new voters will drift towards Addison, or Liu. There is no way to tell at this point.

The situation grows even murkier when we factor in two additional unknowns. We don’t know whether these 2500 new members are here to vote for or against the Sad/Rabid Puppy slates. The huge controversy will bring in passionate voters on both sides: but what will the ratio be? In the nomination stage, I estimated about 300-400 Sad/Rabid Puppy supporters, or about 15% of the whole. Of these 2500 new voters, will that ratio hold? Will it be 10% Puppy/90% Anti-Puppy? 20% Puppy/40% Neutral/40% Anti-Puppy?

Let me answer as honestly as I can: I don’t know what the ratios will be. I look forward to seeing the final numbers, but any guesses at this point are simply guesses.

We also have to consider another possibility, the so-called “Kingmaker Scenario.” In a closely divided election (i.e. if Leckie, Liu, and Addison are within a few hundred votes of each other), any unified block vote can swing the balance in one direction or the other. If all the Rabid puppies, for instance, vote Cixin Liu ahead of Ann Leckie, that might be enough to push Liu to a win.

There hasn’t been much discussion of the Kingmaker scenario online yet (or I haven’t see it; if you know of some good articles please link them in the comments), but this may be where the Sad and Rabid Puppies have their greatest influence on the 2015 Hugos. While categories swept by the Puppies will likely result in “No Award,” a category like the Best Novel could be more decisively influenced. Let’s say the non-Puppy voters give Ann Leckie a 300 vote win over Cixin Liu, but 400 Rabid Puppy voters vote Liu over Leckie. Liu would end up winning in that scenario.

To understand whether or not a Kingmaker scenario is in play, we’ll have to explore a couple things over the next few days. We’ll have to look at “Average Margin of Victory” in the 2011-2014 Hugos to get a sense of how wide the final vote count is. Then we’ll have to consider whether or not either the Sad Puppies or the Rabid Puppies have enough influence/organization to overcome that gap. My initial thought is that the Sad Puppies do not have the influence or numbers, but that the Rabid Puppies might. That’s a lot of “ifs,” and it will make this already unpredictable Hugo season the most unpredictable one on record.

Happy predicting!

Inside the Locus Results

My copy of Locus Magazine arrived today, and with it some interesting insights on how the Hugo nominees did in those awards. While not a perfect match to the Hugos, the Locus are the closest thing going: a popular vote by SFF "insiders" to determine the best novel of the year. The Locus splits Fantasy off from Science Fiction, which makes the award have a very different feel, and Locus voters tend to be more receptive to sequels. Locus also doubles the weight of subscribers versus non-subscribers, meaning the most involved fans get the most say. If you're so into SFF that you have a subscription to Locus, you're definitely not casual.

Locus has posted the finalists and winners here. For our purposes, the key categories are the two Best Novels. Here’s the order of the top 5 placement, taken from the print edition of Locus. Don’t worry, I won’t share all the data; buy Locus if you want the full details!

Science Fiction Novel:
1. Ancillary Sword, Ann Leckie (Orbit US; Orbit UK)
2. The Three-Body Problem, Cixin Liu (Tor)
3. Annihilation/Authority/Acceptance, Jeff VanderMeer (FSG Originals; Fourth Estate; HarperCollins Canada)
4. The Peripheral, William Gibson (Putnam; Viking UK)
5. Lock In, John Scalzi (Tor; Gollancz)

Fantasy Novel:
1. The Goblin Emperor, Katherine Addison (Tor)
2. City of Stairs, Robert Jackson Bennett (Broadway; Jo Fletcher)
3. The Magician's Land, Lev Grossman (Viking; Arrow 2015)
4. Steles of the Sky, Elizabeth Bear (Tor)
5. The Mirror Empire, Kameron Hurley (Angry Robot US)

You’ll notice that the Top 2 from the SF and the Top 1 from F make up 3/5 of the Hugo Best Novel ballot. Neither the Jim Butcher nor the Kevin J. Anderson made the Top 28 SF novels or the Top 21 fantasy novels. If you were going by Locus vote counts alone, VanderMeer and Gibson would have been next in line for nominations. Since Hugo voters have ignored Gibson since 1994 (seriously, no nominations since 1994), the 5th spot would have been a toss up between Scalzi and Bennett. Given Scalzi’s past Hugo performance, you might lean in that direction, although we’ll find out when the full nomination stats are released.

If we go deeper into the details, let's look at the number of votes for each of our Hugo Nominees. They use something called the "Carr system," which gives 8 points for a 1st place vote, 7 for a 2nd place vote, 6 for a 3rd place vote, 5 for a 4th place vote, and 4 for a 5th place vote, with no points after that. This tries to balance preference with sane math: instead of a 1st place vote counting 5 times as much as a 5th place vote, it only counts twice as much. So it goes.
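The Carr scoring described above is easy to sketch; the two ballots below are hypothetical, not actual Locus data:

```python
# Carr system: 8 points for 1st place down to 4 for 5th, nothing below.
CARR = {1: 8, 2: 7, 3: 6, 4: 5, 5: 4}

def carr_scores(ballots):
    """ballots: list of ranked title lists; returns total points per title."""
    pts = {}
    for ballot in ballots:
        for rank, title in enumerate(ballot, start=1):
            pts[title] = pts.get(title, 0) + CARR.get(rank, 0)
    return pts

# Two hypothetical ballots for illustration:
ballots = [["Ancillary Sword", "The Goblin Emperor"],
           ["The Goblin Emperor", "Ancillary Sword", "Lock In"]]
print(carr_scores(ballots))
# {'Ancillary Sword': 15, 'The Goblin Emperor': 15, 'Lock In': 6}
```

Note how a 1st and a 2nd place (8+7) score the same as a 2nd and a 1st, which is why points, vote counts, and 1st-place counts all get reported separately.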

Ancillary Sword, Ann Leckie: 2818 pts, 321 vts, 107 1sts
The Goblin Emperor, Katherine Addison: 2556 pts, 285 vts, 126 1sts
The Three-Body Problem, Cixin Liu, 1869 pts, 223 vts, 58 1sts

First thing to notice: it looks like Leckie and Addison have separated themselves from Liu. Those 58 1st place votes for Liu are fewer than books like The Peripheral or City of Stairs received, and about half as many as the Leckie or the Addison. Since the Hugo ballot ranks by preference, this might spell trouble for his chances. This would echo what I've seen online as well: the Addison and the Leckie seem to inspire more passion in readers than the Liu. Is it an issue of translation, Liu's unique and somewhat strange approach to character and plot, or simply the difficulty of relating to Chinese (rather than American) SFF? Who knows?

Everything about this year’s Hugos—as we’ll delve into this week—is pointing to a very close race between Leckie and Addison. If you look just at the Locus, Leckie has broader support (she’s probably better known due to last year’s wins), but Addison had more 1st place votes. All of that will play out in interesting ways in the Hugos, once we factor in the new voters, this year’s controversies, and the difficulty Leckie will have in repeating as a Hugo winner.

The Locus Award results factor heavily into my Hugo prediction. I’m going to be building that prediction this week, so stay tuned!
