Archive | 2014 Hugo Award

How Many Nominating Ballots for the 2015 Hugo?

The turnout for the 2014 Hugo Award was unprecedented, with a huge jump in both nominating and final ballots. There are several reasons for that jump: London was a bigger-than-normal WorldCon city, and that certainly brought in more fans; Spokane in 2015 isn't going to come close. The 2014 Hugo debate was also unusually vehement, with Larry Correia's Sad Puppy campaign and the Wheel of Time's "whole series deserves a nomination" campaign drawing in an unusual number of supporters and protesters. All of that led to a near-doubling of the final Hugo Best Novel vote, from 1649 ballots in 2013 to 3137 in 2014.

That leaves a huge question open: how is that record turnout going to impact 2015? Remember, everyone who voted in 2014 is eligible to nominate in 2015. That's a real oddity of the award, and it's part of the reason the Hugo has a very repetitive flavor. Just consider: if you voted one book as best novel in 2014, aren't you more likely to vote for that same author again in 2015? Here are the WorldCon rules:

3.7.1: The Worldcon Committee shall conduct a poll to select the nominees for the final Award voting. Each member of the administering Worldcon, the immediately preceding Worldcon, or the immediately following Worldcon as of January 31 of the current calendar year shall be allowed to make up to five (5) equally weighted nominations in every category.

While not everyone from LonCon3 will participate, the potential pool of nominators for 2015 is huge. Let’s do what Chaos Horizon does and look at some stats. First up, I data mined the Hugo Award website to come up with the number of nominating and final ballots for the Hugo Award over the past 10 years:

Table 1: Number of Nominating and Final Ballots for the Best Novel Hugo, 2005-2015
[Image: ballot data table]

Frustratingly, I couldn't find any data for the number of nominating ballots in 2007. The .pdf of that info only gives the number of nominations per book, not the total number of Best Novel ballots. If anyone has that data point, I'd love to patch up the chart.

Carryover is my awkward term for the ratio of next year's nominating ballots to the previous year's final ballots. Since last year's final-ballot voters can nominate in the next award, this ratio is what we need to predict the number of voters for 2015. Of course, not every voter from the past year will vote again, and you can see a definite perturbation in the line based on location as well. Nonetheless, if we average those out, the number of nominating ballots works out to around 78% of the previous year's final vote. Let's just call it 75% for a ballpark. Imprecise, I know, but this will give us a start.
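The carryover arithmetic above is simple enough to sketch in a few lines. The 3137 final-ballot figure and the 75%/78% carryover estimates are the ones quoted in this post; everything else is just the ratio definition.

```python
def carryover(next_year_nominating: int, prev_year_final: int) -> float:
    """Next year's nominating ballots divided by the previous year's final ballots."""
    return next_year_nominating / prev_year_final

def project_nominating(prev_year_final: int, avg_carryover: float = 0.75) -> int:
    """Project next year's nominating ballots from an assumed average carryover."""
    return round(prev_year_final * avg_carryover)

print(project_nominating(3137))        # the 75% ballpark applied to 2014's final vote
print(project_nominating(3137, 0.78))  # the averaged 78% figure
```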

And visually:

[Image: ballot data chart]

That chart really shows how unusual 2014 was, but the general trend has definitely been towards more Hugo voters and more Hugo nominators.

What does that mean for this year’s award? Well, let’s tackle it from a couple of angles:

1. If we accept that 75% number, that would mean roughly 2350 nominating ballots for 2015. That seems huge, but even if you went with only 50% carryover, we'd have roughly 1570, which is almost the biggest ever. If we land on a middle-ground number like 2000, that means each novel will need around 200 votes (10%) to make it into the final Hugo slate.
2. Each of the 2014 Hugo nominees (Leckie, Correia, Stross, Grant, Sanderson) has a sizeable built-in Hugo advantage for 2015. Let's zoom in on Correia as an example: last year, 332 people voted Warbound as the Best Novel of 2014. If 75% of those nominate Correia again, as they're fully eligible to do, that would likely get him into this year's field. Sanderson's case is even more interesting: 658 voters placed Wheel of Time in the #1 spot on their final 2014 ballots. Now, Sanderson isn't Robert Jordan, but what percentage of those are going to support Words of Radiance? If we use the 200-vote bar I estimated above, Sanderson needs to keep only 30% of the WoT vote to make the slate. Doable? I don't know.
3. A bigger pool of nominators might make it harder for lesser-known authors to get into the 2015 field. In 2013, it only took Saladin Ahmed 118 votes to sneak into the final slate. In 2014, it took Mira Grant 98. That number could double for 2015. You'll need either broad or passionate support to make the 2015 slate, something more niche novels might not be able to muster. It's easy to imagine a 2015 scenario where Leckie keeps her vote, Correia keeps his vote, and Sanderson grabs a sizeable percentage of The Wheel of Time vote. That's 3 spots already. Stross could keep his vote, or it could slide over to Scalzi (all S authors are interchangeable, right?). Grant is the borderline case: she received the fewest nominating and final-ballot votes of the bunch last year, and is thus the most likely to drop off. In that scenario, the entire rest of the SFF world is fighting for one open Hugo spot.
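The retention arithmetic in point 2 can be made explicit. The 10% threshold is this post's ballpark for making the final slate, and 658 is the first-place Wheel of Time count quoted from the 2014 results; the functions are just a sketch of that reasoning.

```python
def votes_needed(nominating_ballots: int, threshold: float = 0.10) -> int:
    """Approximate votes needed to make the final slate, given the ballot pool."""
    return round(nominating_ballots * threshold)

def retention_required(fan_base: int, bar: int) -> float:
    """Fraction of an existing fan base that must nominate again to clear the bar."""
    return bar / fan_base

bar = votes_needed(2000)             # ~200 votes on a middle-ground 2000-ballot pool
print(retention_required(658, bar))  # share of the WoT vote Sanderson would need
```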

One thing that makes the Hugo unusual, and interesting, is the set of oddities in its nominating and balloting practices. How is 2015 going to play out? Are we looking at a record turnout? And does a record turnout mean record repetition? Stay tuned . . . for more Chaos! (Insert laughter and more terrible jokes).


Hugo Prediction: The Indicators

The main purpose of my blog Chaos Horizon is to use mathematical modeling to predict the winners of the Hugo and Nebula awards. To do this, I use a Linear Opinion Pool constructed by data mining the last 15 years (since 2000) of award-winning data, as provided by excellent websites like SFADB.

The Hugo Formula (see the 2014 prediction here) uses 8 Indicators of Hugo success, each weighted in turn. The percentage afterwards gives the basic reliability of each Indicator, with links to fuller explanations:

Indicator #1: Nominee has previously been nominated for a Hugo award. (78.6%)
Indicator #2: Nominee has previously been nominated for a Nebula award (prior to this year). (78.6%)
Indicator #3: Nominated novel is in the fantasy genre. (50%)
Indicator #4: The nominated novel wins one of the main Locus Awards categories. (57.1%)
Indicator #5: The nominated novel receives the most votes in the Goodreads Awards. (33%)
Indicator #6: Novel was the most reviewed on Amazon at the time of the Hugo nomination. (75%)
Indicator #7: Novel won a same year Nebula award. (85.6%)
Indicator #8: Novel received a same year Campbell nomination. (50%)
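A Linear Opinion Pool is just a weighted average of the individual indicators' probability estimates. Here is a minimal sketch of that combination step; the weights and the example estimates are illustrative only, not the blog's actual numbers.

```python
def linear_opinion_pool(estimates: list[float], weights: list[float]) -> float:
    """Combine per-indicator probability estimates as a normalized weighted average."""
    total = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total

# Hypothetical nominee scored on the eight indicators above, using each
# indicator's reliability as its estimate; weights here are made up.
estimates = [0.786, 0.786, 0.50, 0.571, 0.33, 0.75, 0.856, 0.50]
weights   = [1.0, 1.0, 0.5, 1.0, 0.25, 0.5, 2.0, 0.5]
print(f"{linear_opinion_pool(estimates, weights):.1%}")
```

The normalization by the weight total keeps the result a valid probability whatever weights are chosen.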

To generate these, I went through many possible interpretations of the available data. The Indicators are not perfect, nor are they intended to be. If they were perfect, the Hugo award would be perfectly predictable, and it is not. The pool of voters is too small, and too many outside factors can influence the awards.

Instead, building a model with multiple indicators keeps us from overstressing any one factor and lets us look at a fuller range of issues. Since the point of this model is to generate discussion and have fun, we want the math to be a little elastic, to encompass the human element of prediction.

2014 Hugo Results: Inside the Numbers

The 2014 Hugo results were announced on August 17, 2014, with Ann Leckie winning the Hugo for Ancillary Justice. The Hugo website has the official results here. Unlike the Nebula, the Hugo gives the voting details of the award in an extensive .pdf, also located at the Hugo website.

Those details can give us a good idea of how the prediction formula worked, and suggest future fixes to the formula. While the Hugo has an “instant run-off” system, the raw data from first place votes is the most important piece of information:
1. Ann Leckie, Ancillary Justice, 1335 first place votes, 43.8% (prediction: 33.6%)
2. Robert Jordan, The Wheel of Time, 658 first place votes, 21.6% (prediction: 17.2%)
3. Charles Stross, Neptune’s Brood, 445 first place votes, 14.6% (prediction: 24.9%)
4. Larry Correia, Warbound, 332 first place votes, 10.9% (prediction: 11.9%)
5. Mira Grant, Parasite, 279 first place votes, 9.2% (prediction: 12.4%)
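The gap between prediction and outcome can be summarized nominee by nominee. Both sets of percentages below are the ones listed above; the script just takes the differences and sorts by magnitude.

```python
# Predicted win probabilities vs. actual first-place vote shares (from the post).
predicted = {"Leckie": 0.336, "Jordan": 0.172, "Stross": 0.249,
             "Correia": 0.119, "Grant": 0.124}
actual    = {"Leckie": 0.438, "Jordan": 0.216, "Stross": 0.146,
             "Correia": 0.109, "Grant": 0.092}

errors = {name: actual[name] - predicted[name] for name in predicted}
for name, err in sorted(errors.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:8s} {err:+.1%}")
```

Leckie and Stross show the two largest misses, roughly equal and opposite, which fits the vote-transfer story below.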

All in all, the model worked fairly well. Leckie performed much better than the model predicted, nearly 10% higher, drawing her votes primarily from Charles Stross. What likely happened is that voters gave Stross the Hugo for best novella (which he won) and didn't want to give him two awards. The influx of new Hugo voters didn't seem to have much effect on the overall numbers.

Leckie thus completes one of the most dominant award seasons in recent memory. Her Hugo vote percentage was much higher than recent winners'. John Scalzi pulled down 24.7% of the first-round vote for his winner Redshirts. The year before, Among Others by Jo Walton pulled down a similar 25.3%. Leckie's 43.8% was almost 20 points higher. As the formula is revised for next year, Leckie's dominating win will shift the prediction a little more towards same-year Nebula and Campbell winners and away from previous winners. I'll be back to discuss possible changes to the formula in the coming days.

2014 Hugo Results

Ann Leckie has won the 2014 Hugo Award for Ancillary Justice, as predicted by the prediction model.

More results and discussion of the formula tomorrow.

2014 Hugo Prediction: Final Prediction

Here's the final prediction for the 2014 Hugo, not taking into account the increased number of voters due to various issues (see the Hugo Award Prediction: Storm Clouds post below). Based on past Hugo performance, here's how things would go:

1. 33.6% chance to win: Ann Leckie, Ancillary Justice

2. 24.9% chance to win: Charles Stross, Neptune’s Brood

3. 17.2% chance to win: Robert Jordan, The Wheel of Time

4. 12.4% chance to win: Mira Grant, Parasite

5. 11.9% chance to win: Larry Correia, Warbound

From a statistical perspective, Leckie is a clear frontrunner. She not only won the Nebula, but dominated the rest of the award season, winning the Clarke, the British SF Award, and racking up nominations for pretty much every single major award.

You might think Jordan is a little low, and he probably is. However, Jordan has zero Hugo history, and almost zero award history at all. If he wins this year, it’ll be because his fans have made a considerable push for him, not because he’s a natural for an award like the Hugo. Series fantasy simply doesn’t win awards like this, or hasn’t in the past.

2014 Hugo Prediction: Storm Clouds

Loncon 3 (the convention administering this year's Hugo Awards) has announced a huge increase in Hugo voters for this year (from here):

London, 7 August 2014 – Loncon 3, the 72nd World Science Fiction Convention being held at London ExCeL from 14-18 August, is proud to announce that it received 3,587 valid ballots for the 2014 Hugo Awards. 3,571 ballots were submitted online through the Loncon 3 website and 16 paper ballots were received. This total eclipses the previous record participation of 2,100 ballots (set by Renovation in 2011) by over 50%. Participation in the 1939 Retro Hugo Award process was strong as well with 1,307 valid ballots being received: 1,295 submitted electronically and 12 by postal mail.

This substantial increase, by at least 50% over any previous Hugo, is going to severely compromise any statistical analysis of the Hugos. Remember, anyone can vote for the Hugo, as long as they register and pay the fee (somewhere in the range of $40). More voters = more passion = more unpredictable results.

So what is causing this surge in voters? There are a number of factors, and we won't know which is the most prominent until after the results come in:
1. There was a highly organized and vigorous campaign to nominate Robert Jordan’s The Wheel of Time. Jordan had never received a nomination before, and this time his whole series was nominated. How many people have joined just to vote for Wheel of Time?
2. Larry Correia ran a somewhat less organized and less vigorous campaign (a few posts on his blog) to nominate some more socially conservative SFF texts to the Hugo slate. While the “controversy” is complex, this campaign undeniably pushed some nominees onto the Hugo slate (including Correia himself), and at least some of those additional voters are coming solely to vote for said texts. How many?
3. In response to the Correia campaign, there have been clamors of outrage on the SFF left, which sees such interventions in the slate as problematic. Why Correia would be faulted but the Wheel of Time fans praised is beyond me, but that isn't the point of this blog. The reaction to point #2 is going to cause some more liberal voters to register when they otherwise wouldn't have.

How will this affect the outcomes? No one knows.

2014 Hugo Prediction: Indicators #7 and #8

Back from a restful summer vacation, and the Hugo awards are just around the corner. The final two indicators are as follows:

Indicator #7: Novel won a same year Nebula award. (85.6%)
Indicator #8: Novel received a same year Campbell nomination. (50%)

The Nebula is the huge indicator here: the Nebula award is high profile and, most importantly, awarded well before the Hugo. This gives Hugo voters a great chance to read the Nebula winner and then vote for it. We've had quite a few novels in the last 15 years sweep their way to both awards. Although some years are open (the Nebula winner is not nominated for the Hugo), the Nebula winner has won 6 of the last 7 times it has been eligible, with only last year's upset of Kim Stanley Robinson by John Scalzi to mess up the data.

The Campbell isn't as reliable, but it is a good indicator that the novel is on readers' radar. Expect very heavy weighting for the Nebula, with modest weighting for the Campbell.

2014 Hugo Prediction: Indicator #6

The Hugo is all about the book that’s most popular—so we have to find good indicators that reflect popularity. Unfortunately, winning a Hugo greatly increases the popularity of a book, so it’s hard to go back in time and find out how popular the book was before it won the award.

Right now, we can establish some more speculative indicators, based on Amazon review counts, and see if these become more reliable over time. The more people have reviewed a book, the more people have read it, and thus the more people can vote for it in the Hugo. Seems pretty straightforward. There's not much history here, so this category will be weighted relatively lightly.

This leaves us with:

Indicator #6: Novel was the most reviewed on Amazon at the time of the Hugo nomination. (75%)

So how about this year?

Wheel of Time 3,124 reviews
Ancillary Justice 232 reviews
Parasite 160 reviews
Neptune’s Brood 110 reviews
Warbound 109 reviews

This order echoes the Goodreads vote, except Correia and Stross swapped positions (by a single review, though). Once again, this shows the huge advantage Jordan has in terms of sales relative to the rest of the nominees.

2014 Hugo Prediction: Indicators #4 and #5

For the next part of our model, and just like the Nebula model, we’ll move on from awards history to critical and reader response.

Unlike the Nebula, critical response isn't that important here. Fans vote for the novels they like, not the most "esteemed" novels. There will be some critical response worked into same-year awards performance, but for Indicators #4 and #5 we'll focus on reader votes.

There are two reliable reader votes currently taking place: the Locus Awards and the Goodreads Choice Awards. The Locus Awards are the more established of the two. The readers of Locus Magazine vote in a variety of major categories (Science Fiction, Fantasy, Young Adult, First Novel). The statistics are pretty good here: 57.1% of the time, the eventual Hugo winner won one of the major categories. Often, both the first-place Fantasy novel and the first-place Science Fiction novel make the final slate, so you get a face-off among winners of different categories.

The Goodreads Choice Awards, voted on by the readers at the Goodreads website, hasn’t been around as long, and it isn’t showing the same reliability statistically. My hope is that as time passes, this becomes a more reliable Indicator. As of now, 33% of the time the book that received the most votes wins the Hugo, but that only gives us 3 years of data. As a consequence, this Indicator will be lightly weighted in the final rankings.

That leaves us with:

Indicator #4: The nominated novel wins one of the main Locus Awards categories. (57.1%)
Indicator #5: The nominated novel receives the most votes in the Goodreads Awards. (33%)
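Reliability figures like the 57.1% and 33% above come from a simple hit-rate calculation: the fraction of years in which the indicator's pick went on to win the Hugo. A minimal sketch, with a hypothetical year-by-year record standing in for the real data:

```python
def reliability(records: list[bool]) -> float:
    """Share of tracked years in which the indicator's pick won the Hugo."""
    return sum(records) / len(records)

# Hypothetical three-year Goodreads record: the top vote-getter won once.
goodreads_record = [True, False, False]
print(f"{reliability(goodreads_record):.0%}")
```

With only three data points, a single new year can swing the figure dramatically, which is exactly why this Indicator gets a light weighting.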

Where are we at this year? Well, the Locus Awards are usually given in late June. The finalists have been announced, and this year's nominees haven't done too well: only Neptune's Brood is a finalist for best SF novel, and Ancillary Justice was nominated for First Novel.

The Goodreads vote has been held, and here's how our novels fared. I counted the votes for A Memory of Light (the last volume of The Wheel of Time) for Jordan/Sanderson.

Wheel of Time 28,470 votes
Ancillary Justice 3,815 votes
Parasite 3,431 votes
Warbound 1,509 votes
Neptune’s Brood 1,144 votes

This is the first category where the popularity of Wheel of Time shines through. The gap between A Memory of Light and Ancillary Justice is enormous, and may factor into the final Hugo vote.

2014 Hugo Prediction: Indicator #3

While the Nebula award showed a clear bias towards science fiction novels, the Hugo actually shows the opposite. While almost 70% of the nominees are science fiction novels, fantasy novels win 50% of the time. While 50% may not seem like much of a statistical advantage, it’s the 70%/30% nominee split that gives fantasy novels a statistical boost.

On a practical level, this makes sense: there are dedicated fantasy and science fiction blocs within the Hugo voters. Few readers are equally passionate about both genres, and since the fantasy nominee pool is smaller, fantasy voters tend to boost those nominees.

So, this works out to:
Indicator #3: Nominated novel is in the fantasy genre. (50%)

3 of this year's nominees are best described as science fiction: Ancillary Justice, Parasite, and Neptune's Brood. Warbound looks like a detective/fantasy hybrid, leaving Jordan's and Sanderson's The Wheel of Time as the go-to choice for fantasy readers.
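The per-nominee boost described in this section can be made concrete: if fantasy takes roughly 30% of nominations but 50% of wins, the average fantasy nominee wins at a higher rate than the average SF nominee. The shares below are the ones quoted above.

```python
def per_nominee_win_rate(win_share: float, nominee_share: float) -> float:
    """Win share divided by nominee share: >1 means the genre over-performs."""
    return win_share / nominee_share

fantasy = per_nominee_win_rate(0.50, 0.30)  # ~30% of nominees, ~50% of wins
sf      = per_nominee_win_rate(0.50, 0.70)  # ~70% of nominees, ~50% of wins
print(f"fantasy nominees win {fantasy / sf:.2f}x as often as SF nominees")
```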
