Hugo Award Nomination Ranges, 2006-2015: A Chaos Horizon Report

Periodically, Chaos Horizon publishes extensive reports on various issues relating to SFF awards. One important context for this year’s Hugo controversy is the question of nomination numbers. New readers who are coming into the discussion may be unaware of how strange (for lack of a better word) the process is, and how few votes it has historically taken to get a Hugo nomination, particularly in categories other than Best Novel. As a little teaser of the data we’ll be looking at, consider this number: in 2006, it only took 14 votes to make the Best Short Story Hugo final ballot.

While those numbers have risen steadily over the past decade, they’re still shockingly low: in 2012, it took 36 votes in the Short Story category; in 2013, it took 34 votes; in 2014, we jumped all the way to 43. This year, with the Sad Puppy/Rabid Puppy influence, the number tripled to 132. That huge increase causes an incredible amount of statistical instability, to the point that this year’s data is “garbage” data (i.e. confusing) when compared to other years.

Without a good grasp of these numbers and trends, many of the proposed “fixes”—assuming a fix is needed at all, and that this isn’t something that will work itself out over 2-3 years via the democratic process—might exacerbate some of the oddities already present within the Hugo. The Hugo has often been criticized for being an “insider” award, prone to log-rolling, informal cliques, and the like. While I don’t have strong opinions on any of those charges, I think it’s important to have a good understanding of the numbers to better understand what’s going on this year.

Chaos Horizon is an analytics, not an opinion, website. I’m interested in looking at all the pieces that go into the Hugo and other SFF awards, ranging from past patterns, biases, and oddities, to making future predictions as to what will happen. I see this as a classic multi-variable problem: a lot of different factors go into the yearly awards, and I’ve set myself the task of trying to sort through some (and only some!) of them. Low nominating numbers are one of the defining features of the Hugo award; that’s just how the award has worked in the past. That’s not a criticism, just an observation.

I’ve been waiting to launch this report for a little while, hoping that the conversation around this year’s Hugos would cool off a little. It doesn’t look like that’s going to happen. The sheer flood of posts about this year’s Hugos reveals the desire that various SFF communities have for the Hugo to be the “definitive” SFF award, “the award of record.” File 770 has been the best hub for collecting all these posts; check them out if you want to get caught up on the broader conversation.

I don’t think any award can be definitive. That’s not how an award works, whether it’s the Hugo, the Pulitzer, or the Nobel prize. There are simply too many books published, in too many different sub-genres, to too many different types of fans, for one award to sort through and “objectively” say this is the best book. Personally, I don’t rely on the Hugo or Nebula to tell me what’s going on in the SFF field. I’ve been collating an Awards Meta-List that looks at 15 different SFF awards. That kind of broad view is invaluable if you want to know what’s happening across the whole field, not only in a narrow part of it. Lastly, no one’s tastes are going to be a perfect match for any specific award. Stanislaw Lem, one of my favorite SF authors, was never even nominated for a Hugo or Nebula. That makes those awards worse, not Lem.

Finally, I don’t mean this report to be a critique of the Worldcon committees who run the Hugo award. They have an incredibly difficult (and thankless) job. Wrestling with an award that has evolved over 50 years must be a titanic task. I’d like to personally thank them for everything they do. Every award has oddities; they can’t help but have oddities. Fantasizing about some Cloud-Cuckoo-Land “perfect” SFF award isn’t going to get the field anywhere. This is where we’re at, this is what we have, so let’s understand it.

So, enough preamble: in this report we’ll be looking at the last 10 years of Hugo nomination data, to see what it takes to get onto the final Hugo ballot.

Background: If you already know this information, by all means skip ahead. The official Hugo Awards site provides an intro to the Hugos:

The Hugo Awards, presented annually since 1955, are science fiction’s most prestigious award. The Hugo Awards are voted on by members of the World Science Fiction Convention (“Worldcon”), which is also responsible for administering them.

Every year, the attending or supporting members of the Worldcon go through a process to nominate and then vote on the Hugo awards. There are a great many categories (it’s changed over the years; we’re at 16 Hugo categories + the Campbell Award, which isn’t a Hugo but is voted on at the same time by the same people) ranging from Best Novel down to more obscure things like Best Semiprozine and Best Fancast.

If you’re unfamiliar with the basics of the award, I suggest you consult the Hugo FAQs page for basic info. The important bits for us to know here are how the nomination process works: every supporting and attending member can vote for up to 5 things in each category, and each of those votes counts equally. This means that someone who votes for 5 different Best Novels has 5 times as much influence as a voter who only votes for 1. Keep that wrinkle in mind as we move forward.

The final Hugo ballot is made up of the 5 highest vote-getters in each category, provided that they receive at least 5% of the total votes cast. This 5% rule has come into play several times in the last few years, particularly in the Short Story category.
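The nominating process described above can be sketched in a few lines of code. This is a simplified illustration, not the official tallying procedure: it ignores real-world complications like tie-breaking and eligibility rulings, and the function name is my own.

```python
from collections import Counter

def hugo_finalists(ballots, slots=5, threshold=0.05):
    """Tally nominating ballots for one category and return the finalists.

    Each ballot is a list of up to `slots` distinct works, and every entry
    counts as one equal-weight nomination. Finalists are the top `slots`
    vote-getters that appear on at least `threshold` of all ballots.
    """
    tally = Counter(work for ballot in ballots for work in ballot)
    minimum = threshold * len(ballots)
    eligible = [(work, votes) for work, votes in tally.most_common()
                if votes >= minimum]
    return eligible[:slots]

# A voter listing 5 works casts 5 nominations; one listing 1 casts only 1.
ballots = [["A", "B", "C", "D", "E"], ["A"], ["B", "A"]]
print(hugo_finalists(ballots))
```

Note how the first voter exerts five times the influence of the second, exactly the wrinkle mentioned above.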

Methodology: I looked through the archived Hugo Award nominating stats and manually entered the highest nominee, the lowest nominee, and the total number of ballots (when available) for each Hugo category. Worldcon voting packets are not particularly compatible with data processing software, and it’s an absolute pain to pull the info out. Hugo committees, if you’re listening, create comma separated value files!
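For what it’s worth, a comma-separated layout for this kind of data could be as simple as the following. The column names are my own suggestion, not any official Worldcon format; the sample row uses the 2010 averages reported later in this post.

```python
import csv
import io

# A hypothetical CSV layout for per-year nominating summaries.
sample = """year,avg_ballots,avg_high_nominee,avg_low_nominee
2010,362,88,47
"""

rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["year"], rows[0]["avg_ballots"])
```

A file like this loads into any spreadsheet or stats package in one step, instead of requiring hand transcription from a PDF.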

I chose 10 years as a range for two reasons. First, the data is easily available for that time range, and it gets harder to find for earlier years. The Hugo website doesn’t have the 2004 data readily linked, for instance. While I assume I could find it if I hunted hard enough, it was already tedious enough to enter 10 years of data. Second, my fingers get sore after so much data entry!

Since the Worldcon location and organizing committees change every year, the kind of data included in the voting results packet varies from year to year as well. Most of the time, they tell us the number of nominating ballots per category; some years they don’t. Some have gone into great detail (number of unique works nominated, for instance), but usually they don’t.

Two methodological details: I treated the Campbell as a Hugo for the purposes of this report: the data is very similar to the rest of the Hugo categories, and they show up on the same ballot. That may irk some people. Second, there have been a number of Hugo awards declined or withdrawn (for eligibility reasons). I marked all of those on the Excel spreadsheet, but I didn’t go back and correct those by hand. I was actually surprised at how little those changes mattered: most of the time when someone withdrew, it affected the data by only a few votes (the next nominee down had 20 instead of 22 votes, for instance). The biggest substantive change was a result of Gaiman’s withdrawal last year, which resulted in a 22 vote swing. If you want to go back and factor those in, feel free.

Thanks to all the Chaos Horizon readers who helped pull some of the data for me!

Here’s the data file as of 5/5/2015: Nominating Stats Data. I’ll be adding more data throughout, and updating my file as I go. Currently, I’ve got 450 data points entered, with more to come. All data on Chaos Horizon is open; if you want to run your own analyses, feel free to do so. Dump a link into the comments so I can check it out!

Results: Let’s look at a few charts before I wrap up for today. I think the best way to get a holistic overview of the Hugo Award nominating numbers is to look at averages. Across all the Hugo categories and the Campbell, what was the average number of ballots per category, the votes for the top nominee (i.e. the work that took #1 in the nominations), and the votes for the low nominee (the work that placed #5 in the nominations)? That’s going to set down a broad view and allow us to see what exactly it takes (on average) to get a Hugo nom.

Of course, every category works differently, and I’ll be more closely looking at the fiction categories moving forward. The Hugo is actually many different awards, each with slightly different statistical patterns. This makes “fixing” the Hugos by one change very unlikely: anything done to smooth the Best Novel category, for instance, is likely to destabilize the Best Short Story category, and vice versa.

On to some data:

Table 1: Average Number of Ballots, Average Number of Votes for High Nominee, and Average Number of Votes for Low Nominee for the Hugo Awards, 2006-2015
Average Ballots Table

This table gives us a broad holistic view of the Hugo Award nominating data. What I’ve done is taken all the Hugo categories and averaged them. We have three pieces of data for each year: average ballots per category (how many people voted), average number of votes for the high nominee, and average votes for the low nominee. So, in 2010, an average of 362 people voted in each category, and the top nominee grabbed 88 votes, the low nominee 47.
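The averaging step itself is straightforward. Here is a minimal sketch of how Table 1’s three summary numbers are produced for a single year; the per-category figures below are invented placeholders, not real Hugo data.

```python
from statistics import mean

# Hypothetical per-category nominating stats for one year.
categories = {
    "Best Novel":       {"ballots": 600, "high": 150, "low": 90},
    "Best Short Story": {"ballots": 300, "high": 60,  "low": 30},
}

# Average each of the three figures across all categories.
avg_ballots = mean(c["ballots"] for c in categories.values())
avg_high = mean(c["high"] for c in categories.values())
avg_low = mean(c["low"] for c in categories.values())
print(avg_ballots, avg_high, avg_low)
```

One caveat with averaging: a huge category like Best Novel and a tiny one like Best Fancast count equally, so the averages can sit well away from any individual category’s numbers.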

Don’t worry: we’ll get into specific categories over the next few days. Today, I want the broad view. Let’s look at this visually:

Chart 1 Average Ballots, High Nom, Low Nom

The 2007 results didn’t include the number of ballots per category, hence the missing data in the graph. You can see in this graph that the total number of ballots is fairly robust, but that the number of votes for our nominated works is pretty low. Think about the space between the bottom two lines as the “sweet spot”: that’s how many votes you need to score a Hugo nomination in any given year. If you want to sweep the Hugos, as the Puppies did this year in several categories, you’d want to be above the Average High Nom line. For most years, that’s meant fewer than 100 votes. In fact, let’s zoom in on the High and Low Nom lines:

Chart 2 Average High Nom Low Nom

This graph lets us spot mathematical patterns that are hard to see when just looking at numbers. Take your hand and cover up everything after 2012 on Chart #2: you’ll see a steady linear increase in the high and low ranges over those years, rising from about 60 to 100 for the high nominee and 40 to 50 for the low nominee. Nothing too unusual there. If you take your hand off, you’ll see an exponential increase from 2012-2015: the numbers shoot straight up. That’s a convergence of many factors: the popularity of LonCon, the Puppies, and the increased scrutiny on the Hugos brought about by the internet.

What does all this mean? I encourage you to think about and analyze this data yourself, and certainly use the comments to discuss the charts. Don’t get too heated; we’re a stats site, not a yell-at-each-other site. There are plenty of those out there. :)

Lastly, this report is only getting started. Over the next few days—it takes me a little bit of time to put together such data-heavy posts—I’ll be drilling more deeply into various categories, and looking at things like:
1. How do the fiction categories work?
2. What’s the drop off between categories?
3. How spread out (i.e. how many different works are nominated) are the categories?

What information would be helpful for you to have about the Hugos? Are you surprised by these low average nomination numbers, or are they what you’d expect? Is there a discrepancy between the “prestige” of the Hugo and the nomination numbers?



One response to “Hugo Award Nomination Ranges, 2006-2015: A Chaos Horizon Report”

  1. Barb Caffrey says :

    I’m glad you’re doing this. It’s important to see the statistics.

    While I’m someone who believes fans should have more input into the Hugos, I’m not a Sad Puppy. (I’m standing somewhere in the middle of those long-time insiders in SF&F and the Sad Puppies.)

    The data truly is startling; 2015 definitely is the outlier. But what I’d like to see, personally, is for there to be some awareness of fan frustration from the long-time SF&F insiders. (I know it is not likely to happen. But I still hope for it.)
