Archive | 2015 Hugo Award

Hugo Award Nomination Ranges, 2006-2015, Part 5

Let’s wrap this up by looking at the rest of the data concerning the Short Fiction categories of Novella, Novelette, and Short Story. Remember, these stories receive far fewer votes than the Best Novel category, and they are also less centralized, i.e. the votes are spread out over a broader range of texts. Let’s start by looking at some of those diffusion numbers:

Table 9: Number of Unique Works and Number of Votes per Ballot for Selected Short Fiction Hugo Categories, 2006-2015
Table 9 Diffusion Fiction Categories

Remember, the data is spotty because individual Worldcon committees have chosen not to provide it. Still, the table is very revealing: the Short Story category is far more diffuse (i.e. far more different works are chosen by the voters) than the Novella, Novelette, or Novel categories. To look at this visually:

Chart 10 Number of Unique Works

In any given year, there are more than 3 times as many unique Short Stories nominated as Novellas. Now, I imagine far more Short Stories are published in any given year, but this also means that it’s much easier—much easier—to get a Novella nomination than a Short Story nomination. More voters may make something like the Short Story category more scattered, not more centralized, and this further underscores a main conclusion of this report: each Hugo category works differently.

This diffusion has some pretty profound implications on nominating %. Remember, the Hugos have a 5% rule: if you don’t appear on 5% of the nominating ballots, you don’t make the final ballot. Let’s look at the percentages of the #1 and the #5 nominee for the Short Fiction categories:

Table 10: High and Low Nominating % for Novella, Novelette, and Short Story Hugo Categories, 2008-2015
Table 10 Nominating % for Short Fiction

I think these percentage numbers are our best measure of “consensus.” Look at that 2011 Novella number of 35%: that means Ted Chiang’s Lifecycle of Software Objects appeared on 35% of the ballots. That seems like a pretty compelling argument that SFF fans found Chiang’s novella Hugo worthy (it won, for the record). In contrast, the Short Story category has been flirting with the 5% rule. In 2013, only 3 stories made it above 5%, and in 2014 only 4. You could interpret that as meaning there was not much agreement in those years as to what the “major” short stories were. If you think the 5% bar is too high, keep in mind that each ballot has 5 slots, and each voter often votes for around 3 works. That means that appearing on 5% of the ballots means you only got around (5%/3) ≈ 1.67% of the total vote. If a story can’t manage that much support, is it Hugo worthy?
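That back-of-the-envelope figure is easy to verify. Here’s a quick sketch of the arithmetic, using assumed round numbers (500 ballots, 3 votes per ballot; neither figure comes from the actual packets):

```python
# Back-of-the-envelope check of the 5% rule arithmetic above.
# Assumed illustrative numbers: 500 nominating ballots, ~3 votes per ballot.
ballots = 500
avg_votes_per_ballot = 3            # the post's rough estimate of participation
threshold_votes = 0.05 * ballots    # 5% of ballots = 25 votes needed

total_votes_cast = ballots * avg_votes_per_ballot
share_of_all_votes = threshold_votes / total_votes_cast
print(f"Votes needed to clear 5%: {threshold_votes:.0f}")
print(f"As a share of all votes cast: {share_of_all_votes:.2%}")  # ~1.67%
```

Whatever ballot count you plug in, the share works out to 5%/3, which is the point: the 5% bar is lower than it sounds.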

Now, this may be unfair. There might simply be too many venues, with too many short stories published, for people to narrow their thoughts down. Given the explosion of online publishing and the economics of either paying for stories or getting free stories (I’ll let you guess which one readers like more), there may simply be too much “work” for readers to agree upon their 5 favorite stories in the nominating stage. Perhaps that’s what the final stage is for, and it’s fine for the nominating stage to have low % numbers. Still, these low numbers make these categories very vulnerable to slate sweeps and other undue influences, even including “eligibility” posts. Either way, this should be a point of discussion in any proposed change to the Hugo award.

The landscape of SFF publishing and blogging has changed rapidly over the last 10 years, and the Hugos have not made many changes to adapt to this new landscape. Some categories remain relatively healthy, with clear centralization happening in the nomination stage. Other categories are very diffuse, with little agreement amongst the multiplicity of SFF fans.

To these complexities, we have to think about how much scrutiny the Hugos are under: people—myself included, and perhaps worst of all—comb through the data, looking for patterns, oddities, and problems. No longer is the Hugo a distant award given in relative quiet at the Worldcon, with results trickling out through the major magazines. It’s front and center in our instant reaction world of Twitter and the blogosphere. A great many SFF fans seem to want the Hugo to be the definitive award, to provide some final statement on what SFF works were the best in any given year. I’m not sure any award can do that, or that it’s fair to ask the Hugo to carry all that weight.

So, those rather tepid comments conclude this 5-part study of Hugo nominating ranges. All the data I used is right here, drawn primarily from the Hugo nominating packets and the Hugo Award website: Nominating Stats Data. While there are other categories we could explore, they essentially work along similar lines to what we’ve discussed so far. If you’ve got any questions, let me know, and I’ll try to answer the best I can.


Hugo Award Nomination Ranges, 2006-2015, Part 3

Today, we’ll start getting into the data for the fiction categories in the Hugo: Best Novel, Best Novella, Best Novelette, Best Short Story. I think these are the categories people care about the most, and it’s interesting how differently the four of them work. Let’s look at Best Novel today and the other categories shortly.

Overall, the Best Novel is the healthiest of the Hugo categories. It gets the most ballots (by far), and is fairly well centralized. While thousands of novels are published a year, these are widely enough read, reviewed, and buzzed about that the Hugo audience is converging on a relatively small number of novels every year. Let’s start by taking a broad look at the data:

Table 5: Year-by-Year Nominating Stats Data for the Hugo Best Novel Category, 2006-2015
Table 5 Best Novel Stats

That table lists the total number of ballots for the Best Novel category, the Number of Votes the High Nominee received, and the Number of Votes the Low Nominee (i.e. the novel in fifth place) received. I also calculated the percentage by dividing the High and Low by the total number of ballots. Remember, if a work does not receive at least 5%, it doesn’t make the final ballot. That rule has not been invoked in the previous 10 years of the Best Novel category.

A couple notes on the table. The 2007 packet did not include the number of nominating ballots per category, thus the blank spots. The red flagged 700 indicates that the 2010 Hugo packet didn’t give the # of nominating ballots. They did give percentages, and I used math to figure out the number of ballots. They rounded, though, so that number may be off by +/- 5 votes or so. The other red flags under “Low Nom” indicate that authors declined nominations in those years, both times Neil Gaiman, once for Anansi Boys and another time for The Ocean at the End of the Lane. To preserve the integrity of the stats, I went with the book that originally was in fifth place. I didn’t mark 2015, but I think we all know that this data is a mess, and we don’t even really know the final numbers yet.
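Working backwards from a rounded percentage is just division, but the rounding introduces some uncertainty, which is where that “+/- 5 votes or so” comes from. Here’s a sketch using hypothetical inputs (98 votes reported as 14.0% of ballots; these are not the actual 2010 figures):

```python
# Recovering a ballot count from a rounded percentage, as described above.
# Hypothetical inputs: a nominee with 98 votes reported as 14.0% of ballots,
# where the packet rounded the percentage to one decimal place.
votes = 98
reported_pct = 14.0

point_estimate = votes / (reported_pct / 100)      # ~700 ballots
low = votes / ((reported_pct + 0.05) / 100)        # true pct could be 14.05
high = votes / ((reported_pct - 0.05) / 100)       # ...or as low as 13.95
print(f"Estimate: {point_estimate:.0f} ballots (range {low:.0f} to {high:.0f})")
```

With one-decimal rounding, the window is only a few ballots wide; with whole-number rounding it widens considerably, so the precision of the packet matters.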

Enough technicalities. Let’s look at this visually:

Chart 5 Best Novel Data

That’s a soaring number of nominating ballots, while the high and low ranges seem to be languishing a bit. Let’s switch over to percentages:

Chart 6 Best Novel % Data

Much flatter. Keep in mind I had to shorten the year range for the % graph, due to the missing 2007 data.

Even though the number of ballots is soaring, the % ranges are staying somewhat steady, although we do see year-to-year perturbation. The top nominees have been hovering between 15%-22.5%. Since 2009, every top nominee has managed at least 100 votes. The bottom nominee has been in that 7.5%-10% range, safely above the 5% minimum. Since 2009, those low nominees all managed at least 50 votes, which seems low (to me; you may disagree). Even in our most robust category, 50 readers liking your book can get you onto the Hugo ballot—and they don’t even have to like it the most. It could be the 5th favorite book on their ballot.

With low ranges so low, it doesn’t (or wouldn’t) take much to place an individual work onto the Hugo ballot, whether by slating or other types of campaigning. Things like number of sales (more readers = more chances to vote), audience familiarity (readers are more likely to read and vote for a book by an author they already like) could easily push a book onto the ballot over a more nebulous factor like “quality.” That’s certainly what we’ve seen in the past, with familiarity being a huge advantage in scoring Hugo nominations.

With our focus this close, we see a lot of year-to-year irregularity. Some years are stronger in the Novel category, others weaker. As an example, James S.A. Corey actually improved his percentage total from 2012 to 2013: Leviathan Wakes grabbed 7.4% (71 votes) for the #5 spot in 2012, and then Caliban’s War 8.1% (90 votes) for the #8 spot in 2013. That kind of oddity—more Hugo voters, both in sheer numbers and percentage-wise, liked Caliban’s War, but only Leviathan Wakes gets a Hugo nom—has always defined the Hugo.

What does this tell us? This is a snapshot of the “healthiest” Hugo: rising votes, a high nom average of about 20%, a low nom average of around 10%. Is that the best the Hugo can do? Is it enough? Do those ranges justify the weight fandom places on this award? Think about how this will compare to the other fiction categories, which I’ll be laying out in the days to come.

Now, a few other pieces of information I was able to dig up. The Worldcons are required to give data packets for the Hugos every year, but different Worldcons choose to include different information. I combed through these to find some more vital pieces of data, including Number of Unique Works (i.e. how many different works were listed on all the ballots, a great measure of how centralized a category is) and Total Number of Votes per category (which lets us calculate how many nominees each ballot listed on average). I was able to find parts of this info for 2006, 2009, 2013, 2014, and 2015.

Table 6: Number of Unique Works and Number of Votes per Ballot for Selected Best Novel Hugo Nominations, 2006-2015

Table 6 Best Novel Selected Stats

I’d draw your attention to the ratio I calculated, which is the Number of Unique Works / Number of Ballots. The higher that number is, the less centralized the award is. Interestingly, the Best Novel category is becoming more centralized the more voters there are, not less centralized. I don’t know if that is the impact of the Puppy slates alone, but it’s interesting to note nonetheless. That might indicate that the more voters we have, the more votes will cluster together. I’m interested to see if the same trend holds up for the other categories.
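To make that ratio concrete, here’s a tiny sketch of the calculation. The figures below are invented for illustration (the real numbers are in Table 6); the point is that a rising ballot count with a flat unique-works count drives the ratio down:

```python
# Diffusion ratio described above: unique works per nominating ballot.
# Hypothetical figures for two years; a higher ratio = less centralized.
years = {
    2009: {"unique_works": 335, "ballots": 639},
    2014: {"unique_works": 470, "ballots": 1595},
}
for year, d in sorted(years.items()):
    ratio = d["unique_works"] / d["ballots"]
    print(f"{year}: {ratio:.2f} unique works per ballot")
```

In this made-up example the 2014 ratio is far lower even though more unique works were nominated, which is the "more voters, more clustering" pattern the table suggests.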

Lastly, look at the average number of votes per ballot. Your average Best Novel nominator votes for over 3 works. That seems like good participation. I know people have thrown out the idea of restricting the number of nominations per ballot, either to 4 or even 3. I’d encourage people to think about how much of the vote that would suppress, given that some people vote for 5 and some people only vote for 1. Would you lose 5% of the total vote? 10%? I think the Best Novel category could handle that reduction, but I’m not sure other categories can.

Think of these posts—and my upcoming short fiction posts—as primarily informational. I don’t have a ton of strong conclusions to draw for you, but I think it’s valuable to have this data available. Remember, my Part 1 post contains the Excel file with all this information; feel free to run your own analyses and number-crunching. If you see a trend, don’t hesitate to mention it in the comments.

Hugo Award Nomination Ranges, 2006-2015, Part 2

The Hugo is a strange award. One Hugo matters a great deal—the Best Novel. It sells copies of books, and defines for the casual SFF fan the “best” of the field. The Novella, Novelette, and Short Story also carry significant weight in the SFF field at large, helping to define rising stars and major works. Some of the other categories feel more like insider awards: Editor, Semiprozine. Others feel like fun ways to nod at the SFF fandom (Fanzine). All of them work slightly differently, and there’s a huge drop off between categories. That’s our point of scrutiny today, so let’s get to some charts.

First, let’s get some baseline data out there: the total number of nominating ballots per year. I also included the final voting ballots. Data gets spotty on the Hugo website, thus the blank spots. If anyone has that data, point me in that direction!

Table 2: Total Number of Nominating and Final Ballots for the Hugo Awards, 2006-2015
Table 2 Ballots 2006-2015

I pulled that data off the site, save for the flagged 895, which I grabbed from this File 770 post.

Now, how popular is each category? How many of those total nominators nominate in each category? First up, the averages for 2006-2015:

Table 3: Average Number of Nominating Ballots in the Hugo Award per Category, 2006-2015
Table 3 Number of Nominating Ballots Each Category

I included two averages for you: the 2006-2015 average, and then the 2006-2013 average. This shows how much the mix of Loncon, the Puppy vote, and increased Hugo scrutiny has driven up these numbers.

What this table also shows is how some categories are far more popular than others. Several hundred more people vote in the Novel category than in the next most popular category of Dramatic Long, and major categories like Novella and Novelette only manage around 50% of the Novel nominating vote. That’s a surprising result, and may show that the problem with the Hugo lies not in the total number of voters, but in the difficulty those voters have in voting in all categories. I’ve heard it mentioned that a major problem for the Hugo is “discovery”: it’s difficult to have a good sense of the huge range of novellas, novelettes, short stories, etc., and many people simply don’t vote in the categories they don’t know. It’d be interesting to have a poll: how many SFF readers actually read more than 5 new novels a year? 5 new novellas? I often don’t know if what I’m reading is a novella or a novelette, and does the lack of clarity in these categories hurt turnout?

Let’s look at this visually:

Chart 3 Popularity of Categories

Poor Fan Artist category. That drop off is pretty dramatic across the award. Are there too many categories for people to vote in?

Let’s focus in on 2015, as that’s where all the controversy is this year. I’m interested in the percentage of people who voted for each category, and the number of people who sat out each category.

Table 4: Percentage of Voters and “Missing Votes” per Hugo Category, 2015 Only

Table 4 % Voters in Each Category

The chart at the top tells us a total of 2122 people nominated in the Hugos, but no category managed more than 87% of that total. The missing votes column is 2122 minus the number of people who actually nominated. I was surprised at how many people sat out each category. Remember, each of those people who didn’t vote in Best Novel, Best Short Story, etc., could have voted up to 5 times! In the Novella category alone, 5000 nominations were left on the table. If everyone who nominated in the Hugos had nominated in every category, the Puppy sweeps most likely wouldn’t have happened.
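The “missing votes” arithmetic is straightforward. Here’s a sketch with the Novella category as the example; the 2122 total is from the 2015 packet, but the per-category count of 1083 is a hypothetical stand-in (the exact figure is in Table 4, not quoted in the text):

```python
# The "missing votes" arithmetic above, using Best Novella as an example.
# total_nominators comes from the 2015 packet; the per-category count
# below is hypothetical, chosen only to illustrate the calculation.
total_nominators = 2122
novella_nominators = 1083          # hypothetical per-category count
max_votes_per_ballot = 5

missing_voters = total_nominators - novella_nominators
nominations_left = missing_voters * max_votes_per_ballot
print(f"Sat out Novella: {missing_voters}")
print(f"Nominations left on the table: up to {nominations_left}")
```

With those stand-in numbers, roughly 5000 potential nominations go unused in one category alone, which matches the scale described above.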

Again, let’s take a visual look:

Chart 4 % Voters by Category

That chart reinforces the issue in the awards: less than 50% turnout in major categories like Novella, Short Story, and Novelette.

What to conclude from all of this? The total number of ballots isn’t as important as who actually nominates in each category. Why aren’t people nominating in categories like Short Story? Do the nominations happen too early in the year? Are readers overwhelmed by the sheer variety of works published? Do readers not have strong feelings about these works? Given the furor on the internet over the past few weeks, that seems unlikely. If these percentages could be brought up (I have no real idea how you’d do that), the award would immediately look very different.

Tomorrow, we’ll drill more deeply into the Fiction categories, and look at just how small the nominating numbers have been over the past decade.

Hugo Award Nomination Ranges, 2006-2015: A Chaos Horizon Report

Periodically, Chaos Horizon publishes extensive reports on various issues relating to SFF awards. One important context for this year’s Hugo controversy is the question of nomination numbers. New readers who are coming into the discussion may be unaware of how strange (for lack of a better word) the process is, and how few votes it has historically taken to get a Hugo nomination, particularly in categories other than Best Novel. As a little teaser of the data we’ll be looking at, consider this number: in 2006, it only took 14 votes to make the Best Short Story Hugo final ballot.

While those numbers have risen steadily over the past decade, they’re still shockingly low: in 2012, it took 36 votes in the Short Story category; in 2013, it took 34 votes; in 2014, we jumped all the way to 43. This year, with the Sad Puppy/Rabid Puppy influence, the number tripled to 132. That huge increase causes an incredible amount of statistical instability, to the point that this year’s data is “garbage” data (i.e. confusing) when compared to other years.

Without having a good grasp of these numbers and trends, many of the proposed “fixes”—if a fix is needed at all, and this isn’t something that will work itself out over 2-3 years via the democratic process—might exacerbate some of the oddities already present within the Hugo. The Hugo has often been criticized for being an “insider” award, prone to log-rolling, informal cliques, and the like. While I don’t have strong opinions on any of those charges, I think it’s important to have a good understanding of the numbers to better understand what’s going on this year.

Chaos Horizon is an analytics, not an opinion, website. I’m interested in looking at all the pieces that go into the Hugo and other SFF awards, ranging from past patterns, biases, and oddities, to making future predictions as to what will happen. I see this as a classic multi-variable problem: a lot of different factors go into the yearly awards, and I’ve been setting myself the task of trying to sort through some (and only some!) of them. Low nominating numbers are one of the defining features of the Hugo award; that’s just how the award has worked in the past. That’s not a criticism, just an observation.

I’ve been waiting to launch this report for a little while, hoping that the conversation around this year’s Hugos would cool off a little. It doesn’t look like that’s going to happen. The sheer flood of posts about this year’s Hugos reveals the desire that various SFF communities have for the Hugo to be the “definitive” SFF award, “the award of record.” File 770 has been the best hub for collecting all these posts; check them out if you want to get caught up on the broader conversation.

I don’t think any award can be definitive. That’s not how an award works, whether it’s the Hugo, the Pulitzer, or the Nobel prize. There are simply too many books published, in too many different sub-genres, to too many different types of fans, for one award to sort through and “objectively” say this is the best book. Personally, I don’t rely on the Hugo or Nebula to tell me what’s going on in the SFF field. I’ve been collating an Awards Meta-List that looks at 15 different SFF awards. That kind of broad view is invaluable if you want to know what’s happening across the whole field, not only in a narrow part of it. Lastly, no one’s tastes are going to be a perfect match for any specific award. Stanislaw Lem, one of my favorite SF authors, was never even nominated for a Hugo or Nebula. That makes those awards worse, not Lem.

Finally, I don’t mean this report to be a critique of the Worldcon committees who run the Hugo award. They have an incredibly difficult (and thankless) job. Wrestling with an award that has evolved over 50 years must be a titanic task. I’d like to personally thank them for everything they do. Every award has oddities; they can’t help but have oddities. Fantasizing about some Cloud-Cuckoo-Land “perfect” SFF award isn’t going to get the field anywhere. This is where we’re at, this is what we have, so let’s understand it.

So, enough preamble: in this report we’ll be looking at the last 10 years of Hugo nomination data, to see what it takes to get onto the final Hugo ballot.

Background: If you already know this information, by all means skip ahead. The Hugo Awards site itself provides an intro to the Hugos:

The Hugo Awards, presented annually since 1955, are science fiction’s most prestigious award. The Hugo Awards are voted on by members of the World Science Fiction Convention (“Worldcon”), which is also responsible for administering them.

Every year, the attending or supporting members of the Worldcon go through a process to nominate and then vote on the Hugo awards. There are a great many categories (it’s changed over the years; we’re at 16 Hugo categories + the Campbell Award, which isn’t a Hugo but is voted on at the same time by the same people) ranging from Best Novel down to more obscure things like Best Semiprozine and Best Fancast.

If you’re unfamiliar with the basics of the award, I suggest you consult the Hugo FAQs page for basic info. The important bits for us to know here are how the nomination process works: every supporting and attending member can vote for up to 5 things in each category, and each of those votes counts equally. This means that someone who votes for 5 different Best Novels has 5 times as much influence as a voter who only votes for 1. Keep that wrinkle in mind as we move forward.

The final Hugo Ballot is made up of the 5 highest vote getters in each category, provided that they reach at least 5% total votes. This 5% rule has come into play several times in the last few years, particularly in the Short Story category.
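The nomination cutoff is simple enough to sketch in code. This is a simplified model (it ignores ties and the other edge cases in the real WSFS rules), with made-up vote counts to show the 5% rule kicking in:

```python
# Simplified sketch of the nomination cutoff described above: the final
# ballot is the top 5 vote-getters, dropping anything under 5% of ballots.
# (The real WSFS rules also handle ties; this ignores that.)
def final_ballot(vote_counts, total_ballots):
    """vote_counts: dict mapping work -> number of nominating votes."""
    threshold = 0.05 * total_ballots
    ranked = sorted(vote_counts.items(), key=lambda kv: kv[1], reverse=True)
    return [work for work, votes in ranked[:5] if votes >= threshold]

# Hypothetical category: 600 ballots, so the cutoff is 30 votes.
votes = {"A": 120, "B": 95, "C": 60, "D": 33, "E": 28, "F": 21}
print(final_ballot(votes, 600))  # ['A', 'B', 'C', 'D'] -- "E" misses the 5% bar
```

Note how the 5% rule can leave a category with fewer than 5 finalists, which is exactly what happened in the Short Story category in recent years.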

Methodology: I looked through the Hugo Award nominating stats, archived at the Hugo Awards website, and manually entered the highest nominee, the lowest nominee, and the total number of ballots (when available) for each Hugo category. Worldcon voting packets are not particularly compatible with data processing software, and it’s an absolute pain to pull the info out. Hugo committees, if you’re listening, create comma separated value files!

I chose 10 years as a range for two reasons. First, the data is easily available for that time range, and it gets harder to find for earlier years. The Hugo website doesn’t have the 2004 data readily linked, for instance. While I assume I could find it if I hunted hard enough, it was already tedious enough to enter 10 years of data. Second, my fingers get sore after so much data entry!

Since the Worldcon location and organizing committees change every year, the kind of data included in the voting results packet varies from year to year as well. Most of the time, they tell us the number of nominating ballots per category; some years they don’t. Some have gone into great detail (number of unique works nominated, for instance), but usually they don’t.

Two methodological details: I treated the Campbell as a Hugo for the purposes of this report: the data is very similar to the rest of the Hugo categories, and they show up on the same ballot. That may irk some people. Second, there have been a number of Hugo awards declined or withdrawn (for eligibility reasons). I marked all of those on the Excel spreadsheet, but I didn’t go back and correct those by hand. I was actually surprised at how little those changes mattered: most of the time when someone withdrew, it affected the data by only a few votes (the next nominee down had 20 instead of 22 votes, for instance). The biggest substantive change was a result of Gaiman’s withdrawal last year, which resulted in a 22 vote swing. If you want to go back and factor those in, feel free.

Thanks to all the Chaos Horizon readers who helped pull some of the data for me!

Here’s the data file as of 5/5/2015: Nominating Stats Data. I’ll be adding more data throughout, and updating my file as I go. Currently, I’ve got 450 data points entered, with more to come. All data on Chaos Horizon is open; if you want to run your own analyses, feel free to do so. Dump a link into the comments so I can check it out!

Results: Let’s look at a few charts before I wrap up for today. I think the best way to get a holistic overview of the Hugo Award nominating numbers is to look at averages. Across all the Hugo categories and the Campbell, what were the average number of ballots per category, the votes for the top nominee (i.e. the work that took #1 in the nominations), and the votes for the low nominee (the work that took #5 in the nominations)? That’s going to set down a broad view and allow us to see what exactly it takes (on average) to get a Hugo nom.

Of course, every category works differently, and I’ll be more closely looking at the fiction categories moving forward. The Hugo is actually many different awards, each with slightly different statistical patterns. This makes “fixing” the Hugos by one change very unlikely: anything done to smooth the Best Novel category, for instance, is likely to destabilize the Best Short Story category, and vice versa.

On to some data:

Table 1: Average Number of Ballots, Average Number of Votes for High Nominee, and Average Number of Votes for Low Nominee for the Hugo Awards, 2006-2015
Average Ballots Table

This table gives us a broad holistic view of the Hugo Award nominating data. What I’ve done is taken all the Hugo categories and averaged them. We have three pieces of data for each year: average ballots per category (how many people voted), average number of votes for the high nominee, and average votes for the low nominee. So, in 2010, an average of 362 people voted in each category, and the top nominee grabbed 88 votes, the low nominee 47.

Don’t worry: we’ll get into specific categories over the next few days. Today, I want the broad view. Let’s look at this visually:

Chart 1 Average Ballots, High Nom, Low Nom

2007 didn’t include the number of ballots per category, thus the missing data in the graph. You can see in this graph that the total number of ballots is fairly robust, but that the number of votes for our nominated works are pretty low. Think about the space between the bottom two lines as the “sweet spot”: that’s how many votes you need to score a Hugo nomination in any given year. If you want to sweep the Hugos, as the Puppies did this year in several categories, you’d want to be above the Average High Nom line. For most years, that’s meant fewer than 100 votes. In fact, let’s zoom in on the High and Low Nom lines:

Chart 2 Average High Nom Low Nom

This graph lets us spot mathematical patterns that are hard to see when just looking at numbers. Take your hand and cover up everything after 2012 on Chart #2: you’ll see a steady linear increase in the high and low ranges over those years, rising from about 60 to 100 for the high nominee and 40 to 50 for the low nominee. Nothing too unusual there. If you take your hand off, you’ll see an exponential increase from 2012-2015: the numbers shoot straight up. That’s a convergence of many factors: the popularity of LonCon, the Puppies, and the increased scrutiny on the Hugos brought about by the internet.

What does all this mean? I encourage you to think about and analyze this data yourself, and certainly use the comments to discuss the charts. Don’t get too heated; we’re a stats site, not a yell-at-each-other site. There are plenty of those out there. :)

Lastly, this report is only getting started. Over the next few days—it takes me a little bit of time to put together such data-heavy posts—I’ll be drilling more deeply into various categories, and looking at things like:
1. How do the fiction categories work?
2. What’s the drop off between categories?
3. How spread out (i.e. how many different works are nominated) are the categories?

What information would be helpful for you to have about the Hugos? Are you surprised by these low average nomination numbers, or are they what you’d expect? Is there a discrepancy between the “prestige” of the Hugo and the nomination numbers?

Hugo Best Novel Nominees: Amazon and Goodreads Numbers, May 2015

It’s been a busy month, but Chaos Horizon is slowly returning to its normal work: tracking various data sets regarding the Nebula and Hugo awards. Today, let’s take a look at where the 5 Hugo Best Novel nominees stand in terms of # of Goodreads ratings, # of Amazon ratings, and ratings score.

So far, I’ve not been able to find a clear (or really any) correlation between this data and the eventual winner of the Hugo award. In my investigations of this data—see here, here, and here—I’ve been frustrated with how differently Amazon, Goodreads, and Bookscan treat individual books. It’s also worth noting that I don’t think Amazon or Goodreads measure some abstract idea of “quality,” but rather a more nebulous and subjective concept of “reader satisfaction.” You definitely see that in something like the Butcher book: since it’s #15 in a series, everyone who doesn’t like Butcher gave up long ago. All you have left are fans, who are prone to ranking Butcher highly.

As a final note, Jason Sanford leaked the Bookscan numbers for the Hugo nominees in early April. Check those out to see how Bookscan reports this data.

On to the data! Remember, these are the 2015 Hugo Best Novel nominees:

Skin Game, Jim Butcher
Ancillary Sword, Ann Leckie
The Goblin Emperor, Katherine Addison
The Three-Body Problem, Cixin Liu
The Dark Between the Stars, Kevin J. Anderson

Number of Goodreads Ratings for the Best Novel Hugo Nominees, May 2015

Hugo Goodreads May2015

This chart gives you how many readers on Goodreads have rated each book; that’s a rough measure of popularity, at least for the self-selected Goodreads audience.

Goodreads shows Skin Game as having a massive advantage in popularity, with almost 5 times as many ratings as Leckie’s book. Given that Skin Game is #15 in the series, that’s an impressive retention of readers. Of course, any popularity advantage for Butcher has to be weighed against the pro and anti Sad/Rabid Puppy effect. Also don’t neglect the difficulty that Hugo voters will have in jumping into #15 of a series.

While Liu is still running behind Addison and Leckie, keep in mind that Liu’s book came out a full seven months after Addison’s book and a month after Leckie’s. Still, the Hugo doesn’t adjust for things like that: your total number of readers is your total number of readers. That’s why releasing your book in November can put you at a disadvantage in these awards. Still, Liu picked up a huge % of readers this month; if that momentum keeps up, that speaks well for his chances. Anderson’s number is very low when compared to the others; that’s probably a mix of Anderson selling fewer copies and Anderson’s readers not using Goodreads.

Switching to Amazon numbers:

Number of Amazon Ratings for the Best Novel Hugo Nominees, May 2015

Hugo Amazon May2015

I don’t have as much data here because I haven’t been collecting it as long. I foolishly hoped that Goodreads data would work all by itself . . . it didn’t. Butcher’s Amazon advantage is even larger than his Goodreads advantage, and Liu leaps all the way from 4th place in Goodreads data to second place in Amazon data. This shows the different ways that Goodreads and Amazon track the field: Goodreads tracks a younger, more female audience (check the QuantCast data), while Amazon has historically slanted older and more gender-neutral. Your guess is as good as mine as to which audience is more predictive of the eventual Hugo outcome.

Lastly, the rankings themselves:

Goodreads and Amazon Rating Scores for the Best Novel Hugo Nominees, May 2015
Hugo Goodreads Amazon Scores May2015

Let me emphasize again that these scores have never been predictive for the Hugo or Nebula: being rated higher on Amazon or Goodreads has not equated to winning. It's interesting that the Puppy picks are the outliers on Goodreads, one higher and one lower, with Leckie/Addison/Liu all within .05 points of each other. Amazon tends to be more generous with its scoring, although Butcher's 4.8 is very high.

The 2015 Hugo year is going to be largely useless when it comes to data: the unusual circumstances that led to this ballot (the Sad and Rabid Puppy campaigns, then various authors declining Best Novel nominations, and now the massive surge in voting numbers) mean that this data is going to be inconsistent with previous years. I think it's still interesting to look at, but take all of this with four or five teaspoons of salt. Still, I'll be checking in on these numbers every month until the awards are given, and it'll be interesting to see what changes happen.

Ballot Changes: Updating the Hugo Math

In the past few days, there have been four changes to the Hugo ballot: two authors (Marko Kloos and Annie Bellet) have dropped out, and two more authors/artists have been ruled ineligible (a John C. Wright novelette and Professional Artist John Eno). All the details can be found at the official Hugo Awards Website, and File 770 has a nice overview article.

Since Chaos Horizon is a Hugo and Nebula analytics site, these changes give us some new information about the voting totals, campaigns, and block sizes of the 2015 Hugos. In my two previous Hugo math posts—How Many Puppies and Margin of Victory—I tried to use the information we already have to estimate (and it’s only an estimate) the size of the effective voting blocks and the margins of victory. With this new data, I can update those estimates. Let’s go through the changes one by one to see what we can learn:

Best Novel Category: Cixin Liu’s novel The Three-Body Problem replaced Marko Kloos’s withdrawn novel Lines of Departure. Kloos’s novel appeared on both the Sad and Rabid Puppy slates. The range of votes prior to Kloos’s withdrawal was 256-387. The new range of votes is 212-387.

This doesn't tell us that much. We know that The Three-Body Problem received 212 votes. We can't assume that Kloos was the 256-vote getter (that could have been Addison), so this doesn't help us estimate the size of the combined Rabid + Sad Puppy effective block vote in the Novel category. I've been using the term "effective block" because the current data doesn't allow us to distinguish between, say, 300 Puppy voters each voting for 67% of the slate and 200 voters voting for 100% of it. Since there is such a range between the top and bottom of the votes (even in categories the Puppies swept), I think you have to assume that not every Puppy voter voted straight down the line.

However, we can tentatively conclude that neither the Rabid Puppy nor the Sad Puppy alone was able to reach 212 votes. The Puppy slates diverged with their last choice of novel: the Sad Puppies had Charles Gannon, and the Rabid Puppies had Brad Torgersen. However, it may be that both of them were offered the Hugo spot and turned it down for various reasons. Until we have more info, we’ll have to chalk up a “learned little” for this category.

Best Novelette Category: Now we can learn something. John C. Wright's story "Yes, Virginia, There is a Santa Claus" was ruled ineligible and replaced by "The Day the World Turned Upside Down" by Thomas Olde Heuvelt. That adjusted the category range from 165-267 to 72-267. Wright was a Rabid Puppy pick; Heuvelt was not on either Puppy slate. We know that "The Day the World Turned Upside Down" received 72 votes. Since the Puppies swept this category, we know that 165 had to be a Puppy text.

This category gives us our first definitive sense of the "margin of victory," or how convincingly the Puppies swept the Hugos: 165 for the lowest Puppy vote against 72 for the highest non-Puppy vote. That's a huge gap of 93 votes; the lowest Puppy nominee drew more than double Heuvelt's total.

In my prior post, using 2014 stats which showed Mary Robinette Kowal getting 118 votes for "Lady Astronaut of Mars," I estimated the margin as much lower than that (47, to be exact). However, Kowal's numbers may have been inflated (her story was ruled ineligible the year before, so she got plenty of press and support). Torgersen was second in the 2014 Novelette nominations with 92 votes (he was a Puppy pick), and Aliette de Bodard was third with 79. That 79 is in line with this year's total, which may indicate that the non-Puppy pool of Hugo nominators did not increase this year. Catherynne Valente's "Fade to White" was the most-nominated 2013 Novelette with 89 votes. 89 down to 72 is quite a decline, considering how many more voters there were this year.

So, we can say, at least in the Novelette category, the Puppy margin of victory was 93 votes, in a category where the most popular work usually has less than 100 votes.

Best Short Story Category: Annie Bellet withdrew her Rabid and Sad Puppy supported story “Goodnight Stars,” which was replaced by the Sad Puppy supported story “A Single Samurai” by Steven Diamond. The range changed from 151-230 to 132-226. We know “A Single Samurai” must have received 132 votes, and we also know that “Goodnight Stars” must have received 230 votes (since the top range also changed). This category was swept by the Puppies.

It's interesting that the top of the range in this category (230) is so much lower than in other Puppy-swept categories: 338 for Novella, 267 for Novelette (back when it was swept). The block vote clearly fell off from category to category.

This is our first real chance to see the Sad and Rabid Puppy votes separated. The Sad Puppy slate for this category chose stories by Bellet, Grey, English, Antonelli, and Diamond. The Rabid Puppies kept only a few of those: the Bellet, the English, and the Antonelli. Vox Day filled out his slate with two different stories, one by Wright and one by Rzasa; both of those made the final ballot. If we break this down, we can start to separate some of the math out:

Short Story Category:
#1 in category: 230 votes: "Goodnight Stars" by Bellet (we know this because she withdrew and the range changed; the story appeared on both the RP and SP slates).
#2 in category: 226 votes, don't know the text (likely the Antonelli or English, because they appeared on both the RP and SP slates).
#3 in category: don’t know vote total, don’t know the story (but probably the English or Antonelli, for reasons stated above)
#4 in category: don’t know vote total, don’t know the story (but probably the Wright or Rzasa story, since they only appeared on the RP ballot)
#5 in category: 151 votes, don’t know the story (but probably the Wright or Rzasa story, because of reasons stated above)
#6 in category: 132 votes, “A Single Samurai” (appeared only on the SP slate)

With some fancy subtracting and hopefully some sound logic, here are the conclusions I’m reaching:
The SP effective vote in this category was 132, because that’s the number of votes their nominee “A Single Samurai” managed. They had one more suggestion, a story by Megan Grey, that must have placed below that.

The RP effective vote in this category was likely 151. I think it's logical to assume that stories #4 and #5 on the ballot were the RP-only picks, based on the assumption that any pick on both the RP and SP slates would have more votes, since it could draw from both pools. That's an assumption, not a fact, but I think a reasonable one. Disagree if you wish! What's interesting is that 132 + 151 = 283, well above the 230 votes the Bellet story received. This is another good indicator that neither the SP-influenced voters nor the RP-influenced voters were moving exactly in lockstep. If every SP voter who voted for "A Single Samurai" and every RP voter who voted for either Wright or Rzasa had also voted for Bellet, her vote total would have been much larger. This is all complicated by possible interactions between the two groups (i.e. one person might have picked some works from the SP ballot and some from the RP ballot), and we won't know more precisely until the full data comes out.

Best Professional Artist: John Eno was ruled ineligible. He was a Sad and Rabid Puppy pick. He was replaced by Kirk DouPonce, a Rabid Puppy pick. There was no alternative Sad Puppy pick to elevate (they only chose 4 artists in the category). The range changed from 136-188 to 118-188.

So we know Kirk DouPonce received 118 votes. You might want to begin thinking about that number (118) as the low end of the Rabid Puppy effective block vote. That would be consistent with the Short Story category results: 118 wouldn’t quite have been enough to push the Rzasa story onto the ballot. Still, 118 votes is a huge number, and would have been enough to sweep most Hugo categories without any support from the Sad Puppies. There were two slates, both of which were large enough to effectively dominate most Hugo categories.

All of this will be greatly clarified when we get the final data after the Hugos are announced in August. Remember, these are estimates based on limited data, and they're probably best thought of as ranges (throw a +/- 50 on them if you want) rather than absolutes.

Anything else we can figure out at this point? EDIT: For easy reference, here are the last 4 years of Hugo data, taken from the Hugo website:

As Kerani points out in the comments, so much is changing so fast in this year's Hugo data—increased voters, withdrawals, disqualifications, block votes, etc.—that this data set (particularly in its incomplete form) can't be considered to have much in common with previous data sets. Any comparisons we draw have to be considered speculative. But, since our field is speculative fiction, why not? Remember, Chaos Horizon data analysis is more for fun than anything else, and provides, in my opinion, a useful alternative to some of the more opinion-driven sites out there. Take everything with a grain of salt, and use your own logic and analysis to make sense of what's happening.

Margin of Victory: Breaking Down the Hugo Math

In my last post, I looked at the ranges of votes in the categories swept by the Puppy vote to estimate the effective min/max of the Puppy numbers. In this post, I’ll ask this question: How close were these categories to being sweeps? How many additional “traditional” Hugo voters (i.e. non-block voters distributed in the ways Hugo votes have been in the past) would it have taken to prevent a sweep?

I think this information is important because so many various proposals to “fix” the Hugos are currently floating around the web. Even if you accept the premise that the Hugos need to be fixed, what exactly are you fixing? One flaw in the Hugo system is that proposals to change the voting patterns—if such changes are desirable or needed—have to be proposed at the WorldCon, and that’s before the voting results are broadly known. Thus people are working in the dark: they might be trying to “fix” something without knowing exactly the scope (or even the definition) of “the problem.” That’s a recipe for hasty and ineffective change.

What we'll do today is compare the lowest Puppy nominee in the 6 swept categories (Novella, Novelette, Short Story, Best Related Work, Editor Short Form, and Editor Long Form) to the highest non-Puppy nominee from 2014. I'll subtract those two values to find a "margin of victory" for each of those 6 categories. That gives us an estimate of how many more votes it would have taken to get one (and only one) non-Puppy work onto the final ballot. To overcome more of the Puppy ballot would take more votes.

Now, this plays a little loose with the math: we don't know for sure that the highest non-Puppy nominee from 2015 would have received the same number of votes as the 2014 nominee. In fact, it's highly unlikely they'd be exactly the same. Still, it may be 5 higher or 10 lower, but somewhere in a reasonable range; that's enough for an eyeball estimate of how pronounced the victories were. On to the stats:

Novella:
Lowest 2015 nominee: 145 votes
Highest 2014 nominee: 143 votes (Valente's "Six-Gun Snow White")
Margin of Sweep: 2 votes

Novelette:
Lowest 2015 nominee: 165 votes
Highest 2014 nominee: 118 votes (Kowal's "Lady Astronaut of Mars")
Margin of Sweep: 47 votes

Short Story:
Lowest 2015 nominee: 151 votes
Highest 2014 nominee: 78 votes (Samatar’s “Selkie Stories are For Losers”)
Margin of Sweep: 73 votes

Best Related Work:
Lowest 2015 nominee: 206 votes
Highest 2014 nominee: 89 votes (VanderMeer’s Wonderbook)
Margin of Sweep: 117 votes

Best Editor Short Form:
Lowest 2015 nominee: 162 votes
Highest 2014 nominee: 182 votes (John Joseph Adams; Neil Clarke was second with 115)
Margin of Sweep: -20 votes (if JJA had gotten the same support in 2015 as he did in 2014, he would have placed on the ballot by 20 votes)

Best Editor Long Form:
Lowest 2015 nominee: 166 votes
Second Highest 2014 nominee: 118 votes (Toni Weisskopf was highest with 169 votes, but she was a Sad Puppy 2 nominee, albeit one with plenty of support outside the Puppies; Ginjer Buchanan was second with 118 votes)
Margin of Sweep: 48 votes

What does all that data mean? If 2015 had played out like 2014 for the non-Puppy candidates, the highest non-Puppy candidate would have missed the ballot by this much (I'm repeating the data to make it easier to find). This many more votes for the highest non-Puppy would have broken up the sweep by one nominee.
Novella: 2 votes
Novelette: 47 votes
Short Story: 73 votes
Best Related Work: 117 votes
Editor Short Form: -20 votes
Editor Long Form: 48 votes
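The margin-of-sweep subtraction above is simple enough to reproduce directly. Here is a minimal sketch using the vote totals quoted in this post (the names and structure are mine, not anything official):

```python
# Margin of sweep: lowest 2015 Puppy nominee minus highest 2014 non-Puppy
# nominee, per swept category. All vote totals are taken from the post above.
lowest_2015 = {
    "Novella": 145,
    "Novelette": 165,
    "Short Story": 151,
    "Best Related Work": 206,
    "Editor Short Form": 162,
    "Editor Long Form": 166,
}
highest_2014_non_puppy = {
    "Novella": 143,            # Valente, "Six-Gun Snow White"
    "Novelette": 118,          # Kowal, "Lady Astronaut of Mars"
    "Short Story": 78,         # Samatar, "Selkie Stories are For Losers"
    "Best Related Work": 89,   # VanderMeer, Wonderbook
    "Editor Short Form": 182,  # John Joseph Adams
    "Editor Long Form": 118,   # Buchanan (second; Weisskopf was a Puppy pick)
}

# A negative margin means the 2014-level non-Puppy vote would have made the ballot.
margins = {cat: lowest_2015[cat] - highest_2014_non_puppy[cat]
           for cat in lowest_2015}
for cat, margin in margins.items():
    print(f"{cat}: {margin:+d} votes")
```

Running this reproduces the six margins listed above, including the -20 for Editor Short Form.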

The Puppy slates didn't dominate every category as thoroughly as the final results make it seem. We won't know the exact margins until August, but I imagine a non-Puppy Novella was very close to making the final ballot. Something like Ken Liu's "The Regular" probably needed only a few more votes to get on. Editor Short Form was probably even closer: John Joseph Adams must have lost support, because if he had kept his votes from last year, he would have gotten in.

The other categories were crushed. 73 for Short Story. 117 for Best Related. Given that Samatar’s story from 2014 only managed 78 votes and you needed 151 to make it this year, that’s an almost impossible climb to get just one non-Puppy story onto the slate. I think this reveals a major problem: the Puppies dominated these categories not only because of their organization, but because of the general lack of voting in those categories. EDIT (4/8/15): Tudor usefully pointed out in the comments that this is probably better understood as a diffusion of the Short Story vote (i.e. the vote is spread out across many stories), rather than a lack of Short Story voters. Thanks for the correction, Tudor, and I always encourage people to push back against any Chaos Horizon statements they think are wrong, incorrect, or misleading. The more eyes we have on stats, the better.

Let’s run some rough math to see how many more voters you would need to get that one work onto the slate. Here’s how I’ll calculate new voters (you may disagree with this formula). I’m assuming you’re bringing new voters into the Hugo process, and that those voters vote in a similar fashion to the past. So, to generate 2 more votes for the top non-Puppy Novella, you’d need to account for the % of voters that bother to vote for the Novella category (2122 people voted in the 2015 Hugo, but only 1083 voted in the Novella category, for 51%). So, to bring 50 new people into the Novella category, you’d need to bring 100 new people into the Hugo voting process.

Next, you need to account for how many people voted for the #1 non-Puppy work. In 2014, that was 16.9% (that’s the percentage “Six-Gun Snow White” got). If we assume a similar distribution, we wind up with 2 votes needed / 51% voting for the category / 16.9% voting for the number one work. That yields 23 new votes needed.

For the Novella category, that's definitely doable. In fact, if just a fraction of the roughly 700 people who voted in the Best Novel category but sat out the Novella category had also voted for novellas, that would have added one non-Puppy text back to the ballot.

Let’s run the math for the six categories:
Novella: 23 new voters needed
Novelette: 597 new voters needed (47/48.5%/16.2%)
Short Story: 1450 new voters needed (73/55%/9.1%)
Best Related Work: 1830 new voters needed (117/54%/11.8%)
Short Form Editor: no new voters needed; the data shows a non-Puppy nominee would have made it based on last year's patterns
Long Form Editor: 765 new voters needed (48/33.5%/18.7%)
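The formula above can be sketched in a few lines. The function name is mine, the inputs come straight from the post, and small differences from the figures quoted above are expected because the post rounds its intermediate percentages:

```python
# Rough "new voters needed" estimate: the margin of sweep, divided by the
# share of Hugo voters who vote in the category, divided by the share of the
# category vote the top non-Puppy work drew in 2014.
def new_voters_needed(votes_needed, category_share, top_work_share):
    return votes_needed / category_share / top_work_share

categories = {
    # category: (margin of sweep, % voting in category, % for #1 non-Puppy work)
    "Novella": (2, 0.51, 0.169),
    "Novelette": (47, 0.485, 0.162),
    "Short Story": (73, 0.55, 0.091),
    "Best Related Work": (117, 0.54, 0.118),
    "Editor Long Form": (48, 0.335, 0.187),
}
for cat, args in categories.items():
    print(f"{cat}: ~{round(new_voters_needed(*args))} new voters")
```

Editor Short Form is omitted because its margin was negative: no new voters were needed there.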

That’s a lot of new voters, and remember this is the number of voters needed to place only one non-Puppy work onto the final nominee list. The Novella and the Short Form Editor categories were close to not being sweeps, but the others were soundly overwhelmed. Think of how the Novel category worked: with much more excitement (700 more voters), the Puppies still placed 3 works onto the list. Finally, I don’t know how you’d add 1000+ new voters without also adding more Puppy voters.

Just for grins, let’s imagine what it would take to eliminate all Puppy influence in the Short Story category. To do that, we’d have to elevate John Chu’s “The Water that Falls from Nowhere” onto this year’s slate based on last year’s percentage vote. Chu—who won the Hugo—managed 43 nominations. To beat the highest 2015 Short Story puppy (230), he’d have to add 187 votes. Chu managed only 5% of the 2014 vote, and 55% of the total Hugo voters voted in the Short Story category. That gives us (187/55%/5%) = 6800 voters. So, if the Hugos added a mere 6,800 voters (and managed to keep all new Puppy votes out!), the Puppies would have been shut out of the Short Story category.
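The Chu scenario is the same formula with different inputs; a quick check of the arithmetic:

```python
# "Shut the Puppies out of Short Story entirely": Chu's 2014 winner would
# need enough nominations to pass the top 2015 Puppy story. Inputs are from
# the post above.
votes_needed = 230 - 43    # 187 more nominations for Chu to beat 230
category_share = 0.55      # share of Hugo voters voting in Short Story
top_share = 0.05           # Chu's share of the 2014 category vote

new_voters = votes_needed / category_share / top_share
print(round(new_voters))   # matches the 6,800 figure above
```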

Of course, counter-slates could boost the % of votes going to authors, and there are other solutions that could tilt the field (fewer nominees per voter, more works per slate). What this post goes to show, though, is how organized and enthusiastic the 2015 Puppy-vote was: they not only swept categories, they swept categories decisively.

How Many Puppy Votes: Breaking Down the Hugo Math

The dust is just beginning to settle on the 2015 Hugo nominations. Here's the official Hugo announcement and list of finalists. If you're completely in the dark: two interacting slates—one called Sad Puppies, led by Brad Torgersen, another called Rabid Puppies, led by Vox Day—largely swept the 2015 Hugo nominations.

The internet has blown up with commentary on this issue. I’m not going to get into the politics behind the slates here; instead, I want to look at their impact. Remember, Chaos Horizon is an analytics, not an editorial website. If you’re looking for more editorial content, this mega-post on File 770 contains plenty of opinions from both sides of this issue.

What I want to do here on Chaos Horizon today is look at the nominating stats. Using those, can we estimate how many Sad Puppies? How many Rabid Puppies?

For those who want to skip the analysis: my conclusion is that the total Puppy influenced vote doubled from 2014 to 2015 (from 182 to somewhere in the 360 range), and that this resulted in a max Puppy vote of 360, and a minimum effective Puppy block of 150 votes. We don’t yet have data that makes it possible to split out the Rabid/Sad effect.

Let’s start with some basic stats: there were 2,122 nominating ballots, up from 1,923 nominating ballots last year, making for a difference of (2,122-1,923) = 199 ballots. Given that Spokane isn’t as attractive a destination as London for WorldCon goers, what is the cause of that rise? Are those the new Puppy voters, Sad and Rabid combined?

If you take last year's Sad Puppy total, you'd wind up with 184 for the max Puppy vote (that's the number of voters who nominated Correia's Warbound, the top Sad Puppy 2 vote-getter). If we add 199 to that, we get a temporary estimate of 383 for the max 2015 Puppy vote. We'll find that this rough estimate is within spitting distance of my final conclusion.

Here's a screenshot that's been floating around on Twitter, showing the number of nominating votes per category. Normally, this wouldn't help us much, because we couldn't correlate min and max votes to any specific items on the ballot. However, since the Puppies swept several categories, we can use these ranges to bound the total Puppy vote in the categories they swept. With me so far?

Hugo Nominating Stats 2015

Click on that to make it bigger. As you can see, that’s from the Sasquan announcement.

The Puppies swept Best Novella, Best Novelette, Best Short Story, Best Related Work, Best Editor Long Form, and Best Editor Short Form. This means all the votes shown in these categories are Puppy votes. Let me add another wrinkle before we continue: at times, the Sad and Rabid voters were in competition, nominating different texts for their respective slates. I’ll get to that in a second.

So, if we were to look at the max vote in those six categories, we’d get a good idea of the “maximum Puppy impact” for 2015:
Novella: 338 high votes
Novelette: 267 high votes
Short Story: 230 high votes
Related Work: 273 high votes
Editor Short Form: 279 high votes
Editor Long Form: 368 high votes

Presumably, those 6 "high" vote-getters were works that appeared on both the Sad and Rabid slates. You see quite a bit of variation there; that's consistent with how Sad Puppies worked last year: the most popular Puppy authors got more votes than the less popular ones. See my post here for data on that issue. Certain categories (Novel, for instance) are also much more popular than others.

At the top end, though, Editor Long Form grabbed 368 votes, within shouting distance of the Novella high of 338 and even close to the Novel high of 387. I think we can safely put the top end of the Puppy vote at around 360: I'm knocking a few off because not every vote for every text had to come from a Puppy influence. I'll label that the max Puppy vote, the maximum possible reach of the combined Rabid and Sad Puppy slates.

Why was there such a drop between the 368 votes for Editor Long Form and the mere 230 votes for Short Story, when both were Puppy-swept categories? Because not every Puppy voter was a straight slate voter: some used the slate as a guide, and only marked the texts they liked/found worthy/had read. Some Puppy voters appear to have skipped the Short Story category entirely. That's exactly what we saw last year: a rapid falling off in the Puppy vote based on author and category popularity. It wasn't as visible this year because the max vote was so much higher: even 50% of that 360 number was still enough to sweep categories.

Now, on to the Puppy "minimum." This represents the effective "block" nature of the Puppy vote: what were the lowest vote totals they put forward when they swept a category? Remember, we know the 5th place work had to be a Puppy nominee because the category was swept.

Novella: 145 low vote
Novelette: 165 low vote
Short Story: 151 low vote
Related Work: 206 low vote
Editor Short Form: 162 low vote
Editor Long Form: 166 low vote

Aside from Related Work, that's enormously consistent. There's your effective block vote. I call it "effective" because the data we have can't tell us whether this is 150 people voting in lockstep or, say, 200 Puppies each agreeing with 75% of the slate. Either way, it doesn't matter: the effect of the 2015 Puppy campaign was to produce a block vote of around 150 voters.

If that's my conclusion, why was the Best Related Work minimum 206 votes? That's the only category where the Rabid and Sad Puppies agreed 100% on their slates. Everywhere else, they split their vote. As such, 206 represents the combined block voting power of the Rabid and Sad Puppies, something that didn't show up in the other 5 swept categories.

So, given the above data, here's my conclusion: the Puppy campaigns of 2015 resulted in a maximum of 360 votes and an effective block minimum of 150 votes. That min/max ratio of 150/360 (41.7%) is almost the same as last year's (69 for Vox Day at the lowest against 182 for Correia at the highest, or 37.9%). That's remarkable consistency. It doesn't look like the Puppies stuck together any more tightly this year; there were simply far more of them. Of course, we won't know the full statistics until the complete voting data is released in August.

I think a lot of casual observers are going to be surprised by that 360 number. It's a big number, representing some 17% of the total Hugo voters (360/2122). Those 17% selected around 75% of the final ballot. That's the imbalance in the process so many observers are currently discussing.

What do you think? Does that data analysis make sense? Are you seeing something I’m not seeing in the chart? Tomorrow I’ll do an analysis of how much the non-Puppy works missed the slate by.

Building the Nebula Model, Part 1

The raison d'être of Chaos Horizon has always been to provide numerical predictions for the Nebula and Hugo Awards for Best Novel based on data-mining principles. I've always liked odds, percentages, stats, and so forth. I was surprised that no one was doing this already for the major SFF awards, so I figured I could step into the void and see where a statistical exploration would take us.

Over the past few months, I've been distracted trying to predict the Hugo and Nebula slates. Now that we have the Nebula slate—and the Hugo is coming shortly—I can turn my attention back to my Nebula and Hugo models. Last year, I put together my first mathematical models for the Hugos and Nebulas. Both predicted eventual winner Leckie, which is a good sign for the models. As I'll discuss in a few posts, my current model has around 67% accuracy over the last 15 years. Of course, past accuracy doesn't guarantee future accuracy, but at least you know where the model stands. In a complex, multi-variable problem like this, perfect accuracy is impossible.

I'll be rebuilding and updating the model over the next several weeks. There are a couple of tweaks I want to make, and I also want to bring Chaos Horizon readers who weren't around last year into the process. Over the next few days, we'll go through the following:
1. Guiding principles
2. The basics of the model
3. Model reliability
4. To-do list for 2015

Let’s get started today with the guiding principles for my Nebula and Hugo models:

1. The past predicts the future. Chaos Horizon uses a type of statistics called data-mining, which means I look for statistical patterns in past data to predict the future. There are other equally valid statistical methodologies, such as sampling. In a sampling methodology, you would ask a certain number of Nebula or Hugo voters what their award votes were going to be, and then use that sample to extrapolate the final results, usually correcting for demographic issues. This is the methodology of Presidential voting polls, for instance. A lot of people do this informally on the web, gathering up the various posted Hugo and Nebula ballots and trying to predict the awards from that.

Data-mining works differently. You take past data and comb through it to come up with trends and relationships, and then you assume (and it's only an assumption) that such trends will continue into the future. Since there is carryover in both the SFWA and WorldCon voting pools, this makes a certain amount of logical sense. If the past 10 years of Hugo data show that an SF novel usually wins, you should predict an SF novel to win in the future. If 10 years of data show that the second novel in a series never wins, you shouldn't predict a second novel to win.

Now, the data is usually not that precise. Instead, there is a historical bias toward SF novels, toward first or stand-alone novels, toward past winners, toward novels that do well on critical lists, toward novels that do well in other awards, etc. What I do is transform these observations into percentages (60% of the time an SF novel wins, 75% of the time the Nebula winner wins the Hugo, etc.) and then combine those percentages to come up with a final percent. We'll talk about how I combine all this data in the next few posts.
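To make the idea concrete, here is a minimal sketch of that kind of combination: each indicator contributes a historical percentage, and the percentages are folded into one score. The indicator names, values, and equal weights below are hypothetical placeholders, not Chaos Horizon's actual model, which is described in later posts:

```python
# Sketch of combining historical hit rates into a single prediction score.
# Weighted average is one simple choice; the real model may combine differently.
def combined_score(indicators, weights):
    """Weighted average of the historical percentages for each indicator."""
    total_weight = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in indicators) / total_weight

# Hypothetical example: a stand-alone SF novel by a past nominee.
indicators = {
    "is_sf": 0.60,                 # e.g. 60% of winners are SF (illustrative)
    "standalone_or_first": 0.75,   # illustrative rate
    "author_past_nominee": 0.70,   # illustrative rate
}
weights = {"is_sf": 1.0, "standalone_or_first": 1.0, "author_past_nominee": 1.0}

print(f"combined score: {combined_score(indicators, weights):.2f}")
```

With equal weights this is just the average of the three percentages; unequal weights would let stronger historical indicators count for more.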

Lastly for this point, data-mining has difficulty predicting sudden and dramatic changes in data sets. Huge shifts in sentiment will be missed by what Chaos Horizon does, as they aren't reflected in past statistical trends. Understand the limitations of this approach, and proceed accordingly.

2. Simple data means simple statistics. The temptation for any statistician is to use the most high-powered, shiny statistical toys on their data sets: multi-variable regressions, computer-assisted Bayesian inference, etc. All that has its place, and maybe in a few years we'll try one of those out to see how far it diverges from the simpler statistical modeling Chaos Horizon uses.

For the Nebulas and Hugos, though, we’re dealing with a low N (number of observations) but a high number of variables (genre, awards history, popularity, critical response, reader response, etc.). As a result, the project itself is—from a statistical reliability perspective—fatally flawed. That doesn’t mean it can’t be interesting, or that we can’t learn anything from close observation, but I never want to hide the relative lack of data by pretending my results are more solid than they seem. Low data will inevitably result in unreliable predictions.

Let's think about what the N is for the Nebula Award. Held since 1966, the Nebula has seen 219 individual novels nominated for Best Novel. That's our N, the total number of observations we have. We don't get individual voting numbers for the Nebula, so that's not an option for a more robust N. Compare that to something like the NCAA basketball tournament (since it's going on right now). That's been held since 1939, and the field expanded to our familiar 64 teams in 1985. That means, in the tournament proper (the play-in round is silly), 63 games have been contested every year since 1985. So, if you're modeling who will win an NCAA tournament game, you have 63 * (2014-1985) = 1,827 data points. Now, if we wanted to add in the regular season, we'd wind up with 347 Division I teams * 30 games each / 2 (they play each other, so we don't want to count every game twice) = 5,205 more observations. That's just one year of regular-season college basketball! Multiply that by 30 seasons, and you're looking at an N of roughly 150,000 in the regular season, plus an N of nearly 2,000 for the postseason. You can do a lot with data sets that big!
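A quick check of the back-of-envelope N arithmetic above (variable names are mine):

```python
# Comparing sample sizes: Nebula Best Novel vs. NCAA basketball.
nebula_n = 219                        # Best Novel nominees since 1966
ncaa_postseason = 63 * (2014 - 1985)  # 63 games/year since the 64-team field
ncaa_regular_per_year = 347 * 30 // 2 # each game counted once, not twice

print(nebula_n, ncaa_postseason, ncaa_regular_per_year)
```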

So our 219 Nebula Best Novel observations look pretty paltry. Let's throw in the reality that the Nebulas have changed greatly over the last 40 years. Does 1970 data really predict what will happen in 2015? That's before the internet, before fantasy became part of the process, etc. So, at Chaos Horizon, I primarily use the post-2000 data: new millennium, new data, new trends. That leaves us with an N of a paltry 87. From a statistical perspective, that should make everyone very sad. One option is to pack up and go home, concluding that any trends we see in the Nebulas are random statistical noise.

I do think, however, that the awards have some very clear trends (favoring certain kinds of novels, favoring past nominees and winners) that help settle down the variability. Chaos Horizon should be considered an experiment—perhaps a grand failed experiment, but those are the best kind—to see if statistics can get us anywhere. Who knows? Maybe in 5 years I’ll have to conclude that no, we can’t use data-mining to predict the awards.

3. No black boxing the math. A corollary to point #2: I’ve decided to keep the mathematics on Chaos Horizon at roughly the high school level. I want anyone, with a little work, to be able to follow the way I’m putting my models together. As such, I’ve had to choose some simpler mathematical models. I think that clarity is important: if people understand the math, they can contest and argue against it. Chaos Horizon is meant to be the beginning of a conversation about the Hugos and Nebulas, not the end of one.

So I try to avoid statements of the form: given the data, we get this prediction. Notice how much that sentence leaves out: how was the data used? What kind of mathematics was it pushed through? If you wanted to do the math yourself, could you? Instead, I want to write: given this data, and this mathematical processing of that data, we get this prediction.
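To make the "no black box" idea concrete, here is a deliberately simple sketch of what a transparent, high-school-level model might look like. The features and weights below are invented for illustration only; they are not Chaos Horizon's actual model:

```python
# A hypothetical transparent scoring model: a nominee's score is a
# weighted sum of yes/no indicators that anyone can recompute by hand.
# Feature names and weights are illustrative, not real model values.
WEIGHTS = {
    "prior_nominee": 3.0,    # author has been nominated before
    "prior_winner": 2.0,     # author has won before
    "year_end_lists": 1.0,   # book appeared on year-end best-of lists
}

def nomination_score(book):
    """Sum the weight of every feature the book satisfies."""
    return sum(w for feature, w in WEIGHTS.items() if book.get(feature))

example = {"prior_nominee": True, "year_end_lists": True}
print(nomination_score(example))  # 3.0 + 1.0 = 4.0
```

Because every step is visible, a reader who disagrees can swap in their own weights and rerun the arithmetic, which is exactly the kind of argument the model is meant to invite.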

4. Neutral presentation. To trust any statistical presentation, you have to trust that the statistics are presented in a fair, logical, and unbiased fashion. While a 100% lack of bias is impossible as long as humans are doing the calculating, the attempt at neutrality is very important to me on this website. Opinions are great, and they have their place in the SFF realm; to get those, simply go to another site. You won’t find a shortage!

Chaos Horizon is trying to do something different. Whether I’m always successful or not is for you to judge. Keep in mind that neutrality does not mean completely hiding my opinions; doing so is just as artificial as putting those opinions in the forefront. If you know some of my opinions, you can critique my work better. You should question everything that is put up on Chaos Horizon, and I hope to facilitate that questioning by making the chains of my reasoning clear. What we want to avoid at all costs is saying: I like this author (or this author deserves an award), therefore I’m going to up their statistical chances. Nor do I want to punish authors because I dislike them; I try to apply the same processing and data-mining principles to every author who crosses my plate.

5. Chaos Horizon is not definitive. I hold that the statistical predictions provided on Chaos Horizon are no more than opinions. Stats like this are not a science; the past is not a 100% predictor of the future. These opinions are arrived at through a logical process, but since I am the one designing and guiding the process, they are my ideas alone. If you agree with the predictions, agree because you think the process is sound. If you disagree with the process, feel free to use my data and crunch it differently. If you really hate the process, feel free to find other types of data and process them in whatever way you see appropriate. Then post them and we can see if they make more sense!

Each of these principles is easily contestable, and different statisticians/thinkers may wish to approach the problem differently. If I make my assumptions, biases, and axioms clearly visible, this should allow you to engage with my model fully, and to understand both the strengths and weaknesses of the Chaos Horizon project.

I’ll get into the details of the model over the next few days. If you’ve got any questions, let me know.

The New Yorker Publishes Essay on Cixin Liu

The New Yorker ran a very complimentary essay about Cixin Liu’s The Three-Body Problem and his other stories, positioning him as China’s Arthur C. Clarke. Check it out here; it’s an interesting read.

This comes on the heels of Liu’s Nebula nomination for The Three-Body Problem, and will accelerate Liu being thought of as a “major” SF author. Essays like this are very important in establishing an author’s short and long term reputation. Much of the mainstream press follows the lead of The New Yorker and The New York Times; this means other newspapers and places like NPR, Entertainment Weekly, and others are going to start paying attention to Cixin Liu. While the influence of these venues on the smaller SFF community (and the Hugos and Nebulas) isn’t as significant, mainstream coverage does bleed over into how bookstores buy books, how publishers acquire and position novels, etc.

The Dark Forest, Liu’s sequel to The Three-Body Problem, comes out on July 7th. Expect that to get major coverage and to be a leading candidate for the 2016 Nebula and Hugo. I currently have Cixin Liu’s The Three-Body Problem at #6 in my Hugo prediction, and that may be too low. All this great coverage and exposure does come very late in the game: Hugo nominations are due March 10th. Liu’s novel came out on November 11th, and that’s not a lot of time to build up a Hugo readership. It does appear that most people who read The Three-Body Problem are embracing it . . . but will it be enough for a Hugo nomination?

As a Hugo prediction site, the hardest thing I have to account for is sentiment: how much do people like an individual novel? How does that enthusiasm carry over to voting? How many individual readers grabbed other readers and said “you’ve got to read this”? We can measure this a little by the force of reviews and the positivity of blogging, but this is a weakness in my data-mining techniques. I can’t account for the community falling in love with a book. Keep in mind, initial Liu reviews were a little measured (one early review, for instance, called the main character “uninspiring”), but then there was a wave of far more positive reviews in December, such as this one from The Book Smugglers. My Review Round-Up gathers some more of these.

Has the wheel turned, and are most readers now seeing The Three-Body Problem as the best SF book of 2014? For the record, that’s my opinion as well, and I did read some 20+ speculative works published in 2014. Liu’s novel has a combination of really interesting science, very bold (and sometimes absurd) speculation, and a fascinating engagement with Chinese history in the form of the Cultural Revolution. In a head-to-head competition with Annihilation, I think The Three-Body Problem wins. You’d think that would be enough to score a Hugo nomination, and maybe it will be. We’ll find out within the month.
