
2015 Hugo Analysis: Nominating Stats, Part 2

There are a few last pieces of data I want easy access to before I move on from 2015. It’s been quite a year, and I think 2016 is going to be even more chaotic.

Not only do we have the specter of further Hugo voting kerfuffles, but 2015 is one of the stronger award years in recent memory. Of the last 7 Hugo Best Novel winners, 5 published new novels in 2015. Bacigalupi, Walton, Scalzi, Cixin Liu, and Leckie are all back at the party with well-received works—and that’s not including past Hugo heavy-hitters like Neal Stephenson (I think Seveneves will be a major player) and Kim Stanley Robinson (Aurora likely for a Nebula nom?).

That’s 7 books already . . . and you have to think Pratchett will grab plenty of sentimental votes for the final Discworld novel. Then we’ve got a whole range of other authors with competitive novels in 2015: George R.R. Martin’s A Knight of the Seven Kingdoms (I don’t know whether this is eligible as a novel or not, but if it is, watch out), Ken Liu’s debut The Grace of Kings, Brandon Sanderson’s Shadows of Self (without the Puppies, he was awfully close to breaking through for a nom this year; see the stats below), N.K. Jemisin’s The Fifth Season (1 Hugo Best Novel nom and 3 Nebula Best Novel noms for Jemisin in the last 5 years), and the list goes on and on. And I haven’t even mentioned potential breakout novels this year, like Naomi Novik’s Uprooted. Throw in the chaos of various Puppy picks, and you’ve got a very murky year coming up. I’d better roll up my sleeves and get to work!

So there are two sets of numbers I want to look at today: sweep margin in various categories and the Best Novel nominations with the Puppy picks removed. Both will give us some good insight into what might happen next year. Once again, I’m using the official Hugo stats for this info, which can be accessed here: 2015HugoStatistics.

If we look at categories where the Puppies, Sad and Rabid combined, had at least 5 picks, we can calculate something I’m calling the “sweep margin.” Basically, we subtract the highest non-Puppy pick’s nominating number from the Puppy #5 pick’s number. This tells us how close the category came to getting swept. If the number is greater than 0, that means the category was swept, and it tells us how many more votes it would have taken the non-Puppies to break up that sweep. If the number is less than zero, that means the category wasn’t swept. Note that I’m not taking withdrawals into account here; I just want to look at the raw numbers. Here’s the table:


Table #1: Sweep Margin for 2015 Hugos, Selected Categories

Sweep Margins

A little hard to parse, I know. Let’s think about Novella first: this chart tells us that Patrick Rothfuss’s The Slow Regard of Silent Things just missed the ballot by a mere 21 votes. So the Puppy sweep in Novella wasn’t particularly dominant; just a few more voters and one non-Puppy would have made it in. Does that mean a category like Novella might not be swept next year?

Look at another category, though, like Short Story. Since there are more short stories published in any given year, the vote is usually more spread out, and this time the Puppies were much more dominant. They actually had seven stories that placed above every non-Puppy pick. Two eventually declined, Bellet and Grey, but I’m not tracking that in this study. If we look at the #5 Puppy story, “Turncoat” by Steve Rzasa, it got 162 votes. The highest non-Puppy story, “Jackalope Wives” by Ursula Vernon, only got 76 votes. That’s a hefty margin of 86 votes—Vernon would have had to more than double her vote total to make it into the field (before Bellet and Grey declined, but you can’t count on people declining). Possible? Sure. I think the Hugo nomination vote can double next year—but the Puppy vote will also increase. Depending on Puppy strategy, it’s very possible that Short Story or Best Related Work will be swept next year.
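To make the margin arithmetic concrete, here is a minimal Python sketch using the two Short Story vote counts just mentioned; nothing else in it comes from the official stats:

```python
def sweep_margin(puppy_fifth, best_non_puppy):
    """Positive: the category was swept, and the top non-Puppy work
    needed (margin + 1) more votes to break the sweep.
    Negative: at least one non-Puppy work made the ballot."""
    return puppy_fifth - best_non_puppy

# 2015 Short Story: "Turncoat" (Puppy #5, 162 votes) vs.
# "Jackalope Wives" (top non-Puppy, 76 votes).
print(sweep_margin(162, 76))  # 86
```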

I’m most interested in Best Novel here at Chaos Horizon; it’s the area of SFF I know best, and what I’m most interested in reading and predicting. Despite all the Puppy picks, Leckie was safely in the 2015 field: she placed 3rd in the raw nomination stats, above 5 different Puppy picks. Even Addison’s The Goblin Emperor (256 votes) and Cixin Liu’s The Three-Body Problem (210) beat two Puppy picks, Trial by Fire (199) and The Chaplain’s War (196). Those two picks, Gannon for Sad and Torgersen for Rabid, are examples of the Sad and Rabid picks not overlapping. The Best Novel category received enough attention, and enough votes, and was centralized enough, that the non-Puppy voters were able to overcome the Puppy votes when they didn’t team up. I think that’s a key piece of evidence for next year’s prediction: when the Sad and Rabid picks overlap, they’ll be very strong contenders. If they’re separate, I don’t think they’ll be able to beat the motivated Hugo nominators. We’ll see, of course.

That leaves out one obvious point—if the Sad/Rabid picks overlap with something already in the mainstream, that will definitely boost the Sad/Rabid chances.

Last thing is to look at the nomination stats with the Puppies taken out. I need these for my records, because they’ll let me do year-to-year comparisons for authors. What I’m going to do is subtract all the Puppy picks and then recalculate the percentages. Skin Game got 387 votes, so I’m just going to brute-force subtract 387 from the 1827 ballots and recalculate. Were all of Butcher’s votes from the Sad Puppies? Probably not, but Butcher doesn’t have a strong history of vote-getting in the Hugos. By erasing those 387 votes, I’ll restore the percentages to what they might have looked like otherwise, which will help with my year-to-year tracking. I like to ask questions like: is Sanderson getting more or less popular?
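A minimal sketch of that recalculation, using the 387 and 1827 figures from the paragraph above; the 250-vote nominee is a made-up number purely for illustration:

```python
def adjusted_percentage(votes, total_ballots, puppy_bloc):
    """Recalculate a nominee's nominating percentage after removing
    the estimated Puppy bloc from the ballot total."""
    return 100 * votes / (total_ballots - puppy_bloc)

# 387 = Skin Game's total, used here as the estimated bloc size;
# 250 is a hypothetical nominee count.
print(round(adjusted_percentage(250, 1827, 387), 1))  # 17.4
```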

Here’s that chart:

Table 2: Estimated Hugo Best Novel Nomination %s without Puppy Votes

Best Novel Nominating %

Fun little note: they misspelled Katherine Addison’s name as Katherine Anderson in the Hugo nominating stats. It’s incredibly hard to enter all the data correctly when you’ve got so many data-points! EDIT 9/13/15: The error has been fixed! Check the comments for the full story.

Those percentages make a lot of sense. Leckie grabbed 23.1% for Ancillary Justice in 2014, and I don’t think the sequel was quite as popular.

A couple things of note: Weir probably would have been ineligible, and many Hugo voters knew that. If that wasn’t the case, I expect he would have easily made the top 5. VanderMeer is much lower than I would have expected. Walton did very well (8th) for what was an experimental novel; that means The Just City might have a shot this year. Sanderson in 7th place has been moving steadily up in these Hugo nomination stats; he managed only 4.2% for Steelheart last year. Will his return to his very popular Mistborn universe be enough? I’m still going to predict just outside the Top 5, but it looks like it’s just a matter of time. Okorafor is actually eligible again in 2016 (Lagoon only just came out in the United States). There’s also a lot of experimental-ish fantasy on the list (Addison, Bennett, Hurley); that might speak well of Novik’s or Bear’s chances in 2016.

Well, that brings my 2015 Hugo analysis to an end! It’s been quite a year. I’m going to spend the next several weeks doing Review Round-Ups of the big contenders for 2016, and I should have my first 2016 Hugo/Nebula predictions in early October.


2015 Hugo Analysis: Nominating Stats, Part 1

Time to dig into the nomination stats. Since Chaos Horizon is a website dedicated to award predictions, this is data we really need—2015 is going to be our best model for 2016, after all.

Let’s tackle this in a methodical and organized fashion. The 2015 nominating stats are included as part of the 2015 Hugo packet, easily available at the Hugo website or right here: 2015HugoStatistics. The first thing we can do is go back to the Sad Puppy and Rabid Puppy slates and see how many votes each of those texts got. I’ve divided this into three lists: joint Sad/Rabid selections, Sad selections, and Rabid selections.

Joint Sad and Rabid Picks, Number of Nominations in 2015 Hugos:
Novel
263 The Dark Between the Stars
387 Skin Game
372 Monster Hunter Nemesis
270 Lines of Departure

Novella
292 Flow
338 One Bright Star to Guide Them
338 Big Boys Don’t Cry

Novelette
259 The Journeyman
248 The Triple Sun
267 Championship B’tok
266 Ashes to Ashes

Short Story
230 Goodnight Stars
184 On a Spiritual Plain
226 Totaled

Best Related
206 Letters from Gardner
273 Transhuman and Subhuman
254 The Hot Equations
236 Wisdom from my Internet
265 Why Science is Never Settled

Best Graphic Story
201 Reduce Reuse Reanimate

Dramatic Long
314 Lego Movie
769 Guardians of the Galaxy
489 Interstellar
170 The Maze Runner

Dramatic Short
169 Grimm “Once We Were Gods”
170 The Flash “Pilot”

Editor Long
368 Toni Weisskopf
276 Jim Minz
238 Anne Sowards
292 Sheila Gilbert

Editor Short
236 Jennifer Brozek
217 Bryan Thomas Schmidt
279 Mike Resnick
228 Edmund Schubert

Professional Artist
173 Carter Reid
160 Jon Eno
188 Alan Pollack
181 Nick Greenwood

Semiprozine
229 InterGalactic Medicine Show

Fanzine
181 Tangent
208 Elitist Book Reviews
187 Revenge of Hump Day

Fancast
179 Sci Phi Show
158 Dungeon Crawlers Radio
169 Adventures in SF Publishing

Fan Writer
150 Matthew Surridge
156 Jeffro Johnson
175 Amanda Green
201 Cedar Sanderson

Campbell
229 Jason Cordova
224 Kary English
219 Eric S. Raymond

If we toss out the Best Dramatic, Long Form as an outlier (the stat numbers are way high, indicating that far more than just the Rabid + Sad Puppies voted for Guardians of the Galaxy, as anyone would predict), we wind up with the following range:

387-150. That takes us from the most popular pick to the least popular choice (Skin Game by Jim Butcher in Novel, down to Matthew Surridge in Fan Writer). That’s the “effective joint Sad/Rabid Puppy vote,” or how many votes the Puppies delivered to the 2015 Hugo nomination process. That wide range reflects two things: the lack of popularity of categories like Fan Writer, and a lack of slate discipline (not every Puppy voter voted for all the works on the slate). To illustrate how some people didn’t follow the slate, look at Best Novel:

387 Skin Game
372 Monster Hunter Nemesis
270 Lines of Departure
263 The Dark Between the Stars

All four are joint Rabid/Sad picks, but Skin Game and Monster Hunter Nemesis grabbed 100 more votes than the Kloos or Anderson books. That means at least 25% of these voters were picking and choosing from the slate, not voting it straight down the line.
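That “at least 25%” figure is just the gap between the top and bottom joint picks expressed as a fraction of the top pick, a rough lower bound on how many slate voters skipped at least one work. A quick sketch:

```python
def min_defection_rate(top_votes, bottom_votes):
    """Lower bound on the share of slate voters who skipped at least
    one slate work in the category: the gap between the most- and
    least-supported joint picks, as a fraction of the top pick."""
    return (top_votes - bottom_votes) / top_votes

# Best Novel joint picks: Skin Game (387) vs The Dark Between the Stars (263).
print(round(min_defection_rate(387, 263), 2))  # 0.32 -> at least a quarter picked and chose
```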

A couple of numbers to parse: how do we know Skin Game (or any other nominee) didn’t pick up some non-Puppy voters? We don’t know that for sure, but we can look at the 2009 Hugo nominating stats for reference. That’s the last year where they released the complete list of everyone who got a vote. Small Favor, Butcher’s Dresden Files #10, only got 6 votes that year. Now, this year’s pool is bigger, and maybe people liked Skin Game more, but that looks like a relatively trivial number to me. Your mileage may vary.

On the flip side, how do we know that every Puppy voter voted for Skin Game? Again, we don’t know for sure—there could have been 500 Sad Puppies, and only 80% of them voted for Butcher. In this case, I don’t think it matters. We’re looking at “effective” strength: this is how many votes the Puppies actually delivered in the categories, not a potential estimate of their max number. The actual number of votes is what is useful in my predictions.

Conclusion: So, Chaos Horizon is concluding that the effective Sad/Rabid combined block vote was 387-150, with sharp decay by both the popularity of the chosen work and the popularity of the category. I think that number can explain some of the vitriol in the field: of the 387 people who voted for Skin Game, at least 200 of them didn’t vote all the way to the bottom of the slate. More people only voted part of the slate than voted the whole thing—thus opening up the door for all kinds of online arguments as to exactly how “slate”-like this whole thing was. Expect those to continue as we move into Sad Puppies 4.

On to the Sad Puppy picks. When all was said and done, the Sad Puppies only had a few picks that were not mirrored by the Rabid Puppies (8, in fact), so we’ll learn far less here.

Sad Puppy Picks, Number of Nominations in 2015 Hugos:
Novel
199 Trial by Fire

Short Story
132 A Single Samurai
185 Tuesdays With Molakesh the Destroyer

Dramatic Short
41 Adventure Time “The Prince Who Wanted Everything”
Didn’t make top 15 Regular Show “Saving Time”

Semiprozine
111 Abyss & Apex
100 Andromeda Spaceways In-Flight Magazine

Fan Writer
132 Dave Freer

If we toss out the “Dramatic Short” category as an obvious outlier (the Sad Puppy voters didn’t seem to like picking cartoon shows in that category, as “Regular Show” didn’t even make the top 15), we wind up with the following range:

199-100. I think the Trial by Fire number (at 199) is a little inflated; Gannon did grab a Nebula nom for this series in both 2014 and 2015, and I expect he picked up a fair amount of votes outside the Puppy blocks. That 185 number for “Molakesh” might be the more solid estimate of the max Sad Puppy core; that story is from Fireside Fiction, a rather obscure venue. Neither Andromeda Spaceways nor Abyss and Apex placed in the Top 15 in the 2014 Hugos, and the cut off there was a mere 10 votes, so I think we can attribute the lion’s share of those votes to the Sad Puppies.

Conclusion: We only have 8 data points here, but we’ve got a 199-100 range, with the top end only happening in popular categories (Novel, Short Story). That’s a 50% difference from the highest voted to the lowest voted, perhaps suggesting that only 50% of the Sad Puppy voters voted straight down the slate. You could get that number even lower, though, if you counted the television shows that not even the Sad Puppies voted for.

Rabid Puppy Picks, Number of Nominations in 2015 Hugos:
Novel
196 The Chaplain’s War

Novella
172 The Plural of Helen of Troy
145 Pale Realms of Shade

Novelette
165 Yes, Virginia, There is a Santa Claus

Short Story
162 Turncoat
151 The Parliament of Beasts and Birds

Dramatic Long
100 Coherence

Dramatic Short
141 Game of Thrones “The Mountain and the Viper”
86 Supernatural “Dog Dean Afternoon”

Editor Long
166 Vox Day

Editor Short
162 Vox Day

Professional Artist
118 Kirk DouPonce

Fanzine
119 Black Gate

Fan Writer
66 Daniel Enness

Campbell
143 Rolf Nelson

A couple of interesting outliers here. The Dramatic Short category seems strange; you’d have to imagine that more people voted for Game of Thrones than just the Rabid Puppies, and Supernatural only picked up a scant 86 votes. Even the Rabid Puppies didn’t follow VD’s instructions in Fan Writer, only voting 66 times for Daniel Enness. I think the most sensible explanation is that Rabid Puppy voters didn’t follow the recommended picks in these categories. If you get rid of those 3 outliers, you end up with a very tight grouping of:

196-100. The Torgersen number is probably inflated by Sad Puppies; even though he didn’t include himself on his own list, I can imagine some Sad Puppies coming over to vote for him. He’d also had a prior Hugo nomination outside the Puppy process. The tightly grouped Vox Day numbers (166 and 162) might be an equally sensible top number for the Rabid Puppies group. We’re only talking about a 20-30 vote difference, though, and we’d be splitting hairs. I’m a stat site, though, so if you want to split hairs, go ahead!

Conclusion: 196-100 seems safe, and not even the Rabid Puppies had perfect slate discipline. This surprised me, although I could probably be persuaded there was a core group of 166 (Vox Day’s editor nom) to 119 (the Fanzine/Professional Artist number) of Rabid Puppies that did stick pretty closely.

So that leaves us:

Nomination Estimates, Sad, Rabid, and Joint Puppy Picks (percentage calculated using 1595 total nominating ballots):
Joint: 387-150; 24.2% – 9.5%
Sad Puppy: 199-100; 12.5% – 6.3%
Rabid Puppy: 196-100; 12.2% – 7.4%

Let’s double-check the math. If we add the Rabid and Sad picks together, we wind up with 395-200. The joint picks range is 387-150. Obviously, that top number looks great; those 8 extra votes would seem to fall within the margin of other votes Skin Game is likely to have picked up. 200 and 150 are quite a bit farther apart, but this might reflect the limited data set we have for Sad Puppy picks alone (8 data points) and Rabid Puppy picks alone (15) compared to joint Sad/Rabid picks (52). Some of the joint picks may have been unappealing to both the Sad and Rabid voters, as well as being in categories with low voter turnout (Fan Writer, Fancast, etc.). Take a look at this chart, showing how quickly the various voting groups decayed (excluding Best Drama, for the reasons stated above):

Chart 1 Nomination Study

The chart just lines up the most popular pick to the least popular pick to take a look at the decay curve. Rabid and Joint alike fell off very quickly and then evened out. I think that reflects how much more popular the Best Novel category is than the rest of the Hugos. In 2015, it pulled in almost twice as many votes as the other fiction categories. Sad Puppies fell quickly the whole way down, but I don’t know if that reflects a greater variance amongst Sad Puppy voters or just a lack of data.

What does this all mean? That’s the big question. What it means for Chaos Horizon is that I can use these ranges and totals as I put together my 2016 prediction. The max number of nominating ballots was in Best Novel, where 1595 were cast; 5950 voted in the Hugo finals, an increase of almost 3.75 times. According to my previous analysis, here are my final Puppy estimates:

Core Rabid Puppies: 550-525 (9.2% – 8.9%, using 5950 total votes for percentage)
Core Sad Puppies: 500-400 (8.4% – 6.7%, using 5950 total votes for percentage)
There are also some Puppy inclined neutrals; I’m not including them, because I don’t know if they’ll follow the Puppies into the nomination stage.

Those percentages are a little down from the nominating ballot, but not aggressively so. That’s what you would expect: the Puppies had the advantage of surprise in the nomination stage, while the push-back against them came in the final balloting. Much of the growth in the final ballot was from people wanting to vote specifically against the slates.

Boil this all down, and we now have a set of numbers to use in future predictions. In my next nominating analysis, I’ll be looking at how big the sweeps were for each category. With that data in place, I can then predict whether or not there will be sweeps (or in which categories) in 2016.

2015 Hugo Analysis: Category Participation

My posts have been slow these last weeks—my university is starting its Fall semester, and I’ve been getting my classes up and running. I’m teaching Kurt Vonnegut’s Slaughterhouse-Five, Nathaniel Hawthorne’s The Scarlet Letter, and Mark Twain’s Adventures of Tom Sawyer across my three classes this week. Strange combination. Toni Morrison, Edith Wharton, and Benjamin Franklin next week!

Today, I want to look at category participation: how many people voted in each of the Hugo categories. Historically, there has been a very sharp drop off from the Best Novel category (which has around 85% participation) to the less popular categories like Fan Writer, Fanzine, and the Editor Categories (usually around the 40%-45% range). The stat we’re looking at today is what percentage of people who turned in a ballot voted for that particular category.

There are a lot of categories in the Hugos, and it’s unlikely that every fan engages in every category with the same intensity. The stats show that, but the 2015 controversy changed some of those patterns in interesting ways.

Let’s take a look at a big table of data from 2011-2015. I pulled the numbers directly from the last five years of Hugo packets, and what my table shows is the number of total ballots and the number of votes in each category. Divide the category votes by the total ballots, and you get the percentage participation in each category. Notice how skewed 2015 is compared to the other numbers; we had a total change in voting patterns this year. Click the table to make it larger:

Table 1: Participation in Final Ballot Hugo Categories, 2011-2015

Hugo Participation, 2011-2015

A lot of numbers, I know. Let’s look at that visually:

Percentage of Voters

That’s a very revealing chart. Ignore the top turquoise line for the moment; that’s 2015. The other four lines represent 2011-2014, and they’re pretty consistent with each other. Participation declines across the categories until it spikes at the Dramatic Presentations, declines again, spikes again at Best Professional Artist (who knew), and then declines once more. Best Fancast began in 2012, messing up the end of the 2011 line in the chart.

Historically, the ballot plunges from 85%-90% for Novel down to 40%. Even the major fiction categories (Novella, Novelette, and Short Story) manage only about 75% participation. Those declines are relatively consistent year to year, with some variation depending on how appealing the category is in any given cycle.

Now 2015: that line is totally inconsistent with the previous 4 years. Previously ignored categories like Editor grabbed an increase of 30 points—there’s your visual representation of how the Puppy kerfuffle drove votes. Thousands of voters voted in categories they would have previously ignored. I imagine this increase is due to both sides of the controversy, as various voters were trying to make their point. Still, 80% participation in a category like Editor, Short or Long Form is highly unusual for the Hugos. Even the Best Novel had a staggering 95% participation rate, up from a prior 4-year average of 87.4%.

Not every category benefitted in the same way. Let’s see if we can’t chart that increase:

Table 2: Increases of 2015 Hugo Participation Over 2011-2014 Participation Averages
Table 2 Increases

The last three columns are key. I averaged the 2011-2014 stats, and then looked to see how much they increased in 2015. If you take that absolute increase (i.e. 80% to 90% is a 10 point increase), you can then calculate the percentage increase (divide it by the average value). That shows us which categories had the biggest relative boosts. Best Novel only increased slightly. Categories like Editor, Short, Editor, Long, and Fan Writer had huge relative boosts. The categories with little controversy, such as Fan Artist, didn’t enjoy the boosts other categories saw. A visual glance:

Chart 2 Percentage Increase
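If you want to replicate those last three columns, the arithmetic is simple. A minimal sketch, using the 80%-to-90% example above rather than the table’s actual values:

```python
def participation_boost(prior_avg_pct, pct_2015):
    """Return (absolute increase in points, relative increase)."""
    points = pct_2015 - prior_avg_pct
    return points, points / prior_avg_pct

# The example from the text: 80% average participation rising to 90%.
points, relative = participation_boost(80.0, 90.0)
print(points, round(relative, 3))  # 10.0 points, 0.125 -> a 12.5% relative boost
```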

So, what did we learn from all this? That there was a Hugo controversy in 2015, and that it drove huge increases in participation to categories that had previously been ignored. I think we knew that already, but it’s always good to have the data.

Here’s my Excel file with the numbers and the charts: Participation Study. All data on Chaos Horizon is open, and feel free to use it in any way you wish. Please provide a link back to this post if you do. Otherwise, you might want to check out my similar “Hugo Nomination Participation” study, which looks at the same data but in the nominating stage. It’s linked under the “Reports” tab.

Up next: the 2015 nominating numbers!

Hugo Award Nomination Ranges, 2006-2015, Part 4

We’re up to the short fiction categories: Novella, Novelette, and Short Story. I think it makes the most sense to talk about all three of these at once so that we can compare them to each other. Remember, the Best Novel nomination ranges are in Part 3.

First up, the number of ballots per year for each of these categories:

Table 6: Year-by-Year Nominating Ballots for the Hugo Best Novella, Novelette, and Short Story Categories, 2006-2015
Table 7 Ballots Short Fiction Categories

Chart 7 Ballots Short Fiction

A sensible looking table and chart: the Short Fiction categories are all basically moving together, steadily growing. The Short Story has always been more popular than the other two, but only barely. Remember, we’re missing the 2007 data, so the chart only covers 2008-2015. For fun, let’s throw the Best Novel data onto that chart:

Chart 8 All Fiction Categories

That really shows how much more popular the Best Novel is than the other Fiction categories.

The other data I’ve been tracking in this Report is the High and Low Nomination numbers. Let’s put all of those in a big table:

Table 7: Number of Votes for High and Low Nominee, Novella, Novelette, Short Story Hugo Categories, 2006-2015

Table 8 High Low Noms Fiction Categories

Here we come to one of the big issues with the Hugos: the sheer lowness of these numbers, particularly in the Short Story category. Although the Short Story is one of the most popular categories, it is also one of the most diffuse. Take a glance at the far right column: that’s the number of votes the last place Short Story nominee received. Through the mid-2000s, it took votes in the mid-teens to get a Hugo nomination in one of the most important categories. While that has improved in terms of raw numbers, it’s actually gotten worse in terms of percentage (more on that later).

Here’s the Short Story graph; the Novella and Novelette graphs are similar, just not as pronounced:

Chart 9 Short Story

The Puppies absolutely dominated this category in 2015, more than tripling the Low Nom number. They were able to do this because the nominating numbers have been so historically low. Does that matter? You could argue that the Hugo nominating stage is not designed to yield the “definitive” or “consensus” or “best” ballot. That’s reserved for the final voting stage, where the voting rules change from first-past-the-post to instant-runoff. To win a Hugo, even in a low year like 2006, you need a great number of affirmative votes and broad support. To get on the ballot, all you need is focused, passionate support, as proved by the Mira Grant nominations, the Robert Jordan campaign, or the Puppy ballots this year.
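For readers unfamiliar with the difference, here is a bare-bones sketch of instant-runoff tallying as used in the final stage; it ignores Hugo-specific wrinkles like “No Award” and tie-breaking, so treat it as an illustration rather than the official procedure:

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: ranked preference lists, e.g. ['A', 'C', 'B'].
    Repeatedly eliminate the candidate with the fewest first-place
    votes until someone holds a majority of the live ballots."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter({c: 0 for c in candidates})
        for ballot in ballots:
            # Each ballot counts for its highest-ranked surviving candidate.
            for choice in ballot:
                if choice in candidates:
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        candidates.remove(min(tally, key=tally.get))

# Toy example: 'A' leads on first preferences (4 to 3), but once 'C' is
# eliminated its votes transfer and 'B' wins the runoff.
ballots = 4 * [['A']] + 3 * [['B']] + 2 * [['C', 'B']]
print(instant_runoff(ballots))  # B
```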

As an example, consider the 2006 Short Story category. In the nominating stage, we had a range of works that received a meager 28-14 votes, hardly a mandate. Eventual winner and oddly named story “Tk’tk’tk” was #4 in the nominating stage with 15 votes. By the time everyone got a chance to read the stories and vote in the final stage, the race for first place wound up being 231 to 179, with Levine beating Margo Lanagan’s “Singing My Sister Down.” That looks like a legitimate result; 231 people said the story was better than Lanagan’s. In contrast, 15 nomination votes looks very skimpy. As we’ve seen this year, these low numbers make it easy to “game” the nominating stage, but, in a broader sense, it also makes it very easy to doubt or question the Hugo’s legitimacy.

In practice, the difference can be even narrower: Levine made it onto the ballot by 2 votes. There were three stories that year with 13 votes, and 2 with 12. If two people had changed their votes, the Hugo would have changed. Is that process reliable? Or are the opinions of 2—or even 10—people problematically narrow for a democratic process? I haven’t read the Levine story, so I can’t tell you whether it’s Hugo worthy or not. I don’t necessarily have a better voting system for you, but the confining nature of the nominating stage is the chokepoint of the Hugos. Since it’s also the point with the lowest participation, you have the problem the community is so vehemently discussing right now.

Maybe we don’t want to know how the sausage is made. The community is currently placing an enormous amount of weight on the Hugo ballot, but does it deserve such weight? One obvious “fix” is to bring far more voters into the process—lower the supporting membership cost, invite other cons to participate in the Hugo (if you invited some international cons, it could actually be a “World” process every year), add a long-list stage (first round selects 15 works, the next round reduces those to 5, then the winner), etc. All of these are difficult to implement, and they would change the nature of the award (more voters = more mainstream/populist choices). Alternatively, you can restrict voting at the nominating stage to make it harder to “game,” either by limiting the number of nominees per ballot or through a more complex voting proposal. See this thread at Making Light for an in-progress proposal to switch how votes are tallied. Any proposed “fix” will have to deal with the legitimacy issue: can the Short Fiction categories survive a decrease in votes?

That’s probably enough for today; we’ll look at percentages in the short fiction categories next time.

Hugo Award Nomination Ranges, 2006-2015, Part 3

Today, we’ll start getting into the data for the fiction categories in the Hugo: Best Novel, Best Novella, Best Novelette, Best Short Story. I think these are the categories people care about the most, and it’s interesting how differently the four of them work. Let’s look at Best Novel today and the other categories shortly.

Overall, the Best Novel is the healthiest of the Hugo categories. It gets the most ballots (by far), and is fairly well centralized. While thousands of novels are published a year, these are widely enough read, reviewed, and buzzed about that the Hugo audience is converging on a relatively small number of novels every year. Let’s start by taking a broad look at the data:

Table 5: Year-by-Year Nominating Stats Data for the Hugo Best Novel Category, 2006-2015
Table 5 Best Novel Stats

That table lists the total number of ballots for the Best Novel category, the number of votes the High Nominee received, and the number of votes the Low Nominee (i.e. the novel in fifth place) received. I also calculated the percentages by dividing the High and Low by the total number of ballots. Remember, if a work does not receive at least 5%, it doesn’t make the final ballot. That rule has not been invoked in the previous 10 years of the Best Novel category.
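For clarity, the percentage column and the 5% rule amount to this; the sample numbers are round figures, not values from the table:

```python
def nominee_share(votes, category_ballots):
    """Percentage of nominating ballots in the category that listed the work."""
    return 100 * votes / category_ballots

def clears_five_percent(votes, category_ballots):
    """The 5% rule described above."""
    return nominee_share(votes, category_ballots) >= 5.0

# Illustrative round numbers: a fifth-place work with 90 votes out of 1000 ballots.
print(round(nominee_share(90, 1000), 1), clears_five_percent(90, 1000))  # 9.0 True
```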

A couple notes on the table. The 2007 packet did not include the number of nominating ballots per category, thus the blank spots. The red-flagged 700 indicates that the 2010 Hugo packet didn’t give the # of nominating ballots. They did give percentages, and I used math to figure out the number of ballots. They rounded, though, so that number may be off by +/- 5 votes or so. The other red flags under “Low Nom” indicate that authors declined nominations in those years, both times Neil Gaiman, once for Anansi Boys and another time for The Ocean at the End of the Lane. To preserve the integrity of the stats, I went with the book that was originally in fifth place. I didn’t mark 2015, but I think we all know that this data is a mess, and we don’t even really know the final numbers yet.

Enough technicalities. Let’s look at this visually:

Chart 5 Best Novel Data

That’s a soaring number of nominating ballots, while the high and low ranges seem to be languishing a bit. Let’s switch over to percentages:

Chart 6 Best Novel % Data

Much flatter. Keep in mind I had to shorten the year range for the % graph, due to the missing 2007 data.

Even though the number of ballots is soaring, the % ranges are staying somewhat steady, although we do see year-to-year perturbation. The top nominees have been hovering between 15%-22.5%. Since 2009, every top nominee has managed at least 100 votes. The bottom nominee has been in that 7.5%-10% range, safely above the 5% minimum. Since 2009, those low nominees all managed at least 50 votes, which seems low (to me; you may disagree). Even in our most robust category, 50 readers liking your book can get you into the Hugo—and they don’t even have to like it the most. It could be their 5th favorite book on their ballot.

With the low ranges so low, it doesn’t (or wouldn’t) take much to place an individual work onto the Hugo ballot, whether by slating or other types of campaigning. Things like number of sales (more readers = more chances to vote) and audience familiarity (readers are more likely to read and vote for a book by an author they already like) could easily push a book onto the ballot over a more nebulous factor like “quality.” That’s certainly what we’ve seen in the past, with familiarity being a huge advantage in scoring Hugo nominations.

With our focus this close, we see a lot of year-to-year irregularity. Some years are stronger in the Novel category, others weaker. As an example, James S.A. Corey actually improved his percentage total from 2012 to 2013: Leviathan Wakes grabbed 7.4% (71 votes) for the #5 spot in 2012, and then Caliban’s War 8.1% (90 votes) for the #8 spot in 2013. That kind of oddity—more Hugo voters, both in sheer numbers and percentage-wise, liked Caliban’s War, but only Leviathan Wakes gets a Hugo nom—has always defined the Hugo.

What does this tell us? This is a snapshot of the “healthiest” Hugo: rising votes, a high nom average of about 20%, a low nom average of around 10%. Is that the best the Hugo can do? Is it enough? Do those ranges justify the weight fandom places on this award? Think about how this will compare to the other fiction categories, which I’ll be laying out in the days to come.

Now, a few other pieces of information I was able to dig up. The Worldcons are required to give data packets for the Hugos every year, but different Worldcons choose to include different information. I combed through these to find some more vital pieces of data, including Number of Unique Works (i.e. how many different works were listed on all the ballots, a great measure of how centralized a category is) and Total Number of Votes per category (which lets us calculate how many nominees each ballot listed on average). I was able to find parts of this info for 2006, 2009, 2013, 2014, and 2015.

Table 6: Number of Unique Works and Number of Votes per Ballot for Selected Best Novel Hugo Nominations, 2006-2015

Table 6 Best Novel Selected Stats

I’d draw your attention to the ratio I calculated, which is the Number of Unique Works / Number of Ballots. The higher that number is, the less centralized the award is. Interestingly, the Best Novel category is becoming more centralized the more voters there are, not less centralized. I don’t know if that is the impact of the Puppy slates alone, but it’s interesting to note nonetheless. That might indicate that the more voters we have, the more votes will cluster together. I’m interested to see if the same trend holds up for the other categories.

Lastly, look at the average number of votes per ballot. Your average Best Novel nominator votes for over 3 works. That seems like good participation. I know people have thrown out the idea of restricting the number of nominations per ballot, either to 4 or even 3. I’d encourage people to think about how much of the vote that would suppress, given that some people vote for 5 and some people only vote for 1. Would you lose 5% of the total vote? 10%? I think the Best Novel category could handle that reduction, but I’m not sure other categories can.
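Both derived columns are simple ratios; here is a sketch with made-up round numbers so the definitions are unambiguous:

```python
def centralization_ratio(unique_works, ballots):
    """Unique works nominated divided by ballots cast; a higher ratio
    means the vote is more spread out (less centralized)."""
    return unique_works / ballots

def avg_votes_per_ballot(total_votes, ballots):
    """How many of their five nomination slots nominators used on average."""
    return total_votes / ballots

# Hypothetical round numbers, purely for illustration:
print(round(centralization_ratio(600, 1500), 2))   # 0.4
print(round(avg_votes_per_ballot(5000, 1500), 2))  # 3.33 -> "over 3 works" per ballot
```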

Think of these posts—and my upcoming short fiction posts—as primarily informational. I don’t have a ton of strong conclusions to draw for you, but I think it’s valuable to have this data available. Remember, my Part 1 post contains the Excel file with all this information; feel free to run your own analyses and number-crunching. If you see a trend, don’t hesitate to mention it in the comments.

Hugo Award Nomination Ranges, 2006-2015, Part 2

The Hugo is a strange award. One Hugo matters a great deal—the Best Novel. It sells copies of books, and defines for the casual SFF fan the “best” of the field. The Novella, Novelette, and Short Story also carry significant weight in the SFF field at large, helping to define rising stars and major works. Some of the other categories feel more like insider awards: Editor, Semiprozine. Others feel like fun ways to nod at the SFF fandom (Fanzine). All of them work slightly differently, and there’s a huge drop off between categories. That’s our point of scrutiny today, so let’s get to some charts.

First, let’s get some baseline data out there: the total number of nominating ballots per year. I also included the final voting ballots. Data gets spotty on the Hugo website, thus the blank spots. If anyone has that data, point me in that direction!

Table 2: Total Number of Nominating and Final Ballots for the Hugo Awards, 2006-2015
Table 2 Ballots 2006-2015

I pulled that data off the HugoAward.org site, save for the flagged 895, which I grabbed from this File 770 post.

Now, how popular is each category? How many of those total nominators nominate in each category? First up, the averages for 2006-2015:

Table 3: Average Number of Nominating Ballots in the Hugo Award per Category, 2006-2015
Table 3 Number of Nominating Ballots Each Category

I included two averages for you: the 2006-2015 average, and then the 2006-2013 average. This shows how much the mix of Loncon, the Puppy vote, and increased Hugo scrutiny have driven up these numbers.

What this table also shows is how some categories are far more popular than others. Several hundred more people vote in the Novel category than in the next most popular category of Dramatic Long, and major categories like Novella and Novelette only manage around 50% of the Novel nominating vote. That’s a surprising result, and it may show that the problem with the Hugo lies not in the total number of voters, but in the difficulty those voters have in voting in all categories. I’ve heard it mentioned that a major problem for the Hugo is “discovery”: it’s difficult to have a good sense of the huge range of novellas, novelettes, short stories, etc., and many people simply don’t vote in the categories they don’t know. It’d be interesting to have a poll: how many SFF readers actually read more than 5 new novels a year? 5 new novellas? I often don’t know if what I’m reading is a novella or a novelette, and does the lack of clarity in these categories hurt turnout?

Let’s look at this visually:

Chart 3 Popularity of Categories

Poor Fan Artist category. That drop off is pretty dramatic across the award. Are there too many categories for people to vote in?

Let’s focus in on 2015, as that’s where all the controversy is this year. I’m interested in the percentage of people who voted for each category, and the number of people who sat out in each category.

Table 4: Percentage of Voters and “Missing Votes” per Hugo Category, 2015 Only

Table 4 % Voters in Each Category

The table at the top tells us that a total of 2122 people nominated in the Hugos, but no category managed more than 87% of that total. The missing votes column is 2122 minus the number of people who actually nominated in each category. I was surprised at how many people sat out each category. Remember, each of those people who didn’t vote in Best Novel, Best Short Story, etc., could have voted up to 5 times! In the Novella category alone, 5000 nominations were left on the table. If everyone who nominated in the Hugos had nominated in every category, the Puppy sweeps most likely wouldn’t have happened.
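The “missing votes” arithmetic, sketched out; the Novella ballot count here is an assumed round figure, used only to show how an estimate like “5000 nominations” is reached:

```python
def missing_votes(total_ballots, category_ballots):
    """Nominators who turned in a ballot but skipped this category."""
    return total_ballots - category_ballots

def nominations_left_on_table(total_ballots, category_ballots, slots=5):
    """Each non-participant could have listed up to `slots` works."""
    return slots * missing_votes(total_ballots, category_ballots)

# 2122 total nominating ballots; ~1100 Novella ballots is a hypothetical round figure.
print(nominations_left_on_table(2122, 1100))  # 5110
```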

Again, let’s take a visual look:

Chart 4 % Voters by Category

That chart reinforces the issue in the awards: less than 50% turnout in major categories like Novella, Short Story, and Novelette.

What to conclude from all of this? The total number of ballots isn’t as important as who actually nominates in each category. Why aren’t people nominating in things like Short Story? Do the nominations happen too early in the year? Are readers overwhelmed by the sheer variety of works published? Do readers not have strong feelings about these works? Given the furor on the internet over the past few weeks, that seems unlikely. If these percentages could be brought up (I have no real idea how you’d do that), the award would immediately look very different.

Tomorrow, we’ll drill more deeply into the Fiction categories, and look at just how small the nominating numbers have been over the past decade.

Hugo Award Nomination Ranges, 2006-2015: A Chaos Horizon Report

Periodically, Chaos Horizon publishes extensive reports on various issues relating to SFF awards. One important context for this year’s Hugo controversy is the question of nomination numbers. New readers who are coming into the discussion may be unaware of how strange (for lack of a better word) the process is, and how few votes it has historically taken to get a Hugo nomination, particularly in categories other than Best Novel. As a little teaser of the data we’ll be looking at, consider this number: in 2006, it only took 14 votes to make the Best Short Story Hugo final ballot.

While those numbers have risen steadily over the past decade, they’re still shockingly low: in 2012, it took 36 votes in the Short Story category; in 2013, it took 34 votes; in 2014, we jumped all the way to 43. This year, with the Sad Puppy/Rabid Puppy influence, the number tripled to 132. That huge increase causes an incredible amount of statistical instability, to the point that this year’s data is “garbage” data (i.e. confusing) when compared to other years.

Without having a good grasp of these numbers and trends, many of the proposed “fixes”—if a fix is needed at all, and if this isn’t something that will work itself out over 2-3 years via the democratic process—might exacerbate some of the oddities already present within the Hugo. The Hugo has often been criticized for being an “insider” award, prone to log-rolling, informal cliques, and the like. While I don’t have strong opinions on any of those charges, I think it’s important to have a good understanding of the numbers to better understand what’s going on this year.

Chaos Horizon is an analytics, not an opinion, website. I’m interested in looking at all the pieces that go into the Hugo and other SFF awards, ranging from past patterns, biases, and oddities, to making future predictions as to what will happen. I see this as a classic multi-variable problem: a lot of different factors go into the yearly awards, and I’ve set myself the task of trying to sort through some (and only some!) of them. Low nominating numbers are one of the defining features of the Hugo award; that’s just how the award has worked in the past. That’s not a criticism, just an observation.

I’ve been waiting to launch this report for a little while, hoping that the conversation around this year’s Hugos would cool off a little. It doesn’t look like that’s going to happen. The sheer flood of posts about this year’s Hugos reveals the desire that various SFF communities have for the Hugo to be the “definitive” SFF award, “the award of record.” File 770 has been the best hub for collecting all these posts; check them out if you want to get caught up on the broader conversation.

I don’t think any award can be definitive. That’s not how an award works, whether it’s the Hugo, the Pulitzer, or the Nobel prize. There are simply too many books published, in too many different sub-genres, to too many different types of fans, for one award to sort through and “objectively” say this is the best book. Personally, I don’t rely on the Hugo or Nebula to tell me what’s going on in the SFF field. I’ve been collating an Awards Meta-List that looks at 15 different SFF awards. That kind of broad view is invaluable if you want to know what’s happening across the whole field, not only in a narrow part of it. Lastly, no one’s tastes are going to be a perfect match for any specific award. Stanislaw Lem, one of my favorite SF authors, was never even nominated for a Hugo or Nebula. That makes those awards worse, not Lem.

Finally, I don’t mean this report to be a critique of the Worldcon committees who run the Hugo award. They have an incredibly difficult (and thankless) job. Wrestling with an award that has evolved over 50 years must be a titanic task. I’d like to personally thank them for everything they do. Every award has oddities; they can’t help but have oddities. Fantasizing about some Cloud-Cuckoo-Land “perfect” SFF award isn’t going to get the field anywhere. This is where we’re at, this is what we have, so let’s understand it.

So, enough preamble: in this report we’ll be looking at the last 10 years of Hugo nomination data, to see what it takes to get onto the final Hugo ballot.

Background: If you already know this information, by all means skip ahead.

TheHugoAwards.org themselves provide an intro to the Hugos:

The Hugo Awards, presented annually since 1955, are science fiction’s most prestigious award. The Hugo Awards are voted on by members of the World Science Fiction Convention (“Worldcon”), which is also responsible for administering them.

Every year, the attending or supporting members of the Worldcon go through a process to nominate and then vote on the Hugo awards. There are a great many categories (it’s changed over the years; we’re at 16 Hugo categories + the Campbell Award, which isn’t a Hugo but is voted on at the same time by the same people) ranging from Best Novel down to more obscure things like Best Semiprozine and Best Fancast.

If you’re unfamiliar with the basics of the award, I suggest you consult the Hugo FAQs page for basic info. The important bits for us to know here are how the nomination process works: every supporting and attending member can vote for up to 5 things in each category, and each of those votes counts equally. This means that someone who votes for 5 different Best Novels has 5 times as much influence as a voter who only votes for 1. Keep that wrinkle in mind as we move forward.

The final Hugo ballot is made up of the 5 highest vote-getters in each category, provided that they appear on at least 5% of the nominating ballots in that category. This 5% rule has come into play several times in the last few years, particularly in the Short Story category.
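To make the nominating mechanics concrete, here is a minimal sketch of that counting rule (approval-style tallying, top five, 5% floor); it glosses over real-world details like ties and withdrawals:

```python
from collections import Counter

def nomination_finalists(ballots, slots=5, floor=0.05):
    """Each ballot lists up to five works; every listing counts equally.
    Return the top `slots` works that appear on at least `floor` of the
    category's ballots (the 5% rule)."""
    counts = Counter(work for ballot in ballots for work in ballot)
    cutoff = floor * len(ballots)
    eligible = [(work, n) for work, n in counts.most_common() if n >= cutoff]
    return eligible[:slots]

# Tiny made-up example: six ballots, some listing one work, some several.
ballots = [["A", "B"], ["A"], ["A", "C"], ["B", "C"], ["C", "D"], ["A"]]
print(nomination_finalists(ballots, slots=3))  # [('A', 4), ('C', 3), ('B', 2)]
```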

Methodology: I looked through the Hugo Award nominating stats, archived at TheHugoAwards.org, and manually entered the highest nominee, the lowest nominee, and the total number of ballots (when available) for each Hugo category. Worldcon voting packets are not particularly compatible with data processing software, and it’s an absolute pain to pull the info out. Hugo committees, if you’re listening, create comma separated value files!

I chose 10 years as a range for two reasons. First, the data is easily available for that time range, and it gets harder to find for earlier years. The Hugo website doesn’t have the 2004 data readily linked, for instance. While I assume I could find it if I hunted hard enough, it was already tedious enough to enter 10 years of data. Second, my fingers get sore after so much data entry!

Since the Worldcon location and organizing committees change every year, the kind of data included in the voting results packet varies from year to year as well. Most of the time, they tell us the number of nominating ballots per category; some years they don’t. Some have gone into great detail (number of unique works nominated, for instance), but usually they don’t.

Two methodological details: I treated the Campbell as a Hugo for the purposes of this report: the data is very similar to the rest of the Hugo categories, and they show up on the same ballot. That may irk some people. Second, there have been a number of Hugo awards declined or withdrawn (for eligibility reasons). I marked all of those on the Excel spreadsheet, but I didn’t go back and correct those by hand. I was actually surprised at how little those changes mattered: most of the time when someone withdrew, it affected the data by only a few votes (the next nominee down had 20 instead of 22 votes, for instance). The biggest substantive change was a result of Gaiman’s withdrawal last year, which resulted in a 22 vote swing. If you want to go back and factor those in, feel free.

Thanks to all the Chaos Horizon readers who helped pull some of the data for me!

Here’s the data file as of 5/5/2015: Nominating Stats Data. I’ll be adding more data throughout, and updating my file as I go. Currently, I’ve got 450 data points entered, with more to come. All data on Chaos Horizon is open; if you want to run your own analyses, feel free to do so. Dump a link into the comments so I can check it out!

Results: Let’s look at a few charts before I wrap up for today. I think the best way to get a holistic overview of the Hugo Award nominating numbers is to look at averages. Across all the Hugo categories and the Campbell, what was the average number of ballots per category, the average votes for the top nominee (i.e. the work that took #1 in the nominations), and the average votes for the low nominee (the work that placed #5 in the nominations)? That’s going to set down a broad view and allow us to see what exactly it takes (on average) to get a Hugo nom.

Of course, every category works differently, and I’ll be more closely looking at the fiction categories moving forward. The Hugo is actually many different awards, each with slightly different statistical patterns. This makes “fixing” the Hugos by one change very unlikely: anything done to smooth the Best Novel category, for instance, is likely to destabilize the Best Short Story category, and vice versa.

On to some data:

Table 1: Average Number of Ballots, Average Number of Votes for High Nominee, and Average Number of Votes for Low Nominee for the Hugo Awards, 2006-2015
Average Ballots Table

This table gives us a broad holistic view of the Hugo Award nominating data. What I’ve done is taken all the Hugo categories and averaged them. We have three pieces of data for each year: average ballots per category (how many people voted), average number of votes for the high nominee, and average votes for the low nominee. So, in 2010, an average of 362 people voted in each category, and the top nominee grabbed 88 votes, the low nominee 47.

Don’t worry: we’ll get into specific categories over the next few days. Today, I want the broad view. Let’s look at this visually:

Chart 1 Average Ballots, High Nom, Low Nom

2007 didn’t include the number of ballots per category, thus the missing data in the graph. You can see in this graph that the total number of ballots is fairly robust, but that the number of votes for our nominated works are pretty low. Think about the space between the bottom two lines as the “sweet spot”: that’s how many votes you need to score a Hugo nomination in any given year. If you want to sweep the Hugos, as the Puppies did this year in several categories, you’d want to be above the Average High Nom line. For most years, that’s meant fewer than 100 votes. In fact, let’s zoom in on the High and Low Nom lines:

Chart 2 Average High Nom Low Nom

This graph lets us spot mathematical patterns that are hard to see when just looking at numbers. Take your hand and cover up everything after 2012 on Chart #2: you’ll see a steady linear increase in the high and low ranges over those years, rising from about 60 to 100 for the high nominee and 40 to 50 for the low nominee. Nothing too unusual there. If you take your hand off, you’ll see an exponential increase from 2012-2015: the numbers shoot straight up. That’s a convergence of many factors: the popularity of LonCon, the Puppies, and the increased scrutiny on the Hugos brought about by the internet.

What does all this mean? I encourage you to think and analyze this data yourself, and certainly use the comments to discuss the charts. Don’t get too heated; we’re a stats site, not a yell at each other site. There’s plenty of those out there. :).

Lastly, this report is only getting started. Over the next few days—it takes me a little bit of time to put together such data-heavy posts—I’ll be drilling more deeply into various categories, and looking at things like:
1. How do the fiction categories work?
2. What’s the drop off between categories?
3. How spread out (i.e. how many different works are nominated) are the categories?

What information would be helpful for you to have about the Hugos? Are you surprised by these low average nomination numbers, or are they what you’d expect? Is there a discrepancy between the “prestige” of the Hugo and the nomination numbers?

Nebula/Hugo Convergence 2010-2014: A Chaos Horizon Report

Time for a quick study on Hugo/Nebula convergence. The Nebula nominations came out about a week ago: how much will those nominations impact the Hugos?

In recent years, quite a bit. Ever since the Nebulas shifted their rules around in 2009 (moving from rolling eligibility to calendar year eligibility; see below), the Nebula Best Novel winner usually goes on to win the Hugo Best Novel. Since 2010, this has happened 4 out of 5 times (with Ancillary Justice, Among Others, Blackout/All Clear, and The Windup Girl, although Bacigalupi did tie with Mieville). That’s a whopping 80% convergence rate. Will that continue? Do the Nebulas and Hugos always converge? How much of a problem is such a tight correspondence between the two awards?

The Hugos have always influenced the Nebulas, and vice versa. The two awards have a tendency to duplicate each other, and there’s a variety of reasons for that: the voting pools aren’t mutually exclusive (many SFWA members attend WorldCon, for instance), the two voting pools are influenced by the same set of factors (reviews, critical and popular buzz, etc.), and the two voting pools have similar tastes in SFF. Think of how much attention a shortlist brings to those novels. Once a book shows up on the Nebula or Hugo slates, plenty of readers (and voters) pick it up. In the nearly 50 years when both the Hugo and Nebula have been given, the same novel has won both awards 23 out of 49 times, for a robust 47% convergence. As we’ll see below, this has varied greatly by decade: in some decades (the 1970s, the 2010s) the winners are basically identical. In other decades, such as the 1990s, there’s only a 20% overlap.

All of this is made more complex by which award goes first. Historically, the Hugo used to go first, often awarding books a Hugo some six months before the Nebula was awarded. Thanks to the Science Fiction Awards Database, we can find out that Paladin of Souls received its Hugo on September 4, 2004; Bujold’s novel received its Nebula on April 30, 2005. Did six months of post-Hugo hype seal the Nebula win for Bujold?

Bujold benefitted from the strange and now-defunct Nebula rule of rolling eligibility. The Locus Index to SF Awards gives us some insight into how the Nebula used to be out of sync with the Hugo:

The Nebulas’ 12-month eligibility period has the effect of delaying recognition of many works until nearly 2 years after publication, and throws Nebula results out of synch with other awards (Hugo, Locus) voted in a given calendar year. (NOTE – this issue will pass with new voting rules announced in early 2009; see above.)

The rule change went through in early 2009:

SFWA has announced significant rules changes for the Nebula Awards process, eliminating rolling eligibility and limiting nominations to work published during a given calendar year (i.e., only works published in 2009 will be eligible for the 2010 awards), as well as eliminating jury additions. The changes are effective as of January 2009 and “except as explicitly stated, will have no impact on works published in 2008 or the Nebula Awards process currently underway.”

Since 2009, eligibility has been straightened out: Hugo and Nebula eligibility basically follow the same rules, and now it is the Nebula that goes first. The Nebula tends to announce a slate in late February, and then gives the award in early May. The Hugo announces a slate in mid-April, and then awards in late August/early September, although those dates change every year.

Tl;dr: while it used to be the Hugos that influenced the Nebulas, since 2010 it is the Nebulas that influence the Hugos. We know that Nebula slates tend to come out while Hugo slate voting is still going on. This means that Hugo voters have a chance to wait until the Nebulas announce their nominations, and then adjust/supplement their voting as they wish. This year, there were about 3 weeks between the Nebula announcement and the close of Hugo voting: were WorldCon voters scrambling to read Annihilation and The Three-Body Problem in that gap? Remember, even a slight influence on WorldCon voters can drastically change the final slate.

But how much? Let’s take a look at the data from 2010-2014, or the post-rule change era. That’s not a huge data set, but the results are telling.

Chart 1: Hugo/Nebula Convergence in the Best Novel Categories, 2010-2014
Convergence Chart 1

This chart shows how many of the Nebula nominations showed up on the Hugo ballot a few weeks later. You can see that it comes to around 40% on average. Don’t get fooled by the 2014 data: Neil Gaiman’s The Ocean at the End of the Lane made both the Nebula and Hugo slates, but Gaiman declined his Hugo nomination. If we factored him in, we’d be staring at that same 40% across the board.

40% isn’t that jarring, since that only means 2 out of the 5 Hugo nominees. If we consider the overlap between reading audiences, critical and popular acclaim, etc., that doesn’t seem too far out of line.
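If you want to replicate the overlap math yourself, it’s nothing fancier than a set intersection between the two shortlists. Here’s a minimal sketch in Python; the slate contents below are hypothetical placeholders rather than the actual nominees, so you’d swap in the real lists from the official announcements.

```python
# Minimal sketch of the slate-overlap calculation (hypothetical titles, not real ballots).
nebula_slate = {"Novel A", "Novel B", "Novel C", "Novel D", "Novel E", "Novel F"}
hugo_slate = {"Novel B", "Novel D", "Novel G", "Novel H", "Novel I"}

shared = nebula_slate & hugo_slate
overlap_rate = len(shared) / len(hugo_slate)  # share of the Hugo ballot that also made the Nebula slate

print(f"{len(shared)} shared nominees ({overlap_rate:.0%} of the Hugo ballot)")
```

Run over the five real 2010-2014 ballot pairs, that ratio is what the chart above averages out to roughly 40%.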

It’s the last column that catches my eye: 4/5 joint winners, or 80% joint winners in the last 5 years. Only John Scalzi managed to eke out a win over Kim Stanley Robinson; otherwise we’d be batting 100%. We should also keep in mind the tie between The City and the City and The Windup Girl in 2010.

Nonetheless, my research shows that the single biggest indicator of winning a Hugo from 2010-2014 is whether or not you won the Nebula that year. Is this a timeline issue: does the Nebula winner get such a signal boost on the internet in May that everyone reads it in time for the Hugo in August? Or are the Hugo/Nebula voting pools converging to the point that their tastes are almost the same? Were the four joint-winners in the 2010s so clearly the best novels of the year that all of this is moot? Or is this simply a statistical anomaly?

I’m keeping a close eye on this trend. If Annihilation sweeps the Nebula and Hugos this year, the SFF world might need to take a step back and ask if we want the two “biggest” awards in the field to move in lockstep. This has happened in the past. Let’s take a look at the trends of Hugo/Nebula convergence by decade in the field:

Convergence Decade Chart

That’s an odd chart for you: the 1960s (only 4 years, though) had 25% joint winners, the 1970s jumped to 80%, we declined through the 1980s (50%) and the 1990s (20%), stayed basically flat in the 2000s (30%), and then jumped back up to 80% in the 2010s. Why so much agreement in the 1970s and 2010s, and so much disagreement in the 1990s and 2000s? The single biggest thing that changed from the 2000s to the 2010s was the Nebula rules: is that the sole cause of present-day convergence?

I don’t have a lot of conclusions to draw for you today. I think convergence is a very interesting (and complex) phenomenon, and I’m not sure how I feel about it. Should the Hugos and Nebulas go to different books? Should they only converge for books of unusual and universal acclaim? In terms of my own predictions, I expect the trend of convergence to continue: I think 2-3 of this year’s Nebula nominees will be on the Hugo ballot. If I had to guess, I’d bet that this year’s Nebula winner will also take the Hugo. Given this data, you’d be foolish to do anything else.

Literary Fiction and the Hugo and Nebula Awards for Best Novel, 2001-2014

A sub-category of my broader genre study, this post addresses the increasing influence of “literary fiction” on the contemporary Hugo and Nebula Awards for Best Novel, 2001-2014. I think the general perception is that the awards, particularly the Nebula, have begun nominating novels that include minimal speculative elements. Rather than simply trust the general perception, let’s look to see if this assumption lines up with the data.

Methodology: I looked at the Hugo and Nebula nominees from 2001-2014 and classified each book as either primarily “speculative” or “literary.” Simple enough, right?

Defining “literary” is a substantial and significant problem. While most readers would likely acknowledge that Cloud Atlas is a fundamentally different book than Rendezvous with Rama, articulating that difference in a consistent manner is complicated. The Hugos and Nebulas offer no help themselves. Their by-laws are written in an incredibly vague fashion that does not define what “Science Fiction or Fantasy” actually means. Here’s the Hugo’s definition:

Unless otherwise specified, Hugo Awards are given for work in the field of science fiction or fantasy appearing for the first time during the previous calendar year.

Without a clear definition of “science fiction or fantasy,” it’s left up to WorldCon or SFWA voters to set genre parameters, and they are free to do so in any way they wish.

All well and interesting, but that doesn’t help me categorize texts. I see three types of literary fiction entering into the awards:
1. Books by literary fiction authors (defined as having achieved fame before their Hugo/Nebula nominated book in the literary fiction space) that use speculative elements. Examples: Cloud Atlas, The Yiddish Policeman’s Union.
2. Books by authors in SFF-adjacent fields (primarily horror and weird fiction) that have moved into the Hugo/Nebulas. These books often allow readers to see the “horror” elements as either being real or imagined. Examples: The Drowning Girl, Perfect Circle, The Girl in the Glass.
3. Books by already well-known SFF authors who are utilizing the techniques/styles more common in literary fiction. Examples: We Are All Completely Beside Ourselves, Among Others.

That’s a broad set of different texts. To cover all those texts—remember, at any point you may push back against my methodology—I came up with a broad definition:

I will classify a book as “literary” if a reader could pick the book up, read a random 50 page section, and not notice any clear “speculative” (i.e. non-realistic) elements.

That’s not perfect, but there’s no authority we can appeal to for these classifications. Let’s see how it works:

Try applying this to Cloud Atlas. Mitchell’s novel consists of a series of entirely realistic novellas set throughout various ages of history and one speculative novella set in the future. If you just picked the book up and started reading, chances are you’d land in one of the realistic sections, and you wouldn’t know it could be considered an SFF book.

Consider We Are All Completely Beside Ourselves, Karen Joy Fowler’s rich meditation on science, childhood, and memory. Told in realistic fashion, it follows the story of a young woman whose parents raised a chimpanzee alongside her, and how this early childhood relationship shapes her college years. While this isn’t the place to decide if Fowler deserved a Nebula nomination—she won the PEN/Faulkner Award and was shortlisted for the Booker for this same book, so quality isn’t much of a question—the styles, techniques, and focus of Fowler’s book are intensely realistic. Unless you’re told it could be considered an SF novel, you’d likely consider it plain old realistic fiction.

With this admittedly imperfect definition in place, I went through the nominees. For the Nebula, I counted 13 out of 87 nominees (15%) that met my definition of “literary.” While a different statistician would classify books differently, I imagine most of us would be in the same ballpark. I struggled with The City & The City, which takes place in a fictional dual city and utilizes a noir plot; I eventually saw it as being more Pynchonesque than speculative, so I counted it as “literary.” I placed The Yiddish Policeman’s Union as literary fiction because of Chabon’s earlier fame as a literary author. After he establishes the “Jews in Alaska” premise, large portions of the book are straightforwardly realistic. Other books could be read either as speculative or not, such as The Drowning Girl. Borderline cases all went into the “literary” category for this study.

Given that I like the Chabon and Mieville novels a great deal, I’ll emphasize I don’t think being “literary” is a problem. Since these kinds of books are not forbidden by the Hugo/Nebula by-laws, they are fair game to nominate. These books certainly change the nature of the award, and there are real inconsistencies—no Haruki Murakami nominations, no The Road nomination—in which literary SFF books get nominated.

As for the Hugos, only 4 out of 72 nominees met my “literary” definition. Since the list is small, let me name them here: The Years of Rice and Salt (Robinson’s realistically told alternative history), The Yiddish Policeman’s Union, The City & The City, and Among Others. Each of those pushes the genre definitions of speculative fiction. Two are flat-out alternative histories, a category traditionally considered part of SFF, although I think the techniques used by Robinson and Chabon are very reminiscent of literary fiction. The Mieville is an experimental book, and the Walton is a book as much “about SFF” as SFF. I’d note that 3 of those 4 (all but the Robinson) received Nebula nominations first, and that Nebula noms have a huge influence on the Hugo noms.

Let’s look at this visually:

LitFic Nominees

Even with my relatively generous definition of “literary,” that’s not a huge encroachment. Roughly 1 in 6 of the Nebula noms have been from the literary borderlands, which is lower than what I’d expected. While 2014 had 3 such novels (the Fowler, Hild, and The Golem and the Jinni), the rest of the 2010s had about 1 borderline novel a year.

The Hugos have been much less receptive to these borderline texts, usually only nominating them once the Nebula awards have done so. We should note that both Chabon and Walton won, once again reflecting the results of the Nebula.

So what can we make of this? The Nebula nominates “literary” books about 1 in 6 times, or roughly once per year. The Hugo does this much more infrequently, and usually only when a book catches fire in the Nebula process. While this represents a change in the awards, particularly the Nebula, it is nowhere near as rapid or significant as the changes regarding fantasy (which sits around 50% for the Nebula and 30% for the Hugo). I know some readers think “literary” stories are creeping into the short story categories; I’m not an expert on those categories, so I can’t meaningfully comment.

I’m going to use the 15% Nebula and 5% Hugo “literary” numbers to help shape my predictions. I may have been overestimating the receptiveness of the Nebula to literary fiction; this study suggests we’d see either Mitchell or Mandel in 2015, not both. Here’s the full list of categorizations. I placed a 1 by a text if it met the “literary” definition: Lit Fic Study.
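If you want to re-run that tally, the math is just a count of the 1-flags divided by the number of nominees for each award. Here’s a minimal sketch, assuming the categorizations are exported as a CSV with an “award” column and a 0/1 “literary” column; the file name and column names are my assumptions, not the actual layout of the linked spreadsheet.

```python
import csv
from collections import defaultdict

# Tally the share of "literary" nominees per award from a CSV export.
# The file name and column names ("award", "literary") are assumed, not taken from the linked file.
totals = defaultdict(lambda: [0, 0])  # award -> [literary count, total nominees]

with open("lit_fic_study.csv", newline="") as f:
    for row in csv.DictReader(f):
        flagged, total = totals[row["award"]]
        totals[row["award"]] = [flagged + int(row["literary"] or 0), total + 1]

for award, (flagged, total) in totals.items():
    print(f"{award}: {flagged}/{total} = {flagged / total:.0%} literary")
```

With the counts reported above, that works out to 13/87 (15%) for the Nebula and 4/72 (roughly 5%) for the Hugo.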

Questions? Comments?

The Hugo and Nebula Awards and Genre, Part 7

We’re knee-deep in these awards now. Yesterday, I looked at whether or not it makes sense to break the Nebula (2001-2014) down into sub-genres (primary/secondary world, contemporary/historical, epic series/stand-alone). Today, we’ll apply that same methodology to the Hugo, so you might want to take a look at Part 6 to refresh your memory on the methodology.

By my count, there were 20 fantasy novels nominated for the Hugo between 2001-2014. Here’s the primary/secondary breakdown for the nominees:

Chart 9 Hugo Primary Nominees

A reasonable split, and this reflects what I’d expect. Secondary world fantasies, particularly epic series, are a little more populist/mass-market, and the Hugo is usually more receptive to those kinds of books. The secondary world novels are clustered around well-known authors: 3 Martin novels, 4 Mieville novels, 2 Bujold, and then books by Jemisin, Ahmed, and Jordan/Sanderson. The primary world novels show a better range of authors: Gaiman has 2, but 6 other authors have one each, headlined by Rowling and Walton. Now, with that 60/40 break, you’d expect secondary world novels to do well in the winner’s circle. The stats show the opposite is true:

Chart 10 Primary Hugo Winners

There have been seven fantasy winners from 2001-2014, and primary world novels have dominated: Rowling, Gaiman (twice), Clarke, and Walton. Only Mieville and Bujold have grabbed wins for secondary world novels. That’s quite a flip from Chart 9 to Chart 10. While the data set is small, we should acknowledge that the Hugo voters are willing to put secondary world fantasy on the slate, but haven’t voted it into the winner’s circle very often. The City and the City is definitely a genre-boundary-pushing book, and Bujold probably grabbed her win on the strength of her prior Hugo reputation (she’d already won Best Novel three times before Paladin of Souls). Despite the enormous popularity of secondary world fantasy, it’s not a sub-genre that wins the Hugo (or the Nebula, for that matter). Is that destined to change?

This, for me, is the “tipping point” of the modern Hugo. When will a book like A Game of Thrones win? Is Martin destined for a win once Winds of Winter comes out? Or will another author break this epic fantasy “glass ceiling”? In terms of raw popularity, a book like Words of Radiance trounces most fantasy and SF competitors, but the bias against a book like that is likely to prevent Sanderson from winning (or even being nominated). As fantasy becomes more popular, though, will this bias hold up?

Let’s break this down into sub-genres:

Chart 11 Hugo Nominee Subgenre

A fairly even division, although “stand-alone secondary world fantasy” is propped up by Mieville’s 4 nominations in that sub-genre. The winners list tells a different story: the “epic series” wedge drops out entirely.

Chart 12 Hugo Winner Subgenre

It’s these kinds of statistical oddities I find fascinating. If you asked most people to define fantasy, the “epic series” idea would pop up very quickly. Probably Tolkien first, then Martin, and then on through the entire range of contemporary fantasy: Robin Hobb, Patrick Rothfuss, N.K. Jemisin, Brandon Sanderson, Elizabeth Bear, Saladin Ahmed, Mark Lawrence, Brent Weeks, and on and on and on. So many well-known (and well-selling) writers are working in this field, and yet the Hugo has never been awarded to this kind of text. The closest you get is Paladin of Souls. Admittedly, the Bujold is pretty close, but her epic Chalion trilogy is clearly three stand-alone texts linked by a shared world.

There’s a tension here that will likely be resolved in the next 10 or so years. Can the Hugo continue to ignore the fantasy series? Is it offering a true survey/accounting of the SFF field without it?

I’m going to take a few days break from the genre study, and then wrap this up by looking at the idea of literary fiction in the Hugos and Nebulas.
