Archive | May 2015

2015 Nebula Prediction: Final Results

Here we go . . . the official Chaos Horizon Nebula prediction for 2015!

Disclaimer: Chaos Horizon uses data-mining techniques to try to predict the Hugo and Nebula awards. While the model is explained in depth on my site (this is a good post to start with), the basics are that I look for past patterns in the awards and then use those to predict future behavior.

Chaos Horizon predictions are not based on my personal readings or opinions of the books. There are flaws with this model, as there are with any model. Data-mining will miss sudden changes in the field, and it does not do a good job of taking into account the passion of individual readers. So take Chaos Horizon lightly, as an interesting mathematical perspective on the awards, and supplement my analysis with the many other discussions available on the web.

Lastly, Chaos Horizon predicts who is most likely to win based on past awards, not who “should” win in a more general sense.

[Cover gallery: Ancillary Sword, The Goblin Emperor, The Three-Body Problem, Annihilation, Coming Home, Trial by Fire]

1. Ann Leckie, Ancillary Sword: 19.4%
2. Katherine Addison, The Goblin Emperor: 19.2%
3. Cixin Liu and Ken Liu (translator), The Three-Body Problem: 17.7%
4. Jeff VanderMeer, Annihilation: 16.8%
5. Jack McDevitt, Coming Home: 16.5%
6. Charles Gannon, Trial by Fire: 10.4%

The margin is incredibly small this year, indicating a very close race. Last year, Leckie had an impressive 5% lead on Gaiman and a 14% lead over third-place Hild in the model. This year, Leckie has a scant .2% lead on Addison, and the top 5 candidates are all within a few percentage points of each other. I think that's an accurate assessment of this year's Nebula: there is no breakaway winner. You've got a very close race that's going to come down to just a few voters. A lot of this is going to swing on whether or not voters want to give Leckie a second award in two years, whether they prefer fantasy to science fiction (Addison would win in that case), how receptive they are to Chinese-language science fiction, whether they see Annihilation as SF and complete enough to win, etc.

Let's break down each of these by author to see the strengths and weaknesses of their candidacies.

Ancillary Sword: Leckie's sequel to her Hugo- and Nebula-winning Ancillary Justice avoided the sophomore jinx. While perhaps less inventive and exciting than Ancillary Justice, many reviewers and commenters noted that it was a better overall novel, with stronger characterization and writing. Ancillary Sword showed up on almost every year-end list and has already received the British Science Fiction Award. This candidacy is complicated, though, by the rarity of back-to-back Nebula wins: she would join Samuel R. Delany, Frederik Pohl, and Orson Scott Card as the only back-to-back winners. Given how early Leckie is in her career (this is only her second novel), are SFWA voters ready to make that leap? Leckie is also competing against 4 other SF novels: it's possible she could split the vote with someone like Cixin Liu, leaving the road open for Addison to win.

Still, Leckie is the safe choice this year. Due to all the attention and praise heaped on Ancillary Justice, Ancillary Sword was widely read and reviewed. More readers = more voters, even in the small pool of SFWA authors. People that are only now getting to The Three-Body Problem may have read Ancillary Sword months ago. I don’t think you can overlook the impact of this year’s Hugo controversy on the Nebulas: SFWA authors are just as involved in all those discussions, and giving Leckie two awards in a row may seem like a safe and stable choice amidst all the internet furor. If Ancillary Justice was a consensus choice last year, Ancillary Sword might be the compromise choice this year.

The Goblin Emperor: My model likes Addison's novel because it's the only fantasy novel in the bunch. If there is even a small pool of SFWA voters (5% or so) who only vote for fantasy, Addison has a real shot here. The Goblin Emperor has also had a great year: solid placement on year-end lists, a Hugo nomination, and very enthusiastic fan reception. Of the six Nebula nominees this year, it's the most different in terms of its approach to genre (along with Annihilation, I guess), giving a very non-standard take on the fantasy novel. The Nebula has liked those kinds of experiments recently. The more you think about it, the more you can talk yourself into an Addison win.

The Three-Body Problem: The wild-card of the bunch, and the one my model has the hardest time dealing with. It came out very late in the year—November—and that prevented it from making as many year-end lists as other books. Secondly, how are SFWA voters going to treat a Chinese-language novel? Do they stress the A (America) in SFWA? Or do they embrace SF as a world genre? The Nebula Best Novel has never gone to a foreign-language novel before. Will it start now?

Lastly, do SFWA voters treat the novel as co-authored by Ken Liu (he translated the book), who is well known and well liked by the SFWA audience? Ken Liu is actually up for a Nebula this year in the Novella category for "The Regular." For the purposes of the model, I ended up treating Cixin Liu's novel as co-authored by Ken Liu. Since Ken Liu was out promoting the novel heavily, Cixin Liu didn't get the reception of a new author; I think many readers came to The Three-Body Problem because of Ken Liu's reputation. If I hadn't made that choice, the novel would drop a percentage point in the prediction, from 3rd to 5th place.

The Three-Body Problem hasn't always received the best reviews. Check out this fairly tepid take on the novel published this week by Strange Horizons. Liu is writing in the tradition of Arthur C. Clarke and other early SF writers, where character is not the emphasis of the book. If you're expecting to be deeply engaged by the characters of The Three-Body Problem, you won't like the novel. Given that the Nebula has been leaning literary over the past few years, does that doom its chances? Or will the inventive world-building and crazy science of the book push it to victory? This is the novel I feel most uncertain about.

Annihilation: I had VanderMeer's incredibly well-received start to his Southern Reach trilogy as the frontrunner for most of the year. However, VanderMeer has been hurt by his lack of other SF award nominations this season: no Hugo, and out of all the other awards he's only made the Campbell. I think this reflects some of the difficulty of Annihilation. It's a novel that draws on weird fiction, environmental fiction, and science fiction, and readers may be having difficulty placing it in terms of genre. Add in that it is very short (if it wins, I believe it would be the shortest Nebula winner yet) and clearly the first part of something bigger: is it stand-alone enough to win? The formula doesn't think so, but formulas can be wrong. I wouldn't be stunned by a VanderMeer win, but it seems a little unlikely at this point.

Coming Home: Ah, McDevitt. The ghost of the Nebula Best Novel category: he’s back for his 12th nomination. He’s only won once, but could it happen again? There’s a core of SFWA voters who must love Jack McDevitt. If the vote ends up getting split between everyone else, could they drive McDevitt to another victory? It’s happened once already, in 2007 with Seeker. I don’t see it happening, but stranger things have gone down in the Nebula.

Trial by Fire: The model hates Charles Gannon, even though he actually did well last year: according to my sources, he placed 3rd in last year's Nebula voting. Still, this is the sequel to that book, and sequels tend to move down in the voting. Gannon's lack of critical acclaim and lack of Hugo success are what kill him in the model.

Remember, the model is a work in progress. This is only my second year trying to do this. The more data I collect, and the more we see how individual Nebulas and Hugos go, the better the model will get. As such, just treat the model as a "for fun" thing. Don't bet your house on it!

So, what do you think? Another win for Leckie? A fantasy win for Addison? A late tail-wind win for Liu?


2015 Nebula Prediction: Indicators and Weighting

One last little housekeeping post before I post my prediction later today. Here are the 10 indicators I settled on using:

Indicator #1: Author has previously been nominated for a Nebula (78.57%)
Indicator #2: Author has previously been nominated for a Hugo (71.43%)
Indicator #3: Author has previously won a Nebula for Best Novel (42.86%)
Indicator #4: Has received at least 10 combined Hugo + Nebula noms (50.00%)

Indicator #5: Novel is science fiction (71.43%)
Indicator #6: Places on the Locus Recommended Reading List (92.86%)
Indicator #7: Places in the Goodreads Best of the Year Vote (100.00%)
Indicator #8: Places in the Top 10 on the Chaos Horizon SFF Critics Meta-List (100.00%)

Indicator #9: Receives a same-year Hugo nomination (64.29%)
Indicator #10: Nominated for at least one other major SFF award (71.43%)

I reworded Indicator #4 to make the math a little clearer. Otherwise, these are the same as in my Indicator posts, which you can get to by clicking on each link.

If you want to see how the model is built, check out the "Building the Model" posts.

I’ve tossed around including a “Is not a sequel” indicator, but that would take some tinkering, and I don’t like to tinker at this point in the process.

The Indicators are then weighted according to how well they've worked in the past (a quick sketch of how everything combines follows the list of weights). Here are the weights I've used this year:

Indicator #1: 8.07%
Indicator #2: 8.65%
Indicator #3: 13.78%
Indicator #4: 11.93%
Indicator #5: 10.66%
Indicator #6: 7.98%
Indicator #7: 7.80%
Indicator #8: 4.24%
Indicator #9: 16.54%
Indicator #10: 10.34%
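If you want to see the mechanics in code, here's a minimal sketch of how a weighted-indicator model like this combines everything. To be clear, this is an illustration, not my actual spreadsheet: the weights are the ones listed above, but the indicator "hits" are invented placeholders. Each nominee's raw score is the sum of the weights for the indicators it satisfies, and the raw scores are renormalized so the whole field sums to 100%.

```python
# A minimal sketch of combining binary indicator "hits" into win
# percentages. The weights are the ones listed above; the hit sets
# below are placeholders, not the real 2015 data.

WEIGHTS = {
    1: 0.0807, 2: 0.0865, 3: 0.1378, 4: 0.1193, 5: 0.1066,
    6: 0.0798, 7: 0.0780, 8: 0.0424, 9: 0.1654, 10: 0.1034,
}

def raw_score(hits):
    """Sum the weights of every indicator this nominee satisfies."""
    return sum(WEIGHTS[i] for i in hits)

def predict(field):
    """Renormalize raw scores so the whole field sums to 100%."""
    scores = {name: raw_score(hits) for name, hits in field.items()}
    total = sum(scores.values())
    return {name: 100 * s / total for name, s in scores.items()}

# Placeholder hit sets, purely for illustration:
field = {
    "Nominee A": {1, 2, 3, 4, 5, 6, 7, 9, 10},
    "Nominee B": {5, 6, 7, 9},
    "Nominee C": {1, 2, 5},
}
for name, pct in sorted(predict(field).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {pct:.1f}%")
```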

Lots of math, I know, but I'm going to post the prediction shortly!

2015 Nebula Prediction: Indicators #9-#10

Here are the last two indicators currently in my Nebula formula. These try to chart how well a book is doing in the current awards season, based on the assumption that if you're able to get nominated for one award, you're more likely to win another. Note that it's nominations that seem to correlate, not necessarily wins. Many of the other SFF awards are juried, so winning them isn't as good a measure of the kind of popular vote the Hugo and Nebula use. Nominations raise your profile and get your book buzzed about, which helps pull in those votes. If something gets nominated 4-5 times, it becomes the "must-read" of the year, and that leads to wins.

Indicator #9: Receives a same-year Hugo nomination (64.29%)
Indicator #10: Nominated for at least one other major SFF award (71.43%)

I track things like the Philip K. Dick, the British Science Fiction Award, the Tiptree, the Arthur C. Clarke, the Campbell, and the Prometheus. Interestingly, the major fantasy awards—the World Fantasy Award, the British Fantasy Award—don't come out until later in the year. This places someone like Addison at a disadvantage in these measures. We need an early-in-the-year fantasy award!

In recent years, the Nebula has been feeding into the Hugo and vice-versa. Since the two awards are talked about so much in the same places, getting a Nebula nom raises your Hugo profile, which in turn feeds back and shapes the conversation about the Nebulas. If everyone on the internet is discussing Addison, Leckie, and Liu, someone like VanderMeer or Gannon can fall through the cracks. More exposure = more chances of winning.

So, how do things look this year?

Table 5: 2015 Awards Year Performance for 2015 Nebula Nominees
[table image]

The star by Leckie's name means she won the BSFA this year. 2015 is very different from 2014: at this time last year, Ancillary Justice was clearly dominating, having already picked up nominations for the Clarke, Campbell, BSFA, Tiptree, and Dick. She'd go on to win the Clarke, BSFA, Hugo, and Nebula.

This year there isn't a consensus book powering to all the awards. I thought VanderMeer would garner more attention, but he missed a Philip K. Dick Award nomination, and I figured the Clarke would have been sympathetic to him as well. Those are real storm clouds for Annihilation's Nebula chances. Maybe the book was too short or too incomplete for readers. Ancillary Sword isn't repeating Leckie's 2014 dominance, but it has already won the BSFA. Liu has some momentum beginning to build for him, while Gannon and McDevitt are languishing.

So those are the 10 factors I’m currently weighting in my Nebula prediction. I’ve been tossing around the idea of adding a few more (publication date, sequel, book length), but I might wait until next year to factor them in. I’d like to factor in something about popularity but I haven’t found any means of doing that yet.

What’s left? Well, we have to weight each of these Indicators, and once I do that, I can run the numbers to see who leads the model!

2015 Nebula Prediction: Indicators #6-#8

These indicators try to wrestle with the idea of critical and reader reception by charting how the Nebula nominees do on year-end lists. While these indicators are evolving as I put together my “Best of Lists”, these are some of our best measures of critical and reader response, which directly correlate to who wins the awards.

Right now, I'm using a variety of lists: the Locus Recommended Reading List (which has included the winner 13 out of the past 14 years, with The Quantum Rose being the lone exception), the Goodreads Best of the Year Vote (more populist, but it has at least listed the winner in the Top 20 in each of the 4 years it's been fully running, so that's at least promising), and then a very lightly weighted version of my SFF Critics Meta-List. With a few more years of data, I'll split this into a "Hugo" list and a "Nebula" list, and we should have some neatly correlated data. Until then, one nice thing about my model is that it allows me to decrease the weights of Indicators I'm testing out. The Meta-List will probably only account for 2-3% of the total formula, with the Goodreads list at around 5% and the Locus at around 9%. I can't calculate the weights until I go through all the indicators.

Indicator #6: Places on the Locus Recommended Reading List (92.86%)
Indicator #7: Places in the Goodreads Best of the Year Vote (100.00%)
Indicator #8: Places in the Top 10 on the Chaos Horizon SFF Critics Meta-List (100.00%)

Table 4: Critical/Reader Reception for 2015 Nebula Nominees
[table image]

There are separate Fantasy and SF Goodreads lists, hence the SF and F indicators. These are fairly bulky lists (the Locus runs 40+ titles, the Goodreads the same, etc.), so it isn't too hard to place on one of them. If you don't, that's a real indicator that your book isn't popular enough (or popular enough in the right places) to win a mainstream award. So these indicators punish books that miss the lists more than they help the books that make them.

Results are as expected: Gannon and McDevitt suffer in these measures a great deal. Their books did not garner the same kind of broad critical/popular acclaim that other authors did. Cixin Liu missing the Goodreads vote might be surprising, but The Three-Body Problem came out very late in the year (November) and didn't have time to pick up steam for a December vote. This is something to keep your eye on: did Liu's novel come out too late in the year to pick up momentum for the Nebulas? If The Three-Body Problem ends up losing, I might add a "When did this come out?" Indicator for the 2016 Nebula model. Alternatively, these lists may have mismeasured The Three-Body Problem because of its late arrival, in which case they would need to be weighted more lightly.

The good thing about the formula is that the more data we have, the more we can correct things. Either way Chaos wins!

2015 Nebula Prediction: Indicator #5

One of my simplest indicators:

Indicator #5: Novel is science fiction (71.43%)

The Nebula—just look at that name—still has a heavy bias towards SF books, even if this has been loosening in recent years. See my Genre Report for the full stats. In its 33-year history, only 7 fantasy novels have taken home the award. Chaos Horizon only uses data since 2001 in my predictions, but we're still only looking at 4 fantasy winners among the last 14.

How do this year’s nominees stack up?

Table 3: Genre of 2015 Nebula Award Nominees

[table image]

Obviously, it's a heavy SF year, with 5 of the 6 Nebula nominees being SF novels. There were plenty of Nebula-worthy fantasy books to choose from, including something like City of Stairs, but the SFWA voters went traditional this year. I think Annihilation could be considered a "borderline" or "cross-genre" novel, although I see most people classifying it as Science Fiction.

Ironically, all of this actually helps Addison's chances with the formula. Think about that logically: fantasy fans only have 1 book to vote for, while SF fans are split amongst 5 choices. The formula won't give Addison a huge boost (the probability chart works out to 28.57% for Addison, 14.29% for everyone else), but it's the one part of the formula where she does better than everyone else.
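For the curious, here's the arithmetic behind those two numbers. This is a quick sketch assuming, as the figures above imply, that the indicator's historical win rate is split evenly among the books in each genre:

```python
# Indicator #5: SF has won 71.43% of the time (10 of the last 14 winners).
p_sf = 10 / 14        # ~0.7143, the SF share of past wins
p_f = 1 - p_sf        # ~0.2857, the fantasy share

n_sf, n_f = 5, 1      # this year's ballot: 5 SF novels, 1 fantasy

print(f"each SF nominee: {p_sf / n_sf:.2%}")     # ~14.29%
print(f"the fantasy nominee: {p_f / n_f:.2%}")   # ~28.57%
```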

Next time, we’ll get into the indicators for critical reception.

2015 Nebula Prediction: Indicators #1-#4

Let's leave the Hugo Award behind for now—the controversy swirling around that award has distracted Chaos Horizon, so it's time to get back on track doing what this site was designed to do: generating numerical predictions for the Nebula and Hugo Awards based on data-mining principles.

Over the next three to four days, I'll be putting up the various "Indicators" of the Nebula Award, and then we'll weight and combine those to get our final prediction. For a look at the methodology, check out this post and this post. If you're really interested, there's an even more in-depth take in my "Building the Nebula Model" posts. Bring caffeine!

With the basics of the model built, though, all that's left is updating the stats and plugging in this year's data. Here are Indicators #1-#4 (out of 11). These deal with past awards history:

Indicator #1: Author has previously been nominated for a Nebula (78.57%)
Indicator #2: Author has previously been nominated for a Hugo (71.43%)
Indicator #3: Author has previously won a Nebula for Best Novel (42.86%)
Indicator #4: Author is the most honored nominee (50.00%)

The best way to understand each of those is as an opinion/prediction of the Nebula based on data from 2001-2014. So, 78.6% of the time, someone who has previously been nominated for the Nebula wins the Nebula Best Novel award, and so on. The only tricky one here is the “Author is the most honored nominee”: I add up the total number of Hugo Noms + Wins + Nebula Noms + Wins to get a rough indicator of “total fame in the field.” 50% of the time, the Nebula voters just give the Nebula Best Novel award to the most famous nominee.
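Here's a quick sketch of that tally. The function is just the sum described above; the counts are placeholders, not the real table data (that's coming below):

```python
# Indicator #4 tally: Hugo noms + Hugo wins + Nebula noms + Nebula wins.
# The counts below are placeholders, not the actual table data.

def total_fame(hugo_noms, hugo_wins, nebula_noms, nebula_wins):
    """Rough 'total fame in the field' score for one nominee."""
    return hugo_noms + hugo_wins + nebula_noms + nebula_wins

field = {
    "Veteran A": total_fame(3, 1, 17, 1),   # long Nebula history
    "Veteran B": total_fame(5, 2, 6, 1),
    "Rookie C": total_fame(0, 0, 1, 0),
}
most_honored = max(field, key=field.get)
print(most_honored, "with a total of", field[most_honored])
```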

All of these indicators flow from the logical idea that the Nebula is a "repetitive" award: the voters tend to give the Best Novel award to the same people over and over again. Take a look at my Repeat Nominees study for justification of that. This repetition is also a kind of "common sense" conclusion: to win a Nebula you have to be known by Nebula voters. What's the best way to be known to them? To have already been part of the Nebulas.

Don’t think this excludes rookie authors though—Leckie did great last year even in my formula, and that’s why these are only Indicators #1-#4. The other indicators tackle things like critical reception and same-year award nominations. Still, they give us a good start. Let’s check this year’s data:

Tables 1 and 2: Past Awards History for 2015 Nebula Nominees

[table images]

Legend:
The chart is for award nominations prior to this year’s award season, so no 2015 awards are added in
Nebula Wins = Prior Nebula Wins (any category)
Nebula Noms = Prior Nebula Nominations (any category)
Hugo Wins = Prior Hugo Wins (any category)
Hugo Noms = Prior Hugo Nominations (any category)
Total = sum of N. Wins, N. Noms, H. Wins, and H. Noms
Total rank = Ranking of authors based on their Total number of Wins + Nominations
Best Novel = Has author previously won the Nebula award for Best Novel?
Gray shading of boxes added solely for readability
All data mined from http://www.sfadb.com

Jack McDevitt breaks out of the pack here: his prior 17 Nebula nominations (!) make him the most familiar to the Nebula voting audience. He only has 1 win for those 17 nominations, though, so I don't think he's in line for a second. McDevitt is going to suffer in Indicators #6-#10, as his books tend not to get much critical acclaim. McDevitt currently has a 10% win rate for the Nebula Best Novel award. If he keeps getting noms, I'm going to have to add a "McDevitt exception" to keep the formula working.

Jeff VanderMeer’s Hugo nominations are all in Best Related Work, not for fiction, although his other Nebula nomination is for Finch. He’s well-known in the field, although Annihilation hasn’t picked up many award nominations for 2015.

Leckie, who was a rookie last year, now does very well across the board: her prior Nebula noms, Best Novel Nebula win, and Hugo nom will all give her a boost in the formula. The real wild-card in Indicators #1-#4 is The Three-Body Problem. Cixin Liu's novel was translated by Ken Liu, who is very well known to the Nebula and Hugo audience: he has 3 Hugo nominations (2 wins) and 6 Nebula nominations (1 win), making him one of the most nominated figures in recent years. If SFWA voters think of The Three-Body Problem as being co-authored by Ken Liu, they're more likely to pick it up, and that will really boost the novel's chances. I haven't decided the best way to treat The Three-Body Problem for my formula. What do you think? Should I include Ken Liu's nominations as part of the profile for The Three-Body Problem?

Tomorrow, we’ll start looking at Indicators tracking genre and critical reception.

Hugo Award Nomination Ranges, 2006-2015, Part 5

Let’s wrap this up by looking at the rest of the data concerning the Short Fiction categories of Novella, Novelette, and Short Story. Remember, these stories receive far fewer votes than the Best Novel category, and they are also less centralized, i.e. the votes are spread out over a broader range of texts. Let’s start by looking at some of those diffusion numbers:

Table 9: Number of Unique Works and Number of Votes per Ballot for the Fiction Hugo Categories, 2006-2015
[table image]

Remember, the data is spotty because individual Worldcon committees have chosen not to provide it. Still, the table is very revealing: the Short Story category is far more diffuse (i.e. far more different works are chosen by the voters) than either the Novella, Novelette, or Novel categories. To look at this visually:

[Chart 10: Number of Unique Works]

In any given year, more than 3 times as many unique Short Stories are nominated as Novellas. Now, I imagine far more Short Stories are published in any given year, but this also means that it's much easier—much easier—to get a Novella nomination than a Short Story nomination. More voters may make something like the Short Story category more scattered, not more centralized, and this further underscores a main conclusion of this report: each Hugo category works differently.

This diffusion has some pretty profound implications for the nominating %. Remember, the Hugos have a 5% rule: if you don't appear on 5% of the nominating ballots, you don't make the final ballot. Let's look at the percentages of the #1 and the #5 nominee for the Short Fiction categories:

Table 10: High and Low Nominating % for Novella, Novelette, and Short Story Hugo Categories, 2008-2015
[table image]

I think these percentage numbers are our best measure of "consensus." Look at that 2011 Novella number of 35%: that means Ted Chiang's Lifecycle of Software Objects appeared on 35% of the ballots. That seems like a pretty compelling argument that SFF fans found Chiang's novella Hugo worthy (it won, for the record). In contrast, the Short Story category has been flirting with the 5% rule. In 2013, only 3 stories made it above 5%, and in 2014 only 4. You could interpret that as meaning there was not much agreement in those years as to what the "major" short stories were. If you think the 5% bar is too high, keep in mind that each ballot allows 5 nominations, and the average voter lists about 3 works. That means appearing on 5% of the ballots requires only around (5%/3) = 1.67% of the total vote. If a story can't manage that much support, is it Hugo worthy?
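To spell that arithmetic out, here's a tiny sketch with a hypothetical ballot count:

```python
# The 5% rule translated into vote share, assuming the average ballot
# lists about 3 works (out of the 5 slots allowed).
ballots = 1000                        # hypothetical nominating ballots
avg_works_per_ballot = 3
threshold = 0.05 * ballots            # ballots a work must appear on: 50
total_votes = ballots * avg_works_per_ballot   # 3000 votes cast in all
print(f"{threshold / total_votes:.2%} of all votes cast")  # 1.67%
```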

Now, this may be unfair. There might simply be too many venues, with too many short stories published, for people to narrow their thoughts down. Given the explosion of online publishing and the economics of either paying for stories or getting free stories (I'll let you guess which one readers like more), there may simply be too much "work" for readers to agree upon their 5 favorite stories in the nominating stage. Perhaps that's what the final stage is for, and it's fine for the nominating stage to have low % numbers. Still, these low numbers leave these categories very vulnerable to slate sweeps and other undue influences, "eligibility" posts included. Either way, this should be a point of discussion in any proposed change to the Hugo award.

The landscape of SFF publishing and blogging has changed rapidly over the last 10 years, and the Hugos have not made many changes to adapt to this new landscape. Some categories remain relatively healthy, with clear centralization happening in the nomination stage. Other categories are very diffuse, with little agreement amongst the multiplicity of SFF fans.

Add to these complexities the amount of scrutiny the Hugos are under: people—myself included, and perhaps worst of all—comb through the data, looking for patterns, oddities, and problems. No longer is the Hugo a distant award given in relative quiet at the Worldcon, with results trickling out through the major magazines. It's front and center in our instant-reaction world of Twitter and the blogosphere. A great many SFF fans seem to want the Hugo to be the definitive award, to provide some final statement on what SFF works were the best in any given year. I'm not sure any award can do that, or that it's fair to ask the Hugo to carry all that weight.

So, those rather tepid comments conclude this 5-part study of Hugo nominating ranges. All the data I used is right here, drawn primarily from the Hugo nominating packets and the Hugo Award website: Nominating Stats Data. While there are other categories we could explore, they essentially work along the same lines as what we've discussed so far. If you've got any questions, let me know, and I'll try to answer the best I can.

Hugo Award Nomination Ranges, 2006-2015, Part 4

We’re up to the short fiction categories: Novella, Novelette, and Short Story. I think it makes the most sense to talk about all three of these at once so that we can compare them to each other. Remember, the Best Novel nomination ranges are in Part 3.

First up, the number of ballots per year for each of these categories:

Table 6: Year-by-Year Nominating Ballots for the Hugo Best Novella, Novelette, and Short Story Categories, 2006-2015
[table image]

[Chart 7: Ballots, Short Fiction Categories]

A sensible-looking table and chart: the Short Fiction categories are all basically moving together, steadily growing. The Short Story has always been more popular than the other two, but only barely. Remember, we're missing the 2007 data, so the chart only covers 2008-2015. For fun, let's throw the Best Novel data onto that chart:

[Chart 8: All Fiction Categories]

That really shows how much more popular the Best Novel is than the other Fiction categories.

The other data I’ve been tracking in this Report is the High and Low Nomination numbers. Let’s put all of those in a big table:

Table 7: Number of Votes for High and Low Nominee, Novella, Novelette, Short Story Hugo Categories, 2006-2015
[table image]

Here we come to one of the big issues with the Hugos: the sheer lowness of these numbers, particularly in the Short Story category. Although the Short Story is one of the most popular categories, it is also one of the most diffuse. Take a glance at the far-right column: that's the number of votes the last-place Short Story nominee received. Through the mid-2000s, it took vote totals in the mid-teens to get a Hugo nomination in one of the most important categories. While that has improved in terms of raw numbers, it's actually gotten worse in terms of percentage (more on that later).

Here’s the Short Story graph; the Novella and Novelette graphs are similar, just not as pronounced:

[Chart 9: Short Story]

The Puppies absolutely dominated this category in 2015, more than tripling the Low Nom number. They were able to do this because the nominating numbers have been so historically low. Does that matter? You could argue that the Hugo nominating stage is not designed to yield the "definitive" or "consensus" or "best" ballot. That's reserved for the final voting stage, where the voting rules change from first-past-the-post to instant-runoff. To win a Hugo, even in a low year like 2006, you need a great number of affirmative votes and broad support. To get on the ballot, all you need is focused, passionate support, as proved by the Mira Grant nominations, the Robert Jordan campaign, or the Puppies ballots this year.

As an example, consider the 2006 Short Story category. In the nominating stage, we had a range of works that received a meager 28-14 votes, hardly a mandate. Eventual winner and oddly named story "Tk'tk'tk" was #4 in the nominating stage with 15 votes. By the time everyone got a chance to read the stories and vote in the final stage, the race for first place wound up being 231 to 179, with Levine beating Margo Lanagan's "Singing My Sister Down." That looks like a legitimate result; 231 people said the story was better than Lanagan's. In contrast, 15 nomination votes looks very skimpy. As we've seen this year, these low numbers make it easy to "game" the nominating stage, but, in a broader sense, they also make it very easy to doubt or question the Hugo's legitimacy.

In practice, the difference can be even narrower: Levine made it onto the ballot by 2 votes. There were three stories that year with 13 votes, and 2 with 12. If two people had changed their votes, the Hugo would have changed. Is that process reliable? Or are the opinions of 2—or even 10—people problematically narrow for a democratic process? I haven’t read the Levine story, so I can’t tell you whether it’s Hugo worthy or not. I don’t necessarily have a better voting system for you, but the confining nature of the nominating stage is the chokepoint of the Hugos. Since it’s also the point with the lowest participation, you have the problem the community is so vehemently discussing right now.

Maybe we don't want to know how the sausage is made. The community is currently placing an enormous amount of weight on the Hugo ballot, but does it deserve such weight? One obvious "fix" is to bring far more voters into the process—lower the supporting membership cost, invite other cons to participate in the Hugo (if you invited some international cons, it could actually be a "World" process every year), add a long-list stage (the first round selects 15 works, the next round reduces those to 5, then the winner), etc. All of these are difficult to implement, and they would change the nature of the award (more voters = more mainstream/populist choices). Alternatively, you can restrict voting at the nominating stage to make it harder to "game," either by limiting the number of nominees per ballot or through a more complex voting proposal. See this thread at Making Light for an in-progress proposal to switch how votes are tallied. Any proposed "fix" will have to deal with the legitimacy issue: can the Short Fiction categories survive a decrease in votes?

That’s probably enough for today; we’ll look at percentages in the short fiction categories next time.

John W. Campbell Memorial Award Finalists

The Finalists for the John W. Campbell Memorial Award have been announced:

Nina Allan, The Race
James L. Cambias, A Darkling Sea
William Gibson, The Peripheral
Daryl Gregory, Afterparty
Dave Hutchinson, Europe In Autumn
Simon Ings, Wolves
Cixin Liu (Ken Liu, translator), The Three-Body Problem
Emily St. John Mandel, Station Eleven
Will McIntosh, Defenders
Claire North, The First Fifteen Lives of Harry August
Laline Paull, The Bees
Adam Roberts, Bête
John Scalzi, Lock In: A Novel of the Near Future
Andy Weir, The Martian
Jeff VanderMeer, Area X (The Southern Reach Trilogy: Annihilation; Authority; Acceptance)
Peter Watts, Echopraxia

The Campbell Memorial can be confusing since it has basically the same name as the John W. Campbell Award for Best New Writer (given at the same time as the Hugos). The two awards should fight a duel to see who keeps the name.

The Campbell Memorial is a juried, SF-only award, giving it a very different feel from the Hugo or Nebula. If you peruse their history page, they've moved in and out of alignment with the Hugos and Nebulas, often slanting more in a literary direction, as with last year's winner, Strange Bodies by Marcel Theroux.

It's a very interesting list this year. They hit the major American SF novels (Gibson, Watts, etc.) but also managed to bring in the novels that were buzzed about in Europe (Ings, Allan, Hutchinson). It's nice to see Andy Weir finally get a nomination, publication date for The Martian be damned. Given the literary slant of this award, is this Station Eleven's to lose?

As a side note, Emily St. John Mandel won the Arthur C. Clarke award last week for Station Eleven.

Hugo Award Nomination Ranges, 2006-2015, Part 3

Today, we’ll start getting into the data for the fiction categories in the Hugo: Best Novel, Best Novella, Best Novelette, Best Short Story. I think these are the categories people care about the most, and it’s interesting how differently the four of them work. Let’s look at Best Novel today and the other categories shortly.

Overall, the Best Novel is the healthiest of the Hugo categories. It gets the most ballots (by far), and is fairly well centralized. While thousands of novels are published a year, these are widely enough read, reviewed, and buzzed about that the Hugo audience is converging on a relatively small number of novels every year. Let’s start by taking a broad look at the data:

Table 5: Year-by-Year Nominating Stats Data for the Hugo Best Novel Category, 2006-2015
[table image]

That table lists the total number of ballots for the Best Novel category, the number of votes the High Nominee received, and the number of votes the Low Nominee (i.e. the novel in fifth place) received. I also calculated the percentages by dividing the High and Low by the total number of ballots. Remember, if a work does not receive at least 5%, it doesn't make the final ballot. That rule has not been invoked in the previous 10 years of the Best Novel category.

A couple notes on the table. The 2007 packet did not include the number of nominating ballots per category, thus the blank spots. The red-flagged 700 indicates that the 2010 Hugo packet didn't give the # of nominating ballots. They did give percentages, and I used math to figure out the number of ballots; they rounded, though, so that number may be off by +/- 5 votes or so. The other red flags under "Low Nom" indicate that authors declined nominations in those years, both times Neil Gaiman, once for Anansi Boys and another time for The Ocean at the End of the Lane. To preserve the integrity of the stats, I went with the book that was originally in fifth place. I didn't mark 2015, but I think we all know that this data is a mess, and we don't even really know the final numbers yet.

Enough technicalities. Let’s look at this visually:

[Chart 5: Best Novel Data]

That’s a soaring number of nominating ballots, while the high and low ranges seem to be languishing a bit. Let’s switch over to percentages:

[Chart 6: Best Novel % Data]

Much flatter. Keep in mind I had to shorten the year range for the % graph, due to the missing 2007 data.

Even though the number of ballots is soaring, the % ranges are staying somewhat steady, although we do see year-to-year perturbation. The top nominees have been hovering between 15%-22.5%. Since 2009, every top nominee has managed at least 100 votes. The bottom nominee has been in that 7.5%-10% range, safely above the 5% minimum. Since 2009, those low nominees all managed at least 50 votes, which seems low (to me; you may disagree). Even in our most robust category, 50 readers liking your book can get you into the Hugo—and they don't even have to like it the most. It could be their 5th favorite book on their ballot.

With the low ranges this low, it doesn't (or wouldn't) take much to place an individual work onto the Hugo ballot, whether by slating or other types of campaigning. Things like number of sales (more readers = more chances to vote) and audience familiarity (readers are more likely to read and vote for a book by an author they already like) could easily push a book onto the ballot over a more nebulous factor like "quality." That's certainly what we've seen in the past, with familiarity being a huge advantage in scoring Hugo nominations.

With our focus this close, we see a lot of year-to-year irregularity. Some years are stronger in the Novel category, others weaker. As an example, James S.A. Corey actually improved his percentage total from 2012 to 2013: Leviathan Wakes grabbed 7.4% (71 votes) for the #5 spot in 2012, and then Caliban's War took 8.1% (90 votes) but only the #8 spot in 2013. That kind of oddity—more Hugo voters, both in sheer numbers and percentage-wise, liked Caliban's War, but only Leviathan Wakes gets a Hugo nom—has always defined the Hugo.

What does this tell us? This is a snapshot of the "healthiest" Hugo category: rising votes, a high nom average of about 20%, a low nom average of around 10%. Is that the best the Hugo can do? Is it enough? Do those ranges justify the weight fandom places on this award? Think about how this will compare to the other fiction categories, which I'll be laying out in the days to come.

Now, a few other pieces of information I was able to dig up. The Worldcons are required to give data packets for the Hugos every year, but different Worldcons choose to include different information. I combed through these to find some more vital pieces of data, including Number of Unique Works (i.e. how many different works were listed on all the ballots, a great measure of how centralized a category is) and Total Number of Votes per category (which lets us calculate how many nominees each ballot listed on average). I was able to find parts of this info for 2006, 2009, 2013, 2014, and 2015.

Table 6: Number of Unique Works and Number of Votes per Ballot for Selected Best Novel Hugo Nominations, 2006-2015

[table image]

I’d draw your attention to the ratio I calculated, which is the Number of Unique Works / Number of Ballots. The higher that number is, the less centralized the award is. Interestingly, the Best Novel category is becoming more centralized the more voters there are, not less centralized. I don’t know if that is the impact of the Puppy slates alone, but it’s interesting to note nonetheless. That might indicate that the more voters we have, the more votes will cluster together. I’m interested to see if the same trend holds up for the other categories.

Lastly, look at the average number of votes per ballot. Your average Best Novel nominator votes for over 3 works. That seems like good participation. I know people have thrown out the idea of restricting the number of nominations per ballot, either to 4 or even 3. I’d encourage people to think about how much of the vote that would suppress, given that some people vote for 5 and some people only vote for 1. Would you lose 5% of the total vote? 10%? I think the Best Novel category could handle that reduction, but I’m not sure other categories can.
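If you want to re-derive those two stats from the Excel file, it's straightforward. Here's a sketch with placeholder numbers standing in for one category's packet data:

```python
# The two derived stats from a year's nominating packet. The values
# here are placeholders standing in for one category's real numbers.
ballots = 1800        # nominating ballots cast in the category
unique_works = 500    # distinct works named across all ballots
total_votes = 5900    # sum of votes received by every work

ratio = unique_works / ballots             # higher = more diffuse category
votes_per_ballot = total_votes / ballots   # avg nominations used (max 5)

print(f"diffusion ratio: {ratio:.2f}")
print(f"average votes per ballot: {votes_per_ballot:.1f}")
```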

Think of these posts—and my upcoming short fiction posts—as primarily informational. I don’t have a ton of strong conclusions to draw for you, but I think it’s valuable to have this data available. Remember, my Part 1 post contains the Excel file with all this information; feel free to run your own analyses and number-crunching. If you see a trend, don’t hesitate to mention it in the comments.
