Hugo Contenders and Popularity, October 2014
Awards season moves ever closer! Right now, the Hugo and Nebula Awards for 2015 are still wide-open: what we have is disorganized sentiment, and that’s going to begin to get organized over the next 2-3 months. Some of the first posts about the 2015 Hugos are beginning to appear, including this excellent post from A Dribble of Ink. As more and more people begin talking about the Hugos and Nebulas, new contenders are going to emerge.
To help give those discussions some perspective, I’m going to launch a new monthly feature on Chaos Horizon: checking on the popularity of the major Hugo contenders. Without further ado, here’s the chart, with all numbers taken from Goodreads as of October 31, 2014:
Table 1: Popularity of the Main Hugo Contenders, October 31, 2014
The chart lists the number of times each book has been rated on Goodreads, along with the overall average score. I think this chart is interesting, as it reveals some of the differences in readership and reception among these texts. While The Mirror Empire was embraced by critics on SFF websites, it’s been outread almost 75 to 1 by Words of Radiance. Sure, the Sanderson has been out for 8 months and the Hurley only 2, but that’s a staggering difference in number of readers. Also look at the scores: 3.81 for Hurley, 4.76 for Sanderson. Have enough people read and liked The Mirror Empire for it to make the slate?
When you see the numbers laid out like this, it’s clear how popular Words of Radiance and The Martian actually are, and how quickly writers like Scalzi or Mitchell are moving copies of their books. Based on the number of ratings alone, Annihilation looks like a strong candidate. I don’t know yet how much raw popularity factors into the Hugo awards, but it has to be a factor, doesn’t it? I think by touching base with the popularity of these books every month, we can get a good idea of how widely they’ve been read, and, in turn, how many voters can vote for them. That is, if you believe Hugo voters only vote for novels they’ve read . . .
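To make that kind of check concrete, here’s a minimal sketch in Python. The ratings counts are placeholders chosen only to match the roughly 75-to-1 ratio quoted above; the 4.76/3.81 scores are the ones from the post.

```python
# Minimal sketch of the monthly popularity check. Ratings counts below are
# placeholders consistent with the ~75:1 ratio quoted in the post, not the
# actual Goodreads numbers.

def compare_popularity(name_a, ratings_a, score_a, name_b, ratings_b, score_b):
    """Print the readership ratio and score gap between two contenders."""
    print(f"{name_a} out-rated {name_b} roughly {ratings_a / ratings_b:.0f} to 1")
    print(f"Scores: {score_a} ({name_a}) vs {score_b} ({name_b})")

compare_popularity("Words of Radiance", 75_000, 4.76,
                   "The Mirror Empire", 1_000, 3.81)
```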
About the Chart: One of the frustrations I have about the publishing industry is how secretive they are with numbers. The movie, music, and television industries are all relatively transparent about their numbers, and publish them regularly. We know within a few hours how well a blockbuster movie did at the box office, for instance.
For books—it’s all a deep, dark secret. The bestseller lists we have, like the NYTimes list, are calculated using obscure and byzantine formulas, and they don’t even release estimates of numbers sold. Bookscan covers a solid portion of physical book sales, but all those numbers are locked behind an extraordinarily expensive paywall. Publisher’s Weekly gives us some Bookscan numbers, but only for top-selling books—which often excludes SFF novels. So we’re left having to estimate popularity by word of mouth, blog traffic, Amazon sales ranks, and the like.
I’ve thought long and hard about what the best measure of popularity might be, and I’ve settled on Goodreads as our current, most reliable measure of a book’s popularity. It’s not perfect by any means, and we don’t know the exact relationship between the number of Goodreads ratings and total sales (I’d estimate ratings at 5%-20% of sales; a back-of-envelope conversion is sketched after the list below). What we do know is the following:
1. Goodreads has tons of users. For an average SFF book, there might be anywhere from 1,000 to 50,000 ratings, and that has to represent a significant % of overall readers. Goodreads tends to have at least 10x the number of ratings that Amazon.com does, for instance.
2. Goodreads doesn’t distinguish between electronic and print versions of the book. If you’ve read the book, you can rate it, no matter the format. To my mind, that makes Goodreads even more reliable than Bookscan.
3. Even if Goodreads is somewhat skewed (the % of Goodreads readers is not 1:1 with the general reading public), it’s likely to always be skewed the same way. This makes for good “apples to apples” comparisons. Or, in other words, Goodreads is equally unfair to everyone.
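Here’s the back-of-envelope conversion promised above, using my own 5%-20% guess; treat the output as a very wide bound, not a real figure.

```python
# Back-of-envelope conversion from Goodreads ratings to sales, using the
# post's rough 5%-20% estimate. Output is a wide bound, not a real figure.

def implied_sales(goodreads_ratings, low=0.05, high=0.20):
    """If ratings are a low..high fraction of sales, bound total sales."""
    return goodreads_ratings / high, goodreads_ratings / low

lo, hi = implied_sales(30_000)  # e.g. The Martian's 30,000+ ratings
print(f"Implied sales: {lo:,.0f} to {hi:,.0f} copies")  # 150,000 to 600,000
```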
A couple things to keep in mind:
1. The pool of Hugo voters is not the same as the general reading public. Hardcore SFF fans may like different things about novels than the more general pool of Goodreads readers.
2. We don’t know how popularity correlates to the Hugo awards. I’ll need to collect data for a few years to begin to see patterns.
3. We don’t know if Goodreads is skewed towards certain authors. Most social media sites skew young, and books that appeal to readers in their teens and twenties may do better on such sites. I’ve got some ideas to figure this out, but it’s going to take time.
4. Goodreads only works for comparisons within the same year. More people join Goodreads all the time. Also, I can’t go back in time to measure how popularity worked in previous years; as soon as novels get Hugo or Nebula nominations/wins, that greatly increases their popularity, and we can’t know whether those novels sold well before or after they were nominated.
So, what do you think of this measure of popularity? Can it help us understand the Hugos or Nebulas better?
Larry Correia’s Monster Hunter Nemesis Review Round-Up
Now for some controversy: Larry Correia dominated much of the 2014 Hugo discussion. If you live under a big rock and are unfamiliar with the story, Correia championed a slate of potential Hugo nominees, his novel Warbound included. Correia explains himself here, but tl;dr: Correia has advanced the argument that Hugo awards are too “liberal,” and that these texts would offer a “right wing” alternative. Much of the controversy came not from Correia’s nomination for Warbound—while Correia isn’t the most natural Hugo candidate, he does have a substantial fanbase—but how other, more fringe SFF authors from Correia’s campaign were pushed into the Hugo short fiction categories. If you want to know more about this, google “2014 Hugo Controversy.”
Chaos Horizon is an analytics, not an opinion, website, and my interest lies in predicting what will happen in the 2015 Hugos and Nebulas, not what “should” happen. If the 2014 Hugos proved anything, it was that Hugo campaigns, whether for Robert Jordan’s Wheel of Time or Larry Correia’s “Sad Puppy” authors, have the ability to change the Hugo slate. Keep in mind, though, that not much evidence suggests that these campaigns change who wins the awards. This reflects the way the awards are set up: the process is designed to pick the “Best Novel” winner, not necessarily the slate of the five best (or most deserving, however you’d want to define that) candidates. It doesn’t take all that much to get nominated, and even a minor campaign can easily shift results. We’re talking very small numbers here: Mira Grant made the 2014 Hugo slate with only 98 votes.
I don’t want to spend too much time musing on the impact of campaigns, but they are certainly an element of the modern Hugo Award that cannot be ignored. Many authors put up web posts saying “I have these books, they’re eligible, vote for them,” and other authors suggest books or stories for readers to vote for. All of that is well within the rules. The larger the author’s web presence, the more these posts impact voting. How exactly that should be factored into my Hugo and Nebula predictions is a question Chaos Horizon is continuing to struggle with. For the time being, I’m going to include writers like Correia in my predictions because I think that most accurately reflects the current Hugo voting situation.
In terms of the 2015 Hugo slate, Correia placed 6th in 2013 and 3rd in 2014 (behind Leckie and Gaiman; Gaiman turned down the nomination), and, when coupled with what we know about the Hugo Awards and Repeat Nominees, that makes him a likely candidate for 2015. Correia’s book this year is called Monster Hunter Nemesis, from his popular Monster Hunter International series. Correia delivers what he promises: big monsters, big guns, and big action, in what might be described as an adventure-pulp throwback. People who have liked Correia’s previous work are going to like Monster Hunter Nemesis, and the same pool of voters that placed him on the Hugo slate in 2014 could easily place him on the slate in 2015. Correia has a large web footprint and an enthusiastic fanbase—Monster Hunter Nemesis boasts 250 ratings and a 4.8 average on Amazon, both of which are high—and the Hugos are, after all, a popularity contest. You don’t need to be popular with the entire voting base, just strongly popular with around 10-15% of that base. Correia’s fans seem very happy with the work he is producing, and will likely continue to support him.
Does Monster Hunter Nemesis fit the mold of past Hugo nominees? Not particularly, and for a couple of clear reasons. First, genre: Correia writes urban fantasy, a type of writing generally ignored by the Hugos. Jim Butcher, of Dresden Files fame, has never received a Hugo award (although he has one nomination for Best Graphic Story), and he is certainly an order of magnitude more popular than Correia. No Hugo novels jump out as being “urban fantasy” unless you want to count American Gods. Urban fantasy has done well in the novella category—Charles Stross has several wins for novellas from his urban fantasy series The Laundry Files—but the short fiction categories operate very differently than the Best Novel category. We should note that other traditionally slighted genres have been popping up in the Hugos more recently; Mira Grant is a prime example, with her zombie-themed Newsflesh series. Horror/zombie books have not generally made the slate, and something like Feed is as out-of-place in the Hugos as Warbound. Nothing stops the Hugos from evolving over time, and perhaps the success of authors like Correia and Grant indicates a loosening of genre borders.
Second, Monster Hunter Nemesis is the fifth in a series, and, almost universally, later novels in a series don’t get nominated unless the first book was also nominated. It’s difficult for new readers to jump into the middle of a series; if you haven’t read Correia, it would seem the best starting place is Monster Hunter International. This cuts down the pool of voters, and Correia’s fanbase will have to be very enthusiastic to counter this. They did so for Warbound (#3 in a series), so it could definitely happen here.
On to reactions to the book:
About the Book:
Larry Correia’s Website/Blog
Amazon page
Goodreads page
Publisher’s page (Baen)
Mainstream Reviews:
None? For each of these Review Round-Ups, I check the same places: Publisher’s Weekly, Kirkus Reviews, NPR, NYTimes, the Guardian, and Entertainment Weekly. These are some of the most popular and widely distributed reviewing venues, and they give us a good idea of whether the book is reaching beyond the core SFF audience. The fact that Correia received no discernible support from these outlets certainly says something. The lack of reviews in Publisher’s Weekly and Kirkus is surprising, as they do short capsule reviews of tons of texts. For most authors, this lack of mainstream coverage would hurt them; for an author like Correia, it reinforces his outsider or maverick status.
WordPress Blog Reviewers:
AdVerb Creative
Koeur’s Book Review
Bookstoge’s Reviews on the Road (4.5 out of 5)
Attack of the Books!
Alternative Worlds II
Not the biggest group of reviews, but all are fairly positive. It’s interesting that Monster Hunter Nemesis doesn’t show up more strongly in these venues. Goodreads has 1700+ ratings for Monster Hunter Nemesis, which does indicate it’s selling copies. People just don’t seem to blog about Correia’s books with the same intensity as they do other texts.
So, will Correia score another Hugo nomination in 2015?
Jeff VanderMeer’s Annihilation Review Round-Up
Jeff VanderMeer has been a major voice in experimental fantasy for well over a decade, and in 2014 he stepped into the mainstream with his Southern Reach trilogy: Annihilation, Authority, and Acceptance. Annihilation is the first, shortest, and most accessible of the three, and it looks to be a major Nebula (and perhaps Hugo) contender in 2015.
Annihilation tells the story of an expedition to the mysterious Area X, an anomalous bit of the American coastline where . . . something . . . has happened. VanderMeer spins this mystery into an unfolding conspiracy, as characters and readers try to figure out what exactly is going on. VanderMeer’s novel is both tense and scary as it spirals into a zone of paranoia and madness: is anything real? Is anything knowable? Is everyone insane? It’s best not to know too much about the narrative before you start reading; just trust VanderMeer to take you on a bizarre but enjoyable ride.
I found Annihilation the most effective part of this trilogy. VanderMeer is at his best when he’s weaving this kind of paranoid, delusional story, as he proved so well in his Ambergris books. Once he starts to give us answers in Authority and Acceptance, Southern Reach drags a little. Still, Annihilation is a heaping slice of weirdness, and as strange, creepy, and effective as anything VanderMeer has written. While VanderMeer has been somewhat inaccessible in the past, Annihilation is brief and direct, and has opened up his writing to a whole new audience.
VanderMeer has one prior Nebula nomination for Best Novel, for Finch, and three Best Related Work Hugo nominations. VanderMeer comes across as a very “writerly” writer, with a careful control of his craft; I figure this is exactly what the Nebula voters love, and that they’ll strongly support his book. Throw in the fact that VanderMeer has had a long and distinguished career (cool experimental novels, anthologies, books about writing) and he seems a tailor-made Nebula candidate. I currently have him very high up on my 2015 Nebula Prediction.
The Hugo is a little dicier, given the experimental nature of VanderMeer’s work. Still, Annihilation sold well, and there’s a chance that VanderMeer sweeps both the Nebula and Hugo. This has happened quite a bit in recent years: an author builds momentum through the entire season, riding a Nebula nomination to a Hugo nomination to a Nebula win to a Hugo win. Since the award season is staggered in that fashion, a cascading effect is definitely possible. This benefited Walton in 2012 and Leckie in 2014.
I think it’s up in the air how exactly voters will handle this trilogy: nominate Annihilation? Nominate the whole of The Southern Reach? A lot of this depends on how the Nebula and Hugo awards put together the slate. We’ll find out when the slate is revealed.
Book released February 4, 2014.
About the Book:
Jeff VanderMeer’s Web Page
Amazon.com page
Goodreads page
Publisher’s page (FSG)
Mainstream Reviews:
Publisher’s Weekly (Starred review)
Kirkus Reviews (Starred review)
NPR
The Guardian
NY Times
LA Times
Entertainment Weekly (B+)
That’s an impressive amount of coverage: when Annihilation came out in February, it got reviewed everywhere. That exposure is really going to help VanderMeer come awards season.
WordPress Blog Reviews:
Books, Brains and Beer
BiblioSanctum (4.5 out of 5)
Raging Biblio-Holism (5 out of 5)
Intellectus Speculativus
For Winter Nights
Science Fiction & Fantasy Book Reviews (7.5 out of 10)
Bookmunch
The Little Red Reviewer
Book Reviews Forevermore (4.5 out of 5)
Lynn’s Book Blog
Doomsdayer
Since the book has been out since early in the year, there are plenty of reviews to choose from. I took a representative slice from WordPress, and reviews are very positive across the board. Of all the books I’ve looked at in 2014, I think this had the best reception on WordPress. Whether that carries over to the Nebulas is another matter, but, at the very least, readers were highly intrigued by the start of VanderMeer’s trilogy.
Will Andy Weir’s The Martian be eligible in 2015?
One of the great unanswered questions going into the 2015 Hugo and Nebula season concerns the eligibility of Andy Weir’s The Martian. Weir’s book was a runaway success in 2014, selling tons of copies by tapping into the same vein that made the film Gravity such a hit. If you stroll over to Goodreads, you’ll see that The Martian has 30,000+ ratings and a 4.33 score. In comparison, last year’s Hugo and Nebula winner, Ancillary Justice, has under 10,000 ratings and a 3.96 score. While The Martian wasn’t a huge hit amongst SF critics, it was staggeringly popular with the general public. If The Martian is eligible for this year’s awards, it’d likely be a major contender on that popularity alone.
But . . . there are lingering eligibility issues. Long story short: Weir self-published the novel on Amazon in 2012. The novel did well, was picked up by a mainstream press (Crown Publishing), and was republished in February 2014. Any changes to the narrative seem to be minor. If you take the 2012 date as the date of first publication, Weir is not eligible for the 2015 Hugo or Nebula. If you take the 2014 date, he would be.
I have no idea how this will be resolved. I want to use this post as a repository for information; if anyone has any good sources on Weir’s eligibility, I’d love to link them here. Here’s what I have so far:
Weir’s own take on his Hugo eligibility from a Goodreads Q+A session:
I don’t know for sure. My interpretation of the Hugo rules is that it’s not eligible. The Awards are year-by-year. Although the print version of The Martian came out in 2014, I posted it to my website as a serial starting in 2012. The Hugos don’t discriminate between print publication and self-publication. Therefore, to them, I think The Martian is a work from 2012. So it’s not within the time period to be eligible.
While I don’t think serializing on your website would count as “publication” (how is that different than serializing a novel in a magazine?), the Hugo clock likely began when Weir self-published the novel through Amazon, as per this publication timeline, taken from the Wall Street Journal:
He’d been rebuffed by literary agents in the past, so he decided to put the novel on his website free of charge rather than to try to get it published. A few fans asked him to sell the story on Amazon so that they could download it to e-readers. Mr. Weir had been giving his work away, but he began charging a modest amount because Amazon set the minimum price at 99 cents. He published the novel as a serial on the site in September 2012. It rose to the top of Amazon’s list of best-selling science-fiction titles. He sold 35,000 copies in three months. Agents and publishers and movie studios started circling.
Now, compare that info to the official paragraph on eligibility, taken from the Constitution of the World Science Fiction Society:
Section 3.4: Extended Eligibility. In the event that a potential Hugo Award nominee receives extremely limited distribution in the year of its first publication or presentation, its eligibility may be extended for an additional year by a three fourths (3/4) vote of the intervening Business Meeting of WSFS.
I can’t imagine that 35,000 copies meets the “limited distribution” requirement. Aside from that, a one-year extension wouldn’t help The Martian because of the 2012 publication date.
I even asked about Weir’s eligibility over at the official Hugo website. They didn’t give me a definite answer:
Will Andy Weir’s book The Martian be eligible for the Hugo Award in 2015? It was originally indie-published, but then published by a commercial press in 2014. The rules seem unclear about this.
Kevin says:
August 28, 2014 at 21:25
You’ll need to address your question directly to the 2015 Hugo Administrator (Select “Hugo Administrator” from the Committee List) to get a definite answer to this; however, the Hugo Award rules are pretty clear about the fact that first publication is what starts a work’s “clock.” The fact that a work is self-published, published by a small press, or by a large press isn’t relevant. Publication date is publication date, regardless of who publishes it.
That was as far as I pushed it; I didn’t think it was my place to “officially” ask the 2015 Hugo Administrator.
Based on the evidence we have so far, I’d come down on the side of Weir not being eligible for the 2015 Hugo or Nebula. I doubt that either award will issue an official statement; they’ll just let the process play out, and if he gets nominated, declare him ineligible at that time. As a result, I’ll be crossing Weir off of my Hugo and Nebula predictions.
Is this fair? I don’t know. Since The Martian came out in 2012, it’s had a long time to build up momentum, which might give it an unfair advantage over books released this year. Don’t feel sorry for Weir: he sold a bunch of copies, The Martian is being made into a movie starring Matt Damon, and he’s now a major player in the SF landscape. He’ll survive without a Hugo or Nebula.
The Hugo and Nebula Awards and Repeat Nominees, Conclusions and Discussion
Over the past two weeks, Chaos Horizon has been looking into the idea of Repeat Nominees and the Hugo and Nebula Awards for Best Novel, 2001-2014. Remember, Chaos Horizon is a website dedicated to providing predictions for the Hugo and Nebula Best Novel awards, and I want these predictions to be based on more than my opinions about what the “best” books of the year are. If you want those kinds of opinions, the internet is crawling with them.
Instead, Chaos Horizon takes the position that we can better understand the Hugo and Nebula awards by data-mining past awards to find patterns concerning nominations and winners. While this won’t allow us to know with 100% certainty how future awards will go—that’s not how statistics work—this will allow us to make informed guesses as to what the nominees and winners will be in the future.
The basic hypothesis I’m working with is that there are 7 or 8 determining factors that shape these awards. Roughly speaking, these are: past awards history, critical reception, reader reception, popularity/sales, marketing and web footprint, genre, demographic concerns, and reputation. Some of these are incredibly hard to quantify (reputation, for instance); others are slippery (genre); others are changing rapidly (demographics); and others are mine-fields of conflicting opinions (critical and reader response). Nonetheless—and perhaps foolishly—I believe we can wade into these factors and make some sense of them.
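Just to illustrate the shape of that hypothesis, here’s one way it could eventually be operationalized. The factor names paraphrase the list above; every weight and score is a made-up placeholder, since no formula has been specified (or discovered) yet.

```python
# Purely illustrative: a weighted-factor scoring model. Every weight and
# score here is an assumption; calibrating them against past slates is
# the long-term data-mining project described in the post.

FACTORS = ["awards_history", "critical_reception", "reader_reception",
           "popularity_sales", "marketing_web_footprint", "genre",
           "demographics", "reputation"]

# Equal weights to start; a real model would fit these to past results.
WEIGHTS = {f: 1 / len(FACTORS) for f in FACTORS}

def contender_score(scores):
    """Weighted sum of per-factor scores, each in [0, 1]."""
    return sum(WEIGHTS[f] * scores.get(f, 0.0) for f in FACTORS)

# Hypothetical contender: very popular, well-liked by readers, no history.
print(contender_score({"popularity_sales": 0.9, "reader_reception": 0.8}))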
So, in regards to “repeat nominations”—one aspect of awards history—what have we learned in Parts 1 to 6, and how can this information be applied? For those who didn’t read Parts 1 to 6 (and they got pretty technical!), here’s what I think we can conclude:
Conclusion #1: The Hugo and Nebula Best novel slates are substantially biased towards authors who have previously received a Best Novel nomination, to the tune of 65% for the Hugo and 50% for the Nebula.
Application #1: When I make a prediction for the Hugo slate, my prediction should be 2/3 previously nominated authors and 1/3 rookie authors. For the Nebula, I should go 1/2 previously nominated authors, 1/2 rookie authors. (A sketch after this list makes this concrete.)
Conclusion #2: The Hugo Award slate favors super-repeaters, authors who get nominated for the Best Novel award over and over again. In 2001-2014, the top 7 Hugo authors accounted for 45% of the total ballot. The Nebula award does not show the same bias towards super-repeaters.
Application #2: When putting together a prediction for the Hugo slate, I need to pay special attention to the authors who have previously received more than 4 nominations.
Conclusion #3: Winning the Nebula Award is biased towards past winners and repeat nominees, with 64% of the winners having previously appeared on the ballot and 43% having won before. Proportionally, the Hugo did not show the same bias towards past winners and repeat nominees.
Application #3: When predicting the Nebula winner, I need to strongly factor in past Nebula nominations and wins.
Conclusion #4: There is no strong evidence to suggest that Hugo or Nebula Best Novel nominees need to have been nominated in other Hugo and Nebula categories before snagging a Best Novel nomination or win.
Application #4: I need to be careful about predicting authors “jumping” from the Short Story, Novelette, or Novella categories to the top of the slate. While it happens, it’s not the advantage you would think. Or, in other words, I need to keep an open mind towards authors completely new to the Hugo and Nebula process.
Conclusion #5: Despite the Hugo and Nebula favoring “repeat nominees,” even the repeaters don’t get every novel nominated. Most repeaters only manage a 25%-50% nomination rate, no matter how popular.
Application #5: I can’t blindly put popular authors onto my watchlist, but I need to analyze how each specific novel was received, including factors such as genre and critical/reader response.
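Here’s the sketch promised in Application #1: a toy slate-builder that applies the 2/3-repeaters rule for the Hugo and the 50/50 split for the Nebula. The candidate names are hypothetical, assumed to be ranked best-first by whatever model produced them.

```python
# Toy slate-builder applying the repeat/rookie priors from Application #1.
# Candidate lists are hypothetical placeholders.

def predict_slate(repeaters, rookies, slots=5, repeat_share=2 / 3):
    """Fill ~repeat_share of the slate with repeaters, the rest with rookies."""
    n_repeat = round(slots * repeat_share)  # 3 of the 5 Hugo slots
    return repeaters[:n_repeat] + rookies[:slots - n_repeat]

hugo_guess = predict_slate(["Repeater A", "Repeater B", "Repeater C", "Repeater D"],
                           ["Rookie A", "Rookie B", "Rookie C"])
nebula_guess = predict_slate(["Repeater A", "Repeater B", "Repeater C"],
                             ["Rookie A", "Rookie B", "Rookie C"],
                             slots=6, repeat_share=0.5)  # Nebula: 50/50, 6 slots
```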
That’s a fairly fruitful study, yielding some specific applications that can help improve my watchlists and predictions. As I continue to do these reports, hopefully Chaos Horizon can become more and more useful as a resource to the SFF community.
What this information doesn’t tell us is whether this “bias” is good or bad. Maybe you believe that there are only 20-25 exceptional writers at work in SFF today, and that the centralization of the awards reflects the excellence at the top of the field. Or maybe you believe there are hundreds of interesting SFF writers, and that some are unfairly excluded from the awards because of this centralization towards past winners and nominees. Neither Chaos Horizon nor any other kind of statistical analysis can answer those questions for you.
Here’s my Excel worksheets with the data I used. Let me know if you have any questions about methodologies or how I came up with my numbers.
So, finally, what do you think of the results? Did you expect the Hugos and Nebulas to work in this way? Was some of the information surprising? Is there additional information we need about repeat nominees and these awards? How will this factor in predicting the Hugos and Nebulas? Has this hurt or helped your perception of the chances of some of the 2014 candidates?
Thanks for reading, and stay tuned for the next Chaos Horizon report, where I’m going to tackle the question of genre and the Hugo and Nebula awards! How biased are these awards towards science fiction? How much do they hate fantasy? We’ll find out soon . . .
The Hugo and Nebula Awards and Repeat Nominees, Part 6
Almost done with this report, I swear. In the comments, it was suggested that it would be a good idea to look at nomination %. This’ll give a great piece of context for the “repeat nominees” charts: obviously someone who publishes a novel every year can get more nominations than someone who publishes a novel every 5 years. However, the author who publishes a novel every 5 years might have a higher nomination percentage than a more prolific author. I limited this study to the repeat nominees, those authors who had multiple nominations in the 2001-2014 time period.
So how does this shake out for the Hugo and Nebula Best Novel Awards, 2001-2014? The Hugo chart first:
Table 5: Nomination % for Repeat Hugo Nominees, Hugo Best Novel 2001-2014
***Gaiman turned down two nominations. I’ve included those in his percentage, because the voters did vote Gaiman into the slate.
What this chart shows is the author’s Nomination % (number of novels nominated divided by number of novels published) in the 2001-2014 period. I did add one caveat: I only counted novels published after the author’s first nomination. I figured that this first Best Novel nomination brought the author into the spotlight, and that their later novels received more attention and had a much better chance of being nominated. When an author had their initial nomination before the year 2000, I counted all the novels they published between 2001 and 2014. I pulled all this information off of sfadb.com and isfdb.org.
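In code, the calculation looks something like this; the years are placeholders, and the real inputs come from sfadb.com and isfdb.org.

```python
# Sketch of the nomination-% calculation described above. Years below are
# placeholders, not real data.

def nomination_pct(novel_years, nom_years, first_nom_year):
    """Nominated novels / novels published, counting only novels from the
    author's first Best Novel nomination onward (floored at 2001)."""
    start = max(2001, first_nom_year)
    eligible = [y for y in novel_years if start <= y <= 2014]
    noms = [y for y in nom_years if start <= y <= 2014]
    return len(noms) / len(eligible) if eligible else 0.0

# Hypothetical prolific author: 10 novels from 2004 on, 3 nominated -> 30%.
print(nomination_pct(range(2004, 2014), [2004, 2006, 2010], 2004))
```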
What does this chart show us? That individual authors have greatly different publishing habits, from the incredibly prolific Stross to the rather unprolific Willis. There is some good information here: when Willis or Martin publish another novel, they’re very likely to be nominated again. Kim Stanley Robinson, on the other hand, doesn’t stand quite as good a chance. This kind of information is very relevant when putting together a Hugo Watchlist.
It would be possible to get even deeper into this chart. Mieville, for instance, published two YA novels that are counted in the chart; YA novels don’t do that well in the Hugos. Likewise, a bunch of Stross’s novels are urban fantasy, and urban fantasy books don’t do as well as science fiction in the Hugos. Any credible watchlist or prediction has to take all those complexities into consideration.
Table 6: Nomination % for Repeat Nebula Nominees, Nebula Best Novel 2001-2014
A similarly interesting chart. The Nebula repeaters have a slightly worse nomination %, which is in line with the Nebula not being as friendly towards repeat nominations. Once again, we can use this chart to improve any Nebula Watchlists that Chaos Horizon puts together.
The Hugo and Nebula Awards and Repeat Nominees, Part 5
This report on Repeat Nominees has gotten more complicated with each post: there’s a lot of information to sift through here, and a number of different ways to slice the statistical pie. In Parts 5 and 6, I’m going to provide some additional detail that commenters asked for. If you’d like to know anything more—and provided I can come up with a decent way of finding/presenting the data—just ask.
In Part 1, we identified a certain number of “rookies,” nominees for the Hugo or Nebula Best Novel Award (2001-2014) who had not previously been nominated for the award. If you recall, there were 25 “rookies” for the Hugo and 44 for the Nebula, or roughly 35% for the Hugo and 50% for the Nebula. In the comments, Niall asked whether or not these rookies had prior success on other parts of the Hugo and Nebula ballot, thus making them familiar to voters.
This is a very solid hypothesis, one we could call the “moving on up” idea: writers would first get nominated for Best Short Story (or Novelette, or Novella), and then would eventually “graduate” to the Best Novel slate. The data shows that this isn’t necessarily the case: you don’t need to have previously been nominated in any Hugo or Nebula category to make the slate. Here are the charts:
The numbers are relatively straightforward: of the 25 Hugo rookies, only 6.5 had received downballot Hugo nominations (I counted Brandon Sanderson as the .5; he shared his nomination with Robert Jordan for Wheel of Time). For the 44 Nebula Best Novel rookies, 15 had received a prior Nebula nomination in another category. I was generous in my counting, including such categories as “Best Related Work.” If this were limited to only fiction categories, that would cut the numbers down by a little more.
So, what do we learn? That being downballot on the awards doesn’t necessarily help you get up into the Best Novel category. Hugo and Nebula voters don’t necessarily insist that you’ve had success in the Short Story, Novelette, or Novella categories before you make it to the top of the ballot. Lots of pure rookies make the ballot—good news if you’re a rookie, but maybe disappointing if you’ve got some wins in the other categories.
This is a case where the raw statistics might be a little misleading. While they show that 75% of the Hugo rookies had never been nominated before, we must acknowledge that never-nominated writers form a much larger pool than those who have previously been nominated for Hugos. So, if we estimate the pool of writers with no Hugo nominations at around 500 (I just made that number up, but it’s probably in the ballpark of the novels that could be considered “credible” Hugo contenders), and the pool of writers who have been nominated for downballot Hugos at around 100, you can see that there is a statistical advantage to being on other parts of the Hugo ballot. I’m still surprised; I was expecting a greater advantage. I thought the awards would be more hospitable to downballot success, as appearing on other parts of the ballot would make you more familiar to Hugo and Nebula voters. While it helps, it doesn’t seem to help that much.
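Here’s that back-of-envelope argument worked through in full, using my admittedly invented pool sizes:

```python
# The pool-size argument above, worked through. The 500 and 100 pool
# sizes are the post's own invented estimates.

rookies_with_downballot = 6.5    # of the 25 Hugo rookies, per the chart
pure_rookies = 25 - rookies_with_downballot   # 18.5 with no Hugo history

rate_no_history = pure_rookies / 500             # ~3.7% of that pool made it
rate_downballot = rookies_with_downballot / 100  # ~6.5% of that pool made it

print(rate_downballot / rate_no_history)  # ~1.8x: a real edge, but not huge
```

So, on these rough numbers, downballot experience roughly doubles an author’s per-capita odds of making the Best Novel slate—real, but smaller than I expected.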
Let’s think about a couple of examples: Elizabeth Bear has had good success on other parts of the Hugo ballot, with 4 nominations and 4 wins, 2 for Best Fancast, one for Best Novelette, and one for Best Short Story. She hasn’t had any luck, however, at cracking the top of the ballot. Ken Liu is going to be a great author to keep your eye on for 2016 awards. His debut novel, The Grace of Kings, is due out in April 2015. Liu has been enormously successful in the other Hugo and Nebula fiction categories: 3 Hugo noms, 2 wins, and 6 Nebula noms, 1 win. He’d be a prime example of someone you would expect to “graduate” to the Best Novel slates: but will he? Before this study, I might have Liu down as a “good” bet. Now, I’m not so sure.
So, in conclusion: while being on other parts of the Hugo and Nebula ballot helps, it’s not an enormous help, and we’ll have to be cautious predicting Best Novel nominations based solely on short fiction nominations or wins.
The Hugo and Nebula Awards and Repeat Nominees, Part 4
We trundle along—now we’re up to looking at the way “repeaters” impact winning the Nebula Award. Parts 1 and 2 discussed how the Nebula ballot is centralized: not as much as the Hugo, but still in a substantial way. Roughly speaking, 50% of the Nebula nominees have already received a Nebula nomination for Best Novel. Do these “repeaters” stand a better chance of winning?
In the case of the Nebula, the answer is a resounding yes. In the 2001-2014 period, 6 winners had already won the Nebula Award for Best Novel: Bear, Bujold, Haldeman, Le Guin, Willis, and Robinson. That’s a robust 43%; in contrast, the Hugo had 27% repeat winners from 2001-2014.
An additional 3 winners had previously been on the Best Novel ballot: Asaro, McDevitt, and Walton. So, all told, 9 out of 14 (64%) of the winners had previously appeared on the Nebula Best Novel ballot. When we consider the 50%/50% split of the slate, that means there is a substantial bias towards past winners and nominees.
2 more winners, Gaiman and Bacigalupi, had previously appeared on other parts of the Nebula ballot. There were only 3 “rookies” who won in their first time appearing on the ballot: Moon, Chabon, and Leckie. Let’s look at this visually:
Compared to the Hugo chart from Part 3, this shows more centralization towards repeaters. That’s an interesting reversal: it’s easier to get on the Nebula slate as a rookie than the Hugo, but it’s easier to win the Hugo as a rookie than the Nebula. Perhaps this reflects the different make-up of the voting groups: the SFWA is made up of authors, who are probably more likely to vote for “one of their own.” Remember, we’re talking about statistical bias here, not absolute numbers. Even if only 5% of the voting pool is swayed by familiarity, that can have a substantial impact on winning.
One thing you see in the Nebula that you don’t see as often in the Hugo is the “lifetime achievement win.” Let’s isolate Ursula K. Le Guin’s 2009 win for Powers. Le Guin’s credentials are untouchable; she’s likely one of the 10 most influential SFF writers of all time, and has made essential contributions to both Science Fiction (the Hainish cycle) and Fantasy (Earthsea). Before Powers, Le Guin had won an impressive 5 Hugos and 5 Nebulas. All of that said, Powers doesn’t make much sense as a Nebula winner. It’s the third volume of a Young Adult series; historically, sequels and Young Adult books haven’t done very well in the Nebulas. I don’t think many readers would identify Powers as a “central” or “essential” Le Guin novel; if I were recommending Le Guin books to read, this might end up near the bottom. If, for some god-forsaken reason, you haven’t read any Le Guin, start with The Left Hand of Darkness. So why did Powers win?
We can’t know for sure, but I suspect this was a way of honoring Le Guin’s whole career. Even though she had already won 3 Best Novel Nebulas, the SFWA voters figured she needed another. A number of literary awards work this way. The Pulitzer Prize has done this quite a bit, periodically giving an author an award at the end of their career as an acknowledgement of all the great writing they’ve done. William Faulkner—my favorite author, for the record—has two Pulitzer Prizes for lesser novels, A Fable and The Reivers, while his best novels went unrecognized.
Although the SFWA gives a Grandmaster award, which Le Guin won in 2003, the Powers Nebula might be just another way of honoring a long and distinguished career. I think Haldeman’s win for The Accidental Time Machine also falls into this “lifetime achievement award” category. While Bear, Robinson, and Willis all won for stronger novels, I think there was also a little “sentimental” bump to their wins. Even a small bump can greatly change the outcome.
Part of these Chaos Horizon reports is simply gathering information for future predictions. While I don’t think the Nebula always goes to an author for their career, it does happen sometimes. I wouldn’t be stunned to see this happen again in the next 5 or 10 years. Who would be a likely sentimental lifetime winner? George R. R. Martin? No matter the actual quality of A Dream of Spring, I think it’ll stand a good chance of winning. Could VanderMeer get a “career” boost this year for Annihilation?
That speculation aside, let’s look at whether getting lots of nominations correlates to wins:
Table 4: Number of Wins for Best Novel Nebula Award for Repeat Nominees, 2001-2014
Not much to be learned here. There are several writers—Jemisin, Hopkinson, Mieville—who got 3 nominations but haven’t managed a win yet. Writers with only 2 nominations actually did better. Thus, you can’t necessarily correlate the number of nominations to the number of wins. McDevitt is the true testament to this: 9 nominations, 1 win. That works out to an 11.1% win percentage. The baseline chance of winning a Nebula, assuming no bias—i.e. just drawing one of the six names from a hat—is 16.7%, so McDevitt actually did worse than chance.
So, to sum up: the Nebula slate is less centralized than the Hugo slate, with more of a tendency to nominate a wider array of authors. Still, that doesn’t translate to those “rookie” authors having a solid chance of winning. Instead, the Nebula award more often goes to the “repeaters,” particularly past winners. To think about that numerically, the Nebula slate is roughly 50%/50% prior nominees/new nominees, but the breakdown for winning the Nebula is roughly 64%/36%. In contrast, the Hugo was roughly 67%/33% for nominees, and 67%/33% for winners. One of those is proportional (the Hugo), the other is not. The plot thickens!
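One quick way to see that (dis)proportion is to divide each group’s share of wins by its share of slate slots, using the rounded percentages above:

```python
# Win share / slate share. A ratio near 1.0 means winning is proportional
# to getting nominated; well above 1.0 means extra bias toward repeaters.
# Percentages are the rounded figures from the post.

def bias_ratio(win_share, slate_share):
    return win_share / slate_share

print(f"Nebula repeaters: {bias_ratio(0.64, 0.50):.2f}")  # ~1.28: biased
print(f"Hugo repeaters:   {bias_ratio(0.67, 0.65):.2f}")  # ~1.03: proportional
```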
The Hugo and Nebula Awards and Repeat Nominees, Part 3
Today, we’ll be continuing our look at repeat nominees and the Hugo and Nebula Award for Best Novel, 2001-2014. Part 1 looked at how often the Hugo and Nebula voters nominate writers who have already received Best Novel nominations, to the tune of 65% for the Hugo and 50% for the Nebula. In Part 2, we looked at how “repeat nominees” dominate the slates, particularly for the Hugo. The 7 most popular Hugo writers received 45% of all the possible nominations between 2001-2014. The Nebula was far more evenly distributed, with the most popular writers taking home 24% of the slots, and that number was greatly inflated by Jack McDevitt’s 9 Best Novel nominations.
So far, I’ve only looked at nominations, and not chances of winning. Do these “repeaters” dominate the Hugo and Nebula wins? Or do rookies and one-time nominees stand a chance?
Hugos: Between 2001-2014, there were 15 Hugo winners (Bacigalupi and Mieville tied in 2010). At the time of their win, 4 of those authors had previously won the Hugo for Best Novel: Willis, Gaiman, Vinge, and Bujold. 5 winners had previously been nominated for Best Novel: Scalzi, Mieville, Wilson, Sawyer, and Rowling. So, all told, 60% of the Best Novel Hugo winners had prior history in the Best Novel category.
4 of the winners were pure Hugo rookies at the time of their win: Leckie, Walton, Chabon, and Clarke. The other 2 winners—Bacigalupi and Gaiman (for his initial American Gods win; by the time he wins for The Graveyard Book, he’s a “repeater”)—had success “downballot.” Bacigalupi had several prior nominations for his stories, and Gaiman had a Best Related Book nomination for one of the Sandman volumes. To look at that visually:
All in all, that seems a fairly reasonable distribution: roughly two-thirds of the total winners have Hugo history, and a third are rookies. If we correlate that to the stats from Part 1 of the report (65% of the ballot is repeaters), we’ll see that there isn’t much of a bias once you make it into the slate. While it’s harder to get into the slate as a rookie, a rookie has just as good a chance, proportionally, of winning as a repeater. Hugo voters are fairer at picking a winner than at picking the slate.
How does this correlate, though, to Part 2 of our report, the pool of “repeaters” who got 2 or more Hugo nominations in the 2001-2014 period? On the surface, you’d think that this group would dominate the winner’s list: after all, they grabbed most of the slate spots. Here’s the data:
Table 3: Number of Wins for Best Novel Hugo Award for Repeat Nominees, 2001-2014
8 of the 15 winners were from this list of repeaters; that means 7 of the winners came from people who were only nominated once in the 2001-2014 period. That’s a 53%/47% split, or basically a coin flip. Even though these repeaters dominated the slate, they didn’t dominate the winner’s circle. This may suggest an interesting hypothesis: getting nominated is more biased towards reputation/award history, while winning depends more on the quality of the individual novel. Toss Gaiman out, and these “repeaters” actually have a worse chance of winning—proportionally, of course—than the non-repeaters.
The top of the repeaters list—those 7 authors who dominated 45% of the slate—only managed to win 33% of the Hugos, so dominating the slate doesn’t necessarily lead to winning the Hugo for Best Novel. Stross is perhaps the best example of this: he is the most nominated author between 2001-2014, but has no Best Novel Hugos to show for it. Don’t feel too sad for him, though; he picked up 3 Hugo Best Novella Awards in that time period.
Remember, the pool of winners (15) is a very small sample size, and we shouldn’t put too much stock in specific numbers. Instead, it’s the trends that matter. The main trend for the Hugo seems to be this: past Hugo Best Novel history doesn’t seem to matter much when it comes to actually winning the award. Given that making the slate seems biased towards past Best Novel nominees, this is an interesting result.
I’ll take a look at the winners of the Nebula in the next post.
The Hugo and Nebula Awards and Repeat Nominees, Part 2
In Part 1 of this report, we discussed how the Hugo and Nebula Awards for Best Novel are heavily weighted towards writers who have previously been nominated for those awards, to the tune of 65% for the Hugo and 50% for the Nebula. While those numbers are interesting—and perhaps eye-opening—they don’t tell us how centralized these awards are. Is this a bunch of different writers receiving 2 nominations each, or a few select writers receiving 6, 7, or more nominations?
Today, we’ll look at how frequently the most popular writers were nominated in the 2001-2014 time period for the Hugo and Nebula awards. The methodology here is simple: I took the lists of the Hugo and Nebula Best Novel nominees from 2001-2014 and counted the number of nominations each author received (the counting step is sketched below). Here are the results.
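The counting step is trivial to reproduce; here’s a sketch with placeholder names standing in for the real nominee lists:

```python
# Sketch of the counting step; "Author A/B" are placeholders standing in
# for the real 2001-2014 Best Novel nominee lists.

from collections import Counter

slots = [
    (2001, "Author A"), (2001, "Author B"), (2002, "Author A"),
    # ... one (year, author) entry per slate slot, through 2014 ...
]
counts = Counter(author for _, author in slots)
repeaters = {a: n for a, n in counts.items() if n >= 2}
print(repeaters)  # {'Author A': 2}
```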
Hugo Awards: From 2001-2014, 37 unique authors (counting Jordan/Sanderson for Wheel of Time as one author) were nominated for a total of 72 Hugo Award Best Novel slots. 24 of those authors received only one nomination, and the 13 other authors shared the remaining 48 nominations. Here’s the list of the “repeaters”:
Table 1: Number of Nominations for Best Novel Hugo Award for Repeat Nominees, 2001-2014
This list would have been even more pronounced if Neil Gaiman hadn’t turned down two Hugo nominations, one for Anansi Boys and one for The Ocean at the End of the Lane. Even without those, there is still a very definite centralization in the Hugo Award for Best Novel. The top 7 candidates (Stross, Mieville, Sawyer, Bujold, Grant, Scalzi, and Wilson, all of whom have at least 4 nominations in the past 14 years) racked up an impressive 33 nominations (out of 72 total), for 45.8% of the Hugo award slate. The rest of the SFF publishing world received 39 nominations, for 54.2% of the slate.
Nebula Awards: The Nebula is rather more balanced. In the 2001-2014 period, 61 unique authors were nominated for a total of 87 Best Nebula novel slots. 46 authors received only one nomination each, with the remaining 15 “repeaters” sharing 41 nominations. Here’s the list:
Table 2: Number of Nominations for Best Novel Nebula Award for Repeat Nominees, 2001-2014
With the exception of Jack McDevitt’s world-crushing domination of the Nebula nominations, that’s a pretty evenly distributed list: a fair number of authors getting 2 or 3 nominations, but no one (but McDevitt) getting 4 or 5 nominations. The top 5 “repeater nominees” (McDevitt, Bujold, Hopkinson, Jemisin, and Mieville, each of whom has at least 3 nominations) managed 21 nominations between them, accounting for 24.1% of the total nominations. That number is a little misleading, since McDevitt alone accounted for 10% of the 2001-2014 field. As a side note, I have no idea why McDevitt has done so well in the Nebulas. In any statistical analysis of the Nebulas, his domination distorts the numbers, and certainly makes the Chaos Horizon predictions more difficult. If anyone has insight into his success, please share.
So, in conclusion: the Hugo is heavily centralized around a small number of repeat nominees. The Nebula, with the exception of Jack McDevitt, is spread out over a much greater number of authors, and demonstrates only mild centralization.
In the next part of this report, we’ll look at what impact repeat nominations have on the chances of actually winning the Hugo or Nebula for Best Novel.