Now that the World Fantasy Award has been given, I’ve updated my Awards Meta-List to reflect the Top 20 SFF novels of the year. My list uses 15 different SFF awards to see who dominated the year, within the limits of that methodology. Of course, awards don’t measure quality directly; they give us a certain slant on the SFF market, one which provides an interesting but flawed measure. The rules are simple: if you get nominated for any of these awards, you get a point. Most points wins. No bonus for winning the award, although I’ll note the winners.
Edit 11/25/15: My 15 awards are the Clarke, the British Fantasy, the British SF, the Campbell Memorial Award (not the Campbell for Best New Author), the Locus Fantasy and SF categories (not the Best First Novel), the Compton, the Crawford, the Gemmell, the Hugo, the Kitschies, the Nebula, the Philip K. Dick, the Prometheus, the Tiptree, and the World Fantasy. You’ll notice that I’m currently not tracking the “Best First Novel” award categories or YA categories. You’ve got to draw the line somewhere. The First Novel categories are valuable, but since so many novels aren’t eligible as first novels, I felt including them would distort the results by over-counting the few that are.
In my opinion, this provides a broad overview of the field. 15 different awards mean 15 different sets of rules and voters (some popular and huge, some small, and some by committee). If a book shows up time and time again through all that chaos, those are the consensus books of the year.
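The tally itself is trivial to sketch in code. Here's a minimal illustration of the scoring rule, using a few made-up nomination fragments rather than the full 2015 data:

```python
from collections import Counter

# One point per nomination, no bonus for a win.
# These award lists are illustrative fragments, not the complete data.
nominations = {
    "Hugo": ["The Three-Body Problem", "Ancillary Sword", "The Goblin Emperor"],
    "Nebula": ["The Three-Body Problem", "Ancillary Sword"],
    "Locus SF": ["The Three-Body Problem"],
}

points = Counter()
for award, books in nominations.items():
    for book in books:
        points[book] += 1

# Sort by points, descending; ties share a rank on the final list.
for book, score in points.most_common():
    print(score, book)
```

Any scoring variant (bonus points for wins, weighting juried awards differently) would just change the increment inside the loop.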
So how did 2015 turn out? There wasn’t a single dominant book, as was the case with Ancillary Justice in 2014 (7 nominations, 4 wins, with 2 additional nominations and wins in “First Novel” categories). This year, Cixin Liu did the best with 5 nominations, but he managed only 1 win. I suspect that if The Three-Body Problem came out earlier in the year (it was published in November), it would have done a little better. Leckie won twice for Ancillary Sword, and she was the only author to win two awards. Those wins, depending on how cynical you are, could be chalked up to last year’s success of Ancillary Justice.
Nothing else jumps out as a dominant book. If we can think all the way to 2017, Emmi Itaranta might be someone to keep an eye on. Memory of Water was the debut novel for this Finnish writer, and the 2017 WorldCon is in Helsinki, Finland . . .
Here’s the list. I’m listing everyone who got at least 2 nominations, which is conveniently exactly 20 novels. 64 different novels received at least one nomination. Obviously, there are lots of ties: 1 novel got 5 noms, 3 novels got 4 noms each, 7 novels got 3 noms each, and 9 novels got 2 nominations each.
If I had to describe the 2015 awards season, it would be with the term “divided.” There wasn’t much agreement as to what the major works were; we had lots of competitive novels rather than 2-3 consensus books. It’ll be interesting to see if the 2016 awards play out the same way. Between Uprooted, Seveneves, and Ancillary Mercy, we could wind up with a much more centralized year.
Here’s the final list, and the accompanying Excel file: 2015 Awards Meta-List.
1. The Three-Body Problem, Cixin Liu: 5 nominations, 1 win (Noms: Hugo, Nebula, Campbell, Locus SF, Prometheus; Wins: Hugo)
2. Ancillary Sword, Ann Leckie: 4 nominations, 2 wins (Noms: Hugo, Nebula, BSFA, Locus SF; Wins: BSFA and Locus SF)
2. Annihilation/Area X, Jeff VanderMeer: 4 nominations, 1 win (Noms: Campbell, Nebula, Locus SF, World Fantasy; Win: Nebula)
2. The Goblin Emperor, Katherine Addison: 4 nominations, 1 win (Noms: Hugo, Nebula, Locus Fantasy, World Fantasy; Win: Locus Fantasy)
5. Memory of Water, Emmi Itaranta: 3 nominations, 0 wins (Noms: Clarke, Tiptree, Philip K. Dick)
5. Europe in Autumn, Dave Hutchinson: 3 nominations, 0 wins (Noms: Clarke, BSFA, Campbell)
5. The First Fifteen Lives of Harry August, Claire North: 3 nominations, 1 win (Noms: Clarke, BSFA, Campbell; Win: Campbell)
5. Lagoon, Nnedi Okorafor: 3 nominations, 0 wins (Noms: BSFA, Tiptree, Kitschies)
5. The Peripheral, William Gibson: 3 nominations, 0 wins (Noms: Campbell, Locus SF, Kitschies)
5. The Race, Nina Allan: 3 nominations, 0 wins (Noms: British SF, Campbell, Kitschies)
5. City of Stairs, Robert Jackson Bennett: 3 nominations, 0 wins (Noms: British Fantasy, Locus Fantasy, World Fantasy)
12. Elysium, Jennifer Marie Brissett: 2 nominations, 0 wins (Noms: Dick, Tiptree)
12. Station Eleven, Emily St. John Mandel: 2 nominations, 1 win (Noms: Clarke, Campbell; Win: Clarke)
12. Lock In, John Scalzi: 2 nominations, 0 wins (Noms: Locus SF, Campbell)
12. The Bees, Laline Paull: 2 nominations, 0 wins (Noms: Campbell, Compton)
12. A Darkling Sea, James Cambias: 2 nominations, 0 wins (Noms: Campbell, Compton)
12. My Real Children, Jo Walton: 2 nominations, 1 win (Noms: Tiptree, World Fantasy; Win: Tiptree)
12. Cuckoo Song, Frances Hardinge: 2 nominations, 1 win (Noms: British Fantasy, British SF; Win: British Fantasy)
12. Wolves, Simon Ings: 2 nominations, 0 wins (Noms: British SF, Campbell)
12. The Moon King, Neil Williamson: 2 nominations, 0 wins (Noms: British Fantasy, British SF)
We can close the page on 2015, and get ready for 2016!
At long last, the 2015 awards season is over! Here are the World Fantasy Award winners.
The World Fantasy Award is the final award covering 2014’s novels. They certainly stretch it out long enough! The World Fantasy is probably the most “literary” of the SFF awards, having gone to books like Murakami’s Kafka on the Shore, for instance. This year seems no different, as David Mitchell’s The Bone Clocks is a very literary take on the SFF genre.
Here are the other nominees:
The Goblin Emperor, Katherine Addison (Tor)
City of Stairs, Robert Jackson Bennett (Broadway; Jo Fletcher)
Area X: The Southern Reach Trilogy, Jeff VanderMeer (Farrar, Straus & Giroux)
My Real Children, Jo Walton (Tor; Corsair)
The World Fantasy has a habit of going to a book that hasn’t already won an award. That’s the advantage of being the last-to-move award; you can see what the rest of the field has done and fill in the gaps.
David Mitchell’s hybrid realistic/fantasy/horror novel was highly acclaimed by literary critics last year. While there is some significant fantasy—and even near-future—content in the novel, it’s only briefly touched on in the first 400-500 pages. Until then, it reads as literary fiction with light surreal/horror touches. This might make it hard for some SFF fans to get into, as might the fact that Mitchell has taken to writing his books as a series of linked novellas, changing characters every 75-100 pages (he did this in Cloud Atlas as well).
I thought The Bone Clocks was an exceptional novel, and my second favorite of last year (after The Three-Body Problem). I also really liked Slade House, which just came out and is basically Bone Clocks in miniature. If you haven’t read any Mitchell, I’d suggest checking that book out as a low-risk sampler.
With the World Fantasy Award finally given, I can update and finalize my Award Meta-List. Then we can put a bow on 2014 (the most controversial in awards history?) and move on to 2015!
To the surprise of absolutely no one, Brandon Sanderson won his second David Gemmell Legend Award this past Saturday for his massive novel Words of Radiance. Words of Radiance also picked up the Ravenheart award for best cover art, and Brian Staveley’s The Emperor’s Blades won the Morningstar for best first fantasy debut.
The Gemmell has two rounds of voting, and they reported 17,059 votes in Round #1 and 19,700 votes in the finals. I’m not sure whether that’s the total across all three categories or not. Either way, the number compares favorably to the Hugos, which drew a record 5,950 voters this year.
The Gemmell is an interesting award because it’s so focused (fantasy novels only, just three categories) and because it’s an open internet vote. We could get a sense of what the Hugo might be like if it removed its entry bar (i.e. the $40 entry fee).
Sanderson won because he is incredibly popular. Just look at the number of Goodreads ratings for the finalists as of 8/10/15:
57,770 ratings: Words of Radiance by Brandon Sanderson
14,524 ratings: Half a King by Joe Abercrombie
14,326 ratings: The Broken Eye by Brent Weeks
6,910 ratings: Prince of Fools by Mark Lawrence
1,832 ratings: Valour by John Gwynne
Not much of a contest, is it? Sanderson has emerged as one of—if not the—most popular fantasy writers working today. While George R.R. Martin is still more popular (a hit TV show will do that), and there is an argument to be made for Patrick Rothfuss and J.K. Rowling (if she ever writes fantasy again), Sanderson has been far more productive than those writers over the past 3-4 years. This steady flow of novels has catapulted him to a lofty status. Part of the appeal of reading an author like Sanderson is that you’re reading the books everyone else is reading, which is a powerful pull for fantasy fans. That’s how things were back in my day with Dragonlance. You also get your Sanderson fix every year. No 3-4 year wait like with Rothfuss, Martin, Lynch, etc.
Jared over at Pornokitsch has been leading the charge with some great analysis of the Gemmell awards. He’s particularly tough on Sanderson, although his analysis glosses over what Sanderson does well, which is setting up magic systems and worlds that work by clear rules. Sanderson then gives us near-endless (you have to if you want to write 1000+ page novels) scenarios involving those rules. Sanderson works well because he gives us fantasy worlds that aren’t cloaked in a shroud of magic; when you read Words of Radiance, you feel like you “get” the world. Compared to the hand-waving magic of A Song of Ice and Fire or the “we-always-find-the-magic-we-need” plots of Harry Potter or The Kingkiller Chronicles, it feels very organized. Sanderson does coherent systems better than almost anyone else working in the field today, and it’s those systems that drive readers through Sanderson’s somewhat pedestrian prose and meandering plots.
I think Sanderson pulls a lot of his style from gaming, particularly the tabletop variety. At times, the appendices in the back of the book feel like DM rulebooks; I don’t think it should be any surprise that a generation of fantasy readers that grew up with D&D, Baldur’s Gate, Elder Scrolls, Final Fantasy, etc., would be drawn to Sanderson. There’s a steady drip-drip of information through Sanderson, and it’s that revealing of the world’s rules (not the plot) that drives the action.
All of this raises interesting questions about what an award should be. Should awards go to the most popular novels? Don’t those novels already receive enough attention? Or does bringing attention to Sanderson help draw casual fantasy fans into the field, readers who are likely to pick him up and enjoy him? The Hugo and Nebulas have gone one route, the Gemmell another.
It’s been a while since I’ve updated my master list of 2015 SFF Awards to see who has the most nominations and wins. A couple major awards have been announced in the past month, including the Campbell Memorial (to Claire North’s The First Fifteen Lives of Harry August) and the Locus Awards (SF to Ann Leckie’s Ancillary Sword and Fantasy to Katherine Addison’s The Goblin Emperor). Leckie’s win for Ancillary Sword makes her the only two-time winner this year (she also grabbed the British Science Fiction Award).
The World Fantasy Nominees for 2015 were also recently announced. Here’s the Novel category:
Katherine Addison, The Goblin Emperor (Tor Books)
Robert Jackson Bennett, City of Stairs (Broadway Books/Jo Fletcher Books)
David Mitchell, The Bone Clocks (Random House/Sceptre UK)
Jeff VanderMeer, Area X: The Southern Reach Trilogy (Farrar, Straus and Giroux Originals)
Jo Walton, My Real Children (Tor Books US/Corsair UK)
A strong list, even if I’m not quite sure some of these are actually fantasy. The WFA tends to tip over to the Weird fiction side of things, so that accounts for Area X and The Bone Clocks. I suspect Addison is the likely winner here, although this is a juried (not popular vote) award. If Addison wins the Hugo, they might choose to go in a different direction.
So, where does that leave us? You can see my full list here: 2015 Awards Meta-List. I’m tracking 15 major awards. Let’s focus on the Top 10, everyone who received at least 3 different award nominations:
EDIT: A couple clean ups to the list. One of the commentators caught that I’d miscounted Nina Allan’s The Race, and I had VanderMeer down for the Hugo nom instead of the Campbell nom. Thanks everyone for double-checking!
1. The Three-Body Problem, Cixin Liu: 5 nominations, 0 wins (Hugo, Nebula, Campbell, Locus SF, Prometheus)
2. Ancillary Sword, Ann Leckie: 4 nominations, 2 wins (Hugo, Nebula, BSFA, Locus SF, with wins in the BSFA and Locus SF)
3. Annihilation/Area X, Jeff VanderMeer: 4 nominations, 1 win (Campbell, Nebula, Locus SF, World Fantasy, with a win in the Nebula)
4. The Goblin Emperor, Katherine Addison: 4 nominations, 1 win (Hugo, Nebula, Locus Fantasy, World Fantasy, with a win in the Locus Fantasy)
5. Memory of Water, Emmi Itaranta: 3 nominations, 0 wins (Clarke, Tiptree, Philip K. Dick)
5. Europe in Autumn, David Hutchinson: 3 nominations, 0 wins (Clarke, BSFA, Campbell)
5. The First Fifteen Lives of Harry August, Claire North: 3 nominations, 1 win (Clarke, BSFA, Campbell, with a win in the Campbell)
5. Lagoon, Nnedi Okorafor: 3 nominations, 0 wins (BSFA, Tiptree, Kitschies)
5. The Peripheral, William Gibson: 3 nominations, 0 wins (Campbell, Locus SF, Kitschies)
5. The Race, Nina Allan: 3 nominations, 0 wins (British SF, Campbell, Kitschies)
For all the love lavished on Station Eleven by Emily Mandel, it managed only two nominations (for the Clarke and Campbell), although it did win the Clarke. Not a bad haul. City of Stairs has a real shot at a British Fantasy nomination, and could join the group above with 3, adding to its Locus Fantasy and World Fantasy nominations.
A couple observations: it has been a very evenly divided year. No one has really dominated the 15 awards I’m keeping track of. Last year, Ancillary Justice had 8 nominations and 5 wins; Ancillary Sword has only managed half of that. 2015 is a year without a consensus “best novel” in the field; that’s something that has been overlooked in all the furor that’s gone down over this year’s awards. It’s going to be a toss up as to whether Leckie or Addison wins the year. If Leckie wins her second Hugo, that’ll give her the edge, but Addison still has a chance to win the Hugo, and then go on to sweep the British Fantasy and World Fantasy awards.
Of the top 10, we’re seeing an increased influence of European fiction: both Europe in Autumn and Lagoon had their biggest impact and readerships outside the United States. Don’t forget Memory of Water, translated from the Finnish, which joins The Three-Body Problem as a highly nominated novel in translation. Fully half this list represents world science fiction and fantasy, an intriguing change from previous years. I haven’t read Memory of Water or Europe in Autumn yet, but this list is tempting me to pick them up.
So, what do you think? Does this collated list better reflect the true state of the SFF field than any individual award?
The 2015 Gemmell Legend Award, an internet vote for the Best Fantasy Novel of the year, is now open for voting! The finalists are:
Half a King by Joe Abercrombie (HarperCollins)
Valour by John Gwynne (Pan Macmillan/Tor UK)
Prince of Fools by Mark Lawrence (HarperCollins)
Words of Radiance by Brandon Sanderson (Gollancz)
The Broken Eye by Brent Weeks (Orbit)
I find the Gemmell a fascinating award for several reasons. First, this is a true “open internet vote” award: anyone can vote, and it pulls in a very different voting audience than the “pay to vote” Hugo. As such, you get a very different feel in this award: very populist, very mainstream, very best-sellery. Second, the Gemmell moves perpendicular to the other SFF awards: this and the World Fantasy Award couldn’t be more different in terms of the books they honor. The Gemmell is all about big epic series fantasy, whereas the other awards avoid such novels like the plague.
In my meta-awards tracking, I follow 15 different SFF awards. Not one of these 5 authors was nominated for any of the other 14. To be fair, the major fantasy nominees (the World Fantasy and British Fantasy) haven’t been announced yet. I could see Abercrombie grabbing a nomination in one of those, but not the other Gemmell finalists.
This Gemmell will be a fascinating contest. We have three former winners going head-to-head in a fantasy deathmatch: Mark Lawrence (2014 winner for Emperor of Thorns), Brent Weeks (2013 winner for The Blinding Knife), and Brandon Sanderson (2011 winner for The Way of Kings). Add in Joe Abercrombie, and you probably have the most competitive Gemmell ever.
I think the Gemmell boils down to pretty much a popularity contest. In that case, Sanderson should win, as he’s the most popular of these big “epic” fantasy writers. In this case, the broad sweep of Goodreads can help. Check how popular these books are in terms of ratings on that site:
Words of Radiance: 52,766 ratings, 4.76 average
The Blinding Knife: 26,911 ratings, 4.46 average
Half a King: 12,067 ratings, 4.01 average
Prince of Fools: 5,933 ratings, 4.10 average
Valour: 1,566 ratings, 4.42 average
A clear Sanderson advantage, no?
Let’s wrap this up by looking at the rest of the data concerning the Short Fiction categories of Novella, Novelette, and Short Story. Remember, these stories receive far fewer votes than the Best Novel category, and they are also less centralized, i.e. the votes are spread out over a broader range of texts. Let’s start by looking at some of those diffusion numbers:
Remember, the data is spotty because individual Worldcon committees have chosen not to provide it. Still, the table is very revealing: the Short Story category is far more diffuse (i.e. far more different works are chosen by the voters) than either the Novella, Novelette, or Novel categories. To look at this visually:
In any given year, there are more than 3 times as many unique Short Stories nominated as Novellas. Now, I imagine far more Short Stories are published in any given year, but this also means that it’s much easier—much easier—to get a Novella nomination than a Short Story nomination. More voters may make something like the Short Story category more scattered, not more centralized, and this further underscores a main conclusion of this report: each Hugo category works differently.
This diffusion has some pretty profound implications on nominating %. Remember, the Hugos have a 5% rule: if you don’t appear on 5% of the nominating ballots, you don’t make the final ballot. Let’s look at the percentages of the #1 and the #5 nominee for the Short Fiction categories:
I think these percentage numbers are our best measure of “consensus.” Look at that 2011 Novella number of 35%: that means Ted Chiang’s Lifecycle of Software Objects appeared on 35% of the ballots. That seems like a pretty compelling argument that SFF fans found Chiang’s novella Hugo worthy (it won, for the record). In contrast, the Short Story category has been flirting with the 5% rule. In 2013, only 3 stories made it above 5%, and in 2014 only 4. You could interpret that as meaning there was not much agreement in those years as to what the “major” short stories were. If you think the 5% bar is too high, keep in mind that each ballot has 5 slots, and each voter on average nominates about 3 works. That means that appearing on 5% of the ballots translates to only around 5%/3 = 1.67% of the total votes cast. If a story can’t manage that much support, is it Hugo worthy?
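That back-of-the-envelope conversion from ballot share to vote share is easy to make explicit. A sketch, assuming the roughly 3 nominations per ballot noted above (all numbers hypothetical):

```python
# If each ballot names about 3 works, total votes cast is roughly
# 3x the number of ballots. A story on 5% of ballots therefore
# holds about 5%/3 of all votes cast.
ballots = 1000                  # hypothetical number of nominating ballots
avg_noms_per_ballot = 3         # rough average noted in the text
total_votes = ballots * avg_noms_per_ballot

ballot_share = 0.05             # the Hugo 5% rule
story_votes = ballot_share * ballots
vote_share = story_votes / total_votes

print(f"{vote_share:.2%}")      # about 1.67% of all votes cast
```

The ballot count cancels out, which is why the 1.67% figure holds regardless of turnout.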
Now, this may be unfair. There might simply be too many venues, with too many short stories published, for people to narrow their choices down. Given the explosion of online publishing and the economics of either paying for stories or getting free stories (I’ll let you guess which one readers like more), there may simply be too much “work” for readers to agree upon their 5 favorite stories in the nominating stage. Perhaps that’s what the final stage is for, and it’s fine for the nominating stage to have low % numbers. Still, these low numbers leave these categories very vulnerable to slate sweeps and other undue influences, including “eligibility” posts. Either way, this should be a point of discussion in any proposed change to the Hugo award.
The landscape of SFF publishing and blogging has changed rapidly over the last 10 years, and the Hugos have not made many changes to adapt to this new landscape. Some categories remain relatively healthy, with clear centralization happening in the nomination stage. Other categories are very diffuse, with little agreement amongst the multiplicity of SFF fans.
On top of these complexities, we have to think about how much scrutiny the Hugos are under: people—myself included, and perhaps worst of all—comb through the data, looking for patterns, oddities, and problems. No longer is the Hugo a distant award given in relative quiet at the Worldcon, with results trickling out through the major magazines. It’s front and center in our instant-reaction world of Twitter and the blogosphere. A great many SFF fans seem to want the Hugo to be the definitive award, to provide some final statement on what SFF works were the best in any given year. I’m not sure any award can do that, or that it’s fair to ask the Hugo to carry all that weight.
So, those rather tepid comments conclude this 5-part study of Hugo nominating ranges. All the data I used is right here, drawn primarily from the Hugo nominating packets and the Hugo Award website: Nominating Stats Data. While there are other categories we could explore, they essentially work along similar lines to what we’ve discussed so far. If you’ve got any questions, let me know, and I’ll try to answer the best I can.
We’re up to the short fiction categories: Novella, Novelette, and Short Story. I think it makes the most sense to talk about all three of these at once so that we can compare them to each other. Remember, the Best Novel nomination ranges are in Part 3.
First up, the number of ballots per year for each of these categories:
A sensible looking table and chart: the Short Fiction categories are all basically moving together, steadily growing. The Short Story has always been more popular than the other two, but only barely. Remember, we’re missing the 2007 data, so the chart only covers 2008-2015. For fun, let’s throw the Best Novel data onto that chart:
That really shows how much more popular the Best Novel is than the other Fiction categories.
The other data I’ve been tracking in this Report is the High and Low Nomination numbers. Let’s put all of those in a big table:
Here we come to one of the big issues with the Hugos: the sheer lowness of these numbers, particularly in the Short Story category. Although the Short Story is one of the most popular categories, it is also one of the most diffuse. Take a glance at the far right column: that’s the number of votes the last-place Short Story nominee received. Through the mid two-thousands, vote totals in the mid-teens were enough to get a Hugo nomination in one of the most important categories. While that has improved in terms of raw numbers, it’s actually gotten worse in terms of percentage (more on that later).
Here’s the Short Story graph; the Novella and Novelette graphs are similar, just not as pronounced:
The Puppies absolutely dominated this category in 2015, more than tripling the Low Nom number. They were able to do this because the nominating numbers have been so historically low. Does that matter? You could argue that the Hugo nominating stage is not designed to yield the “definitive” or “consensus” or “best” ballot. That’s reserved for the final voting stage, where the voting rules are changed from first-past-the-post to instant-run-off. To win a Hugo, even in a low year like 2006, you need a great number of affirmative votes and broad support. To get on the ballot, all you need is focused passionate support, as proved by the Mira Grant nominations, the Robert Jordan campaign, or the Puppies ballots this year.
As an example, consider the 2006 Short Story category. In the nominating stage, we had a range of works that received a meager 28-14 votes, hardly a mandate. Eventual winner and oddly named story “Tk’tk’tk” was #4 in the nominating stage with 15 votes. By the time everyone got a chance to read the stories and vote in the final stage, the race for first place wound up being 231 to 179, with Levine beating Margo Lanagan’s “Singing My Sister Down.” That looks like a legitimate result; 231 people said the story was better than Lanagan’s. In contrast, 15 nomination votes looks very skimpy. As we’ve seen this year, these low numbers make it easy to “game” the nominating stage, but, in a broader sense, it also makes it very easy to doubt or question the Hugo’s legitimacy.
In practice, the difference can be even narrower: Levine made it onto the ballot by 2 votes. There were three stories that year with 13 votes, and 2 with 12. If two people had changed their votes, the Hugo would have changed. Is that process reliable? Or are the opinions of 2—or even 10—people problematically narrow for a democratic process? I haven’t read the Levine story, so I can’t tell you whether it’s Hugo worthy or not. I don’t necessarily have a better voting system for you, but the confining nature of the nominating stage is the chokepoint of the Hugos. Since it’s also the point with the lowest participation, you have the problem the community is so vehemently discussing right now.
Maybe we don’t want to know how the sausage is made. The community is currently placing an enormous amount of weight on the Hugo ballot, but does it deserve such weight? One obvious “fix” is to bring far more voters into the process—lower the supporting membership cost, invite other cons to participate in the Hugo (if you invited some international cons, it could actually be a “World” process every year), add a long-list stage (the first round selects 15 works, the next round reduces those to 5, then the winner is chosen), etc. All of these are difficult to implement, and they would change the nature of the award (more voters = more mainstream/populist choices). Alternatively, you can restrict voting at the nominating stage to make it harder to “game,” either by limiting the number of nominees per ballot or through a more complex voting proposal. See this thread at Making Light for an in-progress proposal to switch how votes are tallied. Any proposed “fix” will have to deal with the legitimacy issue: can the Short Fiction categories survive a decrease in votes?
That’s probably enough for today; we’ll look at percentages in the short fiction categories next time.
Nina Allan, The Race
James L. Cambias, A Darkling Sea
William Gibson, The Peripheral
Daryl Gregory, Afterparty
Dave Hutchinson, Europe In Autumn
Simon Ings, Wolves
Cixin Liu (Ken Liu, translator), The Three-Body Problem
Emily St. John Mandel, Station Eleven
Will McIntosh, Defenders
Claire North, The First Fifteen Lives of Harry August
Laline Paull, The Bees
Adam Roberts, Bête
John Scalzi, Lock In: A Novel of the Near Future
Andy Weir, The Martian
Jeff VanderMeer, Area X (The Southern Reach Trilogy: Annihilation; Authority; Acceptance)
Peter Watts, Echopraxia
The Campbell Memorial can be confusing since it has basically the same name as the John W. Campbell Award for Best New Writer (given at the same time as the Hugos). The two awards should fight a duel to see who keeps the name.
The Campbell Memorial is a juried SF-only award, thus giving it a very different feel from the Hugo or Nebula. If you peruse their history page, they’ve moved in and out of alignment with the Hugos and Nebulas, often slanting more in a literary direction, such as last year’s winner, Strange Bodies by Marcel Theroux.
It’s a very interesting list this year. They hit the major American SF novels (Gibson, Watts, etc.) but also managed to bring in the novels that were buzzed about in Europe (Ings, Allan, Hutchinson). It’s nice to see Andy Weir finally get a nomination, publication date for The Martian be damned. Given the literary slant of this award, is this Station Eleven‘s to lose?
Today, we’ll start getting into the data for the fiction categories in the Hugo: Best Novel, Best Novella, Best Novelette, Best Short Story. I think these are the categories people care about the most, and it’s interesting how differently the four of them work. Let’s look at Best Novel today and the other categories shortly.
Overall, the Best Novel is the healthiest of the Hugo categories. It gets the most ballots (by far), and is fairly well centralized. While thousands of novels are published a year, these are widely enough read, reviewed, and buzzed about that the Hugo audience is converging on a relatively small number of novels every year. Let’s start by taking a broad look at the data:
That table lists the total number of ballots for the Best Novel category, the number of votes the High Nominee received, and the number of votes the Low Nominee (i.e. the novel in fifth place) received. I also calculated the percentages by dividing the High and Low by the total number of ballots. Remember, if a work does not receive at least 5%, it doesn’t make the final ballot. That rule has not been invoked in the previous 10 years of the Best Novel category.
A couple notes on the table. The 2007 packet did not include the number of nominating ballots per category, thus the blank spots. The red-flagged 700 indicates that the 2010 Hugo packet didn’t give the # of nominating ballots. They did give percentages, and I used math to figure out the number of ballots. They rounded, though, so that number may be off by +/- 5 votes or so. The other red flags under “Low Nom” indicate that authors declined nominations in those years, both times Neil Gaiman, once for Anansi Boys and another time for The Ocean at the End of the Lane. To preserve the integrity of the stats, I went with the book that originally placed fifth. I didn’t mark 2015, but I think we all know that this data is a mess, and we don’t even really know the final numbers yet.
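For the curious, the 2010 back-calculation is simple division, with the rounding supplying the uncertainty. A sketch with invented numbers (the packet’s real figures aren’t reproduced here):

```python
# The 2010 packet gave each nominee's percentage but not the ballot
# count, so ballots ~= votes / percentage. Numbers below are invented.
votes = 120           # a nominee's raw vote count
reported_pct = 17.0   # percentage as printed, rounded to one decimal

estimated_ballots = votes / (reported_pct / 100)

# The printed figure could round from anywhere in [16.95, 17.05),
# so the true ballot count sits in a small interval around the estimate.
low = votes / (17.05 / 100)
high = votes / (16.95 / 100)
print(round(estimated_ballots), round(low), round(high))
```

The width of that interval is where the +/- 5 votes of slop comes from.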
Enough technicalities. Let’s look at this visually:
That’s a soaring number of nominating ballots, while the high and low ranges seem to be languishing a bit. Let’s switch over to percentages:
Much flatter. Keep in mind I had to shorten the year range for the % graph, due to the missing 2007 data.
Even though the number of ballots is soaring, the % ranges are staying somewhat steady, although we do see year-to-year perturbation. The top nominees have been hovering between 15%-22.5%. Since 2009, every top nominee has managed at least 100 votes. The bottom nominee has been in that 7.5%-10% range, safely above the 5% minimum. Since 2009, those low nominees all managed at least 50 votes, which seems low (to me; you may disagree). Even in our most robust category, 50 readers liking your book can get you onto the Hugo ballot—and they don’t even have to like it the most. It could be their 5th favorite book on their ballot.
With ranges this low, it doesn’t (or wouldn’t) take much to place an individual work onto the Hugo ballot, whether by slating or other types of campaigning. Things like number of sales (more readers = more chances to vote) and audience familiarity (readers are more likely to read and vote for a book by an author they already like) could easily push a book onto the ballot over a more nebulous factor like “quality.” That’s certainly what we’ve seen in the past, with familiarity being a huge advantage in scoring Hugo nominations.
With our focus this close, we see a lot of year-to-year irregularity. Some years are stronger in the Novel category, others weaker. As an example, James S.A. Corey actually improved his percentage total from 2012 to 2013: Leviathan Wakes grabbed 7.4% (71 votes) for the #5 spot in 2012, and then Caliban’s War took 8.1% (90 votes) for the #8 spot in 2013. That kind of oddity—more Hugo voters, both in sheer numbers and percentage-wise, liked Caliban’s War, but only Leviathan Wakes gets a Hugo nom—has always defined the Hugo.
What does this tell us? This is a snapshot of the "healthiest" Hugo: rising votes, a high nom average of about 20%, a low nom average of around 10%. Is that the best the Hugo can do? Is it enough? Do those ranges justify the weight fandom places on this award? Think about how this will compare to the other fiction categories, which I'll be laying out in the days to come.
Now, a few other pieces of information I was able to dig up. The Worldcons are required to give data packets for the Hugos every year, but different Worldcons choose to include different information. I combed through these to find some more vital pieces of data, including Number of Unique Works (i.e. how many different works were listed on all the ballots, a great measure of how centralized a category is) and Total Number of Votes per category (which lets us calculate how many nominees each ballot listed on average). I was able to find parts of this info for 2006, 2009, 2013, 2014, and 2015.
Table 6: Number of Unique Works and Number of Votes per Ballot for Selected Best Novel Hugo Nominations, 2006-2015
I’d draw your attention to the ratio I calculated, which is the Number of Unique Works / Number of Ballots. The higher that number is, the less centralized the award is. Interestingly, the Best Novel category is becoming more centralized the more voters there are, not less centralized. I don’t know if that is the impact of the Puppy slates alone, but it’s interesting to note nonetheless. That might indicate that the more voters we have, the more votes will cluster together. I’m interested to see if the same trend holds up for the other categories.
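As a sketch of that ratio (the counts here are illustrative, not from the table):

```python
def centralization_ratio(unique_works, ballots):
    """Unique works per nominating ballot. Higher means votes are spread
    across more distinct works (less centralized); lower means votes
    cluster on fewer works (more centralized)."""
    return unique_works / ballots

# Illustrative: 300 distinct works named across 1,500 ballots
print(round(centralization_ratio(300, 1500), 2))  # → 0.2
```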
Lastly, look at the average number of votes per ballot. Your average Best Novel nominator votes for over 3 works. That seems like good participation. I know people have thrown out the idea of restricting the number of nominations per ballot, either to 4 or even 3. I’d encourage people to think about how much of the vote that would suppress, given that some people vote for 5 and some people only vote for 1. Would you lose 5% of the total vote? 10%? I think the Best Novel category could handle that reduction, but I’m not sure other categories can.
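A rough way to estimate that suppression, assuming a hypothetical distribution of works-per-ballot (the real distribution isn't in the packets I have, so the numbers below are purely illustrative):

```python
def votes_lost(ballot_counts, cap):
    """Fraction of total nominations discarded if each ballot is capped at
    `cap` works. ballot_counts[k] = number of ballots naming exactly k works."""
    total = sum(k * n for k, n in ballot_counts.items())
    lost = sum(max(0, k - cap) * n for k, n in ballot_counts.items())
    return lost / total

# Hypothetical distribution of 1,000 ballots, averaging just over 3 works each
dist = {1: 200, 2: 150, 3: 200, 4: 150, 5: 300}
print(round(votes_lost(dist, 4), 3))  # cap at 4 → 0.094 (about 9% lost)
print(round(votes_lost(dist, 3), 3))  # cap at 3 → 0.234 (about 23% lost)
```

Under this made-up distribution, a cap of 4 costs roughly a tenth of the vote and a cap of 3 costs nearly a quarter; the real loss depends entirely on how many ballots actually list 5 works.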
Think of these posts—and my upcoming short fiction posts—as primarily informational. I don’t have a ton of strong conclusions to draw for you, but I think it’s valuable to have this data available. Remember, my Part 1 post contains the Excel file with all this information; feel free to run your own analyses and number-crunching. If you see a trend, don’t hesitate to mention it in the comments.
The Hugo is a strange award. One Hugo matters a great deal—the Best Novel. It sells copies of books, and defines for the casual SFF fan the “best” of the field. The Novella, Novelette, and Short Story also carry significant weight in the SFF field at large, helping to define rising stars and major works. Some of the other categories feel more like insider awards: Editor, Semiprozine. Others feel like fun ways to nod at the SFF fandom (Fanzine). All of them work slightly differently, and there’s a huge drop off between categories. That’s our point of scrutiny today, so let’s get to some charts.
First, let’s get some baseline data out there: the total number of nominating ballots per year. I also included the final voting ballots. Data gets spotty on the Hugo website, thus the blank spots. If anyone has that data, point me in that direction!
I pulled that data off the HugoAward.org site, save for the flagged 895, which I grabbed from this File 770 post.
Now, how popular is each category? How many of those total nominators nominate in each category? First up, the averages for 2006-2015:
I included two averages for you: the 2006-2015 average, and then the 2006-2013 average. This shows how much the mix of Loncon, the Puppy vote, and increased Hugo scrutiny have driven up these numbers.
What this table also shows is how some categories are far more popular than others. Several hundred more people vote in the Novel category than in the next most popular category of Dramatic Long, and major categories like Novella and Novelette only manage around 50% of the Novel nominating vote. That's a surprising result, and may show that the problem with the Hugo lies not in the total number of voters but in the difficulty those voters have in voting in all categories. I've heard it mentioned that a major problem for the Hugo is "discovery": it's difficult to have a good sense of the huge range of novellas, novelettes, short stories, etc., and many people simply don't vote in the categories they don't know. It'd be interesting to have a poll: how many SFF readers actually read more than 5 new novels a year? 5 new novellas? I often don't know if what I'm reading is a novella or a novelette, and does the lack of clarity in these categories hurt turnout?
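For what it's worth, the fiction category lines are drawn by word count in the WSFS constitution: short story under 7,500 words, novelette up to 17,500, novella up to 40,000, novel above that. A trivial classifier makes the cutoffs concrete (treatment of works landing exactly on a boundary is approximate here; the constitution also allows some leeway for borderline lengths):

```python
def hugo_fiction_category(word_count):
    """Classify a work of fiction by the WSFS word-count boundaries
    (boundary handling at the exact cutoffs is approximate)."""
    if word_count < 7_500:
        return "Short Story"
    if word_count < 17_500:
        return "Novelette"
    if word_count < 40_000:
        return "Novella"
    return "Novel"

print(hugo_fiction_category(15_000))  # → Novelette
print(hugo_fiction_category(25_000))  # → Novella
```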
Let’s look at this visually:
Poor Fan Artist category. That drop off is pretty dramatic across the award. Are there too many categories for people to vote in?
Let’s focus in on 2015, as that’s where all the controversy is this year. I’m interested in the percentage of people who voted for each category, and the number of people who sat out in each category.
Table 4: Percentage of Voters and “Missing Votes” per Hugo Category, 2015 Only
The chart at the top tells us a total of 2122 people nominated in the Hugos, but no category managed more than 87% of that total. The missing votes column is 2122 minus the number of people who actually nominated in that category. I was surprised at how many people sat out each category. Remember, each of those people who didn't vote in Best Novel, Best Short Story, etc., could have nominated up to 5 works! In the Novella category alone, 5000 nominations were left on the table. If everyone who nominated in the Hugos had nominated in every category, the Puppy sweeps most likely wouldn't have happened.
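To make the arithmetic concrete: the 2,122 total and the 5,000 Novella figure come from the data above, while the 1,122 nominator count below is simply the value those two numbers imply, so treat it as approximate:

```python
TOTAL_NOMINATORS = 2122  # total 2015 Hugo nominating ballots (from the data above)

def missing_votes(category_nominators, total=TOTAL_NOMINATORS, slots=5):
    """People who nominated somewhere but skipped this category, and the
    maximum number of nominations they collectively left unused."""
    sat_out = total - category_nominators
    return sat_out, sat_out * slots

# If roughly 1,122 people nominated in Novella, about 1,000 sat out,
# leaving up to 5,000 potential nominations on the table.
sat_out, unused = missing_votes(1122)
print(sat_out, unused)  # → 1000 5000
```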
Again, let’s take a visual look:
That chart reinforces the issue in the awards: less than 50% turnout in major categories like Novella, Short Story, and Novelette.
What to conclude from all of this? Total number of ballots isn’t as important as to who actually nominates in each category. Why aren’t people nominating in things like Short Story? Do the nominations happen too early in the year? Are readers overwhelmed by the sheer variety of works published? Do readers not have strong feelings about these works? Given the furor on the internet over the past few weeks, that seems unlikely. If these percentages could be brought up (I have no real idea how you’d do that), the award would immediately look very different.
Tomorrow, we’ll drill more deeply into the Fiction categories, and look at just how small the nominating numbers have been over the past decade.