I’ve spun my creaky model around and around, and here is my prediction for the Nebula Best Novel category, which will be decided this weekend:
N.K. Jemisin, The Fifth Season: 22.5%
Ann Leckie, Ancillary Mercy: 22%
Naomi Novik, Uprooted: 14.7%
Ken Liu, The Grace of Kings: 13.3%
Lawrence Schoen, Barsk: 10.7%
Charles Gannon, Trial by Fire: 9.5%
Fran Wilde, Updraft: 7.3%
Remember, Chaos Horizon is a grand (and perhaps failed!) experiment to see if we can predict the Nebulas and Hugos using publicly available data. To predict the Nebulas, I’m currently using 10 “Indicators” of past Best Novel winners. I’ve listed them at the bottom of this post, and I suggest you dig into the previous year’s prediction post to see how I’m building the model. If you travel down that hole, I suggest you bring plenty of coffee.
Simply put, though, I treat a bunch of indicators as competing experts (one person says the blue horse always wins! another says that when it’s rainy, green horses win!) and combine those expert voices into a single number. While my model gives Jemisin a very slight edge this year, anyone can win the Nebula Best Novel award, and surprises happen regularly. We’ve had some real curveballs in this category in the last 15 years, and if you bet money on this award, you’d lose. What I suggest is treating the list as a starting point for further thought and discussion . . . Why would Jemisin be in the lead? What about The Fifth Season makes it a front-runner?
This year, Jemisin does very well because of her impressive Hugo and Nebula history (6 prior nominations), her sterling placement on year-end lists, her nominations for the Hugo, Locus Fantasy, and Kitschies, and the fact that this is the first novel in a series. Jemisin is very familiar to the Nebula audience and critically acclaimed. That’s a recipe for winning. The Nebulas tend to go to first books in a series (think Ancillary Justice or Annihilation from the past two years), so if Jemisin doesn’t win for Book #1 of The Broken Earth series, it could be quite a while before she has a viable chance to win again. Sure, SFWA voters could give the award to Book #3 of a series, but that hasn’t happened in the past. I tend not to look much at content (there are plenty of other websites for that), but The Fifth Season does have some of the more experimental/literary prose Nebula voters have liked recently. Parts of it are in the second person, for instance. This book would fit pretty well with the Leckie and VanderMeer wins.
Leckie is probably too high in my formula—and that’s not because SFWA voters don’t like Leckie, but because Ancillary Justice won just two years ago. Do the SFWA voters really want to give Leckie another award for the same series so soon? Aside from that wrinkle, Ancillary Mercy has everything going for it: critical acclaim, award nominations, etc. A decade from now, I expect Leckie to have won the Nebula at least once more . . . but not until she publishes a new series.
I think Uprooted has a real shot. This is actually a great test case year, letting us see what SFWA voters value most. If it’s past Nebula history/familiarity, that helps Jemisin and Liu; Novik has 0 prior Nebula noms. If it’s popularity, that helps Novik—stroll over to Amazon or Goodreads, and you can see that Uprooted has 4-5 times more ratings than Jemisin or Leckie. In the past, though, the SFWA hasn’t much cared about mainstream popularity. If Uprooted wins, I’ll need to recalculate my formula to take popularity more into account.
Ken Liu will be familiar to the Nebula audience—he’s already won a Nebula in short fiction. My formula dings him because he didn’t show up on year-end lists or in the other awards. Same for Updraft, although we’re lacking the Nebula history for Wilde.
Gannon is the new Jack McDevitt—and McDevitt got nominated a bunch of times and then won. So it’s not out of the realm of possibility for Gannon to win this year: the other books could split the vote, etc. Still, it’s hard to imagine voters jumping onto Book #3 of a series if Books #1 and #2 couldn’t win.
That leaves Schoen—a true wild card. Schoen had the most votes on the SFWA recommended reading list, and we don’t yet know how much that matters. If Schoen wins, I’ll have to completely rejigger my formula. Things are getting a little creaky as is, and it’s probably time to go back and rebuild the model for Year #4.
Always remember the Nebula is an unpredictable award. Remember, The Quantum Rose won over A Storm of Swords. Who saw that coming? That’s why everyone has a decent chance in my formula: no one dips below 5%.
Lastly, remember Chaos Horizon is just for fun, a chance to look at some predictions and think about who is likely to win. A different statistician would build a different model, and there’s no problem with that—statistics can’t predict the future. Instead, they help us to think about events that haven’t happened yet. That’s just one of many possible engagements with the awards. Good luck to all the Nebula nominees, and enjoy the ceremonies this weekend!
Indicator #1: Author has previously been nominated for a Nebula (80%)
Indicator #2: Author has previously been nominated for a Hugo (73.33%)
Indicator #3: Has received at least 10 combined Hugo + Nebula noms (46.67%)
Indicator #4: Novel is science fiction (73.33%)
Indicator #5: Places on the Locus Recommended Reading List (93.33%)
Indicator #6: Places in the Goodreads Best of the Year Vote (100.00%)
Indicator #7: Places in the Top 10 on the Chaos Horizon SFF Critics Meta-List (100.00%)
Indicator #8: Receives a same-year Hugo nomination (60%)
Indicator #9: Nominated for at least one other major SFF award (73.33%)
Indicator #10: Is the first novel of a series or a standalone (80%)
The percentage afterward tracks the data from 2001-2015 (when available): 80% of the time, the eventual winner had previously been nominated for a Nebula, and so on.
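The post doesn’t publish the exact weighting formula, but the “competing experts” idea described above can be sketched roughly as follows. This is a minimal illustration, not the actual Chaos Horizon model: each indicator’s historical hit rate acts as its weight, a nominee’s score sums the weights of the indicators it satisfies, and scores are normalized so the field totals 100%. The indicator names and the example nominees are my own shorthand.

```python
# Hypothetical sketch of combining indicator "experts" into one prediction.
# Hit rates (2001-2015) are taken from the indicator list above.
INDICATOR_RATES = {
    "prior_nebula_nom": 0.80,
    "prior_hugo_nom": 0.7333,
    "ten_combined_noms": 0.4667,
    "is_science_fiction": 0.7333,
    "locus_recommended": 0.9333,
    "goodreads_best_of_year": 1.0,
    "chaos_horizon_top10": 1.0,
    "same_year_hugo_nom": 0.60,
    "other_major_award_nom": 0.7333,
    "first_in_series_or_standalone": 0.80,
}

def predict(nominees):
    """nominees: dict mapping title -> set of satisfied indicator names.
    Returns dict mapping title -> win probability (percent)."""
    # Each nominee's raw score is the sum of the hit rates ("expert
    # weights") of the indicators it satisfies.
    raw = {
        title: sum(INDICATOR_RATES[i] for i in indicators)
        for title, indicators in nominees.items()
    }
    # Normalize so the whole slate sums to 100%.
    total = sum(raw.values())
    return {title: 100 * score / total for title, score in raw.items()}
```

Because of the normalization step, a nominee satisfying zero indicators still ends up with a nonzero share only if you add a floor; the real model apparently enforces a 5% minimum, which this sketch omits.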
In a great find on the last Chaos Horizon SFWA thread, one of the commenters noted that you can access earlier years of the SFWA Recommended Reading list by changing the URL. Data goes back to 2011, which gives us three additional years to analyze. Previously, I’d been relying only on the 2014 data to see a correlation between this list and the eventual Nebula nominees.
You can check out the lists by going to this page and changing the year. I’m most interested in whether or not there’s a correlation between the recommendations and the eventual nominees/winners of the Best Novel category, so let’s take a look at the Top 10. Eventual Nebula winners are in red; eventual Nebula nominees are in green. I also included the number of recommendations for each book. Remember, the Nebula nominates 6 works unless there are ties, and then it can nominate more (it nominated 8 in 2014, for instance).
Table #1: The Top 10 From the SFWA Recommended Reading List, 2014-2011
You can always click to make that chart bigger, but I think the colors tell the story. That’s a lot of green at the top of the chart and a lot of red at the very top.
In 3 of the past 4 years, the top vote-getter from the Recommended List went on to win the Nebula. Schoen must be dancing right now for Barsk, which topped the 2015 list with 35 votes (Gannon got 33, and Wilde 29, so Schoen shouldn’t start celebrating yet). The only exception was Kim Stanley Robinson in 2012. Maybe KSR, who had 11 prior Nebula nominations and 2 prior wins, was just so much better known to the voting audience than his fellow nominees, although that’s just speculation. That KSR win from the #4 spot stands out as a real outlier compared to the other years.
The Top 6 recommended works got nominated 19/24 times, a staggering 79.2% nomination rate. If you’re predicting the Nebulas, are you going to find any better correlation than this? Just pick the top 6 and bask in your ~80% success rate. Even in the worst year of the past 4, you’d have gotten 67% right. For the record, 80% is better than Chaos Horizon has ever done, or ever really hopes to do. Maybe the SFWA is trying to put Chaos Horizon out of business.
We’ve got one anomaly in 2012 that I can’t account for. Mary Robinette Kowal’s Glamour in Glass was nominated for a Nebula that year, and yet it appears nowhere (not even 1 vote!) on the 2012 Recommended Reading list. Before you start sharpening your Nebula conspiracy knives, I wonder if Kowal, who was SFWA Vice-President, asked not to appear on this list? This is also the year of John Scalzi’s Redshirts, and, according to him, he turned down a Nebula nomination that year because he was SFWA President. Check this thread, and search for “Redshirts” in the comments. Note that Redshirts doesn’t show up on the 2012 recommendations either. I find it hard to believe Scalzi and Kowal garnered enough support in 2012 to grab nominations without receiving a single recommendation on this list, but what do I know?
Other than the Kowal oddity, all the eventual Nebula nominees have come from the Top 8 of the Recommended Reading list. There doesn’t seem to be much rhyme or reason why works in the #7 or #8 spot overperform their position. We do have a number of SF novels that got elevated (McDevitt, Liu, Mieville), but we also have Caitlin Kiernan’s horror/weird novel The Drowning Girl beating out Elizabeth Bear’s fantasy novel Range of Ghosts. We may only be talking a couple of votes at these spots, so it’s probably best not to put too much stock in them. I guess if there’s a toss-up between an SF novel and a fantasy or horror novel, go with the SF if you’re predicting.
Other items of interest: we’ve seen a sharp increase in votes over the past two years, with 2014 and 2015 almost doubling the numbers from 2013, 2012, and 2011. Has the intense scrutiny on the Nebulas and Hugos kicked the SFWA voters into gear? Are more people aware of this list and its possible impact on the Nebulas?
Anything else we can learn from this list? For those of you interested in short fiction, you can change the year to access past data for the Short Story, Novelette, and Novella categories. Happy analyzing!
Since Chaos Horizon is a website dedicated to gathering stats and information about SFF awards, particularly the Hugos and Nebulas, a list of declined award nominations might prove helpful to us. There’s a lot of information out there, but it’s scattered across the web and hard to find. Hopefully we can gather all this information in one place as a useful resource.
So, if you know of any declined nominations—in the Hugos and Nebulas or other major SFF awards—drop the info in the comments. I have not included books withdrawn for eligibility reasons (published in a previous year, usually). I’ll keep the list updated and stash it in my “Resources” tab up at the top.
Declined Hugo Best Novel nominations:
1972 Best Novel: Robert Silverberg, The World Inside (source: NESFA.org’s excellent Hugo website; Silverberg allegedly declined to give his other Hugo-nominated novel that year, A Time of Changes, a better chance)
1979 Best Novel: James Tiptree, Jr., Up the Walls of the World (source: NESFA.org; I have no idea what the story is here)
1989 Best Novel: P. J. Beese and Todd Cameron Hamilton, The Guardsman (source: NESFA.org; Jo Walton has an interesting snippet on this from her Hugo series, noting that the book was disqualified because of bloc voting)
2005 Best Novel: Terry Pratchett, Going Postal (source: 2005 Hugo Page, nomination links at bottom, Pratchett’s statement that he just wanted to enjoy the event)
2006 Best Novel: Neil Gaiman, Anansi Boys (source: Gaiman’s statement, 2006 Hugo Page, nomination stats at bottom)
2014 Best Novel: Neil Gaiman, Ocean at the End of the Lane (source: 2014 Hugo Page, nomination stats at bottom)
2015 Best Novel: Larry Correia, Monster Hunter Nemesis (source: Correia’s website)
2015 Best Novel: Marko Kloos, Lines of Departure (source: Kloos’s website)
An interesting list. Anansi Boys might have won the 2006 Best Novel Hugo over Robert Charles Wilson’s Spin; Gaiman was incandescently hot at the time. I don’t think Pratchett would have won, as Going Postal isn’t his best work, and Jonathan Strange & Mr. Norrell was a sensation that year. Gaiman wasn’t going to beat Ann Leckie’s Ancillary Justice in 2014.
Declined Nebula Best Novel nominations:
2012 Best Novel: John Scalzi, Redshirts (source: I found this mention on Scalzi’s blog; search the comments for “Redshirts”)
2312 by Kim Stanley Robinson won in 2012; I think Redshirts would have been competitive, although the Nebulas have never been particularly friendly to Scalzi. I’m sure this has happened at other times in the Nebulas, but the Nebula is more of a closed-shop award, and it doesn’t publicize what happens behind the scenes as much as the Hugos do.
Other Declined Nominations (story categories, other awards):
1971 Hugo Novella: Fritz Leiber, “The Snow Women” (source: NESFA.org; Leiber was up against himself this year, for the eventual winner “Ill Met in Lankhmar”)
1982 Nebula Short Story: Lisa Tuttle, “The Bone Flute” (source: Ansible; Tuttle said that she had “written to withdraw my short story from consideration for a Nebula, in protest at the way the thing is run, and in the hope that my protest might move the Nebula Committee to institute a few simple rules (like, either making sure that all items up for consideration are sent around to all the voters; or else disqualifying works which are campaigned for by either the authors or the editors) which would make the whole Nebula system less of a farce”; she still won, then refused the award)
1990 Hugo Novella: George Alec Effinger, “Marîd Changes His Mind” (sources: NESFA.org; I have no clue why)
1991 Hugo Novella: Lois McMaster Bujold, “Weathermen” (source: NESFA.org; I have no clue why; EDIT: Mark mentioned that the first six chapters of The Vor Game, which won the Best Novel Hugo that year, are a lightly modified version of “Weathermen”; perhaps Bujold withdrew for that reason)
2003 Hugo Novella: Ted Chiang, “Liking What You See: A Documentary” (source: NESFA.org; Chiang allegedly felt it didn’t live up to his best work; I’m also linking this Frank Wu article from Abyss & Apex because it has some more discussion of other declined Hugo noms in other categories)
2015 Hugo Short Story: Annie Bellet, “Goodnight Stars” (source: Bellet’s website)
I know I must have missed plenty—I’m not necessarily plugged in to the inner workings of the SFF world. What other authors have declined, and why?
A sub-category of my broader genre study, this post addresses the increasing influence of “literary fiction” on the contemporary Hugo and Nebula Awards for Best Novel, 2001-2014. I think the general perception is that the awards, particularly the Nebula, have begun nominating novels that include minimal speculative elements. Rather than simply trust the general perception, let’s look to see if this assumption lines up with the data.
Methodology: I looked at the Hugo and Nebula nominees from 2001-2014 and ranked the books as either primarily “speculative” or “literary.” Simple enough, right?
Defining “literary” is a substantial and significant problem. While most readers would likely acknowledge that Cloud Atlas is a fundamentally different book than Rendezvous with Rama, articulating that difference in a consistent manner is complicated. The Hugos and Nebulas offer no help themselves. Their by-laws are written in an incredibly vague fashion that does not define what “Science Fiction or Fantasy” actually means. Here’s the Hugo’s definition:
Unless otherwise specified, Hugo Awards are given for work in the field of science fiction or fantasy appearing for the first time during the previous calendar year.
Without a clear definition of “science fiction or fantasy,” it’s left up to WorldCon or SFWA voters to set genre parameters, and they are free to do so in any way they wish.
All well and interesting, but that doesn’t help me categorize texts. I see three types of literary fiction entering into the awards:
1. Books by literary fiction authors (defined as having achieved fame before their Hugo/Nebula nominated book in the literary fiction space) that use speculative elements. Examples: Cloud Atlas, The Yiddish Policeman’s Union.
2. Books by authors in SFF-adjacent fields (primarily horror and weird fiction) that have moved into the Hugo/Nebulas. These books often allow readers to see the “horror” elements as either being real or imagined. Examples: The Drowning Girl, Perfect Circle, The Girl in the Glass.
3. Books by already well-known SFF authors who are utilizing the techniques/styles more commonplace to literary fiction. Examples: We Are All Completely Beside Ourselves, Among Others.
That’s a broad set of different texts. To cover all those texts—remember, at any point you may push back against my methodology—I came up with a broad definition:
I will classify a book as “literary” if a reader could pick the book up, read a random 50 page section, and not notice any clear “speculative” (i.e. non-realistic) elements.
That’s not perfect, but there’s no authority we can appeal to for these classifications. Let’s see how it works:
Try applying this to Cloud Atlas. Mitchell’s novel consists of a series of entirely realistic novellas set throughout various ages of history and two speculative novellas set in the future. If you just picked the book up and started reading, chances are you’d land in one of the realistic sections, and you wouldn’t know it could be considered an SFF book.
Consider We Are All Completely Beside Ourselves, Karen Joy Fowler’s rich meditation on science, childhood, and memory. Told in realistic fashion, it follows the story of a young woman whose parents raised a chimpanzee alongside her, and how this early childhood relationship shapes her college years. While this isn’t the place to decide if Fowler deserved a Nebula nomination—she won the National Book Award and was nominated for the Booker for this same book, so quality isn’t much of a question—the styles, techniques, and focus of Fowler’s book are intensely realistic. Unless you’re told it could be considered an SF novel, you’d likely consider it plain old realistic fiction.
With this admittedly imperfect definition in place, I went through the nominees. For the Nebula, I counted 13 out of 87 nominees (15%) that met my definition of “literary.” While a different statistician would classify books differently, I imagine most of us would be in the same ballpark. I struggled with The City & The City, which takes place in a fictional dual city and utilizes a noir plot; I eventually saw it as being more Pynchonesque than speculative, so I counted it as “literary.” I placed The Yiddish Policeman’s Union as literary fiction because of Chabon’s earlier fame as a literary author. After he establishes the “Jews in Alaska” premise, large portions of the book are straightforwardly realistic. Other books could be read either as speculative or not, such as The Drowning Girl. Borderline cases all went into the “literary” category for this study.
Given that I like the Chabon and Mieville novels a great deal, I’ll emphasize I don’t think being “literary” is a problem. Since these kinds of books are not forbidden by the Hugo/Nebula by-laws, they are fair game to nominate. These books certainly change the nature of the award, and there are real inconsistencies—no Haruki Murakami nominations, no The Road nomination—in which literary SFF books get nominated.
As for the Hugos, only 4 out of 72 nominees met my “literary” definition. Since the list is small, let me name them here: The Years of Rice and Salt (Robinson’s realistically told alternative history), The Yiddish Policeman’s Union, The City & The City, and Among Others. Each of those pushes the genre definitions of speculative fiction. Two are flat-out alternative histories, which has traditionally been considered an SFF category, although I think the techniques used by Robinson and Chabon are very reminiscent of literary fiction. The Mieville is an experimental book, and the Walton is a book as much “about SFF” as SFF. I’d note that 3 of those 4 (all but the Robinson) received Nebula nominations first, and that Nebula noms have a huge influence on the Hugo noms.
Let’s look at this visually:
Even with my relatively generous definition of “literary,” that’s not a huge encroachment. Roughly 1 in 6 of the Nebula noms have been from the literary borderlands, which is lower than what I’d expected. While 2014 had 3 such novels (the Fowler, Hild, and The Golem and the Jinni), the rest of the 2010s had about 1 borderline novel a year.
The Hugos have been much less receptive to these borderline texts, usually only nominating them once the Nebula awards have done so. We should note that both Chabon and Walton won, once again reflecting the results of the Nebula.
So what can we make of this? The Nebula nominates “literary” books about 1 time in 6, or once per year. The Hugo does this much more infrequently, and usually when a book catches fire in the Nebula process. While this represents a change in the awards, particularly the Nebula, it is nowhere near as rapid or significant as the changes regarding fantasy (which is around 50% of Nebula noms and 30% of Hugo noms). I know some readers think “literary” stories are creeping into the short story categories; I’m not an expert on those categories, so I can’t meaningfully comment.
I’m going to use the 15% Nebula and 5% Hugo “literary” number to help shape my predictions. I may have been overestimating the receptiveness of the Nebula to literary fiction; this study suggests we’d see either Mitchell or Mandel in 2015, not both. Here’s the full list of categorizations. I placed a 1 by a text if it met the “literary” definition: Lit Fic Study.
Hot off the presses is my newly collated SFF Critics Meta-List! This list includes 8 different “Best of 2014” lists, all by outlets that have a reasonable chance of either reflecting or influencing the Hugo/Nebula awards.
Currently included: Coode Street Podcast, io9, SF Signal, Strange Horizons, Jeff VanderMeer writing for Electric Literature, Adam Roberts writing for The Guardian, Tor.com, and A Dribble of Ink. The lists were chosen because of their reach and previous reliability in predicting the Hugos/Nebulas (Tor, io9); the fame of the authors (VanderMeer, Roberts); or because the website/fancast has been recently nominated for a Hugo (Dribble, Strange Horizons, SF Signal, Coode Street). Any comments/questions about methodology are welcome.
Points: 1 point per list, unless the list is a collation of more than 3 critics (SF Signal, Strange Horizons, Tor.com). In that case, books can grab a maximum of 2 points, pro-rated for # of mentions on the list. See here for an explanation of this methodology.
6.5: Ancillary Sword
5: The Goblin Emperor
4.5: The Magician’s Land
4: Broken Monsters
4: The Bone Clocks
3.5: City of Stairs
3: The First Fifteen Lives of Harry August
3: The Girls at the Kingfisher Club
3: The Girl in the Road
3: All Those Vanished Engines
3: The Peripheral
3: The Three-Body Problem
2.25: The Race
2: Steles of the Sky
2: Our Lady of the Streets
2: Nigerians in Space
2: Europe in Autumn
2: A Man Lies Dreaming
2: Station Eleven
2: The Martian
2: Half a King
2: The Causal Angel
2: The Book of Strange New Things
2: The Memory Garden
2: My Real Children
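The scoring rule above (1 point per venue, with collated lists of more than 3 critics worth up to 2 points, pro-rated by mentions) can be sketched in a few lines. The exact pro-rating formula isn’t spelled out in this post, so this is an assumed linear version; the venue counts in the usage example are illustrative, not the real data.

```python
# Hypothetical sketch of the meta-list scoring rule described above.
def venue_points(mentions, critics, collated):
    """Points a book earns from one venue.

    mentions: how many of that venue's critics listed the book
    critics:  how many critics contributed to that venue's list
    collated: True for multi-critic collations (SF Signal, Tor.com, etc.)
    """
    if not collated or critics <= 3:
        # Single-voice lists are worth a flat 1 point.
        return 1.0 if mentions > 0 else 0.0
    # Assumed pro-rating: a mention by every critic caps at 2 points.
    return 2.0 * mentions / critics

def total_points(book_mentions):
    """book_mentions: list of (mentions, critics, collated) tuples,
    one per venue. Returns the book's meta-list score."""
    return sum(venue_points(m, c, col) for m, c, col in book_mentions)
```

For example, a book mentioned on one single-voice list (1 point) and by 4 of 8 critics on a collated list (2 × 4/8 = 1 point) would score 2.0, which is how fractional totals like The Race’s 2.25 can arise.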
Pretty much what I expected. The SFF world is often very repetitive (nominating the same authors over and over again), so Leckie makes sense at #1. She was so talked about for last year’s award she’s a natural for this year, even if people are less excited about Ancillary Sword. VanderMeer, Mitchell, and Bennett are no surprise near the top.
Addison’s The Goblin Emperor is doing well, particularly when the lists are a little more fan oriented. She does represent a methodological problem: she has 2 points from SF Signal and 2 points from Tor.com, as well as 1 point from Dribble of Ink. That’s concentrated, not broad support. In contrast, Leckie’s 6.5 points are spread out across 6 venues. If I gave 1 point max per venue, Addison would be knocked down to only 3 points. I’ll keep my eye on the math of this, and make adjustments to my counting as necessary. Remember, the goal is to be predictive, not perfect. We can just wait until the Nebula noms come out, and then reassess at that point.
In terms of Addison’s chances: secondary world fantasy is not an easy sell to the Nebula voters. I wouldn’t be shocked to see her on the slate, but I also wouldn’t be surprised if she missed. Given the way that the Nebula slate shapes the Hugo slate, her Hugo chances are closely tied to whether or not she grabs a Nebula nom.
Lev Grossman poses something of a problem. The Magicians trilogy is very well regarded, and books like this (fantasy that crosses over into the real world) have done well in the Nebulas as of late. Still, the last books of trilogies have usually NOT been part of the Hugo/Nebula process; it’ll be interesting to see if that bias continues.
Beukes is something of a surprise with Broken Monsters, a serial-killer novel that crosses over into the supernatural in its last 50 pages; I’m not sure it’s speculative enough to grab an award nomination. Beukes almost made the Hugo slate last year, though, so don’t count this one out.
Some lesser known works, at least to Americans: Lagoon (no U.S. release, so Nebula eligibility is unlikely), All Those Vanished Engines by Paul Park (not very hyped in the U.S.), The Race by Nina Allan (released in the U.S., but not widely known over here). It’ll be interesting to see if these works continue to be part of the conversation.
The snubs: Station Eleven isn’t taking this list by storm. That might reflect a simple timeline problem: the Mandel came out late in the year, so people might only be getting to read it now. The Martian is also way down, but is that because it wasn’t a 2014 book?
I’d still like to add several more lists to this collation to see if we get a better convergence. Locus Magazine will have their “Best of 2014,” and I’m waiting for several Hugo nominated blogs to get their lists out (Book Smugglers, Elitist). Anyone else you would suggest for this collation?
Here’s the data: Best of 2014. This list is on the second tab.
As I’m putting together my “SFF Critics Best of 2014 Meta-List,” I’ve been trying to find lists that are likely to be reflective of the Hugo/Nebula voters. I don’t want to be mired in “old media,” so I thought I’d better find some “Best of 2014” podcasts to include.
The Coode Street Podcast, by Gary K. Wolfe and Jonathan Strahan, has twice been nominated for the “Best Fancast” Hugo Award. Wolfe is a prominent reviewer for Locus Magazine, and Strahan a frequent editor, including for the Best Science Fiction and Fantasy of the Year series. Probably good voices to listen to.
With guest author James Bradley, they recently put up a “Best of 2014” podcast. It’s an hour discussion, and ranges over a large number of important works from 2014. Here’s the list of what they identify as the best of the year:
Wolves, Simon Ings
The Magician’s Land, Lev Grossman
The Bone Clocks, David Mitchell
Clariel, Garth Nix
Beautiful Blood, Lucius Shepard
The Memory Garden, Mary Rickert
Academic Exercises, K.J. Parker
Stone Mattress, Margaret Atwood
Lagoon, Nnedi Okorafor
Half a King, Joe Abercrombie
Bathing the Lion, Jonathan Carroll
Bete, Adam Roberts
The Peripheral, William Gibson
The Girls at the Kingfisher Club, Genevieve Valentine
My Real Children, Jo Walton
The Blood of Angels, Johanna Sinisalo
All Those Vanished Engines, Paul Park
The Book of Strange New Things, Michel Faber
Consumed, David Cronenberg
Annihilation, Jeff VanderMeer
The Girl in the Road, Monica Byrne
Ancillary Sword, Ann Leckie
Echopraxia, Peter Watts
The Causal Angel, Hannu Rajaniemi
Orfeo, Richard Powers
The Three-Body Problem, Cixin Liu
Questionable Practices, Eileen Gunn
Proxima, Stephen Baxter
The Race, Nina Allan
Crashland, Sean Williams
More international than most lists, and this brings up an interesting point: major SF novels are getting published in England that aren’t getting published in the US. Lagoon, for instance, would be in the award mix if it had received a US publication. Without that, though, you’re cutting off too much of your potential audience (and probably aren’t even eligible for the Nebula). Books like Wolves or even Europe in Autumn (which was published here but not really marketed) might be worthy of award consideration, but losing over half their potential audience is going to make a Hugo or Nebula nomination next to impossible.
Coode Street touches on many of the major candidates, and I found their framing of the year in SF quite useful. Coode Street is more interested in SF than in Fantasy, and they don’t discuss some of the fantasy candidates (such as City of Stairs or The Goblin Emperor). By having a large number of lists, these genre imbalances should work themselves out.
I’ll update and post the Meta-List later today.
I’m doing some organizing work here at Chaos Horizon, so let me put up something I’ve been meaning to for a while: blank data sets for the Hugo and Nebula Awards for Best Novel, from the beginnings to the present. These are Excel formatted lists of the Hugo and Nebula winners + nominees, sorted by year and author. It was a pain to put these together, but now that they’re cleanly formatted I wanted to share them with the community.
So long story short, anyone who wants to do their own statistical study of the Hugos and Nebulas is free to use my worksheets. Excel is a powerful tool, and given the relatively small size of the data sets—311 Nebula nominees, 288 Hugo nominees—it isn’t too hard to use. With only a little work—and data entry—you can be generating your own tables and graphs in no time. I’m also somewhat confident Google Docs can work with these, although I never use Google Docs myself.
The guiding principles of Chaos Horizon have always been neutrality and methodological/data transparency. Statistics are at their most meaningful when multiple statisticians are working on the same data sets. There’s a lot of information to be sorted through, and I look forward to seeing what other statisticians will find. If you do a study, drop me an e-mail at email@example.com or link in the comments.
Here’s the Excel File: Blank Hugo and Nebula Data Set. I’ll also perma-link this post under “Resources.”
Well, back to work. In my previous posts about genre, I’ve looked at the basic stat breakdowns of science fiction versus fantasy in the Nebula and Hugo Awards. Today, I want to take a closer look at fantasy and the Nebula Award for Best Novel, 2001-2014, to see if we can find some useful patterns based on fantasy sub-genres.
A couple of preliminary notes: I’m using 2001-2014 as a data range because this isolates recent statistical trends. What happened in the 1960s and 1970s likely has little relevance to the modern Nebula or Hugo award, particularly given how rapidly the awards have changed vis-a-vis genre. Different publishing environment, different review environment, different set of readers, etc.
Second: for this sub-study, let’s work with a simple hypothesis. Proposed: The Nebula and Hugo Awards for Best Novel have been biased against serialized secondary world fantasy. This is an impression I’ve always had, and I want to see whether or not the stats back this up.
Let’s break down the terms: by “serialized” I mean books that are part of a series, i.e. a trilogy like Lord of the Rings or a seven-book sequence like A Song of Ice and Fire. The defining feature of a series is that you can’t/shouldn’t read the books individually; I don’t think anyone would suggest reading Assassin’s Quest before you read the first two volumes in Robin Hobb’s trilogy. This would be different than China Mieville’s Bas-Lag series or Terry Pratchett’s Discworld, where the books share a common world (and even characters at times) but not one over-arching plot. You can read The Scar before Perdido Street Station, or Lois McMaster Bujold’s Paladin of Souls before The Curse of Chalion.
Second is this idea of “secondary world.” That’s a term drawn from Tolkien, and it has come to mean a fantasy world with no explicit narrative connections to our world (the primary world). Here’s a decent online definition. While this is only one of many possible ways to slice fantasy sub-genres, I think it’s a useful one. A book that has connections to our world (Harry Potter, The Chronicles of Narnia, etc.) asks us to make a different leap of imagination than a book that takes place in an entirely fantasy realm. A primary world novel operates against the backdrop of our history, and thus the motivations, educations, philosophies, etc., of the characters are readily intelligible. A secondary world novel interrupts that familiarity, and forces us to take a different kind of cognitive approach. Different authors utilize those concepts in very different ways, and fantasy has ebbed back and forth between those two models over the years.
So, for my initial division, I split the 36 Nebula fantasy nominees (2001-2014) between primary and secondary world novels. The split was remarkably even. Keep in mind, these are my categorizations; there’s no way to do this objectively. I classed a book as a “primary world” novel if it had any connection to our world at all:
That’s a fascinating split, and it shows that my hypothesis might be wrong: there are plenty of secondary world novels, ranging from Martin to Mieville to Le Guin. Even Pratchett sneaks in once! It’d be interesting to know how many total secondary world vs. primary world novels were published from 2001-2014, but that’s a piece of data we don’t have access to (and likely never will). The SFWA voters seem fairly evenly split between liking primary world and secondary world novels. Now, who wins?
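For anyone who wants to reproduce this kind of breakdown, the tally behind the chart is trivial to script. Here's a minimal sketch using a hypothetical handful of labels; my full categorizations for all 36 nominees are in the linked Excel sheet:

```python
from collections import Counter

# A hand-labeled subset of Nebula fantasy nominees (illustrative only --
# these are my categorizations, and the full data set has 36 books)
labels = {
    "American Gods": "primary",
    "Jonathan Strange & Mr Norrell": "primary",
    "Among Others": "primary",
    "Paladin of Souls": "secondary",
    "Powers": "secondary",
    "A Game of Thrones": "secondary",
}

counts = Counter(labels.values())
for world, n in counts.items():
    print(f"{world}: {n} ({n / len(labels):.0%})")
```

The real chart is just this tally scaled up to the full nominee list.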
From a statistical perspective, this is a good and a bad chart. There have only been 4 fantasy winners in the Nebulas from 2001-2014 (low data = unreliable results), but at least the chart is proportional to the nominee chart. This limited sample shows that there isn’t much of a bias for the SFWA writers once they’re actually voting on the slate. Gaiman and Walton are the winners for their primary world novels American Gods and Among Others, with Le Guin and Bujold grabbing secondary world wins for Powers and Paladin of Souls.
Here’s a great moment to stop and comment on how a low data set can yield garbage results: both primary world novels start with “A,” and both secondary world novels start with “P.” Does that have any significance? Absolutely not, but in a small data set, coincidences like that show up. In cases like this, we should just note the pattern, not read too much “meaning” into it, and move on.
Still, my hypothesis hasn’t been borne out: you seem just as likely to get nominated for or win a Nebula with a secondary world novel as with a primary world novel. You do statistics just as much to disprove things as to prove them, so we should count this as a win.
Now, onto the serialized part. I further broke the 36 fantasy nominees down by additional questions. For primary world novels, I asked: is this book set in the present (post-1960s) or the past? I was interested in whether a book like Strange and Norrell was kicking off a trend. Primary world novels tend not to be heavily serialized; only Kowal’s books seem to fall into that category, so there wasn’t any interesting data to be found there.
For my secondary world novels, I broke them down by the following question: were they stand-alone or part of a series? This is obviously somewhat difficult; someone might argue that Paladin of Souls is a sequel to Curse of Chalion; I consider it a stand-alone. Take that into account when you look at the chart:
That’s a pretty even divide across sub-genres. Now, the winners (keep in mind there are only 4):
Those floating 0%—no wins for serialized secondary world fantasy or historical primary world novels—reveal a glimmer of bias. Serialized fantasy has only ever won once, and that’s only if you consider The Claw of the Conciliator fantasy. We’re getting into the upcoming “Sequel” study here, but I think we can conclude the following: the Nebula voters will periodically nominate secondary world fantasy series, but will rarely, if ever, give those books the award. Something that shook out here is the equal dismissal of primary world historical fantasies: that appears to be a sub-genre the SFWA voters are not interested in recognizing.
I think we should also acknowledge that the two wins from 2001-2014 for secondary world fantasy were by authors already extremely well known to the Nebula audience: Le Guin and Bujold. Was it their general level of fame that grabbed them those wins (i.e. overcoming a bias against secondary world novels)? Or does this represent a loosening of the Nebulas?
Unfortunately, we can’t reach any grand conclusions. In terms of making it into the slate, most primary and secondary world fantasy seem to have an equal chance. Some sub-genres drop out when you zoom in, but we’ve only had 4 fantasy winners and can’t overcommit to those results. These patterns make sense, though: the Nebula is friendly to stand-alone secondary world novels and to primary world novels set in the present.
Tomorrow, I’ll put up the relevant charts on these issues for the Hugo awards, 2001-2014. As always, here’s my data if you want to double-check: Nebula Sub-Genre Data.
Yesterday, we looked at the Nebula slate; today, we’ll look at the Nebula winners. I show seven fantasy novels (out of 50 winners total; there was a tie in 1967) as having won the Nebula Award:
1982: Claw of the Conciliator, Gene Wolfe
1988: The Falling Woman, Pat Murphy
1991: Tehanu, Ursula K. Le Guin
2003: American Gods, Neil Gaiman
2005: Paladin of Souls, Lois McMaster Bujold
2009: Powers, Ursula K. Le Guin
2012: Among Others, Jo Walton
Interestingly, the 1980s were better for winning than the 1990s (we’ll see that also reflected in the Hugo in the upcoming days), and things have picked up a great deal in the last 15 years for fantasy. This is a pretty broad slice of fantasy: we have secondary world novels with Bujold and Le Guin, contemporary fantasy with Walton and Gaiman, and Wolfe’s nearly unclassifiable Dying Earth style book. Here’s the data and charts:
The chart is pretty zig-zaggy because we’re dealing with such small numbers (10 per decade), although you do see a gradual increase over time in the direction of fantasy wins. Still, the “win” chart is nowhere near as dramatic as the “nominee” chart, showing that it’s easier to get nominated as a fantasy novel than to win as a fantasy novel.
We can conclude that fantasy novels tend to underperform once they reach the slate: since 1980, fantasy novels have made up 32% of the slate but only account for 20% of the wins. That’s a persistent gap against fantasy novels winning, something I need to take into account for my future predictions.
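Whether a gap like 32%-of-slate vs. 20%-of-wins could plausibly arise by chance can be checked with a one-sided binomial test. Here's a minimal pure-Python sketch; the counts (roughly 35 winners since 1980, 7 of them fantasy) are my approximate reconstruction from the post, not exact figures:

```python
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Null hypothesis: fantasy wins at the same rate it appears on the slate (32%).
# Observed: about 7 fantasy wins out of ~35 winners since 1980 (my rough counts).
p_value = binom_cdf(7, 35, 0.32)
print(f"one-sided p-value: {p_value:.3f}")
```

On these assumed counts the test lands near conventional significance thresholds; with only a few dozen winners, shifting a single count moves the p-value considerably, so the gap is best treated as suggestive rather than decisive.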
In an odd way, the more fantasy novels get nominated, the harder it can be for a fantasy novel to win, as the fantasy vote ends up getting split across the slate. 2013 is a perfect example of this: one SF novel faced off against 5 fantasy novels. 2312 ended up winning, because I imagine all the “the Nebula should go to a SF novel” SFWA voters voted for Robinson, and the fantasy votes were spread out across the other 5. If we’re considering genre alone, fantasy books are at a disadvantage when it comes to winning. Of course, genre alone does not determine the winner, as many other factors—familiarity, reception, popularity, demographics, etc.—also come into play.
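The vote-splitting mechanism is easy to see in a toy plurality election. The ballot numbers below are invented for illustration, not actual Nebula totals (which aren't public):

```python
from collections import Counter

# Hypothetical ballots: 40% of voters prefer the lone SF nominee, while
# 60% prefer fantasy -- but that 60% is split across five fantasy nominees.
ballots = (
    ["2312"] * 40
    + ["Fantasy A"] * 14
    + ["Fantasy B"] * 13
    + ["Fantasy C"] * 12
    + ["Fantasy D"] * 11
    + ["Fantasy E"] * 10
)

tally = Counter(ballots)
winner, votes = tally.most_common(1)[0]
print(winner, votes)  # the SF novel wins on a 40% plurality
```

Under plurality voting, a minority bloc that concentrates its votes beats a majority that splits them, which is exactly the 2013 scenario described above.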
In a statistical study like this, you have to think about what the “baseline” might be, i.e. what the stats would be like without bias. Is the Nebula an award moving toward a 50/50 split between fantasy and science fiction? Why should/would 50/50 be the baseline? Isn’t fantasy more popular than science fiction, at least in terms of readers in 2014? What about critical prestige? What about the nebulous and nearly impossible to define idea of “tradition”? How about the bias towards well-known authors? How about potential biases regarding gender? What about the bias against books in a series or sequels?
All of these factors will shape the eventual fantasy/science fiction split the Nebula arrives at, and all of them change over time. Trying to cross-correlate all those variables before we have a basic understanding is only going to result in mass confusion. As Chaos Horizon slowly builds up its data sets, the best we can do is think about the statistical moment we’re at right now, as predicting even the next 5 years is very difficult. So, to sum up the situation for the Nebula:
1. The Nebula slate breaks into three eras: a 15 year period (1966-1980) where fantasy was largely excluded, a 30 year period (1980-2010) where fantasy was around 25%-30% of the slate, and a more recent era (2010-2014) where fantasy has overtaken SF on the slate. Are the last 5 years a statistical aberration or something that is likely to continue?
2. The Nebula winners have been more consistent since 1980, accounting for around 20% of the wins, with a general increase in % of winners over time. Be aware that these conclusions are shakier because the numbers are smaller. Nonetheless, fantasy novels have underperformed on the slate, winning at a smaller proportion than their SF peers.
Tomorrow, we’ll look at how the Hugo Award nominees have shaped up! Any questions so far?
In this post, we’ll look at how genre impacts the Nebula Award for Best Novel from the start of that award (1966) up to the present day (2014). Let’s jump right into the data:
The streams have crossed! Lame Ghostbusters joke aside, there’s a lot of information to sort through here. Obviously, that cross in the 2010s is going to jump out the most, but let’s make some other observations:
1. The Nebula embraced fantasy nominees fairly early in its history, starting primarily in 1981. This was a surprise to me; I thought the change would have come later. The Nebula has been awarded for almost 50 years, and for only the first 15 of those years was this exclusively an SF award.
2. The Nebula was fairly consistent through the 1980s, 1990s, and 2000s, nominating around 25%-30% fantasy novels for that 30 year period, or roughly 2 fantasy novels per year.
3. The Nebula has changed drastically in the last 5 years. While the 2010s aren’t over, we’re more than halfway through, and already more fantasy novels have been nominated this decade than in the 2000s. Even if you believe the numbers are a little skewed, a retreat back to that 30% number is statistically unlikely.
1960s: A straightforward SF decade. The Nebula was still finding its way, and we have a very erratic # of nominees per year: 1966 saw 12 nominees, while 1967 had only 3. The only “Other” book in this decade was James Blish’s Black Easter, an outstanding horror-themed book about demonic summoning. This is best read with its companion volume The Day After Judgment, usually collected together as Devil’s Day. Blish, of course, was already well-known to the SF audience, and this pattern—genre-borderline books getting nominated if they’re by well-known authors—will continue for the next several decades.
1970s: Plenty of “Other” books from this decade. In 1976, the Nebula nominated an overwhelming 18 novels for the award (check out the sfadb for the full list). With that massive list, some unusual choices creep in: Italo Calvino’s Invisible Cities and E.L. Doctorow’s Ragtime. Throw in 1974’s nomination for Thomas Pynchon’s Gravity’s Rainbow and we have our first inklings of the Nebula’s sympathy for literary fiction. I don’t know if you could classify any of those novels as science fiction or fantasy, although I’ll listen if anyone wants to try.
There are some other hard-to-classify novels from the 1970s. I never know what to do with R.A. Lafferty, and he received a 1972 nomination for The Devil is Dead. Along this horror angle, Robert Silverberg grabbed a 1973 nomination for The Book of Skulls. I know people wouldn’t blink an eye if I classed this as SF, but it, at least in my opinion, is basically a realistic novel with a few horror elements. As a pure aside, this is part of Silverberg’s great “death” trilogy alongside Dying Inside and “Born with the Dead.” In my opinion, these three texts are Silverberg’s greatest achievement as an author, and if you can handle the gloom factor, they’re excellent reading.
Larry Niven and Jerry Pournelle were nominated for Inferno in 1977, a variation on Dante’s Inferno. You can see that in the 1970s, the best way to get a Nebula if you aren’t writing SF is to write something horror themed, particularly if it has “devil” or “death” in the title.
Lastly, we see our first fantasy books pop up in this decade. Poul Anderson received a 1976 nomination for A Midsummer Tempest, a Shakespeare-themed, magic-infused alternative-history book. Anderson, though, was already an SF star, and his book was part of that strange 18-nominee year. Richard Lupoff’s 1978 nomination for Sword of the Demon was a Japanese-themed fantasy about demon-killing, and it fits the pattern of needing a horror theme in the title to make it into the Nebulas.
So, all told, the 1970s show a definite loosening of genre-boundaries in the Nebula, although this seems to be more inflected in the direction of horror or literary fiction than fantasy.
1980s: This is where things get interesting. Beginning in 1981, fantasy arrives in a major way: Robert Stallman’s The Orphan, and, more significantly, Gene Wolfe’s Shadow of the Torturer.
Wolfe’s four volume The Book of the New Sun is the critical series for this decade. Each volume received a nomination, with the second volume (Claw of the Conciliator) winning the Nebula. New Sun is a difficult and hard to classify series. Drawing on elements of Jack Vance’s Dying Earth, it hovers on the line between fantasy and science fiction, a fact that I think helped it get nominated. Taking place in the far future, it initially seems to be pure fantasy, only to have some technological elements revealed in the later volumes. The Locus Magazine reviewers were equally confused: volumes 1-3 were voted as fantasy, and volume 4 made it as science fiction. In the 2012 Locus Century poll, it makes the list of both “20th Century Science Fiction Novel” and “20th Century Fantasy Novel.” Maximum confusion for everyone! I ultimately classified the four volumes just as the Locus voters saw them: #1-#3 as fantasy, #4 as science fiction. Make of that what you will.
Wolfe was a driving wedge, though, and after 1981 more and more clearly fantasy books get nominated: John Crowley’s Little, Big, Jack Vance’s Lyonesse, Orson Scott Card’s Red Prophet, as well as Wolfe’s own Soldier of the Mist. By the time the decade is over, 14 fantasy novels have been nominated, and Pat Murphy wins in 1988 for her Mayan-influenced The Falling Woman.
By this time, the Nebula has loosened its genre-policing. While some of these fantasy nominees were already well known for their SF (Wolfe, Vance, Card), others were not, and we see fantasy novels by lesser known authors pop up on the list. We aren’t seeing, though, fantasy novels by bestseller writers like Terry Brooks, Stephen Donaldson, David Eddings, Mercedes Lackey, Marion Zimmer Bradley, etc. The lack of a nomination for The Mists of Avalon might be the most surprising omission.
1990s: The 1990s are filled with fantasy nominations. To mention some of the bigger ones: Elizabeth Scarborough’s The Healer’s War, Ursula K. Le Guin’s Tehanu, Patricia McKillip’s Winter Rose, and George R.R. Martin’s A Game of Thrones. There are also a number of nominations for lesser known authors, showing a real openness in the Nebula to different types of fantasy literature. Notice there aren’t a lot of nominations for what we might think of as traditional “epic” fantasy: secondary world, part of a multi-volume series, etc. I’ll be taking a closer look at that in a few posts.
2000s: Two more nominations for George R.R. Martin, as well as multiple nominations for Nalo Hopkinson and Lois McMaster Bujold. Even someone like Terry Pratchett is able to get into the mix, scoring a 2006 nomination for Going Postal. We have plenty of lesser known authors grabbing nominations. For instance, China Mieville is nominated for Perdido Street Station. While it’s hard to remember, Mieville was largely unknown at the time: grabbing a Nebula nomination for a fantasy debut marks a major change.
We also have a broad range of fantasy novels nominated this decade, from more contemporary fantasy like American Gods to 19th century fantasy like Jonathan Strange & Mr Norrell to secondary world books like the Bujold, Martin, or Pratchett.
It’ll be a few more years, though, until fantasy takes over the Nebulas. Fantasy is still stuck around the 30% mark . . .
2010s: And that 30% jumps to 60% for this decade. We’ve seen an explosion of fantasy nominations in the last five years: 2010 had 4 fantasy nominations and only 2 SF nominations, 2012 was the same, and 2013 saw one lone SF novel face off against 5 fantasy contenders. Why the rapid acceleration? I, quite frankly, have no idea. The fantasy novels being nominated now come from all versions of fantasy: contemporary (Gaiman’s The Ocean at the End of the Lane, Walton’s Among Others), historical (Griffith’s Hild and Kowal’s Shades of Milk and Honey), experimental (VanderMeer’s Finch), literary (Wecker’s The Golem and the Jinni), and secondary world (the multiple nominations for Jemisin, Ahmed’s Throne of the Crescent Moon).
So, to sum up: we saw a slow start for fantasy in the 1960s and 1970s, with horror-themed books breaking the genre divide. Beginning in 1981, fantasy leapt into the Nebulas, occupying around 25%-30% of the award. That held steady until 2010, when fantasy leapt into the lead. All genres of fantasy now seem welcome in the Nebulas, and it’s going to be fascinating to see what happens going forward.
If you want to look at the Excel sheet with the genre classifications, here it is: Nebula Genre Study.