Why So Early?
One of the more common critiques—that's probably too strong a word, let's use "comments," because no one is trying to be hostile—about Chaos Horizon is that it's too early to start thinking about the 2015 Hugo and Nebula awards. So why am I predicting Hugo and Nebula slates so early?
As a reader, I’m interested in the Hugo and Nebula awards because they allow me to keep track of the trends—and controversies—going on in the SFF world. Over the past 4 or 5 years, I’ve tried to read each of the Hugo and Nebula nominees, but I noticed I was falling farther and farther behind. When the Nebula nominees would come out in late February, I’d find myself having to buy 4 or 5 books, and then having to buy another 2 or 3 books when the Hugo slate got announced. By then, I’d be so far behind I wouldn’t have time to read all of the nominees before awards season. Because of this, I ended up missing out on the conversations and arguments that go along with choosing the Hugos and Nebulas.
Last year, I decided to get ahead of the curve, and read the major SFF books as they came out. I went looking for resources on potential Hugo and Nebula nominees, and there wasn’t much out there. Many SFF review sites are very enthusiastic about the genre (as they should be), and end up recommending lots and lots of books. I don’t have the time (or money) to read 30-40 new SFF novels a year; I need to contain my SFF reading to about one book a month, both for my pocketbook and sanity.
Thus Chaos Horizon. By looking for past trends in the Nebulas and Hugos, I figured I could come up with the most likely nominees myself. That way, I'll save myself a little bit of time and money by getting a jump on the award season. I'm not going to be 100% accurate—that's an impossibility—but if I can predict (and read) a good chunk of the eventual nominees by the end of the year, I'll only have to buy a few books when the slates come out. I also think readers need time to process all these books. It's not easy to zip through The Bone Clocks and Echopraxia, so if I'm going to read them, I want to read them over Christmas and other vacations, when I have some actual time to dedicate to them. Better to have a good list by October than to have to wait until February.
Is there a downside to thinking about the awards early? Some could argue that this is going to slight books that come out later in the year, but I’m not so sure. Won’t drawing attention to contenders as they come out leave plenty of space for contenders from the end of the year? Anyways, most Hugo noms are published between May and October, so that prime season is almost over. Another objection could be that predicting early ends up utilizing things like reputation, marketing, and past awards history, rather than the actual content of the novel. I think that objection is 100% true—but that’s also what nets award nominations. The Hugos and Nebulas are stuffed with repeat nominees, and that statistical consistency is what makes Chaos Horizon possible.
So tl;dr: I started Chaos Horizon because I don’t have enough money or time to buy and read all the Nebula/Hugo nominated books when the slates are announced. I wanted to get a start on selecting and reading these books earlier, and thus the site. Questions? Objections?
Jo Walton’s My Real Children Review Round-Up
Jo Walton’s previous novel, Among Others, was something of a surprise winner of the 2012 Hugo and Nebula awards. More of a meditation on growing up as an isolated and lonely SFF fan than a “traditional” speculative work, it rode a wave of good will to sweep the two major awards. Like many other recent Hugo and Nebula winners, it’s a book that launched a thousand discussions: is Among Others science-fictiony enough to win these awards? Where are the limits of genre? What kinds of books should these awards honor? Should the Nebula and the Hugo go to the same book?
Chaos Horizon is not in the business of policing genre, so we'll have to put those fascinating and maddening questions aside. What Chaos Horizon does is use past Hugo and Nebula data to predict future winners and nominees. Since Walton was a double winner in 2012, it's likely that My Real Children will receive strong consideration in 2015. The Publisher's Weekly review (by Lev Grossman) nails this book when it calls it a "Schrödinger's cat" kind of novel, as it gives us two alternative timelines for an aging woman looking back at her past. We don't know which past is real, and we wind up with a quiet book that draws its tone from literary fiction, meditating on the branching possibilities of life. Taken in conjunction with Among Others, My Real Children shows Walton's willingness to go beyond the boundaries of genre and draw in material normally considered too "literary" or "slow" for more traditional SFF works. Some readers will value that experimentation; others will be turned off by it.
Among Others offered a strong nostalgia element, taking fans back to their own youths of reading and obsessing over SFF novels. My Real Children doesn't provide that same thrill, so it's likely to be a little less beloved than Walton's last novel. The Nebula is more receptive to books that push the limits of the SFF/literary borderline, and it only takes a small number of passionate Walton fans to grab a nomination. I've currently got her at #4 on my Nebula Watchlist, and at #6 on my Hugo Watchlist, although those might be too high. It'll be interesting to see if she shows up on the yearly "Best of Science Fiction and Fantasy" lists; then we'll have a better sense of whether this is being embraced as a "genre" novel.
On to the novel:
Book published May 20, 2014.
About the Book:
Jo Walton’s web page
Jo Walton’s blog
Jo Walton’s posts on Tor.com
Amazon.com page
Goodreads page
Publisher’s page (Macmillan)
Mainstream Reviews:
Publisher’s Weekly
Kirkus Reviews
NPR
The Guardian
New York Times (brief review, scroll down the page)
A.V. Club (B)
WordPress Blog Reviews:
Necromancy Never Pays
Nashville Book Worm
1330v (4.5 out of 5)
Quill and Quire
Bibliotropic (5 out of 5)
Reading the End
Genre-Bending
Brainfluff (10 out of 10)
Sociologist Novelist
Robin’s Books
Bookworm Blues
Bureau 42 (34 out of 42)
Since I'm doing this review Round-Up in September, some four months after the book was published in May, I pulled a good collection of reviews, but this list is by no means comprehensive. Reviews for My Real Children were uniformly positive, often calling the book "powerful" and "emotional" while noting its experimental nature. Judging by how this book has been received on WordPress, it does appear to be a very serious candidate for the 2015 Nebula: a book that makes an emotional connection with readers is much stronger than a book that merely entertains readers, at least for these awards. Interestingly, most reviewers didn't score the book numerically, an indication they were approaching it more as literary fiction than science fiction.
Exactly what shakes out with My Real Children is going to be one of the more interesting 2015 Nebula and Hugo questions. You’d expect the 2012 winner to easily coast into nominations for these awards with her next book, but will that happen, and what does this say about how readers are accepting literary science fiction? Or will Walton’s sterling reputation, built on such clearly fantasy books as Tooth and Claw, be enough to overcome these genre questions?
The Hugo Award and Publication Dates, Part 4: Conclusion and Discussion
Over the past several days, Chaos Horizon has been looking at the correlation between US publication dates and the frequency of being nominated for or winning a Hugo Award for Best Novel, 2001-2014. Today, we’ll wrap up that report and open the floor for discussion and questions. Here are the previous posts (with charts and data!): Part 1, Part 2, Part 3.
Based on the previous posts, I believe the conclusion we can reach is simple: there is a definite “publication window” that extends from May to October. About 75% of Hugo nominees come from this window, as do 85% of the winners. May and September were the best Hugo-winning months, perhaps correlating to the start of the Summer and Christmas book-buying seasons.
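To make the arithmetic concrete, here's a minimal Python sketch of the window calculation. Note that the in-window counts below (51 of 68 nominees, 12 of 14 winners) are back-calculated from the percentages quoted above, not read directly from the spreadsheet:

```python
# Back-of-the-envelope check of the May-October "publication window."
# In-window counts are inferred from the stated percentages, not
# copied from the raw Excel data.
nominees_total, winners_total = 68, 14
nominees_in_window = 51  # assumption: ~75% of 68 nominees
winners_in_window = 12   # assumption: ~85% of 14 winners

print(f"Nominees in window: {nominees_in_window / nominees_total:.0%}")  # 75%
print(f"Winners in window:  {winners_in_window / winners_total:.0%}")    # 86%
```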
Further questions:
1. How does this "window" correlate with the number of SFF books published per month? That's not an easy statistic to find, although we can make a rough estimate based on Locus Magazine's list of SFF books per month. I trust LocusMag—they've been making this list for a long time, so their methodology is likely consistent—but this estimate is going to be very rough. We should only pay attention to the trends in this chart, not precise numbers:
This is what we might expect: there is a definite spike in books published right before the Christmas book-buying season, a drop-off in December and January, and a slight spike during the Summer book-buying season. Since more books are published in May, September, and October, it should come as no surprise that more Hugo nominations and winners come from that time period.
From a publisher's perspective, it might be that the Summer season is being neglected—it looks like everyone wants to publish in September and October. If I were an author, I might prefer to be published in May: there's a softer market (fewer titles to compete with), and maybe more of a chance for publicity and of being read.
2. Are we looking at a self-fulfilling prophecy? Do publishers believe that May-October are the best months for publishing potential Hugo books? In other words, do publishers hold their Hugo books until this window, thus biasing the stats as a result? Would publishers be better off trying other months, in an attempt to break through to an audience that needs books to read?
3. Is the internet changing the importance of publication dates? If so, how? Do e-books provide more immediate access than print books, and would that alter the publication window? Could publishers extend the window by dropping e-book prices later in the year?
4. How much stock can we place in this study, given the relatively small amount of data: 68 nominees and 14 winners? Is this too small of a data set to draw reliable conclusions from?
5. Is it fair to only think about US publication dates? How would UK (or international) publication dates factor in?
Lastly, are there any concerns or issues you’d like to raise about this study? Statistics can be incredibly misleading, as they depend enormously both on the data set and the statistical model being set up by the analyst (in this case, me). Chaos Horizon is committed to transparency in all reports. How else could the study be set up? How could we provide a more complete picture of publication dates and the Hugo Award?
The Hugo Award and Publication Dates, Part 3: Methodology and Data
This methodology post is unlikely to be of much interest to the casual reader, but I'm recording this information in case anyone wants to double-check the data or call into question the kind of data I used. It is very easy to mislead the public using statistics, and Chaos Horizon is trying to avoid that by providing maximum transparency on all studies and reports. If you have questions, ask in the comments or e-mail me at chaoshorizon42@gmail.com.
Date Range: Why 2001-2014? I used this date range because 2001 marks a substantial shift in the Hugo awards. Prior to 2001, the Hugo award for Best Novel was basically a SF award, with all prior winners having been Science Fiction novels. J.K. Rowling's win for Harry Potter and the Goblet of Fire in 2001 opened up the Hugos to all sorts of different genres and types of books, and can be thought of as starting the "modern" era of the award. There is also undeniable convenience to starting studies with the new millennium. It's also hard to believe that the book market back in something like 1994 was the same as now: no internet, no e-books, vastly different audience and buying habits. The farther we go back in time, the more we cloud the statistics.
September 2014 is when the study was conducted, thus marking the upper limit of the date range.
Limitations: I limited myself to US publication dates in this study, although the Hugo encompasses American, British, and international authors and voters. No novel in translation was nominated for the Hugo Award from 2001-2014, so the exclusion of international publication dates seems justified.
British publication dates were trickier, and I initially explored them in some detail. That data is present on the third page of the Excel spreadsheet. British dates were not as readily accessible, and even when I could find them I had no real way of double-checking them. Furthermore, some texts were published simultaneously in the UK; in the case of British authors, some texts were published earlier; and in the case of American authors, some texts were published later. Those discrepancies introduced a great deal of uncertainty into the project, as it wasn't clear which date should be used. British publication dates likely mattered most in the years the WorldCon was held in the UK, and less when the WorldCon was in the US. If anyone can think of a clever way to find and handle British publication dates, I'm all ears.
Sources: To find the publication dates, I utilized three main sources. First, I used the Internet Speculative Fiction Database, found at www.isfdb.org, to come up with an initial publication date. It's probably the most in-depth resource for finding information about different SFF book editions; I used the first available date for US print editions in this study, excluding limited-availability special editions.
Second: I cross-checked that isfdb date with Amazon. While we can debate some of Amazon’s sale practices, there is no doubt about the wide variety of book-related information their site offers. Since they are a professional book-seller, they have a huge stake in providing accurate data. Again, I tried to find the earliest published print edition, and, whenever possible, to match the ISBN of that edition against the isfdb.org info.
Interestingly—and frustratingly—the isfdb.org and amazon.com information often disagreed. Of the 68 dates provided, there were discrepancies in 20 of them. However, these were often very minor: isfdb.org reporting a March publication date, and amazon.com reporting a late February date. In general, amazon.com usually reported earlier publication dates by a few weeks.
Third: If the isfdb.org date and the amazon.com date disagreed, I went to the Barnes and Noble website to resolve the issue. Like amazon.com, bn.com provides a wealth of information, and I trust their database because that's how they make their money. In almost all instances, the amazon.com date agreed with the bn.com date, so I went with the amazon/bn publication date. All disagreements are marked in the Excel spreadsheet.
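If anyone wants to replicate the cross-checking, here's a minimal sketch of the tie-breaking rule just described. The function name and the example record are hypothetical; the logic is simply "trust a date two sources agree on, and let bn.com break isfdb/amazon ties":

```python
from datetime import date
from typing import Optional

def resolve_pub_date(isfdb: Optional[date], amazon: Optional[date],
                     bn: Optional[date]) -> Optional[date]:
    """Pick a publication date from three sources, per the rule above."""
    if isfdb == amazon:        # the two main sources agree
        return isfdb
    if bn in (isfdb, amazon):  # bn.com casts the deciding vote
        return bn
    return None                # three-way disagreement: flag for manual review

# Hypothetical row, not from the actual spreadsheet: isfdb says early
# March, amazon says late February, and bn.com sides with amazon.
print(resolve_pub_date(date(2013, 3, 5), date(2013, 2, 26), date(2013, 2, 26)))
# -> 2013-02-26
```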
Any discrepancies were only a matter of weeks (pushing a book from June to July), and are unlikely to cause major changes in the analysis. Still, you might want to avoid placing too much stock in any individual month; I believe the ranges of the seasons are more reliable.
Other possible sources: I tried out several other possible sources for publication data before discarding them. Both WorldCat and the Library of Congress, two major sources for cataloging books, only provided the publication month, and I wanted as precise information as possible.
Notes: Four nominated texts were excluded from the study. Robert Jordan and Brandon Sanderson’s The Wheel of Time is a series of 14 novels published over decades. Connie Willis won for Blackout/All Clear, two novels published during the same year. I could have used both dates, but I decided to go with neither to keep the data clear. Two books, both from the 2005 Hugos held in Glasgow, did not receive American releases prior to their year of nomination; those were River of Gods by Ian McDonald and The Algebraist by Iain M. Banks.
Weakness of the Study: With only 68 pieces of data, we're falling far short of a substantial data set. As a result, small changes in the data—an individual author publishing in October rather than September—may affect the final results unduly. Since each individual novel accounts for around 1.5% of the total data, take everything with a grain of salt. While I feel it likely the broader conclusions are accurate, the specifics of individual months, particularly for the winners, probably need to be de-emphasized. We shouldn't place all that much stock in the fact that Jo Walton published Among Others in January rather than February, for instance.
While I could expand the data back another decade, and likely pick up 50+ more dates, I’ve decided not to go that route. I feel that the publishing market in the 1990s was substantially different than the publishing market in the 2000s, and that this additional data would not contribute much to the study. If someone else feels otherwise, and would like to chart that data, feel free. Send me a link if you do the analysis.
Here’s a link to the Excel spreadsheet that gathers all the data: Hugo Dates Study.
I think that sums up methodology questions. Let me know if you need any other information.
The Hugo Award and Publication Dates, Part 2
In Part 1 of this Chaos Horizon report, we looked at the relationship between US publication dates and Hugo Best Novel nominations from 2001-2014. Now, we can turn our eyes to actually winning the Hugo Best Novel for that same date range. Here's a breakdown of winners by month for 2001-2014:
A couple of notes: I didn't include the 2011 Hugo winner, Connie Willis's Blackout/All Clear, because it was published as two separate volumes, one in February and one in October. I felt that this dual publication was an exceptional case; including it would muddy the analysis. That still leaves us with 14 winners, because Paolo Bacigalupi and China Mieville tied in 2010 for The Windup Girl and The City & The City.
It appears that May and September are far and away the best months for Hugo winners, at least for the 2001-2014 time period. With only 14 winners, we shouldn’t put a huge amount of stock in this chart, but May and September make a certain amount of sense. May is the beginning of the summer book buying season, and September the beginning of the Fall/Christmas book buying season: having your book published early in those cycles might maximize exposure and sales. The more people know about your book, the better a chance to win.
So—going back to Part 1—even though May (10), June (9), July (9), and October (8) yielded the most nominations, only May yielded a good number of winners. In terms of ratio, September was by far the best, with 4 out of the 6 September nominees going on to win.
Let’s look at the amount of winners per season, 2001-2014:
Except for the dismal winter, that's a pretty even bar graph. Summer does dip a little, but that dip is exaggerated by the small number of winners (14). Essentially, I'd estimate a novel has roughly the same chance of winning from the Spring, Summer, or Fall.
The window is still in full effect, though. Almost all our winners come from that May-October period, 2001-2014:
The only two novels to win from outside that window were by authors with already-established reputations: Jo Walton's Among Others, published January 2011, and Robert Charles Wilson's Spin, published April 2005.
So, tl;dr: if I were publishing a novel and wanting to win the Hugo, I’d request a release date of either May or September.
Tomorrow, we’ve got the boring Part 3: Methodology and Data.
The Hugo Award and Publication Dates: A Chaos Horizon Report
As part of its continued statistical analysis of the Hugo and Nebula Awards, Chaos Horizon is happy to present its first ever report. Today, we’ll be looking at the impact of publication date on the chances of being nominated for and winning the Hugo Award for Best Novel. Since this is going to be detailed, I’ll break the report down into four posts:
1. Analysis of publication date and the chances of being nominated for the Hugo, 2001-2014 (Monday 9/22)
2. Analysis of publication date and the chances of winning the Hugo, 2001-2014 (Tuesday 9/23)
3. Methodology (the boring part!) (Wednesday 9/24)
4. Conclusions and discussion (Thursday 9/25)
Introduction: The Hugo Award for Best Novel is a Science Fiction and Fantasy (SFF) award given annually at the World Science Fiction Convention (WorldCon). Voted on by the attendees of WorldCon, the Hugo has been awarded since 1953. For more details, see the Hugo awards website.
In general, all SFF books published in the previous year are eligible for the Hugo award. Nominations are due early the following year, often by March, and voting takes place at the actual WorldCon, usually in August, although the exact timeline can vary slightly. For this report, we’ll be considering initial US publication dates—the date a book is first released in print—and the chances of getting nominated for or winning the Hugo award.
Today’s research question is simple: are some publication dates better than others for Hugo nominations and/or wins? Some believe that January is too early to be published, as voters will forget about the novel when nomination season rolls around. Likewise, December may be too late, as readers won’t have enough time to read and process the book before nominations are due. Does a statistical analysis confirm these expectations?
Findings: When it comes to receiving a Hugo nomination, Chaos Horizon’s statistical analysis suggests that there is a “publication window” that extends from May to October. Let’s take a look at the data, which I generated by looking up the initial print US publication dates for 68 nominated novels between 2001 and 2014:
As you can see, there is a definite peak during the middle of the year. May (10 nominations), July (9 nominations), and June (9 nominations) were the best months, with October (8 nominations) also providing a solid option. November (2 nominations) and December (a sad 0 nominations) were the worst months. February was surprising, with 6 nominations, showing that all months—except December—have some promise.
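For transparency's sake, the tally behind this chart is nothing fancier than counting months. A minimal Python sketch, with invented placeholder dates standing in for the 68 real ones from the spreadsheet:

```python
from collections import Counter
from datetime import date

# Placeholder dates only; the real inputs are the first US print
# publication dates for the 68 nominees, 2001-2014.
pub_dates = [date(2010, 5, 4), date(2007, 6, 19),
             date(2013, 10, 1), date(2011, 5, 17)]

noms_by_month = Counter(d.strftime("%B") for d in pub_dates)
print(noms_by_month.most_common())
# e.g. [('May', 2), ('June', 1), ('October', 1)]
```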
When we break this down by season, the trend is even clearer.
We have a nice bell-shaped curve, with nominations peaking in the summer months and falling off on either side. I think the conclusion is pretty obvious: Summer is the best time to be published if you want a Hugo nom, with late Spring and early Fall being your other viable alternatives.
The window for maximum Hugo nomination chances extends from May to October, and the difference is pretty stark:
Nearly 75% of the nominees come from that May-October window, and only roughly 25% come from outside of it. While there may be other reasons to publish early in the year—a less competitive marketplace, for instance—when it comes to getting nominated for the Hugo, your best chances lie in publishing between May and October. Still, that remaining 25% is nothing to sneeze at. Life exists outside the "publication window," and SFF readers are capable of finding good novels whenever they are published.
Tomorrow, we’ll consider what effect publication date has on winning the Hugo.
2015 Nebula Watchlist
As part of Chaos Horizon’s continued look at the 2015 Nebula Award for Best Novel, here’s my 2015 Nebula Watchlist:
Disclaimer: Chaos Horizon tries to determine which novels are most likely to be nominated based on data-mining past awards data, not who should be nominated for having the “best” novel in a more general sense. Take the list for what it is intended to be, as a starting point for debate of the 2015 Nebula.
In general, the Nebula award is harder to predict than the Hugo award because we have less data. The Hugo award releases a list of the top 15 authors nominated that year, complete with the number of votes. The Nebula only releases the final slate, with no actual information on how many votes each author got. This makes it harder to find out who was close in previous years, giving us far less info to make a good prediction with.
1. Ancillary Sword, Ann Leckie (2014 Nebula winner, 2014 Hugo winner)
2. Annihilation, Jeff VanderMeer (2010 Nebula nom, first book of a three-book series, all released this year, which received a lot of attention and buzz)
3. The Bone Clocks, David Mitchell (2005 Nebula nom for the well-liked Cloud Atlas, huge marketing push, made NYT bestseller lists)
4. My Real Children, Jo Walton (2012 Nebula winner, 2012 Hugo winner, less SFF than her other works, although the Nebula cares less about that than the Hugo)
5. The Martian, Andy Weir (biggest debut SF novel of 2014, although eligibility issues—the book was originally self-published in 2012—might prevent a nomination)
6. Coming Home, Jack McDevitt (11 prior Nebula noms for best novel (!), but no 2013 or 2014 nom; still, you can’t count McDevitt out)
7. The Mirror Empire, Kameron Hurley (2012 Nebula nom, start of a well-received new series)
8. Valour and Vanity, Mary Robinette Kowal (2011 Nebula nom, 2013 Nebula nom for prior books in this series)
9. Yesterday’s Kin, Nancy Kress (5 prior Nebula wins, including 2013 Nebula novella; 2 prior Nebula best novel noms)
10. The Girl in the Road, Monica Byrne (high concept debut novel, good buzz)
11. Strange Bodies, Marcel Theroux (won the 2014 Campbell award, one of the few times Ancillary Justice got beat; maybe that counts for something?)
12. Literary Fiction interlopers: A large number of books from the literary world have used speculative elements this year, and the Nebula has, in the past, been somewhat receptive. This long list includes The Girl With All the Gifts by M.R. Carey, Boy, Snow, Bird by Helen Oyeyemi, J by Howard Jacobson (shortlisted for the Booker Prize), Station Eleven by Emily St. John Mandel, On Such a Full Sea by Chang-Rae Lee, The Book of Strange New Things by Michel Faber, and The Bees by Laline Paull. If one of these books gets nominated, it would be similar to The Golem and the Jinni's nomination from 2014.
13. Lagoon, Nnedi Okorafor (2011 Nebula nom, but this novel only came out in the UK this year; no US release yet)
That's all I could come up with for now—it's much harder to populate this list than the Hugo Watchlist, as Nebula voters are so unpredictable. I'm 100% positive there are books not on this watchlist that will make the final slate, but what could they be?
If a novel didn’t make the list, it’s likely because the author lacked any real Nebula pedigree: that’s why a John Scalzi or Joe Abercrombie didn’t make it. Likewise, later novels in series rarely jump into the slate if earlier novels didn’t get nominated, cutting out an author like Elizabeth Bear.
Methodology:
The list is compiled using several factors (a toy sketch of how they might be combined follows the list):
1. Winners and nominees over the past several years: once you get nominated for or win a Nebula, you're likely to get nominated again. The Nebula has a much longer memory than the Hugo, and Nebula nominees from a decade back (like Griffith last year) can resurface.
2. Authors who won or were nominated for the Nebula in other categories and have novels coming out this year.
3. Potential crossovers with the Hugo awards.
4. Novels that have lots of critical buzz.
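There's no precise formula behind the ranking, but if you wanted to mechanize those four factors, it might look something like the toy sketch below. Every weight here is invented purely for illustration; this is not Chaos Horizon's actual model:

```python
# Toy illustration only: invented weights, NOT Chaos Horizon's real method.
def watchlist_score(novel_noms: int, novel_wins: int,
                    other_category_noms: int,
                    hugo_crossover: bool, buzz: float) -> float:
    score = 2.0 * novel_wins + 1.0 * novel_noms  # factor 1: past Nebula record
    score += 0.5 * other_category_noms           # factor 2: other Nebula categories
    score += 1.0 if hugo_crossover else 0.0      # factor 3: Hugo crossover potential
    score += buzz                                # factor 4: critical buzz (0-2, subjective)
    return score

# A hypothetical recent double winner with decent buzz ranks highly:
print(watchlist_score(novel_noms=1, novel_wins=1, other_category_noms=2,
                      hugo_crossover=True, buzz=1.5))  # -> 6.5
```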
For more information about specific novels, check out My Too Early 2015 Nebula Prediction.
Obviously, this is not an exact science. Since Chaos Horizon primarily uses past Nebula performance to predict future Nebula performance, this hurts novelists who have never been nominated for the Nebula before.
I'd like to get the Watchlist to 15 by the end of the year. Anyone else to add? Thanks to everyone in previous threads who suggested novels. If you post a suggestion, try to back it up with some data. I'm waffling on Cherie Priest's Maplecroft: she scored a 2010 nomination for Boneshaker, but this novel looks more horror than SFF.
Lauren Beukes’ Broken Monsters Review Round-Up
Lauren Beukes returns with a new "genre-bending" novel that mixes detective, horror, and the supernatural. A spin on the detective/serial killer story, Broken Monsters looks to be another successful entry in Beukes' rapidly expanding career. Beukes mixes in her trademark weirdness, and the book crosses over into some dark and horrifying spaces. For the purposes of Chaos Horizon—a website dedicated to predicting the Hugo and Nebula awards—Beukes was heartbreakingly close to making the Hugos with her last book, The Shining Girls. That time-travelling serial killer novel placed 7th in the 2014 Hugos, and was only two votes (two votes!) shy of making it into the field. Can Beukes build on that momentum and break through this year?
I have Broken Monsters a little further down my 2015 Hugo Watchlist. This is less obviously Science Fiction than The Shining Girls. Beukes has built quite a career working between the boundaries of genre, and it's yielded a lot of interesting literature. Unfortunately, Hugo voters don't look as favorably upon horror-speculative hybrids, although they've loosened a little on that lately with Mira Grant. Broken Monsters might have a better chance at a 2015 Nebula nomination, given the reputation Beukes has established as a bold and experimental writer over these past years. My guess is she'll miss the slate on both. Keep a close eye on her, though: once she returns to writing more obviously "genre" fiction (whatever that means), she'll be a strong contender in years to come.
Book published September 16, 2014
About the Book:
Lauren Beukes’ web page
Lauren Beukes' blog
Amazon page
Note: Broken Monsters is one of the many books caught up in the Amazon/Hachette feud. It’ll be interesting to see if this affects sales. Amazon has made it harder to get to the Hardcover web page when you search for Broken Monsters.
Goodreads page
Publisher’s page (Mulholland books, of Little, Brown)
Mainstream Reviews:
Publisher’s Weekly
Kirkus Reviews (starred review)
Entertainment Weekly (B+)
The Guardian
WordPress Blog Reviewers:
Robyn Reads
Angela Savage
Tomcat in the Red Room
Ms. Wordopolis Reads
Words of Mystery
Lynn’s Book Blog
Espresso Coco
BroadartVibe
A Literary Mind (3 out of 5)
The Thugbrarian Review (4 out of 5)
Generationbooks (5 out of 5)
My Good Bookshelf (4 out of 5)
Bride of the Good Book
Solid reviews from my fellow WordPress bloggers, although it's clearly being received as a detective novel with some horror elements, not as a Science Fiction or Fantasy novel. It's an interesting note that SFF reviewers seem more likely to give books numerical scores; mystery reviewers (and literary fiction reviewers) don't give scores as often. It also looks like mystery book reviewers are more organized than SFF book reviewers: there were more reviews for this book than for the other SFF books I've been tracking.
On the negative side: several reviewers compared it unfavorably to The Shining Girls, seeing this novel more as a step back than a step forward for Beukes. Some readers were also disappointed with the ending. This confirms my analysis of the Hugo and Nebula chances for Broken Monsters; Beukes is liked, but this book doesn’t have the best chances of making a final Hugo or Nebula slate.
What do you think of Beukes' chances this year?
Literary and Speculative Fiction: A Brief Follow-Up
In relation to our discussion of literary fiction and the Hugo and Nebula awards, try this little thought experiment:
If The City and the City, China Mieville’s post-modern tale about a mysterious dual city and ways of seeing, had been written by Cormac McCarthy, would it have won the Hugo Award?
If The Road, Cormac McCarthy’s hyper-violent post-apocalyptic tale, had been written by China Mieville, would it have placed 21st in the Hugo voting?
Your results may vary. Personally, I don't think a Cormac McCarthy-authored novel, no matter how speculative, would ever win the Hugo or Nebula. Likewise, I think if an author like China Mieville wrote a novel similar to The Road, with that level of emotional impact and that kind of memorable prose, he'd have a good shot at a nomination, at the very least.
In Hugo and Nebula voting, reputation matters as much as the content of an individual novel. This makes sense: the awards are a popularity contest, and SFF authors are more popular with these voters than literary authors. This gives the Hugos and the Nebulas an inconsistent appearance, as speculative novels by "literary" authors, no matter how well written, rarely make the final slates, while literary novels (and stories) by SFF authors do.
Ironically, this doesn’t make it harder to predict the Hugos and the Nebulas, but rather easier. Chaos Horizon can eliminate authors from contention based on literary reputation alone: that’s what the last 15 years of Hugo and Nebula voting reveal. Is this fair? Do you think it will change?
Literary Fiction and Predicting the Hugos and Nebulas
Chaos Horizon, a website dedicated to predicting the eventual winners of the Hugo and Nebula awards, largely ignores speculative novels that emerge from the world of “literary fiction.” Why?
Because such novels have, in general, not received past nominations. Let’s take a look at how these books are presented to the reading public:
In this week's issue of Entertainment Weekly (September 19), there were two reviews of works that can be loosely classified as "genre" fiction: a review of Broken Monsters by Lauren Beukes, presented here as a straightforward serial killer novel (although it has supernatural elements), and a review of Station Eleven by Emily St. John Mandel, identified as a "post-apocalyptic" novel. What's of interest is how these two authors are positioned: Mandel comes from the world of literary fiction, having published three previous non-SFF novels; she gets the lead story and receives an "A." Beukes, who is best known for two earlier speculative novels, gets a smaller review several pages later and walks away with a "B+."
I point this out not because Entertainment Weekly is the be-all and end-all of reviews, but rather because it sums up the American "mainstream" approach to SFF. Over the last ten years, there have been an increasing number of "literary SFF" novels written by authors outside the SFF community. Think of novels like Tom Perrotta's The Leftovers, Karen Thompson Walker's The Age of Miracles, or, from just this year, Chang-Rae Lee's On Such a Full Sea. Add in slightly older books like Michael Chabon's The Yiddish Policemen's Union, Kazuo Ishiguro's Never Let Me Go, Cormac McCarthy's The Road, Colson Whitehead's Zone One, Margaret Atwood's MaddAddam trilogy (beginning with Oryx and Crake), Haruki Murakami's 1Q84—the list of impressive and important novels goes on and on.
These books, particularly in publications like Entertainment Weekly, The New York Times, The New Yorker, etc., have been better reviewed and more prominently featured than what, for lack of a better term, we might think of as "traditional" SFF. By this, I mean authors who publish in SFF magazines and whose books appear from the SFF imprints. Since these literary books have received major pushes—and sold well, even winning major literary awards such as the Pulitzer Prize for The Road—you might expect them to perform well in the Hugo and Nebula nominations. The Nebula, with its leaning toward more experimental and literary books, seems an obvious landing spot for these authors.
Past analysis of nominations shows that these books perform very poorly. With the exception of Chabon—who won the Nebula and the Hugo—and David Mitchell’s Cloud Atlas (a Nebula nom), these books have not received Hugo and Nebula noms. The Road, arguably the most highly acclaimed SFF novel of the past 10 years, placed 21st in the 2007 Hugo voting. 21st!
It's clear that Hugo and Nebula voters have not, over the past 15 years, wanted to reward authors from the "literary" world who dip into speculative fiction. I think Chabon is the exception because he wrote a number of genre stories; his Lovecraftian "In the Black Mill," as well as his nods to comic book culture in The Amazing Adventures of Kavalier and Clay, have won him fans in the SFF community. Otherwise, these books have been snubbed.
What's interesting is that the opposite has not happened—when SFF authors like China Mieville, Karen Joy Fowler, Jo Walton, Nicola Griffith, and so on, all of whom have a long history of writing speculative fiction, move over into writing more literary fiction, they get Nebula noms. This gives the slates a slightly incoherent feel: why does We Are All Completely Beside Ourselves, which only contains trace speculative elements, get a nomination while The Road, a post-apocalyptic novel, gets ignored? Chaos Horizon can't answer those questions; we can merely note the patterns and move on.
Since Chaos Horizon bases its predictions on data-mining past results, these kinds of novels will not make the final predictions until something changes in voting patterns.
So, despite glowing reviews of Station Eleven, it’s hard to consider it a true Nebula contender. I have Bone Clocks down as a possibility because of Mitchell’s prior nomination, but I wouldn’t be surprised if it misses the Nebula slate.
So, tl;dr: over the past decade, the "literary" world has been producing an increasing number of books with strong speculative elements. While these books have done well in reviews and sales, they haven't made the Nebula or Hugo slates. Because of this, Chaos Horizon will not be predicting such novels to make future Nebula or Hugo slates.
I’d love to hear any explanations of why such novels are ignored in the comments.