2015 Hugo and Nebula Contenders: Goodreads and Amazon Reader Rankings
Since it’s obvious I like charts and graphs, and that I want to find more “objective” measures of which SFF books readers actually liked in 2014, here’s a table of Amazon and Goodreads reader ratings for the main 2015 Hugo and Nebula contenders, as of the end of November 2014:
Table 1: Goodreads and Amazon Reader Ratings, November 2014
I’ve always thought Goodreads ratings to be more reliable than Amazon’s, largely given the sample size. Goodreads usually has around 10 times more ratings than Amazon. Consider Words of Radiance: there are 36,000 Goodreads ratings versus 3,100 Amazon ratings.
I’ve never thought either rating is an accurate measure of the “quality” of a book. Everyone has a different scoring scale: some people hand out 5-star reviews like candy, others give a book a 1-star rating because they don’t like the cover. As such, I’d characterize these ratings as a more nebulous measure of “satisfaction” than “quality,” which might not be particularly well correlated with Hugo or Nebula chances. People rate very personally, based on their unique likes or dislikes. Don’t expect the top-rated books to waltz off with the Hugo; this chart, like many of the other metrics, gives us only a piece of the Hugo and Nebula puzzle.
Sanderson and Correia do well because they deliver exactly what their fans want. Since their books are part of a series, everyone who hated the series stopped reading after Book #1, so all you have left are enthusiasts. Something like Annihilation, which takes several risks in both its storytelling and content, is more divisive amongst fans. I presume a lot of people bought the VanderMeer expecting one kind of book, and were confused or alienated or outraged by what VanderMeer actually did, thus the low scores. It’s interesting that three of the most “experimental” books—VanderMeer, Beukes, and Walton—scored the lowest. All of those also flirt with traditional genre boundaries, something that mainstream audiences tend to vote against.
Takeaways? Words of Radiance is amazingly well-liked. That 4.76 score is unprecedented, particularly given the 30,000+ rankings. I was only able to find one massively popular book on Goodreads that has done better, and that’s The Complete Calvin and Hobbes with a 4.80 rating. Sanderson beats out all comparable authors: Martin, Rowling, Jordan, Gaiman, etc. Hell, even Return of the King can only scrape up a 4.48 rating. If the Hugo wasn’t substantially biased against both epic fantasy and book #2 of a series, you’d have to consider Sanderson a major contender. I currently don’t even have Sanderson predicted for a nomination, but those metrics are impressive. I think Sanderson has the popularity—if not the critical respect—to win a Hugo, but to do so, the conversation around the Hugo would need to change. Since those conversations are evolving, particularly given the campaigning that has gone on the last few years, it’s something that might happen. Would anyone have predicted a nomination for The Wheel of Time this time last year? A similar campaign for Sanderson could get him into the slate, and if he’s on the slate, anything can happen.
Correia, Weir, and Bennett all perform very well. Bennett’s 4.24 for the experimental The City of Stairs is outstanding, and I think further boosts his chances of scoring both Nebula and Hugo nods. A lot of writers fall into an average score of around 4.0, which neither helps nor hurts them very much.
It’s interesting that a book I have at the top of my predictions, Annihilation, is dead last in this measure. Remember, though, you don’t have to be universally liked to get an award. Instead, you need a small percentage of SFF fans (10% to make the slate, around 30% to win) who absolutely adore your book. I think VanderMeer has that core of enthusiasts, even if some other readers are more hesitant about his book.
So, what do you think reader ratings can tell us? Are there any other books I should add to the list?
Leckie Wins Nebula Award: Chaos Averted
Ann Leckie was announced as the winner of the Nebula Award for Best Novel.
Since Leckie was the winner picked by my prediction model, this means success for Chaos Horizon.
Leckie won the model not because of her award history—she has none—but because of the strong critical/reader reception of the book, and, most significantly, because of her dominant awards season performance. This year’s modeling shows how important things like Hugo nominations and other awards are for determining the winner.
Stay tuned for my upcoming Hugo prediction.
2014 Nebula Prediction: Final Analysis
So, now that Leckie’s Ancillary Justice has emerged as our pick, how did we get here?
Ann Leckie’s Ancillary Justice: 25.8% chance of winning the 2014 Nebula
What It Is: A sprawling space-opera novel about a sentient spaceship confronting past intergalactic wrongs. With flourishes of Iain M. Banks’ Culture series and Ursula K. Le Guin’s Hainish cycle, Leckie’s first novel introduced us to a complex future, including a provocative take on gender, and heralded the arrival of a promising new author.
Why She’ll Win: Leckie did poorly in the first part of the prediction model, which is based on an author’s previous awards history. She did well on the second part, which charts critical and reader response. Leckie, in particular, was a critical darling. Lastly, Leckie absolutely dominated the third part of the formula, which measures this season’s award performance. Leckie racked up five major SFF award nominations (Hugo, Clarke, Dick, Tiptree, BSFA), and won the Clarke and the BSFA. No one else in this year’s Nebula pool came close. It is this awards season dominance that boosted her past Gaiman, whose well-liked Ocean has received little awards chatter. Leckie is also the most viable SF novel of this year’s nominees, and the Nebula still slants in the SF direction.
Why She Might Not Win: Will Nebula voters reward a first novel? Did Leckie only begin to pick up steam this awards season after the Nebula votes were due? Will voters retreat to the safer, more well-known Gaiman? It wouldn’t be a surprise if Leckie lost—the Nebula does not usually reward first novels, although it has done so recently with Bacigalupi. While Ancillary Justice is ambitious, it is also clearly a first novel: Leckie is working out her writing style, and there are portions that are less clear/engaging than they could have been.
Neil Gaiman’s The Ocean at the End of the Lane: 20.7% chance
What It Is: A brief fantasy/horror novel about a child encountering an extra-dimensional evil.
Why It Might Win: Gaiman was leading the prediction model until the very end, and very well might have stayed ahead of Leckie if he had received a Hugo nomination. Gaiman has the best awards history of this bunch, with plenty of Nebula and Hugo wins. Ocean was well received by critics and readers alike, winning things such as the Goodreads vote. Tons of people read Ocean, which means tons of potential voters. Gaiman fell apart, though, in the last third of the model, failing to attract much awards attention this season. Part of that is that fantasy nominations come out later, but the crushing blow was Gaiman’s lack of a Hugo nomination. This indicates a substantial weakness in voter feeling for his book, and it vaulted Leckie to the top. However, if Nebula readers are looking for a safe pick, this is it, and it is probably the “I’m voting quickly and haven’t read all these novels” vote of choice. You can never count out a multiple Nebula winner.
Why It Won’t Win: Gaiman’s novel is popular, accessible, interesting, and well-written. It’s also brief (under 200 pages) and it doesn’t represent Gaiman’s best work. In fact, Gaiman has already mined this territory (child confronting evil) in the better-liked Coraline. Nebula voters might feel that Gaiman doesn’t deserve yet another award for his “smallest” novel. They may choose to reward Leckie’s ambitious risks over Gaiman’s safe choices. A lot of this circles back to the lack of a Hugo nomination: if Ocean were considered worthy of an award, why didn’t it get nominated there? Ocean may be fading from the popular imagination even as Ancillary Justice is rising.
Nicola Griffith’s Hild: 11.2%, Helene Wecker’s The Golem and the Jinni: 10.6%, and Karen Joy Fowler’s We Are All Completely Beside Ourselves: 9.8%
What They Are: A historical novel about St. Hilda of Whitby, a historical magic realist novel about a Golem and Djinni immigrating to America, and a realistic novel about a family adopting a chimp.
Why They’ll Win: It’s easy to cluster these books together because they are all literary fiction novels that dip their toes into the waters of SFF. All are complex, beautifully written, and moving books. In fact, if you’re looking solely at the “quality” of the book, as disconnected from whether or not the book is actually SFF, these would be strong contenders. Fowler even won the PEN/Faulkner award. If voters are tired of traditional Science Fiction and Fantasy, these are their options—books that expand our sense of genre by challenging the very concept of what an SFF book can be. Furthermore, Fowler and Griffith boast strong profiles in the field, and could easily receive a “lifetime achievement” vote.
Why They Won’t Win: Because they aren’t SFF novels. While genre-policing is a rather fruitless endeavor, some voters are doubtless going to find these books too far outside the traditions of SFF to merit a vote. Even more problematic, though, is that since these three novels are somewhat similar in profile, they’ll split votes between them. If Fowler’s novel was the only borderline SFF novel in the pool, it might stand a strong chance of winning, but I believe voters looking for experimental fiction will split their votes between Fowler, Griffith, and Wecker, leaving none of these three with much chance to win.
Linda Nagata’s The Red: First Light: 8.2%, Sofia Samatar’s A Stranger in Olondria: 7.7%, and Charles Gannon’s Fire with Fire: 6.0%
Why They’ll Win: They won’t. These are our “happy to be nominated” group, although we should keep in mind that once during the past 15 years (when Asaro’s The Quantum Rose won), we’ve had a total Nebula surprise. Maybe it’ll be this year?
Why They Won’t Win: These novels aren’t well known, didn’t attract widespread critical and reader acclaim, and didn’t perform well this awards season. There is nothing in the current profile of these authors to indicate that their novels are “big” enough to win the award.
I personally see this as a two-horse race between Leckie and Gaiman. If convention rules, Gaiman wins. If voters want to award the next big thing, Leckie wins. The results are going to tell us something about the current make-up of the Nebula voters, and what exactly they’re looking for in the field of SFF.
2014 Nebula Prediction: Final Update
Here we go, with the final prediction for the 2014 Nebula. Award to be given 5/18/14:
1. Ann Leckie, Ancillary Justice (25.8%)
2. Neil Gaiman, The Ocean at the End of the Lane (20.7%)
3. Nicola Griffith, Hild (11.2%)
4. Helene Wecker, The Golem and the Jinni (10.6%)
5. Karen Joy Fowler, We Are All Completely Beside Ourselves (9.8%)
6. Linda Nagata, The Red: First Light (8.2%)
7. Sofia Samatar, A Stranger in Olondria (7.7%)
8. Charles E. Gannon, Fire with Fire (6.0%)
Leckie makes a late (spectacular!) run to overcome Gaiman. Leckie’s final numbers have been greatly boosted by her award season performance. She was the only Nebula nominee this year to also score a Hugo nomination, and she also won both the Arthur C. Clarke Award and the British Science Fiction Association award (in a tie). Factor in nominations for the Philip K. Dick and the Tiptree, and Leckie put up one of the most impressive award season performances in recent SF history.
Will Leckie actually win? It’s looking more and more likely. Despite the impressive history of Gaiman, and the fact that Ocean at the End of the Lane is well liked, my impression is that people think Ocean is a “small” book, a lesser Gaiman novel. In contrast, Ancillary Justice is a “big” book, and Nebula voters tend to reward ambition. If Leckie wins, she’ll be following a similar pattern to Bacigalupi’s win for The Windup Girl, where an SF writer, although new, swept their way to the award based on the impressive scope of their first novel. In some ways, voters might be awarding promise over execution, although most critical voices have been enthusiastic about Ancillary Justice, even if not all readers have been swayed.
I’ll be back soon with a little bit of analysis as to what possible results on 5/18 mean for the Prediction Model.
To understand the formula, check out the Indicators, Weighting, and the Methodology.
2014 Nebula: Ann Leckie wins BSFA and Arthur C. Clarke Awards
Ann Leckie continues her dominant award season performance, racking up wins at both the British Science Fiction Association Award (a tie with Gareth L. Powell’s Ack-Ack Macaque) and the Arthur C. Clarke Award.
This makes Ancillary Justice the most honored book of the season, and, combined with nominations for the Hugo, Philip K. Dick, and Tiptree awards, this is going to push Leckie past Gaiman in the final 2014 Nebula prediction. No other 2014 Nebula nominee has come close to being as honored as Leckie, and since the Nebula voters have strong overlap with these other award voters, this means Leckie is in prime position to take home the 2014 Nebula.
2014 Nebula Award: The Cost of the Nebulas
When discussing these awards, it’s important to point out some of the problems with them. Since the Nebula is voted on by the members of the SFWA, we can ask an obvious question: how many members actually have the time/money to read/buy all these texts?
The Nebula nominates at least six books, but if there are ties they can expand that number. This year, there were eight. Here’s a chart detailing the length and cost of the books, as of when they were nominated:
These numbers are a little inflated. The prices are list prices, and you can save yourself money by ordering online or getting e-books. The page lengths are the lengths reported by publishers, and they pump those up by including front and back matter. Still, we’re looking at close to $100 and 3,000 pages of reading—who has the time for that kind of investment?
Nominations were announced February 25th, and SFWA members had until March 30th to vote. Unless a Nebula reader had already tackled most of the nominees before the announcement (and was a good guesser at what would be nominated!), it’s nearly impossible to do that amount of reading in a month.
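To put that reading load in perspective, here’s a quick back-of-the-envelope calculation, using the rough 3,000-page total from the chart above and the window between the announcement and the voting deadline:

```python
# Rough reading load per voter, assuming the ~3,000 page total cited above and
# the Feb 25 - Mar 30, 2014 window between nominations and the voting deadline.
from datetime import date

total_pages = 3000
window_days = (date(2014, 3, 30) - date(2014, 2, 25)).days  # 33 days

print(f"~{total_pages / window_days:.0f} pages per day")  # ~91 pages per day
```

That’s roughly 90 pages a day, every day, for voters starting from scratch.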
The result: the lesser known books are doubtless ignored or skipped by most voters, and they end up making a choice between the 2-3 Nebula nominees they’ve already read that year. This is why there are so many repeat winners: voters vote for the books and authors they know because they simply don’t have the time to fully explore the other nominees. And this is only one category—the Nebula also nominates novellas, short stories, and YA books. It’s a near impossible task for voters to sift through that amount of material in one month.
A more concise list of nominations and a longer amount of time between nominations/voting would definitely help with this problem.
2014 Nebula Award Prediction: Weighting
Weighting is one of the most difficult aspects of the statistical model. Our Linear Opinion Pool takes various indicators and combines them—but how do you know which indicator to trust the most?
This is the problem with any statistical model: the way the model is built is as critical as the data that goes into it. Statistics often mask the human bias of the people using those statistics. However, our model is just for fun—it’s not like millions of dollars are on the line, or that the Nebula has enough data to be truly accurate, or SFWA voters are predictable enough for this to be 100% reliable. If we get 70% reliability, that’d be great.
I weighted the model by measuring how accurate each indicator would be if we used that indicator—and only that indicator—to pick the Nebula. Those are then normalized against each other. Using data since 2000, this generated the following weights:
Note two disappointing facts: I had to zero out the Locus Awards column, since the Locus Awards seem to be coming out after the Nebula award. There’s also a zero for Amazon/Goodreads rating, as there wasn’t enough data to make a meaningful correlation.
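For the curious, here is a minimal sketch of that normalization step. The indicator names and standalone accuracies below are placeholders rather than my actual backtest numbers, but the mechanics are the same:

```python
# Weighting sketch: each indicator's standalone accuracy at picking the winner
# (how often it alone would have chosen correctly since 2000) is normalized so
# the weights sum to 1. The accuracy values below are illustrative placeholders.
standalone_accuracy = {
    "prior_nebula_nomination":   0.45,
    "same_year_hugo_nomination": 0.60,
    "locus_recommended_list":    0.50,
    "locus_awards_placement":    0.0,   # zeroed out: results arrive after the Nebula
    "goodreads_amazon_rating":   0.0,   # zeroed out: not enough data to correlate
}

total = sum(standalone_accuracy.values())
weights = {name: accuracy / total for name, accuracy in standalone_accuracy.items()}

for name, weight in sorted(weights.items(), key=lambda item: -item[1]):
    print(f"{name:27s} {weight:.2f}")
```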
Does this model pass the eye test? Well, the formula uses three main categories:
1. Awards History: This makes sense for voters: they vote for names they are familiar and comfortable with. Unlike some other major literary awards—where winning once means you’ll never win a second time—the Nebula likes to give the same people the award over and over again. At times, I think people are voting for the name and not the book! All awards are biased, and this is one of the strongest ways the Nebula is biased.
2. Critic and Reader Response: Sometimes, though, a book is so buzzed about that it can overcome the lack of fame of the author. Conversely, a famous writer might write something that people dislike. These Indicators (#6-#10) try to track how people are feeling about the nominated book this year.
3. Awards Momentum: People like to vote on the winning side of history, so the more attention a book gets in awards season, particularly from the Hugo, the more likely it is to win. I think the web has actually increased the importance of this category—a same-year Hugo nomination was one of the most reliable indicators in the whole process. More nominations = more people read the book = more likely to vote for the book.
Pretty simple, huh? No model is perfect, though, and the model can’t take into account certain kinds of sentiment: “it’s this author’s time,” “this author is a jerk,” “this book is too political,” “this book isn’t SF,” and so forth.
The formula works out to be around 40% author’s history, 60% this year’s response, which seems roughly fair given the award.
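As a sanity check on that 40/60 claim, you can simply sum the normalized weights by category. The weights below are hypothetical stand-ins for the table above, but they show the bookkeeping:

```python
# Category split sketch: sum the (hypothetical) normalized indicator weights
# within each of the three categories described above.
category_weights = {
    "awards_history": {
        "prior_nebula_nomination": 0.15, "prior_hugo_nomination": 0.13,
        "prior_nebula_win": 0.07, "most_honored_nominee": 0.05,
    },
    "critic_reader_response": {
        "locus_recommended_list": 0.12, "goodreads_choice": 0.10,
        "year_end_critic_lists": 0.10,
    },
    "awards_momentum": {
        "same_year_hugo_nomination": 0.18, "other_sff_award_nomination": 0.10,
    },
}

for category, indicators in category_weights.items():
    print(f"{category:24s} {sum(indicators.values()):.2f}")
# awards_history comes to ~0.40; the other two categories together come to ~0.60
```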
2014 Nebula Prediction: Testing the Model
One of the easiest ways to test the model is to apply it to the previous 13 years of the Nebula award, and see whether it works. Here are the results:
Not too bad. In 9/13 years, the formula chose the winner, or roughly 70% of the time. Given the lack of data and the somewhat erratic nature of the award, that’s a satisfying result.
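The backtest itself is simple in structure. Here is a sketch, assuming hypothetical predict and actual_winner functions (the real versions would pull from my indicator spreadsheets):

```python
# Backtest sketch: for each past year, take the model's top pick and compare
# it to the actual Nebula winner; 9 hits out of 13 years is roughly 70%.
def backtest(years, predict, actual_winner):
    """years: sequence of award years; predict(year) -> {nominee: probability};
    actual_winner(year) -> the winning nominee."""
    hits = 0
    for year in years:
        probabilities = predict(year)
        top_pick = max(probabilities, key=probabilities.get)
        hits += (top_pick == actual_winner(year))
    return hits / len(years)

# e.g. backtest(range(2001, 2014), predict, actual_winner) -> ~0.69
```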
Where and why, though, did the formula break down in 2012, 2009, 2004, and 2002?
2012: My model picked Mieville’s Embassytown over Jo Walton’s Among Others, which also went on to win the Hugo. The formula was somewhat close, giving Mieville a 24.5% chance versus Walton’s 19.6%. Unfortunately, Jack McDevitt’s Firebird took second in the formula, with 22.0%. McDevitt, given his strong cult following (with 10+ nominations for Nebula Best Novel) and weak results (only 1 Nebula win), has a tendency to warp the formula.
Among Others was hurt by several factors. First, while it did get significant award buzz, picking up nominations for the World Fantasy and British Fantasy awards, this happened after the Nebula. This is a problem with the model—SF award season happens early in the year, with Fantasy awards in the later part of the year. Now, you could argue that Among Others received those later nominations because of the Nebula, and that’s what leaves us with no perfect answer in terms of the formula. How do you factor in award buzz without punishing Fantasy novels?
Embassytown performed better than Among Others in every indicator. Even looking back, it’s not clear that readers actually like Among Others better than Embassytown, given that Mieville’s novel is ranked higher both on Amazon and Goodreads. However, I believe the sentiment that Embassytown wasn’t Mieville’s best book counted strongly against him. If he didn’t win the Nebula for The City and the City or Perdido Street Station, did Embassytown deserve it? Voters might think Mieville is going to write a better novel and that they should wait and award that. In contrast, I think readers felt that Among Others was Jo Walton’s best book, and this was her time to win the award.
The 2012 results show how easy it is to swing the award. We’re dealing with human voters, not robots, and statistics can only take us so far.
What about 2009? A titan of the field, Ursula K. Le Guin, won for her young adult novel Powers, the last in a trilogy. My formula picked Cory Doctorow’s Little Brother with 26.2%, and Le Guin came in a strong second with 19.7%. Powers probably needs to be understood more as a “lifetime achievement win” rather than reflecting strong sentiment for that specific novel, which is not considered among Le Guin’s best. The two books are pretty neck-and-neck in the indicators, save Doctorow grabbed a Hugo nomination and Le Guin did not. That’s the main difference between the two, although Doctorow did perform well that awards season (winning the Campbell).
It is certainly interesting to note that there might be a bias against writers like Mieville and Doctorow, as they are edgier, more obviously “experimental” and “post-modern” writers. Keep in mind that a writer like Neal Stephenson hasn’t even been nominated for a Nebula since The Diamond Age in 1997. Books like Cryptonomicon don’t seem to even be in the running for this award. Nebula voters seem to gravitate more towards tradition than the fringes of the field.
2004 is an interesting case. As the model moves back in time, it loses some indicators (Goodreads votes, blog recommendations, etc.), making it less reliable. In 2004, the formula picked Bujold’s Diplomatic Immunity with 22.7% over Moon’s winning The Speed of Dark at 17.5%. It’s nice that the formula, when wrong, still picks the eventual winner in a high spot. Here’s a case where more indicators might have better identified the buzz for Moon’s novel, rather than the relatively safe choice of Bujold (who actually won the next year, for Paladin of Souls). For my model, the relatively unknown Moon couldn’t overcome Bujold’s history in the field. It is interesting to note that none of the novels from 2004 went on to be nominated for the Hugo. This is a case where factoring in placement on the Locus Awards would have helped Moon out, as Moon’s novel placed 5th to Bujold’s 23rd. That’s an indicator I’m considering bringing in: placement on Goodreads/Locus Awards/etc.
2002 is the exception year. Asaro won for a science fiction romance novel named The Quantum Rose in what is probably the most inexplicable Nebula win ever. My formula placed her 7th (out of 8 nominees) with a minimal 8.2%. Willis took the prediction with a strong 27.7% for Passage and Martin placed second with 15.3% for A Storm of Swords. So how and why did Asaro win?
Perhaps Willis and Martin’s novels were too long: Passage rings in at 800 pages, and Storm at a hefty 1000+. Perhaps the 8 nominees this year split votes amongst themselves, leading Asaro to win with a relatively low number of votes. There’s no way, though, that any formula is ever going to pick Asaro—she got no critical buzz, no award season buzz, and didn’t even place in the Locus Awards. We just need to chalk this one up to a fluke of voting and move on. It is important, though, to keep this in the formula, as it reminds us these awards can be erratic.
Some questions to think about for the formula:
1. Are Fantasy novels treated fairly by Indicator #12, given that the major Fantasy awards happen later in the year?
2. Does the model adequately take into account bias against certain types of writers?
3. Does the model need to consider placement on things like the Locus Awards lists?
4. Does the model need to take into account some sense of “career sentiment” when ranking the books? If so, how would we measure that?
5. Does the number of nominees make the model less reliable?
I’m not going to tweak the model this year, but this will be important information moving forward.
2014 Nebula Prediction: April 5 Update
Here’s our 4/5/14 update, now including most of our indicators, save for the Locus Awards and Same Year Hugo nominations (those will come out soon). Also expect additional award nominations to change the outcomes.
1. Neil Gaiman, The Ocean at the End of the Lane (20.5%)
2. Ann Leckie, Ancillary Justice (18.0%)
3. Karen Joy Fowler, We Are All Completely Beside Ourselves (14.3%)
4. Nicola Griffith, Hild (13.2%)
5. Helene Wecker, The Golem and the Jinni (12.6%)
6. Linda Nagata, The Red: First Light (8.5%)
7. Sofia Samatar, A Stranger in Olondria (7.7%)
8. Charles E. Gannon, Fire with Fire (5.2%)
No changes in the order, although Wecker is creeping up, due to her solid performance on critics’ and readers’ lists. Gaiman’s lead has also been cut, as Leckie has (so far) delivered an impressive series of award nominations, including the Clarke, Dick, and the Tiptree.
To understand the formula, check out the Indicators.
2014 Nebula Prediction: Indicators
Here are the 12 indicators for the Nebula award, based on the last 13 years of data (since 2000):
Indicator #1: Nominee has previously been nominated for a Nebula. (84.6%)
Indicator #2: Nominee has previously been nominated for a Hugo. (76.9%)
Indicator #3: Nominee has previously won a Nebula award for best novel. (46.1%)
Indicator #4: Nominee was the year’s most honored nominee (Nebula Wins + Nominations + Hugo Wins + Nominations). (53.9%)
Indicator #5: Nominated novel was a science fiction novel. (69.2%).
Indicator #6: Nominated novel places in the Locus Awards. (92.3%)
Indicator #7: Nominated novel places in the Goodreads Choice Awards. (100%)
Indicator #8: Nominated novel appears on the Locus Magazine Recommended Reading List. (92.3%)
Indicator #9: Nominated novel appears on the Tor.com or io9.com Year-End Critics’ list. (100%)
Indicator #10: Nominated novel is frequently reviewed and highly scored on Goodreads and Amazon. (insufficient data)
Indicator #11: Nominated novel is also nominated for a Hugo in the same year. (73.3%)
Indicator #12: Nominated novel is nominated for at least one other major SF/F award that same year. (69.2%)
These separate categories, when weighted, are combined to create our prediction.
The indicators fall into three broad areas: past award performance, critical and reader reception of the book, and current year award performance. By utilizing all these different opinions in our Linear Opinion Pool, we come up with a predictive model.
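Concretely, a Linear Opinion Pool is just a weighted average of the indicators’ opinions. Here’s a minimal sketch; exactly how each indicator’s raw data gets turned into a distribution over the nominees is simplified here, and the inputs are illustrative rather than my actual numbers:

```python
# Linear Opinion Pool sketch: each indicator supplies a probability
# distribution over the nominees; the pool is their weighted average.
def linear_opinion_pool(indicator_distributions, weights):
    """indicator_distributions: {indicator: {nominee: probability}}, each
    distribution summing to 1 across the slate;
    weights: {indicator: weight}, summing to 1.
    Returns the combined {nominee: probability}."""
    nominees = next(iter(indicator_distributions.values()))
    return {
        nominee: sum(weights[indicator] * distribution[nominee]
                     for indicator, distribution in indicator_distributions.items())
        for nominee in nominees
    }

# Illustrative two-indicator, two-nominee example (not the real model inputs):
distributions = {
    "same_year_hugo_nomination": {"Ancillary Justice": 1.0, "Ocean": 0.0},
    "prior_nebula_win":          {"Ancillary Justice": 0.0, "Ocean": 1.0},
}
weights = {"same_year_hugo_nomination": 0.6, "prior_nebula_win": 0.4}
print(linear_opinion_pool(distributions, weights))
# {'Ancillary Justice': 0.6, 'Ocean': 0.4}
```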
Now, what remains is weighting and then testing the model against past years.