2014 Nebula Prediction: Testing the Model

One of the easiest ways to test the model is to apply it to the previous 13 years of the Nebula Award and see whether it works. Here are the results:

[Table: Testing the formula against the previous 13 years of Nebula results]
Not too bad. In 9/13 years, the formula chose the winner, or roughly 70% of the time. Given the lack of data and the somewhat erratic nature of the award, that’s a satisfying result.
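For concreteness, here's a minimal sketch of what this backtest amounts to in Python. The four misses are the ones discussed below; the placeholder "winner" entries stand in for the nine correct calls, since only the match between prediction and result matters here:

```python
def hit_rate(results):
    """results: list of (year, predicted_winner, actual_winner) tuples."""
    hits = sum(1 for _, predicted, actual in results if predicted == actual)
    return hits / len(results)

# The four misses discussed below, plus the nine years the formula called
# correctly (the "winner" strings are placeholders; only the match matters).
results = [
    (2012, "Embassytown", "Among Others"),
    (2009, "Little Brother", "Powers"),
    (2004, "Diplomatic Immunity", "The Speed of Dark"),
    (2002, "Passage", "The Quantum Rose"),
] + [(year, "winner", "winner")
     for year in (2001, 2003, 2005, 2006, 2007, 2008, 2010, 2011, 2013)]

print(f"Hit rate: {hit_rate(results):.0%}")  # Hit rate: 69%
```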

Where and why, though, did the formula break down in 2012, 2009, 2004, and 2002?

2012: My model picked Mieville’s Embassytown over Jo Walton’s Among Others, which went on to win the Hugo as well. The formula was somewhat close, giving Mieville a 24.5% chance versus Walton’s 19.6%. Unfortunately, Jack McDevitt’s Firebird took second in the formula, with 22.0%. McDevitt, given his strong cult following (with 10+ nominations for Nebula Best Novel) and weak results (only 1 Nebula win), has a tendency to warp the formula.

Among Others was hurt by several factors. First, while it did get significant award buzz, picking up nominations for the World Fantasy and British Fantasy Awards, this happened after the Nebula. This is a problem with the model: SF award season happens early in the year, with Fantasy awards later in the year. Now, you could argue that Among Others received those later nominations because of the Nebula, and that’s what leaves the formula with no perfect answer. How do you factor in award buzz without punishing Fantasy novels?
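To make the timing problem concrete, here's a sketch of the cutoff logic any predictive model is stuck with: buzz announced after the prediction date simply isn't available, which is exactly what happened to Among Others. The dates below are illustrative placeholders, not the real announcement dates:

```python
from datetime import date

def visible_buzz(nominations, prediction_date):
    """Keep only the nominations announced before the prediction is made."""
    return [award for award, announced in nominations
            if announced < prediction_date]

# Among Others in 2012; both dates are illustrative placeholders.
nominations = [
    ("World Fantasy", date(2012, 7, 1)),
    ("British Fantasy", date(2012, 7, 15)),
]

# Predicting ahead of the May 2012 Nebula ceremony, neither nomination counts:
print(visible_buzz(nominations, date(2012, 5, 1)))  # -> []
```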

Embassytown performed better than Among Others in every indicator. Even looking back, it’s not clear that readers actually liked Among Others better than Embassytown, given that Mieville’s novel is ranked higher on both Amazon and Goodreads. However, I believe the sentiment that Embassytown wasn’t Mieville’s best book counted strongly against him. If he didn’t win the Nebula for The City and the City or Perdido Street Station, did Embassytown deserve it? Voters might think Mieville is going to write a better novel someday, and that they should wait and award that one. In contrast, I think readers felt that Among Others was Jo Walton’s best book, and that this was her time to win the award.

The 2012 results show how easy it is to swing the award. We’re dealing with human voters, not robots, and statistics can only take us so far.

What about 2009? A titan of the field, Ursula K. Le Guin, won for her young adult novel Powers, the last in a trilogy. My formula picked Cory Doctorow’s Little Brother with 26.2%, and Le Guin came in a strong second with 19.7%. Powers probably needs to be understood as a “lifetime achievement” win rather than a reflection of strong sentiment for that specific novel, which is not considered among Le Guin’s best. The two books are pretty neck-and-neck in the indicators, save that Doctorow grabbed a Hugo nomination and Le Guin did not. That’s the main difference between the two, although Doctorow did perform well that awards season (winning the Campbell).

It is certainly interesting to note that there might be a bias against writers like Mieville and Doctorow, as they are edgier, more obviously “experimental” and “post-modern” writers. Keep in mind that a writer like Neal Stephenson hasn’t even been nominated for a Nebula since The Diamond Age in 1997. Books like Cryptonomicon don’t even seem to be in the running for this award. Nebula voters seem to gravitate more towards tradition than towards the fringes of the field.

2004 is an interesting case. As the model moves back in time, it loses some indicators (Goodreads votes, blog recommendations, etc.), making it less reliable. In 2004, the formula picked Bujold’s Diplomatic Immunity with 22.7% over Moon’s winning The Speed of Dark at 17.5%. It’s nice that the formula, when wrong, still ranks the eventual winner highly. Here’s a case where more indicators might have better identified the buzz for Moon’s novel, rather than the relatively safe choice of Bujold (who actually won the next year, for Paladin of Souls). For my model, the relatively unknown Moon couldn’t overcome Bujold’s history in the field. It is interesting to note that none of the novels from 2004 went on to be nominated for the Hugo. This is a case where factoring in placement on the Locus Awards would have helped Moon out, as Moon’s novel placed 5th to Bujold’s 23rd. That’s an indicator I’m considering bringing in: placement on the Goodreads lists, the Locus Awards, and the like.
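For what it's worth, here's a minimal sketch of how such a placement indicator might work, assuming a simple reciprocal-rank scoring scheme (the scale is my assumption, not a tested weight):

```python
def placement_score(rank, max_points=10.0):
    """Reciprocal-rank score for a finish on a ranked list like the
    Locus Awards poll. The 10-point scale is an illustrative assumption,
    not a fitted weight."""
    return max_points / rank

# The 2004 case: Moon placed 5th on the Locus list, Bujold 23rd.
print(placement_score(5))   # 2.0
print(placement_score(23))  # ~0.43
```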

2002 is the exception year. Asaro won for a science fiction romance novel named The Quantum Rose in what is probably the most inexplicable Nebula win ever. My formula placed her 7th (out of 8 nominees) with a minimal 8.2%. Willis took the prediction with a strong 27.7% for Passage and Martin placed second with 15.3% for A Storm of Swords. So how and why did Asaro win?

Perhaps Willis and Martin’s novels were too long: Passage rings in at 800 pages, and Storm at a hefty 1000+. Perhaps the 8 nominees that year split the vote among themselves, letting Asaro win with a relatively low number of votes. There’s no way, though, that any formula is ever going to pick Asaro: she got no critical buzz, no award season buzz, and didn’t even place in the Locus Awards. We just have to chalk this one up to a fluke of voting and move on. It is important, though, to keep this year in the formula, as it reminds us that these awards can be erratic.
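That vote-splitting guess is at least easy to poke at with a toy Monte Carlo simulation. Everything below is an invented assumption (electorate size, support levels, simple plurality voting); the point is only that with 8 nominees dividing a small electorate, even a low-support book wins a small but real fraction of simulated elections:

```python
import random
from collections import Counter

def win_rates(support, voters=60, trials=20_000):
    """Simulate plurality elections; support maps nominee -> vote share."""
    nominees = list(support)
    wins = Counter()
    for _ in range(trials):
        ballots = random.choices(nominees,
                                 weights=list(support.values()), k=voters)
        wins[Counter(ballots).most_common(1)[0][0]] += 1
    return {n: round(wins[n] / trials, 3) for n in nominees}

# Invented support levels: two front-runners, a weak Quantum Rose, and
# five mid-pack nominees soaking up the rest of the vote.
support = {"Passage": 0.16, "A Storm of Swords": 0.15,
           "The Quantum Rose": 0.10, "D": 0.12, "E": 0.12,
           "F": 0.12, "G": 0.12, "H": 0.11}
print(win_rates(support))
```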

Some questions to think about for the formula:
1. Are Fantasy novels treated fairly by Indicator #12, given that the major Fantasy awards happen later in the year?
2. Does the model adequately take into account bias against certain types of writers?
3. Does the model need to consider placement on things like the Locus Awards lists?
4. Does the model need to take into account some sense of “career sentiment” when ranking the books? If so, how would we measure that?
5. Does the number of nominees make the model less reliable?

I’m not going to tweak the model this year, but this will be important information moving forward.
