Archive | April 2014

2014 Hugo Award Nominations: Chaos Rules

The Hugo Award nominations for Best Novel have been announced at the Hugo website:

Ancillary Justice, Ann Leckie (Orbit US/Orbit UK)
Neptune’s Brood, Charles Stross (Ace / Orbit UK)
Parasite, Mira Grant (Orbit US/Orbit UK)
Warbound, Book III of the Grimnoir Chronicles, Larry Correia (Baen Books)
The Wheel of Time, Robert Jordan and Brandon Sanderson (Tor Books / Orbit UK)

Chaos is everywhere. Due to a little-known rule and a vigorous online campaign, the entirety of Jordan’s Wheel of Time has been nominated, meaning that we now have a multi-authored, 14-book series going up against individual novels. How exactly voters are going to resolve that question is up in the air. Likewise, Warbound received a significant push from its fandom. Given that the Hugo is a fan award, largely determined by an author’s popularity, these kinds of campaigns are simply par for the course.

What is unclear, though, is whether web campaigns affect final voting. For every die-hard Wheel of Time fan who passionately read all 14 books, there are multiple readers who gave up after a few books. A passionate fanbase leads to nominations; wide support is needed to win.

The biggest thing to come out of the nominations is Ann Leckie’s continued domination of this award season. Since Leckie was nominated and Gaiman’s The Ocean at the End of the Lane was not, this should significantly boost her chances of winning the Nebula. The lack of a nomination for Ocean is a little surprising, given Gaiman’s popularity, but it also suggests that Gaiman’s short novel is being considered “too slight” for a major award.

Over the next month, I’ll be putting together a prediction model for the Hugo. For now, stay tuned for the finalization of the Nebula prediction.


2014 Nebula Award: The Cost of the Nebulas

When discussing these awards, it’s important to point out some of the problems with them. Since the Nebula is voted on by the members of the SFWA, we can ask an obvious question: how many members actually have the time/money to read/buy all these texts?

The Nebula nominates at least six books, but ties can expand that number. This year, there were eight. Here’s a chart detailing the length and cost of the books as of when they were nominated:
[Chart: Costs]

These numbers are a little inflated. The prices are list prices, and you can save money by ordering online or getting e-books. The page lengths are those reported by publishers, who pump them up by including front and back matter. Still, we’re looking at close to $100 and 3,000 pages of reading—who has the time for that kind of investment?

Nominations were announced February 25th, and SFWA members had until March 30th to vote. Unless a Nebula voter had already tackled most of the nominees before the announcement (and was a good guesser about what would be nominated!), it’s nearly impossible to do that amount of reading in a month.

The result: the lesser-known books are doubtless ignored or skipped by most voters, who end up choosing between the 2-3 Nebula nominees they’ve already read that year. This is why there are so many repeat winners: voters vote for the books and authors they know because they simply don’t have the time to fully explore the other nominees. And this is only one category—the Nebula also nominates novellas, short stories, and YA books. It’s a near-impossible task for voters to sift through that amount of material in one month.

A shorter list of nominees and more time between nomination and voting would definitely help with this problem.

2014 Nebula Award Prediction: Weighting

Weighting is one of the most difficult aspects of the statistical model. Our Linear Opinion Pool takes various indicators and combines them—but how do you know which indicator to trust the most?

This is the problem with any statistical model: the way the model is built is as critical as the data that goes into it. Statistics often mask the human bias of the people using them. However, our model is just for fun—it’s not as if millions of dollars are on the line, the Nebula has enough data to be truly accurate, or SFWA voters are predictable enough for this to be 100% reliable. If we get 70% reliability, that’d be great.

I weighted the model by measuring how accurate each indicator would be if we used that indicator—and only that indicator—to pick the Nebula. Those are then normalized against each other. Using data since 2000, this generated the following weights:
[Chart: Weights]

Note two disappointing facts: I had to zero out the Locus Awards column, since the Locus Awards come out after the Nebula is awarded. There’s also a zero for the Amazon/Goodreads rating, as there wasn’t enough data to establish a meaningful correlation.
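For the curious, here is roughly what that weighting step looks like in code. This is a minimal sketch, not the actual script behind the chart above: the indicator names and accuracy numbers are placeholders, but the mechanics are the same: treat each indicator’s standalone hit rate as its raw score, drop the indicators we can’t use, and normalize the rest so the weights sum to 1.

```python
# Minimal sketch of the weighting step (illustrative names and numbers only).
# raw_accuracy[i] = fraction of past Nebulas that indicator i, used alone,
# would have called correctly.
raw_accuracy = {
    "prior_nebula_nomination": 0.846,
    "prior_hugo_nomination":   0.769,
    "locus_awards_placement":  0.923,   # comes out after the Nebula, so dropped below
    "amazon_goodreads_rating": None,    # not enough data for a meaningful correlation
}

# Zero out (drop) the indicators we can't actually use when predicting.
usable = {name: acc for name, acc in raw_accuracy.items()
          if acc is not None and name != "locus_awards_placement"}

# Normalize the remaining accuracies so the weights sum to 1.
total = sum(usable.values())
weights = {name: acc / total for name, acc in usable.items()}

for name, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {weight:.1%}")
```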

Does this model pass the eye test? Well, the formula uses three main categories of indicators:

1. Awards History: This makes sense for voters: they vote for names they are familiar and comfortable with. Unlike some other major literary awards—where winning once means you’ll never win a second time—the Nebula likes to give the same people the award over and over again. At times, I think people are voting for the name and not the book! All awards are biased, and this is one of the strongest ways the Nebula is biased.

2. Critic and Reader Response: Sometimes, though, a book is so buzzed about that it can overcome the author’s lack of fame. Conversely, a famous writer might write something that people dislike. These indicators (#6-#10) try to track how people feel about the nominated book this year.

3. Awards Momentum: People like to vote on the winning side of history, so the more attention a book gets in awards season, particularly from the Hugo, the more likely it is to win. I think the web has actually increased the importance of this category—a same-year Hugo nomination was one of the most reliable indicators in the whole process. More nominations = more people read the book = more likely to vote for the book.

Pretty simple, huh? No model is perfect, though, and the model can’t take into account certain kinds of sentiment: “it’s this author’s time,” “this author is a jerk,” “this book is too political,” “this book isn’t SF,” and so forth.

The formula works out to be around 40% author’s history, 60% this year’s response, which seems roughly fair given the award.

2014 Nebula Prediction: Testing the Model

One of the easiest ways to test the model is to apply it to the previous 13 years of the Nebula award, and see whether it works. Here are the results:

[Chart: Testing Formula]
Not too bad. In 9/13 years, the formula chose the winner, or roughly 70% of the time. Given the lack of data and the somewhat erratic nature of the award, that’s a satisfying result.
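For anyone who wants to replicate this kind of back-test, here is a minimal sketch of the procedure. It assumes each year’s data is a mapping from nominee to indicator values and that the nominee with the highest weighted score is the model’s pick; the data structures and names are hypothetical, not the actual spreadsheet behind the chart.

```python
def predict_year(nominees, weights):
    """Score each nominee as the weighted sum of its indicator values
    and return the highest-scoring one (the model's pick)."""
    scores = {name: sum(weights[ind] * values.get(ind, 0.0) for ind in weights)
              for name, values in nominees.items()}
    return max(scores, key=scores.get)

def backtest(history, weights):
    """history: a list of (nominees, actual_winner) pairs, one per award year,
    where nominees maps each nominee to its indicator values."""
    hits = sum(1 for nominees, winner in history
               if predict_year(nominees, weights) == winner)
    return hits / len(history)

# A 9-for-13 result would print as about 69%:
# print(f"{backtest(history, weights):.0%}")
```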

Where and why, though, did the formula break down in 2012, 2009, 2004, and 2002?

2012: My model picked Mieville’s Embassytown over Jo Walton’s Among Others, which also went on to win the Hugo. The formula was somewhat close, giving Mieville a 24.5% chance versus Walton’s 19.6%. Unfortunately, Jack McDevitt’s Firebird took second in the formula, with 22.0%. McDevitt, given his strong cult following (with 10+ nominations for Nebula Best Novel) and weak results (only 1 Nebula win), has a tendency to warp the formula.

Among Others was hurt by several factors. First, while it did get significant award buzz, picking up nominations for the World Fantasy and British Fantasy awards, this happened after the Nebula. This is a problem with the model—SF award season happens early in the year, with fantasy awards in the later part of the year. Now, you could argue that Among Others received those later nominations because of the Nebula, which is why there’s no perfect answer for the formula. How do you factor in award buzz without punishing fantasy novels?

Embassytown performed better than Among Others on every indicator. Even looking back, it’s not clear that readers actually liked Among Others better than Embassytown, given that Mieville’s novel is ranked higher on both Amazon and Goodreads. However, I believe the sentiment that Embassytown wasn’t Mieville’s best book counted strongly against him. If he didn’t win the Nebula for The City and the City or Perdido Street Station, did Embassytown deserve it? Voters might think Mieville is going to write a better novel and that they should wait and award that one. In contrast, I think readers felt that Among Others was Jo Walton’s best book, and that this was her time to win the award.

The 2012 results show how easy it is to swing the award. We’re dealing with human voters, not robots, and statistics can only take us so far.

What about 2009? A titan of the field, Ursula K. Le Guin, won for her young adult novel Powers, the last in a trilogy. My formula picked Cory Doctorow’s Little Brother with 26.2%; Le Guin came in a strong second with 19.7%. Powers probably needs to be understood as a “lifetime achievement” win rather than as strong sentiment for that specific novel, which is not considered among Le Guin’s best. The two books are pretty much neck and neck in the indicators, except that Doctorow grabbed a Hugo nomination and Le Guin did not. That’s the main difference between the two, although Doctorow did perform well that awards season (winning the Campbell).

It is certainly interesting to note that there might be a bias against writers like Mieville and Doctorow, as they are edgier, more obviously “experimental” and “post-modern” writers. Keep in mind that a writer like Neal Stephenson hasn’t even been nominated for a Nebula since The Diamond Age in 1997. Books like Cryptonomicon don’t seem to even be in the running for this award. Nebula voters seem to gravitate more towards tradition than the fringes of the field.

2004 is an interesting case. As the model moves back in time, it loses some indicators (Goodreads votes, blog recommendations, etc.), making it less reliable. In 2004, the formula picked Bujold’s Diplomatic Immunity with 22.7% over Moon’s winning The Speed of Dark at 17.5%. It’s nice that the formula, when wrong, still places the eventual winner in a high spot. Here’s a case where more indicators might have better identified the buzz for Moon’s novel, rather than the relatively safe choice of Bujold (who won the next year for Paladin of Souls). For my model, the relatively unknown Moon couldn’t overcome Bujold’s history in the field. It is interesting to note that none of the novels from 2004 went on to be nominated for the Hugo. This is a case where factoring in placement in the Locus Awards would have helped Moon out, as Moon’s novel placed 5th to Bujold’s 23rd. That’s an indicator I’m considering bringing in: placement on Goodreads/Locus Awards/etc.

2002 is the exception year. Asaro won for a science fiction romance novel named The Quantum Rose in what is probably the most inexplicable Nebula win ever. My formula placed her 7th (out of 8 nominees) with a minimal 8.2%. Willis took the prediction with a strong 27.7% for Passage and Martin placed second with 15.3% for A Storm of Swords. So how and why did Asaro win?

Perhaps Willis and Martin’s novels were too long: Passage weighs in at 800 pages, and Storm at a hefty 1000+. Perhaps the 8 nominees that year split the vote, allowing Asaro to win with a relatively low number of votes. There’s no way, though, that any formula is ever going to pick Asaro—she got no critical buzz, no award-season buzz, and didn’t even place in the Locus Awards. We just need to chalk this one up to a fluke of voting and move on. It is important, though, to keep this year in the formula, as it reminds us these awards can be erratic.

Some questions to think about for the formula:
1. Are Fantasy novels treated fairly by Indicator #12, given that the major Fantasy awards happen later in the year?
2. Does the model adequately take into account bias against certain types of writers?
3. Does the model need to consider placement on things like the Locus Awards lists?
4. Does the model need to take into account some sense of “career sentiment” when ranking the books? If so, how would we measure that?
5. Does the number of nominees make the model less reliable?

I’m not going to tweak the model this year, but this will be important information moving forward.

2014 Nebula Prediction: April 5 Update


Here’s our 4/5/14 update, now including most of our indicators, save for the Locus Awards and same-year Hugo nominations (those will come out soon). Expect additional award nominations to change the outcomes as well.

1. Neil Gaiman, The Ocean at the End of the Lane (20.5%)
2. Ann Leckie, Ancillary Justice (18.0%)
3. Karen Joy Fowler, We Are All Completely Beside Ourselves (14.3%)
4. Nicola Griffith, Hild (13.2%)
5. Helene Wecker, The Golem and the Jinni (12.6%)
6. Linda Nagata, The Red: First Light (8.5%)
7. Sofia Samatar, A Stranger in Olondria (7.7%)
8. Charles E. Gannon, Fire with Fire (5.2%)

No changes in the order, although Wecker is creeping up, due to her solid performance on critics’ and readers’ lists. Gaiman’s lead has also been cut, as Leckie has (so far) delivered an impressive series of award nominations, including the Clarke, the Dick, and the Tiptree.

To understand the formula, check out the Indicators.

2014 Nebula Prediction: Indicators

Here are the 12 indicators for the Nebula award, based on the last 13 years of data (since 2000):

Indicator #1: Nominee has previously been nominated for a Nebula. (84.6%)
Indicator #2: Nominee has previously been nominated for a Hugo. (76.9%)
Indicator #3: Nominee has previously won a Nebula award for best novel. (46.1%)
Indicator #4: Nominee was the year’s most honored nominee (Nebula Wins + Nominations + Hugo Wins + Nominations). (53.9%)

Indicator #5: Nominated novel was a science fiction novel. (69.2%)
Indicator #6: Nominated novel places in the Locus Awards. (92.3%)
Indicator #7: Nominated novel places in the Goodreads Choice Awards. (100%)

Indicator #8: Nominated novel appears on the Locus Magazine Recommended Reading List. (92.3%)
Indicator #9: Nominated novel appears on the Tor.com or io9.com Year-End Critics’ list. (100%)

Indicator #10: Nominated novel is frequently reviewed and highly scored on Goodreads and Amazon. (unknown%)
Indicator #11: Nominated novel is also nominated for a Hugo in the same year. (72.7%)
Indicator #12: Nominated novel is nominated for at least one other major SF/F award that same year. (69.2%)

These indicators, once weighted, are combined to create our prediction.

The indicators fall into three broad areas: past award performance, critical and reader reception of the book, and current year award performance. By utilizing all these different opinions in our Linear Opinion Pool, we come up with a predictive model.
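As a rough illustration of how a Linear Opinion Pool turns those indicators into the percentages reported in the prediction posts, here is a sketch with made-up indicator values and equal weights; the real weights come from the weighting step described in the weighting post.

```python
def linear_opinion_pool(nominees, weights):
    """Weighted sum of indicator values per nominee, normalized so the
    whole field of nominees adds up to 100%."""
    raw = {name: sum(weights[ind] * vals.get(ind, 0.0) for ind in weights)
           for name, vals in nominees.items()}
    total = sum(raw.values())
    return {name: score / total for name, score in raw.items()}

# Hypothetical two-indicator example with equal weights:
weights = {"prior_nebula_nomination": 0.5, "same_year_hugo_nomination": 0.5}
nominees = {
    "Nominee A": {"prior_nebula_nomination": 1, "same_year_hugo_nomination": 1},
    "Nominee B": {"prior_nebula_nomination": 1, "same_year_hugo_nomination": 0},
    "Nominee C": {"prior_nebula_nomination": 0, "same_year_hugo_nomination": 0},
}
for name, p in linear_opinion_pool(nominees, weights).items():
    print(f"{name}: {p:.1%}")  # A: 66.7%, B: 33.3%, C: 0.0%
```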

Now, what remains is weighting and then testing the model against past years.

Karen Joy Fowler wins PEN/Faulkner Award

The PEN/Faulkner Foundation has announced that We Are All Completely Beside Ourselves has won the 2014 PEN/Faulkner Award for Fiction.

That’s quite an honor for Fowler. The PEN/Faulkner is one of the biggest literary fiction awards of the year, along with the Pulitzer, National Book Award, and National Book Critics Circle Award. This win means that Fowler is going to be a force to reckon with on the literary fiction award circuit this year.

What does this mean for the Nebula, though? No Nebula book nominee has ever won an award like this. Does this make Fowler a frontrunner? Or does her acceptance by the lit fic world mean that the novel isn’t SF? Will Nebula voters feel that since Fowler has already been rewarded with a major award, she doesn’t need the Nebula? Or will they bandwagon to show that the Nebula is choosing “major” texts?

I’ll add the PEN/Faulkner to the same-year awards, and it will factor into the formula. I suspect that Fowler is in the same boat she was in before the award: highly regarded, but losing votes from readers who don’t find the book genre-y enough. If there were more obvious SF elements in Beside Ourselves, it would be a frontrunner. Since there’s discussion about its SFness, it falls behind Gaiman’s more obviously fantasy novel.

2014 Nebula Prediction: Indicators #11 and #12

The last two of our indicators. These focus on how our novels are doing this award season. Are they being nominated? Are they winning? There’s got to be some correlation—if you can win one award, you must be more likely to win another.

Unfortunately, these indicators are fraught with problems. Most of the major SF and fantasy awards come out after the Nebula. In particular, fantasy awards are often clustered in the back half of the year, including the World Fantasy and the British Fantasy.

The Hugo nominations come out fairly early (April), and a same-year Hugo nomination is a good indicator of whether a Nebula nominee will win. Since 2000, 8 out of 13 winners were also nominated for the Hugo Award. The indicator is actually a little better than that—in two years, 2004 and 2007, none of the Nebula nominees was nominated for a Hugo, which leaves 8 of 11, or roughly 72.7%.

The other indicator I want to work with is the broader idea that being nominated for any major award, such as the Clarke, the Tiptree, the Philip K. Dick, the Campbell, etc., is a good sign. My list of major awards is the one identified by SFADB: World Fantasy, British Fantasy, British SF, Campbell, Arthur C. Clarke, Philip K. Dick, Stoker, and Tiptree. Those cover most of the awards that get a lot of press. In 10 of the previous 13 years, the eventual winner claimed at least a nomination for one of those other awards, with the Campbell being the most reliable. Now, the Campbell is only for SF, and that puts our fantasy writers at a disadvantage. However, we already know fantasy is at a disadvantage. I’m going to have to figure out how to weight this one. Since this is the first year of prediction, it’s a great trial run to see when award nominations come out. For now, I’m pinning this at 9/13. This tosses out the Jo Walton year, since Among Others (2012) was nominated for the World Fantasy and British Fantasy awards, and those nominations happened after the Nebula was awarded.

Indicator #11: The novel is also nominated for a Hugo in the same year. (72.7%)
Indicator #12: The novel is nominated for at least one other major SF/F award that same year. (69.2%)

We’re still early in award season, so not many awards have come out yet: just the Tiptree, the Clarke nominations, and the Philip K. Dick nominations. Here’s how our books are doing. Nominations are noted in the chart below; if any of them wins, I’ll mark it with *:
[Chart: Indicators 11-12]
Leckie is obviously leading the way. I suspect she’s going to be a favorite on the award circuit this year: a big, ambitious SF novel that deals with gender issues is always going to get a lot of attention. Don’t read too much into the lack of nominations for Gaiman–the major fantasy awards haven’t moved yet.

We’ll have to keep an eye on these indicators as more information comes out.

So that’s our 12 indicators! Now on to weighting!
