Archive | 2015 Nebula Award

Jeff VanderMeer’s Annihilation Wins Nebula: Instant Analysis

Jeff VanderMeer’s Annihilation, the first part of his Southern Reach trilogy, won the Nebula Award for Best Novel tonight.

Congratulations to VanderMeer. I’ve been an avid VanderMeer fan for more than a decade, and City of Saints and Madmen is my favorite fantasy novel from the 2000s. That novel is an experimental masterpiece, a collection of four linked novellas that takes place in Ambergris, a fantastically weird city that may (or may not) be overrun by a race of mysterious mushroom creatures. It’s a fabulous mash-up of Pynchon-esque conspiracy, Kafka-esque weirdness, David Foster Wallace-style textual shenanigans (footnotes and whatnot), and high fantasy imagination. It’s certainly not a novel for everyone, and I never imagined that VanderMeer would cross over from the fringe to the mainstream. VanderMeer had always been too weird to be popular, too abstruse to be widely read, and it’s fascinating that he managed to evolve his style into something as accessible as Annihilation.

VanderMeer continued his Ambergris trilogy with the brilliant Shriek, a dual-layered pseudo-memoir about an art dealer in Ambergris and his investigation of the mushroom men, and then wrapped it all up with Finch, a pseudo-noir that explained the mysteries of the mushroom race. The explanations are never as good as the mysteries, and VanderMeer is at his best when he’s delivering the what-the-hell-is-going-on ambiguity of City of Saints and Madmen. No book this side of The Fifth Head of Cerberus by Gene Wolfe or House of Leaves by Mark Z. Danielewski does a better job of puzzling and intriguing the reader, and if you’re willing to take that journey with VanderMeer, you’ve got nothing but pure baffled enjoyment in front of you.

It’s that kind of absolute strangeness that flows through Annihilation. While VanderMeer has backed off some of his stylistic weirdness in this volume, he amps up the mystery, horror, and psychological intrigue. In this novel, a team is sent by the government to explore Area X, a patch of America that has gone entirely wonky. Aliens? Horrors? Government conspiracy? Drugs? Failed expedition after failed expedition hasn’t gotten to the bottom of what exactly is happening. Annihilation gives us a new expedition into Area X, and with it a descent into madness and mystery.

I don’t like Annihilation as much as VanderMeer’s other work, though. I’m reminded of what Cormac McCarthy did in The Border Trilogy: he backed off some of his weirdness as a writer to make his work more accessible to readers. That approach worked extremely well in Annihilation, and it’s a great place to start for new VanderMeer readers. It’s the new audience that VanderMeer has attracted that has likely driven Annihilation to Nebula victory.

I’d still recommend City of Saints and Madmen over this. I even taught that novel in a 600-level postmodern literature seminar a few years ago. I also like Veniss Underground a great deal, although that novel is probably even more fragmented and difficult than City of Saints and Madmen.

So why did Annihilation win the Nebula? It was one of the most celebrated genre novels of the year, showing up on critics’ year-end lists almost as often as Ancillary Sword. It also did great business, seeming to outsell Leckie by a wide margin (23,000 Goodreads ratings for Annihilation versus 7,000 for Ancillary Sword, as of 6/6/15). VanderMeer was widely heralded this year in mainstream venues such as The New Yorker and The Atlantic. Annihilation is also a quick read, and works well as a stand-alone horror novel.

Lastly, VanderMeer has been an incredibly hardworking author over the last decade. He helped co-edit (with his wife) one of the best anthologies in recent memory, the massive and definitive The Weird. He’s also put his time in the trenches, even writing a Predator tie-in novel (Predator: South China Seas, which is pretty good) and a Halo novella (forgettable). Despite boiling the pot to pay his bills, he’s someone who’s plugged away at his craft, publishing uncompromising and difficult works that had a limited audience. As he’s stepped into the mainstream, he’s done so with grace and success. The SFWA is a group of writers, and I think they valued the writerly-ness of VanderMeer. I know I do: VanderMeer is one of the most distinctive and original authors working in the SF/Fantasy/Horror space. He deserves whatever honors he can get.

In my prediction work on Chaos Horizon, I had VanderMeer as the front runner most of the year. I moved VanderMeer up to the #1 spot in my November 2014 prediction, and kept him there until my Nebula Prediction formula kicked him down to #4. While that is certainly disappointing, the formula did give VanderMeer a 16.8% chance to win, only 2.6% behind Leckie’s formula-leading 19.4%. To tighten up my formula, I’ll need to add indicators punishing sequels (which would have moved Addison into the lead) and rewarding a high number of Goodreads ratings. It’s actually better for Chaos Horizon when the formula doesn’t work, particularly in these early years. This allows me to make corrections, and to make my predictions better in the future.

2015 Nebula Prediction: Final Results

Here we go . . . the official Chaos Horizon Nebula prediction for 2015!

Disclaimer: Chaos Horizon uses data-mining techniques to try to predict the Hugo and Nebula awards. While the model is explained in depth on my site (this is a good post to start with), the basics are that I look for past patterns in the awards and then use those to predict future behavior.

Chaos Horizon predictions are not based on my personal readings or opinions of the books. There are flaws with this model, as there are with any model. Data-mining will miss sudden changes in the field, and it does not do a good job of taking into account the passion of individual readers. So take Chaos Horizon lightly, as an interesting mathematical perspective on the awards, and supplement my analysis with the many other discussions available on the web.

Lastly, Chaos Horizon predicts who is most likely to win based on past awards, not who “should” win in a more general sense.

[Cover images of the six Nebula nominees]

1. Ann Leckie, Ancillary Sword: 19.4%
2. Katherine Addison, The Goblin Emperor: 19.2%
3. Cixin Liu and Ken Liu (translator), The Three-Body Problem: 17.7%
4. Jeff VanderMeer, Annihilation: 16.8%
5. Jack McDevitt, Coming Home: 16.5%
6. Charles Gannon, Trial by Fire: 10.4%

The margin is incredibly small this year, indicating a very close race. Last year, Leckie had an impressive 5% lead on Gaiman and a 14% lead over third-place Hild in the model. This year, Leckie has a scant 0.2% lead on Addison, and the top 5 candidates are all within a few percentage points of each other. I think that’s an accurate assessment of this year’s Nebula: there is no breakaway winner. You’ve got a very close race that’s going to come down to just a few voters. A lot of this is going to swing on whether or not voters want to give Leckie a second award in two years, whether they prefer fantasy to science fiction (Addison would win in that case), how receptive they are to Chinese-language science fiction, whether they see Annihilation as SF and complete enough to win, etc.

Let’s break down each of these by author, to see the strengths and weaknesses of their candidacies.

Ancillary Sword: Leckie’s sequel to her Hugo- and Nebula-winning Ancillary Justice avoided the sophomore jinx. While perhaps less inventive and exciting than Ancillary Justice, many reviewers and commenters noted that it was a better overall novel, with stronger characterization and writing. Ancillary Sword showed up on almost every year-end list and has already received the British Science Fiction Award. This candidacy is complicated, though, by how rarely authors win back-to-back Nebulas. She would join Samuel R. Delany, Frederik Pohl, and Orson Scott Card as the only back-to-back winners. Given how early Leckie is in her career (this is only her second novel), are SFWA voters ready to make that leap? Leckie is also competing against 4 other SF novels: it’s possible she could split the vote with someone like Cixin Liu, leaving the road open for Addison to win.

Still, Leckie is the safe choice this year. Due to all the attention and praise heaped on Ancillary Justice, Ancillary Sword was widely read and reviewed. More readers = more voters, even in the small pool of SFWA authors. People who are only now getting to The Three-Body Problem may have read Ancillary Sword months ago. I don’t think you can overlook the impact of this year’s Hugo controversy on the Nebulas: SFWA authors are just as involved in all those discussions, and giving Leckie two awards in a row may seem like a safe and stable choice amidst all the internet furor. If Ancillary Justice was a consensus choice last year, Ancillary Sword might be the compromise choice this year.

The Goblin Emperor: My model likes Addison’s novel because it’s the only fantasy novel in the bunch. If there is even a small pool of SFWA voters (5% or so) who only vote for fantasy, Addison has a real shot here. The Goblin Emperor has also had a great year: solid placement on year-end lists, a Hugo nomination, and very enthusiastic fan reception. Of the six Nebula nominees this year, it’s the most different in terms of its approach to genre (along with Annihilation, I guess), giving a very non-standard take on the fantasy novel. The Nebula has liked those kinds of experiments recently. The more you think about it, the more you can talk yourself into an Addison win.

The Three-Body Problem: The wild-card of the bunch, and the one my model has the hardest time dealing with. It came out very late in the year—November—and that prevented it from making as many year-end lists as other books. Second, how are SFWA voters going to treat a Chinese-language novel? Do they stress the A (America) in SFWA? Or do they embrace SF as a world genre? The Nebula Best Novel has never gone to a foreign-language novel before. Will it start now?

Lastly, do SFWA voters treat the novel as co-authored by Ken Liu (he translated the book), who is well known and well liked by the SFWA audience? Ken Liu is actually up for a Nebula this year in the Novella category for “The Regular.” I ended up (for the purposes of the model) treating Cixin Liu’s novel as co-authored by Ken Liu. Since Ken Liu was out promoting the novel heavily, Cixin Liu didn’t get the reception of a new author; I think many readers came to The Three-Body Problem because of Ken Liu’s reputation. If I hadn’t made that choice, the novel would drop a full percentage point in the prediction, from 3rd to 5th place.

The Three-Body Problem hasn’t always received the best reviews. Check out this fairly tepid take on the novel published this week by Strange Horizons. Liu is writing in the tradition of Arthur C. Clarke and other early SF writers, where character is not the emphasis of the book. If you’re expecting to be deeply engaged by the characters of The Three-Body Problem, you won’t like the novel. Given that the Nebula has been leaning literary over the past few years, does that doom its chances? Or will the inventive world-building and crazy science of the book push it to victory? This is the novel I feel most uncertain about.

Annihilation: I had VanderMeer’s incredibly well-received start to his Southern Reach trilogy as the frontrunner for most of the year. However, VanderMeer has been hurt by his lack of other SF award nominations this season: no Hugo, and out of all the other awards he’s only made the Campbell. I think this reflects some of the difficulty of Annihilation. It’s a novel that draws on weird fiction, environmental fiction, and science fiction, and readers may be having difficulty placing it in terms of genre. Add in that it is very short (I believe it would be the shortest Nebula winner ever if it wins) and clearly the first part of something bigger: is it stand-alone enough to win? The formula doesn’t think so, but formulas can be wrong. I wouldn’t be stunned by a VanderMeer win, but it seems a little unlikely at this point.

Coming Home: Ah, McDevitt. The ghost of the Nebula Best Novel category: he’s back for his 12th nomination. He’s only won once, but could it happen again? There’s a core of SFWA voters who must love Jack McDevitt. If the vote ends up getting split between everyone else, could they drive McDevitt to another victory? It’s happened once already, in 2007 with Seeker. I don’t see it happening, but stranger things have gone down in the Nebula.

Trial by Fire: The model hates Charles Gannon, even though he actually did well last year: according to my sources, he placed 3rd in the 2014 Nebula voting. Still, this is the sequel to that book, and sequels tend to move down in the voting. Gannon’s lack of critical acclaim and lack of Hugo success are what kill him in the model.

Remember, the model is a work in progress. This is only my second year trying to do this. The more data I collect, and the more we see how individual Nebulas and Hugos go, the better the model will get. As such, just treat the model as a “for fun” thing. Don’t bet your house on it!

So, what do you think? Another win for Leckie? A fantasy win for Addison? A late tail-wind win for Liu?

2015 Nebula Prediction: Indicators and Weighting

One last little housekeeping post before I post my prediction later today. Here are the 10 indicators I settled on using:

Indicator #1: Author has previously been nominated for a Nebula (78.57%)
Indicator #2: Author has previously been nominated for a Hugo (71.43%)
Indicator #3: Author has previously won a Nebula for Best Novel (42.86%)
Indicator #4: Has received at least 10 combined Hugo + Nebula noms (50.00%)

Indicator #5: Novel is science fiction (71.43%)
Indicator #6: Places on the Locus Recommended Reading List (92.86%)
Indicator #7: Places in the Goodreads Best of the Year Vote (100.00%)
Indicator #8: Places in the Top 10 on the Chaos Horizon SFF Critics Meta-List (100.00%)

Indicator #9: Receives a same-year Hugo nomination (64.29%)
Indicator #10: Nominated for at least one other major SFF award (71.43%)

I reworded Indicator #4 to make the math a little clearer. Otherwise, these are the same as in my Indicator posts, which you can get to by clicking on each link.

If you want to see how the model is built, check out the “Building the Model” posts.

I’ve tossed around including an “Is not a sequel” indicator, but that would take some tinkering, and I don’t like to tinker at this point in the process.

The Indicators are then weighted according to how well they’ve worked in the past. Here are the weights I’ve used this year, with a quick sketch of how they combine after the list:

Indicator #1: 8.07%
Indicator #2: 8.65%
Indicator #3: 13.78%
Indicator #4: 11.93%
Indicator #5: 10.66%
Indicator #6: 7.98%
Indicator #7: 7.80%
Indicator #8: 4.24%
Indicator #9: 16.54%
Indicator #10: 10.34%
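
For concreteness, here’s a minimal Python sketch of how these weights fold each indicator’s opinion into a final score for a single nominee; the per-indicator probabilities in it are hypothetical placeholders, not the real 2015 values:

```python
# Minimal sketch: folding the ten indicator weights above into one
# nominee's final score. The per-indicator probabilities here are
# hypothetical placeholders, not the real 2015 values.

weights = {  # indicator number -> weight, as fractions of the formula
    1: 0.0807, 2: 0.0865, 3: 0.1378, 4: 0.1193, 5: 0.1066,
    6: 0.0798, 7: 0.0780, 8: 0.0424, 9: 0.1654, 10: 0.1034,
}
assert abs(sum(weights.values()) - 1.0) < 0.001  # the weights sum to ~100%

# Hypothetical: each indicator's "opinion" of one nominee's win chance.
opinion = {i: 1 / 6 for i in weights}  # a flat 1-in-6 across six nominees

final_score = sum(weights[i] * opinion[i] for i in weights)
print(f"{final_score:.1%}")  # a flat opinion set returns roughly 16.7%
```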

Lots of math, I know, but I’m going to post the prediction shortly!

2015 Nebula Prediction: Indicators #9-#10

Here are the last two indicators currently in my Nebula formula. These try to chart how well a book is doing in the current awards season, based on the assumption that if you are able to get nominated for one award, you’re more likely to win another. Note that it’s nominations that seem to correlate, not necessarily wins. Many of the other SFF awards are juried, so winning them isn’t as good a measure of popular sentiment as the fan votes the Hugo and Nebula use. Nominations raise your profile and get your book buzzed about, which helps pull in those votes. If something gets nominated 4-5 times, it becomes the “must-read” of the year, and that leads to wins.

Indicator #9: Receives a same-year Hugo nomination (64.29%)
Indicator #10: Nominated for at least one other major SFF award (71.43%)

I track things like the Philip K. Dick, the British Science Fiction Award, the Tiptree, the Arthur C. Clarke, the Campbell, and the Prometheus. Interestingly, the major fantasy awards—the World Fantasy Award, the British Fantasy Award—don’t come out until later in the year. This places someone like Addison at a disadvantage in these measures. We need an early-in-the-year fantasy award!

In recent years, the Nebula has been feeding into the Hugo and vice-versa. Since the same awards are talked about so much in the same places, getting a Nebula nom raises your Hugo profile, which in turn feeds back and shapes the conversation about the Nebulas. If everyone on the internet is discussing Addison, Leckie, and Liu, someone like VanderMeer or Gannon can fall through the cracks. More exposure = more chances of winning.

So, how do things look this year?

Table 5: 2015 Awards Year Performance for 2015 Nebula Nominees
[table image not reproduced]

The star by Leckie’s name means she won the BSFA this year. 2015 is very different from 2014: at this time last year, Ancillary Justice was clearly dominating, having already picked up nominations for the Clarke, Campbell, BSFA, Tiptree, and Dick. She’d go on to win the Clarke, BSFA, Hugo, and Nebula.

This year there isn’t a consensus book powering to all the awards. I thought VanderMeer would garner more attention, but he missed a Philip K. Dick Award nomination, and I figured the Clarke would have been sympathetic to him as well. Those are real storm clouds for Annihilation‘s Nebula chances. Maybe the book was too short or too incomplete for readers. Ancillary Sword isn’t repeating Leckie’s 2014 dominance, but it has already won the BSFA. Liu has some momentum beginning to build for him, while Gannon and McDevitt are languishing.

So those are the 10 factors I’m currently weighting in my Nebula prediction. I’ve been tossing around the idea of adding a few more (publication date, sequel, book length), but I might wait until next year to factor them in. I’d like to factor in something about popularity but I haven’t found any means of doing that yet.

What’s left? Well, we have to weight each of these Indicators, and once I do that, I can run the numbers to see who leads the model!

2015 Nebula Prediction: Indicators #6-#8

These indicators try to wrestle with the idea of critical and reader reception by charting how the Nebula nominees do on year-end lists. While these indicators are evolving as I put together my “Best of Lists”, these are some of our best measures of critical and reader response, which directly correlate to who wins the awards.

Right now, I’m using a variety of lists: the Locus Recommended Reading List (which has included the winner 13 out of the past 14 years, with The Quantum Rose being the lone exception), the Goodreads Best of the Year Vote (more populist, but it has at least listed the winner in the Top 20 in the 4 years it’s been fully running, so that’s promising), and then a very lightly weighted version of my SFF Critics Meta-List. With a few more years of data, I’ll split this into a “Hugo” list and a “Nebula” list, and we should have some neatly correlated data. Until then, one nice thing about my model is that it allows me to decrease the weights of Indicators I’m testing out. The Meta-List will probably only account for 2-3% of the total formula, with the Goodreads list at around 5% and the Locus at around 9%. I can’t calculate the weights until I go through all the indicators.

Indicator #6: Places on the Locus Recommended Reading List (92.86%)
Indicator #7: Places in the Goodreads Best of the Year Vote (100.00%)
Indicator #8: Places in the Top 10 on the Chaos Horizon SFF Critics Meta-List (100.00%)

Table 4: Critical/Reader Reception for 2015 Nebula Nominees
[table image not reproduced]

There are separate Fantasy and SF Goodreads lists, hence the SF and F indicators. These are fairly bulky lists (the Locus runs to 40+ titles, the Goodreads the same, etc.), so it isn’t too hard to place on one of them. If you don’t, that’s a real indicator that your book isn’t popular enough (or popular enough in the right places) to win a mainstream award. So these indicators punish books that miss the lists more than they help the books that make them.

Results are as expected: Gannon and McDevitt suffer in these measures a great deal. Their books did not garner the same kind of broad critical/popular acclaim that the other authors’ did. Cixin Liu missing the Goodreads vote might be surprising, but The Three-Body Problem came out very late in the year (November), and didn’t have time to pick up steam for a December vote. This is something to keep your eye on: did Liu come out too late in the year to pick up momentum for the Nebulas? If The Three-Body Problem ends up losing, I might add a “When did this come out?” Indicator for the 2016 Nebula model. Alternatively, these lists may have mismeasured The Three-Body Problem because of its late arrival, in which case they’d need to be weighted more lightly.

The good thing about the formula is that the more data we have, the more we can correct things. Either way Chaos wins!

2015 Nebula Prediction: Indicator #5

One of my simplest indicators:

Indicator #5: Novel is science fiction (71.43%)

The Nebula—just look at that name—still has a heavy bias towards SF books, even if this has been loosening in recent years. See my Genre Report for the full stats. In the award’s history, only 7 fantasy novels have taken home the prize. Chaos Horizon only uses data since 2001 in its predictions, but even there we’re only looking at 4 of the last 14 winners being fantasy.

How do this year’s nominees stack up?

Table 3: Genre of 2015 Nebula Award Nominees
[table image not reproduced]

Obviously, it’s a heavy SF year, with 5 of the 6 Nebula nominees being SF novels. There were plenty of Nebula-worthy fantasy books to choose from, including something like City of Stairs, but the SFWA voters went traditional this year. I think Annihilation could be considered a “borderline” or “cross-genre” novel, although I see most people classifying it as science fiction.

Ironically, all of this actually helps Addison’s chances with the formula. Think about it logically: fantasy fans only have 1 book to vote for, while SF fans are split amongst 5 choices. The formula won’t give Addison a huge boost (the probability chart works out to 28.57% for Addison and 14.29% for everyone else), but it’s the one part of the formula where she does better than everyone else.
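
As a sanity check, here’s a small Python sketch that reproduces that probability chart, on the reading that an indicator’s historical success rate gets split evenly among the nominees that satisfy it and the remainder among those that don’t:

```python
# Reproducing the genre indicator's probability chart, assuming the
# indicator's historical success rate is split evenly among the
# nominees on each side of it.

sf_rate = 0.7143            # Indicator #5: the winner is SF 71.43% of the time
n_sf, n_fantasy = 5, 1      # this year's six nominees

per_sf_book = sf_rate / n_sf                   # 14.29% for each SF nominee
per_fantasy_book = (1 - sf_rate) / n_fantasy   # 28.57% for Addison
print(f"{per_sf_book:.2%} per SF nominee, {per_fantasy_book:.2%} for the fantasy nominee")
```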

Next time, we’ll get into the indicators for critical reception.

2015 Nebula Prediction: Indicators #1-#4

Let’s leave the Hugo Award behind for now—the controversy swirling around that award has distracted Chaos Horizon, so it’s time to get back on track doing what this site was designed to do: generating numerical predictions for the Nebula and Hugo Awards based on data-mining principles.

Over the next three to four days, I’ll be putting up the various “Indicators” of the Nebula Award, and then we’ll weight and combine those to get our final prediction. For a look at the methodology, check out this post and this post. If you’re really interested, there’s an even more in-depth take in my “Building the Nebula Model” posts. Bring caffeine!

With the basics of the model built, though, all that’s left is updating the stats and plugging in this year’s data. Here are Indicators #1-#4 (out of 11). These deal with past awards history:

Indicator #1: Author has previously been nominated for a Nebula (78.57%)
Indicator #2: Author has previously been nominated for a Hugo (71.43%)
Indicator #3: Author has previously won a Nebula for Best Novel (42.86%)
Indicator #4: Author is the most honored nominee (50.00%)

The best way to understand each of those is as an opinion/prediction of the Nebula based on data from 2001-2014. So, 78.6% of the time, someone who has previously been nominated for the Nebula wins the Nebula Best Novel award, and so on. The only tricky one here is the “Author is the most honored nominee”: I add up the total number of Hugo Noms + Wins + Nebula Noms + Wins to get a rough indicator of “total fame in the field.” 50% of the time, the Nebula voters just give the Nebula Best Novel award to the most famous nominee.
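
Here’s a quick Python sketch of that “total fame” tally; the names and counts are hypothetical stand-ins, not the real sfadb.com numbers:

```python
# Sketch of Indicator #4's "total fame" tally: prior Hugo noms + Hugo
# wins + Nebula noms + Nebula wins, with the highest total flagged as
# the most honored nominee. Names and counts are hypothetical.

nominees = {
    "Nominee A": {"hugo_noms": 2, "hugo_wins": 1, "nebula_noms": 3, "nebula_wins": 1},
    "Nominee B": {"hugo_noms": 0, "hugo_wins": 0, "nebula_noms": 6, "nebula_wins": 0},
    "Nominee C": {"hugo_noms": 1, "hugo_wins": 0, "nebula_noms": 1, "nebula_wins": 0},
}

totals = {name: sum(counts.values()) for name, counts in nominees.items()}
most_honored = max(totals, key=totals.get)
print(totals)        # {'Nominee A': 7, 'Nominee B': 6, 'Nominee C': 2}
print(most_honored)  # Nominee A
```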

All of these indicators flow from the logical idea that the Nebula is a “repetitive” award: they tend to give the Best Novel award to the same people over and over again. Take a look at my Repeat Nominees study for justification behind that. This repetition is also a kind of a “common sense” conclusion: to win a Nebula you have to be known by Nebula voters. What’s the best way to be known to them? To have already been part of the Nebulas.

Don’t think this excludes rookie authors though—Leckie did great last year even in my formula, and that’s why these are only Indicators #1-#4. The other indicators tackle things like critical reception and same-year award nominations. Still, they give us a good start. Let’s check this year’s data:

Tables 1 and 2: Past Awards History for 2015 Nebula Nominees
[table images not reproduced]

Legend:
The chart is for award nominations prior to this year’s award season, so no 2015 awards are added in
Nebula Wins = Prior Nebula Wins (any category)
Nebula Noms = Prior Nebula Nominations (any category)
Hugo Wins = Prior Hugo Wins (any category)
Hugo Noms = Prior Hugo Nominations (any category)
Total = sum of N. Wins, N. Noms, H. Wins, and H. Noms
Total rank = Ranking of authors based on their Total number of Wins + Nominations
Best Novel = Has author previously won the Nebula award for Best Novel?
Gray shading of boxes added solely for readability
All data mined from http://www.sfadb.com

Jack McDevitt breaks out of the pack here: his prior 17 Nebula nominations (!) make him the most familiar to the Nebula voting audience. He only has 1 win for those 17 nominations, though, so I don’t think he’s in line for a second. McDevitt is going to suffer in indicators #6-10, as his books tend to not get much critical acclaim. McDevitt currently has a 10% win rate for the Nebula Best Novel award. If he keeps getting noms, I’m going to have to add a “McDevitt” exception to keep the formula working.

Jeff VanderMeer’s Hugo nominations are all in Best Related Work, not for fiction, although his other Nebula nomination is for Finch. He’s well-known in the field, although Annihilation hasn’t picked up many award nominations for 2015.

Leckie, who was a rookie last year, now does very well across the board: her prior Nebula noms, Best Novel Nebula win, and Hugo nom will all give her a boost in the formula. The real wild-card in Indicators #1-#4 is The Three-Body Problem. Cixin Liu’s novel was translated by Ken Liu, who is very well known to the Nebula and Hugo audience: he has 3 Hugo nominations (2 wins) and 6 Nebula nominations (1 win), making him one of the most nominated figures in recent years. If SFWA voters think of The Three-Body Problem as being co-authored by Ken Liu, they’re more likely to pick it up, and that will really boost the novel’s chances. I haven’t decided the best way to treat The Three-Body Problem for my formula. What do you think? Should I include Ken Liu’s nominations as part of the profile for The Three-Body Problem?

Tomorrow, we’ll start looking at Indicators tracking genre and critical reception.

Building the Nebula Model, Part 5

This post continues my discussion on building my 2015 Nebula Best Novel prediction. See Part 1 for an introduction, Part 2 for a discussion of my Indicators, Part 3 for a discussion of my methodology/math, and Part 4 for a discussion of accuracy.

Taken together, those posts should help explain to anyone new to Chaos Horizon how Chaos Horizon works. This post wraps things up with a to-do list for 2015. To update my model for 2015, this is what I need to do:

1. Update the data sets with the results of the 2014 Nebulas, Hugos, and everything else that happened last year.
2. Rethink the indicators, and possibly replace/refine some of them.
3. Reweight the indicators.
4. Test model reliability using the reweighted indicators.
5. Use the indicators to build probability tables for the 2015 Nebulas.
6. Run the probability tables through the weights to come up with the final results.

I won’t be able to get all of this done until mid-April. It doesn’t make any sense to run the numbers until the Hugo noms come out, and those will be coming out this Saturday (April 4th).

Building the Nebula Model, Part 4

This post continues my discussion on building my 2015 Nebula Best Novel prediction. See Part 1 for an introduction, Part 2 for a discussion of my Indicators, and Part 3 for a discussion of my methodology/math.

Now to the only thing anyone cares about: how reliable is the model?

Here’s my final Nebula prediction from 2014:

1. Ann Leckie, Ancillary Justice (25.8%) (winner)
2. Neil Gaiman, The Ocean at the End of the Lane (20.7%)
3. Nicola Griffith, Hild (11.2%)
4. Helene Wecker, The Golem and the Jinni (10.6%)
5. Karen Joy Fowler, We Are All Completely Beside Ourselves (9.8%)
6. Linda Nagata, The Red: First Light (8.2%)
7. Sofia Samatar, A Stranger in Olondria (7.7%)
8. Charles E. Gannon, Fire with Fire (6.0%)

As you can see, my model attaches % chances to each nominee, deliberately avoiding the certainty of proclaiming one work the “sure” winner. This reflects how random the Nebula has been at times. There have been some true left-field winners (The Quantum Rose, for instance) that should remind us statistical certainty is not a possibility in this case.

Broadly speaking, I’m seeking to improve our sense of the odds from a coin-flip/random model to something more nuanced. For 2014, a “coin-flip” (i.e. randomly picking a winner) would have given a 12.5% chance to each of these 8 nominees. My prediction doubled those odds for Leckie/Gaiman, and lessened them for everyone else. While that lacks the crisp assurance of “this person will definitely win,” I think it correctly reflects how variable and unpredictable the Nebula really is.

A fundamental weakness of my model is that it does not take into account the specific excellence/content of a book. I’ve done that deliberately. My thought process is that if you want analysis/forecasts based on the content/excellence of a book, you can find that elsewhere on the web. I want Chaos Horizon to do something different, not necessarily imitate what’s already being done. I don’t know how, in the more abstract/numerical terms that Chaos Horizon uses, to measure the relative quality of a Leckie versus a Gaiman. I don’t think Amazon or Goodreads measures quality in a compelling enough fashion to be useful for Chaos Horizon, although I’m happy to listen to counter-arguments.

Even if we could come up with an objective measure of quality, how would we correlate that measurement to the Nebulas? Some of my indicators do (either directly or indirectly) mirror excellence/content, but they do so at several removes. If a book gets lots of nominations, I’m accepting that SFF readers (and voters) probably like it. If SFF readers like it, it’s more likely to win awards. Those are pretty tepid statements. I’m not, at least for the purposes of Chaos Horizon, analyzing the books for excellence/content myself. I believe that an interesting model could be built up by doing that—anyone want to start a sister website for Chaos Horizon?

Lastly, I’ve tried to avoid inserting too much of my opinion into the process. That’s not because I don’t value opinion; I really like opinion-driven websites on all sides of the SFF discussion. Opinion is simply a different model of prediction than the one I use. I think the Nebula/Hugo conversation is best served by having a number of different analyses from different methodological and ideological perspectives. Use Chaos Horizon in conjunction with other predictions, not as a substitute for them.

I posted last year about how closely my model predicted the past 14 years of the Nebulas. The formula was 70% successful at predicting the winner. Not terrible, but picking things that have already happened doesn’t really count for much.

I’ll wrap up this series of posts with my “To-Do” list for the 2015 Nebula Model.

Building the Nebula Model, Part 3

This post continues my discussion on building my 2015 Nebula Best Novel prediction. See Part 1 for an introduction, and Part 2 for a discussion of my Indicators.

Now that we have 12 different indicators, how do I combine them? This is where the theory gets sticky: how solidly do I want to treat each indicator? Am I trying to find correlations between them? Do I want to pick one as the “main” indicator as my base, and then refine that through some recursive statistical process? Do I treat each indicator as independent, or are some dependent on each other? Do I treat them as opinions or facts? How complicated do I allow the math to be, given the low N we have concerning the Nebulas?

I thought about this, read some math articles, scoured the internet, and decided to use an interesting statistical tool: the Linear Opinion Pool. Under this model, I treat my data-mining results as opinions, and combine them using a Reliability Factor to get a weighted combined percentage score. This keeps us from taking the data-mining results too seriously, and it allows us to weigh a great number of factors without letting any one of them dominate.

Remember, one of my goals on Chaos Horizon is to keep the math transparent (at a high school level). I want everyone who follows Chaos Horizon to be able to understand and explain how the math works; if they can’t, the model becomes a mysterious black box that lends the statistics an air of credibility and mystery I don’t want.

Here’s a basic definition of a Linear Opinion Pool:

a weighted arithmetic average of the experts’ probability distributions. If we let Fi(x) denote expert i’s probability distribution for an uncertain variable of interest (X), then the linear opinion pool Fc(x) that results from combining k experts is:

Fc(x) = w1F1(x) + w2F2(x) + … + wkFk(x)

where the weight assigned to Fi(x) is wi, and Σwi = 1.

Although the linear opinion pool is a popular and intuitive combination method with many useful properties, there is no method for assigning weights that is derived entirely from first principles. One can, however, interpret the weights in a variety of ways, and each interpretation lends itself to a particular way to calculate the weights.

This is a model often used in risk analysis, where you have a number of competing opinions about what is risky, and you want to combine those opinions to find any possible overlap (while also covering your ass from any liability). There’s plenty of literature on the subject; just google “Linear Opinion Pool” for more reading.

We have the probability distributions from my data mining. What weights do I use? That’s always the challenge in a Linear Opinion Pool. For Chaos Horizon, I’ve been weighting by how often that Indicator has actually chosen the Nebula in the past. So, if you used that Indicator and that Indicator alone to guess, how often would you actually be right? Not every Indicator comes into play every year, and sometimes an Indicator doesn’t help (like if all the nominated novels previously had Nebula nominations). We’ll be looking at all that data late in April.
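
Concretely, the pooling step might look like this minimal Python sketch, with illustrative distributions and weights standing in for the real indicator data:

```python
# A minimal Linear Opinion Pool: Fc(x) = sum of wi * Fi(x), with the
# weights normalized so they sum to 1. The "expert" distributions and
# weights below are illustrative, not real indicator data.

def linear_opinion_pool(distributions, weights):
    total = sum(weights)
    norm = [w / total for w in weights]  # enforce sum(wi) = 1
    nominees = distributions[0].keys()
    return {x: sum(w * f[x] for w, f in zip(norm, distributions))
            for x in nominees}

f1 = {"Book A": 0.50, "Book B": 0.50}    # a coarse 50/50 "expert"
f2 = {"Book A": 0.75, "Book B": 0.25}    # a more opinionated expert
print(linear_opinion_pool([f1, f2], [0.30, 0.70]))
# -> {'Book A': 0.675, 'Book B': 0.325}
```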

Now, on to my mathematical challenge: can I explain this in easy-to-understand terms?

A Linear Opinion Pool works this way: you walk into a bar and everyone is talking about the Nebula awards. You wander around, and people shout out various odds at you: “3 out of 4 times a prior Nebula nominee wins” or “70% of the time a science fiction novel wins” and so forth. Your head is spinning from so much information; you don’t know who to trust. Maybe some of those guesses overlap, maybe some don’t. All of them seem like experts—but how expert?

Instead of getting drunk and giving up, you decide to sum up all the opinions. You figure, “Hell, I’ll just add all those probabilities up, and then divide by the total number of suggestions.” Then you begin to have second thoughts: that guy over in the corner is really wasted and doesn’t seem to know what he’s talking about. You sidle over and ask his friend: how often has that guy been right in the past? He says 5% of the time, but that guy over there—the one drinking gin and tonics—is right 50% of the time. So you figure you’d better weight each opinion based on how correct it’s been in the past. You add things up using those weights, and voilà!, you’ve got your prediction.

Advantages:
It’s mathematically easy to calculate; no fancy software needed.
This allows me to add more indicators (opinions) very easily.
This treats my data mining work as an “opinion,” not a fact, which I think is closer to reality.
The weighting component allows me to dial up or dial down indicators easily.
The simple mathematics reflects the relative low amount of data.
The methodology is easy for readers to follow.

Disadvantages:
It’s not as mathematically rigorous as other statistical models.
The weighting component introduces a human element into the model which may be unreliable.
Because this treats my data mining results as “opinions,” not “facts,” it may compromise the reliability of the model for some readers.
Because it is simple, it lacks the flashiness and impressiveness of grander statistical models.

When we’re dealing with statistical modeling, the true test is the results. A rigorous model that is wrong all the time is worse than a problematic model that is right all the time. In my next post, we’ll talk about past accuracy. Here are my older posts on the Linear Opinion Pool and weighting if you want some more info.

As a last note, let me say that following the way the model is constructed is probably more interesting and valuable than the final results. It’s the act of thinking through how different factors might fit together that is truly valuable. Process, not results.
