
2016 Nebula Prediction

I’ve spun my creaky model around and around, and here is my prediction for the Nebula Best Novel category, which will be decided this weekend:

N.K. Jemisin, The Fifth Season: 22.5%
Ann Leckie, Ancillary Mercy: 22%
Naomi Novik, Uprooted: 14.7%
Ken Liu, The Grace of Kings: 13.3%
Lawrence Schoen, Barsk: 10.7%
Charles Gannon, Trial by Fire: 9.5%
Fran Wilde, Updraft: 7.3%

Remember, Chaos Horizon is a grand (and perhaps failed!) experiment to see if we can predict the Nebulas and Hugos using publicly available data. To predict the Nebulas, I’m currently using 10 “Indicators” of past Best Novel winners. I’ve listed them at the bottom of this post, and I suggest you dig into last year’s prediction post to see how I’m building the model. If you travel down that hole, bring plenty of coffee.

Simply put, though, I treat a bunch of indicators as competing experts (one person says the blue horse always wins! another person says when it’s rainy, green horses win!) and combine those expert voices to come up with a single number. While my model gives Jemisin a very slight edge this year, anyone can win the Nebula Best Novel award, and history shows that almost anyone does. We’ve had some real curveballs in this category in the last 15 years, and if you bet money on this award, you’d lose. What I suggest is treating the list as a starting point for further thought and discussion . . . Why would Jemisin be in the lead? What about The Fifth Season seems to make it a front-runner?

This year, Jemisin does very well because of her impressive Hugo and Nebula history (6 prior nominations), her sterling placement on year-end lists, her nominations for the Hugo, Locus Fantasy, and Kitschies awards, and the fact that this is the first novel in a series. Jemisin is very familiar to the Nebula audience and critically acclaimed. That’s a recipe for winning. The Nebulas tend to go to first books in a series (think Ancillary Justice or Annihilation from the past two years), so if Jemisin doesn’t win for Book #1 of The Broken Earth series, it could be quite a while before she has a viable chance to win again. Does that urgency help her? SFWA voters could always hand Book #3 the win instead, but that hasn’t happened in the past. I tend not to look much at content (there are plenty of other websites for that), but The Fifth Season does have some of the more experimental/literary prose Nebula voters have liked recently. Parts of it are in the second person, for instance. This book would fit pretty well with the Leckie and VanderMeer wins.

Leckie is probably too high in my formula—and that’s not because SFWA voters don’t like Leckie, but because Ancillary Justice won just two years ago. Do the SFWA voters really want to give Leckie another award for the same series so soon? Aside from that wrinkle, Ancillary Mercy has everything going for it: critical acclaim, award nominations, etc. A decade from now, I expect Leckie to have won the Nebula at least once more . . . but not until she publishes a new series.

I think Uprooted has a real shot. This is actually a great test case year, letting us weigh what SFWA voters value most. Past Nebula history and familiarity? That helps Jemisin and Liu; Novik has 0 prior Nebula noms. Popularity? That helps Novik—stroll over to Amazon or Goodreads, and you can see that Uprooted has 4-5 times more ratings than Jemisin or Leckie. In the past, though, the SFWA hasn’t much cared about mainstream popularity. If Uprooted wins, I’ll need to recalculate my formula to take popularity more into account.

Ken Liu will be familiar to the Nebula audience–he’s already won a Nebula in short fiction. My formula dings him because he didn’t show up on year-end lists or in the other awards. Same for Updraft, although we’re lacking the Nebula history for Wilde.

Gannon is the new Jack McDevitt—and McDevitt got nominated a bunch of times and then won. So a Gannon win isn’t out of the realm of possibility this year if the other books split the vote. Still, it’s hard to imagine voters jumping on to Book #3 of a series if Books #1 and #2 couldn’t win.

That leaves Schoen—a true wild card. Schoen had the most votes on the SFWA Recommended Reading list, and we don’t yet know how much that matters. If Schoen wins, I’ll have to completely rejigger my formula. Things are getting a little creaky as is, and it’s probably time to go back and rebuild the model for Year #4.

Always remember the Nebula is an unpredictable award: The Quantum Rose won over A Storm of Swords. Who saw that coming? That’s why everyone has a decent chance in my formula: no one dips below 5%.

Lastly, remember Chaos Horizon is just for fun, a chance to look at some predictions and think about who is likely to win. A different statistician would build a different model, and there’s no problem with that—statistics can’t predict the future. Instead, they help us to think about events that haven’t happened yet. That’s just one of many possible engagements with the awards. Good luck to all the Nebula nominees, and enjoy the ceremonies this weekend!

Indicator #1: Author has previously been nominated for a Nebula (80%)
Indicator #2: Author has previously been nominated for a Hugo (73.33%)
Indicator #3: Has received at least 10 combined Hugo + Nebula noms (46.67%)
Indicator #4: Novel is science fiction (73.33%)
Indicator #5: Places on the Locus Recommended Reading List (93.33%)
Indicator #6: Places in the Goodreads Best of the Year Vote (100.00%)
Indicator #7: Places in the Top 10 on the Chaos Horizon SFF Critics Meta-List (100.00%)
Indicator #8: Receives a same-year Hugo nomination (60%)
Indicator #9: Nominated for at least one other major SFF award (73.33%)
Indicator #10: Is the first novel in a series or a standalone (80%)

The percentage after each indicator tracks the data from 2001-2015 (when available): for example, 80% of eventual winners had previously been nominated for a Nebula, and so on.
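For the curious, here’s a minimal Python sketch of one way an “indicators as competing experts” ensemble can be combined into a single number. To be clear, this illustrates the general idea rather than my exact formula, and the two books and their indicator flags are hypothetical.

```python
# A minimal sketch of an "indicators as competing experts" ensemble.
# NOT the exact Chaos Horizon formula; the weights are the historical
# hit rates listed above, and the nominee flags below are hypothetical.

# Share of 2001-2015 winners that satisfied each indicator.
indicator_weights = {
    "prior_nebula_nom": 0.8000,                # Indicator #1
    "prior_hugo_nom": 0.7333,                  # Indicator #2
    "locus_recommended": 0.9333,               # Indicator #5
    "same_year_hugo_nom": 0.6000,              # Indicator #8
    "first_in_series_or_standalone": 0.8000,   # Indicator #10
}

# Hypothetical flags: 1 = the nominee satisfies that indicator.
nominees = {
    "Book A": {"prior_nebula_nom": 1, "prior_hugo_nom": 1,
               "locus_recommended": 1, "same_year_hugo_nom": 1,
               "first_in_series_or_standalone": 1},
    "Book B": {"prior_nebula_nom": 0, "prior_hugo_nom": 1,
               "locus_recommended": 1, "same_year_hugo_nom": 0,
               "first_in_series_or_standalone": 1},
}

def raw_score(flags):
    """Each satisfied indicator 'votes' with its historical hit rate."""
    return sum(indicator_weights[k] for k, v in flags.items() if v)

scores = {name: raw_score(flags) for name, flags in nominees.items()}
total = sum(scores.values())

# Normalize so the whole field sums to 100%, like the list above.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {100 * score / total:.1f}%")
```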

Estimating the 2016 Hugo Nominations, Part 5

Let’s wrap this torturous series of posts up with a few final things.

Over the last few days, I’ve built a series of models to predict the 2016 Hugos based on a number of assumptions, chief among them that voters will vote in the 2016 Hugos in patterns similar to last year’s. That’s an easy assumption to knock, but it gives us a place to start thinking and debating. Here are those posts with estimates: Introduction, Post 2 (Rabid Puppies), Post 3 (Typical Voters), Post 4 (Sad Puppies). I view Chaos Horizon more as a thought experiment (can we get anywhere with this kind of thinking?) than as some definitive fount of Hugo knowledge. The goal of any prediction is to be correct, not elegant.

By breaking the voters out into three groups and three turnout scenarios (40%, 60%, 80%), I produced 27 different models. To conclude, we can look to see if certain books show up in a lot of models, and then I’ll make that my prediction.

To view the models or create your own, use this Google Worksheet. Instructions are included in the worksheet, but you can cut and paste the data to create your own prediction.

Let’s look at one likely scenario: 80% Rabid Puppy vote, 60% Typical Vote, and 40% Sad Puppy vote. This represents the organization and high turnout of the Rabid Puppies, moderate enthusiasm from the more typical (or new) Hugo voters, and then lower turnout because of the way the Sad Puppy list was built. Here’s what you end up with:

Novel | Rabid Vote | Sad Vote | Typical Vote | Total
Votes per Group | 440 | 180 | 3,000 | 3,620
Seveneves | 440 | 108 | 196 | 744
Uprooted | 0 | 144 | 532 | 676
The Aeronaut’s Windlass | 440 | 151 | 30 | 621
Somewhither | 440 | 180 | 0 | 620
Ancillary Mercy | 0 | 65 | 532 | 597
Golden Son | 440 | 0 | 30 | 470
Agent of the Imperium | 440 | 0 | 0 | 440
The Fifth Season | 0 | 0 | 392 | 392
Aurora | 0 | 0 | 392 | 392
Honor At Stake | 0 | 173 | 0 | 173
A Long Time Until Now | 0 | 122 | 0 | 122
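If you’d rather see the combination step spelled out than dig through the worksheet, here’s a minimal Python sketch of this one scenario; the per-group, per-book numbers are copied straight from the tables in Posts 2-4.

```python
# A minimal sketch of the combination step for the 80/60/40 scenario
# above. Per-book, per-group estimates are copied from the tables in
# Posts 2-4; the prediction is just the sum across the three groups.

rabid = {"Seveneves": 440, "The Aeronaut's Windlass": 440,
         "Somewhither": 440, "Golden Son": 440,
         "Agent of the Imperium": 440}

sad = {"Seveneves": 108, "Uprooted": 144, "The Aeronaut's Windlass": 151,
       "Somewhither": 180, "Ancillary Mercy": 65,
       "Honor At Stake": 173, "A Long Time Until Now": 122}

typical = {"Seveneves": 196, "Uprooted": 532, "Ancillary Mercy": 532,
           "The Fifth Season": 392, "Aurora": 392,
           "The Aeronaut's Windlass": 30, "Golden Son": 30}

books = set(rabid) | set(sad) | set(typical)
totals = {b: rabid.get(b, 0) + sad.get(b, 0) + typical.get(b, 0)
          for b in books}

# The top five totals become the predicted ballot.
for book, votes in sorted(totals.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{book}: {votes}")
```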

So that makes the official 2016 Chaos Horizon Hugo prediction as follows:
Seveneves, Neal Stephenson
Uprooted, Naomi Novik
The Aeronaut’s Windlass, Jim Butcher
Ancillary Mercy, Ann Leckie
Somewhither, John C. Wright

Seveneves makes it in every scenario because it receives votes from all 3 groups. Now, my assumptions could be wrong—perhaps some voters are so angry that Seveneves appeared on the Rabid Puppy list that they won’t nominate it at all. However, even a modest showing for Seveneves among typical voters gets it on the ballot. Remember, a similarly complex Stephenson SF novel, Anathem, made the ballot just a few years ago.

Uprooted does well in my model because it’s one of the most popular SFF books of the year (as evidenced by its Nebula nomination, its appearance on year-end lists, and its popularity on Amazon and Goodreads), and it picks up votes from both the Typical voters and the Sad Puppies (it’s #4 on their list). This might be the major effect of the Sad Puppies in 2016: to act as a kind of swing vote when things are close, as they’ll likely be between Novik, Jemisin, and Leckie.

Then we have the two other books that overlap between the Sad and Rabid Puppies. You can think of this in two ways: two separate groups voting for these texts, or some Sad Puppies converting to Rabid Puppies. The statistical result is the same. I’d be a little cautious about the John C. Wright. While it placed #1 on the Sad Puppies list, was this placement inflated by passionate Wright fans? Compared to Butcher’s massive popularity, Wright is a fairly niche author. If Puppy support is weaker than predicted, I’d drop the Wright out and replace it with Jemisin’s book. That’s the slot I’m watching closely when nominations come out. I do have Jemisin down as a real possibility; it seems like a lot of readers think The Fifth Season is her best book. I may be underestimating Jemisin by relying on past performance. The modelling I use is prone to that problem: it uses historical data even when conditions on the ground have changed. Every model has its flaws.

Then we have Leckie. Lost in the Hugo controversies is the fact that the Ancillary novels have been some of the best received, reviewed, and rewarded SF novels of the millennium. Take a look at SFADB to see just how well these books have done: 12 major award nominations and 5 major wins, including prior Hugos and Nebulas. If one book is likely to break up a Rabid sweep, this is it.

Of course, things can also go the other way—I may be under-predicting the Rabid Puppies, and if I’m off by around 100 votes, that would push Leckie out and Pierce Brown up.

So that’s it! I’ll update my Hugo prediction page tomorrow when I get a chance. Does a ballot of Stephenson / Novik / Butcher / Wright / Leckie make sense? Is Jemisin or Pierce next in line after that? Are there other books that could be major contenders that I’m not seeing?

Predict away!

Estimating the 2016 Hugo Nominations, Part 4

Predicting how the “Sad Puppy” voters are going to nominate in 2016 is the most speculative part of all. The Sad Puppies drastically changed their approach, moving from a recommended slate to a crowd-sourced list. It’s an open question how that change will impact the Hugo nominations.

What we do know, though, is that last nomination season the Sad Puppies were able to drive between 100 and 200 votes to the Hugos in most categories, and their numbers likely grew in the final voting stage; I estimated 450 by then. All those voters are eligible to nominate again; if you figure the Sad Puppies doubled from the 2015 nomination stage to now, they’d be able to bring 200-400 votes to the table. Then again, their votes might be diffused over the longer list; some Sad Puppies might abandon the list completely; some Sad Puppies might become Rabid Puppies, and so forth into confusion.

When you do predictive modelling, almost nothing good comes from showing how the sausage is made. Most modelling hides behind the mathematics (statistical mathematics forces you to make all sorts of assumptions as well; they’re just buried in the formulas, such as “I assume the responses are distributed along a normal curve”) or black-boxes the whole thing, since people only care about the results. Black-boxing is probably the smart move, as it prevents criticism. Chaos Horizon doesn’t work that way.

So, I need some sort of decay curve for the Sad Puppy recommendations to run through my model. I decided to treat the Sad Puppy list as a poll showing the relative popularity of the novels; that approach worked pretty well in predicting the Nebulas. Here’s that chart, listing how many votes each Sad Puppy pick received, as well as its percentage relative to the top vote-getter.

Title | Author | Votes | % of Top
Somewhither | John C. Wright | 25 | 100%
Honor At Stake | Declan Finn | 24 | 96%
The Cinder Spires: The Aeronaut’s Windlass | Jim Butcher | 21 | 84%
Uprooted | Naomi Novik | 20 | 80%
A Long Time Until Now | Michael Z. Williamson | 17 | 68%
Seveneves | Neal Stephenson | 15 | 60%
Son of the Black Sword | Larry Correia | 15 | 60%
Strands of Sorrow | John Ringo | 15 | 60%
Nethereal | Brian Niemeier | 13 | 52%
The Discworld | Terry Pratchett | 11 | 44%
Ancillary Mercy | Ann Leckie | 9 | 36%

What this says is that for every 100 votes the Sad Puppies generate for John C. Wright, they’ll generate 36 for Ann Leckie. I know that stat is suspect: not everyone who voted on the Sad Puppy list was a Sad Puppy, and the numbers are so small that one author could be boosted up the list by a small group of fans. Still, this gives us something. What I’ll do is plug this into my chart of 40%, 60%, and 80% turnout scenarios, using the 450 Sad Puppy estimate, to come up with:

Sad Puppies
Scenario | 40% | 60% | 80%
Voters | 180 | 270 | 360
Ancillary Mercy | 65 | 97 | 130
Uprooted | 144 | 216 | 288
The Fifth Season | 0 | 0 | 0
Aurora | 0 | 0 | 0
Seveneves | 108 | 162 | 216
Golden Son | 0 | 0 | 0
Somewhither | 180 | 270 | 360
The Aeronaut’s Windlass | 151 | 227 | 302
Agent of the Imperium | 0 | 0 | 0
Honor At Stake | 173 | 259 | 346
A Long Time Until Now | 122 | 184 | 245
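For anyone who wants to check the arithmetic behind that chart, here’s a minimal Python sketch: each book’s share of the top vote-getter (the decay curve), scaled by the estimated number of Sad Puppy nominators at each turnout level. All the numbers come from this post.

```python
# A quick check of the chart above: scale each book's share of the top
# vote-getter (the decay curve) by the estimated number of Sad Puppy
# nominators at each turnout level. All numbers come from this post.

SAD_PUPPY_POOL = 450  # estimated Sad Puppy voters from the final stage

sp4_votes = {  # votes on the SP4 recommendation list (selected books)
    "Somewhither": 25, "Honor At Stake": 24,
    "The Aeronaut's Windlass": 21, "Uprooted": 20,
    "A Long Time Until Now": 17, "Seveneves": 15,
    "Ancillary Mercy": 9,
}
top = max(sp4_votes.values())

for turnout in (0.40, 0.60, 0.80):
    voters = SAD_PUPPY_POOL * turnout
    print(f"--- {round(turnout * 100)}% turnout ({round(voters)} voters) ---")
    for book, v in sp4_votes.items():
        print(f"{book}: {round(voters * v / top)}")
```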

Does this chart make any sense? I’m sure many will answer no. But look closely: could the remnants of the Sad Puppies, no matter how they’re impacted by the list, generate 150-300 votes for Jim Butcher this year? I find it hard to believe that they couldn’t produce that number. Remember, Butcher got 387 votes last year in the nomination stage. Some of that was Rabid Puppies (maybe up to 200), but where did the rest come from? And will all the Sad Puppy votes for Butcher vanish in just a year?

How about that Somewhither number—is it too big? It could also model some Sad Puppies being swayed over to the Rabid Puppy side, as could the Seveneves number. The Novik and Leckie numbers could represent the opposite: Sad Puppies who joined in 2015 and are now drifting toward more mainstream picks. I think I’d go conservative with this, staying in the 40% band to model the dispersion effect.

So now I have predictions for each of the 3 groups. If I combine those, I get 27 different models. Each model may be flawed in itself (overestimating or underestimating a group), but looking for trends that emerge across multiple models is where this project has been heading. In predictive modelling, normally you make the computers do this and you hide all the messy assumptions behind a cool glossy surface. Then you say “As a result of 1,000 computer simulations, we determined that the Warriors will win 57% of the time.” For the record, the Chaos Horizon model now says the Warriors will win 100% of the time and that Steph Curry will be nominated for Best Related Work.

We could go on and do 100 more models based on different assumptions and see if trends keep emerging. This kind of prediction is messy, unsatisfying, and flawed, and the more you actually understand the nuts and bolts behind it, the more it makes you doubt predictive modelling at all. Of course, the only thing worse would be if predictive modelling were 100% (or even 90% or 80%) accurate. Then we’d know the future. Come to think of it, wouldn’t that make for a good SF series . . . Better get Isaac Asimov on the phone. Maybe I should argue that this series is eligible for the “Best SF Story of 2017” Hugo.

Tomorrow we’ll start combining the models and see if anything useful emerges.

Estimating the 2016 Hugo Nominations, Part 3

This is where things get messy—perhaps to the point of incoherence. I estimated 5,000 voters in last year’s Hugos who seemed not to be associated with the Sad or Rabid Puppies: some Hugo voters from past years, some who joined to vote No Award against the Puppy picks, some who joined just to vote, some who maybe joined to participate in the controversy, and some who joined for unknown reasons. We don’t have much past data on this group, so how can we calculate how they’re likely to vote in 2016?

To be honest, we probably can’t, not with any real certainty. What I want is just to get in the ballpark, producing a low and a high estimate that we can compare to the Rabid Puppies estimate. That’ll at least tell us something. You know me: I always like numbers, no matter how rough they are.

What we do have is data from past Hugo years, and if we assume that voting patterns won’t be wildly different from previous years, we at least have a place to start. So, the first thing I’ll do is take a look at previous voting percentages in the Hugos. This chart shows what percentage of the vote the #1, #2, etc. novel receives in a typical Hugo year. For these purposes, I averaged the voting patterns from 2010-2013 (the 4 prior years with no Puppy influence, drawn from the Hugo voting packets).

Table 1: Voting Percentages in the Hugo Best Novel Category

Rank | 2013 | 2012 | 2011 | 2010 | Average
Book #1 | 17.34 | 18.27 | 15.01 | 20.30 | 17.73
Book #2 | 12.40 | 17.01 | 12.98 | 15.00 | 14.35
Book #3 | 12.13 | 13.57 | 12.24 | 14.30 | 13.06
Book #4 | 11.95 | 8.45 | 9.96 | 11.00 | 10.34
Book #5 | 10.60 | 7.41 | 9.36 | 8.90 | 9.07
Book #6 | 9.07 | 7.30 | 8.88 | 8.90 | 8.54
Book #7 | 8.18 | 7.20 | 8.64 | 7.60 | 7.91
Book #8 | 8.09 | 6.89 | 8.28 | 7.00 | 7.57
Book #9 | 6.65 | 6.47 | 7.68 | 7.00 | 6.95
Book #10 | 6.20 | 6.37 | 7.20 | 6.40 | 6.54

This means that the most popular Hugo book, in any given year, gets around 17% of the vote, with a range of 15%-20% in the years I looked at. So if 4,000 people vote in 2016, we might estimate the top book as getting between 600-800 votes. If 3,000 people vote, that drops us down to an estimate of 450-600 votes.

Does this estimate tell us anything, or is it just useless fantasizing? I can see people arguing either way. What this does is narrow the range down to something somewhat sensible. We’re not predicting Ann Leckie is going to get 2000 votes for Best Novel. We’re not predicting she’s going to get 100. I could predict 450-800 and then match that against the 220-440 Rabid Puppies prediction. That would tell me Leckie seems like a likely nominee.

We can destroy this prediction by making different assumptions. I could assume that the new voters to the Hugos won’t vote in anything like typical patterns, i.e., that they are complete unknowns. Maybe they’ll vote Leckie at a 75% rate. Maybe they’ll vote her at 0%. Those extremes grate against my thought patterns. If you know Chaos Horizon, I tend to choose something in the middle based on last year’s data. That’s a predictive choice I make; you might want to make other ones.

I believe voting patterns will be closer to traditional patterns than totally different from them. You may believe otherwise, and then you’ll need to come up with your own estimate. If I’m off, how far off am I, and in what direction? Too low by 100-200 votes? Too high by 100-200 votes? And if I’m off by only that much, is the outcome of this prediction affected?

So, this says . . . if these 5,000 vote along similar lines to past Hugo voters, and we imagine three turnout scenarios, where do we end up?

Let’s not drop in book titles yet; let’s just multiply Table 1 by three different turnout scenarios for our 5,000 2015 voters (40%, 60%, and 80%):

Table 2: Estimated Votes in the 2016 Hugo Best Novel Category Based on Prior Voting Patterns

Turnout | 40% | 60% | 80%
# of Typical Voters | 2,000 | 3,000 | 4,000
Book #1 | 355 | 532 | 709
Book #2 | 287 | 430 | 574
Book #3 | 261 | 392 | 522
Book #4 | 207 | 310 | 414
Book #5 | 181 | 272 | 363
Book #6 | 171 | 256 | 342
Book #7 | 158 | 237 | 316
Book #8 | 151 | 227 | 303
Book #9 | 139 | 209 | 278
Book #10 | 131 | 196 | 262
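Here’s a minimal Python sketch of how Table 2 falls out of Table 1: average the 2010-2013 shares per ballot position, then multiply by the voters in each turnout scenario. Only four positions are included to keep it short.

```python
# A minimal sketch of how Table 2 falls out of Table 1: average the
# 2010-2013 vote shares per ballot position, then multiply by the
# number of voters in each turnout scenario. (Four positions shown.)

shares = {  # % of ballots per position, 2013 / 2012 / 2011 / 2010
    1: [17.34, 18.27, 15.01, 20.3],
    2: [12.40, 17.01, 12.98, 15.0],
    3: [12.13, 13.57, 12.24, 14.3],
    10: [6.20, 6.37, 7.20, 6.4],
}

ELIGIBLE = 5000  # estimated non-Puppy voters from the 2015 Hugos

for turnout in (0.40, 0.60, 0.80):
    voters = ELIGIBLE * turnout
    print(f"--- {round(turnout * 100)}% turnout ({round(voters)} voters) ---")
    for rank, history in shares.items():
        avg = sum(history) / len(history)  # e.g. 17.73% for Book #1
        print(f"Book #{rank}: {round(voters * avg / 100)} votes")
```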

What I’m interested in is whether these numbers beat the Rabid Puppy numbers from the last post. Even if we assume robust Rabid Puppy turnout of 440 votes, we have 1 novel above that in the 60% scenario and 3 in the 80% scenario. Even if you pump the Rabid Puppy number up to 500, we still have at least 1 novel above it in both the 60% and 80% scenarios. If we lower the Rabid Puppy vote to a more modest 400, we wind up with 2 in the 60% and 4 in the 80%. This is the piece of data I want: in most turnout scenarios, a few “typical” books beat the Rabid Puppies, but not all of them do. I’d estimate we’re in store for a mixed ballot. Only in very modest turnout scenarios (40%) do the Rabid Puppies sweep Best Novel.

We do need to factor in the Sad Puppies (next post), but the numbers don’t suggest to me that the Rabid Puppies will manage 5 picks on the final Hugo ballot. Nor do the numbers suggest they’ll wind up with 0 picks.

Two questions remain for this post. First, what will the turnout be? I don’t know how many of those 5,000 will vote. I know passions are high, so I assume the turnout will be high. Then again, this stage is difficult to vote in: you have to read a bunch of novels, remember when the ballot is due, realize you’re eligible to nominate, etc. To my eye, somewhere between 50-75% seems about right. I’m going to pick a conservative 60% just to have something to work with. Since I included all three bands, you’re free to pick anything according to your tastes and your own sense of what will happen.

Next, what is Novel #1 going to be? Novel #2? Novel #3? I’m just going to use my SFF Critics Meta-List to fill these slots in. I definitely think the top of that list (Leckie, Jemisin, Novik) is what this group will be voting for. I’ll preserve ties, so if we go to the Top 12 contenders I’m tracking for this prediction, here’s where they show up:

1. Ancillary Mercy (estimate 17.73% of vote)
1. Uprooted (estimate 17.73% of vote)
3. The Fifth Season (estimate 13.06% of vote)
3. Aurora (estimate 13.06% of vote)
10. Seveneves (estimate 6.54% of vote)

Golden Son and The Aeronaut’s Windlass did show up on the list, so I’m going to give them 1% of the vote: something, but not a lot. This is in line with Butcher’s past totals from before the Sad Puppies; he was only getting a handful of votes in the 2009 nomination data, when WorldCon showed us how many votes everyone got.

Could those be off significantly? Sure they could. That’s why it’s an estimate! Multiply those out in the scenarios, and you get this chart:

Typical Voters
Turnout Scenario | 40% | 60% | 80%
Voters | 2,000 | 3,000 | 4,000
Ancillary Mercy | 355 | 532 | 709
Uprooted | 355 | 532 | 709
The Fifth Season | 261 | 392 | 522
Aurora | 261 | 392 | 522
Seveneves | 131 | 196 | 262
Golden Son | 20 | 30 | 40
Somewhither | 0 | 0 | 0
The Aeronaut’s Windlass | 20 | 30 | 40
Agent of the Imperium | 0 | 0 | 0
Honor At Stake | 0 | 0 | 0
A Long Time Until Now | 0 | 0 | 0

Leckie and Novik get lots of votes, probably enough to beat the Rabid Puppies without any help. Given that Leckie pulled in 15.3% of the vote in 2015 and 23.1% in 2014, wouldn’t her vote percentage be somewhere in that ballpark for 2016? The average of those two is 19.2%, and I predicted her at 17.73% in the chart above. Estimating is a different act from logical proof, and one that is ultimately settled by the event—come a few weeks, we’ll know the ballot, and I’ll either be right or wrong. Jemisin and Robinson get votes and will be competitive. Stephenson is down lower, but he also appears on the Rabid and Sad lists, so that jumps his total over Jemisin’s and Robinson’s and onto the ballot.

I know this will be the most disliked of the predictions. You know my theory at Chaos Horizon: any estimate gives you a place to start thinking. Even if you vehemently disagree with my logic, you now have something to contest. What makes more sense? If you apply those estimates, what ballot do you come up with? So argue away!

Tomorrow the Sad Puppies, and then we combine the three charts to get a bunch of different scenarios. If we see patterns across those scenarios, that’s the prediction.

2016 Nebula Prediction 2.0

Time to finalize my 2016 Nebula Best Novel prediction.

This year, it’s the most boring and conservative prediction I’ve ever come up with. To catch up those of you who aren’t familiar with the prior discussions on Chaos Horizon: the SFWA made their Recommended Reading list available this year. After close analysis, it appears that this Recommended Reading list closely aligns (to the tune of 80%) with the final Nebula nominations. Since Chaos Horizon tries to be a data-driven site, using past Nebula and Hugo patterns for its predictions, we’re not going to find any better data than that.

As such, my prediction needs to mirror the top of the SFWA recommended reading list. Like I said, that’s boring and safe, but it is what it is. Here’s the Top 10 from the SFWA Recommended Reading list, as of 2/18/16:

Votes | Title | Author | Publisher | Date
35 | Barsk: The Elephants’ Graveyard | Schoen, Lawrence M. | Tor Books | Dec-15
33 | Raising Caine | Gannon, Charles E. | Baen | Jul-15
29 | Updraft | Wilde, Fran | Tor Books | Sep-15
24 | Uprooted | Novik, Naomi | Del Rey | May-15
22 | The Grace of Kings | Liu, Ken | Saga Press | Apr-15
21 | Ancillary Mercy | Leckie, Ann | Orbit | Oct-15
19 | The Fifth Season | Jemisin, N. K. | Orbit | Aug-15
18 | Beasts of Tabat | Rambo, Cat | WordFire Press | Apr-15
18 | Karen Memory | Bear, Elizabeth | Tor Books | Feb-15
18 | The Traitor Baru Cormorant | Dickinson, Seth | Tor Books | Sep-15

There’s no reason to expect the Nebula Best Novel nominations to look any different from the top of this list. We may see one novel from lower down, like The Fifth Season, jump up into the top six; that has happened in the past. The Fifth Season is particularly compelling due to Jemisin’s three prior Best Novel Nebula nominations, and I place a lot of stock in former nominations. But who would she replace? Leckie, who won two years ago and whose Ancillary series is one of the most critically acclaimed works of the decade? Ken Liu, who has 7 prior Nebula nominations for his short fiction? Uprooted, one of the most read and talked-about fantasy novels of the year? Wilde, Gannon, and Schoen, all of whom have a large vote lead? I wouldn’t be shocked to see Jemisin make it, but I’d be surprised to see anyone else leap up.

So, here’s my prediction. These are in the order of who I think is most likely to get nominated, not who I think is most likely to win. Also, I predict who I think will get nominations, not who should get nominations. I’ll grind through my winning prediction after we get the nominees:

1. Barsk: The Elephants’ Graveyard, Lawrence Schoen: The absolute surprise of the SFWA Recommended Reading list, this SF novel about a post-human future came out at the very end of the year, too late to make any of the “year’s best” lists. Schoen does have 3 prior Nebula nominations (0 wins) in the Novella category over the past three years, so that familiarity helped him roar up the list. The Nebulas have a history of giving a push to overlooked novels, and this seems to be another example. It’s still lightly read, at least according to Amazon and Goodreads; a Nebula nomination would bring it a lot of attention. If Barsk gets nominated, that will also give us some great data about how much a Nebula nomination impacts the Hugos.

2. Raising Caine, Charles Gannon: Gannon has become a favorite of the Nebulas (the new McDevitt?), with two prior Best Novel nominations for this same series. Raising Caine mixes it up, giving us a contact/strange planet story. Its length (almost 800 pages) and place in a series (#3) would normally be strikes against a Nebula nom, but with such a high placement on the SFWA list, Gannon seems like a safe bet again.

3. Updraft, Fran Wilde: Wilde’s book hovers (I couldn’t resist the bad pun) in the territory between YA and Adult, and may grab a Norton (the Nebula’s YA category) nomination this year as well. If it does, this might signal a shift for the Nebulas, with a willingness to nominate more YA books not by Neil Gaiman. Wilde would be new to the Nebulas, having 0 prior nominations.

4. Uprooted, Naomi Novik: Novik has almost every metric going for her: good sales, good placement on year-end lists, strong fan response. She has no Nebula history (0 nominations), although she did grab a Hugo Best Novel nomination back in 2007 for Temeraire. Novik was at the top of the SFWA Recommended Reading list when it debuted, but she hasn’t picked up much steam since. Still, I think this is a safe bet and a strong contender to win the Nebula.

5. Grace of Kings, Ken Liu: Liu has been a recent Nebula darling: 7 short fiction nominations since 2012. This is his first novel, and since the Nebula audience is already very familiar with his short fiction from prior nominations, that brings a lot of eyeballs to the text. In Chaos Horizon predictions, eyeballs = possible voters. Ken also shared in a Best Novel nomination last year as the translator of Cixin Liu’s The Three-Body Problem.

6. Ancillary Mercy, Ann Leckie: Leckie is coming off of two straight Nebula nominations for this series, including her win for Ancillary Justice in 2014. I don’t expect anything to change this year; the final volume was well received as a fitting conclusion to the trilogy. Could there be a little Leckie fatigue, though? After so many awards over the past two years, could Nebula voters want to nominate someone else?

7. The Fifth Season, N.K. Jemisin: Jemisin has three prior Best Novel Nebula noms, in 2011, 2012, and 2013, which is every year she’s been eligible in the category (she’s published 5 novels, but some years more than one). If anyone can outperform their place on the list, I think it’s Jemisin.

At this point, let me break from the SFWA list and include some possible strong competitors from lower down:

8. Karen Memory, Elizabeth Bear: Bear seems like a possible contender with her unique setting and decent placement on the SFWA list. In the negative column, she has 0 total Nebula nominations ever, and Karen Memory doesn’t perform particularly well in popularity metrics. The 19th century steampunk setting might be a challenge for some voters as well.

9. Aurora, Kim Stanley Robinson: Robinson has been a perennial Nebula favorite (12 total nominations, 3 wins, including Best Novel wins for 2312 in 2013 and Red Mars back in 1994). Even though he’s tied for #15 on the SFWA list, Aurora is the kind of hard SF novel that appeals to the SF wing of the Nebulas; that group has always had enough votes to put 1-2 books on every recent Nebula ballot. If anyone dramatically outperforms their SFWA list placement, it could be Robinson.

10. The Water Knife, Paolo Bacigalupi: If Aurora doesn’t make it, this book is the other logical choice for a SF novel from a recent winner. Bacigalupi roared to huge Nebula and Hugo success with The Windup Girl back in 2010, and this is his first proper “adult” SF novel since then. 5 years is an eternity in these awards—has his popularity cooled off?

Everyone else seems unlikely. Cixin Liu got a nomination last year, but The Dark Forest is way down the list at #15. Maybe a book like The Traitor Baru Cormorant or The House of Shattered Wings has some buzz I’m not seeing, so those might be possibilities. Cat Rambo could grab support for Beasts of Tabat, but her position as SFWA President would seem to create a significant conflict of interest in accepting a nomination. Laura Anne Gilman did get a Nebula Best Novel nomination back in 2010, so Silver on the Road is a possibility. The Nebulas have only ever nominated Stephenson once, back in 1994, so I don’t see Seveneves as having any real chance. It’s probably best never to count McDevitt completely out, but Thunderbird didn’t do well on the list this year.

There’s also an outside possibility that the SFWA Recommended Reading list won’t be predictive this year. Maybe making it public changed the dynamic so much that it’s no longer accurate. We won’t know that until the noms come out, though.

The fun thing about predicting is that we’ll know the answers soon. Nebula nominations should be announced shortly. Then it’s on to the Hugos: controversy ahoy!

Predicting the Hugos: Checking in with Sad Puppies IV

As we turn to the new year and start thinking about predicting the 2016 Hugo nominations, it’s important to see what kind of recommendations are emerging from the Sad Puppy IV camp. According to Kate Paulk (one of this year’s organizers, along with Sarah Hoyt and Amanda from the blog Mad Genius Club), this is how it will work:

To that end, this thread will be the first of several to collect recommendations. There will also be multiple permanent threads (one per category) on the SP4 website where people can make comments. The tireless, wonderful volunteer Puppy Pack will be collating recommendations.

Later – most likely somewhere around February or early March, I’ll be posting The List to multiple locations. The List will not be a slate – it will be a list of the ten or so most popular recommendations in each Hugo category, and a link to the full list in all its glory. Nothing more, nothing less.

It’s an open question exactly what kind of impact this will have on the 2016 Hugo nominations. Will these recommendations operate as a slate, concentrating 100-300 (or more?) Sad Puppy votes into an unbreakable voting bloc? Or will a longer list diffuse the Sad Puppy vote, leading to a subtler effect on the final ballot? A lot is going to depend on what the list actually looks like, so, without further ado, here is the Chaos Horizon tabulation of the Sad Puppies IV recommendations, taken from the Best Novel web page:

Title | Author | Votes
Somewhither | Wright, John C. | 12
A Long Time Until Now | Williamson, Michael Z. | 10
Seveneves | Stephenson, Neal | 10
Uprooted | Novik, Naomi | 8
Honor at Stake | Finn, Declan | 7
The Aeronaut’s Windlass | Butcher, Jim | 6
The Just City | Walton, Jo | 5
Strands of Sorrow | Ringo, John | 5
The Desert and the Blade | Stirling, S.M. | 5
Ronin Games | Harmon, Marion | 4
Son of the Black Sword | Correia, Larry | 4
Ancillary Mercy | Leckie, Ann | 4

To produce this, I went through and counted each recommendation from the 150 comments. Sometimes the recommendations were a little unclear, so don’t take this as 100% accurate, but rather as a rough picture of the current state of the SP4 list. If anyone wants to count and double-check, please do! Here’s a link to my spreadsheet, which contains all recommended novels.
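The tallying itself is trivial once the recommendations have been extracted; reading 150 ambiguous comments is the hard part. Here’s a minimal Python sketch, with a short hypothetical list standing in for the real cleaned-up data.

```python
# A minimal sketch of the tallying step, assuming the recommendations
# have already been pulled out of the comment thread by hand. The list
# here is hypothetical; the real work is reading the ambiguous comments.

from collections import Counter

recommendations = [  # one entry per recommendation found in a comment
    "Somewhither", "Seveneves", "Somewhither", "Uprooted",
    "A Long Time Until Now", "Somewhither", "Seveneves",
]

tally = Counter(recommendations)
for title, count in tally.most_common():
    print(f"{title}: {count}")
```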

So, if this were the final list—and I expect it to change greatly by the time we reach March—how would this impact the 2016 Hugo nominations?

I immediately see 4 “overlap” situations with more typical Hugo books (Stephenson, Novik, Walton, Leckie). Any number of votes driven to Seveneves, Uprooted, or Ancillary Mercy all but assures those books of a Hugo nomination. I have each of those as very likely to get nominations anyway (Leckie beat several SP/RP recommendations last year; Novik has the buzziest fantasy novel of the year; Stephenson is well liked by Hugo voters, with numerous past noms). Walton is the dark horse here; My Real Children missed the 2015 ballot by only 90 votes. How many votes could being in the #6 slot of Sad Puppies IV get you?

Three other texts stand out to me from this early list as real potential Hugo nominees. A Long Time Until Now is a military SF novel published by Baen; it has a solid number of Amazon ratings (269), and Michael Z. Williamson was in the middle of last year’s kerfuffle with the Hugo-nominated Wisdom From My Internet. This could emerge as the “Baen” book for both the Sad Puppies and Rabid Puppies, although RP is much harder to predict. If it overlapped between those two groups, it would be a strong possibility.

Somewhither by John C. Wright was published by Vox Day’s Castalia House, and would seem to be exactly the kind of book the Rabid Puppies would select for their slate. Wright was nominated 6 times for the Hugo last year, although one was rescinded for eligibility reasons. This will be a work to keep your eye on as a test of SP/RP numbers.

Jim Butcher grabbed 387 votes for Skin Game last year. The Aeronaut’s Windlass is the first in a new fantasy series, which might make it easier for new readers to get into. I don’t think this book is as well-liked as the Dresden novels, but is it capable of grabbing tons of votes? Butcher’s reading audience is just that big.

Lastly, will certain writers from this list turn down Hugo nominations? Correia did exactly that last year, and I’ve heard rumors (but not seen sources; if someone has one, please post in the comments) that Ringo would do the same. Would someone like Butcher or Stephenson just not want the hassle in 2016? They’re so famous and sell so many books that they don’t need the Hugos.

There’s still a long way to go in the Hugo Wars of 2016. What I’ll do at Chaos Horizon is the work I always do: collecting information, posting lists, and speculating about what might happen. Enjoy the chaos!

Predicting the Hugos: Thinking of 2016

It’s getting close to the first of the year, and I’ll have my first 2016 Hugo prediction up soon! Here at Chaos Horizon, we use the stats and data from previous years to predict what will happen going forward. Obviously, that’s a very specific methodology that won’t be to everyone’s taste.

Here’s how it works: I begin with the assumption that the 2016 Hugos will follow the patterns of previous years, particularly 2015. Of course, every year is different, but this gives us a starting point for a prediction. This assumption is useful in some cases (the Warriors have gone 28-1 in the NBA so far; should I predict the Warriors to win their next basketball game?) and not useful in others (say I rolled three fours in a row with a pair of dice; should I predict the next roll to also be a four?).

The tricky part for the 2016 Hugos is to make a decent estimate of how much impact the Sad/Rabid Puppies will have this year. Before Correia and Kloos withdrew, the Puppies took 4 out of the top 5 Novel spots in the 2015 Hugos. After the controversy surrounding the slates hit in 2015, there was a huge increase of Hugo voters (5,653 people voted in the Best Novel category, up from 3,137 in 2014). All 5,653 of these voters are eligible to vote in the 2016 Hugo nominations—but how many of them will? And what percentage will follow/be influenced by the Puppies?

I don’t think we’ll exactly know until the nomination stats are released next August, but what we can do is work on some sensible guesses.

First thing, how many people will nominate in 2016? We saw a voting increase between the 2014 and 2015 final Hugo ballots of 5,653/3,137 = 1.8x. If we apply that multiplier to last year’s nomination number, we’d get 1,827 nomination ballots from 2015 × 1.8 ≈ 3,289 nomination ballots in the Best Novel category. The controversy and high emotions surrounding last year’s Hugos could drive that number even higher. Remember, though, that the nomination process doesn’t get nearly as much ink as the final ballot does.

Next, to predict the nominees for the 2016 Hugos, I’ll begin with some stats from 2015:

Best Novel Nominations, 2015 Hugo (1,827 ballots; * = declined nomination)

Votes | Title | Author | %
387 | Skin Game | Jim Butcher | 21.2%
372 | Monster Hunter Nemesis | Larry Correia | 20.4% *
279 | Ancillary Sword | Ann Leckie | 15.3%
270 | Lines of Departure | Marko Kloos | 14.8% *
263 | The Dark Between the Stars | Kevin J. Anderson | 14.4%
256 | The Goblin Emperor | Katherine Addison | 14.0%
210 | The Three-Body Problem | Liu Cixin | 11.5%
199 | Trial By Fire | Charles E. Gannon | 10.9%
196 | The Chaplain’s War | Brad Torgersen | 10.7%
168 | Lock In | John Scalzi | 9.2%
160 | City of Stairs | Robert Jackson Bennett | 8.8%
141 | The Martian | Andy Weir | 7.7%
126 | Words of Radiance | Brandon Sanderson | 6.9%
120 | My Real Children | Jo Walton | 6.6%
112 | The Mirror Empire | Kameron Hurley | 6.1%
92 | Lagoon | Nnedi Okorafor | 5.0%
88 | Annihilation | Jeff VanderMeer | 4.8%

Correia and Kloos turned down their nominations. We need to be aware that something similar could happen again. Also note how close Addison was—she almost beat Anderson (7 votes).

Let’s transform that chart by taking out the author’s names and replacing them with either Sad/Rabid Overlap (appeared on both the Sad + Rabid slates), Sad No Overlap (appeared only on the Sad Puppy slate), Rabid No Overlap (appeared only on the Rabid Puppy slate), or Typical (did not appear on a slate). Here’s what you get:

Best Novel (1,827 ballots)

Spot #1 | 387 | Sad/Rabid Overlap #1 | 21.2%
Spot #2 | 372 | Sad/Rabid Overlap #2 | 20.4% *
Spot #3 | 279 | Typical #1 | 15.3%
Spot #4 | 270 | Sad/Rabid Overlap #3 | 14.8% *
Spot #5 | 263 | Sad/Rabid Overlap #4 | 14.4%
Spot #6 | 256 | Typical #2 | 14.0%
Spot #7 | 210 | Typical #3 | 11.5%
Spot #8 | 199 | Sad No Overlap #1 | 10.9%
Spot #9 | 196 | Rabid No Overlap #1 | 10.7%
Spot #10 | 168 | Typical #4 | 9.2%
Spot #11 | 160 | Typical #5 | 8.8%
Spot #12 | 141 | Typical #6 | 7.7%
Spot #13 | 126 | Typical #7 | 6.9%
Spot #14 | 120 | Typical #8 | 6.6%
Spot #15 | 112 | Typical #9 | 6.1%
Spot #16 | 92 | Typical #10 | 5.0%
Spot #17 | 88 | Typical #11 | 4.8%

This allows us to see the relative power of the picks. When the Sad and Rabid Puppies overlapped, they were able to generate more votes than anything but the most popular Typical pick. At the top, in Spot #1 and #2, they had a comfortable margin (100 votes). When the Sad and Rabid puppies separated, they fell behind Typical #1, #2, and #3. We can also see that the Sad/Rabid numbers fell off rapidly: Overlaps #1 and #2 generated more votes than less popular Overlaps #3 and #4.
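Here’s a minimal Python sketch of that relabeling step, with slate membership shown for just a handful of the 2015 nominees as an illustration; the full mapping is implicit in the two charts above.

```python
# A minimal sketch of the relabeling step: map each 2015 nominee to the
# slates it appeared on, then bucket it. Only a handful of books are
# shown; the full mapping is implicit in the two charts above.

slates = {  # book: (on the Sad slate, on the Rabid slate)
    "Skin Game": (True, True),
    "Ancillary Sword": (False, False),
    "Trial By Fire": (True, False),
    "The Chaplain's War": (False, True),
}

def bucket(on_sad: bool, on_rabid: bool) -> str:
    if on_sad and on_rabid:
        return "Sad/Rabid Overlap"
    if on_sad:
        return "Sad No Overlap"
    if on_rabid:
        return "Rabid No Overlap"
    return "Typical"

for book, (on_sad, on_rabid) in slates.items():
    print(f"{book}: {bucket(on_sad, on_rabid)}")
```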

So, if everything stayed the same, or the number of votes generated by the Sad Puppies and Rabid Puppies increased at the same rate as the Typical votes, you’d predict Sad/Rabid Overlap #1 and #2 to make the final ballot, with the most popular Typical #1 book also to make the ballot, and then a dogfight for Spots #4-#5.

But everything isn’t likely to stay the same. Sad Puppies IV is already putting together a crowd-sourced list; with more than 5 suggestions in the Novel category, that could very well dilute the vote across those nominations. I suspect we’ll see something similar to last year: works at the top of the list that are very popular, on the order of Butcher popular, will generate far more votes than less popular works lower down on the list.

We also have no idea whether what I’m calling the “Typical,” “Sad,” and “Rabid” votes will increase at the same rate. Other discernible voting blocs could also emerge, although you can’t vote against anything in the nomination stage, so no explicit anti-Puppy vote can occur without generating an opposing slate.

This could also create a situation where the Sad Puppy and the Typical votes overlap, as when the Sad Puppies pick Seveneves or Uprooted, books that are already strong Hugo contenders. I’ll take a look at the leaders in the Sad Puppy nominations tomorrow.

I think things will be very close in the lower spots. A surge of 100-200 voters in either direction could change the outcome, and the kind of predictive work I do at Chaos Horizon is incapable of tracking things that finely, particularly when faced with major change.

So, here are my initial thoughts, what I’m calling the Overlap Theory: since the 2016 Hugo nominations are likely to draw such attention, the works most likely to get nominations are those that overlap in more than one of the Typical/Sad/Rabid categories. Overlapping will usually be more powerful than going it alone. So here’s what my initial top of the ballot might look like:

1. Typical/Sad Overlap #1
2. Sad/Rabid Overlap #1
3. Typical/Sad Overlap #2
4. Sad/Rabid Overlap #2
5. Typical No Overlap #1
6. Sad/Rabid Overlap #3
7. Sad/Rabid Overlap #4
8. Typical No Overlap #2
9. Typical No Overlap #3
10. Sad No Overlap #1
11. Rabid No Overlap #1

That’s assuming no major shifts in percentages in the relative group sizes from last year. If you’ve got any suggestions on how to calculate such shifts, let me know!

So, as we dive into another controversial year, what do you think? Do the 2015 stats provide any meaningful guidance for 2016, or will things be so dynamic/unpredictable that the past is no guide to the future? What impact do you think the Puppies will have on the 2016 nominations? How can we best model that impact here at Chaos Horizon?

Check out Rocket Stack Rank

Rocket Stack Rank looks like a cool new Hugo/Nebula stats blog, running data and analysis for Novellas, Novelettes, and Short Stories! Since Chaos Horizon has never had time to delve into those categories, it’s cool to see someone else tackling the same kind of work I do.

More data is always good. They’ve got a nice post about the correlation between the Locus Recommended Reading list and the eventual Hugo/Nebula nominees. I use similar data in my Hugo and Nebula predictions, although for novels. It’ll be interesting to see if Rocket Stack Rank will develop their own predictions (hint, hint) . . .
