Archive | December 2015

Chaos Horizon Declaring Itself Ineligible for 2016 Hugos

After careful thought, I’m declaring that Chaos Horizon (and myself) will not accept a Hugo nomination in 2016. Because Chaos Horizon reports so extensively on the numbers related to the Hugo process, I feel it would be a conflict of interest to be part of that process in any way.

Since I do reporting and analytical work here at Chaos Horizon, it’s important for me to maintain some journalistic distance from the awards. I couldn’t do that if I were nominated. This is consistent with my past practice; I haven’t voted in the Hugos since I began Chaos Horizon. Simply put, the scorekeeper can’t play the game.

I do want to thank anyone who has said nice things about Chaos Horizon or suggested me for a Hugo. Everyone, of course, is free to discuss the merits (or lack thereof) of Chaos Horizon in any venue they so wish. I’ll just silently turn down the nomination if I happen to be nominated.

I waffled on making a statement like this. I don’t think Chaos Horizon is competitive in the Hugos, so this seems a little too much like grandstanding. Still, my old journalism professors would say it’s necessary: you either report on the story or you are the story, not both.

To be safe, I’m also going to declare myself ineligible for the Pulitzer, the Nobel Prize, and the NBA MVP.

Thanks to everyone for supporting Chaos Horizon this year! The site grew by about 100 times from 2014, and I think we’ve done some interesting work creating lists, looking at data, and making predictions. Thanks to everyone for keeping me honest by double-checking the stats. Here’s to a great 2016!

Hugo/Nebula Contenders and Popularity, December 2015

It’s the end of the month, so let’s check in on Goodreads and Amazon popularity (as measured by number of rankings) for various Hugo and Nebula contenders. This is one of many different measures I look at when predicting the Hugo and Nebula nominees.

As I’ve said before, this data is interesting but not necessarily predictive for the Hugos and the Nebulas. The Goodreads and Amazon # of rankings doesn’t measure overall popularity; it measures popularity with the Goodreads and Amazon crowds, which may or may not be well synced up with Hugo or Nebula voters. We have no real access to sales numbers to actually measure books sold, so this is about the best we can do. Historically, being popular hasn’t helped much for the Nebulas. For the Hugos, it matters more, but only when that popularity is combined with strong critical response and past Hugo history.

I’m slowly migrating all my data over to Google Sheets and the cloud, so that you can look at and process the data any way you want. Here’s the link.
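
If you’d rather pull the spreadsheet data programmatically than cut and paste, here’s a minimal sketch using pandas. The sheet ID, tab ID, and column layout below are placeholders, not the real Chaos Horizon sheet, and it assumes the sheet is shared so that the CSV export URL works.

```python
# Minimal sketch: load a shared Google Sheet as CSV with pandas.
# SHEET_ID and GID are placeholders, not the actual Chaos Horizon sheet.
import pandas as pd

SHEET_ID = "YOUR_SHEET_ID"   # placeholder
GID = "0"                    # placeholder tab id
url = f"https://docs.google.com/spreadsheets/d/{SHEET_ID}/export?format=csv&gid={GID}"

df = pd.read_csv(url)  # assumed layout: one row per book, one column per month

# Sort by the most recent month's column and show the top 10.
latest = df.columns[-1]
print(df.sort_values(latest, ascending=False).head(10))
```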

Table #1: Popularity of Hugo/Nebula Contenders on Goodreads, December 2015
[Goodreads December 2015 chart]

Table #2: Popularity of Hugo/Nebula Contenders on Amazon, December 2015
[Amazon December 2015 chart]

It’s interesting how static these charts are; no one really moved up or down more than 2 spots from November to December. I also track some books that aren’t contenders (Armada, for instance), just to give us some reference.

What does this mean for the Hugos? Well, Uprooted and Seveneves are hugely popular books this year, with 4 or 5 times more rankings than other award contenders like The Fifth Season or Ancillary Mercy. Even though someone like Stephenson may prove divisive (lots of people either love or hate Seveneves), the sheer number of readers may translate into more voters. Remember, you can’t vote against a book in the nomination stage. All that matters is how many people like a book, not how many hate it; the reverse can be true on the final ballot. The huge number of rankings for Novik and Stephenson is why I’ll have them very high in my initial Hugo predictions.

On the flip side, a book like Karen Memory is languishing with only around 1,500 Goodreads ratings and 75 Amazon rankings, even though it came out in February. I don’t think that’s enough readers to drive Bear to a Hugo nomination in a competitive year, but only time will tell. I often use these popularity charts to distinguish between similar books. If Dickinson, Cho, Liu, Jemisin, and Novik all vaguely fall under the category of “experimental fantasy,” I’ll pick Novik/Jemisin over Liu/Cho/Dickinson based on their apparent popularity, on the theory that more readers = more votes. Hopefully once I have several years of data I can find a more solid correlation, although one certainly isn’t visible yet.

Lastly, it’s fascinating how different the Amazon rankings are from the Goodreads rankings. Why does Goodreads like Armada more than Seveneves? A book like A Long Time Until Now does terribly on Goodreads but well on Amazon (#12 on my Amazon chart, #28 on my Goodreads chart). A Darker Shade of Magic is loved on Goodreads but middle-of-the-pack on Amazon. This goes to show how fundamentally different these audiences are. We shouldn’t trust either on its own. Instead, I boost a book’s chances when it’s high across many of my different lists: if Uprooted is #2 on my Goodreads list, #3 on my Amazon list, #1 on the SFWA list, #1 on the Goodreads vote, #7 on my Mainstream Critics list, #1 on my SFF Critics list, etc., shouldn’t I predict it near the top? Throw in past Hugo/Nebula history, and that’s how the Chaos Horizon logic works; make of it what you will.
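
For the curious, here’s a rough sketch of that kind of cross-list boost. The mini-lists and the Borda-style scoring are my illustration, not the actual Chaos Horizon formula or data; the point is just that placing high on several lists beats one strong placement on a single list.

```python
# Sketch: reward books that place highly on several lists.
# Each list is ordered best-first; a book scores (list_length - position) per list,
# so appearing near the top of many lists beats one high placement.
from collections import defaultdict

lists = {
    "goodreads": ["Uprooted", "Seveneves", "The Fifth Season", "Ancillary Mercy"],
    "amazon":    ["Seveneves", "Uprooted", "Ancillary Mercy", "Aurora"],
    "sfwa":      ["Uprooted", "The Fifth Season", "Ancillary Mercy"],
}  # placements here are illustrative, not the real December data

scores = defaultdict(int)
for ranked in lists.values():
    for pos, title in enumerate(ranked):
        scores[title] += len(ranked) - pos

for title, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:3d}  {title}")
```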

Next month (let’s say mid-January) I’ll look at the ranking score for each of these texts. Those scores don’t change much over time, so it hasn’t been worth tracking them month to month. I’ve also not found any correlation between the ranking score and award chances.

Let’s finish with a threat: I’ve gathered enough lists, 2016 is almost upon us, so I’ll make my first Nebula and Hugo predictions tomorrow!

Predicting the Hugos: Checking in with Sad Puppies IV

As we turn to the new year and think about predicting the 2016 Hugo nominations, it’s important to see what kind of recommendations are emerging from the Sad Puppies IV camp. According to Kate Paulk (one of this year’s organizers with Sarah Hoyt and Amanda from the blog Mad Genius Club), this is how it will work:

To that end, this thread will be the first of several to collect recommendations. There will also be multiple permanent threads (one per category) on the SP4 website where people can make comments. The tireless, wonderful volunteer Puppy Pack will be collating recommendations.

Later – most likely somewhere around February or early March, I’ll be posting The List to multiple locations. The List will not be a slate – it will be a list of the ten or so most popular recommendations in each Hugo category, and a link to the full list in all its glory. Nothing more, nothing less.

It’s an open question exactly what kind of impact this will have on the 2016 Hugo nominations. Will these recommendations operate as a slate, concentrating 100-300 (or more?) Sad Puppy votes into an unbreakable voting bloc? Or will a longer list diffuse the Sad Puppy vote, leading to a subtler effect on the final ballot? A lot is going to depend on what the list actually looks like, so, without further ado, here is the Chaos Horizon tabulation of the Sad Puppies IV recommendations, taken from the Best Novel web page:

Somewhither, John C. Wright: 12
A Long Time Until Now, Michael Z. Williamson: 10
Seveneves, Neal Stephenson: 10
Uprooted, Naomi Novik: 8
Honor at Stake, Declan Finn: 7
The Aeronaut’s Windlass, Jim Butcher: 6
The Just City, Jo Walton: 5
Strands of Sorrow, John Ringo: 5
The Desert and the Blade, S.M. Stirling: 5
Ronin Games, Marion Harmon: 4
Son of the Black Sword, Larry Correia: 4
Ancillary Mercy, Ann Leckie: 4

To produce this, I went through and counted each recommendation from the 150 comments. Sometimes the recommendations were a little unclear, so don’t take this as 100% accurate, but rather as a rough picture of the current state of the SP4 list. If anyone wants to count and double-check, please do! Here’s a link to my spreadsheet, which contains all recommended novels.
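
For anyone who wants to replicate the tally, here’s a minimal sketch of the counting step. It assumes the recommendations have already been normalized to one canonical title per mention, which is the messy, manual part of the job; the titles listed are just a tiny made-up sample, not the full comment thread.

```python
# Sketch: tally recommendations once each comment's picks have been
# normalized to canonical titles (the manual, error-prone step).
from collections import Counter

mentions = [
    "Somewhither", "Seveneves", "Somewhither", "Uprooted",
    "A Long Time Until Now", "Seveneves", "Somewhither",
]  # placeholder sample, not the real 150 comments

for title, count in Counter(mentions).most_common():
    print(f"{count:3d}  {title}")
```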

So, if this were the final list—and I expect it to change greatly by the time we reach March—how would it impact the 2016 Hugo nominations?

I immediately see 4 “overlap” situations with more typical Hugo books (Stephenson, Novik, Walton, Leckie). Any number of votes driven to Seveneves, Uprooted, or Ancillary Mercy all but assures those books of a Hugo nomination. I have each of those as very likely to get nominations anyway (Leckie beat several SP/RP recommendations last year; Novik wrote the buzziest fantasy novel of the year; Stephenson is well-liked by Hugo voters, with numerous past noms). Walton is the dark horse here; My Real Children missed the 2015 ballot by only 90 votes. How many votes could being in the #6 slot of Sad Puppies IV get you?

Three other texts stand out to me from this early list as real potential Hugo nominees. A Long Time Until Now is a military SF novel published by Baen; it has a solid number of Amazon rankings (269), and Michael Z. Williamson was in the middle of last year’s kerfuffle with the Hugo-nominated Wisdom From My Internet. This could emerge as the “Baen” book for both the Sad Puppies and Rabid Puppies, although the RP list is much harder to predict. If it showed up on both of those groups’ lists, it would be a strong possibility for a nomination.

Somewhither by John C. Wright was published by Vox Day’s Castalia House, and would seem to be exactly the kind of book the Rabid Puppies would select for their slate. Wright was nominated 6 times for the Hugo last year, although one was rescinded for eligibility reasons. This will be a work to keep your eye on as a test of SP/RP numbers.

Jim Butcher grabbed 387 votes for Skin Game last year. The Aeronaut’s Windlass is the first in a new fantasy series, which might make it easier for new readers to get into. I don’t think this book is as well-liked as the Dresden novels, but is it capable of grabbing tons of votes? Butcher’s reading audience is just that big.

Lastly, will certain writers from this list turn down Hugo nominations? Correia did exactly that last year, and I’ve heard rumors (but not seen sources; if someone has one, please post in comments) that Ringo would do the same. Would someone like Butcher or Stephenson just not want the hassle in 2016? They’re so famous and sell so many books that they don’t need the Hugos.

There’s still a long way to go in the Hugo Wars of 2016. What I’ll do at Chaos Horizon is the work I always do—collecting information, posting lists, and speculating as to what might happen. Enjoy the chaos!

Predicting the Hugos: Thinking of 2016

It’s getting close to the first of the year, and I’ll have my first 2016 Hugo prediction up soon! Here at Chaos Horizon, we use the stats and data from previous years to predict what will happen going forward. Obviously, that’s a very specific methodology that won’t be to everyone’s taste.

Here’s how it works: I begin with the assumption that what will happen in the 2016 Hugos will follow the patterns of previous years, particularly 2015. Of course, every year is different, but this gives us a starting point for a prediction. This assumption is useful in some cases (the Warriors are 28-1 in the NBA so far; should I predict the Warriors to win their next game?) and not useful in other cases (let’s say I rolled three fours in a row with a pair of dice; should I predict the next roll to also be a four?).

The tricky part for the 2016 Hugos is to make a decent estimate of how much impact the Sad/Rabid Puppies will have this year. Before Correia and Kloos withdrew, the Puppies took 4 out of the top 5 Novel spots in the 2015 Hugos. After the controversy surrounding the slates hit in 2015, there was a huge increase of Hugo voters (5,653 people voted in the Best Novel category, up from 3,137 in 2014). All 5,653 of these voters are eligible to vote in the 2016 Hugo nominations—but how many of them will? And what percentage will follow/be influenced by the Puppies?

I don’t think we’ll know exactly until the nomination stats are released next August, but what we can do is work up some sensible guesses.

First: how many people will nominate in 2016? We saw a voting increase between the 2014 and 2015 final Hugo ballots of 5,653/3,137 = 1.8x. If we apply that ratio to last year’s nominating numbers, we’d get 1,827 2015 nomination ballots * 1.8 = roughly 3,289 nomination ballots in the Best Novel category. The controversy and high emotions surrounding last year’s Hugos could drive that number even higher. Remember, though, that the nomination process doesn’t get nearly as much ink as the final ballot does.
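
Spelled out as a small calculation, in case anyone wants to plug in a different growth factor; the only assumption is the one already stated above, that nominations grow at the same rate the final ballot did.

```python
# Sketch of the extrapolation: apply the 2014->2015 final-ballot growth
# factor to the 2015 nomination count. Whether nominations grow at the
# same rate as the final vote is exactly the open question.
final_2014, final_2015 = 3137, 5653      # Best Novel final-ballot votes
noms_2015 = 1827                          # Best Novel nominating ballots, 2015

growth = final_2015 / final_2014          # ~1.80
projected_noms_2016 = noms_2015 * growth  # ~3292 (the post rounds the factor
                                          # to 1.8, which gives ~3289)

print(f"growth factor: {growth:.2f}")
print(f"projected 2016 nominating ballots: {projected_noms_2016:.0f}")
```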

Next, to predict the nominees for the 2016 Hugos, I’ll begin with some stats from 2015:

Best Novel Nominations 2015 Hugo (1,827 ballots)
387 (21.2%): Skin Game, Jim Butcher
372 (20.4%): Monster Hunter Nemesis, Larry Correia *
279 (15.3%): Ancillary Sword, Ann Leckie
270 (14.8%): Lines of Departure, Marko Kloos *
263 (14.4%): The Dark Between the Stars, Kevin J. Anderson
256 (14.0%): The Goblin Emperor, Katherine Addison
210 (11.5%): The Three-Body Problem, Liu Cixin
199 (10.9%): Trial By Fire, Charles E. Gannon
196 (10.7%): The Chaplain’s War, Brad Torgersen
168 (9.2%): Lock In, John Scalzi
160 (8.8%): City of Stairs, Robert Jackson Bennett
141 (7.7%): The Martian, Andy Weir
126 (6.9%): Words of Radiance, Brandon Sanderson
120 (6.6%): My Real Children, Jo Walton
112 (6.1%): The Mirror Empire, Kameron Hurley
92 (5.0%): Lagoon, Nnedi Okorafor
88 (4.8%): Annihilation, Jeff VanderMeer

Correia and Kloos turned down their nominations. We need to be aware that something similar could happen again. Also note how close Addison was—she almost beat Anderson (7 votes).

Let’s transform that chart by taking out the authors’ names and replacing them with either Sad/Rabid Overlap (appeared on both the Sad + Rabid slates), Sad No Overlap (appeared only on the Sad Puppy slate), Rabid No Overlap (appeared only on the Rabid Puppy slate), or Typical (did not appear on a slate). Here’s what you get:

Best Novel (1,827 ballots)
Spot #1: 387 (21.2%), Sad/Rabid Overlap #1
Spot #2: 372 (20.4%), Sad/Rabid Overlap #2 *
Spot #3: 279 (15.3%), Typical #1
Spot #4: 270 (14.8%), Sad/Rabid Overlap #3 *
Spot #5: 263 (14.4%), Sad/Rabid Overlap #4
Spot #6: 256 (14.0%), Typical #2
Spot #7: 210 (11.5%), Typical #3
Spot #8: 199 (10.9%), Sad No Overlap #1
Spot #9: 196 (10.7%), Rabid No Overlap #1
Spot #10: 168 (9.2%), Typical #4
Spot #11: 160 (8.8%), Typical #5
Spot #12: 141 (7.7%), Typical #6
Spot #13: 126 (6.9%), Typical #7
Spot #14: 120 (6.6%), Typical #8
Spot #15: 112 (6.1%), Typical #9
Spot #16: 92 (5.0%), Typical #10
Spot #17: 88 (4.8%), Typical #11

This allows us to see the relative power of the picks. When the Sad and Rabid Puppies overlapped, they were able to generate more votes than anything but the most popular Typical pick. At the top, in Spots #1 and #2, they had a comfortable margin (about 100 votes). When the Sad and Rabid Puppies separated, they fell behind Typical #1, #2, and #3. We can also see that the Sad/Rabid numbers fell off rapidly: Overlaps #1 and #2 generated far more votes than the less popular Overlaps #3 and #4.

So, if everything stayed the same, or if the votes generated by the Sad and Rabid Puppies increased at the same rate as the Typical votes, you’d predict Sad/Rabid Overlaps #1 and #2 to make the final ballot, along with the most popular Typical book, and then a dogfight for Spots #4 and #5.

But everything isn’t likely to stay the same. Sad Puppies IV is already putting together a crowd-sourced list; with more than 5 suggestions in the Novel category, that list could very well dilute the vote across those works. I suspect we’ll see something similar to last year: very popular works at the top of the list (on the order of Butcher popular) will generate far more votes than less popular works lower down the list.
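
Here’s a rough, toy illustration of that dilution effect. Every number in it is invented (300 bloc voters, a ten-item list, five picks per ballot, a steep popularity falloff), so treat it as a thought experiment rather than a model of the actual Puppy vote.

```python
# Rough illustration (all numbers invented): how a 300-voter bloc spreads out
# when each voter draws five picks from a ten-item recommendation list whose
# popularity falls off sharply down the list.
import random

random.seed(0)
titles = [f"Rec #{i + 1}" for i in range(10)]
weights = [10, 9, 7, 5, 4, 3, 2, 2, 1, 1]   # steep falloff in popularity

votes = {t: 0 for t in titles}
for _ in range(300):                         # 300 bloc voters
    picks = set()
    while len(picks) < 5:                    # each ballot nominates five works
        picks.add(random.choices(titles, weights)[0])
    for t in picks:
        votes[t] += 1

for t in titles:
    print(f"{votes[t]:4d}  {t}")
```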

We also have no idea whether what I’m calling the “Typical,” “Sad,” and “Rabid” votes will increase at the same rate. Other discernible blocs could also emerge, although you can’t vote against someone in the nomination stage, so nothing like an explicit anti-Puppy vote can occur without generating an opposing slate.

This could also create a situation where the Sad Puppies and the Typical votes overlap, like if the Sad Puppies picked Seveneves or Uprooted, books already strong Hugo contenders. I’ll take a look at the leaders in the Sad Puppy nominations tomorrow.

I think things will be very close in the lower spots. A surge of 100-200 voters in either direction could swing things, and the kind of predictive work I do at Chaos Horizon is incapable of tracking things that finely, particularly when faced with major change.

So, here’s my initial thought, what I’m calling the Overlap Theory: since the 2016 Hugo nominations are likely to draw so much attention, the works most likely to get nominations are those that overlap across more than one of the Typical/Sad/Rabid categories. Overlapping will usually be more powerful than going it alone. So here’s what the top of my initial ballot prediction might look like:

1. Typical/Sad Overlap #1
2. Sad/Rabid Overlap #1
3. Typical/Sad Overlap #2
4. Sad/Rabid Overlap #2
5. Typical No Overlap #1
6. Sad/Rabid Overlap #3
7. Sad/Rabid Overlap #4
8. Typical No Overlap #2
9. Typical No Overlap #3
10. Sad No Overlap #1
11. Rabid No Overlap #1

That’s assuming no major shifts in the relative group sizes from last year. If you’ve got any suggestions on how to calculate such shifts, let me know!
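
Here’s one minimal way to run that kind of what-if, using the 2015 counts from the chart above and purely illustrative growth factors for each group; swap in your own factors to test a scenario.

```python
# Sketch: scale each 2015 bloc's top vote totals by an assumed growth factor
# and re-rank. The growth factors are illustrative guesses, not estimates.
votes_2015 = {
    "Sad/Rabid Overlap #1": 387, "Sad/Rabid Overlap #2": 372,
    "Typical #1": 279, "Sad/Rabid Overlap #3": 270,
    "Sad/Rabid Overlap #4": 263, "Typical #2": 256, "Typical #3": 210,
    "Sad No Overlap #1": 199, "Rabid No Overlap #1": 196,
}
growth = {"Typical": 2.0, "Sad/Rabid": 1.3, "Sad No": 1.3, "Rabid No": 1.3}  # guesses

def grow(label, v):
    """Scale a 2015 vote count by its group's assumed growth factor."""
    for prefix, factor in growth.items():
        if label.startswith(prefix):
            return v * factor
    return v

projected = sorted(votes_2015.items(), key=lambda kv: grow(*kv), reverse=True)
for label, v in projected:
    print(f"{grow(label, v):6.0f}  {label}")
```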

So, as we dive into another controversial year, what do you think? Do the 2015 stats provide any meaningful guidance for 2016, or will things be so dynamic/unpredictable that the past is no guide to the future? What impact do you think the Puppies will have on the 2016 nominations? How can we best model that impact here at Chaos Horizon?

Best of 2015: Tor.com Reviewers’ Choice

A few weeks ago, the 2015 Tor.com Reviewers’ Choice list came out. Over the past several years, this has been an important list to track for several reasons. First, it gathers recommendations from 11 Tor.com critics, making it a collated list of its own. Second, it has been fairly well synced up to the Hugos and Nebulas, at least before the campaigning of last year. In 2013, they recommended Ann Leckie’s Ancillary Justice three times; it swept the Hugo and Nebula. Last year, Goblin Emperor was recommended 3 times; it scored Hugo and Nebula noms and could very well have won the Hugo if not for the Puppies.

I’ll eventually include this list in my SFF Critics Meta-List, but for that I’ll only give each book mentioned one vote to keep the stats lined up. In this post, I’ll collate how many times the 11 critics mentioned each book, to see if there’s a Tor.com winner. I don’t count honorable mentions, and I don’t decide whether a book is a novel or not, or eligible or not. If you’re mentioned as one of the best of the year, you make it. Without further ado, here are the books that got more than one recommendation:

3 mentions: Uprooted, Naomi Novik
2 mentions: Sorcerer to the Crown, Zen Cho
2 mentions: Escape from Baghdad!, Saad Hossain

No surprise to see Uprooted at the top of another list (it’s also leading the SFWA recommended list). At this point, I think it’s fair to say that Uprooted is the Hugo and Nebula front-runner. Escape from Baghdad! was a surprise, but 2 mentions is hardly dominant. Zen Cho has done fairly well so far this “Best Of” season and has a shot at the Nebula.

The Tor.com list was light on SF this year. Only one mention of Seveneves, and none of Aurora, The Water Knife, or Nemesis Games, just to pick three SF novels that have been getting attention elsewhere.

These lists become more valuable the more of them we get. Eventually, I’ll gather all the lists I find from big SFF websites into one Meta-List. If you want the sneak preview, here it is. Only two lists so far (Tor.com and the Barnes and Noble SF Blog), so it’s not very useful (yet!).

Best of 2015: Debuting the Mainstream Meta-List

Now that my semester is over and my final grades are turned in, I can focus my attention on predicting the 2016 Hugo and Nebula awards. As a step towards that, I’ve been putting together collated lists of the “Best of 2015” posts from a variety of sources. Last year, I broke these down into two categories: a mainstream list (NY Times, Amazon, Goodreads, etc.) and a SFF Critics list (Locus, SF Signal, Book Smugglers, etc.). When combined with the 2015 Awards Meta-List, these three lists provide a very interesting take on what the best SFF novel of the year was. Each is idiosyncratic in its own way, and they are best used in concert rather than alone.

I’m going to debut the 2015 Mainstream list today. Last year, I collated 20 lists from a variety of popular sources. Appear on the list, get 1 point. Most points wins. Here’s last year’s top 5:

1. The Bone Clocks, David Mitchell: on 13 lists
2. The Martian, Andy Weir: on 10 lists
3. Annihilation, Jeff VanderMeer: on 9 lists
3. The Peripheral, William Gibson: on 9 lists
3. The Magician’s Land, Lev Grossman: on 9 lists

Annihilation won the Nebula, and The Bone Clocks won the World Fantasy Award. So not too shabby for the mainstream! The lists I collect are very different: some are popular votes (Goodreads); some are marketing tools (Amazon). Some are short, some are ridiculously long (NPR). Some include graphic novels, collections, and books that aren’t eligible. I don’t make distinctions; I just list everything and figure that the sheer number of lists will balance everything out. I don’t include lists that are specifically Young Adult / Children’s / Graphic Novels, as my goal is to try to predict the Hugo and Nebula Best Novel categories. Historically, Young Adult works have not been nominated for those awards, although there are a few exceptions.
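
The scoring itself is simple enough to sketch; the source lists below are placeholders rather than the real 2015 lists, but the one-point-per-list logic is the same.

```python
# Sketch of the Mainstream Meta-List scoring: each source list is a set of
# titles, and a book scores one point per list it appears on. The titles
# here are placeholders, not the actual 2015 source lists.
from collections import Counter

source_lists = {
    "Amazon":    {"Seveneves", "The Fifth Season", "Aurora"},
    "NPR":       {"Uprooted", "The Fifth Season", "Ancillary Mercy"},
    "Goodreads": {"Uprooted", "Seveneves"},
}

points = Counter()
for titles in source_lists.values():
    points.update(titles)        # +1 per list, since each list is a set

for title, score in points.most_common():
    print(f"{score}  {title}")
```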

This year, in an effort to be more transparent, I’m going to use a Google Docs document that anyone can access at any time. Only I can edit it, though (of course, you can just cut and paste the data and then do what you want with it). So, if you don’t want to read on, you can just go look at the list right now. This way, I can update the list immediately and you can check where things stand at this moment.

We’re still early in the best-of list season. So far, I’ve collected Amazon’s Best of 2015, The Washington Post, Publisher’s Weekly, the Goodreads Choice Awards (Fantasy and SF categories), The Guardian, Buzzfeed, NPR’s Book Concierge, and the NY Times Notable Books. Links to all those sources are in the spreadsheet.

Here are the too-early results:

5 points: The Fifth Season, Jemisin, N.K.
5 points: Seveneves, Stephenson, Neal
5 points: Ancillary Mercy, Leckie, Ann
4 points: The Water Knife, Bacigalupi, Paolo
4 points: Golden Son, Brown, Pierce
4 points: Aurora, Robinson, Kim Stanley
3 points: Uprooted, Novik, Naomi
3 points: The Traitor Baru Cormorant, Dickinson, Seth

Not particularly surprising. We knew that past Hugo winners Leckie, Stephenson, Bacigalupi, and Robinson all published well-liked novels this year. Uprooted would be 1 point higher if Goodreads hadn’t strangely put her novel on their YA list. Jemisin running neck and neck with Leckie and Stephenson is impressive, and I expect at least a Nebula nomination to be very likely for The Fifth Season.

The real surprise here is Golden Son. The first volume in that series, Red Rising, was marketed more as a YA novel in the vein of The Hunger Games. Golden Son is being hailed as an improvement in every way, and now seems to be in the adult rather than YA category. It will be interesting to see if this book can develop into a viable Hugo contender.

Unlike last year, when The Bone Clocks, Station Eleven, and Annihilation were highly esteemed by the literary mainstream, we don’t necessarily have a breakout “literary” SFF book. All the authors I listed above are more associated with speculative than literary fiction. The Entertainment Weekly Top 10 list had 0 speculative works after 3 last year (unless you count Zebulon Finch by Daniel Kraus; anyone read this?); the NY Times Top 50 novels only had 2, N.K. Jemisin’s The Fifth Season and Michel Houellebecq’s Submission.

As I said above, we’re still early in the year for best-of lists, but patterns are beginning to emerge. Will the mainstream list be mirrored when the Nebulas and the Hugos roll around? Contrast this list with the SFWA Nebula recommended list, and we already see some interesting points of overlap. If you know of a good list I should add to the data, let me know in the comments.
