Tag Archive | 2015 Hugo Award

Hugo Award Nomination Ranges, 2006-2015, Part 4

We’re up to the short fiction categories: Novella, Novelette, and Short Story. I think it makes the most sense to talk about all three of these at once so that we can compare them to each other. Remember, the Best Novel nomination ranges are in Part 3.

First up, the number of ballots per year for each of these categories:

Table 6: Year-by-Year Nominating Ballots for the Hugo Best Novella, Novelette, and Short Story Categories, 2006-2015
[Table image: year-by-year nominating ballots, short fiction categories]

[Chart image: year-by-year nominating ballots, short fiction categories]

A sensible looking table and chart: the Short Fiction categories are all basically moving together, steadily growing. The Short Story has always been more popular than the other two, but only barely. Remember, we’re missing the 2007 data, so the chart only covers 2008-2015. For fun, let’s throw the Best Novel data onto that chart:

[Chart image: nominating ballots for all fiction categories, including Best Novel]

That really shows how much more popular the Best Novel is than the other Fiction categories.

The other data I’ve been tracking in this Report is the High and Low Nomination numbers. Let’s put all of those in a big table:

Table 7: Number of Votes for High and Low Nominee, Novella, Novelette, Short Story Hugo Categories, 2006-2015
[Table image: high and low nomination counts, short fiction categories]

Here we come to one of the big issues with the Hugos: the sheer lowness of these numbers, particularly in the Short Story category. Although the Short Story is one of the most popular categories, it is also one of the most diffuse. Take a glance at the far right column: that’s the number of votes the last-place Short Story nominee received. Through the mid-2000s, it took only a vote total in the mid-teens to get a Hugo nomination in one of the most important categories. While that has improved in terms of raw numbers, it’s actually gotten worse in terms of percentage (more on that later).

Here’s the Short Story graph; the Novella and Novelette graphs are similar, just not as pronounced:

[Chart image: high and low nomination counts, Short Story category]

The Puppies absolutely dominated this category in 2015, more than tripling the Low Nom number. They were able to do this because the nominating numbers have been so historically low. Does that matter? You could argue that the Hugo nominating stage is not designed to yield the “definitive” or “consensus” or “best” ballot. That’s reserved for the final voting stage, where the voting rules are changed from first-past-the-post to instant-runoff. To win a Hugo, even in a low year like 2006, you need a great number of affirmative votes and broad support. To get on the ballot, all you need is focused, passionate support, as proved by the Mira Grant nominations, the Robert Jordan campaign, or the Puppies ballots this year.

As an example, consider the 2006 Short Story category. In the nominating stage, we had a range of works that received a meager 28-14 votes, hardly a mandate. Eventual winner and oddly named story “Tk’tk’tk” was #4 in the nominating stage with 15 votes. By the time everyone got a chance to read the stories and vote in the final stage, the race for first place wound up being 231 to 179, with Levine beating Margo Lanagan’s “Singing My Sister Down.” That looks like a legitimate result; 231 people said the story was better than Lanagan’s. In contrast, 15 nomination votes looks very skimpy. As we’ve seen this year, these low numbers make it easy to “game” the nominating stage, but, in a broader sense, they also make it very easy to doubt or question the Hugo’s legitimacy.

In practice, the difference can be even narrower: Levine made it onto the ballot by 2 votes. There were three stories that year with 13 votes, and 2 with 12. If two people had changed their votes, the Hugo would have changed. Is that process reliable? Or are the opinions of 2—or even 10—people problematically narrow for a democratic process? I haven’t read the Levine story, so I can’t tell you whether it’s Hugo worthy or not. I don’t necessarily have a better voting system for you, but the confining nature of the nominating stage is the chokepoint of the Hugos. Since it’s also the point with the lowest participation, you have the problem the community is so vehemently discussing right now.

Maybe we don’t want to know how the sausage is made. The community is currently placing an enormous amount of weight on the Hugo ballot, but does it deserve such weight? One obvious “fix” is to bring far more voters into the process—lower the supporting membership cost, invite other cons to participate in the Hugo (if you invited some international cons, it could actually be a “World” process every year), add a long-list stage (first round selects 15 works, the next round reduces those to 5, then the winner), etc. All of these are difficult to implement, and they would change the nature of the award (more voters = more mainstream/populist choices). Alternatively, you can restrict voting at the nominating stage to make it harder to “game,” either by limiting the number of nominees per ballot or through a more complex voting proposal. See this thread at Making Light for an in-progress proposal to switch how votes are tallied. Any proposed “fix” will have to deal with the legitimacy issue: can the Short Fiction categories survive a decrease in votes?

That’s probably enough for today; we’ll look at percentages in the short fiction categories next time.


Hugo Best Novel Nominees: Amazon and Goodreads Numbers, May 2015

It’s been a busy month, but Chaos Horizon is slowly returning to its normal work: tracking various data sets regarding the Nebula and Hugo awards. Today, let’s take a look at where the 5 Hugo Best Novel nominees stand in terms of # of Goodreads ratings, # of Amazon ratings, and ratings score.

So far, I’ve not been able to find a clear (or really any) correlation between this data and the eventual winner of the Hugo award. In my investigations of this data—see here, here, and here—I’ve been frustrated with how differently Amazon, Goodreads, and Bookscan treat individual books. It’s also worth noting that I don’t think Amazon or Goodreads measure some abstract idea of “quality,” but rather a more nebulous and subjective concept of “reader satisfaction.” You definitely see that in something like the Butcher book: since it’s #15 in a series, everyone who doesn’t like Butcher gave up long ago. All you have left are fans, who are prone to ranking Butcher highly.

As a final note, Jason Sanford leaked the Bookscan numbers for the Hugo nominees in early April. Check those out to see how Bookscan reports this data.

On to the data! Remember, these are the 2015 Hugo Best Novel nominees:

Skin Game, Jim Butcher
Ancillary Sword, Ann Leckie
The Goblin Emperor, Katherine Addison
The Three-Body Problem, Cixin Liu
The Dark Between the Stars, Kevin J. Anderson

Number of Goodreads Ratings for the Best Novel Hugo Nominees, May 2015

[Chart image: number of Goodreads ratings for the Best Novel nominees, May 2015]

This chart gives you how many readers on Goodreads have rated each book; that’s a rough measure of popularity, at least for the self-selected Goodreads audience.

Goodreads shows Skin Game as having a massive advantage in popularity, with almost 5 times as many ratings as Leckie’s book. Given Skin Game is #15 in the series, that’s an impressive retention of readers. Of course, any popularity advantage for Butcher has to be weighed against the pro- and anti-Sad/Rabid Puppy effects. Also don’t neglect the difficulty that Hugo voters will have in jumping into #15 of a series.

While Liu is still running behind Addison and Leckie, keep in mind that Liu’s book came out a full seven months after Addison’s book and a month after Leckie. Still, the Hugo doesn’t adjust for things like that: your total number of readers is your total number of readers. That’s why releasing your book in November can put you at a disadvantage in these awards. Still, Liu picked up a huge % of readers this month; if that momentum keeps up, that speaks well for his chances. Anderson’s number is very low when compared to the others; that probably is a mix of Anderson selling fewer copies and Anderson’s readers not using Goodreads.

Switching to Amazon numbers:

Number of Amazon Ratings for the Best Novel Hugo Nominees, May 2015

[Chart image: number of Amazon ratings for the Best Novel nominees, May 2015]

I don’t have as much data here because I haven’t been collecting it as long. I foolishly hoped that Goodreads data would work all by itself . . . it didn’t. Butcher’s Amazon advantage is even larger than his Goodreads advantage, and Liu leaps all the way from 4th place in Goodreads data to second place in Amazon data. This shows the different ways that Goodreads and Amazon track the field: Goodreads tracks a younger, more female audience (check the QuantCast data), while Amazon has historically slanted older and more gender-neutral. Your guess is as good as mine as to which audience is more predictive of the eventual Hugo outcome.

Lastly, the rankings themselves:

Goodreads and Amazon Rating Scores for the Best Novel Hugo Nominees, May 2015
[Chart image: Goodreads and Amazon rating scores for the Best Novel nominees, May 2015]

Let me emphasize again that these scores have never been predictive for the Hugo or Nebula: getting ranked higher on Amazon or Goodreads has not equated to winning the Hugo. It’s interesting that the Puppy picks are the outliers: higher and lower when it comes to Goodreads, with Leckie/Addison/Liu all within .05 points of each other. Amazon tends to be more generous with scoring, although Butcher’s 4.8 is very high.

The 2015 Hugo year is going to be largely useless when it comes to data: the unusual circumstances that led to this ballot (the Sad and Rabid Puppy campaigns, then various authors declining Best Novel nominations, and now the massive surge in voting numbers) mean that this data is going to be inconsistent with previous years. I think it’s still interesting to look at, but take all of this with four or five teaspoons of salt. Still, I’ll be checking in on these numbers every month until the awards are given, and it’ll be interesting to see what changes happen.

Margin of Victory: Breaking Down the Hugo Math

In my last post, I looked at the ranges of votes in the categories swept by the Puppy vote to estimate the effective min/max of the Puppy numbers. In this post, I’ll ask this question: How close were these categories to being sweeps? How many additional “traditional” Hugo voters (i.e. non-block voters distributed in the ways Hugo votes have been in the past) would it have taken to prevent a sweep?

I think this information is important because so many various proposals to “fix” the Hugos are currently floating around the web. Even if you accept the premise that the Hugos need to be fixed, what exactly are you fixing? One flaw in the Hugo system is that proposals to change the voting patterns—if such changes are desirable or needed—have to be proposed at the WorldCon, and that’s before the voting results are broadly known. Thus people are working in the dark: they might be trying to “fix” something without knowing exactly the scope (or even the definition) of “the problem.” That’s a recipe for hasty and ineffective change.

What we’ll do today is to compare the lowest Puppy nominee in the 6 swept categories (Novella, Novelette, Short Story, Best Related Work, Editor Short Form, and Editor Long Form) to the highest non-Puppy from 2014. I’ll subtract those two values to find a “margin of victory” for each of those 6 categories. That would give us an estimation of how many more votes it would have taken to get one (and only one) non-Puppy work onto the final ballot. To overcome more of the Puppy ballot would take more votes.

Now, this plays a little loose with the math: we don’t know for sure that the highest non-Puppy nominee from 2015 would have the same number of votes as the 2014 nominee. In fact, it’s highly unlikely they would be exactly the same. However, I think this gives us a rough estimate; it may be 5 higher, or 10 lower, but somewhere in a reasonable range. This will give us a rough eyeball estimate of how pronounced the victories were. On to the stats:

Novella:
Lowest 2015 nominee: 145 votes
Highest 2014 nominee: 143 votes (Valente’s “Six-Gun Snow White”)
Margin of Sweep: 2 votes

Novelette:
Lowest 2015 nominee: 165 votes
Highest 2014 nominee: 118 votes (Kowal’s “Lady Astronaut of Mars”)
Margin of Sweep: 47 votes

Short Story:
Lowest 2015 nominee: 151 votes
Highest 2014 nominee: 78 votes (Samatar’s “Selkie Stories are For Losers”)
Margin of Sweep: 73 votes

Best Related Work:
Lowest 2015 nominee: 206 votes
Highest 2014 nominee: 89 votes (VanderMeer’s Wonderbook)
Margin of Sweep: 117 votes

Best Editor Short Form:

Lowest 2015 nominee: 162 votes
Highest 2014 nominee: 182 votes (John Joseph Adams; Neil Clarke was second with 115)
Margin of Sweep: -20 votes (if JJA had gotten the same support in 2015 as he did in 2014, he would have placed on the ballot by 20 votes)

Best Editor Long Form:
Lowest 2015 nominee: 166 votes
Second Highest 2014 nominee: 118 votes (Toni Weisskopf was highest with 169 votes, but she was a Sad Puppy 2 nominee (although Weisskopf also has plenty of support outside of Sad Puppies); Ginjer Buchanan was second with 118 votes)
Margin of Sweep: 48 votes

What does all that data mean? If 2015 had played out like 2014 for the non-Puppy candidates, the highest non-Puppy candidate missed the slate by this much (I’m repeating the data to make it easier to find). This many more votes for the highest non-Puppy would have broken up the sweep by one nominee.
Novella: 2 votes
Novelette: 47 votes
Short Story: 73 votes
Best Related Work: 117 votes
Editor Short Form: -20 votes
Editor Long Form: 48 votes
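
If you want to redo that arithmetic yourself, here’s a minimal Python sketch of the subtraction described above, using the per-category figures just listed (the variable names are simply my own shorthand):

```python
# Margin of sweep = lowest 2015 Puppy nominee minus highest 2014 non-Puppy finisher.
# All figures come from the category breakdowns listed above.
lowest_2015_puppy = {
    "Novella": 145, "Novelette": 165, "Short Story": 151,
    "Best Related Work": 206, "Editor Short Form": 162, "Editor Long Form": 166,
}
highest_2014_non_puppy = {
    "Novella": 143, "Novelette": 118, "Short Story": 78,
    "Best Related Work": 89, "Editor Short Form": 182, "Editor Long Form": 118,
}

for category, low_2015 in lowest_2015_puppy.items():
    margin = low_2015 - highest_2014_non_puppy[category]
    print(f"{category}: margin of sweep = {margin} votes")
```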

The Puppy slates didn’t dominate every category as much as the final results make it seem. We won’t know the exact margins until August, but I imagine that a non-Puppy Novella was very close to making the final slate. Something like Ken Liu’s “The Regular” probably only needed a few more votes to make it onto the ballot. Editor Short Form was probably even closer: John Joseph Adams must have lost support, because if he had kept his votes from last year, he would have gotten in.

The other categories were crushed. 73 for Short Story. 117 for Best Related. Given that Samatar’s story from 2014 only managed 78 votes and you needed 151 to make it this year, that’s an almost impossible climb to get just one non-Puppy story onto the slate. I think this reveals a major problem: the Puppies dominated these categories not only because of their organization, but because of the general lack of voting in those categories. EDIT (4/8/15): Tudor usefully pointed out in the comments that this is probably better understood as a diffusion of the Short Story vote (i.e. the vote is spread out across many stories), rather than a lack of Short Story voters. Thanks for the correction, Tudor, and I always encourage people to push back against any Chaos Horizon statements they think are wrong, incorrect, or misleading. The more eyes we have on stats, the better.

Let’s run some rough math to see how many more voters you would need to get that one work onto the slate. Here’s how I’ll calculate new voters (you may disagree with this formula). I’m assuming you’re bringing new voters into the Hugo process, and that those voters vote in a similar fashion to the past. So, to generate 2 more votes for the top non-Puppy Novella, you’d need to account for the % of voters that bother to vote for the Novella category (2122 people voted in the 2015 Hugo, but only 1083 voted in the Novella category, for 51%). So, to bring 50 new people into the Novella category, you’d need to bring 100 new people into the Hugo voting process.

Next, you need to account for how many people voted for the #1 non-Puppy work. In 2014, that was 16.9% (that’s the percentage “Six-Gun Snow White” got). If we assume a similar distribution, we wind up with 2 votes needed / 51% voting for the category / 16.9% voting for the number one work. That yields 23 new voters needed.
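
To make that formula explicit, here’s a small Python sketch (the function name is just my own shorthand for the calculation described above):

```python
# New voters needed = margin / (share of Hugo voters who vote in the category)
#                            / (share of the category vote the top non-Puppy work received)
def new_voters_needed(margin, category_participation, top_work_share):
    return margin / category_participation / top_work_share

# Novella example from above: 2 votes short, 51% participation (1083/2122),
# and 16.9% of the 2014 category vote for "Six-Gun Snow White".
print(round(new_voters_needed(2, 0.51, 0.169)))    # ~23
# Short Story thought experiment from later in the post: 187 / 55% / 5%.
print(round(new_voters_needed(187, 0.55, 0.05)))   # ~6800
```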

For the Novella category, that’s definitely doable. In fact, if only a fraction of the roughly 700 people who voted in the Best Novel category but sat out the Novella category had voted there, that would add one non-Puppy text back to the slate.

Let’s run the math for the six categories:
Novella: 23 new voters needed
Novelette: 597 new voters needed (47/48.5%/16.2%)
Short Story: 1450 new voters needed (73/55%/9.1%)
Best Related Work: 1830 new voters needed (117/54%/11.8%)
Short Form Editor: no new voters needed; the data shows there would have been a non-Puppy nominee based on last year’s patterns
Long Form Editor: 765 new voters needed (48/33.5%/18.7%)

That’s a lot of new voters, and remember this is the number of voters needed to place only one non-Puppy work onto the final nominee list. The Novella and the Short Form Editor categories were close to not being sweeps, but the others were soundly overwhelmed. Think of how the Novel category worked: with much more excitement (700 more voters), the Puppies still placed 3 works onto the list. Finally, I don’t know how you’d add 1000+ new voters without also adding more Puppy voters.

Just for grins, let’s imagine what it would take to eliminate all Puppy influence in the Short Story category. To do that, we’d have to elevate John Chu’s “The Water that Falls from Nowhere” onto this year’s slate based on last year’s percentage vote. Chu—who won the Hugo—managed 43 nominations. To beat the highest 2015 Short Story puppy (230), he’d have to add 187 votes. Chu managed only 5% of the 2014 vote, and 55% of the total Hugo voters voted in the Short Story category. That gives us (187/55%/5%) = 6800 voters. So, if the Hugos added a mere 6,800 voters (and managed to keep all new Puppy votes out!), the Puppies would have been shut out of the Short Story category.

Of course, counter-slates could boost the % of votes going to authors, and there are other solutions that could tilt the field (fewer nominees per voter, more works per slate). What this post goes to show, though, is how organized and enthusiastic the 2015 Puppy-vote was: they not only swept categories, they swept categories decisively.

How Many Puppy Votes: Breaking Down the Hugo Math

The dust is just beginning to settle on the 2015 Hugo nominations. Here’s the official Hugo announcement and list of finalists. If you’re completely in the dark, we had two interacting slates—one called Sad Puppies led by Brad Torgersen, another called Rabid Puppies led by Vox Day—that largely swept the 2015 Hugo nominations.

The internet has blown up with commentary on this issue. I’m not going to get into the politics behind the slates here; instead, I want to look at their impact. Remember, Chaos Horizon is an analytics, not an editorial website. If you’re looking for more editorial content, this mega-post on File 770 contains plenty of opinions from both sides of this issue.

What I want to do here on Chaos Horizon today is look at the nominating stats. Using those, can we estimate how many Sad Puppies? How many Rabid Puppies?

For those who want to skip the analysis: my conclusion is that the total Puppy influenced vote doubled from 2014 to 2015 (from 182 to somewhere in the 360 range), and that this resulted in a max Puppy vote of 360, and a minimum effective Puppy block of 150 votes. We don’t yet have data that makes it possible to split out the Rabid/Sad effect.

Let’s start with some basic stats: there were 2,122 nominating ballots, up from 1,923 nominating ballots last year, making for a difference of (2,122-1,923) = 199 ballots. Given that Spokane isn’t as attractive a destination as London for WorldCon goers, what is the cause of that rise? Are those the new Puppy voters, Sad and Rabid combined?

If you take last year’s Sad Puppy total, you’d wind up with 184 for the max Puppy vote (that’s the number of voters who nominated Correia’s Warbound, the top Sad Puppy 2 vote-getter). If we add 199 to that, we’d get a temporary estimate of 383 for the max 2015 Puppy vote. We’ll find that this rough estimate is within spitting distance of my final conclusion.

Here’s a screenshot that’s been floating around on Twitter, showing the number of nominating votes per category. Normally, this wouldn’t help us much, because we couldn’t correlate min and max votes to any specific items on the ballot. However, since the Puppies swept several categories, we can use these ranges to min and max the total Puppy vote in the categories they swept. With me so far?

[Image: screenshot of the 2015 Hugo nominating statistics, showing the number of votes per category]

As you can see, that’s from the Sasquan announcement.

The Puppies swept Best Novella, Best Novelette, Best Short Story, Best Related Work, Best Editor Long Form, and Best Editor Short Form. This means all the votes shown in these categories are Puppy votes. Let me add another wrinkle before we continue: at times, the Sad and Rabid voters were in competition, nominating different texts for their respective slates. I’ll get to that in a second.

So, if we were to look at the max vote in those six categories, we’d get a good idea of the “maximum Puppy impact” for 2015:
Novella: 338 high votes
Novelette: 267 high votes
Short Story: 230 high votes
Related Work: 273 high votes
Editor Short Form: 279 high votes
Editor Long Form: 368 high votes

Presumably, those 6 “high” vote-getters were works that appeared on both the Sad and Rabid slates. You see quite a bit of variation there; that’s consistent with how Sad Puppies worked last year. The most popular Puppy authors got more votes than the less popular authors. See my post here for data on that issue. Certain categories (novel, for instance) are also much more popular than the other categories.

At the top end, though, the Editor long form grabbed 368 votes, which was within shouting distance of the Novella high vote of 338, and even very close to the Novel high vote of 387. I think we can safely conclude that’s the top end of the Puppy vote: 360 votes. I’m knocking a few off because not every vote for every text had to come from a Puppy influence. I’m going to label that the max Puppy vote, which combines the maximum possible reach of the combined Rabid and Sad Puppies vote.

Why was there such a drop between the 368 votes for Editor Long Form and the mere 230 votes for Short Story when both of these were Puppy-swept categories? Because not every Puppy voter was a straight slate voter: some used the slate as a guide, and only marked the texts they liked/found worthy/had read. Some Puppy voters appear to have skipped the Short Story category entirely. That’s exactly what we saw last year: a rapid falling off in the Puppy vote based on author and category popularity. This wasn’t as visible this year because the max vote was so much higher: even 50% of that 360 number was still enough to sweep categories.

Now, on to the Puppy “minimum.” This would represent the effective “block” nature of the Puppy vote: what were the lowest values they put forward when they swept a category? Remember, we know the 5th-place work had to be a Puppy nominee because the category was swept.

Novella: 145 low vote
Novelette: 165 low vote
Short Story: 151 low vote
Related Work: 206 low vote
Editor Short Form: 162 low vote
Editor Long Form: 166 low vote

Aside from Related Work, that’s enormously consistent. There’s your effective block vote. I call this “effective” because the data we have can’t tell us for sure that this is 150 people voting in lock-step, or whether it might be 200 Puppies each agreeing with 75% of the slate. Either way, it doesn’t matter: The effect of the 2015 Puppy campaign was to produce a block vote of around 150 voters.

If that’s my conclusion, why did the Best Related Work minimum come in at 206 votes? That’s the only category where the Rabid and Sad Puppies agreed 100% on their slate. Everywhere else, they split their vote. As such, that’s the combined block voting power of Rabid and Sad Puppies, something that didn’t show up in the other 5 swept categories.

So, given the above data, here’s my conclusion: The Puppy campaigns of 2015 resulted in a maximum of 360 votes, and an effective block minimum of 150 votes. That min-to-max ratio of 150/360 (41%) is almost the same as last year’s (69 for Vox Day at the lowest against 182 for Correia at the highest, a rate of 37.9%). That’s remarkable consistency. It doesn’t look like the Puppies stuck together any more than last year, just that there were far more of them. Of course, we won’t know the full statistics until the full voting data is released in August.
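
If you want to check those percentages yourself, here’s a quick Python sketch using the figures above (the 2014 numbers are Correia’s high and Vox Day’s low from last year’s stats):

```python
# Block vote as a share of the maximum Puppy vote, 2015 vs. 2014.
max_2015, min_2015 = 360, 150    # estimated max Puppy vote and effective block, 2015
max_2014, min_2014 = 182, 69     # Correia's high vote and Vox Day's low vote, 2014
total_ballots_2015 = 2122        # total 2015 nominating ballots

print(f"2015 block / max: {min_2015 / max_2015:.1%}")   # ~41.7%
print(f"2014 block / max: {min_2014 / max_2014:.1%}")   # ~37.9%
print(f"Max Puppy share of all nominators: {max_2015 / total_ballots_2015:.1%}")  # ~17.0%
```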

I think a lot of casual observers are going to be surprised at that 360 number. That’s a big number, representing some 17% of the total Hugo voters (360/2122). Those 17% selected around 75% of the final ballot. That’s the imbalance in the process so many observers are currently discussing.

What do you think? Does that data analysis make sense? Are you seeing something I’m not seeing in the chart? Tomorrow I’ll do an analysis of how much the non-Puppy works missed the slate by.

The New Yorker Publishes Essay on Cixin Liu

The New Yorker ran a very complimentary essay about Cixin Liu’s The Three-Body Problem and his other stories, positioning him as China’s Arthur C. Clarke. Check it out here; it’s an interesting read.

This comes on the heels of Liu’s Nebula nomination for The Three-Body Problem, and will accelerate Liu being thought of as a “major” SF author. Essays like this are very important in establishing an author’s short and long term reputation. Much of the mainstream press follows the lead of The New Yorker and The New York Times; this means other newspapers and places like NPR, Entertainment Weekly, and others are going to start paying attention to Cixin Liu. While the influence of these venues on the smaller SFF community (and the Hugos and Nebulas) isn’t as significant, mainstream coverage does bleed over into how bookstores buy books, how publishers acquire and position novels, etc.

The Dark Forest, Liu’s sequel to The Three-Body Problem, comes out on July 7th. Expect that to get major coverage and to be a leading candidate for the 2016 Nebula and Hugo. I currently have Cixin Liu’s The Three-Body Problem at #6 in my Hugo prediction, and that may be too low. All this great coverage and exposure does come very late in the game: Hugo nominations are due March 10th. Liu’s novel came out on November 11th, and that’s not a lot of time to build up a Hugo readership. It does appear that most people who read The Three-Body Problem are embracing it . . . but will it be enough for a Hugo nomination?

As a Hugo prediction site, the hardest thing I have to account for is sentiment: how much do people like an individual novel? How does that enthusiasm carry over to voting? How many individual readers grabbed other readers and said “you’ve got to read this”? We can measure this a little by the force of reviews and the positivity of blogging, but this is a weakness in my data-mining techniques. I can’t account for the community falling in love with a book. Keep in mind, initial Liu reviews were a little measured (check out this one from Tor.com, for instance, that calls the main character “uninspiring”), but then there was a wave of far more positive reviews in December, such as this one from The Book Smugglers. My Review Round-Up gathers some more of these.

Has the wheel turned, and are most readers now seeing The Three-Body Problem as the best SF book of 2014? For the record, that’s my opinion as well, and I did read some 20+ speculative works published in 2014. Liu’s novel has a combination of really interesting science, very bold (and sometimes absurd) speculation, and a fascinating engagement with Chinese history in the form of the Cultural Revolution. In a head-to-head competition with Annihilation, I think The Three-Body Problem wins. You’d think that would be enough to score a Hugo nomination, and maybe it will be. We’ll find out within the month.

2015 Hugo Prediction, Version 4.0

A lot has happened in the past month that will shape the 2015 Hugo Best Novel nominations. These are usually announced around Easter weekend, which has the unfortunate tendency of burying the nominations in the holiday. The deadline for nominations this year is March 10, 2015, so WorldCon voters still have time to get their nominations in.

In this post, I’ll focus on my final prediction: which 5 books I think will make the 2015 slate. Since the Nebula nominations just came out, these are likely to influence the Hugos in a substantial way. Over the past several years, about 40% of the eventual Hugo slate has overlapped with the Nebula slate. The Nebula slate is widely seen and discussed within the SFF community, and even if it only influences 4-5% of WorldCon voters, that’s enough to push a book from “borderline” to “nominated.”

Speaking of widely seen and widely discussed, the “Sad Puppies 3” slate is also likely to have a substantial influence on this year’s Hugo. Helmed by Brad Torgersen this year (and by Larry Correia in the past), the Sad Puppy 2 group of suggested nominees had a definite impact on the 2014 Hugos (placing 1 book into the Best Novel category, and several other nominees into other fiction categories), and there’s not a lot of evidence to suggest this campaign won’t be equally (or slightly more) successful this year. See my “Modeling Hugo Voting Campaigns” post for more discussion.

So where does that leave us? Here’s my top 5, based on awards history, critical acclaim, reviews, and popularity. Remember that The Martian by Andy Weir isn’t up here because of eligibility issues. Otherwise I’d have Weir at #3.

Reminder: Chaos Horizon is dedicated to predicting what is likely to happen in the 2015 awards, not what “should” happen. So, long story short, I’m not advocating any of these books for the Hugo, but simply predicting, based on past Hugo patterns, who is most likely to get a nomination.

1. Annihilation, Jeff VanderMeer: VanderMeer’s short book, the first in the Southern Reach trilogy that all came out this year, was one of the most critically acclaimed SF/weird fiction novels of recent years. It sold well, received a Nebula nomination, and provoked plenty of debate and praise, including high profile features in The New Yorker and The Atlantic. While the Hugos aren’t as susceptible to literary acclaim as the Nebulas, this is either a “love it” or “hate it” kind of book. Readers are either fascinated by VanderMeer’s weirdness and fungal based conspiracies or completely alienated by them. Since you can’t vote against a book in the nominating process, the “loves” will outweigh the “hates.” I have VanderMeer as my early Hugo favorite: I think he’ll win the Nebula, and that win will drive him to the Hugo.

2. Ancillary Sword, Ann Leckie: The Hugo tends to be very repetitive, nominating the same authors over and over again. Given how dominant Leckie’s 2014 Hugo win was (and overall award season), it’s hard to see her not getting another nomination. Even if Ancillary Sword is slightly less acclaimed than Ancillary Justice, it still placed first in my SFF critics collation list, and it has already garnered Nebula and BSFA noms. While I think it’s unlikely Leckie will win two Hugos in a row, the VanderMeer may prove too divisive for the Hugo audience. In that case, Leckie might emerge as the compromise pick. The Hugo preferential voting system can easily allow for something like that to happen.

3. Monster Hunter Nemesis, Larry Correia: Correia finished 3rd in the 2014 Hugo nominations, with only Leckie and Gaiman placing above him (Gaiman declined the nom). That put him very safely in the field, and the mathematics are in Correia’s favor for this year. While Monster Hunter Nemesis is a slightly odd choice for the Hugos, being 5th in a series and urban fantasy to boot, it’s hard to imagine Correia’s supporters abandoning him en masse in just one year. Despite the vigor of his campaign, Correia doesn’t have the broad support necessary to win a Hugo.

4. The Goblin Emperor, Katherine Addison: There are a number of edgier fantasy novels that could work their way into the Hugo. I’ve had the race down as between Robert Jackson Bennett’s City of Stairs and this book for a while. With Addison grabbing the Nebula nomination, that probably boosts her into the Hugo field. This was well-liked in certain circles and placed very high on the SFF critics list. It’s also fantasy, which has a definite block of support behind it—not every WorldCon voter reads SF.

Now things get interesting. I expect there to be an all-out war for the fifth spot, given that there are 4-5 viable contenders. This’ll come down to who gets the vote out, not necessarily which novel is “better” than the other novels.

5. Skin Game, Jim Butcher: Skin Game was part of the “Sad Puppy 3” slate, but Butcher’s appeal extends well beyond that block of voters. While Butcher has never gotten much Hugo love in the past, he is one of the most popular writers working in the urban fantasy field, and his Harry Dresden (EDIT 3/12/15: Stupid typo on my part—I originally wrote “Henry.” Names are always hard to catch. I’ve read multiple of these novels, too!) novels have been consistently well-liked and well-loved by fans. Even WorldCon voters who don’t agree with the Sad Puppy 3 argument may look at the list, see Butcher, and think, Why not? If Correia can make the slate, so too can Butcher—and Butcher might be even more popular in the Sad Puppy realm than Correia. On the negative, this is #15 in a series, and that’s a tough sell to new readers. I’ll be fascinated to see how the vote turns out on this one.

Just missing:

6. The Three-Body Problem, Cixin Liu: Liu is a best-selling Chinese science fiction author, and this is his first novel translated into English. Liu’s chances have been greatly boosted by his Nebula nomination: this is going to put Three-Body front and center in SF fandom discussions. But is this a case of too little, too late? Are people rushing out to buy the Liu, and will they have time to read it before the Hugo voting closes? Liu’s novel will be very appealing to certain groups of SF WorldCon voters since it has throwback elements to hard SF writers like Arthur C. Clarke. I think it’ll be very close between Butcher and Liu (and maybe even Addison), and we’re dealing with guesswork here, not solid facts. There’s simply not enough data to model how a Chinese novel might do against an urban fantasy novel supported by a voting campaign.

7. Lock In, John Scalzi: Although Scalzi isn’t getting a ton of buzz right now, he does have 4 recent Best Novel nominations and a 2013 win for Redshirts. That indicates a broad pool of support in WorldCon voters; Scalzi is an author they’re comfortable with. While he might not be #1 on a lot of ballots, is he #4 or #5 on a plurality? We saw an old-standby in Jack McDevitt grab a Nebula nomination this year. Could Scalzi play the same role in the 2015 Hugos? You can never assume that the Hugos or Nebulas won’t be repetitive.

So, there’s my field. I’m going to drop City of Stairs down to 8th place: no Nebula nom really hurts it. I’m leaving McDevitt off the Hugos; he’s never had much chance there. Charles Gannon received both a Nebula nomination and an endorsement on the Sad Puppy 3 slate. Gannon isn’t as popular as Correia or Butcher, so I don’t think as highly of his chance. I’m slotting him in at #10. That gives us:

8. City of Stairs, Robert Jackson Bennett
9. Words of Radiance, Brandon Sanderson
10. Trial By Fire, Charles Gannon
11. Symbiont, Mira Grant
12. The Mirror Empire, Kameron Hurley
13. The Peripheral, William Gibson
14. My Real Children, Jo Walton
15. Echopraxia, Peter Watts

So, that’s how Chaos Horizon thinks it’ll play out. What do you think? Who is likely to grab a nomination in 2015?

Goodreads Popularity and the Hugo and Nebula Contenders, February 2015

It’s the last of the month, so time to update my popularity charts. Now that we have the Nebula slate, I’m debuting a new chart:

Chart 1: Nebula Nominee Popularity on Goodreads and Amazon, February 2015
[Chart image: Nebula nominee popularity on Goodreads and Amazon, February 2015]

Nicholas Whyte over on From the Heart of Europe has been tracking similar data for several years now, although he uses LibraryThing instead of Amazon. He’s got data for a few different awards going several years back. Like me, he’s noted that popularity on these lists is not a great indicator of winning. A few weeks ago (here and here) I took a close look at how Goodreads numbers track with Amazon and BookScan. The news was disappointing: the numbers aren’t closely correlated. Goodreads tracks one audience, Amazon another, and BookScan a third. The ratio between Amazon rankings and Goodreads rankings can be substantial. Goodreads tends to overtrack younger readers (under 40) and books with more internet buzz. You can see how Amazon shows McDevitt, Liu, Addison, and Leckie to be about the same level of popularity, whereas Goodreads has Leckie 10x more popular than McDevitt. What do we trust?

The real question is not who we trust, but how closely the Goodreads audience correlates to either the SFWA or WorldCon voters. It’s hard to imagine a book from the bottom of the chart winning over more popular texts, but McDevitt has won in the past, and I don’t think he was that much more popular in 2007 than in 2015. I think the chart is most useful when we compare like to like: if Annihilation and Ancillary Sword are selling to a somewhat similar audience, VanderMeer has gotten his book to more readers than Leckie has. Hence, VanderMeer probably has an advantage. I’m currently not using these numbers to predict the Nebulas or Hugos, although I’d like to find a way to do so.

Now, on to the big chart. Here’s popularity and reader change for Goodreads for 25+ Hugo contenders, with Gannon and McDevitt freshly added:

[Chart image: Goodreads popularity and monthly reader change for 25+ Hugo contenders, February 2015]

One fascinating thing: no one swapped positions this month. At the very least, Goodreads is showing some month to month consistency. Weir continues to lap the field. Mandel did great in February but that didn’t translate to a Nebula nomination: momentum on these charts doesn’t seem to be a good indicator of Nebula success. I’ll admit I thought Mandel’s success on Goodreads was going to translate to a Nebula nomination. Instead, it was Cixin Liu, much more modestly placed on the chart, who grabbed the nomination. Likewise, City of Stairs was doing better than The Goblin Emperor, but it was Addison who got the nod. At least in this regard, critical reception seemed to matter more than this kind of popularity.

Remember, Chaos Horizon is very speculative in this first year: what tracks the award? What doesn’t? I don’t know yet, and I’ve been following different types of data to see what pans out.

Interestingly, McDevitt and Gannon debut at the dead bottom of the chart. That’s one reason I didn’t have them in my Nebula predictions. That’s my fault and my mistake; I need to better diversify my tracking by also looking at Amazon ratings. I’ll be doing that for 2015, and the balance of Amazon and Goodreads stats might give us better insight into the field.

As always, if you want to look at the full data (which goes back to October 2014), here it is: Hugo Metrics.

Modeling Hugo Voting Campaigns

What’s Hugo season without some impassioned discussion? 2015 is shaping up to be just as vehement a year as 2014. As I’m fond of saying, Chaos Horizon is an analytics, not an opinion, website. While that line can be delicate—and I sometimes don’t do a great job staying on the analytics side—neutrality has always been my goal. I want to figure out what’s likely to happen in the Hugo/Nebulas, not what should happen. If you want to find opinions about Hugo campaigning, you’ve got plenty of options.

At Chaos Horizon, I try to use data mining to predict the Hugo and Nebula awards. The core idea here is that the best predictor of the future is the past. I make the assumption that if certain patterns have been established in the awards, those are likely to continue; thus, if we find those patterns, we can make good predictions. There are flaws with this methodology—it can’t take into account year-by-year changes in sentiment, nor shifts in voting pools, and data mining tends to slight new or emerging authors—but it gives us a different way of looking at the Hugo and Nebula awards, one that I hope people find interesting. An analytics website like Chaos Horizon is most useful when used in conjunction with other more opinion driven websites to get a full view of the field.

What I need to work on is figuring out how to model the effectiveness of a campaign like “Sad Puppies 3” on the upcoming Hugo awards. For those out of the loop, a quick history lesson: over the past several years, we’ve seen several organized Hugo “campaigns” (for lack of a better word) that have placed various works—for various reasons—onto the final Hugo slate. Larry Correia’s “Sad Puppy” slate (we’re up to Sad Puppies 3 in 2015) has been the most effective, but the campaign to place Robert Jordan’s entire series The Wheel of Time also worked remarkably well in 2014. We’ve also seen some influence from eligibility posts (such as in Mira Grant’s case) on the final Hugo slate.

At this point, I think the effectiveness of campaigning is clear. If an author (or group of authors and bloggers) decides to push for a certain text (or texts) to make a final Hugo slate, and if they have a large and passionate enough web following, they can probably do so. Whether or not that is good for the awards is another question, and one that I’m not going to get into here. A quick google of “Hugo Award controversy” will find you plenty of more meaningful opinions than mine.

Instead, I want to focus on this question: How much influence are campaigns likely to have this year? Let’s refresh our memory on the Hugo rules, taken right from the Hugo website:

Nominations

Nominations are easy. Each person gets to nominate up to five entries in each category. You don’t have to use them all, but you have the right to five. Repeating a nomination in the same category will not affect the result; for instance, if you nominate the same story five times for Best Short Story, it will count as only a single nomination for that story. When all the nominations are in, the Hugo Administrator totals the votes for each work or person. The five works/people with the highest totals (including any ties for the final position; see below) go through to the final ballot and are considered “Hugo Award nominees.”

So, from January 15th to March 10th, eligible WorldCon members are casting nominating ballots to choose the final slate for the 2015 Hugos. Who is eligible to vote?

Anyone who is or was a voting member of the 2014, 2015, or 2016 Worldcons by the end of the day (Pacific Time/GMT – 8) on January 31, 2015 is eligible to nominate. You may nominate only once, regardless of how many of those three Worldcons you are a member.

You can either be an attending member (i.e. someone who actually goes to the WorldCons) or a “supporting member,” which costs you $40 and allows you to participate in the Hugo process. That “supporting member” category has been the buzzed about issue. While $40 may seem like a lot, that nets you the “Hugo Voting Packet,” which includes most of the Hugo-nominated works (author/publishers decide if they’re included). If you like e-books, $40 is a decent bargain for a variety of novels, novellas, novelettes, and short stories. Supporting membership also lets you nominate for at least 2 years (the year you join and then the following year), even if you only get to vote on the final slate once. All around, that’s a pretty good deal.

EDIT (see comments): I’ve been told that the “Voting Packet” is not guaranteed by the Hugo Awards. This has been the practice for the last several WorldCons, but it depends more on rights-holders and the individual WorldCon committees as to whether it will happen in any given year. So don’t join for the sole reason of grabbing a packet!

That’s a fair amount of info to wade through, and it shows how the Hugo nomination process is relatively complex. Nominations combine last year’s Hugo voters with a new crop of attending and supporting members. That year-to-year carry-over means you don’t start fresh, and this is why the Hugo often feels very repetitive: if voters voted for an author the previous year, they can vote for that author again. I’d go farther than that: they’re very likely to vote for that same author again. This is one of the reasons I have Leckie predicted so high for 2015.

So, preliminaries aside, let’s start looking at some data. What did it take to get onto the ballot in 2014? In this case, I’m mining the data from the 2014 Hugo Statistics, which gives us all the gritty details on what went down last year.

Table 1: Minimum Votes + Percentages to Make the Hugo Slate, 2014
[Table image: minimum votes and percentages to make the 2014 Hugo slate]

That chart puts into sharp relief why campaigns work: you can get a Hugo Best Novel nomination for fewer than 100 votes. The other categories are even less competitive: a mere 50 votes to make the Short Story final slate? Given the way the modern internet works, putting together a 50 vote coalition isn’t that difficult.

Now, how did the various texts from the Wheel of Time campaign and the Sad Puppy 2 campaign perform?

Table 2: Nomination Results for 2014 Hugo Campaign Books
[Table image: nomination results for 2014 Hugo campaign books]

A couple notes: Sad Puppy 2 got their top nominees in well above the minimums, particularly in Correia’s case. You can also see that, even within the Sad Puppy 2 campaign, different authors received different numbers of votes. 184 voted for Correia, but only 91 (less than 50%) followed his suggestion and also voted for Hoyt.

What can we learn from this chart?
1. Correia grabbed 11.5% of the vote and Jordan around 10%. Correia also ran a Sad Puppy 1 campaign in 2013 that netted him 101 votes and 9% (he placed 6th, just missing the final ballot). Using that data, I could predict 2015 in two different ways: I could average those campaigns out, and argue that a vigorous Hugo campaign will average around 10% of the total vote. While a campaign brings supporters in, it also brings in an opposition party that wants to resist that vote. 10% seems a pretty reasonable estimate. The other way to model this is to note that Correia’s number of voters increased from 101 in 2013 to 184, an impressive 80% increase. If Correia matches that increase this year, he’d jump from 184 to 330 votes. In an earlier post, I estimated the total nomination ballots for this year to be around 2350 (that’s pure guesswork, sadly). 330/2350 = 14.0% (a quick code sketch of both projections follows this list). Either way, the model gets us in the same ballpark: Sad Puppies 3 is likely, at the top end, to account for between 10% and 15% of the 2015 Hugo nominating vote. For good or bad, that will be enough to put the top Sad Puppy 3 texts into the Hugo slate.
2. The data shows that the Sad Puppy 2 campaign fell off fairly fast from the most popular authors like Correia to less popular authors like Torgersen (60% of Correia’s total) and Hoyt (50% of Correia’s total) to Vox Day (33% of Correia’s total). Torgersen and Vox Day made the final slate based on the relative weakness of the Novella and Novelette categories. While I don’t track categories like Novella, Novelette, or Short Story on Chaos Horizon (there’s not enough data, and I don’t know the field well enough), I expect a similar drop-off to occur this year. If you want to assess the impact of the whole Sad Puppy 3 slate, think about which authors are as popular as Correia and which aren’t.
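
As flagged in point 1, here’s a rough Python sketch of those two projections (the 2,350-ballot figure is, again, my own guess from the earlier post, not an official number):

```python
# Two back-of-the-envelope projections for the top Sad Puppies 3 vote.
correia_2013, correia_2014 = 101, 184   # Sad Puppies 1 and 2 high votes
estimated_2015_ballots = 2350           # rough guess at total 2015 nominating ballots

# Method A: assume a vigorous campaign averages roughly 10% of the nominating vote.
method_a_share = 0.10

# Method B: assume the roughly 80% growth from 2013 to 2014 repeats in 2015.
growth = correia_2014 / correia_2013                  # ~1.8
method_b_votes = correia_2014 * growth                # ~335 (I rounded to 330 above)
method_b_share = method_b_votes / estimated_2015_ballots

print(f"Method A: ~{method_a_share:.0%} of the nominating vote")
print(f"Method B: ~{method_b_votes:.0f} votes, about {method_b_share:.1%}")  # ~14%
```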

If we put those two pieces of data together, we get my “Hugo Campaign Model”:
1. A Hugo campaign like “Sad Puppies 3” will probably account for 10-15% of the 2015 nominating vote.
2. The “Sad Puppies 3” slate will fall off quickly based on the popularity of the involved authors.

How does that apply to the 2015 Sad Puppy Novel slate? Brad Torgersen (running it this year instead of Correia) put forth 5 novels:

The Dark Between the Stars – Kevin J. Anderson – TOR
Trial by Fire – Charles E. Gannon – BAEN
Skin Game – Jim Butcher – ROC
Monster Hunter Nemesis – Larry Correia – BAEN
Lines of Departure – Marko Kloos – 47 North (Amazon)

Based on my modeling, I expect Monster Hunter Nemesis and Skin Game to make the 2015 Hugo slate. Butcher is even more popular than Correia. As such he should hold on to (or even improve upon) most of Correia’s campaign vote. The other authors are not as popular, and will probably hold on to between 60%-30% of the Sad Puppy 3 vote. They’ll probably wind up in the 8-12 spots, just like Hoyt did last year.

As for the other categories—you’ve got me there. If I had to guess, I’d pick the 2 most popular Sad Puppy 3 choices for each category (I don’t even know how to begin doing that) and predict them as making the final slate. That’s sort of how the math worked last year, with 2 “Sad Puppy” slate nominees making it into Novelette and Novella. It’s a more robust slate this year, which might actually hurt the chances of more texts making it (by dividing the vote).

Of course, there could be a major change in the nominating pool this year: more voters mean less predictability. Still, given the lack of “huge” 2014 SFF books (The Martian probably isn’t eligible, and we didn’t get a Martin/Mieville/Willis/Bujold/Vinge/Stephenson book this year), I don’t anticipate it being particularly difficult to make the 2015 slate. It’s hard to predict the “Sad Puppy” support doubling or tripling in just a year’s time. It’s equally difficult to imagine “Sad Puppy” support collapsing by 50% or 75%. We’ll find out soon enough, though.

So, that’s the model I’m going to use to handle Hugo campaigns. Satisfied? Unsatisfied?

Amazon vs. Goodreads Reviews

Time to get technical! Break out the warm milk and sleeping pills! Earlier in the week, I took a look at Amazon and Goodreads ratings of the major Hugo candidates. An astute viewer will notice that those rankings don’t exactly line up (nor do the number of ratings, but I’ll address that in a later post). Which is more representative? Which is more accurate?

At Chaos Horizon, I strive to do two things: 1. To be neutral, and 2. Not to lie. By “not lie,” I mean I don’t try to exaggerate the importance of any one statistical measure, or to inflate the reliability of what is very unreliable data. So here’s the truth: neither the Goodreads nor the Amazon ratings are accurate. Both are samples of biased reading populations. Amazon over-samples the Amazon.com user-base, pushing us towards people who like to read e-books or who order their books online (would those disproportionately be SFF fans?). Goodreads is demographically biased towards a younger audience (again, would those disproportionately be SFF fans? Worldcon voters?). Stay tuned for more of these demographics issues in my next post.

As such, neither gives a complete or reliable picture of the public reaction to a book. If you follow Chaos Horizon, you’ll know that my methodology is often to gather multiple viewpoints/data sets and then to try to balance them off of each other. I’ve never believed the Hugo or Nebula solely reflects quality (which reader ratings don’t even begin to quantify). At the minimum, the awards reflect quality, reader reception, critical reception, reputation, marketing, popularity, campaigns, and past voting trends/biases. The Chaos Horizon hypothesis has always been that when a single book excels in four or five of those areas, then it can be thought of as a major candidate.

Still, can we learn anything? Let’s take a look at the data and the differences in review scores for 25 Hugo contenders:

Table 1: A Comparison of Reader Rankings on Goodreads and Amazon.com
[Table image: comparison of Goodreads and Amazon reader ratings for 25 Hugo contenders]

The column on the far right is the most interesting one: it represents the difference between the Amazon score and the Goodreads score. As you can see, these are all over the place. There are some general trends we can note:

1. Almost everyone does better on Amazon than Goodreads. The average Amazon boost is .19 stars, and only three books out of 25 scored worse on Amazon than Goodreads. Amazon has a higher bar of entry to rate (you have to type a review, even if it’s one word; Goodreads lets you just enter a score), so I think more people come to Amazon if they love/hate a novel.
2. There doesn’t seem to be much of a pattern regarding who gets a higher ranking bump. Moving down the top of the list, you see a debut SF novel, a hard SF novel, an urban fantasy novel, a YA novel, etc. It’s a mix of genres, of men and women, of total number of ratings, and of left-leaning and right-leaning authors. I’d have trouble coming up with a cause for the bumps (or lack thereof). So, if I had to predict the size of a bump on Amazon, I don’t think I could come up with a formula to do it. I’ll note that since Amazon bought Goodreads, I think the audiences are converging; maybe in a few years there won’t be a bump.
3. If you want to use either ranking, you’d have to think long and hard about what audience each is reflecting, and what you’d want to learn from that audience’s reaction. It would take a major effort to correlate/correct the Amazon.com audience or Goodreads audience out to the general reading audience, and I’m not sure the effort would be worth it. Each would require substantial demographic corrections, and I’m not sure what you would gain from that correction. You’d have to make so many assumptions that you’d wind up with a statistic that is just as unreliable as Goodreads or Amazon.

I think “Reader Ratings” are one of the most tantalizing pieces of data we have—but also one of the least predictive. I’m not sure Amazon or Goodreads tells you anything except how the users of Amazon or Goodreads rated a book. So what does this mean for Chaos Horizon, a website dedicated to building predictive models for the Hugo and Nebula Awards?

That reader ratings are not likely to be useful in predicting awards. Long and short of it, Amazon and Goodreads sample different reading populations, and, as such, neither is fully representative of:
1. The total reading public
2. SFF fans
3. Worldcon Voters
4. SFWA voters
Neither is 100% (or even 75% . . . or 50%) reliable in predicting the Hugo and Nebula awards. So is it worth collecting the data? I’m still hopeful that once we have this data and the Hugo + Nebula slates (and eventually winners), I can start combing through it more carefully to see if it comes up with any correlations. For now, though, we have to reach an unsatisfying statistical conclusion: we cannot interpret Amazon or Goodreads ratings as predictive of the Hugos or Nebulas.

2015 Hugo Contenders: Amazon and Goodreads Ratings, January 2015

Here’s an update to the “Ratings” chart for the major Hugo candidates. What I’ve done is look at the Goodreads and Amazon ratings for each of 25 possible Hugo books, and sorted those out by Goodreads ratings. Here’s the data (as of January 31st); comments follow.

[Chart image: Goodreads and Amazon ratings for 25 Hugo contenders, January 2015]

I’ve never felt that Goodreads or Amazon ratings accurately measure the quality of the book. They probably measure something closer to “reader satisfaction.” Take some widely hailed classics of American literature: William Faulkner’s As I Lay Dying only manages a 3.73 on Goodreads (on 80,000 ratings) and 3.9 on Amazon (on 404 ratings). Toni Morrison’s Beloved scored a 3.71 on Goodreads (on 185,000 ratings) and a 3.9 on Amazon (on 900 ratings). Whether you like those books personally or not—they’re both difficult and divisive—a 3.7 rating is ridiculous. Moby-Dick does worse, grabbing a 3.41 rating on Goodreads (on 320,000 ratings). Huck Finn does a little better, at 3.78 (on 840,000 ratings). Unless you believe that the classics of American literature are awful—believe me, many of my students do—we have to take these ratings with a heavy dose of salt.

Remember, though, I’m casting a wide net to see if we can find anything that’s predictive. Maybe these will be, maybe not. Maybe they should be, maybe not. We can’t know until we try. So, the real question is this: can “reader satisfaction” tell us anything about the Hugos or a possible Hugo slate? I don’t know.

Some quick observations. You’ll note that sequels dominate the ratings. That’s a structural issue: everyone who didn’t like the first book bailed out on all future volumes, leaving only enthusiastic fans. As long as the book satisfies that audience, you’ll get great ratings.

After the sequels, The Martian does well, with a very strong 4.36/4.6 Goodreads/Amazon score. Bennett and Addison also put up good showings, with a 4.19/4.4 and 4.15/4.4; that definitely helps boost their Hugo chances over something that did more poorly, like The Mirror Empire way down at 3.66/4.0.

VanderMeer does surprisingly badly in this metric, scraping by with a 3.67/3.8 rating. That’s probably an example of a book being “unsatisfying” to many readers. Annihilation is a strange text, and you could go in expecting one kind of novel (more traditional science fiction?) and wind up with a strange, spooky, somewhat incomprehensible book of weird fiction. That’s going to push ratings down. I don’t expect this to hurt VanderMeer (people either love or hate the book), but it’s definitely interesting to note.

Most books are clustered in a fairly narrow range, from 4.2 to 3.8. I wouldn’t make too much of an issue of a slight difference like that; you can’t claim much by saying one book was ranked 4.1 and another 3.9.

And why do people hate California so much? I haven’t read it, but I don’t think I’ve seen an Amazon score below 3.0 for a professionally published book before. A 3.26/2.9 is truly awful.

Lastly, let me note that there are some inconsistencies between Amazon and Goodreads scores. That reflects the different constituencies of those two websites. I’ll be back later this week with a post on that very issue. Which is more reliable? Can we tell? Could we correlate these to actual sales? Chaos Horizon is on the case!
