I’ll cut right to the chase: I’m bullish on Uprooted‘s chances in the 2016 awards season. A lot of factors point to this being the break-out novel of 2015: Ellen DeGeneres snapped up the movie rights, and the book has been doing absolutely gangbusters on Amazon and Goodreads. As of late September, it has 620 reviews on Amazon with a 4.5 score, and 16,133 ratings (!) on Goodreads with a 4.21 score. Those are very high numbers for a Hugo and Nebula contender. Compare them to last year’s Hugo winner, The Three-Body Problem: Cixin Liu’s book is still sitting (after winning the Hugo) at 467/4.4 on Amazon, and 7,041/4.0 on Goodreads.
Part of the reason Uprooted has been a break-out book is its unusual approach. This novel, from cover to content, has a “fairy-tale” feel without imitating any specific fairy tale. It tells the story of a young woman named Agnieszka. She’s taken to a wizard’s tower, where she at first seems to be a prisoner and then becomes a witch-in-training. We’ve got shades of Rapunzel, Beauty and the Beast, etc. Then the novel shifts gears, and we end up having a confrontation with the corrupted Wood, a kind of moving forest of evil that threatens the land. Uprooted is a wide-ranging book, bringing in plots that involve magic education, romance, and even war.
Since fairy tales are one of the obvious roots of fantasy literature, this book has plenty of appeal for a broad fantasy audience. More than that, the fairy-tale approach makes this seem an original and important book. It’s very different from your typical epic fantasy, and I think this will follow in the footsteps of something like The Goblin Emperor. If anything, I think this book is stronger, more original, and more appealing than the Addison. Add in that this book seems to be about 10x as popular as Addison’s book . . .
Novik has been writing her well-regarded Temeraire series for years, a kind of Napoleonic-wars-with-dragons series, and this is her first step outside that familiar territory. Hugo and Nebula voters tend to reward authors when they begin new series. Since this is an accessible starting point for new readers, it makes more sense to nominate Uprooted than volume #7 or #8 of Temeraire. Novik does have one prior Hugo nomination, back in 2007 for His Majesty’s Dragon.
A lot of the question for 2016 will revolve around the Hugo controversies swirling in the field. Those uncertainties might substantially change voting patterns. If such changes were out of the equation, I think Novik would be a shoo-in for the Hugo and Nebula. I wouldn’t even be shocked to see Novik win some major awards next year, although it’s probably too early to predict that.
Even with various campaigns, expanded Hugo electorates, and other kerfuffles, I think Uprooted has an excellent chance. It’s widely read, original, positively reviewed, and written by an author with a prior Hugo nomination. That’s pretty much the Hugo profile in a nutshell.
Good but not great mainstream coverage. The mainstream only tends to push books if they’re by super well-known writers (William Gibson, Neil Gaiman); I think Uprooted was more of a surprise word-of-mouth success.
This is as positive a slate of reviews from the SFF world as you’ll see. It’s already being hailed as a “great book,” an “enchanted forest,” one of the best of the year, and a new and exciting step forward for Novik. I think these reviews are more influential in the Hugo/Nebula process than the mainstream ones. What’s more important is that its early reviews seem to have led to word-of-mouth success. There will be times a book gets positive SFF coverage but doesn’t break out of the core; Uprooted seems to have a very broad reach.
Bree’s Book Blog
Bookish and Awesome (3.5 out of 5)
SFF Book Reviews (9 out of 10)
Woven Magic (10 out of 10)
Bibliodaze (5 out of 5)
Bibliosanctum (4.5 out of 5)
Relentless Reading (4 out of 5)
Bibliotropic (4 out of 5)
Love is not a Triangle
I could have added another dozen links if I wanted to—this book has been broadly reviewed and discussed amongst my fellow WordPress bloggers. As you’d expect, we’re a feisty bunch, and no one agrees on everything. Overall, though, the reviews are highly positive, singling this out as a unique, original, and unexpected 2015 read. Broad reception really helps in the Hugos: the more people read a novel, the more people can fall in love with it. Remember, you get a nomination based on having a highly passionate group of readers, not necessarily across-the-board consensus. It looks to me like Uprooted has exactly that.
My Take: I’ve been trying to add short microreviews of my own to these Review Round-Ups; I think readers need to be able to see my own biases and tastes so they can decide whether or not to trust me. My reading experience with Uprooted split neatly in half: I greatly enjoyed the first part of the book. I thought its ability to weave the familiar with the unfamiliar was striking, and drawing on fairy tales without reproducing classic fairy tales was original and exhilarating. I liked Novik’s prose style, with its well-crafted sentences and scenes, and the Wood certainly comes across as frightening and sufficiently mysterious.
Then the book shifts gears in the second half and settles into Novik’s more familiar territory, becoming a kind of war novel between the Wood and the main characters. This part didn’t work for me; I never felt I fully grasped the “rules” of what the Wood or the wizards/witches could do. This made the battle sequences feel incoherent—instead of being involved in the war, I was taken out of it by the new powers that were constantly being introduced. I imagine that other readers might feel differently; there’s enough romance to keep driving the plot forward, and others might find those battles fun. Individual taste is individual taste; if the book had ended on page 200, I probably would have given it an 8.5 or a 9, and it would have been my third or fourth favorite book so far this year (the one-two punch of Seveneves and Aurora is currently at the top of my list). I found the second half boring, and that drags my score down to a 7.5.
So, what do you think of Uprooted‘s chances? 9 months from now will Novik be celebrating a Hugo/Nebula sweep? I think it’s a strong possibility, but this year is going to be so chaotic that it’s too early to say.
There’s a last few pieces of data I want easy access to before I move on from 2015. It’s been quite a year, and I think 2016 is going to be even more chaotic.
Not only do we have the specter of further Hugo voting kerfuffles, but 2015 is one of the stronger award years in recent memory. Of the last 7 Hugo Best Novel winners, 5 published new novels in 2015. Bacigalupi, Walton, Scalzi, Cixin Liu, and Leckie are all back at the party with well-received works—and that’s not including past Hugo heavy-hitters like Neal Stephenson (I think Seveneves will be a major player) and Kim Stanley Robinson (Aurora likely for a Nebula nom?).
That’s 7 books already . . . and you have to think Pratchett will grab plenty of sentimental votes for the final Discworld novel. Then we’ve got a whole range of other authors with competitive novels in 2015: George R.R. Martin’s A Knight of the Seven Kingdoms (I don’t know whether this is eligible or not as a novel, but if it is, watch out), Ken Liu’s debut Grace of Kings, Brandon Sanderson’s Shadows of Self (without the Puppies, he was awfully close to breaking through for a nom this year; see the stats below), N.K. Jemisin’s The Fifth Season (1 Hugo Best novel nom and 3 Nebula Best novel noms for Jemisin in the last 5 years), the list goes on and on. And I haven’t even mentioned potential breakout novels this year, like Naomi Novik’s Uprooted. Throw in the chaos of various Puppy picks, and you’ve got a very murky year coming up. I better roll up my sleeves and get to work!
So there are two sets of numbers I want to look at today: sweep margin in various categories and the Best Novel nominations with the Puppy picks removed. Both will give us some good insight as to what might happen next year. Once again, I’m using the official Hugo stats for this info, which can be accessed here: 2015HugoStatistics.
If we look at categories where the Puppies, Sad and Rabid combined, had at least 5 picks, we can calculate something I’m calling the “sweep margin.” Basically, we subtract the Puppy #5 nominating number from the highest non-Puppy pick. This tells us how close the category came to getting swept. If the number is greater than 0, that means the category was swept, and we know how many votes it would have taken the non-Puppies to break up that sweep. If the number is less than zero, that means the category wasn’t swept. Note that I’m not taking withdrawals into account here; I just want to look at the raw numbers. Here’s the table:
A little hard to parse, I know. Let’s think about Novella first: this chart tells us that Patrick Rothfuss’s The Slow Regard of Silent Things just missed the ballot by a mere 21 votes. So the Puppy sweep in Novella wasn’t particularly dominant; just a few more voters and one non-Puppy would have made it in. Does that mean a category like Novella might not be swept next year?
Look at another category, though, like Short Story. Since there are more short stories published in any given year, the vote is usually more spread out, and this time the Puppies were much more dominant. They actually had seven stories that placed above every non-Puppy pick. Two eventually declined, Bellet and Grey, but I’m not tracking that in this study. If we look at the #5 Puppy story, “Turncoat” by Steve Rzasa, it got 162 votes. The highest non-Puppy story, “Jackalope Wives” by Ursula Vernon, only got 76 votes. That’s a hefty margin of 86 votes—Vernon would have had to double her vote total to make it into the field (before Bellet and Grey declined, but you can’t count on people declining). Possible? Sure. I think the Hugo nomination vote can double next year—but the Puppy vote will also increase. Depending on Puppy strategy, it’s very possible that Short Story or Best Related Work will be swept next year.
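The sweep-margin arithmetic is simple enough to sketch in a few lines of Python. In this sketch, only the 162 and 76 figures come from the actual 2015 Short Story stats quoted above; the other vote totals are placeholders for illustration.

```python
# Sketch of the "sweep margin" calculation: the Puppy #5 nominating total
# minus the highest non-Puppy total. A positive result means the category
# was swept, and the value is how many extra votes the top non-Puppy work
# needed to break the sweep.

def sweep_margin(puppy_votes, non_puppy_votes):
    puppy_5th = sorted(puppy_votes, reverse=True)[4]  # 5th-highest Puppy pick
    best_non_puppy = max(non_puppy_votes)
    return puppy_5th - best_non_puppy

# Short Story example from above: the #5 Puppy story ("Turncoat") got 162
# votes; the top non-Puppy story ("Jackalope Wives") got 76. The other
# totals are placeholders.
print(sweep_margin([230, 220, 210, 200, 162], [76, 60, 55]))  # 86
```

A negative result, as with Novella, tells you the category wasn’t swept and by how much.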
I’m most interested in Best Novel here at Chaos Horizon; it’s the area of SFF I know best, and what I’m most interested in reading and predicting. Despite all the Puppy picks, Leckie was safely in the 2015 field: she placed 3rd in the raw nomination stats, above 5 different Puppy picks. Even Addison’s The Goblin Emperor (256 votes) and Cixin Liu’s The Three-Body Problem (210) beat two Puppy picks, Trial by Fire (199) and The Chaplain’s War (196). Those two picks, Gannon for Sad and Torgersen for Rabid, are examples of the Sad and Rabid picks not overlapping. The Best Novel category received enough attention, and enough votes, and was centralized enough, that the non-Puppy voters were able to overcome the Puppy votes when they didn’t team up. I think that’s a key piece of evidence for next year’s prediction: when the Sad and Rabid picks overlap, they’ll be very strong contenders. If they’re separate, I don’t think they’ll be able to beat the motivated Hugo nominators. We’ll see, of course.
That leaves out one obvious point—if the Sad/Rabid picks overlap with something already in the mainstream, that will definitely boost the Sad/Rabid chances.
Last thing is to look at the nomination stats with the Puppies taken out. I need these for my records, because they’ll allow me to do year-to-year comparisons for authors. What I’m going to do is subtract all the Puppy picks and then recalculate the percentages. Skin Game got 387 votes, so I’m just going to brute-force subtract 387 from the 1827 votes and recalculate. Were all of Butcher’s votes from the Sad Puppies? Probably not, but Butcher doesn’t have a strong history of vote-getting in the Hugos. By erasing those 387 votes, I’ll restore the percentages to what they might have looked like otherwise, which will help my year-to-year tracking. I like to ask questions like: is Sanderson getting more or less popular?
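That brute-force recalculation can be sketched in a few lines. The 387 (Skin Game), 1827 (Best Novel ballots), and 256 (The Goblin Emperor) figures are the ones quoted in this post; the function itself is just an illustration of the method, not the exact spreadsheet I used.

```python
# Strip a slated work's full vote count from the ballot pool, then
# recompute each remaining work's percentage against the adjusted total.

def recalc_percentages(votes, total_ballots, slate_votes_removed):
    adjusted = total_ballots - slate_votes_removed
    return {title: round(100 * v / adjusted, 1) for title, v in votes.items()}

print(recalc_percentages({"The Goblin Emperor": 256}, 1827, 387))
# {'The Goblin Emperor': 17.8}
```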
Here’s that chart:
Fun little note: they misspelled Katherine Addison’s name as Katherine Anderson in the Hugo nominating stats. It’s incredibly hard to enter all the data correctly when you’ve got so many data-points! EDIT 9/13/15: The error has been fixed! Check the comments for the full story.
Those percentages make a lot of sense. Leckie grabbed 23.1% for Ancillary Justice in 2014, and I don’t think the sequel was quite as popular.
A couple things of note: Weir probably would have been ineligible, and many Hugo voters knew that. If that weren’t the case, I expect he would have easily made the top 5. VanderMeer is much lower than I would have expected. Walton did very well (8th) for what was an experimental novel; that means The Just City might have a shot this year. Sanderson in 7th place has been moving steadily up in these Hugo nomination stats; he managed only 4.2% for Steelheart last year. Will his return to his very popular Mistborn universe be enough? I’m still going to predict him just outside the Top 5, but it looks like it’s just a matter of time. Okorafor is actually eligible again in 2016 (Lagoon only just came out in the United States). There’s also a lot of experimental-ish fantasy on the list (Addison, Bennett, Hurley); that might speak well of Novik’s or Bear’s chances in 2016.
Well, that brings to an end my 2015 Hugo analysis! It’s been quite a year. I’m going to spend the next several weeks doing Review Round-Ups of the big contenders for 2016, and I should have my first 2016 Hugo/Nebula predictions in early October.
Time to dig into the nomination stats. Since Chaos Horizon is a website dedicated to award predictions, this is data we really need—2015 is going to be our best model for 2016, after all.
Let’s tackle this in a methodical and organized fashion. The 2015 nominating stats are included as part of the 2015 Hugo packet, easily available at the Hugo website or right here: 2015HugoStatistics. The first thing we can do is go back to the Sad Puppy and Rabid Puppy slates and see how many votes each of those texts got. I’ve divided this into three lists: joint Sad/Rabid selections, Sad selections, and Rabid selections.
Joint Sad and Rabid Picks, Number of Nominations in 2015 Hugos:
263 The Dark Between the Stars
387 Skin Game
372 Monster Hunter Nemesis
270 Lines of Departure
338 One Bright Star to Guide Them
338 Big Boys Don’t Cry
259 The Journeyman
248 The Triple Sun
267 Championship B’tok
266 Ashes to Ashes
230 Goodnight Stars
184 On a Spiritual Plain
206 Letters from Gardner
273 Transhuman and Subhuman
254 The Hot Equations
236 Wisdom from my Internet
265 Why Science is Never Settled
Best Graphic Story
201 Reduce Reuse Reanimate
314 Lego Movie
769 Guardians of the Galaxy
170 The Maze Runner
169 Grimm “Once We Were Gods”
170 The Flash “Pilot”
368 Toni Weisskopf
276 Jim Minz
238 Anne Sowards
292 Sheila Gilbert
236 Jennifer Brozek
217 Bryan Thomas Schmidt
279 Mike Resnick
228 Edmund Schubert
173 Carter Reid
160 Jon Eno
188 Alan Pollack
181 Nick Greenwood
229 InterGalactic Medicine Show
208 Elitist Book Reviews
187 Revenge of Hump Day
179 Sci Phi Show
158 Dungeon Crawlers Radio
169 Adventures in SF Publishing
150 Matthew Surridge
156 Jeffro Johnson
175 Amanda Green
201 Cedar Sanderson
229 Jason Cordova
224 Kary English
219 Eric S. Raymond
If we toss out Best Dramatic Presentation, Long Form as an outlier (the stat numbers are way high, indicating that far more than just the Rabid + Sad Puppies voted for Guardians of the Galaxy, as anyone would predict), we wind up with the following range:
387-150. That takes us from the most popular pick to the least popular choice (Skin Game by Jim Butcher in Novel, down to Matthew Surridge in Fan Writer). That’s the “effective joint Sad/Rabid Puppy vote,” or how many votes the Puppies delivered to the 2015 Hugo nomination process. That wide range reflects two things: the lack of popularity of categories like Fan Writer, and a lack of slate discipline (not every Puppy voter voted for all the works on the slate). To illustrate how some people didn’t follow the slate, look at Best Novel:
387 Skin Game
372 Monster Hunter Nemesis
270 Lines of Departure
263 The Dark Between the Stars
All four are joint Rabid/Sad picks, but Skin Game and Monster Hunter Nemesis grabbed over 100 more votes than the Kloos or the Anderson. That means at least 25% of these voters were picking and choosing from the slate, not voting it straight down the line.
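A quick back-of-envelope version of that check, using the four joint-pick totals listed above (my “at least 25%” figure above is the conservative end of this calculation):

```python
# Slate-discipline check for Best Novel: what fraction of voters who
# backed the top joint pick skipped the bottom joint pick?
joint_novel = {
    "Skin Game": 387,
    "Monster Hunter Nemesis": 372,
    "Lines of Departure": 270,
    "The Dark Between the Stars": 263,
}

top, bottom = max(joint_novel.values()), min(joint_novel.values())
defection = (top - bottom) / top
print(f"{defection:.0%}")  # 32%
```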
A couple of numbers to parse: how do we know Skin Game (or any other nominee) didn’t pick up some non-Puppy voters? We don’t know that for sure, but we can look at the 2009 Hugo nominating stats for reference. That’s the last year where they released the complete list of everyone who got a vote. Small Favor, Butcher’s Dresden Files #10, only got 6 votes that year. Now, this year’s pool is bigger, and maybe people liked Skin Game more, but that looks like a relatively trivial number to me. Your mileage may vary.
On the flip side, how do we know that every Puppy voter voted for Skin Game? Again, we don’t know for sure—there could have been 500 Sad Puppies, and only 80% of them voted for Butcher. In this case, I don’t think it matters. We’re looking at “effective” strength: this is how many votes the Puppies actually delivered in the categories, not a potential estimate of their max number. The actual number of votes is what is useful in my predictions.
Conclusion: So, Chaos Horizon is concluding that the effective Sad/Rabid combined block vote was 387-150, with sharp decay by both the popularity of the chosen work and the popularity of the category. I think that number can explain some of the vitriol in the field: of the 387 people who voted for Skin Game, at least 200 of them didn’t vote all the way to the bottom of the slate. More people only voted part of the slate than voted the whole thing—thus opening up the door for all kinds of online arguments as to exactly how “slate”-like this whole thing was. Expect those to continue as we move into Sad Puppies 4.
On to the Sad Puppy picks. When all was said and done, the Sad Puppies only had a few picks that were not mirrored by the Rabid Puppies (8, in fact), so we’ll learn far less here.
Sad Puppy Picks, Number of Nominations in 2015 Hugos:
199 Trial by Fire
132 A Single Samurai
185 Tuesdays With Molakesh the Destroyer
41 Adventure Time “The Prince Who Wanted Everything”
Didn’t make top 15 Regular Show “Saving Time”
111 Abyss & Apex
100 Andromeda Spaceways In-Flight Magazine
132 Dave Freer
If we toss out the “Dramatic Short” category as an obvious outlier (the Sad Puppy voters didn’t seem to like picking cartoon shows in that category, as “Regular Show” didn’t even make the top 15), we wind up with this as a range:
199-100. I think the Trial by Fire number (at 199) is a little inflated; Gannon did grab a Nebula nom for this series in both 2014 and 2015, and I expect he picked up a fair amount of votes outside the Puppy blocks. That 185 number for “Molakesh” might be the more solid estimate of the max Sad Puppy core; that story is from Fireside Fiction, a rather obscure venue. Neither Andromeda Spaceways nor Abyss and Apex placed in the Top 15 in the 2014 Hugos, and the cut off there was a mere 10 votes, so I think we can attribute the lion’s share of those votes to the Sad Puppies.
Conclusion: We only have 8 data points here, but we’ve got a 199-100 range, with the top end only happening in popular categories (Novel, Short Story). That’s a 50% difference from the highest voted to the lowest voted, perhaps suggesting that only 50% of the Sad Puppy voters voted straight down the slate. You could get that number even lower, though, if you counted the television shows that not even the Sad Puppies voted for.
Rabid Puppy Picks, Number of Nominations in 2015 Hugos:
196 The Chaplain’s War
172 The Plural of Helen of Troy
145 Pale Realms of Shade
165 Yes, Virginia, There is a Santa Claus
151 The Parliament of Beasts and Birds
141 Game of Thrones “The Mountain and the Viper”
86 Supernatural “Dog Dean Afternoon”
166 Vox Day
162 Vox Day
118 Kirk DouPonce
119 Black Gate
66 Daniel Enness
143 Rolf Nelson
A couple interesting outliers here. The Dramatic Television category seems strange; you’d have to imagine that more people voted for Game of Thrones than just the Rabid Puppies, and Supernatural only picked up a scant 86 votes. Even the Rabid Puppies didn’t follow VD’s instructions in Fan Writer, only voting 66 times for Daniel Enness. I think the most sensible explanation is that Rabid Puppy voters didn’t follow the recommended picks in these categories. If you get rid of those 3 outliers, you end up with a very tight grouping of:
196-100. The Torgersen is probably inflated from Sad Puppies; even though he didn’t include himself on his own list, I can imagine some Sad Puppies coming over to vote for him. He’d also had a prior Hugo nomination outside the Puppy process. The tightly grouped Vox Day numbers (166 and 162) might be an equally sensible top number for the Rabid Puppies group. We’re only talking about a 20-30 vote difference, though, and we’d be splitting hairs. I’m a stat site, though, so if you want to split hairs, go ahead!
Conclusion: 196-100 seems safe, and not even the Rabid Puppies had perfect slate discipline. This surprised me, although I could probably be persuaded there was a core group of 166 (Vox Day’s editor nom) to 119 (the Fanzine/Professional Artist number) of Rabid Puppies that did stick pretty closely.
So that leaves us:
Nomination Estimates, Sad, Rabid, and Joint Puppy Picks (percentage calculated using 1595 total nominating ballots):
Joint: 387-150; 24.2% – 9.5%
Sad Puppy: 199-100; 12.5% – 6.3%
Rabid Puppy: 196-100; 12.2% – 7.4%
Let’s double-check the math. If we add the Rabid and Sad picks together, we wind up with 395-200. The joint range is 387-150. Obviously, that top number looks great; those 8 extra votes would seem to fall within the margin of other votes Skin Game is likely to have picked up. 200 and 150 are quite a bit farther apart, but this might reflect the limited data set we have for Sad Puppy picks alone (8 data points) and Rabid Puppy picks alone (15) compared to joint Sad/Rabid picks (52). Some of the joint picks may have been unappealing to both the Sad and Rabid voters, as well as being in categories with low voter turnout (Fan Writer, Fancast, etc.). Take a look at this chart, showing how quickly the various voting groups decayed (excluding Best Drama, for the reasons stated above):
The chart just lines up the most popular pick to the least popular pick to take a look at the decay curve. Rabid and Joint alike fell off very quickly and then evened out. I think that reflects how much more popular the Best Novel category is than the rest of the Hugos. In 2015, it pulled in almost twice as many votes as the other fiction categories. Sad Puppies fell quickly the whole way down, but I don’t know if that reflects a greater variance amongst Sad Puppy voters or just a lack of data.
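The range arithmetic behind the double-check can be written out explicitly; all of these figures come straight from the estimates above.

```python
# Sum the Sad-only and Rabid-only ranges and compare against the joint
# Sad/Rabid range, as described in the double-check above.
sad = (199, 100)     # Sad Puppy picks: (high, low)
rabid = (196, 100)   # Rabid Puppy picks: (high, low)
joint = (387, 150)   # Joint picks: (high, low)

summed = (sad[0] + rabid[0], sad[1] + rabid[1])
print(summed)               # (395, 200)
print(summed[0] - joint[0]) # 8 votes of slack at the top
print(summed[1] - joint[1]) # 50 votes of slack at the bottom
```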
What does this all mean? That’s the big question. What it means for Chaos Horizon is that I can use these ranges and totals as I put together my 2016 prediction. The max number of nominating ballots was in Best Novel, where 1595 were cast; 5950 voted in the Hugo finals, an increase by a factor of almost 3.75. According to my previous analysis, here are my final Puppy estimates:
Core Rabid Puppies: 550-525 (9.2% – 8.9%, using 5950 total votes for percentage)
Core Sad Puppies: 500-400 (8.4% – 6.7%, using 5950 total votes for percentage)
There are also some Puppy inclined neutrals; I’m not including them, because I don’t know if they’ll follow the Puppies into the nomination stage.
Those percentages are a little down from the nominating ballot, but not aggressively so. That’s what you would expect: the Puppies had the advantage of surprise in the nomination stage, while the push-back against them came in the final balloting. Much of the growth in the final ballot was from people wanting to vote specifically against the slates.
Boil this all down, and we now have a set of numbers to use in future predictions. In my next nominating analysis, I’ll be looking at how big the sweeps were for each category. With that data in place, I can then predict whether or not there will be sweeps (or in which categories) in 2016.
My posts have been slow these last weeks—my university is starting its Fall semester, and I’ve been getting my classes up and running. I’m teaching Kurt Vonnegut’s Slaughterhouse-Five, Nathaniel Hawthorne’s The Scarlet Letter, and Mark Twain’s Adventures of Tom Sawyer across my three classes this week. Strange combination. Toni Morrison, Edith Wharton, and Benjamin Franklin next week!
Today, I want to look at category participation: how many people voted in each of the Hugo categories. Historically, there has been a very sharp drop off from the Best Novel category (which has around 85% participation) to the less popular categories like Fan Writer, Fanzine, and the Editor Categories (usually around the 40%-45% range). The stat we’re looking at today is what percentage of people who turned in a ballot voted for that particular category.
There are a lot of categories in the Hugos, and it’s unlikely that every fan engages in every category with the same intensity. The stats show that, but the 2015 controversy changed some of those patterns in interesting ways.
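The participation stat itself is just a ratio, and can be sketched in a couple of lines. The 5,950 total-ballot figure comes from the 2015 final-ballot stats used in this post; the 5,653 vote count is a hypothetical number I’ve chosen to illustrate a ~95% participation rate, not a figure from the Hugo packet.

```python
# Category participation = votes cast in that category / total ballots,
# expressed as a percentage.

def participation(category_votes, total_ballots):
    return {cat: round(100 * v / total_ballots, 1)
            for cat, v in category_votes.items()}

print(participation({"Best Novel": 5653}, 5950))  # {'Best Novel': 95.0}
```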
Let’s take a look at a big table of data from 2011-2015. I pulled the numbers directly from the last five years of Hugo packets, and what my table shows is the number of total ballots and the number of votes in each category. Divide those by each other, and you get the percentage participation in each category. Notice how skewed 2015 is compared to the other numbers; we had a total change in voting patterns this year. Click the table to make it larger:
Table 1: Participation in Final Ballot Hugo Categories, 2011-2015
A lot of numbers, I know. Let’s look at that visually:
That’s a very revealing chart. Ignore the top turquoise line for the moment; that’s 2015. The other four lines represent 2011-2014, and they’re pretty consistent with each other. Participation across categories declines until the Dramatic Presentations, then it declines again, then spikes at Best Professional Artist (who knew), before declining again. Best Fancast began in 2012, messing up the end of the 2011 line in the chart.
Historically, the ballot plunges from 85%-90% for Novel down to 40%. Even the major fiction categories (Novella, Novelette, and Short Story) manage only about 75% participation. Those declines are relatively consistent year to year, with some variation depending on how appealing the category is in any given cycle.
Now 2015: that line is totally inconsistent with the previous 4 years. Previously ignored categories like Editor grabbed an increase of 30 points—there’s your visual representation of how the Puppy kerfuffle drove votes. Thousands of voters voted in categories they would have previously ignored. I imagine this increase is due to both sides of the controversy, as various voters are trying to make their point. Still, 80% participation in a category like Editor, Short or Long Form is highly unusual for the Hugos. Even Best Novel had a staggering 95% participation rate, up from a prior 4-year average of 87.4%.
Not every category benefitted in the same way. Let’s see if we can’t chart that increase:
The last three columns are key. I averaged the 2011-2014 stats and then looked to see how much they increased in 2015. If you take that absolute value (i.e., 80% to 90% is a 10 point increase), you can then calculate the percentage increase (divide it by the average value). That shows us which categories had the biggest relative boosts. Best Novel only increased slightly. Categories like Editor, Short, Editor, Long, and Fan Writer had huge relative boosts. The categories with little controversy, such as Fan Artist, didn’t enjoy the boosts other categories saw. A visual glance:
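Here’s that absolute-vs-relative boost calculation sketched out, using the Best Novel figures quoted above (a 87.4% four-year average against 95% in 2015):

```python
# Absolute boost (in percentage points) and relative boost (% increase
# over the 2011-2014 average), as described above.

def boosts(avg_2011_2014, value_2015):
    absolute = value_2015 - avg_2011_2014
    relative = 100 * absolute / avg_2011_2014
    return round(absolute, 1), round(relative, 1)

print(boosts(87.4, 95.0))  # (7.6, 8.7)
```

So Best Novel gained 7.6 points but only an 8.7% relative boost, while a category that jumped from 45% to 80% would show a relative boost of nearly 78%.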
So, what did we learn from all this? That there was a Hugo controversy in 2015, and that it drove huge increases in participation to categories that had previously been ignored. I think we knew that already, but it’s always good to have the data.
Here’s my Excel file with the numbers and the charts: Participation Study. All data on Chaos Horizon is open, and feel free to use it in any way you wish. Please provide a link back to this post if you do. Otherwise, you might want to check out my similar “Hugo Nomination Participation” study, which looks at the same data but in the nominating stage. It’s linked under the “Reports” tab.
Up next: the 2015 nominating numbers!