Updated to reflect change of lead in Montmagny--L'Islet--Kamouraska--Rivière-du-Loup
The election is over, and we have a Conservative majority. In this post, I will discuss where the projection went wrong, and why. A major reason for most projectors' failure to foresee a Conservative majority is that outside Québec, the polls underestimated Tory support by at least 5% in every single region. This performance is frankly embarrassing. My vote share adjustment allowed me to be closer to the mark by bumping the Tory vote estimate by 1-2%, but that wasn't nearly enough.
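For the curious, here is a minimal sketch of what a flat vote share adjustment of this kind looks like when applied to a poll average. The poll numbers and the 1.5-point shift below are illustrative only, not the model's actual parameters.

```python
# A minimal sketch (not the model's actual code) of a flat vote share
# adjustment: shift a fixed number of points to one party, taken
# proportionally from the others.

def adjust_poll_average(poll_avg, boost_party, source_parties, points):
    """Return a copy of poll_avg with `points` added to boost_party and
    removed proportionally from source_parties."""
    adjusted = dict(poll_avg)
    total_source = sum(adjusted[p] for p in source_parties)
    for p in source_parties:
        adjusted[p] -= points * adjusted[p] / total_source
    adjusted[boost_party] += points
    return adjusted

# Hypothetical poll average (percentages); the 1.5-point shift mirrors the
# 1-2 point Tory bump described above, not the model's exact figure.
polls = {"CON": 36.0, "NDP": 31.0, "LIB": 21.0, "GRN": 5.0}
print(adjust_poll_average(polls, "CON", ["NDP", "LIB", "GRN"], 1.5))
```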
In a subsequent post, I will compare my final projection to those of other models using poll averages, as well as to predictions made using other methods. It turns out that although projections based on polling averages didn't fare well, they did better than those based on other methodologies. Moreover, among projections based on polling averages, Canadian Election Watch performed very well.
Atlantic Canada (seats: Projected|Actual ; vote share: Projected|Actual)
CON - 12|14 ; 32%|38%
LIB - 12|12 ; 28%|29%
NDP - 8|6 ; 34%|30%
Here, we can see that the polls overestimated the NDP vote share, and underestimated the Conservative one. My vote share adjustment from the NDP to the Tories was not enough to compensate for the bias. Had the actual vote shares been known, the model would have been exact. (There would still have been ridings called incorrectly, but overall numbers would have been correct.) As I had predicted, a 14+ seat performance in Atlantic Canada would point to a Conservative majority.
Québec
NDP - 44|59 ; 40%|43%
LIB - 7|7 ; 15%|14%
CON - 8|5 ; 17%|17%
BQ - 15|4 ; 25%|23%
IND - 1|0
The polls in Québec were mostly on the mark. They were a little low for the NDP and the Conservatives, and a bit high for the Bloc. My vote share adjustment appropriately compensated for the bias against the Tories and reduced the Bloc problem, but exacerbated the NDP inaccuracy. Unlike in Atlantic Canada, the projected result would have differed from the actual one even if the actual vote shares had been known: the Bloc would still have been given around 10 seats.
Would simply applying a uniform swing, without taking riding polls and potential swing variability into account, have performed better? Well, yes and no: as I said previously, uniform swing on polling numbers would have given the Bloc just 5 seats, which would have been very prescient. However, uniform swing on the actual results would have left the Bloc with just one seat, a significant underestimate. Thus, taking other factors into account and increasing the Bloc's predicted count was correct - however, it should not have been done to the extent that everyone, including myself, did.
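For reference, uniform swing itself is simple to compute: add the same region-wide change in each party's support to its previous result in every riding, and award each riding to the new leader. A rough sketch, with made-up riding numbers and swings:

```python
# Uniform swing sketch: add the same region-wide change in each party's
# support to its previous result in every riding, then award each riding
# to the new leader. Riding figures and swings below are invented.

def uniform_swing(previous_results, swing):
    """previous_results: {riding: {party: % at the last election}}
    swing: {party: region-wide change in %}. Returns projected winners."""
    winners = {}
    for riding, shares in previous_results.items():
        swung = {party: pct + swing.get(party, 0.0) for party, pct in shares.items()}
        winners[riding] = max(swung, key=swung.get)
    return winners

ridings = {
    "Riding A": {"BQ": 45.0, "NDP": 15.0, "LIB": 25.0, "CON": 15.0},
    "Riding B": {"BQ": 38.0, "LIB": 30.0, "NDP": 14.0, "CON": 18.0},
}
print(uniform_swing(ridings, {"BQ": -15.0, "NDP": 30.0, "LIB": -5.0, "CON": 0.0}))
```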
If you've been reading this blog, you know that I've been fighting against two serious misconceptions about the Bloc vote, namely that it turns out well and that it is efficiently distributed. In fact, Bloc voters, as usual, turned out less than polls suggested, and the Bloc vote was shockingly inefficiently distributed - even more so than I had thought. Indeed, most of the NDP victories were not as tight as the model suggested: the Bloc didn't even come close except in a handful of cases. Many political commentators outside Québec either don't know basic electoral facts about Québec (sovereigntists almost always underperform their polls) or can't process numbers correctly (how could the NDP win only 5-7 seats with 41% in the polls?).
Ontario
CON - 62|73 ; 40%|44%
NDP - 22|22 ; 27%|26%
LIB - 22|11 ; 26%|25%
As in Atlantic Canada, the polls failed in Ontario. While the former can be attributed to small sample sizes, the latter is somewhat embarrassing for the polling industry. The actual result was well outside the margin of error of the last polls from Forum, EKOS and Léger, and right on the upper edge for Angus Reid. Other pollsters were probably saved by their small sample sizes, which implied wider confidence intervals: no pollster came within 3% of the Conservative tally in Ontario. My vote share adjustment mitigated the problem only slightly.
On actual results, the model with the GTA adjustment would have done quite well: 72-20-14. It would have been farther off without the GTA adjustment, which, on actual results, shifted 3 net seats from the Liberals to the Conservatives. This adjustment was heavily emphasized on this blog, while it was barely mentioned elsewhere.
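To see how a small sub-regional boost can move a handful of seats, here is a rough sketch; the riding names, vote shares and 3-point boost are placeholders, not the GTA adjustment's actual parameters.

```python
# Sketch of how a small sub-regional boost can move a handful of seats:
# add the boost only in the designated ridings, then re-count the winners.
# Riding names, vote shares, and the 3-point boost are placeholders, not
# the GTA adjustment's actual parameters.

def winners(projection):
    return {riding: max(shares, key=shares.get) for riding, shares in projection.items()}

def with_boost(projection, target_ridings, boosted_party, docked_party, points):
    adjusted = {riding: dict(shares) for riding, shares in projection.items()}
    for riding in target_ridings:
        if riding in adjusted:
            adjusted[riding][boosted_party] += points
            adjusted[riding][docked_party] -= points
    return adjusted

projection = {
    "GTA riding 1": {"LIB": 39.0, "CON": 38.0, "NDP": 20.0},
    "GTA riding 2": {"LIB": 40.0, "CON": 39.0, "NDP": 18.0},
    "Other riding": {"CON": 45.0, "NDP": 30.0, "LIB": 22.0},
}
gta = {"GTA riding 1", "GTA riding 2"}
print(winners(projection))                                      # without the boost: LIB holds both GTA seats
print(winners(with_boost(projection, gta, "CON", "LIB", 3.0)))  # with it: both flip to CON
```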
Manitoba/Saskatchewan
CON - 21|24 ; 50%|55%
NDP - 5|2 ; 29%|29%
LIB - 2|2 ; 15%|13%
Once again, the pollsters severely underestimated the Tory vote, and my pro-Conservative vote share adjustment was not nearly big enough. On actual popular vote, the projected count would have been closer, at 23-4-1.
Alberta
CON - 27|27 ; 63%|67%
NDP - 1|1 ; 18%|17%
The projection was, unsurprisingly, correct in Alberta, though the pollsters underestimated the Conservative vote here as well.
British Columbia
CON - 21|21 ; 41%|46%
NDP - 13|12 ; 33%|33%
LIB - 2|2 ; 16%|13%
GRN - 0|1 ; 8%|8%
Once again, the Tories did much better than predicted. Here, however, the polling inaccuracy did not significantly impact my projection. On the actual provincial split, the model would have given 22-12-2, though I might have made it 23-11-2 via a risk adjustment.
Overall
CON - 152|166 ; 37.3%|39.6%
NDP - 94|103 ; 30.5%|30.6%
LIB - 46|34 ; 20.0%|18.9%
BQ - 15|4 ; 6.3%|6.0%
GRN - 0|1 ; 4.9%|3.9%
IND - 1|0
Overall, the polls underestimated the Conservative vote by 3.7% - and, I repeat, the miss was over 5% everywhere outside Québec. It would have been even larger had the polls accounted for the fact that turnout is significantly lower in Alberta than elsewhere. My vote share adjustment shrank the gap to 2.3%.
On actual vote splits, I would have projected 166-167 Conservative seats - bang on! The Liberal count would have been 36, which is very close as well. The Bloc would still have been overestimated (though by less), and the NDP would still have been underestimated: although I kept repeating that the NDP vote is efficient in Québec, even I did not grasp the full extent of it. Outside Québec, the projected count would have come within 3 seats of the actual count for all parties. Based on these observations, I believe that the model was, in fact, very solid.
As for the riding-by-riding calls, I was correct in 259 of 308 ridings, or 84.1%. (Etobicoke Centre and Westmount--Ville-Marie flipped late yesterday, both away from the projected winner. I am, however, very pleased at the change in my childhood riding. Update: Montmagny--L'Islet--Kamouraska--Rivière-du-Loup also flipped away from the projection.) This is worse than such a model usually performs - you would have expected about 280 correct calls - but that was obviously caused by the inaccurate polls and the great uncertainty in Québec.
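For completeness, the accuracy figure is simply the share of ridings where the projected winner matched the actual one - a trivial sketch with made-up ridings:

```python
# Sketch of the riding-call accuracy figure: the share of ridings where the
# projected winner matched the actual winner. Ridings below are made up.
projected = {"Riding A": "CON", "Riding B": "NDP", "Riding C": "LIB", "Riding D": "BQ"}
actual    = {"Riding A": "CON", "Riding B": "NDP", "Riding C": "CON", "Riding D": "NDP"}

correct = sum(projected[riding] == actual[riding] for riding in projected)
print(f"{correct}/{len(projected)} = {correct / len(projected):.1%}")
# The real tally was 259/308 = 84.1%, versus the roughly 280 correct calls
# (about 91%) one would normally expect from a model of this kind.
```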
7 comments:
The biggest story of all to me in this election is how the media and pollsters attempted to influence the outcome in favour of a Conservative minority government.
The reason your (and others') projections were off is that you had to rely on the poll results, which were fundamentally flawed. I can understand one or even two polls missing the actual result by more than the margin of error, but having several miss outside the MOE, and all but one underestimate the Conservative vote? That's more than a simple honest mistake. It smacks of a deliberate methodological bias in favour of a preferred result (namely, underestimating the Conservative vote in the hopes of discouraging Conservatives from voting). The seat projections were all off as well, with only one giving the Conservatives a majority (155).
And the really suspicious thing is that this seems to be an exact repeat of the 2008 election. The average seat projection of the prognosticators for that federal election? 129. Actual: 143. This election, the prognosticators were off by almost exactly the same percentage.
To have, say, even 5 final polls off by more than their MOE is an astronomical improbability (actually about 3 million to one). To have all but one of them off in a particular direction is more than just coincidence.
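(A back-of-envelope version of that calculation, assuming each MOE corresponds to a 95% confidence interval and the polls are independent:)

```python
# Back-of-envelope check of the "3 million to one" figure: if each poll's
# MOE is a 95% confidence interval and the polls are independent, the
# chance that five final polls all miss outside their MOE is 0.05**5.
p_miss = 0.05
print(round(1 / p_miss**5))  # 3200000, i.e. roughly one in three million
```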
The media should take their pollsters to task for misleading the public. And the viewers should take the media to task for using the pollsters.
I hope that all the website political prognosticators remember this the next time they get involved in making predictions in a Canadian federal election.
Hi Glenn, thanks for your input. I agree with you that there was a large methodological bias in the polls. However, I don't believe that this was done purposefully: if pollsters had known that their numbers were wrong, each of them would have had an overwhelming incentive to make the correct call. The resulting boost in reputation would have been worth millions - more than whatever benefit they could otherwise derive.
Also, regardless of whether this was done on purpose, I believe that it influenced the outcome in favour of a Conservative majority. If polls had instead suggested that a majority was on its way, (more) left-of-centre voters, and maybe even candidates, might have tried to coordinate in order to avoid such an outcome. In other words, it's unclear whether having one party too low in the polls helps it or hurts it.
The problem with incorporating a large adjustment for poll bias next time is that pollsters will likely change their methodology in light of these results, so there will be no way of knowing whether such an adjustment is still appropriate. Remember that the Liberals had a bump in 2006 - everybody thought they'd lose by more - so one might have been tempted to adjust their numbers upward for 2008. That, of course, would have worsened any projection.
What I do think is that the few websites that provide systematic probability estimates (Calgary Grit, The Mace) should change how they simulate results, as their current ranges are far too tight.
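To make the point concrete, here is a toy sketch - not any of those sites' actual simulator - showing how the assumed size of the polling error drives the width of a simulated seat range; the seats-per-point conversion and baseline are invented numbers.

```python
# Toy sketch (not any site's actual simulator): the width of a simulated
# seat range is driven largely by the assumed size of the polling error.
# The seats-per-point conversion and baseline below are invented numbers.
import random

def seat_interval(poll_lead, error_sd, seats_per_point=8.0, base_seats=140, runs=10_000):
    counts = sorted(base_seats + seats_per_point * (poll_lead + random.gauss(0, error_sd))
                    for _ in range(runs))
    return round(counts[int(0.05 * runs)]), round(counts[int(0.95 * runs)])  # 90% interval

random.seed(2011)
print(seat_interval(poll_lead=7.0, error_sd=1.0))  # tight error assumption -> narrow seat range
print(seat_interval(poll_lead=7.0, error_sd=3.0))  # wider error assumption -> much wider range
```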
Sean, thanx for your seat projections during the campaign. We've added all 14 2011 model results to the 2004/2006/2008 Scoreboard @ http://www.trendlines.ca/free/elections/Canada/electcanada.htm
A huge thank you for all the work you've done on this site. Once I discovered it I checked it for updates several times a day.
I normally feel pretty confident from reading the polls as to what the outcome of an election will be. This one left me scratching my head, thinking that anything from a CPC majority to an NDP minority was possible. The polls, with the possible exception of Nanos' Sunday poll, failed to capture the changes that were taking place. Your models are only as good as the information you feed into them.
You can be sure I will continue to check your site as long as you continue to post. Have you given any thought to doing the upcoming provincial elections?
Thanks again,
Earl
Thanks a lot for your kind words, Earl! I'm glad that you found it to be a useful resource for making sense of the data out there.
Unfortunately, I will be starting a new job in a new city in the fall, so I might be too busy to cover the provincial elections. Plus, I don't have any ties to the 5 provinces scheduled to go to the polls. However, I will be following those campaigns from afar, and may post about them occasionally.
I too want to thank you for your work on the 2011 election. Of all the sites, yours was the most accurate and the easiest to navigate (in my opinion). You also had the most updates on other websites, and you kept them current.
Your projections were bang on for the most part. Too bad the polls weren't as reliable.
Thanks for the compliments, Glenn! I'm glad you appreciated the various facets of this site. When I started this blog, it was mainly a resource for myself, so I spent quite a bit of effort on making it easy to use and visually appealing.