This is the second post of a two-part series on understanding projections. The first post, which explains why projections currently disagree on which party is leading, is available here.
The chances that any given projection model will hit the exact seat count are close to zero. So if a projection tells you that party A leads party B by 20 seats, is it a close race? Or is party B basically out of the running?
This post provides a back-of-the-envelope argument for determining the uncertainty of the national seat count of the three main parties for the 2015 election. The main conclusions are:
- For this election, the standard error on the main parties' seat counts is roughly 15-20 seats. This implies that the 95% confidence interval should extend 30-40 seats in either direction, and that the standard error on the difference between two parties' seat counts is about 30 seats. Thus, a 20-seat lead amounts to about 2/3 of a standard deviation, which translates into a roughly 75% chance of winning (if the 3rd party is not in contention), assuming a normal distribution; a quick check of this arithmetic appears in the code sketch after this list. The approximate probabilities that I have been providing in my posts are based on this rough calculation. (Currently, I think that the Liberals have a 15% chance of winning if you take my adjusted projection as your best guess, and a 40% chance if you take my unadjusted projection as your best guess.)
- Of the models providing seat ranges and probabilities, The Globe's Election Forecast gives the most reasonable estimates. Too Close to Call, The Signal and Le calcul électoral also run simulations to account for uncertainty, but their simulations appear to be miscalibrated and most likely understate the extent of the uncertainty. (I should mention that Too Close to Call's riding-level uncertainties are credible and highly recommended.) ThreeHundredEight's ranges are not based on simulations, and are difficult to interpret in terms of probabilities. (Links to all of these websites are available on the left.)
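A minimal sketch of the arithmetic behind the first bullet. The 18- and 30-seat standard errors are the rough estimates from this post (not outputs of any model), and the 20-seat lead is hypothetical:

```python
import math

def normal_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

se_party = 18   # rough standard error on one party's seat count (15-20 range)
se_diff = 30    # rough standard error on the seat gap between two parties
lead = 20       # hypothetical seat lead of party A over party B

print(f"95% interval on a party's seats: +/- {2 * se_party} seats")  # 30-40 for an SE of 15-20
z = lead / se_diff                                   # lead in standard deviations: ~0.67
print(f"P(the leader wins): {normal_cdf(z):.2f}")    # ~0.75, assuming a normal error
```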
Where does the "15-20 seats" come from?
As we know, polls are not exact. Part of the issue is sample size, which determines a poll's reported "margin of error." But a greater issue is turnout: pollsters in Canada do not have a good handle on who is actually going to show up. (There's also movement in voting intentions while a poll is being conducted.) Therefore, even very large polls, and aggregates of them, come with a significant degree of uncertainty.
How much is an average of polls likely to be wrong? Helpfully, ThreeHundredEight.com's methodology page tells us that in recent Canadian elections, its poll average was off by an average of 2.15 points per party. How does this relate to the standard deviation for main parties in a federal election?
- For a normal distribution, the average deviation is roughly 80% of the standard deviation. So this would bump up the estimate of the standard error to 2.7 points.
- Standard errors for parties closer to 50% are bigger than for smaller parties. So the 2.7 points estimate understates the standard error for large parties.
- There are more polls in a federal election, which reduces the sample-size related error.
I would venture that the latter two effects roughly cancel each other out, so the standard deviation on the support of the Liberals, Conservatives and NDP is probably, say, 2.5 to 3 points.
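As a check on the 80% figure in the first bullet: for a normal variable, the mean absolute deviation is sqrt(2/pi), about 0.798, times the standard deviation. A quick sketch using ThreeHundredEight's 2.15-point figure:

```python
import math

avg_error = 2.15                     # ThreeHundredEight's average poll-average error per party
ratio = math.sqrt(2.0 / math.pi)     # mean absolute deviation / standard deviation ~ 0.798
print(f"Implied standard error: {avg_error / ratio:.2f} points")  # ~2.69, i.e. the 2.7 above
```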
If the parties' support levels were independent, to get the standard deviation on the difference between two parties' support, one would multiply the above numbers by sqrt(2). But independence is clearly violated: if a party outperforms polls, other parties are likely to underperform them. Therefore, the true standard deviation on the difference between two parties is a bit bigger: roughly 4 to 5 points, if you believe the assumptions made so far.
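In formula terms, sd(A - B) = sqrt(sd_A^2 + sd_B^2 - 2*rho*sd_A*sd_B), so a negative correlation rho inflates the difference. A sketch; the correlation value here is my illustrative assumption, not something estimated from data:

```python
import math

def sd_of_difference(sd_a, sd_b, rho):
    """Standard deviation of A - B when the two errors have correlation rho."""
    return math.sqrt(sd_a**2 + sd_b**2 - 2.0 * rho * sd_a * sd_b)

sd = 2.75  # midpoint of the 2.5-3 point range above
print(f"Independent (rho = 0): {sd_of_difference(sd, sd, 0.0):.1f} points")   # sqrt(2)*2.75 ~ 3.9
print(f"Negative correlation:  {sd_of_difference(sd, sd, -0.4):.1f} points")  # ~4.6, in the 4-5 range
```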
How many seats is that? At most times during this campaign, there have been 50-55 races decided by under 4 points, and 65-70 races decided by under 5 points (though these figures are currently a bit higher). Roughly speaking, each of the main parties is winning 30% of them, and losing 30% of them. Thus, I estimate that a party stands to gain/lose 15-20 seats on a one-standard-deviation error.
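Spelling out that seat arithmetic, with the rough campaign figures quoted above:

```python
close_races_4pt = 52    # races decided by under 4 points (50-55 during the campaign)
close_races_5pt = 67    # races decided by under 5 points (65-70)
winning_share = 0.30    # rough share of those races a given main party is winning (or losing)

# A one-standard-deviation error on the vote gap (4-5 points) can flip roughly
# the close races a party is winning or losing by less than that margin.
print(f"Seats swung by a one-SD error: "
      f"{winning_share * close_races_4pt:.0f} to {winning_share * close_races_5pt:.0f}")  # ~16 to ~20
```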
What about other sources of uncertainty?
There are indeed other sources of uncertainty, e.g. arising from the transposition of regional vote shares into seats. But they are relatively small, and roughly independent from the uncertainty on the national vote share. Therefore, accounting for them would add little to the estimate above.
(Clarification: If the standard error from poll inaccuracy/noise is 18 (variance 324), and the standard error from independent factors is, say, 7 (variance 49), then the total standard error would be roughly 19.3 (variance 324+49=373), not much more than the 18 from polls.)
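The same combination rule, as a one-liner with the numbers from the clarification:

```python
import math

se_polls = 18.0   # standard error from poll inaccuracy/noise
se_other = 7.0    # standard error from roughly independent factors (e.g. seat transposition)

# Independent errors add in variance, not in standard deviation.
print(f"Combined: {math.sqrt(se_polls**2 + se_other**2):.1f} seats")  # ~19.3, barely above 18
```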
There are so many assumptions in the above calculations! Is there an independent way of getting the "15-20 seats" estimate?
I've been doing seat projections using a similar methodology since the 2004 election (before 2011, just for fun, shared with my buddies). Here's how much the final projection was off each time:
- In 2004, the Liberals were underestimated and the Tories were overestimated by around 20 seats.
- In 2006, the Liberals were underestimated and the Tories were overestimated by around 10 seats.
- In 2008, the Liberals were overestimated and the Tories were underestimated by 12-15 seats.
- In 2011, I missed the Liberal, NDP and Tory seat counts by 9 to 14, despite adjustments that improved the accuracy of the model. Indeed, as you can see here, most other models fared worse, sometimes by a lot.
I think it is pretty clear that any claim of a standard deviation under 10 seats, which implies 95% confidence intervals extending less than 20 seats in each direction, is implausibly optimistic. Yet, the confidence intervals provided by Too Close to Call, Le calcul électoral, as well as most of the ones given by The Signal, are this small.
Why are those three sites wrong?
Firstly, I'd like to say that the three sites I mention run simulations to get their ranges, which is in principle much better than the back-of-the-envelope calculations I posted here. The problem is that their simulations appear to be miscalibrated.
- For Too Close to Call, I've had a conversation with Bryan Bréguet, and the problem appears to be that errors in each polling region (Atlantic, Québec, Ontario, MB/SK, Alberta, BC) are treated as fully independent. In reality, a party outperforming the polls in a given region is also more likely to do so in other regions: many factors are common across regions (age of supporters and other socio-economic characteristics, enthusiasm, etc.). Thus, while his simulations yield the right uncertainty in each region, they underestimate the national uncertainty: errors in different regions cancel each other out more than they do in reality. (The toy simulation after this list illustrates the effect.)
- Le calcul électoral does a wonderful job accounting for model uncertainty and statistical uncertainty. (It also deserves props for exemplary transparency in the description of its methodology.) However, it misses turnout uncertainty, which, as explained above, is the dominant problem here.
- Unfortunately, The Signal's methodology is not described in sufficient detail for me to figure out where the problem lies. But its ranges for the national popular vote seem too narrow.
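To illustrate the first point above, here is a toy Monte Carlo: when regional polling errors share a common component, the seat-weighted national error is much larger than when they are fully independent. The regional seat counts are approximate 2015 figures (territories omitted), and the correlation is a made-up illustrative value:

```python
import random
import statistics

random.seed(0)
regions = {"Atlantic": 32, "Quebec": 78, "Ontario": 121,
           "MB/SK": 28, "Alberta": 34, "BC": 42}
total = sum(regions.values())
regional_sd = 3.0  # illustrative per-region polling error, in points

def national_error_sd(rho, n_sims=20000):
    """SD of the seat-weighted national vote error when each region's error
    is split into a shared component (correlation rho) and an idiosyncratic one."""
    draws = []
    for _ in range(n_sims):
        shared = random.gauss(0.0, 1.0)
        natl = 0.0
        for seats in regions.values():
            e = regional_sd * (rho**0.5 * shared + (1.0 - rho)**0.5 * random.gauss(0.0, 1.0))
            natl += e * seats / total
        draws.append(natl)
    return statistics.stdev(draws)

print(f"Fully independent regions: {national_error_sd(0.0):.2f} points")  # ~1.4: errors cancel
print(f"Correlated regions:        {national_error_sd(0.6):.2f} points")  # ~2.5: far less cancelling
```

Both runs have the same per-region uncertainty; only the cross-region correlation differs, and that is exactly the dimension that treating regions as independent misses.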
13 comments:
Thank you for this. It is wonderfully illuminating to have someone objective walk me through all this.
You've mentioned voter turnout as a factor. Is there any way one can detect whether any given pollster is modeling his or her poll on StatsCan age demographics or on Elections Canada voter turnout estimates by age group? My sense is that most don't adjust away from the StatsCan figures to the actual voter turnout history we have, and that leads to certain parties being reported higher, and others lower, than they are likely to achieve on election day.
I'm glad you liked it! This post is a bit more theoretical and mathy than the other one, so I figured it'd be less popular. It's good to know that it does have an audience.
I believe that all pollsters use Census demographics unless otherwise indicated. However, pollsters can still play with lots of things. For phone polls (both IVR and live), one huge issue is cell-phone vs. landline users, as Frank Graves pointed out.
We'll probably see likely voter models being trotted out by some pollsters to go along with their final polls. It'll be interesting to see whether they roughly match my turnout adjustment.
Bryan Bréguet had an interesting article (in French) on his website about the Quebec figures in recent polls. He noted that Conservative numbers were higher in phone polls and lower in online or IVR surveys. The Bloc was the opposite. I suppose that makes common sense: the typical Conservative voter is more apt not to be spending time on a computer talking to an anonymous pollster, but more likely to respond to a real human voice on the telephone. I'm sure sociologists would have a field day working out why, and what that means. For pollsters, I suppose it suggests that a mix of approaches is more likely to get a (more) accurate response.
Actually, IVR = robocall. Maybe left-of-centre voters hang up when they get a robocall because they think it's a repeat of the 2011 fiasco...
Only Nanos does fully live interviews (EKOS is about 2/3 IVR, 1/3 live). Tories are not doing well in Nanos polls...
Thank you! I liked this post even more than the previous one; it was far more informative. When you explained the numbers this time, you actually gave room for your numbers to be on the downside for the Conservatives too, which I had thought you were not doing: a casual observer could look at your blog and say that you seem to be projecting the Conservatives way too high.
While you do want to be on the safe side with the most recent numbers, as you can see in the following report, I thought you were really discounting the effect of the 10-year fatigue setting in among voters.
http://www.ctvnews.ca/politics/election/campaign-shifts-as-harper-looks-to-protect-seats-in-ontario-1.2605931
The explanation Robert Fife is giving is that because the NDP vote is collapsing, the switch is going to the Liberals. A lot of the people I speak with want to end a Harper majority at all costs and are willing to go anywhere else. Many of them voted for Harper last time because they felt he hadn't gone overboard in his first 5 years, but in the last 4 years they felt he was acting very much from the right, so they want to end it; and these are older folks, not the young. It's extremely unfair when leaders stay beyond 2-3 terms: it doesn't give anyone else a chance, and moreover, there's no chance for fresh ideas. When I heard Michelle Rempel say on Everything is Political that she didn't even know what "old stock Canadian" meant, I knew there was a problem, because many in the younger cohort don't see people as old stock, existing stock or "whatever" stock, and Harper and Kenney staying allows staleness to set in. Then again, let's see; but I personally felt he should've given someone else a chance. I really think we need something like the American system, where a PM would be restricted to a maximum of 3 terms (in their case, it's 2 terms). It would at least make the focus during those 2-3 terms be on governing rather than on permanent campaign mode. And our system very much needs renewal, as first past the post is deeply flawed: once in power, the majority never has to work with a minority, and that leads to deeply flawed legislation.
Yes, I just saw that CTV report. Very interesting! Keep in mind:
- We haven't gotten many public polling results from this weekend, so the median point of the projection weight is still stuck at October 7. If things have moved over the weekend, the projection should move quickly over the next few days as new numbers come out.
- In 2011, the Tories apparently knew that they were in majority territory, but told journalists that they weren't. Back then, they didn't want to scare people. This time, perhaps they're saying that the Liberals are winning anyway to discourage NDP supporters from strategically shifting their support. We won't know if they're again playing the media until Election Night...
Word is that even in the unlikely event that he wins a majority, Harper is gone in 2 years. And there are rumours that Jean Charest is mulling a leadership run...
Yes, for sure, will have to wait and see. And Jean Charest would be a welcome change.
Election Watcher, if the Globe's election forecast is the most reasonable, do you expect that your seat projection will move closer to that one over time, considering that your projections tend to be more accurate than EKOS', ThreeHundredEight's, etc.?
The comment is about the estimation of the projection uncertainty by the Globe's forecast, which is consistent with what I outlined in this post. It is not about the actual projection per se, although my unadjusted projection is quite close to the Globe's forecast, and would be even closer if I adopted more aggressive discounting for polls 5-14 days old.
Posted this on the other blog entry, but figured it's more relevant here:
========================================================================
On 308.com's CBC analysis, in addition to the debatable (it appears biased) choice of poll weighting that commenter Jeremy Akerman already mentioned on another of your blog posts, I also note the following:
There is a "simplistically" labelled, low, middle, avg, high distribution for the seat projections. The conservatives' "likely" bubble has always included the low, middle, and avg points, with the "high" point projected as an extreme outlier.
The NDP and Liberals (it's shifted over time) have their "low" points projected as extreme outliers, but their high points are WITHIN (or right beside) the shaded "likely" outcome window. By what justification does he skew these "likely outcome" seat projection windows so that the Conservatives' "high" point is an outlier, while the left-leaning parties only have their "low" points as extreme outliers?
The whole min/low/high/max thing at ThreeHundredEight is incredibly clumsy. They mix up a whole bunch of considerations, and you really have to read the methodology carefully to have any clue what they mean. And once you've done that, you realize that the explanation is so complicated that it's really hard to remember.
There's no bias going on there, just a bunch of really poorly constructed indicators. Simulations, such as those used by the other sites, are definitely the way to go; I'm just too lazy to do them.
How does 308 get their riding projection numbers?
@Habs24cups: Go read its methodology, now that the hockey game is over!