Update Sept. 16: Too Close to Call is now active, Lean Tossup adjusted its calibration for uncertainty, and Calculated Politics has provided me with some methodological information. (
Further update: 338Canada has added more details on its methodology page.) For readability, instead of crossing out the old text as I usually do, I have simply deleted it. New text below is in italics.
Beyond providing my own seat projections, a purpose of this blog is to help readers become better informed consumers of seat projection models, and to propose improvements for other modelers. For example, in 2015, this
post regarding uncertainty pointed out that confidence intervals given by some websites, including fairly well-known ones, were way too narrow. It was widely shared (Andrew Coyne retweeted it), and at least one other modeler (Bryan, who runs Too Close to Call) responded to it favourably. The results of the election vindicated the post - and then some: if anything, the intervals given in the post itself were still a tad narrow.
This post discusses the strengths and weaknesses of models currently active (as well as Too Close to Call, which I believe will be active soon). You can find links to all of these sites in the blog's right-hand column.
Overall Assessment
If you need *one* go-to website, I strongly recommend
338Canada due to its treasure trove of information, accurate depiction of uncertainty, and ease of navigation.
The methodological description is now also much more detailed - apart from some remaining vagueness about how demographic factors are included (which is understandable from an intellectual property perspective), the rest of the model is clearly described.
I would suggest this site,
Canadian Election Watch, as your second source. As explained below, I believe my projections are now based on the most solid polling averages. I also provide a slightly different perspective by making a (prudent) turnout adjustment based on past elections. My methodology is described in detail, so if you see a result you don't like, you can figure out how it came about. Finally, I occasionally provide commentary, such as this post, that goes beyond describing the numbers and their implications, and helps you better understand how they're generated.
The
CBC Poll Tracker and
Too Close to Call are tried and tested websites. Worth a visit, as always.
LISPOP also has a long track record, though it is more basic than the above websites.
Lean Tossup has a vague methodological description
, though I have received helpful additional information about the model from Evan Scrimshaw, the managing editor for the site. They do things somewhat differently (as detailed below), which means that their results often look like outliers, but also that they provide a useful different perspective - so go visit their site! They have fixed their calibration of aggregate uncertainty, which I complained about in an earlier version of this post, so I have removed the warning about their probability estimates. (However, the probabilities given for individual ridings still imply too much confidence, in my opinion.) The articles on the site are also worth a read: sometimes brilliant, sometimes unreasonable, but never dull.
Calculated Politics is a very sleek website, but has no published methodology for now. This review is based on the information that I received via Twitter - pretty basic (so I can't really judge the model's quality), but enough to say that the projections are done seriously. Unfortunately, the 90% confidence intervals given for national seat totals and vote shares are too narrow, so those should be ignored for now (or considered, say, 65% confidence intervals). A fix may be forthcoming, however. I will move them from "Other" to "Main" once the methodology is provided on their webpage.
As far as I'm concerned, other currently active seat projectors have not earned any credibility because they do not describe their methodology at all (as explained in
this post, this is why they aren't listed under "Main Projection Sites")
, do not model uncertainty*, and have little track record. There are links to them on the right-hand side of the page under "Other Projection Blogs/Sites" for convenience, but rely on them at your own risk.
*Except for Visualized Politics, whose stated probabilities are unreasonable given where its current projection stands - if you visit their site, stay away from the probabilities.
Transparency of Methodology
Winners: CBC Poll Tracker, 338Canada, Canadian Election Watch
Honourable Mention: Too Close to Call
The CBC Poll Tracker,
338Canada and this blog are the only sites that describe in some detail how polls are averaged. These sites and Too Close to Call also provide detail about how polls are converted into seat projections. Additionally, I fully detail every riding-level adjustment made, although I recognize that the information given by the other sites mentioned above (list of factors taken into account, without listing affected ridings) is adequate.
LISPOP lists the polls considered, but does not explain how they are combined (or show any vote share estimates). It also states that it uses regional uniform swing without incumbency effects. However, something more must be going on for the Greens to be projected to get a 6th seat in BC...
In my view, Lean Tossup has the most room to improve regarding transparency
(other than Calculated Politics, of course, which is working on a methodology page). They describe which variables are considered by their model, which is good.
However, the poll weighting method is a bit of a black box (though the information provided below by Evan Scrimshaw should shed some light).
Presentation (and Transparency) of Results
Winner: 338Canada
Honourable Mention: Calculated Politics, Too Close to Call, CBC Poll Tracker
There really isn't much to be said here other than the fact that
P.J. Fournier's site is a gem: information about model results is very detailed and easy to find, and the page is visually appealing.
Calculated Politics' website is also excellent. In this case, the extensive presentation of results makes up somewhat for the opaque methodology: for example, you might not know how the polling weights are derived, but you can see what they are.
Too Close to Call's seat calculator is a very nice feature that gives readers the ability to make their own hypothetical scenarios. For this, TCTC deserves an honourable mention.
The CBC Poll Tracker's website has a link to historical polling trends, which is interesting.
It does not provide riding-by-riding information, which is understandable: such information is often misused, and the CBC has some responsibility to avoid putting it out.
I improved the transparency of results on this blog a couple of weeks ago by adding a table of regional polling averages at the upper left. It's a simple feature that I wish more sites had - Too Close to Call does, but elsewhere, you have to click through to each region's page (or, for the CBC Poll Tracker, toggle to each region's chart) to access that information. (338Canada and LISPOP have a regional table for seats, but not polling averages.)
Poll Averaging
Winner: Canadian Election Watch
This is an area into which I've put a lot of effort this year, and I'm pretty confident that my poll averages are now the most reliable. (Note: I'm not talking about the turnout adjustment, where any defensible guess is basically just as good.) By reliable, I mean that they do not overreact to any single poll or pollster, yet remain sufficiently responsive. I have separate sets of poll weights for each region, reflecting the fact that older polls need to be weighted more heavily in smaller regions to get a decent sample size. The weights I use depend not only on a poll's own characteristics (recency, sample size), but also on the characteristics of other polls. All this is derived from a statistical framework to extract information optimally
(in the sense of minimizing the variance of the estimate's error). You can read the details
here,
here and
here.
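To give a flavour of the kind of calculation involved, here is a minimal sketch of inverse-variance poll weighting under simple assumptions (independent polls, voting intentions drifting like a random walk). It is illustrative only - the parameter values are invented, and the actual weights used on this blog are derived as described in the linked posts.

```python
import numpy as np

def inverse_variance_weights(ages_days, sample_sizes, p=0.35,
                             drift_var_per_day=1e-5):
    """Toy inverse-variance poll weights.

    Assumes each poll independently measures voting intention on its field
    date, and that intentions then drift like a random walk. The
    minimum-variance estimate of *today's* intention weights each poll by
    the inverse of (sampling variance + accumulated drift variance).
    Parameter values are purely illustrative.
    """
    ages = np.asarray(ages_days, dtype=float)
    n = np.asarray(sample_sizes, dtype=float)
    sampling_var = p * (1 - p) / n        # binomial sampling variance
    drift_var = drift_var_per_day * ages  # extra uncertainty from poll age
    w = 1.0 / (sampling_var + drift_var)
    return w / w.sum()                    # normalize weights to sum to 1

# Example: three polls that are 1, 5 and 12 days old.
print(inverse_variance_weights([1, 5, 12], [1500, 1000, 2500]))
```

The point of deriving weights this way is that older or smaller polls are downweighted exactly in proportion to how much less they tell you about today's voting intentions.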
338Canada's poll averaging method uses the square root of the sample size (like my old method), which ensures that no excessive weight is put on a single poll (as opposed to a single pollster in my current method); time discounting uses a power function. The CBC Poll Tracker
discounts polls using an exponential function, and applies a penalty to outliers.
Too Close to Call uses a rudimentary time discounting scheme, and applies a penalty to a pollster's older polls. All these schemes are sensible (except for the outlier penalty by the CBC Poll Tracker, which I believe to be statistically unsound), but they are not optimized based on a statistical model like my scheme. It is therefore virtually impossible to answer the question "Under what assumptions on the evolution of voting intentions and the generation of polling errors is this weighting scheme optimal?"
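For readers who want to see what these heuristics look like in code, here is an illustrative sketch combining a sample-size factor with either a power-law or an exponential time decay. The functional forms are simplified and the decay parameters are made up - none of the sites publish them.

```python
import numpy as np

def weight_power(age_days, n, exponent=1.5):
    """sqrt(sample size) with power-function time discounting
    (roughly the flavour of the 338Canada-style scheme described above;
    the exponent is an assumption)."""
    return np.sqrt(n) / (1 + age_days) ** exponent

def weight_exponential(age_days, n, half_life=7.0):
    """sqrt(sample size) with exponential time discounting
    (roughly the flavour of the CBC Poll Tracker-style scheme described
    above; the half-life is an assumption)."""
    return np.sqrt(n) * 0.5 ** (age_days / half_life)

for age in (0, 7, 14):
    print(age, weight_power(age, 1000), weight_exponential(age, 1000))
```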
Too Close to Call will also enhance its averages with riding polls (see below).
Lean Tossup doesn't describe its method, and its polling average
often deviates noticeably from other websites'. From information provided by Evan Scrimshaw, this appears to be due to two factors: (i) a more restrictive poll inclusion policy (no IRG, DART or, if I remember correctly, Ipsos polls), and (ii) extrapolation from riding polls. The most obvious deviations so far appear due to the former, though I do feel that the weighting of riding polls is a bit too high - at least currently, given the small number of riding polls. As the campaign progresses and riding polls accumulate, however, extrapolation from them could become a significant asset - Too Close to Call will start extrapolations at some point - so keep an eye on them!
338Canada, the CBC Poll Tracker and Calculated Politics also use pollster ratings. Without knowing how they are derived, it is difficult to judge how much this helps (or even whether it is counterproductive).
Model Skeleton
Winners: Canadian Election Watch, Too Close to Call
What I mean by "model skeleton" is how the seat model converts polling averages to vote shares in each riding,
before adjusting for riding-specific factors like incumbency, star candidates, demographics,
etc.
There are two main basic approaches to this:
uniform swing, where all ridings change by the same amount (
e.g. if a party goes from 20% to 30% within a region, it increases by 10 points in each riding), and
proportional swing, where all ridings change by the same proportion (
e.g. if a party goes from 20% to 30% within a region, its vote share increases by half in each riding). With proportional swing, an adjustment is required to bring each riding's total back to 100%. Neither approach is clearly superior. LISPOP, Too Close to Call and I use uniform swing; 338Canada, the CBC Poll Tracker
and Calculated Politics use proportional swing; Lean Tossup does not specify its method. (Note: all models apply swings at the regional level; using only the national swing just does not make sense for Canada due to large regional disparities.)
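To make the distinction concrete, here is a small sketch of both swing types using the 20%-to-30% example above; the party labels and riding numbers are made up.

```python
def uniform_swing(riding_share, old_regional, new_regional):
    """Uniform swing: add the regional point change to every riding."""
    return riding_share + (new_regional - old_regional)

def proportional_swing(riding_shares, old_regional, new_regional):
    """Proportional swing: scale each party's riding share by the ratio of
    new to old regional share, then renormalize the riding back to 100%."""
    scaled = {party: share * new_regional[party] / old_regional[party]
              for party, share in riding_shares.items()}
    total = sum(scaled.values())
    return {party: 100 * share / total for party, share in scaled.items()}

# A party going from 20% to 30% regionally, in a riding where it sat at 25%:
print(uniform_swing(25.0, 20.0, 30.0))   # 35.0 (up 10 points)
print(proportional_swing({"A": 25.0, "B": 75.0},
                         {"A": 20.0, "B": 80.0},
                         {"A": 30.0, "B": 70.0}))
```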
I listed Too Close to Call and myself as "winners" because we make enhancements to uniform swing. Too Close to Call introduces sub-regional coefficients in recognition of the fact that some areas within a region have been historically more volatile than others. I adjust uniform swing in recognition of the fact that swings tend to be smaller for parties with very low or very high vote shares (as described
here).
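One standard way to obtain this kind of dampening - shown here purely as an illustration, not necessarily the adjustment I use or the one Too Close to Call uses - is to apply the regional swing on the log-odds scale, which automatically shrinks point changes for parties near 0% or 100%.

```python
import math

def logit_swing(riding_share, old_regional, new_regional):
    """Apply the regional swing on the log-odds scale. A party near 0% or
    100% then moves by fewer points than one near 50%. Shares are
    fractions in (0, 1)."""
    logit = lambda p: math.log(p / (1 - p))
    inv_logit = lambda x: 1 / (1 + math.exp(-x))
    shift = logit(new_regional) - logit(old_regional)
    return inv_logit(logit(riding_share) + shift)

# A regional move from 20% to 30% translates into a ~13-point gain for a
# riding at 50%, but only a ~3-point gain for a riding already at 5%.
print(logit_swing(0.50, 0.20, 0.30))
print(logit_swing(0.05, 0.20, 0.30))
```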
The CBC Poll Tracker tries to enhance proportional swing by using each of the last three elections as a baseline, and averaging the results. While this helps avoid cases where a skewed baseline overly impacts a result, it may increase the error in areas experiencing meaningful demographic change. Indeed, if continuing demographic change pushes the projections based on the 2008, 2011 and 2015 baselines in the same direction, using their average is worse than just using the 2015 baseline: the older baselines pull the estimate back toward a riding's past makeup.
338Canada gives "probabilistic floors and ceilings" for regions and districts based on electoral history. This sounds sensible, but it's hard to judge the pertinence of this without more details.
Use of Demographics
Winner: 338Canada
Honourable Mention: Lean Tossup
Two models use demographic characteristics: 338Canada and Lean Tossup. As stated above, neither explains in detail how this is done, but at least 338Canada gives an inkling, and can point to specific examples where this has helped in the past (
e.g. QS winning Sherbrooke in the 2018 QC election).
Lean Tossup does say that the indicator it uses is education.
Riding-Specific Adjustments
Winner: CBC Poll Tracker
Honourable Mention: Calculated Politics, 338Canada
The CBC Poll Tracker seems to account for the largest set of potential riding-specific factors (
e.g. incumbency, star candidates, ministers,
etc.).
338Canada and Calculated Politics also seem to make many adjustments; Calculated Politics additionally takes social media sentiment into account. I also make some riding-specific adjustments, but not to the same extent
(about 10% of ridings); Too Close to Call appears to be in the same boat. LISPOP does not seem to make these adjustments (except when a minor party/independent is expected to have a strong showing in a district, as is evident from its current projection with seats for Bernier and Wilson-Raybould), while it is unclear to what extent Lean Tossup factors these in.
Treatment of Uncertainty
Winner: 338Canada
To me, this is *the* major strength of 338Canada: it takes uncertainty very seriously, comes out with reasonably calibrated results, and presents them in a helpful way.
The CBC Poll Tracker now also has reasonable confidence ranges and probability estimates, though it shows less information than 338Canada.
As mentioned at the start of the post, Lean Tossup has recalibrated its aggregate uncertainty, and the resulting probabilities are now much more reasonable than what they had before. (The current high probability of a Liberal majority is commensurate with the high Liberal projection.) However, I do not believe that enough uncertainty is taken into account for the individual seats' win probabilities.
I provide confidence ranges from time to time (here or on Twitter), and distinguish between uncertainty for an election taking place as of the last poll or now (which, as far as I know, is what all other websites provide) and uncertainty for the actual election (which is MUCH MUCH larger).
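As a rough illustration of why that distinction matters, here is a toy simulation: the same polling lead translates into very different win probabilities depending on how much additional uncertainty is layered on top of the polling average. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def win_probability(lead_points, extra_sd, n_sims=100_000):
    """Probability that a party's current polling lead survives, given
    normally distributed error on the lead. Toy illustration only."""
    simulated_leads = lead_points + rng.normal(0.0, extra_sd, size=n_sims)
    return (simulated_leads > 0).mean()

lead = 2.0  # current polling lead, in points
print("if the election were held today:", win_probability(lead, extra_sd=2.0))
print("on actual election day:        ", win_probability(lead, extra_sd=5.0))
```

With a 2-point lead, roughly 84% of simulations keep the party ahead when the error is small, but only about 65% once campaign movement and systematic polling error are allowed for - which is why election-day probabilities should be much less confident than "as of today" ones.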
Calculated Politics provides confidence ranges as a regular feature, but as mentioned above, they seem miscalibrated and should be ignored or re-interpreted for now.
I look forward to what Too Close to Call puts out this cycle
(the first projection did not come with ranges). LISPOP does not provide numerical evaluations of uncertainty.