
Saturday, September 28, 2019

Comparing 2019 Seat Projections

In August, I wrote a post about the methodology of various poll-based projections you'll find around the web. This post takes a look at what these models have been showing, and how to explain the differences between them. I will only review projections that provide both seat counts and vote shares; those showing only the former are excluded from the analysis.

I have been keeping track of this graph for over a month:
The projection by Quito Maggi (President and CEO of Mainstreet Research) was posted on Twitter on Sept. 27; the national vote shares were stated as CON 36, LIB 34, but the population-weighted regional breakdown is consistent with a tied national vote. Forum's projection is based on a Sept. 19-21 poll. All others are aggregator projections posted on Sept. 27 or 28. TCTC = Too Close to Call; CEW no adj. = Canadian Election Watch without turnout adjustment.

The first thing to note is that all these projections agree: the Tories are tied or better in terms of the popular vote, but the Liberals lead in terms of seats.

Beyond that, there are differences in both: (i) the seat projections, even for models with nearly identical vote share projections, and (ii) the vote share projections themselves. I will address these separately.

Seat Projection Given Vote Share
To compare the seat projections for the two main parties, draw a line through the two red dots on the graph (corresponding to my projections with and without turnout adjustment) and extend it outward. Models on this line give results similar to mine conditional on the popular vote, models above it are friendlier to the Liberals, and models below it are friendlier to the Tories.
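For readers who prefer numbers to eyeballing, here is a minimal sketch of that line test in Python. The axes and every coordinate below are assumptions chosen for illustration, not values read off the chart above.

```python
# Minimal sketch of the "line test" described above, assuming the graph plots
# the CON-LIB vote margin (x, in points) against the LIB-CON seat margin (y).
# All coordinates here are hypothetical, not taken from the actual chart.

def line_through(p1, p2):
    """Return f(x) for the straight line through points p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    return lambda x: y1 + slope * (x - x1)

# The two "red dots": my projections with and without turnout adjustment.
cew_line = line_through((1.0, 20.0), (0.0, 30.0))

# Another model, plotted at its own vote margin: above the line means it is
# friendlier to the Liberals, below means friendlier to the Tories.
other_margin, other_seat_lead = 0.5, 40.0
print(other_seat_lead > cew_line(other_margin))   # True -> Liberal-friendly
```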

Clearly, 338Canada and the CBC Poll Tracker give very similar results to my model, controlling for the vote share. This has quite consistently been the case since August, with the CBC Poll Tracker bouncing around just a bit because it gives the number of seats projected ahead, while 338Canada and I give the sum of seat win probabilities. Given that these are by far the most popular seat models, my personal opinion is that Canadians are being well served. This line suggests that the Tories need to win by about 3% to even the seat count.
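As an aside, the difference between those two seat summaries is easy to see with a toy example; the win probabilities below are invented for illustration.

```python
# Toy illustration, with invented Liberal win probabilities for five ridings,
# of the two ways of summarizing riding-level results mentioned above.
lib_win_prob = [0.9, 0.7, 0.55, 0.45, 0.2]

# "Seats projected ahead": count ridings where the party is favoured.
# Probabilities crossing 50% flip whole seats, so this figure jumps around.
seats_ahead = sum(1 for p in lib_win_prob if p > 0.5)   # 3

# Sum of win probabilities: the expected seat count, which moves smoothly
# as the underlying probabilities shift.
expected_seats = sum(lib_win_prob)                      # 2.8

print(seats_ahead, expected_seats)
```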

At this moment, Too Close to Call is below the line, though it had been above by roughly the same amount just before "blackface," and roughly on the line before that. Therefore, I would include it in the same group of models as 338Canada, CBC Poll Tracker and mine.

Lean Tossup has consistently been well above the line. This is due to at least two, and possibly three factors:
- Restrictive poll inclusion policy
- Use of riding polls
- (Possibly) Use of demographics
The use of riding polls likely causes Lean Tossup to put a very high weight on Mainstreet results, since almost all riding polls are conducted by Mainstreet (and there are a lot of them: around 40 so far); this is compounded by a restrictive poll inclusion policy. It is therefore unsurprising that Lean Tossup's projection is near Quito Maggi's. Because Mainstreet is especially favourable to the Liberals in crucial Ontario, this increases Liberal vote efficiency and pushes those projections up the graph.

Calculated Politics has shown no consistency: it ranged from slightly below the CBC-338-CEW line a couple of weeks ago to almost as far above it as Lean Tossup just a few days ago. I have not investigated why this is the case.

Finally, Forum Research is well below the CBC-338-CEW line. This is a head-scratcher, as the regional distribution of support in the poll on which this projection is based is very good for the Liberals. Something "interesting" might be going on with Forum's seat model.

Vote Share Projection
Note: For this section, ignore pollsters' projections (since they use only their own data).

Because poll aggregators mostly use the same data, the vote share projections (before any turnout adjustment) should be very similar. And that's what we broadly see, with the Tory lead ranging from 0.2% to 1.4%.

Close followers of this blog will have noticed, however, that my polling average - even without the net C+1 turnout adjustment - often leans more Conservative than other websites', as it does now. This is due to the interaction of the following two factors:
1. My vote share model assumes that each pollster has an unknown house effect. As a result, it tries to stabilize the weights placed on different pollsters: those with recent polls still get higher weight, but not by nearly as much as without the stabilization. (A toy sketch of this idea follows the list below.)
2. Nanos and Mainstreet, the two pollsters with daily tracking, tend to exhibit a Liberal lean.
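To make the interaction concrete, here is a toy sketch, with invented polls, of how collapsing each firm's polls into a single stabilized weight changes an average. It is not my actual model; the half-life, the numbers, and the one-weight-per-firm rule are all assumptions chosen for illustration.

```python
from collections import defaultdict

# Hypothetical polls: (firm, days_old, CON %, LIB %). The two daily trackers
# lean Liberal; the older polls lean Conservative.
polls = [
    ("Nanos", 1, 33, 35), ("Nanos", 2, 33, 36), ("Nanos", 3, 33, 35),
    ("Mainstreet", 1, 34, 35), ("Mainstreet", 2, 34, 35),
    ("Ipsos", 6, 36, 32), ("AngusReid", 7, 37, 31),
]

def recency_weight(days_old, half_life=4.0):
    # A poll's weight halves every `half_life` days.
    return 0.5 ** (days_old / half_life)

def poll_by_poll_average(polls):
    # Every poll weighted individually: firms with daily tracking contribute
    # many recent polls and end up dominating the average.
    w = [recency_weight(age) for _, age, _, _ in polls]
    con = sum(wi * c for wi, (_, _, c, _) in zip(w, polls)) / sum(w)
    lib = sum(wi * l for wi, (_, _, _, l) in zip(w, polls)) / sum(w)
    return round(con, 1), round(lib, 1)

def firm_stabilized_average(polls):
    # One weight per firm (based on its newest poll), applied to that firm's
    # own average: recent firms still count more, but not by nearly as much.
    by_firm = defaultdict(list)
    for firm, age, c, l in polls:
        by_firm[firm].append((age, c, l))
    w_tot = con = lib = 0.0
    for entries in by_firm.values():
        w = recency_weight(min(a for a, _, _ in entries))
        con += w * sum(c for _, c, _ in entries) / len(entries)
        lib += w * sum(l for _, _, l in entries) / len(entries)
        w_tot += w
    return round(con / w_tot, 1), round(lib / w_tot, 1)

print(poll_by_poll_average(polls))     # tilted toward the Liberal-leaning trackers
print(firm_stabilized_average(polls))  # older firms retain more influence
```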

As a result, when polls from firms other than Nanos and Mainstreet are a bit older, they carry higher weight in my average than in others' averages; when all pollsters have released recent data, this gap should in theory disappear. And that's exactly what we saw right after "blackface": other projections converged to mine after DART, Ipsos and Angus Reid released new numbers (along with almost every pollster active on the national vote intention scene), as evidenced by the thread below:

This is where I will shamelessly advertise my projection trends for the popular vote, which I believe have two meaningful advantages over others you might see:
(Note: I'm specifically referring to the up/down trends. I'm NOT saying that the levels of my popular vote estimates are the most accurate - the turnout adjustment is merely an educated guess that could be wrong this year. And I'm NOT talking about seat projections at all.)

1. The trends are "real" because they attenuate variations due to the mix of pollsters that happen to have recently conducted a poll. Other aggregators do not stabilize the mix of pollsters in their averages (or do so to a much lesser extent than I do), while pollsters' own trends use much smaller samples and are more affected by statistical noise.

2. Stabilizing the pollster mix allows me to more aggressively weight recent polls without generating undue volatility (this is done by assigning negative weight to the recent pollsters' older polls; one toy reading of this is sketched below). The "blackface" episode illustrated this starkly: my polling average moved swiftly on Sept. 21 and then stayed stable as the bulk of the post-"blackface" polling data came out, whereas others took 2-3 days to eventually move as much as mine. And yet, the trends my model produces are quite stable when there is no major news, while other projections show, for example, Liberal drops every time Angus Reid releases a poll.
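Since "negative weight" can sound counterintuitive, here is one hedged reading of it as a toy calculation. This is not my actual formula; the coefficient, the 50/50 firm weights, and the numbers are all made up to show how the firm mix can stay fixed while new data moves the average quickly.

```python
# Toy reading (NOT the actual CEW formula) of "negative weight on the recent
# pollsters' older polls": within a firm, give weight (1 + a) to its newest
# poll and -a to its previous one. The firm's total weight is unchanged, so
# the pollster mix stays stable, but the firm's new numbers move the average
# by more than a simple blend would.

def firm_estimate(latest, previous, a=0.5):
    # Equivalent to latest + a * (latest - previous).
    return (1 + a) * latest - a * previous

# Hypothetical Liberal shares: Firm A just released a post-event poll showing
# a 3-point drop; Firm B has not polled since the event.
firm_a = firm_estimate(latest=33.0, previous=36.0)   # 31.5: leans into A's drop
firm_b = 36.0                                        # only a pre-event reading
average = 0.5 * firm_a + 0.5 * firm_b                # 33.75

# An equal-weight blend of the three polls (33, 36, 36) would sit at 35, so
# the average above reacts faster while each firm still counts for half.
print(firm_a, average)
```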

Now that I've whetted your appetite, keep an eye out for updated projection trends, probably coming tomorrow!
