If you are a poll junkie (and I'm sure a lot of you are), you may have noticed that Nanos' numbers are usually very different from the numbers in the other polls. For instance, 4 days ago, Nanos had the NDP at 13% nationally, while Ipsos and Ekos, with polls conducted over mostly the same period, had this party at 19% and 16.6% respectively. On the same date, Nanos also had the NDP at a crazy-low 8% in Ontario.

So are those discrepancies just the margins of error at work? Is it normal to see gaps this large? Let's look into it a little bit. First of all, something very few people seem to know: the margin of error (MOE) is NOT the same for every party in a given poll. The MOE depends on the level of support, so a party at 50% and a party at 10% have different MOEs. What the media and pollsters report is the maximum MOE, which occurs when a party is at 50%. You have a nice and short paper explaining that here. If we put the NDP at 20%, the MOE in a poll with 1000 observations is 2.48%, not the roughly 3.1% reported for the whole poll. In my example above, the Nanos poll had 1200 observations, the Ipsos one 1200 as well, and the Ekos poll 2000, giving a reported MOE of 2.1%. So the actual MOE for the NDP in the Ekos poll was 1.75%.
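For anyone who wants to check these numbers, here is a minimal sketch of the standard formula behind them, MOE = z * sqrt(p(1-p)/n) with z = 1.96 at 95% confidence (the usual simple-random-sample approximation, ignoring any design effects the pollsters may apply):

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a proportion p in a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Maximum MOE (party at 50%) vs. MOE for a party at 20%, n = 1000
print(round(moe(0.50, 1000) * 100, 1))  # 3.1
print(round(moe(0.20, 1000) * 100, 2))  # 2.48

# NDP at ~20% in the Ekos poll (n = 2000)
print(round(moe(0.20, 2000) * 100, 2))  # 1.75
```

Note that the MOE at 20% support is always smaller than the headline number: p(1-p) is maximized at p = 0.5, so the reported maximum MOE overstates the uncertainty for every party not sitting at 50%.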

So far, Nanos is within the MOE of Ekos, and Ekos is within the MOE of Ipsos. But Nanos and Ipsos don't overlap (19%-2.48%=16.52% versus 13%+2.48%=15.48%). What does this mean? It could mean that Nanos and Ipsos have very different methodologies (sampling weights, screening for likely voters, questions asked, etc.), or simply that one of the two was the 1-in-20 poll that falls outside its MOE.
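The overlap check above can be sketched in a few lines. Following the post's approach, each poll's interval uses the MOE evaluated at 20% support (rather than at the poll's own reported share), with each poll's actual sample size:

```python
import math

def interval(share, n, support=0.20, z=1.96):
    """95% CI around a poll's share, MOE evaluated at `support`."""
    half = z * math.sqrt(support * (1 - support) / n)
    return share - half, share + half

def overlap(a, b):
    """Do two intervals (lo, hi) intersect?"""
    return a[0] <= b[1] and b[0] <= a[1]

nanos = interval(0.130, 1200)
ipsos = interval(0.190, 1200)
ekos  = interval(0.166, 2000)

print(overlap(nanos, ekos))   # True  -- Nanos and Ekos are compatible
print(overlap(ekos, ipsos))   # True  -- Ekos and Ipsos are compatible
print(overlap(nanos, ipsos))  # False -- Nanos and Ipsos are not
```

The pairwise pattern matches the post: the two extreme polls disagree, while the middle one (Ekos) is compatible with both.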

But what is perhaps even stranger is that during this campaign, we get 3 new polls every 3 days (give or take). We have a completely new Nanos exactly every three days, plus the weekly Harris-Decima, Ipsos, Angus-Reid, etc. So at any given moment in this campaign, my projections are based on 4-7 polls. Public opinion doesn't move too quickly, so we can assume that polls conducted within a period of 1 week are comparable. But if we do use an average, the MOE naturally shrinks, and by a lot.

If we assume we have 4 polls in our average (one Nanos with 1200 observations, one Ekos with 2000, one Harris-Decima with 1000 and one Angus-Reid with 2000), the MOE for the NDP in this average is as low as 1.06% (I gave the same weight to every poll, so see this number as an upper bound on the actual MOE of the average). Given that the current average for the NDP is 18.4%, it means that at the 95% confidence level, the NDP is between 17.34% and 19.46%.
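That averaging calculation can be sketched as follows. With equal weights 1/k over k independent polls, the variance of the average is the sum of the individual variances divided by k squared; evaluating p(1-p) at 20% support as before, my quick version lands around 1.04%, in the same ballpark as the 1.06% above (the small difference comes down to rounding and the support level assumed):

```python
import math

def avg_moe(ns, support=0.20, z=1.96):
    """95% MOE of an equal-weight average of len(ns) independent polls."""
    k = len(ns)
    var = sum(support * (1 - support) / n for n in ns) / k**2
    return z * math.sqrt(var)

# Nanos, Ekos, Harris-Decima, Angus-Reid sample sizes
polls = [1200, 2000, 1000, 2000]
print(round(avg_moe(polls) * 100, 2))  # 1.04
```

Weighting each poll by its sample size instead of equally would shrink the variance a little further, which is why the equal-weight figure is an upper bound.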

So every time you see a single poll where one party is 3-4 points higher than in the average of the other polls, be very skeptical. If you do the calculations, the NDP at 8% in Ontario in the Nanos poll was clearly an outlier. That happens sometimes (and, in fact, should happen 5% of the time...). This reminds me of when one poll, last year, put the Greens at 34% in Quebec... and the media decided to actually publish it!

By the way, we haven't gotten a lot of new polls recently, probably because the media are waiting until after the debates. But if I use only the latest Nanos, I get the following projections:

CPC: 148

LPC: 89

NDP: 27

Green: 0

Bloc: 44

On the other hand, if I use only the latest HD poll, I get:

CPC: 145

LPC: 79

NDP: 33

Green: 0

Bloc: 51

The main reason for the big difference for the Liberals is Ontario. HD projects the Tories with a 5-point lead, while Nanos actually has the Liberals only 2 points behind. This naturally makes a big difference. The point of this post was to show that, given all the polls we have, the MOE isn't that big, even at the province level. Projecting federal elections is difficult because we need to get the percentages right in every province, and the MOE in some provinces is really high for a single poll. So it can be difficult to calculate the best- and worst-case scenarios for each party.

On a side note, yet a related one: other websites doing seat projections have a very weird way of calculating the best-case scenario for each party. Basically, they look at all the recent polls and take the best result in each province. It's cherry-picking the best-yet-totally-unlikely scenario. The result? Such calculations give a confidence interval for the LPC seat projection as wide as 70-115... This makes no sense to me. If that is really the best accuracy you can achieve once you add some uncertainty, you might as well guess the results. I mean, come on, anyone can predict that the Liberals will finish between 70 and 115 seats.

On the other hand, if you actually look at close races or do the MOE calculations, you don't get nearly as much volatility. Given the number of polls we have, I think we shouldn't be afraid to be precise.