"Within the margin of error" doesn't mean what you think it means (plus other things about polls)

During a federal election, we get an incredible number of polls (including riding-level ones). Not as many as in the States, but significantly more than for provincial elections. With this comes the caveat that we'll see variations. That's normal, even with sample sizes of over 1,000. If anything, it's when every pollster shows the same numbers that it becomes suspicious.

With that said, I often read incorrect analyses of these polls, their variations and their differences. So here are some points.

1. The margins of error provided by the pollsters are pretty much useless

Bold statement, I know, but let me explain. First of all, pollsters report the margin of error as one number (like plus or minus 3%, 19 times out of 20). But this is the margin for a party whose support is at 50%. Unless you are the Liberals in Atlantic Canada or the Conservatives in Alberta, you are most likely a lot lower than that. And the lower you are, the smaller the margin of error. For the Green Party, for instance (a party at 5%), with a sample size of 1,000, the MoE is 1.35 points, not the 3.1 that applies to a party at 50%.

This is a major difference! Elections Canada has relatively strict guidelines on how to report polls, but somehow nothing about this.
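The numbers above come straight from the standard formula for the margin of error of a proportion. A minimal sketch (using the usual 95% z-value of 1.96):

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * moe(0.50, 1000), 2))  # party at 50%: ~3.1 points
print(round(100 * moe(0.05, 1000), 2))  # party at 5%:  ~1.35 points
```

The MoE shrinks as the share moves away from 50%, which is why the single reported number overstates the uncertainty for every small party.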

Moreover, most of the time, what you want to know is whether the lead of one party is statistically significant. For instance, let's take the most recent Nanos poll, which puts the Liberals at 32.5% and the CPC at 31.5%. You want to know if the 1-point difference is significant, which is a fancy way of saying you want to make sure it wasn't due to luck (good or bad) in the sampling process.

Most people would apply "twice the margin of error". But this is wrong. Beyond the fact, as we just showed, that the actual MoEs are different, even the correct ones are not applicable to a difference. The "twice the margin" rule would only be valid if there were exactly two parties.

The real calculations are more complicated. You can use my calculator for that. But the threshold is often quite a lot less than twice the margin. By the way, no, the 1-point lead is not significant. In fact, no Nanos poll has shown a statistically significant lead recently; you'd need one party to lead by at least 4.4 points.
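For the curious, here is a sketch of the standard calculation (not the author's calculator itself; the sample size of 1,200 is my assumption for a Nanos tracking poll, so the threshold comes out near, but not exactly at, the 4.4 points cited):

```python
import math

def lead_test(p1, p2, n, z=1.96):
    """Test whether the lead p1 - p2 within ONE poll is significant.
    Two shares from the same sample are negatively correlated, so the
    variance of their difference picks up a +2*p1*p2/n covariance term.
    Returns (is_significant, lead_needed_for_significance)."""
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n)
    return (p1 - p2) / se > z, z * se

sig, needed = lead_test(0.325, 0.315, 1200)
print(sig)                     # False: a 1-point lead is not significant
print(round(100 * needed, 1))  # lead needed: ~4.5 points
```

Note that the resulting threshold is well below "twice the MoE" (which would be over 5 points here), exactly as the text says.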

In the same way, if you want to compare the same party across two different polls, the threshold isn't twice the margin of error either. You need a different calculation. There as well, my calculator can do it for you.
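The cross-poll case is different because the two samples are independent: the variances simply add and there is no covariance term. A sketch with made-up numbers (the 32% to 35% move is purely illustrative):

```python
import math

def change_significant(p1, n1, p2, n2, z=1.96):
    """Is a party's change between two INDEPENDENT polls significant?
    Independent samples: variances add, no covariance term."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) / se > z

# hypothetical: a party moves from 32% to 35% between two polls of 1,000
print(change_significant(0.32, 1000, 0.35, 1000))  # False: not significant
```

Even a 3-point swing between two polls of 1,000 fails the test, which is why single-poll "momentum" stories are usually noise.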

As you can see, the margins provided by the pollsters are pretty much not applicable to anything you actually want to know.

2. "Within the margin of error" and "statistically significant" don't mean what you most likely think

I often read that if it's within the margin of error, then we can't say anything: it's a tie and anything could happen. It's not that simple. Let's take the Nanos poll of September 24th. The Liberals were at 32.3% and the Tories at 28.9%. Not significant at 95% (or 19 times out of 20), since you'd need a difference of around 4.4 points for it to be significant. Still, it doesn't mean the two parties are tied, or that the Conservatives are just as likely to finish ahead of the LPC as the opposite.

Remember, we set the threshold of 95% pretty arbitrarily. What it means is that we only accept a 5% chance of finding something significant when it's in fact not. But we could have chosen 99% or 90%. In the Nanos example, the lead would be significant at the 80% level.
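You can check this by computing the z statistic for the lead and comparing it against the critical values for different confidence levels (again assuming a sample of 1,200, my guess for Nanos's tracking):

```python
import math

def lead_z(p1, p2, n):
    """z statistic for a lead within one poll
    (includes the +2*p1*p2/n covariance term)."""
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n)
    return (p1 - p2) / se

z = lead_z(0.323, 0.289, 1200)
# two-sided critical values for each confidence level
for level, zcrit in [(0.95, 1.960), (0.90, 1.645), (0.80, 1.282)]:
    print(level, z > zcrit)  # significant only at the 80% threshold
```

Same data, same lead: the verdict flips depending on which arbitrary threshold you picked.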

Being "within the margin of error" simply means that if the two parties were actually tied and you sampled 100 times, there would be more than a 5% chance of getting the results Nanos did. Still, it doesn't mean the Liberals were just as likely to be polled that high as the Conservatives were. It just means the chance/risk is too big for us to draw a conclusion (which would, anyway, still have a 5% chance of being wrong).

The other way to look at it, and this is more of a Bayesian approach, is to see that the Liberals' chances of finishing first (in votes) are much higher than the Conservatives' (based on this poll alone, of course). Specifically, with a sample size of 1,200, there is a 93% chance the Liberals would get more votes (I ran 10,000 simulations with a sample size of 1,200 and looked at how many times the Liberals came first).
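That simulation is easy to reproduce. A sketch of the resampling idea (not the author's exact script; I lump all other parties together, which is a simplification):

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed shares in the Nanos poll: LPC 32.3%, CPC 28.9%,
# everyone else lumped together.
probs = [0.323, 0.289, 1 - 0.323 - 0.289]
n, sims = 1200, 10_000

# Redraw 10,000 hypothetical polls of 1,200 respondents each
draws = rng.multinomial(n, probs, size=sims)
lpc_first = (draws[:, 0] > draws[:, 1]).mean()
print(round(lpc_first, 2))  # ~0.93: LPC ahead in roughly 93% of simulated polls
```

So "not significant at 95%" coexists perfectly well with "the Liberals lead in about 9 simulations out of 10."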

So don't read a non-significant lead as meaning a tie, or the other party being ahead, is just as likely. It just means the lead doesn't meet the arbitrary requirement we set ourselves. Plus, a collection of individual polls each showing an insignificant lead can still give us a good overall picture of the race.

3. All polling firms make mistakes and have misses

I often read people saying, "oh well, it's Forum, their polls are a joke." The truth is that Forum has done relatively well in the last couple of years. Sure, they were wrong in Alberta, but less so than other firms. And, well, there was the by-election in Brandon-Souris.

But if you look at the elections since 2011, you see that some firms sometimes do really well and sometimes really badly. In Quebec in 2014, Forum was the best while Leger had a bad night. But earlier this year in Alberta, Leger was clearly the best. As for Ontario, Angus Reid, Ekos and Abacus did well, but only if we take their non-likely-voter numbers.

Notice that each pollster misses at least one party (i.e., doesn't have that party within the margin of error). Quite often, they only get half of them! Averaging is really the way to go; putting too much confidence in one pollster is absurd.

Tony Nickonchuk, who follows me on Twitter, did this analysis thoroughly for every election since 2011 and gladly provided his work. The table below shows the various pollsters, how many elections they polled and the percentage of parties they got within the margin of error. Note, however, that he used the margins provided by the pollsters, so the percentages here overestimate the actual performance.

[Table: % of parties within the MoE by pollster (Abacus Data, Angus Reid, …) and number of elections polled]
So sure, Nanos is doing great, but there is a catch: they almost never poll. They only covered Ontario 2011 and the 2011 federal election. So Nanos is good when they poll, but the 100% success rate is a little bit misleading. In particular, they were lucky enough not to poll BC 2013 or Alberta 2012.

Otherwise, as you can see, it's quite hit and miss. Being super accurate shouldn't always be expected; actually, nailing an election is also a question of luck. But having the parties within the margin of error is less dependent on luck. After all, there is only a 5% theoretical chance of getting a sample so skewed that your results are off. Yet it seems to happen a lot more often than that. It's probably because the margins of error don't correctly represent all the uncertainty that exists (people can lie, change their mind, etc.).

And to go back to this notion that you sometimes do well and sometimes do badly, look at Ekos: terrible in 2011 and in some elections after, but they got Ontario 2014 really close (again, if you ignore their LV adjustments).

Finally, polls with bigger sample sizes should be more accurate, but empirically that doesn't really seem to be the case. In a paper I co-wrote with David Coletto (from Abacus Data), we found that the only significant determinants of poll accuracy were turnout and changes in turnout. Sample size and polling dates were not significant. It's most likely because margins of error only measure sampling volatility, while in practice, measuring voting intentions is far more uncertain: people can lie, change their mind, not vote, etc. Oh, and by the way, IVR and online were not significant either, which leads me to my last point.

4. Online polls work, period

If there is one thing I can't stand, it's the hate for online polling. Yes, their samples are not purely random and, yes, technically they shouldn't report the same margins of error. But the fact is that it works. Using the same data as above, Tony Nickonchuk found that online polls beat IVR (automated phone calls): 74% of parties within the MoE versus only 57% for IVR.

Moreover, look at this blog post by Angus Reid clearly showing they've been doing quite well. Recently, Innovative Research published one poll with two samples, one by phone and one online. The results were very, very similar and definitely within the margins of error.

Also, in the States in 2012, online polls did remarkably well.

So let's stop the nonsense of criticizing online samples. People need to realize that in 2015, phone samples are also anything but random. With response rates sometimes as low as 10%, there is just as much of a selection process (after all, you need to answer your phone and agree to answer the questions). So either hate all polls or don't, but don't cherry-pick the method you prefer.