How bad were the polls for the Calgary mayoral election of 2017?

On Monday, residents of Calgary (and elsewhere in Alberta, but I'm only focusing on Calgary here) went to the polls to elect their Mayor. Incumbent Naheed Nenshi won after a long and difficult campaign, beating challenger Bill Smith 51% to 44%.

I live in Vancouver and don't follow Calgary politics. The only reason I had an interest in this race is that the pollsters had wildly different numbers: Mainstreet had Smith easily ahead by 16 points while Forum had Nenshi winning by 19 points! Obviously one of these firms would be wrong, and I wanted to grab the popcorn and watch!

Mainstreet is obviously the one that was wrong, and they have already admitted it. They had Nenshi behind by 16 points and he won by almost 8; the total error is actually 23.7 points! That is a massive failure. We have seen polls go wrong before (Alberta 2012, BC 2013, etc.) but missing by 24 points is something else. This is more comparable to the infamous Brandon-Souris by-election of 2013, when Forum had the Liberals 29 points ahead and they ended up losing by a little less than 2 - a total error of almost 31 points. However, that was a riding poll (usually less accurate) for a by-election (harder to predict) with a really small sample size (below 400). The miss by Mainstreet here, with 1,500 respondents, is more stunning.
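(If you're wondering where the 23.7 comes from: the "total" error is just the distance between the margin a poll showed and the margin on election night. A quick sketch in Python, using the rough margins quoted above, where a negative number means Nenshi, or the eventual winner, is behind:)

```python
def total_error(poll_margin, actual_margin):
    """Distance between the margin a poll showed and the actual result,
    both expressed as candidate A minus candidate B, in points."""
    return abs(poll_margin - actual_margin)

# Mainstreet in Calgary: Nenshi at -16 in the poll vs roughly +7.7 on election night
print(total_error(-16, 7.7))   # ~23.7 points

# Forum in Brandon-Souris 2013: Liberals at +29 vs a loss of roughly 2 points
print(total_error(29, -2))     # ~31 points
```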

What makes it worse for this firm is how confident its president, Quito Maggi, was that his numbers were right. Maggi is often active during election periods: he will tweet his numbers, comment on them, defend them. I actually truly appreciate that. It's much better than firms that publish numbers and are never available for any question. Mainstreet was so confident that they even published (and shared on Twitter) a polling scorecard PDF so people could easily see which pollster was the most accurate! Talk about shooting yourself in the foot!

Since then, Mainstreet has been criticized by many. And let's face it, when you miss as badly as they did for this election, people will remember. Forum still has a bad reputation among many people even though they have had good to excellent results in pretty much every single election since 2012 (to be fair, Forum does have a tendency to publish some weird polls once in a while). So of course Mainstreet will have to deal with the blow to its credibility for a while. With that said, some of the attacks go beyond what you'd expect. Because Mainstreet is the main polling partner of Postmedia, some are quick to jump to the conclusion that Mainstreet was publishing fake numbers and/or trying to change the race. Those are very serious accusations for a pollster. The objective of this post is to determine how badly Mainstreet really did and to put this failure in perspective.


Polls were bad overall, Mainstreet was just the worst

Mainstreet was obviously very, very wrong on Monday. But Forum and the third pollster, Asking Canadians, were far from being good. They did get the winner right, but their numbers were also fairly off. Look at the table below.



Look, in particular, at the corresponding margins of error associated with the Mean Square Error for each firm. The Mean Square Error is a standard measure of accuracy, widely used to compare polling accuracy in the industry and in academic papers. If the estimator - the poll here - is unbiased, the MSE is also an estimator of the variance, and we can therefore calculate the corresponding margins of error. This is how I calibrate my model and simulations. For instance, for French presidential elections, the average error is usually small and the corresponding margins of error are around 3-4%. For US elections, on the other hand, they are closer to 6-7%. See it as a general measure of the effective polling accuracy as opposed to the theoretical one. At 17%, Mainstreet is doing terribly, but the other two firms are also way above what we usually observe in the industry. If Mainstreet was off by almost 24 points in total, Forum was off by more than 11!
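(For those curious about the mechanics, here is a minimal sketch of that calculation in Python. The vote shares in the example are made up purely for illustration, and the 1.96 factor is simply a 95% normal interval; it's the idea, not the exact code I use.)

```python
import numpy as np

def effective_moe(poll_shares, result_shares):
    """Mean Square Error across candidates' vote shares, and the
    'effective' 95% margin of error it implies (1.96 * sqrt(MSE)),
    treating the poll as an unbiased estimator."""
    errors = np.asarray(poll_shares) - np.asarray(result_shares)
    mse = np.mean(errors ** 2)
    return mse, 1.96 * np.sqrt(mse)

# Made-up numbers, just to show the calculation
mse, moe = effective_moe(poll_shares=[0.48, 0.47], result_shares=[0.52, 0.44])
print(f"MSE = {mse:.5f}, effective 95% MoE = {moe:.1%}")
```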

What this means is that this was a difficult race to poll. One possible explanation is the increased turnout, which went from 39% in 2013 to 58%! If there is one thing that is usually correlated with polling errors, it's a sharp change in turnout.

I reached out to Mainstreet to ask if they knew what happened, and they said not yet. Which is a completely fair answer; I'd actually be very skeptical if they could already pinpoint exactly what went wrong. Also, they aren't hiding behind the usual "people changed their mind two minutes before voting" excuse often used by pollsters.

Could Mainstreet simply have been unlucky? After all, basic statistics teaches us that once in a while you can get a bad sample and results that are very off. Technically speaking, we should actually get such a poll every 20 polls (the 19-out-of-20 thing from the margins of error). For this part, I can safely answer: no, being unlucky wouldn't explain being off by 24 points. How do I know? I ran simulations. Specifically, I ran 100,000 simulations using the actual results of Monday and the sample size of Mainstreet. Essentially, I'm simulating taking 100,000 samples of 1,500 respondents. The results are below, with the gap Nenshi minus Smith (positive means Nenshi is ahead in one simulated poll, negative means Smith is ahead).
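(Here is roughly what such a simulation looks like in Python, assuming simple multinomial sampling from the approximate final shares - Nenshi 51%, Smith 44%, everyone else 5%. It's a sketch of the idea rather than my exact code.)

```python
import numpy as np

rng = np.random.default_rng(2017)

# Approximate final result: Nenshi ~51%, Smith ~44%, everyone else ~5%
true_shares = np.array([0.51, 0.44, 0.05])
n_respondents = 1500
n_sims = 100_000

# Draw 100,000 random samples of 1,500 voters from that population
counts = rng.multinomial(n_respondents, true_shares, size=n_sims)
shares = counts / n_respondents

# Gap = Nenshi minus Smith, in percentage points, for each simulated poll
gap = (shares[:, 0] - shares[:, 1]) * 100

print(f"Mean gap: {gap.mean():.1f} pts, smallest gap: {gap.min():.1f} pts")
print(f"Simulated polls with Smith ahead: {(gap < 0).sum()} out of {n_sims}")
```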

As you can see, out of 100,000 samples of 1,500 respondents, the most skewed sample I got was one where Smith was ahead by 3 points (remember, these simulations are drawing samples from a population where Nenshi is actually ahead by 8 points; this is really showing what can happen when you only have a sample of the population). So getting so unlucky that you get a sample where Nenshi is down 16 points is essentially impossible. If anything, given that Nenshi was actually ahead by 8, getting a sample of 1,500 respondents where Smith was ahead at all was already incredibly unlikely, as you can see on the graph (very few points with a gap below 0, on the left). Sure, it might be possible to get Smith up by 16 if you draw an astronomically large number of samples and are incredibly unlucky in one of them, but that is a stretch. For all intents and purposes, random sampling can't explain the error here.
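(To put a rough number on how unlikely that would be: under a normal approximation of the sampled gap, with the same approximate shares as in the sketch above, the probability of drawing a sample showing Smith up by 16 or more comes out vanishingly small - far beyond anything ordinary bad luck could produce.)

```python
from math import sqrt
from scipy.stats import norm

p_nenshi, p_smith, n = 0.51, 0.44, 1500

# Normal approximation for the gap (Nenshi share minus Smith share) in a sample of size n
mean_gap = p_nenshi - p_smith
sd_gap = sqrt((p_nenshi + p_smith - (p_nenshi - p_smith) ** 2) / n)

# Probability that a random sample of 1,500 shows Smith ahead by 16 points or more
print(norm.cdf(-0.16, loc=mean_gap, scale=sd_gap))
```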

While the failure by Mainstreet can't be explained by simply being unlucky, that doesn't mean Mainstreet's sample was good. Quito Maggi has already admitted it wasn't. What it means is that the sample couldn't have been bad simply because of bad luck; other reasons were at play. At this game, your guess is as good as mine, but it could be anything from oversampling some demographics (maybe not enough young people, as suggested by Maggi) to using weights that weren't representative of who actually voted given this turnout.
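(To make the weighting point concrete, here is a toy example - every number is made up, it's only meant to show the mechanism. If one group votes very differently from another and your weights assume the wrong mix of those groups among voters, the topline moves even if every respondent answered honestly.)

```python
import numpy as np

# Made-up numbers: Nenshi's support within two age groups
support_by_group = np.array([0.65, 0.45])    # under-45s, 45 and over

actual_voters  = np.array([0.45, 0.55])      # the age mix that actually showed up
assumed_voters = np.array([0.30, 0.70])      # the age mix the weighting assumed

print("Nenshi among actual voters:", support_by_group @ actual_voters)    # 0.54
print("Weighted poll estimate:    ", support_by_group @ assumed_voters)   # 0.51
```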

Finding out why polls failed is never easy. To this day, we still don't really know what happened in Alberta in 2012 or BC in 2013. In the US last year, state-level polls mostly missed because they apparently didn't sample enough white Americans without a college degree. In all likelihood, we might never know what happened here in Calgary.


The "Mainstreet was just making things up and/or had an agenda" criticism

OK, this is getting beyond my expertise and what I like to write about on this blog. Quito Maggi runs his firm the way he wants. As I said before, I have zero problem with him being active on Twitter, and I'm not going to judge what counts as "professional behavior for a pollster". I myself am guilty at times of being too active online.

But I simply have no reason to believe Mainstreet would do such a thing. They have so far done fairly well in Canadian elections. They were fine in BC earlier this year (except for their riding-level polls; also, to be fair, pretty much every firm did well there) and in Nova Scotia (although Forum did better). They were also good for the federal election in 2015. Moreover, Maggi has previously been accused of being a Liberal, not a Conservative. So why would he use his firm to help Bill Smith, the more right-wing candidate in this mayoral race? Because Mainstreet has a deal with Postmedia, a group that owns right-leaning newspapers such as the National Post? Sure. It's very easy to see bias where people want to find it. I mean, during an election, I can be accused daily of being a "Liberal shill" or of "trying to help the Conservatives" depending on the projections I post.

And yes, I'm aware of the dispute between Mainstreet and the MRIA. I have no comment on it. If you read more about the story, it sounds more and more like an internal dispute between two organizations - more an issue of PR and of Maggi's behavior than of the reliability of the firm and its numbers.

At the end of the day, as far as I'm concerned, Mainstreet's track record had been very good until now. This giant failure in Calgary alone won't change my mind. All it does is remind us why taking an average is better (averaging the polls here put you pretty much spot on) and that pollsters can fail. Should they be more humble? Sure, maybe.