It has been four years since the 2016 presidential election polling debacle, and once more we have been suckered by inaccurate political polls. Not that I want to add to the avalanche of anger currently being directed at Nate Silver, but here we are, again naming political polls as our Measurement Menace of the Month.
The problem this time was mostly that no one was answering their phones. The pandemic rendered phone banking and texting the best possible alternative to knocking on doors, so phones were flooded with calls from unknown numbers. Many of those calls were from polling firms and went unanswered.
People just don’t answer the phone anymore if the number isn’t in their contact list, especially during an election season, when we are bombarded with calls. And if someone does answer and a voice says, “I’m calling from XYZ organization, do you have a few minutes to take a survey?” then the only people who actually take the survey are those who have time on their hands and a high level of trust in the media. So you end up with a tremendously skewed sample and a database that doesn’t represent the electorate.
When a sufficient number of a particular demographic isn’t available, or isn’t answering, pollsters use algorithms (some might call it mathematical chicanery) to model the demographic based on the answers they do have. University of Sydney political sociologist Professor Salvatore Babones suggests that the problem is permanent:
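One common version of that modeling is post-stratification weighting: answers from underrepresented groups are weighted up so the sample's demographics match the electorate's. The sketch below illustrates the idea with invented numbers (the group names and shares are hypothetical, not drawn from any real poll):

```python
# Hypothetical illustration of post-stratification weighting: if a
# demographic group is underrepresented among respondents relative to
# the electorate, each of its answers is weighted up to compensate.
# All shares below are invented for the example.

# Share of each age group in the electorate (e.g., from census data)
population_share = {"18-29": 0.20, "30-49": 0.33, "50-64": 0.26, "65+": 0.21}

# Share of each age group among people who actually answered the phone
sample_share = {"18-29": 0.08, "30-49": 0.22, "50-64": 0.30, "65+": 0.40}

def post_stratification_weights(pop, sample):
    """Weight for each group = population share / sample share."""
    return {group: pop[group] / sample[group] for group in pop}

weights = post_stratification_weights(population_share, sample_share)
for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")
```

The catch, as the quote below notes, is that weighting a handful of young phone-answerers up by a factor of 2.5 assumes they resemble the young voters who never picked up, and that assumption is itself a source of error.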
“The pollsters do their heroic best to model the likely behaviour of the masses from the self-reports of a few phone-answerers, but all such models are approximations. They inevitably introduce error. Model error may be even bigger than the sampling error that goes into calculating the “error margins” that are often reported alongside polling data. Or it may not be. No one knows but the pollsters, and they’re not saying…”
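The “error margins” Babones mentions capture only the sampling error, which follows directly from the sample size. A minimal sketch of the standard calculation (the 1,000-respondent poll is an invented example):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Classic sampling margin of error at ~95% confidence.

    p -- the reported proportion (e.g., 0.5 for 50%)
    n -- the sample size
    This is the only error the published margin reflects; any model
    error introduced by weighting adjustments comes on top of it.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical 1,000-person poll showing a candidate at 50%:
moe = margin_of_error(0.5, 1000)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
```

This is why a race reported as "within the margin of error" can still miss by far more than that margin: the formula knows nothing about the skew of who answered.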
The models that help pollsters extrapolate from a three percent sample to the whole electorate rely heavily on exit poll data: in-person surveys conducted on election day just after people have voted. But with the majority of votes being cast by mail in this election – 102 million of them – the 2020 exit polls will be next to useless. As a result, future political polls will stray further and further from reality. Political polls are good fun, but they should be treated more as entertainment than as serious politics.
Nate Laban, owner and founder of Growth Survey Systems, says:
“Incorrect polling is often the victim of those who do not know how to segment and cross-analyze their data to account for obvious methodological errors. If they combined the data from three key questions (when and how they were voting, and who they were voting for), they would have had a better model than relying on their total response. I saw so many mathematical errors on TV last night, it made my heart hurt a little.”
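The cross-analysis Laban describes amounts to tabulating candidate choice within each combination of when and how people voted, rather than reporting only the overall split. A toy sketch with entirely invented responses:

```python
from collections import Counter

# Invented responses: (when voted, how voted, candidate choice).
responses = [
    ("early", "mail", "A"), ("early", "mail", "A"),
    ("early", "in-person", "B"), ("election-day", "in-person", "B"),
    ("election-day", "in-person", "B"), ("early", "mail", "A"),
]

# The headline number: the total split across all respondents.
totals = Counter(choice for _, _, choice in responses)

# The cross-tab: candidate choice within each (when, how) segment.
segments = Counter((when, how, choice) for when, how, choice in responses)

print("Total:", dict(totals))
for (when, how, choice), n in sorted(segments.items()):
    print(f"{when}/{how} -> {choice}: {n}")
```

In this made-up data the total is a dead heat, but the segments show mail ballots breaking entirely one way and in-person ballots the other, which is exactly the pattern a mostly-mail election can hide if you look only at the topline.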
The consequences of inaccurate polling are bigger than just disappointing election predictions. Polling is a tremendously valuable tool, and not just for communications measurement. As David A. Graham writes in The Atlantic: “The real catastrophe is that the failure of the polls leaves Americans with no reliable way to understand what we as a people think outside of elections—which in turn threatens our ability to make choices, or to cohere as a nation.”
No doubt there will be lots of hand-wringing by both pundits and pollsters, and lots of blathering about how to fix the problem. But until they figure out how to get a statistically valid random sample of voters to accept a probe placed in their amygdala, all that polling data that Nate Silver crunches to get his forecasts will be based on too small a sample to be truly representative of the outcome. ∞