Bob Garfield of On The Media has done some great reporting on how polls and the media hype around them are essentially BS. His essential message is that the media is hyping these polls while ignoring the margin of error that renders most of the data completely meaningless. Here in New Hampshire, people stopped believing in polls on January 9, 2008 — when, to everyone’s surprise, Hillary Clinton bested Barack Obama in the NH Primary. To this day no one is quite sure where to place the blame. However, eight years later, with Donald Trump and Bernie Sanders surging in the polls, the most common reaction from my friends who support their opponents is: “yes, but who believes the polls anyway?” The skeptics have a point. Pollsters got it wrong not just in New Hampshire in 2008, but in Scotland and in the most recent UK elections, and according to Garfield, they’ve been getting it wrong daily since this presidential election cycle began.
This phenomenon has important lessons in it for those of us who want to measure our social media and PR efforts. Sadly, I’ve seen people in the measurement industry, and the agencies they work for, make the same mistakes the media and the pollsters are making. In the rush to show off our data, the nitty-gritty details of the data get lost, and with them the integrity and credibility of the findings. As Garfield points out, if the difference between two candidates is 5 percentage points and the margin of error is 6 percentage points, for all you know the candidates are tied. Until you either broaden your sample or adjust your research to lower the margin of error, your data has no meaning at all.
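The arithmetic behind Garfield’s point is simple enough to sketch. Here’s a minimal illustration (the sample size of 270 is a hypothetical, chosen because it happens to produce roughly the ±6-point margin of error he describes) using the standard worst-case formula for a simple random sample at 95% confidence:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample of
    size n at 95% confidence (z = 1.96). p = 0.5 maximizes the
    variance, which is why pollsters quote it as the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of ~270 respondents carries roughly a
# +/- 6-point margin of error:
moe = margin_of_error(270)
print(f"{moe:.1%}")  # about 6.0%

# So a 5-point gap (say, 47% vs. 42%) sits inside the noise:
gap = 0.47 - 0.42
print("statistical tie" if gap < moe else "real lead")
```

With numbers like these, the only honest headline is “too close to call” — anything more dramatic is reading meaning into the noise.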
Far too often I’ve seen research firms trumpet in a headline “xyz brand beat abc brand in the category in terms of positive coverage” or “xyz message dominated the media this month.” That sounds great until you dig into the data and find out that “dominates” means it appeared in 3 out of 5 total stories. Or “beating the competition in a category” means receiving 4 out of 7 total articles. NEWSFLASH! Seven stories is not enough data for anyone to draw reliable conclusions from.
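To see just how hollow a claim built on 7 articles is, apply the same margin-of-error logic to that “4 out of 7” share of coverage. This is a rough sketch (the normal approximation is shaky at samples this small, which only underlines the point):

```python
import math

def share_with_interval(hits, total, z=1.96):
    """Observed share of coverage plus a 95% normal-approximation
    interval, clipped to [0, 1]. With a sample this small the
    approximation is crude -- which is exactly the problem."""
    p = hits / total
    moe = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - moe), min(1.0, p + moe)

# "Beating the competition": 4 of 7 total articles.
p, lo, hi = share_with_interval(4, 7)
print(f"{p:.0%} share, 95% interval roughly {lo:.0%}-{hi:.0%}")
```

The observed 57% share comes with an interval spanning roughly 20% to 94% — in other words, the data is consistent with the brand being crushed by the competition or sweeping the category. No headline survives that.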
If a press release announced a new miracle cure for arthritis based on a trial of 7 people, chances are good that not a lot of sentient arthritis sufferers would throw away their Celebrex/Aleve/Advil and immediately jump to the new and unproven drug. But again and again I see PR agencies and clients make decisions with far too little data. And we wonder why the profession’s credibility is suffering?