1. Making up Metrics
I was recently on a conference call with a respected international measurement organization, and a new member of the group was explaining how she’d been pressured by her CEO to put a dollar value on her efforts. “So what we’ve done,” she said with some pride, “is equate the reach of posts to the reach of a banner ad and value them according to what it would have cost to purchase the banner ad.” For the record: there is no evidence that banner ads have even a remotely similar impact on a customer’s path to purchase as PR. As John Oliver and Bob Garfield will tell you, the chance of anyone intentionally clicking on a banner ad is lower than the chance of surviving a plane crash. There should be a special place in measurement hell for people who make up bad metrics.
2. Torturing numbers until they say what you want
We once delivered a launch report that initially included data from around 250 social and traditional media items. It showed that the desired messages appeared in about 7% of the coverage – not a bad number at all. But it wasn’t what the client expected or wanted to see.
So first they eliminated all social media, arguing that their outreach efforts had been focused entirely on traditional media. That reduced the data set by half, but 150 items is still an acceptable amount of data to analyze under the circumstances. As it happened, the data didn’t change all that much: the percentage of items containing one or more key messages went up slightly, to 10%.
But apparently that still wasn’t an acceptable number, so we were asked to include ONLY the top-tier media at which the outreach was directed. That narrowed the data set to 10 items, but it produced the desired percentage, and the agency was able to declare that one third of all coverage contained a key message. The only problem: of all the effort and energy put into the campaign, only three items contained a key message. When you put it that way, it probably won’t get much praise from the corner office.
If you are given a bunch of data collected from agencies who may or may not have been clipping consistently with your methodology, you are treading in dangerous waters. If you are told to show only the “good news,” you have crossed to the dark side.
3. Using multipliers
The reality is that when someone tells you that you have “reached 200 billion eyeballs,” chances are pretty good that you haven’t. In organizations that still believe impressions count, no matter how they’re counted, inflating impressions is a common problem. For some reason people want to “add weight” for specific media outlets or certain types of stories. The right way to do this is to develop a custom index that you can track over time (see stories about OCS). The wrong way is to take impression counts (which are arguably flawed to begin with) and multiply a top-tier publication’s numbers by 2 or 3 or whatever number someone dreams up. The IPR has published a great paper on why you should never use multipliers (link).
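To make the difference concrete, here’s a minimal sketch of the two approaches; the tiers, weights, and numbers below are entirely hypothetical:

```python
# Hypothetical sketch of a custom media index (tiers, weights, and items are made up).
# The weights feed a score you can trend over time; the impression counts stay untouched.

OUTLET_WEIGHTS = {"top_tier": 3, "trade": 2, "other": 1}  # assumption: your own tiering

items = [
    {"tier": "top_tier", "impressions": 1_200_000},
    {"tier": "trade", "impressions": 80_000},
    {"tier": "other", "impressions": 5_000},
]

# Right way: a weighted index score you can track month over month.
index_score = sum(OUTLET_WEIGHTS[i["tier"]] for i in items)

# Wrong way: inflating "impressions" by multiplying the raw counts.
inflated = sum(i["impressions"] * OUTLET_WEIGHTS[i["tier"]] for i in items)

print(f"Custom index score: {index_score}")       # 6 -- trendable and honest
print(f"'Weighted impressions': {inflated:,}")    # 3,765,000 -- a made-up number
```

The point of the index is consistency over time: nobody mistakes a score of 6 for an audience count, whereas 3,765,000 “impressions” looks like a real number and isn’t.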
Your top-tier list should reflect the degree to which each outlet reaches your target audience; that’s why it’s called top tier. If you are already reaching a high percentage of your target audience, why do you need multipliers?
Most outlets and reporting companies like Compete and Alexa use average unique visitors and calculate data on a monthly basis. Unless you pay for a professional account that differentiates between URLs, they will tell you that a story on an obscure food blog at www.nytimes.com has the same number of unique visitors as the front page.
Then there’s broadcast. Most services use Nielsen or ComScore to report viewers, but God help you if you have data from both, because their numbers can vary dramatically.
4. Using BS Benchmarks
Comparing results to the competition is a very powerful and persuasive way to benchmark your results. However, for the comparison to be valid, you need to make sure you are comparing the same media outlets, over a similar time frame, in the same geography. If the competition launched its program the same day that Princess Diana died and was lucky to get any coverage at all (true story), it’s hardly a fair comparison. If you do not have the full data set for the competition because their launch or activity exceeds the time frame of your data, it’s not a fair comparison. If you are launching into a mature market but the competition was first to market and had to educate the market as well as promote its products, it’s not a fair comparison.
5. Failing to get agreement on what “good” is
Too often, PR people define success as a big pile of clips or a lot of neutral coverage, when in fact senior leadership thinks success is more leads or more messages communicated. So when the PR person delivers results, his or her success is seen as worthless.
The same problem plagues RFP processes these days. Companies decide they want a measurement vendor and start calling in the salespeople from various measurement firms. The problem is that if you hire a measurement vendor before you agree on what success looks like, it’s very possible you’ll end up with a vendor that doesn’t measure what you need measured. Start with a solid list of metrics that have to be delivered, then write up a list of criteria. A good place to start would be with the Vendor Questionnaire/Transparency Table.
6. Not having a test in place to judge the completeness of the data
Reporting results based on lousy data is like building a high-rise on a foundation of bad concrete: it will look good just long enough to get everyone to buy into it, and then it will collapse. Put it another way: there is nothing more important to measurement than the accuracy of your data. So how do you check to make sure that your data is accurate? Test, test, and more tests.
Take at least a month’s worth of data and test it for completeness: double-check that your key media outlets are represented in the data. Next, check for duplicates and spam. Chances are about 50% of raw data will fall into one of those two categories. A good vendor will have systems in place to screen for duplicates, spam, and inappropriate content (wedding announcements, incorrect names or references, police blotters). If they don’t have the ability to create good filters, you’ll be paying for a lot of stuff that you don’t want and will just have to sort through yourself.
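If you want to sanity-check an export yourself, here’s a rough sketch of that kind of screen; the field names and spam markers are placeholders, not any vendor’s real schema:

```python
# Minimal sketch of a duplicate/spam screen on exported monitoring data.
# Assumes each item is a dict with "url" and "headline" fields (hypothetical schema).

SPAM_MARKERS = ("wedding announcement", "police blotter")  # example noise categories

def screen(items):
    seen_urls = set()
    clean = []
    for item in items:
        url = item["url"].strip().lower()
        if url in seen_urls:                       # duplicate: URL already collected
            continue
        if any(m in item["headline"].lower() for m in SPAM_MARKERS):
            continue                               # spam or inappropriate content
        seen_urls.add(url)
        clean.append(item)
    return clean

items = [
    {"url": "https://example.com/story1", "headline": "Brand launches new product"},
    {"url": "https://example.com/story1", "headline": "Brand launches new product"},
    {"url": "https://example.com/blotter", "headline": "Police blotter: weekly report"},
]
clean = screen(items)
print(f"{len(items) - len(clean)} of {len(items)} items were duplicates or spam")
```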
Finally, check the accuracy of the way the incoming data is tagged or coded. Chances are you need data in specific buckets (e.g., corporate, product, customer service), and most systems rely on some sort of algorithm to sort items into those buckets automatically. Check a random selection of items, at least 50, to see if they are correctly tagged and coded.
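Drawing that sample doesn’t require anything fancy; a sketch, assuming you can load the coded items into a list (the field names are hypothetical):

```python
# Draw a reproducible random sample of coded items for manual review.
import random

def review_sample(items, n=50, seed=42):
    random.seed(seed)                  # fixed seed so the same sample can be re-drawn
    return random.sample(items, min(n, len(items)))

# Hypothetical usage -- "headline" and "tag" are placeholder field names:
# for item in review_sample(coded_items):
#     print(item["headline"], "->", item["tag"])
```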
If your vendor is providing sentiment or tonality, you will need to conduct a separate test to validate their coding. Select a sample of 50 items and have them read by an intern, or by someone who didn’t have anything to do with “placing” the stories. Compare the results to the system’s. If you don’t find agreement on at least 80% of the items, the vendor may have to adjust its algorithm. Make sure they can handle that level of customization, or you’ll be in trouble down the line.
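The check itself is just a percent-agreement calculation; a toy sketch with made-up labels:

```python
# Percent agreement between human and machine sentiment coding (labels are made up).
human   = ["pos", "neg", "neu", "pos", "neu"]   # the intern's coding of the sample
machine = ["pos", "neu", "neu", "pos", "neg"]   # the system's coding of the same items

matches = sum(h == m for h, m in zip(human, machine))
agreement = matches / len(human)

print(f"Agreement: {agreement:.0%}")            # 60% in this toy example
if agreement < 0.80:                            # the 80% threshold discussed above
    print("Below 80% -- ask the vendor to tune its algorithm.")
```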
7. Not having a system in place to deal with the data when it comes in
In today’s torrent of media, you will be surprised at just how many “alerts” you get from your monitoring system. So many, in fact, that the system will soon be like the boy who cried wolf, and you will find yourself skipping over the emails. But the reason you have a monitoring system is to ensure that you aren’t the next Domino’s Pizza or Kenneth Cole, so you need a process to stay on top of alerts and route them to the people who need to handle them. Do NOT foist that task onto your summer intern; you need to identify your own internal Olivia Pope, someone with the judgment and background to know how and when to respond.
8. Not having clear definitions and search terms
Today’s monitoring services are a lot like the early days of Match.com, before eHarmony came along. You entered a few basic parameters, and voilà, there were matches. Whether or not you wanted to date them was a whole other matter. Who knew whether the real person behind that photograph was your weird cousin Ralph who lives in his parents’ basement, or a 55-year-old pretending to be a 30-something? The only way to find out was to go beyond the search and actually start a conversation. Then eHarmony came along, and its “search” could factor in a whole slew of other desired characteristics. Sure, it took hours to complete the questionnaire, but in the end it was worth it.
Today’s media search is exactly the same: you get what you search for. If you are just tracking a brand name like “Visa,” that big spike in coverage in June may be the result of your PR efforts on behalf of the credit card company, or it could be that there were new visa requirements to enter China. Even more problematic are search strings that aren’t periodically updated with new products and new brand names. Chances are, you won’t realize you’re not capturing the coverage until a product manager announces at an important meeting that coverage is missing and therefore all the results are invalid.
Whether you are incorporating human coding, relying solely on machine coding, or taking a hybrid approach that combines the two, you still need good, clear definitions. Make sure that you provide your vendor with this information, and, more importantly, make sure they understand it. Any abbreviations, acronyms, or internal code words need to be clearly defined, and those definitions should be provided to each vendor.
Good definitions are also manifested in the search terms. Search strings tell the systems what to monitor for and, even more importantly, what to exclude. So if you are the PR manager for the city of Philadelphia, you will want to make sure your monitoring excludes the Philadelphia Eagles and traffic reports for Philadelphia Avenue in King of Prussia.
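Every tool has its own query syntax, but the underlying logic is just includes and excludes; an illustrative sketch of the Philadelphia example (the terms are placeholders):

```python
# Illustrative include/exclude filter for the Philadelphia example.
# Real monitoring tools use their own query syntax; this only shows the logic.

INCLUDE = ["philadelphia"]                                # broad entity term
EXCLUDE = ["philadelphia eagles", "philadelphia avenue"]  # known false positives

def matches(text):
    t = text.lower()
    return any(term in t for term in INCLUDE) and not any(term in t for term in EXCLUDE)

print(matches("City of Philadelphia announces new budget"))        # True
print(matches("Philadelphia Eagles win on Sunday"))                # False
print(matches("Crash on Philadelphia Avenue in King of Prussia"))  # False
```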
9. Having unrealistic expectations – budget-wise
If you’ve been using Google Alerts and doing your own collection, you know how time-consuming the process is. So don’t expect someone else to do the same thing better, faster, or more efficiently and think it will cost peanuts. The reality in the monitoring world is that you get what you pay for. You can pay as little as $5,000 a year, but you’ll have to do most of the work yourself. A monitoring and measurement service that comes with account management can range from $20,000 to $500,000, depending on the number of items collected and analyzed and the degree of customization. Do NOT be the client I once had who asked me to prepare a proposal to monitor, measure, and evaluate its media coverage in 10 different countries, against 5 different competitors, and then admitted three months later that the budget was only $25,000.
10. Trying to compare apples to fish
Given that there are 450 different vendors providing some form of monitoring, the easiest mistake to make is to believe that they all do the same thing. They do not. Some are designed around vertical markets, some are designed for large enterprises, and most do a few things well; the rest of the promises you can chalk up to good website copy. Some are great at monitoring; others are great at managing your conversations and only so-so at monitoring. Still others may be great at local coverage but have no experience on a national or international basis. Make sure that the vendors you are talking to have experience in your industry and are really good at what you need them to do.