One small step you can take immediately to reimagine your communications measurement system is to take a hard look at your data and make sure it’s accurate. Accurate data is key to your credibility as a communicator. If you’re going to persuade anyone to change entrenched habits, you have to have confidence in the data you use in your arguments. So does the person you are persuading.
Back when I actually ran measurement companies, we developed a few standard checks we would always run to help ensure accurate data:
#1. The pub month check
We learned the hard way that the feeds we relied on for our clients' content could not, in fact, be relied upon. We once sent out a quarterly report that clearly showed a client's business press coverage had declined precipitously from the prior quarter. We simply assumed the company hadn't done anything worthy of business coverage. The client pointed out that it was a highly visible public company that issued numerous press releases each quarter and reported earnings every quarter, which were routinely covered in the business media. There was no logical reason that key business media outlets would suddenly stop covering them, and of course the client was right. What had happened was that the service through which we were getting The Wall Street Journal had simply stopped sending us data.
From then on, the first thing we did to ensure accurate data was to run a quick check on the key publications that most frequently covered a client, comparing the volume of coverage that month to the average over the prior six months. Any steep declines or upticks were investigated before we conducted further analyses.
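The pub month check above amounts to a simple comparison against a trailing average. Here's a minimal sketch of that logic in Python; the outlet names, counts, and 50% threshold are illustrative assumptions, not real client data or a real monitoring system's API:

```python
# Sketch of the "pub month" check: flag outlets whose current-month
# volume deviates sharply from their trailing six-month average.
# All names and figures below are hypothetical.

def pub_month_check(current, history, threshold=0.5):
    """Flag outlets whose current volume deviates from the
    prior-months average by more than `threshold` (50% by default)."""
    flags = {}
    for outlet, counts in history.items():
        avg = sum(counts) / len(counts)
        now = current.get(outlet, 0)
        if avg and abs(now - avg) / avg > threshold:
            flags[outlet] = (now, round(avg, 1))
    return flags

# Six months of article counts per key outlet (hypothetical).
history = {"Wall Street Journal": [12, 10, 14, 11, 13, 12],
           "Trade Weekly": [5, 6, 4, 5, 6, 5]}
# This month's counts: the WSJ feed has silently stopped delivering.
current = {"Wall Street Journal": 0, "Trade Weekly": 5}

print(pub_month_check(current, history))
```

A feed that quietly stops, as in the Wall Street Journal story above, shows up immediately as a 100% drop against the trailing average.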
#2. Check your media types
Another good review that will reveal missing data is to check for abnormal coverage volumes across your media types, e.g., social, trade press, business press, blogs, broadcast, etc. Most services categorize coverage according to the type of media. After the first six months of monitoring, you should have a good sense of how much of each type you "normally" get. If the percentage of any coverage type has changed substantially, check your data. If we'd done that for the client in #1, we would have avoided a lot of egg on our faces.
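The same pattern works for the media-type mix: compare each type's share of coverage this month to your established baseline. A minimal sketch, with a hypothetical 10-point threshold and made-up percentages:

```python
def media_mix_shift(baseline_pct, current_pct, threshold=10):
    """Flag media types whose share of total coverage moved more
    than `threshold` percentage points from the baseline mix."""
    return {t: (baseline_pct[t], current_pct.get(t, 0))
            for t in baseline_pct
            if abs(current_pct.get(t, 0) - baseline_pct[t]) > threshold}

# "Normal" mix from six months of monitoring vs. this month (hypothetical).
baseline = {"business press": 30, "trade press": 25, "social": 35, "blogs": 10}
current  = {"business press": 5,  "trade press": 30, "social": 50, "blogs": 15}

print(media_mix_shift(baseline, current))
```

A business-press share collapsing from 30% to 5% is exactly the kind of shift that would have caught the missing Wall Street Journal feed.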
#3. That sentimental feeling
After 30+ years of analyzing media data, I can assure you that your percentage of positive, negative, and neutral articles won’t vary much on a monthly basis — unless there’s a crisis, a major announcement, or your sentiment algorithm is screwed up. So, your next step is to look at your monthly average sentiment scores. If they’ve gone up or down by more than five percentage points, dig into your data.
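That five-percentage-point rule can be checked mechanically. A sketch, with hypothetical month-over-month sentiment percentages:

```python
def sentiment_shift(prev, curr, threshold=5):
    """Return sentiment categories that moved more than `threshold`
    percentage points month over month, with the size of the move."""
    return {k: curr[k] - prev[k] for k in prev
            if abs(curr[k] - prev[k]) > threshold}

# Last month vs. this month, as percentages of all articles (hypothetical).
prev = {"positive": 40, "neutral": 50, "negative": 10}
curr = {"positive": 28, "neutral": 52, "negative": 20}

print(sentiment_shift(prev, curr))
```

Anything the function returns is worth a manual look: a crisis, a major announcement, or a misfiring sentiment algorithm.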
Read more about accurate data here:
“The 6 Most Common Data Problems—and Practical Tips to Fix Them.”
#4. Automated sentiment? Look at the negatives first
For whatever reason, automated sentiment systems tend to falsely code articles as negative more often than as positive or neutral. This is especially true in industries that deal with sensitive topics. We had a client once who had an incredibly successful launch, but the automated monitoring service rated all the articles negative. After some review of the data, we realized that the launch had promised a concerted effort to reverse aging, which the media morphed into headlines announcing that our client was going to "cure death." The automated system assumed that any mention of death was negative. Usually a good assumption, but not this time.
#5. Scrutinize your subject categories
Most media monitoring systems will automatically assign a subject category for an article, or allow you to put articles in different subject category “buckets.” So, another quick check is to look at the subject categories and make sure your hot new product or latest initiative is showing up where it should. Is there any major fluctuation between the volume this month and last month?
#6. Check on your peer group
If you are tracking four competitors, in an ideal world each company would get 20 percent of the coverage. If any company's share varies from that by more than 10 percentage points, you will want to dig into the data and figure out why. It may be that the system accidentally dropped one of your competitors off the tracking list sometime during the reporting period.
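A quick share-of-voice sanity check can be sketched the same way. The company names, counts, and 10-point threshold below are hypothetical:

```python
def share_of_voice_check(counts, threshold=10):
    """Compare each company's share of total coverage to an equal
    split and flag deviations larger than `threshold` percentage
    points. Returns the flagged companies with their actual share."""
    total = sum(counts.values())
    expected = 100 / len(counts)   # e.g., 25% for a four-company set
    flags = {}
    for company, n in counts.items():
        share = 100 * n / total
        if abs(share - expected) > threshold:
            flags[company] = round(share, 1)
    return flags

# Article counts this period (hypothetical). Rival C's zero suggests
# it may have been dropped from the tracking list.
counts = {"Us": 120, "Rival A": 110, "Rival B": 130, "Rival C": 0}

print(share_of_voice_check(counts))
```

A competitor sitting at zero is the classic symptom of the dropped-from-the-list problem described above.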
#7. Do a standard word search for sentences from your boilerplate
If your boilerplate text shows up in your “coverage,” then that’s a sure sign your system has picked up a press release — which doesn’t belong in your earned media coverage. Do the same check for indicators of job postings. Words like “apply” or “position” or “hiring” could be indicators that your system is collecting from the help wanted section.
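Both checks are simple text searches. A minimal sketch; the boilerplate phrase and sample sentences are invented for illustration, and you would substitute sentences from your own boilerplate:

```python
import re

# Hypothetical boilerplate fragment and job-posting keywords.
BOILERPLATE = ["is a leading provider of"]
JOB_WORDS = {"apply", "position", "hiring"}

def flag_article(text):
    """Return reasons this article may not be earned media coverage."""
    t = text.lower()
    reasons = []
    # Boilerplate in the body usually means a reprinted press release.
    if any(phrase in t for phrase in BOILERPLATE):
        reasons.append("boilerplate")
    # Job-posting vocabulary suggests a help-wanted listing.
    words = set(re.findall(r"[a-z]+", t))
    if words & JOB_WORDS:
        reasons.append("job posting")
    return reasons

print(flag_article("Acme is a leading provider of widgets."))
print(flag_article("We are hiring; apply for the position."))
```

Anything flagged gets pulled for manual review before it counts toward your coverage totals.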
#8. Finally, take a big picture look at your results
If there’s anything that makes you say, “That’s weird!” then check the data. “Weird” generally means the data is flawed. ∞