This article is part of The Measurement Advisor’s series on the 2015 International Public Relations Research Conference last month in Miami. Here’s more:
- Can We Talk? 5 Lessons for CEOs from the Newest Public Relations Research
- Internal Communications and Managing Millennials: Research from IPRRC on What Works
- IPRRC 2015: Is CSR Still a Thing?
- Scandal? PRSA Silver Anvil Award Winners Ignore Barcelona and Measurement Standards
- IPRRC 2015: A Treasure Trove of Ideas, Research, and Information
- IPRRC 2015: What Do You Think About These Proposed Standards for Content Marketing, Influence, and Transparency?
The prize for most depressing research findings at the recent International Public Relations Research Conference 2015 goes to Maureen Schriner of the University of Wisconsin-Eau Claire, Rebecca Swenson of the University of Minnesota, and Nathan Gilkerson of Marquette University. For their paper, "Outputs or Outcomes? An Assessment of Evaluation Measurement from Award Winning Public Relations Campaigns," they conducted a quantitative and qualitative analysis of PRSA Silver Anvil winners from 2010 to 2014. Results showed that the Barcelona Principles and industry standards have been virtually ignored by most award winners, despite PRSA's efforts to make entrants tie results to objectives. Only a handful presented evidence of actual outcomes. Most focused on outputs and generalized the results. Since most professions understand that you award your best work, and that what you award signals what you aspire to become, this was depressing news indeed.
Award programs like the Silver Anvil, the Gold Quill, and the SABRE Awards charge upwards of $350 per entry. It's a lucrative business to be in. We can understand why they are willing to accept substandard entries: it makes them money. They get a bunch of agencies to fork over thousands of dollars and trust that the judges, frequently people like me and other members of the IPR Measurement Commission, will sort them out. What's missing is a feedback mechanism that tells entrants they lost because they are clueless about the correct ways to measure their results. Sadly, the result is that organizations and their agencies continue to believe that impressions and activity are acceptable measures of success. ∞
Thanks to the Rand Eastwood blog for the image.
If the summary didn't contain the outcome, the impact, the answer to "so what," that would seem to indicate something very important was missing from the research project.
I hope the researchers will study all the major awards programs, something that is very feasible considering much of the data is on their websites. PRWeek/U.S. and Paul Holmes each have award banquets in posh New York hotels drawing close to 1,000 people. PR News has an extensive awards program.
PR can have a negative side, such as spreading bad news about competitors, blocking what may be good legislation, or spreading negatives about candidates (it goes on all the time in politics). PR can bring attention to a product or cause, but that product must be priced right and stand up against the competition.
Katie, as a Silver Anvil judge, I can tell you that the only way these researchers could review the entire award entries is if they were themselves judges and broke their promise of confidentiality, which throws their ethical judgment into question. I can tell you that the first thing I do as a judge is look for measurable objectives that represent outcomes, and then whether entrants actually measured what they said they were going to. For one thing, Silver Anvils are supposed to be the best of the best, and that's a fast way to winnow out the real contenders. I am very disappointed in the "clickbait" headline above this story, and the sensationalism that implies more than is actually there.
Actually, Debra, any PRSA member can view the entire entry. I've been using that feature for years. In fact, it's probably one of the major reasons I'm a member.
Katie, if you believe you are seeing the entirety of Silver Anvil entries on the PRSA website, I'm afraid you are mistaken. You are seeing the two-page summary that accompanies each full entry. Entries may run upwards of a couple hundred pages and contain information arranged under four sections: Research, Planning, Execution, and Evaluation. Entrants often include confidential details that bolster their entry. Hence, only the two-page summary is shared with all PRSA members as a resource. I, too, found the "scandal!" headline unnecessarily inflammatory. As a longtime PRSA Silver Anvil judge, I can speak to our desire to recognize and award work that achieves measurable objectives. I'm happy to share more information about the process with you. Or you could just give PRSA an opportunity to comment before reacting.
Thanks for the conversation starter, Katie! As the authors of this research-in-progress, we appreciate the many perspectives from you and others who provided us ideas at the 2015 IPRRC conference. However, we would like to clarify some points.
First, this is research in progress, as we stressed at the conference. We are far from any sort of conclusive "findings" based on what we have completed to date. Despite the headline here, we don't feel there is any "scandal" involved (especially since we only have very preliminary results).
We were drawn to this research topic by our interest in how public relations awards can strengthen the field and reinforce best measurement practices within the profession. In fact, part of our research on the PRSA Silver Anvil Awards is to identify exemplars that showcase outcome evaluations.
We will consider Stephanie's suggestion to examine entire entries, rather than the posted PRSA Silver Anvil Award summaries. We have other useful ideas from the conference to consider as well, from examining additional award programs, to international PR campaigns, to interviewing those involved with award judging.
Nathan Gilkerson, PhD, Marquette University
Maureen Schriner, PhD, UW-Eau Claire
Rebecca Swenson, PhD, University of Minnesota
I am greatly disappointed to see this piece and the lack of data to support the premise and conclusion.
Once I printed out the (upside down) PowerPoint, I quickly realized the flaw in your logic: the information studied was but a sliver of the actual entry submitted. The online archive of information is a two-page summary required of each entrant, but it is not the entry. In fact, most entries are hundreds of pages of information and data, with some entries numbering upwards of a thousand pages. How can you make a definitive claim about judging against a set of standards when you haven't seen the entry that was judged?
Additionally, PRSA provides feedback to all who were not chosen as a finalist. This process began in 2002 and has occurred every year since. Saying that PRSA does not provide that information is irresponsible, since it is an important facet of our awards program.
Stephanie, I apologize for the upside-down paper; that has been fixed. In terms of the research methodology, I have asked the authors to confirm that they are in fact analyzing the full case studies, not just the excerpts.
Thank you on both counts.
I'm fortunate enough to judge several awards from many different publications and industry bodies, and it is certainly true that we receive widely differing guidance on what "good" looks like. That said, I have noticed a general move toward more responsible measurement being encouraged. I've just finished judging one category this week and was given very firm instruction by the awards organisers to disqualify any entrant that even mentions AVE in the measurement section. I agree completely that we have a long way to go, but at least it seems we're now willing to start the journey.