The Definitive Instruction Manual for Evaluating the Effectiveness of Risk Communications and Public Awareness Programs

This article provides a six-step process for evaluating risk communications effectiveness, along with a discussion of the human motivations and attitudes behind successful risk communication.

In a world where most individuals are bombarded with thousands of messages, images, and pieces of information daily, the biggest challenge for professional communicators is to break through the clutter and get their messages lodged in the brains of their audiences. For communicators tasked with preventing risk, the challenge is further complicated by the fact that you have to make your messages compelling enough to get attention, yet not harm your brand by causing panic or fear.

Understand the human side of risk

To succeed at risk communications, and to be able to measure your success, requires understanding human behavior and how humans make choices. First of all, research shows that people don't seek information unless they think they need it and that it will benefit them in some way. Therefore, people need to believe that there is a risk, or they won't act to mitigate it. That means if your legal department is saying "Don't scare anyone by telling them about potential risks," then it is ensuring that your communications efforts can't succeed.

Secondly, in order to get people to act in a way that protects them from harm, they need to believe that:

  1. They can accomplish the behavior you recommend,
  2. They trust the source of the information, and
  3. The recommended behavior will actually reduce their risk.

So simply telling them that they "should" do something isn't enough, because it does nothing to change their behavior or beliefs.

People typically seek information to reduce uncertainty, or because of their social environment. They want to know what everyone else knows, especially if the topic is a community-based risk, like a chemical plant or a pipeline.

My recent survey of 20 or so academic papers leads me to the general conclusion that risk communication fails when:

  • It ignores the role of the receiver,
  • There is no understanding of what motivates the audience to attend to and seek personally relevant risk information,
  • There is too great a gap between what they already know and the information to be gathered,
  • They prefer to retain the belief that the world is a safe place to live, and/or
  • It is one-way communication, rather than a dialog.

Take flu vaccines as an example. "It's flu season, time for your shot," can't be your only message. People also need to believe that the current flu vaccine is available, affordable, and safe. Once you've convinced them of all that, they also need to believe that it will work. It's not as easy as it first appears. Hence the 40% or so of Americans who continue to avoid getting flu shots.

How to know if your risk communications work

Communicating your public awareness and risk messages is the easy part. Determining if they actually work is a much bigger challenge. It requires making sure you truly understand what is going on in the minds of your stakeholders and then tracking their behaviors.

Further complicating the process is that there are so many different strategies that can go into encouraging safer behavior. A typical plan might include:

  1. Disclosure of information about the risk,
  2. Improving understanding of the risk,
  3. Modifying attitudes towards the risk,
  4. Improving acceptance of the risk,
  5. Encouraging appropriate mitigation behavior, and/or
  6. Increasing trust in the process.

Each of those strategies has its appropriate metric:

| Objective | Metric | Necessary tool or process |
|-----------|--------|---------------------------|
| Disclosure of information about the risk | % of affected population who receive information about the risk | Media tracking, web analytics, social analytics, survey research |
| Improving understanding of the risk | % increase in affected population who understand the risk | Pre/post attitude survey |
| Modifying attitudes towards the risk | % change in attitude toward the risk | Pre/post attitude survey |
| Improving acceptance of the risk | % increase in affected publics accepting the risk | Pre/post attitude survey |
| Encouraging the appropriate behavior | % increase in belief that the behavior can be accomplished; % increase in belief that the behavior is effective; % increase in behaviors | Pre/post attitude survey; behavior tracking |
| Increasing trust in the process | % increase in trust index score | Pre/post attitude survey |

These are only some of the metrics we recommend in Step 4, below.
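All of the percentage-based metrics in the table reduce to the same simple calculation on pre/post survey results. As a minimal sketch (the survey figures below are hypothetical, not from any real campaign):

```python
# Sketch: computing pre/post percentage-change metrics from survey proportions.
# All figures are hypothetical illustrative values.

def pct_change(pre: float, post: float) -> float:
    """Percentage change from a pre-campaign baseline to a post-campaign result."""
    return (post - pre) / pre * 100

# Hypothetical proportions of respondents agreeing with each survey statement
understand_risk = pct_change(pre=0.40, post=0.55)
accept_risk = pct_change(pre=0.30, post=0.36)

print(f"Understanding of the risk: {understand_risk:+.1f}%")
print(f"Acceptance of the risk:    {accept_risk:+.1f}%")
```

Note that this expresses change relative to the baseline, so a low starting point can make a modest absolute gain look dramatic; report the raw pre and post proportions alongside the percentage change.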

Six basic steps to risk communications measurement

To create a best practices measurement system for risk communications, we apply the six basic steps of measurement:

Step 1. Be clear on what impact you’re trying to achieve.

Most risk communication measurement stops far short of measuring the actual behavior you want people to take. In public health, it’s relatively easy to track flu shots administered, but for many other risks there may not be a consistent tracking system in place or even possible. (Read about British Columbia’s very successful flu shot campaign in our article “3 Exceptionally Effective Risk Communication Case Studies.”)

In the chemical industry, the typical goal is to get people to shelter in place when and if there's an accident. But you can't measure that if there's no accident. So, your alternate goal is to make sure that everyone in the affected area understands exactly what "sheltering in place" means. This may require open-ended survey questions, measuring performance in practice drills, or other types of tests. (Read about the Wally Wise Guy chemical industry campaign in our article "3 Exceptionally Effective Risk Communication Case Studies.") As with all types of evaluation, make sure that senior leadership and whatever regulatory authorities are involved all agree on what "impact" really means in your particular case.

Step 2. Segment your stakeholders and understand what motivates them.

In earlier, simpler times typical stakeholders for a risk communicator might include a few basic categories such as:

  • the population that might be affected,
  • government officials, and
  • emergency personnel.

But research and social science have long since shown that, to be effective, you need to take a more nuanced approach to defining your stakeholders. For example, the U.S. Nuclear Regulatory Commission guidelines on risk communication suggest prioritizing audiences based on vulnerability. They list nearly 50 specific stakeholder categories to reflect changing demographics. These include:

  • education leaders and education community (including students)
  • elderly populations
  • faith leaders
  • families of those involved in the response effort
  • homebound populations
  • homeless people
  • hospital personnel
  • illiterate populations
  • institutionalized populations
  • military leaders
  • neighborhood associations
  • non-English speaking groups
  • prisons
  • tourists or business travelers and their relatives
  • union officials and labor advocates
  • veterinarians
  • volunteers ready and willing to assist in the emergency response

Of course, very few organizations will actually have the resources to measure their impact on each of these specific audiences. So as you plan how to assess the effectiveness of your risk communications program, you need to prioritize the multitude of stakeholders you could impact.

Next you need to understand what motivates your audiences to believe and act in the ways you want them to in the event of an emergency. For example, in the event of a disease outbreak, people are more likely to adhere to public health recommendations if:

  • They believe the recommended behaviors are effective,
  • They perceive that they have a high likelihood of being infected,
  • They recognize that the illness has severe results,
  • They believe it is difficult to treat, and
  • They believe the government is providing understandable and sufficient information about the outbreak and can be trusted to control the spread of infection.

So, you need to understand their level of anxiety as well as their perceptions about the effectiveness of what you are recommending. You also need to understand what sources they trust. For example, during the H1N1 flu crisis, the news media reached most people, but the public was skeptical about what it reported. In that case the most trusted source was health care providers.

Remember that communication can only be effective if it considers the nature, norms, and existing beliefs of the recipient. In other words, if you don’t understand what your stakeholder deems to be relevant, you probably won’t get through to them.

Some of the data points you might need include:

  • Exactly what they currently know and believe about the potential risk,
  • What information sources they trust and access regularly,
  • What motivates them to pay attention,
  • What motivates them to seek out information, and
  • Their level of trust (or lack thereof) in your organization or program.

Step 3. Define a benchmark.

As always, raw numbers without context quickly become trivia. You need something to compare your results to. If you want to show change in behavior or attitude, you need to establish what that behavior or attitude is before you start your communication program. That means either establishing a baseline before your communications efforts begin, or designating an unexposed population that will also be measured.
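Either benchmark works the same way arithmetically: you report the exposed population's result against the comparison figure. A minimal sketch, with hypothetical numbers:

```python
# Sketch: comparing a measured result against a benchmark, which can be either
# a pre-campaign baseline or an unexposed control group. Figures are hypothetical.

def lift_vs_benchmark(benchmark: float, measured: float) -> float:
    """Difference from the benchmark, in percentage points (rounded)."""
    return round((measured - benchmark) * 100, 1)

# Option A: same population, measured before the campaign began
print(lift_vs_benchmark(benchmark=0.42, measured=0.51))  # 9.0 points

# Option B: unexposed control population, measured at the same time
print(lift_vs_benchmark(benchmark=0.44, measured=0.51))  # 7.0 points
```

The control-group approach is useful when you can't survey before launch, but you then need to be confident the control population really was unexposed to your messages.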

Step 4. Define key metrics

Based on your findings in steps 1-3, you now need to develop the specific metrics with which you will regularly measure success. These metrics need to test:

  • The consistency of message communications—which would be reflected in what messages the audience remembers.
  • The behavioral intent the messages evoke—which would be reflected in their plans to get a flu shot, for example, or how they plan to act in the event of an emergency.
  • Actual behavior change— like getting the flu shot or calling 811 before digging near a pipeline.

If you want to measure message reception, you might use a metric like % of affected population who remember receiving information about the risk. But note that this only tests whether anyone remembered receiving your materials, not whether they remembered the message or the degree to which they understand the risk. For that you would need to test whether they could recall the messages (unaided and aided).

Referring back to what makes people actually act, you will also need metrics like:

  • % change in attitude toward the risk,
  • % increase in affected publics accepting the risk,
  • % increase in belief that the behavior can be accomplished, and
  • % increase in belief that the behavior is effective.

See the table above for other recommendations.

Another mandatory metric is a trust score. Your stakeholders may get your information, read it, save it, and even remember it, but if they don’t trust the source, they’ll never take the actions you need them to take. So, whenever you survey your audience, make sure you include 2-4 questions that will determine their level of trust. (Read our article “How To Measure Trust in a Skeptical World.”) So, a key metric will be % increase in trust scores.
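One common way to turn those 2-4 trust questions into a single trackable score is to average the Likert-scale responses and rescale them to a 0-100 index. A minimal sketch (the items and scores here are hypothetical):

```python
# Sketch: a simple trust index built from a few Likert-scale survey items
# (1 = strongly distrust ... 5 = strongly trust). Scores are hypothetical.

def trust_index(scores: list[int], scale_max: int = 5) -> float:
    """Mean item score rescaled to a 0-100 index."""
    return sum(scores) / len(scores) * (100 / scale_max)

pre_wave = trust_index([3, 2, 3, 3])    # four trust questions, pre-campaign
post_wave = trust_index([4, 3, 4, 3])   # same four questions, post-campaign

increase = (post_wave - pre_wave) / pre_wave * 100
print(f"Trust index rose from {pre_wave:.0f} to {post_wave:.0f} ({increase:+.1f}%)")
```

Whatever index you choose, keep the question wording and scale identical across survey waves, or the pre/post comparison is meaningless.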

Define a regular schedule for your measurement, so you can quickly determine what is working or not working and make appropriate changes. The frequency depends on how frequently you establish program budgets and make strategic decisions. Data is like bread—it’s best when fresh—so when you make decisions make sure you are using the most recent data available.

Step 5. Select and define your tools.

The tools you need to evaluate your results are determined by what you need to measure; see the table above. With so much data (read about data: "Your Guide to Keeping Your Sanity Amid Too Much Data") and so many tools available, it's important to make sure the tools at hand are appropriate, timely, and, most of all, accurate. This may require testing and retesting of your surveys, as well as scrutiny of media (traditional and social) to ensure you are getting accurate, consistent, and appropriate content.

Step 6. Analyze, report, repeat.

Once your data has been collected, start to use it to make better decisions. Look at the pre/post results for every activity to see which increased your scores. Factor in costs to see which tactic was most effective for the price. Dig into your media data to find the specific authors that were most likely to pick up on and disseminate your key messages.

When reporting, talk about what isn’t effective first, and figure out how you can fix it or improve it. Then point to your successes. ∞
