You know your nonprofit communications are valuable, but how do you measure or demonstrate it? ROI is not the best way. If you took business 101, or are a regular reader of The Measurement Advisor newsletter, you’ll know that return on investment (ROI) is an accounting term that should only be used if you know and are using all the costs involved as well as all the revenue. In fact, true ROI is almost impossible to calculate for public relations and social media.
But don’t despair; there is more than one way to demonstrate the value that your PR and communications efforts generate. What you most likely want to calculate is a cost-effectiveness analysis (CEA). Cost-effectiveness analysis compares the relative costs and outcomes (effects) of two or more courses of action. (For more information on how to compare costs, read Fraser Likely, “Principles for the Use of ROI, BCR & CEA metrics in PR/Communication.”)
Here are a few alternatives to ROI and how to calculate them that will truly reflect the value that your efforts generate:
1. Cost-effectiveness metrics
Count the number of new donors each month. Divide that number into the total amount (including salaries) that you spent trying to communicate with those donors. Compare the cost per donor acquired to last year or to a different tactic to determine which is most cost-effective at generating new donors.
Do the same with volunteers. Track the number of new volunteers you’ve signed up each month. Divide that number into the total amount (including salaries) that you spent in recruitment efforts.
If your activity or event is designed to bring more people through the door, you need to be counting traffic on a daily or weekly basis. The key is to track that traffic against other marketing and communications activities. For instance, if you double the resources you devote to social media, does attendance go up proportionately? Even more revealing, if you cut back on social media efforts, does attendance decline? With sufficient amounts of data, you can calculate what it costs to get someone through your doors.
If you are taking donations from your website, divide the cost of the site by the number of donations coming in and you get a cost per donor that can be compared to other donation sources.
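The cost-per-donor arithmetic above can be sketched in a few lines; the function name and all figures here are made-up examples:

```python
def cost_per_donor(total_spend, new_donors):
    """Cost-effectiveness: total outreach spend (including salaries)
    divided by the number of new donors acquired."""
    return total_spend / new_donors

# Hypothetical monthly figures for two tactics
email = cost_per_donor(4_000, 80)   # email campaign: $50.00 per donor
social = cost_per_donor(3_000, 40)  # social campaign: $75.00 per donor

print(f"Email:  ${email:.2f} per new donor")
print(f"Social: ${social:.2f} per new donor")
```

The same division works for volunteers or foot traffic: swap in recruitment spend and new volunteers signed up, or event spend and attendees counted.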
2. Donor lifetime value
Every new contact, donor, or member will be a resource for your organization for some length of time into the future. Many of them you will never hear from again, but some will become steady donors or supporters for years, and some will eventually even leave you money in their wills. The lifetime value of a donor will vary greatly from organization to organization, but the point here is that you can use your data to understand what it is for your own nonprofit. If you can determine how to increase this value even slightly, it can make a significant difference over a large number of donors. It is also a more accurate calculation of the value of a new donor, which you’ll want to track monthly to evaluate the impact of different outreach and promotional efforts.
The easy formula to calculate average lifetime value is to divide average annual donations by the attrition rate. However, be aware that there are several different ways to calculate this, and the calculation can get very complex, depending on how specific you wish to be and what assumptions you make. This page includes a couple of techniques, and a downloadable Excel spreadsheet that will do the calculations for you. Here is a Lifetime Value Calculator, which generates other donor stats as well. This page discusses donor lifetime value in the context of other valuable stats.
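A minimal sketch of the easy formula, with invented numbers for illustration:

```python
def donor_lifetime_value(avg_annual_donation, attrition_rate):
    """Simple donor lifetime value: average annual donation divided by
    the attrition rate (the fraction of donors lost each year)."""
    return avg_annual_donation / attrition_rate

# Hypothetical: donors give $120/year on average and 30% lapse each year
ltv = donor_lifetime_value(120, 0.30)
print(f"Average donor lifetime value: ${ltv:,.2f}")  # about $400
```

Note the intuition: a 30% annual attrition rate means the average donor sticks around roughly 1/0.30 ≈ 3.3 years, so lifetime value is about 3.3 years of giving.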
3. Conversation quality score
We recommend the Conversation Quality Score (CQS) to evaluate the conversations that people are having about your organization. The CQS is a customized index based on what motivates your target audience to purchase, support, donate, or change a habit or opinion. A story or post that succeeds at generating revenue for your organization would be a perfect 10, correct? And then there are some that might have the opposite effect, and they’d be a -10.
Now, what components might a perfect 10 item contain? Probably one or more of these:
- It mentions your organization in the headline
- Contains at least one key message
- Contains a quote from one of your spokespeople or prominent supporters
- Leaves a reader more likely to support your organization (also called sentiment or tone)
- Your organization dominates the story
- Contains a desirable visual, e.g., a photo, a caption, or a call-out that includes your organization
On the opposite side, what’s your worst-nightmare conversation? It might be one that:
- Perpetuates a myth
- Mispositions your organization or mission
- Contains an incorrect message, or misleading information
- Leaves a reader less likely to support or more likely to oppose your organization
Whatever those are, those elements go on the negative side. Put it all into a chart like the following, and make sure the various weightings add up to +10 and -10:
| Desirable Criteria | Score | Undesirable Criteria | Score |
|---|---|---|---|
| Positive: Leaves reader less likely to oppose | 1 | Negative: Leaves reader more likely to oppose | -2 |
| Contains one or more positive messages | 3 | Contains one or more negative messages | -3 |
| Event or program is mentioned | 2 | No event or program is mentioned | 0 |
| Positive headline | 2 | Negative headline | -1 |
| Third-party endorsement | 1 | Recommends competition | -2 |
| Contains a desirable visual | 1 | Contains undesirable visual | -2 |
| **Total Score** | **10** | **Total Score** | **-10** |
Now select all the posts that have appeared in key blogs or on key social media platforms that you know influence your target audience. Give each one a score based on the CQS index you’ve developed. Take the average CQS for all posts that appear in a given month. That is your CQS for the month and ideally, now that you’re focusing on conversations that contain these elements, your average CQS should improve month after month.
You can now use that score to compare campaigns, and also to easily correlate your results to web traffic and other outcome metrics.
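A sketch of the monthly CQS calculation, using illustrative weights in the spirit of the chart above; the criterion names and post data are invented:

```python
# Illustrative CQS weights: positives sum to +10, negatives to -10
CQS_WEIGHTS = {
    "less_likely_to_oppose": 1, "positive_message": 3, "event_mentioned": 2,
    "positive_headline": 2, "endorsement": 1, "desirable_visual": 1,
    "more_likely_to_oppose": -2, "negative_message": -3, "negative_headline": -1,
    "recommends_competition": -2, "undesirable_visual": -2,
}

def cqs(criteria):
    """Score one post: sum the weights of every criterion it contains."""
    return sum(CQS_WEIGHTS[c] for c in criteria)

def monthly_cqs(posts):
    """Average CQS across all scored posts in a month."""
    return sum(cqs(p) for p in posts) / len(posts)

posts = [
    ["positive_message", "positive_headline", "desirable_visual"],  # 6
    ["event_mentioned", "endorsement"],                             # 3
    ["negative_message", "negative_headline"],                      # -4
]
print(f"Monthly CQS: {monthly_cqs(posts):.2f}")  # (6 + 3 - 4) / 3 ≈ 1.67
```

Scoring each month’s posts against the same index is what makes the month-over-month comparison meaningful.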
Learning Activity: Think of a blog post, or social media post that generated a lot of activity on your site, or maybe one that resulted in an uptick in sales. What did it look like and sound like? Did it contain a key message? A quote? A photo? That’s your perfect 10.
Now, think of a post that made everyone in your organization upset, or caused donors to complain. What did that contain? That’s your worst nightmare or -10.
List the elements in both and then weight them according to the importance they deserve in the context of your organization’s goals.
4. A customized engagement index
Now you can follow a similar process to create a custom engagement index that reflects the desired behavior of your target audiences. Assign a point value to every type of engagement you see occurring within your social media program. Have a conversation with your team about the relative impact of each on your success, and make sure the total adds up to 10. Then you can multiply the count of each type of engagement by its weight to get an average engagement score for any campaign. Here’s a sample of the elements and weightings that you might consider:
| Engagement Type | Weight |
|---|---|
| Favorite or opens or views | 1 |
| Signs up to receive email or other owned content | 2.5 |
| Shares a link to an owned site | 2.5 |
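One reasonable way to turn those weightings into a single campaign score is a weighted average per interaction; the weights and counts below are illustrative, not prescriptive:

```python
# Illustrative engagement weights, echoing the sample above
ENGAGEMENT_WEIGHTS = {"favorite_open_view": 1, "email_signup": 2.5, "share": 2.5}

def campaign_engagement(counts):
    """Weighted average engagement: multiply the count of each engagement
    type by its weight, then divide by the total number of engagements."""
    total = sum(counts.values())
    return sum(ENGAGEMENT_WEIGHTS[k] * n for k, n in counts.items()) / total

# Hypothetical campaign: mostly low-value views, a few high-value actions
counts = {"favorite_open_view": 900, "email_signup": 40, "share": 60}
print(f"Average engagement score: {campaign_engagement(counts):.2f}")  # 1.15
```

A campaign that drives proportionally more sign-ups and shares scores closer to 2.5; one that only racks up passive views hovers near 1.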
Learning Activity: Make a list of the type of engagement that you believe is most valuable (in terms of revenue generation) to your organization. Now list the types of engagement that are least important. Force rank them all from best to worst. Give 2-3 points to the most valuable and zero points to the least. Remember that you only have 10 total points, and distribute the rest according to the importance of each type of engagement.
5. A/B testing
A/B testing compares the effectiveness of two variants. It can easily be used to test two different campaigns or tactics against each other. Some email platforms (MailChimp for one) can automate the process for your email. We strongly recommend taking advantage of that feature if you have it. As the name implies, an A/B test compares results from two versions (A and B) which are identical except for one variation that might affect a user’s behavior, e.g., a different headline, visual, or call to action. A/B testing is frequently used for websites where you can easily vary one element, but can also be used to compare different social media tactics. Say you’ve always issued invitations to an event via email. To see how effective reaching out via social media is, you would target half your audience with email and the other half with social media.
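Evaluating the email-versus-social split above comes down to comparing response rates; the audience sizes and response counts here are invented:

```python
def response_rate(responses, reached):
    """Fraction of the targeted audience that responded (e.g., RSVP'd)."""
    return responses / reached

# Hypothetical: 4,000 invitees split evenly between the two channels
email = response_rate(120, 2_000)
social = response_rate(95, 2_000)

winner = "email" if email > social else "social media"
print(f"Email: {email:.1%}  Social: {social:.1%}  ->  {winner} performed better")
```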
6. Cost per message communicated
By analyzing the content of conversations around your organization you can evaluate whether your messages appeared in those conversations. (This is a good place to put interns or volunteers to work.) Track the percentage of those conversations in which your messages appeared for different campaigns. Which campaigns generated the most messages? Now divide the cost of the campaign by the number of messages communicated to get “cost per message communicated” for that campaign. Compare this cost between different campaigns to help you decide where and how you want to disseminate messages in the future.
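The division above can be sketched as follows; the campaign cost, conversation count, and message share are made-up examples:

```python
def cost_per_message(campaign_cost, conversations, message_share):
    """Campaign cost divided by the number of tracked conversations that
    contained at least one key message (message_share is that fraction)."""
    return campaign_cost / (conversations * message_share)

# Hypothetical: a $2,500 campaign; 35% of 400 tracked conversations carried a key message
print(f"${cost_per_message(2_500, 400, 0.35):.2f} per message communicated")
```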
7. Awareness testing
Many social media programs have as their goal to raise awareness for a particular issue or message. Many people think that by counting impressions they are measuring awareness. However, the only way to measure an increase in awareness is to do a pre/post survey. Survey your target audience before your event or campaign and ask if they are aware of your issue or message. Repeat the same survey after the campaign and compare the results to see whether awareness has increased.
If your raison d’être is educating the public or getting your messages across, you not only need to know if your idea was seen by your audience, you also need to determine whether they heard it or not. We all know it’s not enough to merely get messages out there; what we really need to evaluate is if anyone heard them or if they changed anyone’s opinions. But doing pre/post awareness surveys is too expensive, right? Wrong. Today, with instruments like SurveyMonkey and Qualtrics, it is relatively simple to do such a study.
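A pre/post comparison is just two proportions and a difference; a minimal sketch with invented survey numbers:

```python
def awareness_lift(pre_aware, pre_total, post_aware, post_total):
    """Percentage-point change in awareness between the pre- and
    post-campaign surveys."""
    return post_aware / post_total - pre_aware / pre_total

# Hypothetical: 90 of 500 respondents aware before, 160 of 500 aware after
lift = awareness_lift(90, 500, 160, 500)
print(f"Awareness changed by {lift:+.1%}")
```

For small samples, treat modest differences with caution; a few percentage points can be survey noise rather than a real shift.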
Many times the necessary data to calculate these metrics simply are not available. If that is the case, you will have to agree on a proxy that is representative enough to satisfy senior leadership.
For instance, one nonprofit’s institutional barriers prevented its communications team from getting access to the donor database, so they couldn’t measure their results against the real goal, increasing revenue. Instead, they figured out that Google Analytics could tell them how many times the “Thank You” page was served up, and the only way anyone saw that page was by making a donation. Sessions on the thank-you page became an acceptable proxy.
(Thanks to Kimberly Dawn for the image.)