A favorite marketing technique of non-profit consultants is to survey a set of their clients on some metric (like email open rate) and then publish a study presenting the average as a “benchmark.” Everyone eats up this kind of “drive-by analytics,” because if you are performing better than the benchmark, you look awesome, and if you are performing worse, you can just say it doesn’t apply to you.
Some companies with enough clients will sub-categorize the data into mission areas (“animal welfare”, “healthcare service delivery”, etc.). This is even more deceptive, because it strongly suggests the data applies to you simply because you fit under that umbrella.
I find it an odd marketing technique, because those whose numbers beat the benchmark have no reason to hire the consultant. And those whose numbers fall short may be underperforming because of other, structural issues (poor funding, bad management, problems with the mission or its execution, emotional decision-making processes that aren’t data-driven). Those structural issues are likely to make them very difficult clients.
But none of that matters, because the truth is that these benchmarks don’t apply to you in any way. Here’s why.
All non-profits are not the same.
Comparing something like “email open rate” between two non-profits like Kiva and Planned Parenthood is absurd. The relationship people have with those organizations is entirely different. The emotional connection, which is what likely drives the email open rate, is not the same at all. You may feel strongly about third-world poverty, but Planned Parenthood has probably touched your life in a very personal way. The missions are different, and the reasons you interact with them are different too.
This is why people tend to split the data into “mission categories”, like “animal welfare”. But…
All non-profits in the same “category” are not the same.
Let’s take The Marine Mammal Center (a client) and the Humane Society of the Peninsula. The Marine Mammal Center rescues, rehabilitates, and repatriates (haha) marine mammals like seals. The Humane Society works with dogs and cats instead. Unless a lot of you are keeping seals as pets, your relationship to these two organizations is wildly different: you’ve had a personal relationship with a dog or cat at some point in your life, while your relationship with seals is from a great distance. (By the way, seals can be really cute.)
Perhaps to address this, you might try to compare two non-profits that are really close, perhaps even chapters of the same organization: say, Planned Parenthood of San Francisco with Planned Parenthood of Southern Oregon. That should be close, right? Sadly…
All non-profit affiliates in the same organization are not the same.
Let’s say PPSO has an email open rate of 15% and PPSF has an open rate of 8%. (I’m making those numbers up.) There are many reasons why that might be, and you’ll never learn about them from a benchmarking report. Perhaps a large segment of PPSF’s list is non-performing because the list was purchased and those recipients never open emails. Perhaps PPSF provides a slightly different set of services in the Northern California area than are offered in Southern Oregon. Perhaps email open rates all over Northern California differ from those in Southern Oregon.
It may be that nothing PPSF does will ever raise its email open rate to 15%, because of factors a benchmark report doesn’t reveal. PPSF could spend years trying different tactics and never match PPSO’s performance, because the difference may have nothing to do with how it runs its email campaigns.
By simple process of elimination, if you can’t compare your non-profit against any other non-profit, that leaves only one candidate you can compare yourself against.
The best non-profit you can compare yourself against is… you!
When you know what your email open rate clocks in at because you’ve been measuring it for over a year, and then one week it spikes, you can quickly eliminate all the things that didn’t affect it. Odds are good that the change came from something you did, or from something enormous that shifted in your environment. And now that you know what it was, you can probably learn from it and repeat it.
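To make that concrete, here’s a minimal sketch of the idea in Python. Everything in it is illustrative: the numbers are invented, and the two-standard-deviation threshold is just an arbitrary starting point, not a rule.

```python
from statistics import mean, stdev

def flag_spike(weekly_open_rates, threshold_sds=2.0):
    """Compare this week's open rate against your own trailing baseline.

    weekly_open_rates: oldest-to-newest weekly open rates (e.g. 0.12 for 12%).
    Returns (baseline, this_week, is_spike).
    """
    *history, this_week = weekly_open_rates
    baseline = mean(history)
    spread = stdev(history)
    # Flag any week that lands more than `threshold_sds` standard
    # deviations from the trailing average, in either direction.
    is_spike = abs(this_week - baseline) > threshold_sds * spread
    return baseline, this_week, is_spike

# Made-up weekly open rates hovering around 11%, then a jump.
rates = [0.11, 0.10, 0.12, 0.11, 0.10, 0.11, 0.12, 0.11, 0.10, 0.11, 0.18]
print(flag_spike(rates))  # the 18% week stands out against the baseline
```

The point isn’t the math; it’s that the comparison only means something because every number in it came from the same organization.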
So if you’re in the habit of making resolutions, may I suggest this one?
“I commit to recording my data, the same data, every week.
And when I do so, I will look back at how this week’s data differs from my baseline to learn how my work this week did better or worse, and how I can do better in the future.”
This is typically what I help non-profits do when they’re setting up an analytics program:
- We identify the metrics, and get them down to a manageable few;
- We set up a weekly process for recording and examining the data; and
- We spend a small amount of time every week analyzing the data and comparing it to the baseline (a sketch of such a weekly log follows).
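As an illustration of what that weekly habit can look like in practice, here’s a small Python sketch. The file name, the choice of metrics, and the sample numbers are all assumptions made for the example, not a prescription.

```python
import csv
from datetime import date
from pathlib import Path
from statistics import mean

LOG = Path("weekly_metrics.csv")   # hypothetical log file
FIELDS = ["week", "open_rate", "click_rate", "unsubscribes"]  # your manageable few

def record_week(**metrics):
    """Append this week's numbers, then report each against its baseline."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({"week": date.today().isoformat(), **metrics})

    # Re-read the full history and compare this week to everything before it.
    with LOG.open() as f:
        rows = list(csv.DictReader(f))
    for field in FIELDS[1:]:
        history = [float(r[field]) for r in rows[:-1]]
        current = float(rows[-1][field])
        if history:
            print(f"{field}: {current} vs. baseline {mean(history):.3f}")

record_week(open_rate=0.11, click_rate=0.021, unsubscribes=4)
```

A spreadsheet works just as well; what matters is that the same few numbers get recorded the same way, every week, so the baseline is yours.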