I spend an inordinate amount of time with new analytics clients thinking about, and then talking about, their data gathering. My analytics program involves gathering and studying data weekly. Choose too many data points to gather weekly, and there’s no time to analyze. Choose bad data points, and your analysis will be irrelevant to your organization.
One of the questions I struggle with most when building a measurement strategy is choosing exactly what to measure and how to suggest clients handle those measurements. As Avinash Kaushik is fond of saying, “You are what you measure”. I often worry that my clients are measuring things that don’t matter to their bottom line, or are entirely out of their control. This is what is referred to as “web reporting”, “data reporting”, or even more derisively as “data puke”.
You often end up with bad data because a HiPPO (the Highest Paid Person’s Opinion) asks for a metric they’ve heard someone else talk about, and then it becomes a Key Performance Indicator (KPI) simply because they are a HiPPO.
A metric is a piece of very generic data: how many visits came to the website, how many page impressions you had, how many people are on your email list. However, a Key Performance Indicator is a metric with two important features:
- it has a direct, bottom line impact on the success of the business; and
- it is analyzable and actionable.
That second feature is crucial. It is possible to have a metric that has a bottom-line impact on the success of your business but isn’t actionable. That’s a really important metric, but it’s not a KPI.
For example, if you’re a farmer and your fields need one inch of rain a week, the amount of rain you get is a really important metric, but it’s not a KPI. When there’s no rain and you need it, you can’t “make rain”. You can water your fields, but once you do, you realize that “rainfall” isn’t the right KPI; “cost of water used this week” probably is.
We have to choose our KPIs carefully, because as each week goes by, we will spend a lot of time studying their variance, and presumably changing our tactics to account for their rise and fall.
Your weekly analysis session with your team should look like this:
- Review of the data from the previous week, starting with KPIs.
- Discussion of variance of KPIs. Did our KPIs rise or fall? Was this in response to something we did? Is something we’re doing not working anymore? Is there a seasonal impact that we can confirm from long-term baseline data? Did something we do last week fail to move a KPI that should have changed?
- Resolution of changes in tactics for coming week.
- Discussion of longer term strategy impacts of results.
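The variance-review step above can be sketched in code. This is a minimal illustration, not a real reporting tool: the KPI names, weekly values, and the 10% discussion threshold are all hypothetical, chosen only to show the week-over-week comparison.

```python
# A minimal sketch of the weekly KPI variance check described above.
# KPI names, values, and the threshold are hypothetical examples.

weekly_kpis = {
    # week label -> {kpi_name: value}
    "week_1": {"donations": 1200.0, "email_signups": 85},
    "week_2": {"donations": 1500.0, "email_signups": 60},
}

def week_over_week_change(prev, curr):
    """Return the fractional change for each KPI present in both weeks."""
    return {
        kpi: (curr[kpi] - prev[kpi]) / prev[kpi]
        for kpi in curr
        if kpi in prev and prev[kpi] != 0
    }

def flag_for_discussion(changes, threshold=0.10):
    """Flag KPIs whose rise or fall exceeds the threshold (here 10%)."""
    return {kpi: pct for kpi, pct in changes.items() if abs(pct) >= threshold}

changes = week_over_week_change(weekly_kpis["week_1"], weekly_kpis["week_2"])
flagged = flag_for_discussion(changes)
# Donations rose 25% and signups fell roughly 29%, so both would be
# flagged for the "did our KPIs rise or fall, and why?" discussion.
```

In practice the interesting work is the conversation, not the arithmetic: each flagged KPI becomes a question about tactics, seasonality, or long-term baselines.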
- it’s an aggregate average of behavior from a ton of different marketing channels (social media, email, RSS feeds, etc.) and therefore tells the story of an amalgam of different kinds of visitors lumped into one; and
- it’s a derivative of a metric that doesn’t have anything to do with the bottom line of the nonprofit.