Sunday 17 June 2007

#38 Measuring the Impact of Initiatives

One of my business goals is to increase subscribers to the mezhermnt Handy Hints ezine, so I can get lots of useful information out to lots of people, and also help people get to know me and the PuMP approach to performance measurement.

Obviously I can't control whether someone joins the ezine list - it is unethical to simply add people to the list without their permission (do you recall the confirmation you had to give in order to be added to the mezhermnt Handy Hints list?). But I can influence a few things that increase the number of people who find out about it, and even the proportion of those people who go to the next step and sign up.

So whether your improvement initiatives are small like mine, or much larger and more complex, there are a few good tips to consider when you measure the impact they have on the intended results.

tip #1: start with some baseline data

The performance measure for building my list is the number of new subscribers. Before starting any list building initiatives, subscriptions were averaging about 10 per week. That's my measure's baseline.

What's your performance measure's baseline? Did you measure it before you began your improvement initiatives? Can you establish the baseline from historic data, or estimate where it was at that time? Can you use correlated data to calculate roughly where it was?

tip #2: pilot test each initiative separately

I don't just implement all the possible improvement initiatives for list building at once (it is tempting, but not sensible). Why? Because I want to first know how effective each strategy is for my situation, and then only invest in the strategies that work best. For solo professionals and large organisations alike, time and resources are limited and must be invested where they get the highest return!

My initiatives include Google Ads that appear when someone searches for "kpi" or "balanced scorecard", improving my website design to rank higher in internet search engines, and publishing free articles on the web. For 10 weeks I tested Google Ads in isolation from any other initiative. The effect was that subscriptions lifted to an average of 105 per week, an impact of 95 subscribers per week (not too shabby a result).
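
If it helps to see the arithmetic laid out, here's a minimal sketch in Python. The weekly figures are made up purely to match the averages above - the point is just that impact is the pilot-period average minus the baseline average.

# Hypothetical weekly sign-up counts, invented to match the averages above.
baseline_weeks = [9, 11, 10, 10, 12, 8]            # weekly sign-ups before any initiative
pilot_weeks = [98, 110, 104, 107, 103, 101,
               108, 105, 106, 108]                  # the 10 weeks with Google Ads only

baseline = sum(baseline_weeks) / len(baseline_weeks)
pilot_average = sum(pilot_weeks) / len(pilot_weeks)
impact = pilot_average - baseline                   # extra subscribers per week

print(f"baseline: {baseline:.0f}/week, during pilot: {pilot_average:.0f}/week, "
      f"impact: {impact:.0f} extra subscribers/week")
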

Are you in the habit of jumping into several solutions and actions before really testing the size of impact of each? Yes, testing is slower, but without it you have no way of knowing which initiatives really work, and you'll just waste time and money on the ones that don't.

tip #3: use diagnostic indicators too

There are a few other indicators that are useful for giving more information about the ezine sign-up process. One is the click-through rate: the proportion of people who see my ad, and so become aware of my ezine, who then click through to sign up. By adjusting the wording of the Google Ad, I can fine-tune its relevance to people who search for information on KPIs. Of the 3 Google Ads I tested, one achieved a click-through rate of 0.3%, and the other two were equal at 2.9%.
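
In case the calculation isn't obvious, a click-through rate is simply clicks divided by the number of times the ad was shown. Here's a rough sketch with made-up figures (the impression and click counts are hypothetical, chosen to reproduce the percentages above):

# Hypothetical impression and click counts for the three test ads.
ads = {
    "ad_A": {"impressions": 10000, "clicks": 30},    # 0.3%
    "ad_B": {"impressions": 10000, "clicks": 290},   # 2.9%
    "ad_C": {"impressions": 10000, "clicks": 290},   # 2.9%
}

for name, stats in ads.items():
    ctr = stats["clicks"] / stats["impressions"] * 100
    print(f"{name}: click-through rate = {ctr:.1f}%")
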

To decide which of the two better ads to go with, I needed more information. Another diagnostic measure is the position at which Google ranks my ad among the other ads for the same search keywords. One ad averaged around position 5, and the other around position 4. Now I know which ad performs best, and what size of impact it is capable of having on my performance measure of subscriptions.

Do you know which of your initiatives are most successful, and by how much? Have you tested variations of your initiatives to pinpoint what makes them more successful? Diagnostic indicators can be designed before you test your initiatives, but often you discover helpful diagnostic indicators during your testing too.
