Sunday 17 June 2007

#33 Compared to What?

One of my favourite authors on subjects relating to performance measurement is Edward Tufte, who has written many books on the visual communication of information, including statistical information as it pertains to decision making (all his books, courses, musings and such are at www.edwardtufte.com). Why is he one of my favourite authors? He has extensive knowledge, he communicates that knowledge incredibly well and very entertainingly, and he draws on many intrinsically interesting historic cases to illustrate his points. A quick browse of his website will confirm that for you.

[read my review of one of Edward Tufte's books, "The Visual Display of Quantitative Information", at amazon.com]

As Tufte puts it, "the deep, fundamental question in statistical analysis is Compared with what?" Information only has meaning in context, and quantitative information in particular runs such a high risk of misinterpretation in the absence of context. There are several types of context for performance measures to help you mitigate this risk of misinterpretation, four of which we discuss here:

the context of history

The easiest way to present your performance measures with some context is to add as much historical data as you have available (within reason) each time you report the measure. For example, if you are measuring things like revenue, expenses, profit, order cycle time, on-time supplier delivery, outstanding bills, rework, and so on, then try to report at least 20 historical values for such measures. Even less frequently measured results can benefit from the context of history: for example, if you have been running customer satisfaction surveys for a few years, then report your current customer satisfaction rating along with all the satisfaction ratings from previous years. And I can hear you say, "but the survey we use now is different from the one we used to use!" So we need an additional type of context...
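If you keep your measures in a spreadsheet or a small script, charting that history takes only a few lines. Here's a minimal sketch in Python with matplotlib; the measure, the monthly values and the 24-month window are my own hypothetical choices, just to illustrate the idea:

```python
import matplotlib.pyplot as plt

# 24 months of (hypothetical) average order cycle time, in days
cycle_time = [12.1, 11.8, 12.4, 11.9, 12.2, 12.0, 11.7, 11.5, 11.9,
              11.6, 11.2, 11.4, 11.0, 10.8, 10.9, 10.5, 10.2, 10.4,
              10.0, 9.8, 9.9, 9.6, 9.4, 9.5]
months = range(1, len(cycle_time) + 1)

# Plot the full history, not just this month's number
plt.plot(months, cycle_time, marker="o")
plt.xlabel("Month")
plt.ylabel("Average order cycle time (days)")
plt.title("Order cycle time: 24 months of history")
plt.show()
```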

the context of changes

There is no reason why you can't add to your graphs events such as when a customer survey was redesigned, or when a new product stream was launched, or when the ordering process was streamlined, or when you moved from ad hoc purchasing to formal supplier agreements. These events are markers in history that usually correlate with a sudden or gradual change in the level of your performance results. For example, after the new product was launched you may well have seen revenue start to climb, or after the ordering process was streamlined you probably saw a sudden "step change" reduction in the order cycle time. Adding events to your performance graphs can help you and others interpret why specific changes in the level of performance occurred. But of course, how do you know which events to put on your graphs? A little bit more context...
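Most charting tools let you draw those event markers straight onto the graph. Here's a sketch of one way to do it with matplotlib; the measure, the event and its timing are made up, and the "step change" in the data is contrived to show the effect:

```python
import matplotlib.pyplot as plt

# Hypothetical monthly order cycle times, with a step change at month 9
cycle_time = [12.1, 12.3, 11.9, 12.0, 12.2, 11.8, 12.1, 11.9,
              9.7, 9.5, 9.8, 9.6, 9.4, 9.7, 9.5, 9.6]
months = range(1, len(cycle_time) + 1)

plt.plot(months, cycle_time, marker="o")

# Mark the (hypothetical) month the ordering process was streamlined
event_month = 9
plt.axvline(event_month, linestyle="--", color="grey")
plt.annotate("ordering process streamlined",
             xy=(event_month, max(cycle_time)), rotation=90, va="top")

plt.xlabel("Month")
plt.ylabel("Average order cycle time (days)")
plt.show()
```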

the context of causation

Some of our strategies or initiatives will have an impact on our performance results, and others will not (even if we intended them to). Some trends or patterns of behaviour in the market or in our industry or societies will also have an impact on our performance results, and others will not (even if we expected them to). The only factors that will influence our performance results are those that have a causal relationship with those results. For example, educating customers on how to use our order form may have been born from the hope of reducing errors that hold up the ordering process, but in reality it had little impact. However, redesigning the order form to make it faster and more intuitive from our customers' point of view had the impact we intended, because it addressed one of the root causes of errors on orders. Correlations between the implementation progress of your initiatives and the changing results in your performance measures can give you clues about which initiatives are working, and which may not be. And that brings us to the last type of context for performance measures that we'll discuss here...
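If you track an initiative's implementation progress month by month alongside the measure it was meant to move, a simple correlation gives you one of those clues. Here's a sketch with numpy, using hypothetical rollout and error-rate figures (and remembering that correlation is a clue, not proof of causation):

```python
import numpy as np

# Hypothetical monthly figures: percentage of customers moved to the
# redesigned order form, and the order error rate for the same months
rollout_pct = np.array([0, 5, 15, 30, 50, 70, 85, 95, 100, 100, 100, 100])
error_rate = np.array([8.2, 8.0, 7.6, 7.1, 6.2, 5.4, 4.9, 4.5,
                       4.2, 4.1, 4.0, 4.0])

# Pearson correlation between the rollout and the error rate
r = np.corrcoef(rollout_pct, error_rate)[0, 1]
print(f"Correlation between rollout and error rate: {r:.2f}")

# A strong negative correlation suggests (but does not prove) the
# redesign is working; the root-cause reasoning still has to hold up.
```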

the context of contrast

To shed some light on why certain initiatives work and others do not, it can help to analyse your data a bit deeper to find where the successful initiatives failed, and where the failing initiatives succeeded. These investigations can uncover additional factors associated with the performance results you are getting, and thus give you more clues to increase the power you have over managing that performance. For example, perhaps the education of customers in how to use our original order form worked mostly for those customers ordering technical products: they are technically minded, and found the order form easy to understand once they had been shown how it worked. But the majority of customers are not technically minded. Most of our products are designed to make things easy for our customers, and these products attract customers who like things to be made easy for them. No education in the world was ever going to change their mindset.
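Segmenting your data is one practical way to run that kind of contrast analysis. Here's a small sketch with pandas, using a hypothetical set of order records; it compares error rates by customer segment and by whether the customer received the order-form education:

```python
import pandas as pd

# Hypothetical order records: customer segment, whether the customer was
# educated in using the order form, and whether the order had errors
orders = pd.DataFrame({
    "segment":   ["technical", "technical", "technical", "general",
                  "general", "general", "general", "general"],
    "educated":  [True, True, False, True, True, False, False, False],
    "had_error": [False, False, True, True, True, True, False, True],
})

# Error rate for each combination of segment and education
contrast = orders.groupby(["segment", "educated"])["had_error"].mean()
print(contrast)

# If education only lowers the error rate for the technical segment,
# that's a clue the initiative works for some customers and not others.
```

In this made-up data, education wipes out errors for the technical segment but not for the general one, which is exactly the kind of pattern the contrast is there to surface.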

in conclusion

These are not the only forms of context you can surround your performance measures with, but I do hope they give you something to play with. And I'm very curious about your own ideas and examples of how you've used context to improve the rigour of your performance analysis and decision making. If you have some, please send me an email!
