Sunday 17 June 2007

#41 Drowning in Activity Measures

If you were to ask me the most common mistake I see with performance measurement, I’d probably have to say it’s the use of activity measures. What I mean by activity measures is this:
  • Completion of system audit
  • Implement policy review by June 2007
  • Number of training programs conducted

It’s when people count up whether or not, or how many times, they’ve done a particular type of task. Why do I think activity measures are a performance measurement mistake? Here are five reasons.

Reason #1: activity measures often aren’t really measures

Why aren’t “completion of system audit” and “implement policy review by June 2007” really measures? Because they’re events. They’re milestones to reach, usually within a project or implementation plan of some kind. They are activities. Measures are evidence of the degree to which something is occurring, through time. Measures are supposed to be regular and ongoing feedback that we can use to adjust our course so we continue heading where we need to go.

If you want people to improve their ability to meet deadlines, like the milestones described above, then you don’t measure a single deadline as “met” or “not met”. You get tonnes more information if you track the proportion of deadlines met as time goes by, over months and years. And thus, activity measures like these milestones are really just the pieces of data that would comprise a measure that has everything to do with someone’s on-time performance and nothing to do with the nature of the activity itself. That leads us to reason #2.

Reason #2: activity measures don’t measure performance

Many times I’ve seen activity measures being used as evidence of the achievement of strategic results. For example, one organisation (through the use of this measure in their strategic plan) seems to believe that the number of people trained is evidence of how committed their staff are. Activity measures are evidence of nothing more than the activity having occurred. The result of the activities is another matter entirely.

Evidence. Feedback. These are important words when it comes to measurement. Good evidence is convincing, and will convince the right people that a particular result is really happening. Good feedback is regular, and when it is regular enough (not annual) it helps people change their activities to stay on track to influencing their desired results. Of course, there’s the presupposition that people have results that they want to happen.

Reason #3: activity measures drive the wrong behaviour

A measure that monitors the amount of activity being done sends the message to do more of that activity. These measures don’t send the message to do the right activity, and they don’t send the message to do the activity well.

That said, you can still devise some good activity measures that encourage the kind of behaviour you want. For example, one of my own personal performance measures is the number of articles and books I read each week. Because I hate wasting time reading things that don’t add to or challenge my knowledge, I know that this measure is going to drive me to read lots of valuable literature. So if you carefully think through the behaviours you want and the unintended consequences that might surround those behaviours, you can still use activity measures to improve performance.

Reason #4: activity measures waste data collection resources

Generally it’s really easy and quick to measure activities. We have easy access to the data, it’s not too hard to collect, and most organisations have systems for capturing a good deal of it. But it’s data that can rarely inform any important decision. The more activity measures we have, the less meaningful data about results we seem to have.

Letting go of our plethora of activity measures has the potential to free up time, energy and mental focus to start collecting and using data that gives us measures of results. Letting go is hard, but trying to improve organisational performance using activity measures is much harder.

Reason #5: activity measures breed learned helplessness

If there is one thing in our lives that we can control most, it’s what we do. Our direct activities, for example, are more within our control than the results of those activities. Using activity measures to assess our own performance is usually associated with a desire to measure only what is in our circle of control.

What a waste! What if we all turn up to work and just accept responsibility for our circle of control, and don’t accept any responsibility for our relationships with other people, for our impact on the activities and work of other people, for the impact of our activities on the world around us? We all have a circle of influence, and we only really perform when we start measuring and improving what lies in our circle of influence.

#40 Advantages of Samples

For the last 3 years I have been helping a client take a sampling approach to measuring the accuracy of their inventory records. The measure is the net error rate, based on the size of the difference between their electronic inventory records and the actual inventory held at the storage locations.

you can save money and time

Sampling the inventory storage locations instead of trying to do a full stocktake saves this client millions (probably more). They have thousands of storage locations, carrying many, many thousands of inventory line items. Counting them all would require an army. And even though they never used to try to count them all, they are now spending less time and less money than they used to. Where can you save time and money by using samples to measure, instead of trying to measure everything?

you can improve the integrity of your data

And even though they are spending less time and money than they used to, because of the way we designed the survey, they are getting much more integrity in their data. The secret here is in segmenting the storage locations and the line items by value, designing sample sizes in each segment that accommodate the variability within each segment, and randomly selecting the samples. The random selection is very important! Are you selecting samples that aren't random? If so, your data is most likely biased, and could be misleading your decisions.
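The segment-then-randomly-select approach can be sketched in a few lines of code. This is a minimal illustration only, not the client's actual design: the segment names, segment sizes and sample sizes below are all invented, and a real design would derive each segment's sample size from the variability measured within that segment.

```python
import random

# Hypothetical storage locations, segmented by inventory value.
# All names and counts here are illustrative, not real client data.
segments = {
    "high_value": [f"H{i}" for i in range(200)],
    "medium_value": [f"M{i}" for i in range(1500)],
    "low_value": [f"L{i}" for i in range(5000)],
}

# Assumed sample sizes; in practice these would reflect the
# variability observed within each segment.
sample_sizes = {"high_value": 50, "medium_value": 80, "low_value": 100}

random.seed(42)  # reproducible for this sketch only

# Random selection WITHIN each segment is what keeps the
# net error rate estimate free of selection bias.
sample = {
    name: random.sample(locations, sample_sizes[name])
    for name, locations in segments.items()
}

for name, chosen in sample.items():
    print(name, len(chosen))
```

The point of the sketch is the structure: stratify first, then let a random number generator (not an auditor's hunch) pick the locations to count.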

you can measure more frequently

Because measuring with samples is less costly than measuring everything, you have the option to measure more frequently. Often it is just as easy to do monthly "pulse surveys" of small random samples of customers as it is to do annual surveys of larger samples. But with annual measures, you have to wait a long time to work out if change is happening. Monthly measures (or other more frequent timeframes) more powerfully show you emerging and sudden change, and you can then respond before it's too late.

you can refute the squeaky wheels

Well designed (and randomly selected) samples are not subject to the bias that some of the more familiar data collection methods have. The old inventory accuracy measurement was biased because auditors would choose to measure the sites they suspected had the worst accuracy. Now the client can put any specific accusations about poor inventory management into context. Their recommendations about improving inventory management are objective and considered, rather than knee-jerk reactions. Who are your squeaky wheels, how much do they influence decision making and resource consumption, and how could a well designed sample approach help?

#39 Getting People Involved in Measuring Performance

Getting people involved in measuring and improving performance is one of the greatest challenges (and greatest enablers) in designing measures that lead to improvement. But just inviting them to a workshop, or telling them to come up with measures, rarely works. Here are my favourite ways to authentically involve people, in a way that has meaning for them.

idea #1: ask them what their biggest obstacles are to doing their job well

What will get people's attention more than talking about what bugs them the most? And what better a place to start involving them in performance improvement than in helping them improve what matters to them? Even if what bugs them isn't strategically important right now, it's a valuable exercise that will lead to them thinking more easily about what 'bugs the organisation' (the biggest obstacles to the organisation performing well).

idea #2: ask them to give feedback on someone else's measures

Aside from the obvious value that comes from getting feedback from others, this approach makes it safe for people to start getting familiar with performance measures, without feeling imposed upon. As part of my PuMP Implementer Program, we use a measure gallery as a way of gathering feedback from the wider organisation on the measures a team has just freshly designed. These measure galleries have been known to generate lots of interest in other parts of the organisation to further explore performance measures for themselves.

idea #3: coach them as individual people, rather than facilitating or teaching them as a group

The group approach, where you get people in the room and walk them through designing measures, is certainly very efficient. But it can leave people behind - people that are worried about a problem on the job, people that are starting from less experience with measurement, people that feel cynical about measurement ("It's just another big stick!"). Outside of workshop time, you may well need to share a coffee or chat over the phone to support a few participants that are feeling less than committed to the process.

idea #4: ask them why they aren't as involved as you'd like

Often we just assume that people don't care, or haven't got the time, or just have a bad attitude, and that's why they won't get involved in measuring performance. (And you know what assumptions are, don't you?) Instead, just ask. Not everyone will feel confident to give you an honest answer, but some will. And the conversation you can then have to explore their objections, answer their questions, take on their ideas, could be just the opportunity you need to get them more involved.

idea #5: role model the design and use of great measures

Your actions speak louder than your words. You can't expect anyone to get involved in something you aren't involved in yourself. So establishing a few great performance measures that will help you improve performance, and then using those measures to make performance improve, can be a great way to show others the value of doing it too.

#38 Measuring the Impact of Initiatives

One of my business goals is to increase subscribers to the mezhermnt Handy Hints ezine, so I can get lots of useful information out to lots of people, and also help people get to know me and the PuMP approach to performance measurement.

Obviously I can't control whether someone joins the ezine list - it is unethical to simply add people to the list without their permission (do you recall the confirmation you had to give in order to be added to the mezhermnt Handy Hints list?). But I can influence a few things that increase the number of people that find out about it, and even the proportion of those people that go to the next step and sign up.

So whether your improvement initiatives are small like mine, or much larger and more complex, there are a few good tips to consider when you measure the impact your improvement initiatives have on the intended results.

tip #1: start with some baseline data

The performance measure for building my list is the number of new subscribers. Before starting any list building initiatives, subscriptions were averaging about 10 per week. That's my measure's baseline.

What's your performance measure's baseline? Did you measure it before you began your improvement initiatives? Can you establish the baseline from historic data, or estimate where it was at that time? Can you use correlated data to calculate roughly where it was?

tip #2: pilot test each initiative separately

I don't just implement all the possible improvement initiatives for list building at once (it is tempting, but not sensible). Why? Because I want to first know how effective each strategy is for my situation, and then only invest in the strategies that work best. For solo professionals and large organisations alike, time and resources are limited and must be invested where they get the highest return!

My initiatives include Google Ads that appear when someone searches for "kpi" or "balanced scorecard", improving my website design to rank higher in internet search engines, and publishing free articles on the web. For 10 weeks I tested Google Ads, in isolation from any other initiative. The effect was that subscriptions lifted to an average of 105 per week, an impact of 95 subscribers per week (not too shabby a result).
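The arithmetic behind that impact figure is worth making explicit: the initiative's impact is the lift over the baseline, not the raw subscription level. A tiny sketch, using the figures quoted above:

```python
# Figures from the example above.
baseline = 10        # average new subscribers per week, before any initiative
during_test = 105    # average per week while Google Ads ran in isolation

# The impact of the initiative is the lift over the baseline,
# not the raw level reached during the test.
impact = during_test - baseline
print(f"Impact of Google Ads: {impact} subscribers per week")
```

Without the baseline of 10, the 105 on its own would tell you nothing about what the Google Ads actually contributed.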

Are you in the habit of jumping into several solutions and actions before really testing the size of impact of each? Yes, testing is slower, but without it you'll just waste time and money on initiatives that don't really work, and have no way of knowing either way.

tip #3: use diagnostic indicators too

There are a few other indicators that are also useful to give more information about the ezine sign-up process. One is the click-through rate: the proportion of people who see the ad who then click through toward signing up. By adjusting the wording of the Google Ad, I can fine-tune its relevance to people who search for information on KPIs. Of the 3 Google Ads I tested, one achieved a click-through rate of 0.3%, and the other two were equal at 2.9%.

To decide which of the two better ads to go with, I needed more information. Another diagnostic measure is the position at which Google ranks my ad among the other ads for the same search keywords. One ad averaged around position 5, and the other around position 4. Now I know which ad performs best, and what size of impact it is capable of having on my performance measure of subscriptions.
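Choosing between the ads then reduces to comparing the two diagnostic indicators: click-through rate first, and average ad position as the tie-breaker. A sketch, using the rates quoted above (the ad labels are mine, and the first ad's position is assumed, since only the two better ads' positions were given):

```python
# Diagnostic data for the three ads tested. Labels are hypothetical;
# ad_1's average position is an assumption for illustration.
ads = {
    "ad_1": {"ctr": 0.003, "avg_position": 6.0},
    "ad_2": {"ctr": 0.029, "avg_position": 5.0},
    "ad_3": {"ctr": 0.029, "avg_position": 4.0},
}

# Pick the ad with the highest click-through rate; break ties with
# the better (numerically lower) average position.
best = max(ads, key=lambda a: (ads[a]["ctr"], -ads[a]["avg_position"]))
print(best)
```

With the two click-through rates tied at 2.9%, it's the second diagnostic indicator, ad position, that decides the winner.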

Do you know which of your initiatives are most successful, and by how much? Have you tested variations of your initiatives to pinpoint what makes them more successful? Diagnostic indicators can be designed before you test your initiatives, but often you discover helpful diagnostic indicators during your testing too.

#37 Realistic Target Setting - Part 2

The last 3 of the 6 most common worries about setting targets for performance measures are:

  • challenge 4: Anticipating the consequences of achieving and not achieving the target.

  • challenge 5: Finding the courage to go beyond your comfort zone.

  • challenge 6: Having the wherewithal to change whatever must change for the target to be accomplished.

Here are my ideas and learnings about overcoming them.

idea #4: keep one eye on the target, and one eye on the bigger picture

Even if you had enough foresight to explore the unintended consequences of achieving your target before you locked it into your plan, the world will still change later on. I once heard a story about a rail organisation that placed more importance on on-time running of trains than any other performance outcome. So much so, that one day, due to pressure risking the train running late, the driver omitted an important safety check to save time. The train derailed because of a braking problem that the safety check would have easily picked up.

Every now and then, ask yourself "is this target still a good idea?" and "if we miss it, what's likely to happen?" and "if we achieve it, what's likely to happen?". It's okay to change a target that is no longer going to serve its original purpose. Is this check a part of your regular performance review process?

idea #5: give yourself (and your staff) permission to learn by not achieving targets

You are not supposed to achieve every goal or target you ever set. And if you do, then it's probably because you aren't challenging yourself enough. You're staying inside your comfort zone, inside of what you know works, what you know you can accomplish. That's not what improvement is about. There is no learning without failing, no improvement without learning.

If you want to jump over a creek without landing in the water and getting your shoes all wet, then don't aim for the far bank of the creek. Aim for a metre or so beyond it. Set the target further than you think you can achieve. That way, you'll be less likely to land in the water, and more likely to land even further than you thought possible. Somehow, our strides are longer when our eyes focus further ahead.

idea #6: do some preliminary scoping of "how-to" before locking in the target

If you and your team do not yet possess the target setting and achieving prowess of an Olympic athlete, then avoid setting any kind of target without first exploring a range of ideas of how you might go about achieving it. A very innovative manager I know has for years used simulation software to model his business processes (freight). The model simulates the steps in the process, the variability in the time each step takes, the variability in market demand, resource constraints, and much more. He can then make changes in the model to simulate changes like investing in more equipment, or changing a step, or removing a constraint (like a policy). So before he spends a single dollar, he can get a good idea about which strategies are going to work best to reach his targets.

What's wrong with taking an iterative approach to finding the right target? Scope a little and set the first target value. Explore what it might take to achieve that, then revise that value if necessary. Start the more detailed action planning to get a stronger idea of resource implications, and revise the value again if necessary.

#36 Realistic Target Setting - Part 1

Some of the most common worries about setting targets for performance measures are:

  • challenge 1: Striking that sensitive balance between making the target achievable but also a stretch.

  • challenge 2: Creating that sense of urgency that will motivate people to hunger after the target.

  • challenge 3: Having a measure or means of monitoring progress as the target timeframe approaches.

I'd like to share some ideas with you, about how to lessen the burden when you come face to face with worries like these.

idea #1: don't strike a balance between achievable and stretch - do both

What I've learned is that it takes practice and confidence-building to achieve a target or goal. Why not set a series of targets, say three, for any single performance improvement? The first one is shorter term and not very challenging, for the purpose of building target-accomplishing momentum. The interim target is an opportunity to build more capability and confidence to stretch. The last one is the stretchy target, which you might have no idea of how to reach at this point in time, but be in a better position to know after you've achieved the interim target.

idea #2: use vivid and specific language to describe the world after the target is accomplished

Numbers alone are hardly enough to motivate anyone. So handing a team a performance measure + target value + timeframe won't likely be enough motivation. Have you ever tried telling the story about what the world (or at least your part of it) is like after the target is met? Colour, sound, movement, emotion, expression, behaviour, shape, rhythm and all those other sensory experiences emblazon the meaning of the target into the minds and hearts of those setting out to achieve it. Motivation from within is the best kind.

idea #3: make sure your measure can be monitored at least 6 times within the target timeframe

Design your measure so you can calculate it as regularly as is feasible, and then set a target timeframe that accommodates frequent enough feedback to increase your chances of staying on track. For example, monitor your measure weekly or monthly for a 1 to 2 year target timeframe. Yes, sometimes you just can't get data this frequently, but that doesn't change the fact that a single point of data says nothing. Is it worth setting a target that you cannot honestly know is achieved?
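The "at least 6 times within the timeframe" rule of thumb is easy to check with a little arithmetic. A quick sketch (the function and its name are my own, for illustration):

```python
# Rule of thumb from above: the measure should be calculable at least
# 6 times within the target timeframe.
def feedback_points(timeframe_months, measurement_interval_months):
    """How many times the measure can be calculated before the deadline."""
    return timeframe_months // measurement_interval_months

# A 2-year target monitored monthly gives plenty of feedback points...
print(feedback_points(24, 1))
# ...but the same target monitored annually gives only 2, which is
# nowhere near enough to tell whether you are on track.
print(feedback_points(24, 12))
```

If the number that comes out is less than 6, either measure more often or lengthen the target timeframe.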

Stay tuned next month for the next 3 challenges of target setting!

#35 The Good, The Bad, And The Ugly

There are lots of so-called “measures” that people choose to monitor business results. Some are good, some bad and some downright ugly! This is one of the most colossal mistakes I see people making with performance measures: to claim as a performance measure something that absolutely is not a measure of performance at all.

Here are three of the so-called performance measures that I really dislike most:

"win the BlahBlah Award"

The award might be a customer service award, or environmental award, or workplace health and safety award. Why do I dislike awards as measures? The winning of an award is an event, and can’t give regular, ongoing feedback that can inform decision making and improvement - to use it as evidence of business performance assumes that the criteria for the award correlate directly to the business’s priorities and strategy. And just think about the kind of behaviour and culture this kind of "measure" would encourage... everyone aiming to impress the judges of the award and taking their eyes off their real stakeholders.

"complete BlahBlah Project by June 2007"

Projects such as implementing a customer relationship management system, or upgrading a maintenance facility, or running a new employee training program are typically put in the KPI column of business plans. They are next to useless as evidence or feedback about business performance. Finishing a project by a particular date is an action, not an outcome, and thus provides no evidence whatsoever of the result that the project should have achieved. But they are the most common type of "measure". My theory is that it's because we are an activity culture - we have been duped into the false belief that as long as we do things (and finish them on time and to budget) then we have succeeded. A little more scientific thinking would go a long way: we need to use measures to test our hypotheses that the actions we have chosen in fact do produce the results we intended. So in reality, measures like these are actually strategies - the means we have chosen to achieve the results we want for our business.

"Annual BlahBlah Survey"

The survey could be an employee survey or a market survey or a community reputation survey - who knows? Irrespective, surveys are just data collection processes, not measures. The measures come from the data the survey collects and the measures must be very clearly designed and defined in order to ensure the survey collects the right data. Way too much money is wasted on surveys that ask irrelevant questions, and collect data that is never used. I guess having my foundations as a survey statistician makes me particularly frustrated by measures of this type. I'd just love to see more people demonstrating that they can discern the difference between data and measures!

#34 Zen, And The Art of Performance Measurement

I just love the book "Zen and the Art of Motorcycle Maintenance" by Robert M. Pirsig, in part because I love philosophy, in part because I love trail bikes and in part because I am keenly interested in the issues of Quality versus Quantity (a major theme of this book). I'm just about to start reading it for the third time, because each of the last two times I drew new and different meaning from it. Anything philosophical awakens in me the almost overwhelming awareness that we are each part of something bigger than just ourselves, bigger than our day to day activities, our beliefs, our intentions and dreams and fears and penchants. Everything we "know" is relative - relative to the experiences we have had, relative to what we believe about the world, relative to our assumptions about the intentions of others, relative to what we have noticed and learned through our lives (and relative to much more too). Our "knowledge" is a mud map, not a satellite image from Google Earth and most certainly not the territory itself.

It's not hard to see then, why different people behave differently in response to the same performance measurement activity. And that's one of the big reasons why buy-in is such an elusive state to attain.

how people can react to performance measurement

Someone's map of reality influences the way they feel and act around performance measurement. Someone who is used to being blamed for things will feel defensive and fearful around performance measurement. They may throw up unlimited objections as to why performance measurement isn't needed or how they haven't got time to collect all the data. Someone who has put a lot of time (perhaps even blood, sweat and tears) into collecting performance data and never seen anything come from it will feel cynical and frustrated by performance measurement. They will at best bring their body to any new performance measurement initiative, leaving behind their heart and mind. Someone who has been frequently rewarded for outstanding performance would feel very comfortable and engaged around their existing performance measures, but may feel very nervous at the prospect of changing those performance measures.

These are just simple examples. And I'm sure you can imagine a selection of these people in your own organisation. I've seen a selection of these people in just about every team I have ever facilitated through performance measurement activities. But getting these team members to a state of buy-in is something I seem to consistently achieve. How do I do this?

the art of performance measurement and buy-in

Performance measurement certainly does have (and need) a substantial technical base. Our performance measures would be a waste of time if they weren't linked to strategy, clearly defined, calculated consistently from good quality data, or presented in a way that encouraged valid interpretation. However, our performance measures are also a waste of time if the people involved in the measurement process (selecting measures, bringing them to life, or using them) don't buy in to their measures, don't have a strong sense of owning those measures. This is the non-technical or human base to performance measurement, and without it, the technical base isn't enough.

Getting buy-in is, to me, more an art than a science. It's not about following a set of steps that will lead you to a state of buy-in. It's about creating and holding the space for people to safely explore what performance measurement can mean for them, personally. And creating and holding the space for this can mean adopting attitudes and behaviours like:

  • don't educate people in performance measurement - facilitate them through an action learning cycle that combines a little theory (such as techniques) and a lot of implementation (or even pilot testing)

  • don't tell people what they should measure - do show people a process to follow that can help them decide what is worth measuring themselves

  • don't be the judge, jury and executioner of people's measures - do suggest that people invite open feedback from all stakeholders about their chosen measures

  • don't micro-manage performance - do give people the time to use their measures to understand their performance and take the initiative to improve it themselves

  • don't blame people for poor performance results - do encourage people to analyse the causes, take corrective action and learn from this

  • don't assume that performance measurement is about control - do believe that performance measurement is about connecting people to something meaningful for themselves as well as the organisation (better control is a by-product of this)

How people respond to performance measures has a huge amount to do with how they connect themselves to a bigger picture. I'm curious about your ideas on this notion. Do you agree? Do you disagree? Do you have more ideas for how to create and hold space for people to buy-in to performance measurement? If you do have something to say, please send me an email!

#33 Compared to What?

One of my favourite authors on subjects relating to performance measurement is Edward Tufte, who has written many books on the visual communication of information, including statistical information as it pertains to decision making (all his books, courses, musings and such are at www.edwardtufte.com). Why is he one of my favourite authors? He has extensive knowledge, he communicates this knowledge incredibly and entertainingly well, and he draws on many intrinsically interesting historic cases to illustrate his points. A quick browse of his website will confirm that for you.

[read my review of one of Edward Tufte's books, "The Visual Display of Quantitative Information", at amazon.com]

As Tufte puts it, "the deep, fundamental question in statistical analysis is Compared with what?" Information only has meaning in context, and quantitative information in particular runs such a high risk of misinterpretation in the absence of context. There are several types of context for performance measures to help you mitigate this risk of misinterpretation, three of which we discuss here:

the context of history

The easiest way to present your performance measures with some context is to add as much historical data as you have available (within reason) each time you report the measure. For example, if you are measuring things like revenue, expenses, profit, order cycle time, on-time supplier delivery, outstanding bills, rework, and so on, then try to report at least 20 points of historic values for such measures. Even less frequently measured results can benefit from the context of history, for example, if you have been running customer satisfaction surveys for a few years, then report your current customer satisfaction rating along with all the satisfaction ratings from previous years. And I can hear you say, "but the survey we use now is different from the one we used to use!" So we need an additional type of context...

the context of changes

There is no reason why you can't add to your graphs events such as when a customer survey was redesigned, or when a new product stream was launched, or when the ordering process was streamlined, or when you moved from ad hoc purchasing to formal supplier agreements. These events are markers in history that usually correlate to a sudden or gradual change in the level of your performance results. For example, after the new product was launched you may well have seen revenue start to climb, or after the ordering process was streamlined you probably saw a sudden "step change" reduction in the order cycle time. Adding events to your performance graphs can help you and others interpret why specific changes in the level of performance occurred. But of course, how do you know which events to put on your graphs? A little bit more context...
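Marking events on a graph doesn't need special charting tools to illustrate. Here is a minimal text-chart sketch, with invented figures, of annotating the streamlined-ordering-process example against a monthly cycle-time series:

```python
# Monthly order cycle time in days (illustrative figures only).
cycle_time = {
    "2007-01": 12, "2007-02": 12, "2007-03": 11,
    "2007-04": 7, "2007-05": 7, "2007-06": 6,
}

# Events worth marking on the graph, keyed by the month they occurred.
events = {"2007-04": "ordering process streamlined"}

# Render a simple text chart, annotating the event at the right point
# so the "step change" in the series explains itself.
lines = []
for month, value in cycle_time.items():
    marker = f"  <-- {events[month]}" if month in events else ""
    lines.append(f"{month}: {'#' * value}{marker}")

print("\n".join(lines))
```

The same idea applies in any charting tool: store events alongside the series data, so every report draws the markers automatically rather than relying on someone's memory of what happened when.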

the context of causation

Some of our strategies or initiatives will have an impact on our performance results, and others will not (even if we intended them to). Some trends or patterns of behaviour in the market or in our industry or societies will also have an impact on our performance results, and others will not (even if we expected them to). The only factors that will influence our performance results are those that have a causal relationship with those results. For example, educating customers on how to use our order form may have been born from the hope of reducing errors that hold up the ordering process, but in reality had little impact. However, redesigning the order form to make it faster and more intuitive from our customers' point of view had the impact we intended, because it addressed one of the root causes of errors on orders. Correlations between the implementation progress of your initiatives and the changing results in your performance measures can give you clues about which initiatives are working, and which may not be. And that brings us to the last type of context for performance measures that we'll discuss here...

the context of contrast

To shed some light on why certain initiatives work and others do not, it can help to analyse your data a bit deeper to find where the successful initiatives failed, and where the failing initiatives succeeded. These investigations can uncover additional factors associated with the performance results you are getting, and thus give you more clues to increase the power you have over managing that performance. For example, perhaps the education of customers in how to use our original order form worked mostly for those customers ordering technical products: they are technically minded, and found the order form easy to understand once they had been shown how it worked. But the majority of customers are not technically minded - most of our products are designed to make things easy for our customers, and these products attract customers that like things to be made easy for them. No education in the world was ever going to change their mindset.
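
The contrast idea above is really just segmentation analysis. Here's a minimal sketch in Python, using entirely hypothetical order-error counts, of how you might contrast results between customer segments to see where an initiative actually worked:

```python
# Hypothetical data: order-error counts per customer segment,
# before and after the customer-education initiative.
from collections import defaultdict

orders = [
    # (customer segment, errors before, errors after)
    ("technical", 12, 4),
    ("technical", 9, 3),
    ("non-technical", 15, 14),
    ("non-technical", 18, 17),
]

before = defaultdict(int)
after = defaultdict(int)
for segment, b, a in orders:
    before[segment] += b
    after[segment] += a

# Contrast the segments: the initiative may look like it "worked"
# overall, when really it worked very well for one segment and
# barely at all for the other.
for segment in before:
    reduction = (before[segment] - after[segment]) / before[segment]
    print(f"{segment}: {reduction:.0%} reduction in order errors")
```

On this made-up data, the technical segment shows a 67% reduction in errors while the non-technical segment shows only a 6% reduction - exactly the kind of contrast that explains why an initiative succeeds in one place and fails in another.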

in conclusion

These are not the only forms of context you can surround your performance measures with, but I do hope they give you something to play with. And I'm very curious about your own ideas and examples of how you've used context to improve the rigour of your performance analysis and decision making. If you do, please send me an email!

#32 Are Your Measures Informing or Deforming Your Decisions?

For some, the sound of the word "statistics" coming from someone's mouth impels them to run far and fast in the opposite direction. Our society seems almost proudly fearful of anything to do with mathematics, especially its branch of statistical analysis. "Lies, damn lies and statistics!" they scoff. What a horrible foundation on which to build adequately informed decision-making. It matters particularly for performance measurement: because measures have such a potent influence on business decisions, the way performance data is often analysed produces a grotesque parody of useful and usable information.

What can we do about this? One simple step in a good direction is to be aware of how different types of performance data analysis relate to the different types of performance management questions we ask. How does your organisation compare to the following suggestions?

business question 1: Is this performance result getting better (or worse, or not changing)?

For example:

  • is the accuracy of our bills improving?

  • is cycle time reducing?

  • is our cost of inventory stabilising?

  • are we getting better at retaining our best staff?

Analyses that can help answer these questions:

  • use trend-over-time graphical methods, such as line charts, run charts and statistical process control charts

  • avoid linear trend lines - in the vast majority of cases they explain very little of the pattern of change, and rarely is anything so simple it can be explained by a straight line

  • avoid tables comparing this month to last month - two points of data are totally inadequate for accurately evaluating change, due to a natural phenomenon known as variation - you need a good time series of around 20 points
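
To illustrate the point about two data points versus a time series, here is a minimal sketch (with entirely hypothetical cycle-time data) of the basic run-chart logic: for a stable process, roughly half the points sit above the median and half below, so any single month-to-month change is mostly just natural variation:

```python
# Hypothetical data: around 20 months of order cycle time (days).
# The series wobbles around a stable level - no real trend.
import statistics

cycle_time_days = [11, 9, 12, 10, 13, 9, 11, 12, 10, 10,
                   12, 9, 10, 13, 11, 10, 12, 9, 11, 10]

median = statistics.median(cycle_time_days)

# Run-chart view: for a stable process, roughly half the points sit
# above the median and half below - no evidence of real change.
above = sum(1 for x in cycle_time_days if x > median)
below = sum(1 for x in cycle_time_days if x < median)

# Two-point view: the final "this month vs last month" change looks
# like a one-day improvement, but it sits well inside the series'
# normal variation - a table of two numbers would mislead us here.
last_change = cycle_time_days[-1] - cycle_time_days[-2]
```

On this made-up series the median is 10.5 days, with exactly half the points above it and half below - so the apparent one-day "improvement" in the final month is just noise, which only the time series view reveals.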

business question 2: What are the main reasons this result is happening?

For example:

  • what are the main types of errors on bills?

  • what are the 20% of problems that cause 80% of our cycle time blowouts?

  • what is the total cost of each of the slowest moving inventory items?

  • what are the reasons that staff leave us?

Analyses that can help answer these questions:

  • use a bar chart of causal factors or reasons

  • even better is a Pareto chart, with the bars ordered from largest to smallest (most visually effective when the bars are horizontal rather than vertical)

  • avoid tables of numbers as they are much more difficult to interpret than the visual impact of charts

  • avoid pie charts, as they are designed to compare a part with its whole, and they encourage misleading visual comparisons between the slices
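
The Pareto logic behind these analyses is easy to sketch in code. Here's a minimal example, using hypothetical billing-error counts, that orders the causes from largest to smallest and finds the "vital few" behind roughly 80% of the errors:

```python
# Hypothetical data: counts of billing errors by cause.
error_counts = {
    "wrong customer address": 120,
    "missing purchase order number": 310,
    "incorrect pricing": 95,
    "duplicate line items": 40,
    "late meter reading": 435,
}

# Pareto order: largest cause first (the order the chart's bars take).
ordered = sorted(error_counts.items(), key=lambda kv: kv[1], reverse=True)

# Walk down the ordered causes, accumulating their share of the total,
# until we've covered roughly 80% of all errors - the "vital few".
total = sum(error_counts.values())
cumulative = 0
vital_few = []
for cause, count in ordered:
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.8:
        break
```

On this made-up data, the top three of the five causes account for 86.5% of all errors - the handful of problems worth attacking first.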

business question 3: Is this result really related to that result?

For example:

  • is the extent of errors on bills related to the workload of our billing staff?

  • what is the best lead indicator of cycle time: total throughput, customer ordering lead time or supplier on-time performance?

  • if we increase the percentage of inventory items on auto-order to 60%, what percentage change could we expect in inventory costs as a result?

  • to what extent is employee turnover related to employee job satisfaction?

Analyses that can help answer these questions:

  • use regression analysis to get a quantitative model of the relationship between two measures; it is also useful for predicting changes in one measure as a result of changes in another

  • use correlation analysis to get a measure of the strength of the relationship between the two results (called a correlation coefficient)

  • more simply, use a scatter plot of the two measures

  • it is sometimes okay to plot the two measures as time series on a line chart and look visually for direct or lagged correlations in their patterns over time

  • avoid relying on gut feel and hearsay - any relationship needs to be objectively tested

business question 4: How big is this result for us, compared to the same result for them?

For example:

  • what is our bill accuracy level like across each of our product streams?

  • how does our cycle time compare to our competitors?

  • what is the relative cost of each category of inventory we hold?

  • is employee retention different between our departments?

Analyses that can help answer these questions:

  • use vertical bar charts, with the bars ordered in a way that is logical to the classes or categories you are comparing

  • avoid pie charts and tables, for the same reasons listed above

So there are at least two things you could start doing now to ensure that you are analysing your performance data in the most useful way:

1) work out which types of business questions you're asking (or need to ask)

2) choose the types of analyses that are best suited to answering those questions

It's far more worthwhile to use the simplest analysis method well than to use a more sophisticated analysis method poorly.

#31 Performance Conversations

We all know that performance measures aren't worth the paper they're printed on if they don't get used to drive performance improvement. But for many, there is a great leap from having performance reports to actually putting them to good use, and that's because the conversations people have to interpret performance results don't carry the conviction they really need. Here are some samples of conversation snippets I have heard over many years of performance measurement coaching and consulting. Which level does your organisation best fit?

signs of a poor performance conversation

Some of the signs that your performance conversations are next to a complete waste of time include hearing statements like these being made by those participating in the conversation:

"I think this project was a great success because last week I was talking to so-and-so and they said..."

"Well we did have to face quite a few challenges, and in the context of that, we've actually done quite well."

"You can see that this initiative was worthwhile because our annual customer survey result shows improvement."

"No we didn't really have any good measures, because it really isn't possible to measure this."

"Yes but our goals just weren't the right ones, and we had to change tact along the way."

"You know, it's a very difficult goal we set, and we really didn't think we'd achieve it. But we're heading in the right direction."

"Everyone else is suffering in this volatile environment. We're not doing as badly as some."

"I just know it worked. I know the area and I can see things changing."

signs of a constructive performance conversation

Your performance conversations are faring a little better than those above when you can hear a few comments like these being made (and listened to):

"Okay, so let's take a look at the measures we designed for this project and see what they're doing."

"A few people did complain about the change in our process, but when you look at the survey results, the majority of people have said they like it better than before."

"These measures are showing some improvement over the past few months, and we believe it's due to our new initiative."

"We intended for costs to reduce, but we just didn't seem to get that to happen. Maybe we should check that the project was implemented properly..."

"In hindsight, we set way too difficult a target without really knowing what it would take to reach it. And not reaching it caused some cynicism too."

"So it seems that we set some goals that were really outside our control to achieve, so let's focus on the parts we can still influence."

signs of a strong performance conversation

And great performance conversations show signs of people talking like this:

"Four of our project outcome measures are showing statistically significant improvement, and the other three are showing no significant change at all. The project lead measures suggest this is because our training is working, but we still haven't managed to make the system available to everyone."

"Just as our lead indicators suggested a few months ago, we are starting to see performance stabilise around our targeted level."

"The pilot areas where we introduced the new work procedure are consistently showing 23% higher productivity than in the areas where we are still using the old procedure."

"Yes we are getting higher satisfaction ratings, but even our competitors are too, and they haven't tried to improve anything. Correlation analysis shows it has more to do with recent media coverage in our industry than with our new incentive scheme."

"No, there is very little evidence to suggest this project is working. Costs haven't changed, and cycle time is still averaging around 10 days. It's time to hit the pause button, find out why it isn't working and either pull the pin or redesign the project."

"It was unfortunate that fuel prices rose so suddenly when we were trying to reduce costs, but we re-scoped the project to also find ways to reduce our mileage and dependence on fuel."

"So we just identified an important question we can't answer with the data we have. What information will help us answer this, and how can we get it?"

Welcome to the mezhermnt Handy Hints blog!

G'day, and welcome to the official blog for the archives of the free ezine "mezhermnt Handy Hints". It's my twice-monthly subscriber-based email newsletter about business performance measurement - specifically featuring tips to make it easy, rigorous and even fun!

First things first - how do you pronounce "mezhermnt"? Well, years ago I had the brainwave (which I might be starting to regret now) of spelling "measurement" phonetically!

I hope you enjoy reading these short articles. I'm trying to keep them conversational and practical, so be sure you give me lots of feedback about the degree to which I am actually achieving that goal!

Smiles,

Stacey Barr
the Performance Measure Specialist
http://www.staceybarr.com
http://www.mezhermnt.com