Wednesday 11 July 2007

#50 Six Performance Measure Facilitator Attributes

Over the last five or so years, an ever-increasing number of organisations have been creating a new role in the corporate office: the Performance Measurement Officer. Actually, the title of this role varies from organisation to organisation, and so does where exactly in the organisation structure the role is placed.

Titles for performance measure facilitator positions have included Performance Measurement Officer, Performance Measurement Director, Manager Performance Measurement, Corporate Planning and Performance Reporting Officer, Corporate Performance Management Coordinator and Manager Planning and Performance.

Most often the performance measure facilitator sits within the corporate planning team, but sometimes they are placed with the information services team or even somewhere in the human resources department.

The one thing that is consistent, however, is the thing this person is responsible for: to facilitate the design, reporting and use of performance information in decision making about organisational results and improvement, usually across the entire organisation. This calls for some very specific attributes, and the following six should be considered the bare minimum.

attribute #1: intimate understanding of the organisational planning process

Without a very detailed understanding of how the organisation does its strategic planning, and cascades this strategic direction down into tactical and operational plans, the performance measure facilitator will struggle to assist managers and teams to focus on measuring what matters most. Knowing how to integrate performance measurement with the planning process ensures everyone is measuring the results that will most likely lead to the organisation fulfilling its strategic direction and achieving its vision.

attribute #2: a working knowledge of several performance measurement frameworks

If a performance measure facilitator can only claim knowledge of the Balanced Scorecard, then the organisation faces the risk of having its strategy too quickly packaged into a model that may not be the most appropriate. They need to know how to apply a range of frameworks (e.g. the Performance Prism, Triple or Quadruple Bottom Line, Six Sigma Business Scorecard, EFQM or ABEF or Baldrige models) to assist managers and teams to decide what types of things to design measures for.

attribute #3: experience with at least one performance measure implementation process

There are far more performance measurement frameworks out there than performance measure implementation methodologies (e.g. six sigma and PuMP)! A performance measure facilitator worth their salt will have experience with at least one step-by-step process for designing and implementing measures, and will be continuously on the lookout for emerging methodologies, or will keep developing and fine-tuning one that works for the organisation.

attribute #4: basic quantitative skills for creating and reporting performance measures

While they certainly don't have to be a statistician, the performance measure facilitator does need to be comfortable and capable of designing simple data collection processes, manipulating and preparing data for analysis, performing simple analysis calculations (such as percentages, averages, ratios and standard deviations), choosing and formatting charts that clearly announce the true signals in the data, and validly interpreting those signals.
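
As a rough illustration of the level of quantitative skill involved, here's a minimal Python sketch of those simple calculations; the delivery figures are invented for the example.

    from statistics import mean, stdev

    # hypothetical monthly delivery figures
    deliveries_on_time = [112, 98, 105, 121, 99, 108]
    deliveries_total = [130, 120, 118, 140, 115, 125]

    # percentage of deliveries on time, month by month
    pct_on_time = [100 * ok / total
                   for ok, total in zip(deliveries_on_time, deliveries_total)]

    print([round(p, 1) for p in pct_on_time])
    print("average % on time:", round(mean(pct_on_time), 1))
    print("standard deviation:", round(stdev(pct_on_time), 1))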

attribute #5: change management skills that are second-nature

Performance measurement is rarely fun and enticing; more often it is threatening and hard work. The successful performance measure facilitator will know this, and will be so well equipped with at least some basic change management techniques that they find it almost second nature to establish the support of leaders, encourage ownership and buy-in, make the reason for change clear and communicate very well to all kinds of people.

attribute #6: intermediate project management skills

Often the performance measure facilitator is running several areas of the organisation through the performance measurement process at any one time. And especially when they don't have a large enough team to meet the demand for performance measures throughout their organisation, very strong project management skills can keep them focused on the priorities and keep everything else as organised as possible.

#49 The First Performance Conversation

Are you so busy that you battle to find time to have the kind of conversation with people that absorbs your full attention? The kind of conversation where you're listening to them with your eyes and ears and speaking to them from your heart? Do you instead write them emails, speak in bullet points and hope that when you call their phone you'll go straight to message bank so you can leave a concise message without getting caught up in small talk?

Are you writing your business goals and "communicating" them to everyone through email and presentations? Is "consultation" when you run some brainstorming workshops so people feel that they have participated (irrespective of what you do with their ideas)? Then you are very likely still having trouble getting people to understand and buy in to your strategy, performance measures and performance improvement.

Emails, brochures, PowerPoint presentations, strategy documents and vision/mission posters fail to get people excited about organisational performance. They consist of words and maybe a few images that are usually too vague and too bland to paint colourful and animated visions in the minds of those that read them. These artefacts of modern organisational strategy are always political: designed more to avoid provoking those that would oppose the strategy than to evoke those that would bring it to life.

When people read things that are written in typical management-speak, what happens in their minds, honestly? They can jump to their own conclusions about what "efficient, effective and productive best practice processes" look like, or they can slide deeper into cynicism or learned helplessness, or they can keep on keeping on, oblivious and unresponsive to any change in organisational direction. Not buying in, not owning it, not seeing their own aspirations and values in it.

Remember: staff usually have no knowledge whatsoever of the conversations that were had before the goals were written (and polished and rewritten and polished some more). The seven strategic objectives or the five critical success factors are just the sanitized remains of what probably started out as a very rich, emotive and inspirational dialogue about the things that really matter right now for the organisation. And here lies the secret to getting staff to buy-in to strategy: giving them that same chance to engage in rich, emotive and inspirational dialogue about what matters most right now for the organisation.

When you take the time - and it need only be one hour each month - to facilitate a conversation among staff about what the organisation should look, sound and feel like, then you'll have started the transformation. Stimulate this conversation with prompts like these:

  • What results are implied by our goals?
  • If we were already achieving our goals, what would we notice was different to how things are now?
  • What are some of the things that our team does that directly influence how the organisation's goals are achieved?
  • Are there some things our team does that impact on other teams' performance?
  • What are the most important results that we should be trying to produce or improve?

This is the kind of conversation that should precede any other conversation about performance with your staff (especially the individual performance management or appraisal conversations). A leader has no right to expect staff to perform in a way that improves organisational performance if that leader has failed to make space and time for everyone to clearly and colourfully paint in their minds a picture of that place in the future they are collectively trying to create.

#48 Six Tips for Naming Measures

What's in a name? Well for performance measures, there's a lot in how they are named. Different organisations, in their performance measure experiences, have helped me see that what we call each of our performance measures can have a big impact on how useful those measures are. Here are six tips I'd recommend you consider when you want to formalise a particular measure in your organisation (you don't have to use them all, though):

  • unique name
  • accompany with a description
  • motivating language
  • industry naming standards
  • 5 words or less
  • leave the target out

tip #1: give each measure a unique and specific name

A transport company I have worked with measures hundreds of things. One of them is the number of orders for deliveries. A pretty straightforward measure, you might think. Except that depending on who reports it, it is called different things, so users of the reports never know exactly what they are looking at.

Make sure the adopted name is the one that is used wherever and whenever that measure is reported.

tip #2: accompany every measure name with a description

Have you ever been frustrated by a report where a name like "Customer Loyalty Index" sits above a chart, and you have no idea what the numbers mean?

Use a sentence that describes what your measure is, giving more information than any name can. You might like to include things like the type of statistic (e.g. average or percentage), for what population (e.g. all employees versus non-managerial employees), and what the construct of the measure means (e.g. have attained all competencies associated with their current roles).

tip #3: use engaging and motivating language

I've recently worked with an organisation whose people are very creative, and they inspired me with their approach to naming measures: they used very emotive exclamations as measure names. For example, "You can't keep me away!" as the name for a measure of customers coming back for more.

Play with using affirmations, catch cries, headlines or other sensory rich statements to name measures.

tip #4: adopt industry naming standards

In the procurement industry, how fast inventory is turned over is a commonly used measure, and most often, it is referred to as 'Inventory Turn'.

If you're using measures that are accepted more widely in your sector or industry, adopt the naming conventions that are already accepted.

tip #5: use five words or so in the name

Too few words in a measure name can be as bad as too many. "Customer Index" says virtually nothing, whereas "The percentage of customers that either strongly agreed or agreed that our service is better than any of our competitors" is too long. A balance might be struck half way between the two: the measure name of "Compared to Our Competitors" with a description matching the longer statement above.

Aim to write your measure names in around five words, and fine-tune from that starting point.

tip #6: leave out the target

"Reduce waste going to landfill by 20% next year" is not a measure, but a goal (or objective if you prefer). The measure is actually the amount of waste going to landfill. The rest of it is really the target and timeframe.

Because measures often outlive their targets (that is, a single measure may have several targets throughout its lifetime, each subsequent target encouraging further improvement), name your measure before you frame it in a goal or objective statement.
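
One way to make this separation concrete: keep the measure and its targets as distinct records, so new targets can be attached over time without renaming the measure. A minimal sketch, with invented names and values:

    from dataclasses import dataclass, field

    @dataclass
    class Target:
        value: float       # the level to reach
        by_when: str       # the timeframe

    @dataclass
    class Measure:
        name: str          # around five words, no target in it
        description: str   # the fuller sentence the name can't carry
        targets: list = field(default_factory=list)

    waste = Measure(
        name="Waste Sent to Landfill",
        description="Total tonnes of waste our sites send to landfill each month.",
    )
    # 20% below a hypothetical 100-tonne baseline, within a year
    waste.targets.append(Target(value=80.0, by_when="June 2008"))
    # a later, more ambitious target is added without touching the measure
    waste.targets.append(Target(value=60.0, by_when="June 2009"))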

the advantages of well-named measures

Irrespective of whether you take on these ideas for naming measures or not, you'd have to acknowledge that when measures are named well, they get higher recognition, greater ownership, and far less confusion. So thoughtful naming of your measures is one little thing you can do toward simplifying an activity that probably already causes you more rework than you dare to think about!

#47 Measure Your Measurement

How do you know whether all your efforts at measuring organisational performance are worthwhile? Do you know what impact your measurement system is having on the very things it's there to help improve (which is organisational performance, in case it's not obvious)? What we're talking about here is measuring the performance of your performance measurement process.

Yes, it probably feels a bit like your brain is bending back onto itself, but there are some very good reasons why measuring your measurement process is so worthy a cause. You can do things like:

  • better convince people that performance measurement is worth doing

  • improve your measurement process like you'd improve any other process

  • evaluate different approaches to performance measurement to find the best

  • make the accountability of your Planning & Performance team more objective

But first, you'll need to think about how you go about measuring a performance measurement process. A sensible place to start is to decide what results you most want from measuring performance, then design measures for those results. And to give you a head start, here are some ideas about the main types of results you should consider.

result #1: people understand their role in achieving organisational goals

Performance measurement is (or should be) connected to the planning process like your ears and eyes are connected to your head. When measurement is done properly (that is, measures are designed and not brainstormed), it makes the goals of the organisation much less fluffy and much more tangible. And it puts persuasive pictures in the minds of people of the future they're going to help make real.

result #2: people find it easy to measure what matters

You know that the best way to get buy-in from people is to get out of their way, don't you? People will only love their performance measures if they conceived them and brought them into the world themselves. To do this, obviously they need to know what steps to take to design measures and how to decide what is worth designing measures for and what is not.

result #3: the measures are used (to improve organisational performance)

A great measurement process isn't about selecting measures. It's about bringing them to life and making sure they get used. You want people using them regularly, using them constructively, using them to test their prior decisions and actions, and using them to prioritise where they spend money for the good of the whole organisation.

result #4: organisational goals are achieved faster

A reasonable performance measurement process will mean you are measuring your goals. But an outstanding performance measurement process helps you achieve those goals more quickly. You'll also want to make sure that there aren't any unintended consequences, as there often are when you're trying to do things faster.

result #5: the measures used create more value than they cost

The return on investment of your performance measurement process is just as important as the return on any other investment your organisation makes. If you've got a fabulous performance measurement process, then use of the measures has produced more savings or other value for your organisation than the cost of creating, reporting and using them.

how important is it to you to know?

If you want to know how good your performance measures are, really, a survey of staff or stakeholders just can't cut it. You have to put more elbow-grease into it than that, which means you must decide what impact you want performance measurement to have, and design measures that give you the evidence of that impact, specifically.

#46 How To Design Great Measures

Are you guilty of using the following methods as your approach to measure selection:

  • brainstorming with your team in a one-hour session during your two-day planning workshop?
  • trawling the internet or other places to find out what others like you measure?
  • asking your IT guy or gal what data you have and creating measures from that?
  • hoping someone will tell you (maybe a consultant or a stakeholder)?

These aren't approaches to measure selection. They are just ways to gather ideas for what to measure. None of these methods include any kind of overt and deliberate evaluation of which measures are the best measures. And you're probably wondering why your organisation has so many meaningless measures! Plus, on the flip side, these methods may have left you high and dry without any viable options for measuring some of those less tangible results like culture or sustainability or engagement or confidence.

If this is your burden - having no logical, practical way of choosing or designing the measures that can truly convince you that you're making the differences you need to in the world - then here are three tips that might make your life a little easier:

  1. measure the result, not the action
  2. what you can observe and describe, you can measure
  3. don't measure it just because it's easy

measure the result, not the action

"Ensure that people with a disability do not experience discrimination and have their particular needs for services and support acknowledged and met." Such an inspirational and noble goal is so easily cheapened by a measure like "Establishment of an effective Advisory Council on Disability". Such measures track the activity associated with the initiatives hypothesised to produce the results implied by such wonderful goals. They can't let us know how much or how frequently people with a disability experience discrimination.

No doubt you're going to want to monitor activities in your organisation, but what meaning does that have unless you are first monitoring the results those activities exist to produce or influence?

what you can observe and describe, you can measure

Why is it so hard to measure results like "achieving equi-marginal efficiency for trade-exposed industries on a least-cost trajectory within a general equilibrium model"? The answer is not because no-one has thought of the right measure yet! If we go back to basics, measuring is about observing and collecting specific information about something. If you don't know how to recognise when that something is happening, you can't know where and when and how to collect information about it, can you?

So before you can measure "achieving equi-marginal efficiency for trade-exposed industries on a least-cost trajectory within a general equilibrium model", you need to know what equi-marginal efficiency and least-cost trajectories look and feel and sound like when they're happening. Even more, you need to be able to describe it in words that are evocative, words that conjure rich and detailed and shared pictures in the minds of people before they select or design measures.

don't measure it just because it's easy

It's easier to select more viable candidates for measuring a result when you describe it richly and without weasel words. In fact, you can end up with so many candidate measures that you might be tempted to pick those that are the easiest to bring to life. You already have the data, it would be a waste not to use it, no-one's got the time to collect more data. But you'd be falling into one of the deepest traps of organisational performance management: not making sure that your organisation has the data it really needs.

You'll need to think about more than just the feasibility of each potential measure in deciding on the best ones. Measures are meaningful when they have strong relevance to the result you want them to evidence.
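
To make that balancing act concrete, here's a hedged sketch of one way to shortlist candidates: rate each on relevance and feasibility, then rank with relevance weighted first. The 1-to-5 ratings and candidate measures are invented for illustration.

    # candidate measures with invented relevance and feasibility ratings (1-5)
    candidates = [
        ("% customers rating our service above competitors", 5, 3),
        ("Customer Index", 2, 5),
        ("Number of survey responses received", 1, 5),
    ]

    # rank by relevance first, feasibility second
    shortlist = sorted(candidates, key=lambda c: (c[1], c[2]), reverse=True)
    for name, relevance, feasibility in shortlist:
        print(f"{name}: relevance {relevance}, feasibility {feasibility}")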

use your brain when you design your measures

Three basic steps to better measures: focus on the result and not the activities, articulate clearly what that result looks like, and shortlist your potential measures by balancing feasibility with strength of relevance. Yes it will take a little more time than you have probably been giving to measure selection, but it will save you loads more time than you have probably been wasting managing with the wrong measures.

If you want to get some step by step help (plus examples and templates) to design your measures, visit http://www.pumphowtokits.com/measuredesign.htm

#45 Why Do We Measure, Anyway?

Why do we measure organisational performance? The first answers that pop into your head might be:

  • you can't manage what you don't measure

  • what you measure gets done

  • we have to be accountable

  • they have to be held accountable

  • they told us to

These aren't the answers to the question this article asks. The reasons why so many organisations - particularly high performing organisations - measure things are more authentic, more fundamental and more motivating than those listed above.

to avoid knowing too late

At a government agency executive meeting I attended, participants were evaluating whether an end of year revenue target had been met. No it hadn't, and they had lots of reasons why, most of which were about how the market was changing and how all their competitors were facing similar revenue downturns. If they'd had this kind of conversation more frequently throughout the year, perhaps they would have had time to create some strategies to better understand what was happening in their market and find new avenues of revenue generation.

Annual evaluation, or end-of-project evaluation, is always too late to give you choices about changing your course. Are targets just about playing numbers games, or do they really represent important changes to ensure future health? The above organisation is no longer in existence. Perhaps if they'd treated their revenue target more seriously, they might still be around.

Frequently reported measures can give us early warning signs about whether what we are doing is actually making the differences it's supposed to, early enough that we have the chance to modify or stop doing it if the intended results are not forthcoming.

to avoid knowing too little

My friend works in a wholesale technology business that operates out of two cities over 1000km apart, with a staff of about 25 people and approximately 50 product lines. The directors of this company only measure typical balance sheet stuff. Their staff complain incessantly about product returns, warranty service workload and availability of spare parts. Do they measure any of these non-financial things? No. They reckon they don't need to, because it's a small business and they can see what's going on by walking around. But the same simple problems that plagued them six years ago are still plaguing them.

Can you be everywhere at the same time, all the time in your organisation? Of course not. Our physical senses (sight, hearing, touch and so on) can't absorb, or even reliably detect, most of what goes on in our organisations well enough for us to understand it.

A small suite of performance measures helps us know far more about the health of our organisation's processes, with far more reliability, than our own eyes and ears ever could.

to know the right things

A manager in the rail freight industry faced a typical problem for that industry several years ago: they were running out of capacity to move all their customers' produce. The typical solution to this problem is to invest in more rolling stock. Millions and millions of dollars' worth. But he didn't take the typical solution. Instead, he measured and studied the way the system worked until he discovered that it wasn't how many wagons you had, but how quickly you could cycle those wagons through, that determined the capacity. He didn't need to buy new wagons: he found a way to cycle the existing wagons through the system much faster, ending up with even more capacity than they actually needed.

How well do the decision makers in your organisation learn about what works and what doesn't work in fixing performance problems? Trial and error? Following traditional, already-proven strategies? How much real learning do they do about the real leverage points of unacceptable performance?

Well-chosen performance measures that monitor the root causes of the most important organisational health results are measures that focus us on the things we really need to know. They help us break away from knowing things that really don't make much of a difference.

why do you measure performance?

If you aren't measuring to know enough about the right things, and frequently enough to do something about them, then perhaps you're not actually measuring performance?

#44 The Social Life of Performance Data

One of my clients is drowning in dozens of reports collectively containing over 100 measures. Where he expects two measures from separate reports to have the same values, they don't. Where he expects a measure's value to be accepted by his customer, it is disputed. Where he thinks he's looking at the right measure to answer his question, someone warns him no. The tangle of reports and measures is unwieldy, but has become the dogma of decision-making. Untangling them all into a streamlined sensible suite of reports is not as simple as setting up a swanky scorecard.

Data quality worries most users of performance measures. There are an obscene number of reported measures that only generate dialogue about how unreliable the underlying data is. But what can you do about the quality of performance data? I've heard some performance measure experts proclaim that performance data must have 100% integrity. Hogwash! It never will, and here are some of the reasons why.

performance data is gathered by people

A vast proportion of our performance measures rely on data that has been touched at least once by human hands. People design data collection forms and processes, people fill out those forms, people enter the data from the forms into computer databases, people extract and manipulate data out of databases, people filter and analyse the data to produce performance measures.

So human error and misunderstanding, ambiguity or absence of clear data definitions, ad hoc data collection and analysis processes, and vague measure definitions (how measure values are calculated) all contribute to the low confidence people have in reported measures.

How many of your performance measures are defined in enough detail to avoid miscalculation or use of the wrong data? How many of your data collection processes are documented consistently and ingrained into work practices? How many of your people that collect data have been trained to do it according to the documented process? Does your organisation have a data dictionary that is available outside of the IT team?
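
For concreteness, a measure definition detailed enough to avoid miscalculation might capture at least the following. This is an invented example, sketched as a simple Python record:

    # a hypothetical data dictionary entry for one measure
    customer_loyalty_index = {
        "name": "Customer Loyalty Index",
        "description": ("Percentage of surveyed customers who rate their "
                        "likelihood of repurchasing as 4 or 5 out of 5."),
        "statistic": "percentage",
        "population": "customers surveyed in the quarter",
        "numerator": "respondents scoring repurchase likelihood 4 or 5",
        "denominator": "all survey respondents",
        "data_source": "quarterly customer survey",  # invented source
        "owner": "Planning & Performance team",
    }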

people know that performance data can sting

Unfortunately many of our organisations are still carrying the burden of a blame culture. People can still remember (or are still experiencing) the use of data as a big stick to humiliate, take resources away from, demote or sack the so-called poor performers. We know in this kind of environment people swing into self-preservation mode (it's only natural) and weigh up their choices: cop another whack with the data stick or sweep that nasty data under the rug?

Managers and decision-makers need to earn back the trust of employees that data will not be used against anyone. Performance measures and data need to be seen more often being used to honestly assess performance of systems and processes, more often being used to explore root causes and learn from the past, more often being used to stimulate dialogue about how the future can be influenced.

How many of your managers and decision-makers look for root causes of undesirable performance in the systems and processes (as opposed to the people)? How many performance measures are supported by diagnostic measures of causal factors (as opposed to just slicing and dicing the data into smaller fragments)? Have you got an automatic improvement process that kicks in when a performance measure reveals a problem?

data has no meaning apart from its context

An event must occur before data can be produced. And the data is the product of the event being observed and interpreted and coded. When a person is doing the observing (as opposed to a machine such as a temperature gauge), they unconsciously - and occasionally consciously - apply filters that affect how the event is interpreted and how it is coded.

These filters are influenced by beliefs the person has about the event, their interactions and relationships with others around them, their physical and mental health on the day, what they are thinking about at the time, their values and priorities regarding their work, and the list goes on.

Have you explored the context around the types of performance data you collect? Have you thought about the factors that might influence the way someone interprets and codes what they observe when they are capturing performance data? Do you have guidelines and examples in your data collection instructions to help data collectors capture quality data?

don't just rely on technical solutions to data integrity problems

Yes, there's certainly more to the social life of data than the three parts discussed here. Most of these issues can be discovered and dealt with through better communication among the people involved in data capture: from designing measures to developing data collection processes, to collecting data, to storing and analysing it. Don't rely just on the technical solutions - think through what needs to change in the social systems surrounding data. And be concerned more with how much integrity your decisions can survive with, as opposed to 100% integrity.

#43 Three Approaches to Traffic Lights

Traffic lights – the decoration de rigueur for performance dashboards and reports. Have you gotten more carried away with the decoration, than with the rigueur? Take a look at these three common approaches to traffic lights, and see if you’ve got some room for improvement.

approach 1: % difference from month to month

When this month is 10% worse than last month, the traffic light turns red. When it’s 5% worse than last month, the traffic light turns amber. When it’s 10% better than last month, the traffic light turns green. Obviously, this approach works for time periods other than a month, and for cut-offs other than 10% and 5%.
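
For concreteness, here's a minimal sketch of what approach 1 amounts to, assuming bigger values are better and using the 10% and 5% cut-offs above:

    def traffic_light(last_month, this_month):
        # percentage change from last month; assumes bigger is better
        pct_change = 100 * (this_month - last_month) / last_month
        if pct_change <= -10:
            return "red"     # 10% worse than last month
        if pct_change <= -5:
            return "amber"   # 5% worse than last month
        if pct_change >= 10:
            return "green"   # 10% better than last month
        return "no light"

    print(traffic_light(200.0, 176.0))  # a 12% drop: "red"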

Such traffic lights encourage us, usually, to ask questions like “what caused such a big difference?” In turn, such questions encourage us, usually, to find some way to explain the difference. If we’re clever, we’ll already have added a comment to the performance measure explaining that the difference is due to something outside our control. If we’re not so clever, we’ll be putting up different explanations every month, and have a list of improvement projects as long as Santa Claus’s.

There’s no advantage I can see to this approach to traffic lighting. It tends to encourage us to knee-jerk react to data, tamper with business processes or blame something we don’t have to do anything about. Time gets wasted chasing problems that aren’t there and we miss problems that are.

approach 2: up and down, good and bad

When some performance measure values increase, it’s a good thing (like revenue, satisfaction and on-time performance). There are others whose values decrease and it’s a good thing (like rework, cycle time and pollution). Combine this with whether there’s an upward change or downward change in actual performance values and you get a complex range of traffic light signals to deal with: upward change that is good, upward change that is bad, downward change that is good, downward change that is bad. This “solution” probably resulted from a confusion that erupted when upward and downward arrows were chosen as the traffic light symbols.
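
Untangling that confusion in code makes the logic plain. A small sketch that combines the direction of change with whether an increase is good for that particular measure (the examples are invented):

    def change_signal(previous, current, increase_is_good):
        went_up = current > previous
        good = (went_up == increase_is_good)
        return ("up" if went_up else "down") + (", good" if good else ", bad")

    print(change_signal(50, 60, increase_is_good=True))   # revenue rose: up, good
    print(change_signal(50, 60, increase_is_good=False))  # rework rose: up, bad
    print(change_signal(60, 50, increase_is_good=False))  # rework fell: down, good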

When we sort out the confusion, these multi-faceted traffic lights encourage us to ask questions like “what’s behind the trend?” and the trend is concluded from maybe 3 consecutive points of data. Marginally better than approach # 1, and only just.

Any system of traffic lighting that moves us away from point to point comparisons (the essence of approach # 1) is a step in a good direction. But we still risk drawing the wrong conclusions from trend analysis that is based on not nearly enough data to be valid. And does upward and downward really matter nearly as much as good and bad?

approach 3: statistically valid signals

Statistical process control is an analysis method that discerns variation that is typical from variation that signifies change has occurred. It’s like filtering the signals from the noise, something the other two approaches don’t do (they assume that any arbitrary difference is a signal, irrespective of the typical size of differences over time). The signals are defined from a set of rules that test the probability that a difference is due to just normal variability (no change) versus atypical variability (change). Signals include sudden shifts in performance, gradual shifts in performance and instability in performance.
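
One common form of this is the XmR chart, where “natural” limits are derived from the average point-to-point (moving range) variation, and only points outside those limits count as signals. A minimal sketch with invented monthly data (real XmR charts also use run rules to detect gradual shifts, omitted here):

    def xmr_limits(values):
        centre = sum(values) / len(values)
        moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
        avg_mr = sum(moving_ranges) / len(moving_ranges)
        # 2.66 is the standard XmR chart constant
        return centre - 2.66 * avg_mr, centre + 2.66 * avg_mr

    monthly = [52, 48, 50, 51, 47, 49, 50, 63]  # invented data; 63 is atypical
    lower, upper = xmr_limits(monthly)
    for value in monthly:
        verdict = "signal" if not (lower <= value <= upper) else "noise"
        print(value, verdict)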

When our attention is moved from point to point variations to patterns in variation over time, we ask questions like “what caused that shift in performance to occur at that time?” and “why is performance so chaotic and unstable?” and “what do we have to focus on improving to improve the overall average level of performance?”.

These questions seek root causes, not symptomatic causes. They lead us to find the solutions that don’t just fix next month’s performance, but fundamentally improve the baseline performance level further into the future.

#42 Techniques For Cause Analysis

Measuring performance results is a great thing to do, but understanding the causes of those results is at least as worthwhile. Understanding causes means you have information about how to exercise more influence (or control) over those results. If you want your results to improve, you've got to change the right things about the process or activity or function that produces those results.

Understanding the real causes of performance results means taking a more rigorous approach than knee-jerk reacting to hearsay, opinion or gut feel. Here are some basic techniques to help you navigate through the stages of cause analysis:

  • find the likely causes, and measure the incidence of each
  • assess the nature and size of the cause's impact
  • check for interaction with other causal factors

Technique #1: flow charting

It's impossible to do any kind of serious cause analysis unless you can actually trawl through all the factors that have some kind of potential impact on your performance result, and sift out those factors that have the most dominant impact. Flow charting the process or activity or function whose results you are measuring is a great way to systematically trawl through all the potential causes of those results. There is software available for flow charting, but hand-drawn charts are quick and easy.

Technique #2: cause-effect diagrams

After flow charting your process and identifying what can sometimes be dozens of potential causes, you can have long lists that contain duplicates and related causes. Cause-effect diagrams (or fishbone diagrams) are a great way to collate and organise potential causes as you identify them, clustering related causes together so you can more clearly see the themes, and more easily discuss the most likely causes. There is software available for cause-effect diagrams, but again hand-drawn diagrams can do the job well enough.

Technique #3: Pareto charts

When you then go and count or measure how often or how much each likely cause is associated with your results, Pareto charts can help you rank the causes and highlight those that have the biggest impact. You're now getting to the stage where you have between 2 and 5 (roughly) causal factors you may wish to learn even more about. In Microsoft Excel, just use a vertical bar chart on your data, after sorting it from biggest to smallest.
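
The same chart is a few lines in Python with matplotlib, if that's closer to hand than Excel. The cause counts here are invented for illustration:

    import matplotlib.pyplot as plt

    # invented counts of how often each cause preceded a late delivery
    causes = {"late pick-up": 42, "address errors": 25,
              "vehicle breakdown": 11, "weather": 7, "other": 5}

    ranked = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
    labels = [cause for cause, _ in ranked]
    counts = [count for _, count in ranked]

    plt.bar(labels, counts)
    plt.ylabel("Incidents")
    plt.title("Causes of late deliveries (Pareto)")
    plt.xticks(rotation=30, ha="right")
    plt.tight_layout()
    plt.show()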

Technique #4: scatter plots

When you've arrived at the few causes that have the biggest impact on your performance result, it can be useful to know just how big that impact is. Scatter plots are an easy and visual way to explore how much, and in which direction, the performance result changes when the causal factor changes. Scatter plots are one of the charts available in Microsoft Excel.
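
A matplotlib equivalent, pairing each value of a causal factor with the performance result it occurred alongside. The data points are invented, loosely echoing the rail freight example from article #45:

    import matplotlib.pyplot as plt

    wagon_cycle_days = [6.1, 5.8, 5.2, 4.9, 4.5, 4.1, 3.8]  # causal factor
    tonnes_moved = [310, 322, 350, 365, 380, 401, 415]      # performance result

    plt.scatter(wagon_cycle_days, tonnes_moved)
    plt.xlabel("Average wagon cycle time (days)")
    plt.ylabel("Freight moved ('000 tonnes)")
    plt.title("Cycle time vs freight capacity")
    plt.show()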

Technique #5: correlation coefficients

To get a more quantitative measure of the impact of a causal factor on your result, you can calculate a correlation coefficient, which will give you a value between -1 and 1 indicating the strength of the relationship between your causal factor and the result. A positive value means that an increase in your causal factor will likely lead to an increase in your result, and a negative value means that an increase in your causal factor will likely lead to a decrease in your result. In Microsoft Excel, use the CORREL function to calculate your correlation coefficient.
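
Outside Excel, the same coefficient is one call in Python (statistics.correlation needs Python 3.10 or later; numpy.corrcoef does the same job). Using the invented rail freight data from above:

    from statistics import correlation

    wagon_cycle_days = [6.1, 5.8, 5.2, 4.9, 4.5, 4.1, 3.8]
    tonnes_moved = [310, 322, 350, 365, 380, 401, 415]

    r = correlation(wagon_cycle_days, tonnes_moved)
    print(round(r, 2))  # close to -1: shorter cycles go with more freight moved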

Technique #6: regression analysis

Regression analysis goes a step further, and builds a mathematical model you could use to predict a result based on a change in your causal factor. Knowing this can help you set achievable targets for improvement, and estimate realistically what resources you're really going to need to get that improvement. In Microsoft Excel, create your scatter plot between your causal factor and performance result, then add a trend line, with the options of showing the equation and R-squared value on the chart (the R-squared value is a measure of the reliability of the equation).
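
The same straight-line fit and R-squared can be computed with numpy. A sketch on the invented data used above:

    import numpy as np

    x = np.array([6.1, 5.8, 5.2, 4.9, 4.5, 4.1, 3.8])  # causal factor
    y = np.array([310, 322, 350, 365, 380, 401, 415])  # performance result

    slope, intercept = np.polyfit(x, y, 1)  # least-squares straight line
    predicted = slope * x + intercept
    r_squared = 1 - ((y - predicted) ** 2).sum() / ((y - y.mean()) ** 2).sum()

    print(f"result = {slope:.1f} x factor + {intercept:.1f}, R-squared = {r_squared:.2f}")
    print("predicted result at 3.5 days:", round(slope * 3.5 + intercept))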

There are certainly more statistical techniques that can help you with cause analysis (such as multi-variate regression, experimental design and analysis of variance or ANOVA), but those provided above will still bring some valuable rigour to your performance improvement efforts.