Traffic lights – the decoration de rigueur for performance dashboards and reports. Have you gotten more carried away with the decoration than with the rigueur? Take a look at these four common approaches to traffic lights, and see if you’ve got some room for improvement.
Approach 1: % difference from month to month
When this month is 10% worse than last month, the traffic light turns red. When it’s 5% worse than last month, the traffic light turns amber. When it’s 10% better than last month, the traffic light turns green. Obviously, this approach works for time periods other than a month, and for cut-offs other than 10% and 5%.
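The rule above is simple enough to sketch in a few lines. Here is a minimal Python version, using the 10% and 5% cut-offs from the text; the "neutral" result for changes that cross neither threshold is my own assumption, since the text doesn’t say what colour those get.

```python
def traffic_light(last_month, this_month, red_cut=0.10, amber_cut=0.05):
    """Colour a month-on-month change using arbitrary percentage cut-offs.

    Assumes higher values are better (e.g. revenue). The 'neutral'
    band is an assumption: the rule only defines red, amber and green.
    """
    change = (this_month - last_month) / last_month
    if change <= -red_cut:
        return "red"      # 10% or more worse than last month
    if change <= -amber_cut:
        return "amber"    # 5% or more worse than last month
    if change >= red_cut:
        return "green"    # 10% or more better than last month
    return "neutral"      # in between: the rule is silent here
```

Note that the cut-offs are arbitrary constants, not derived from how much the measure typically varies – which is exactly the weakness discussed next.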
Such traffic lights encourage us, usually, to ask questions like “what caused such a big difference?” In turn, such questions encourage us, usually, to find some way to explain the difference. If we’re clever, we’ll already have added a comment to the performance measure explaining that the difference is due to something outside our control. If we’re not so clever, we’ll be putting up a different explanation every month, and we’ll end up with a list of improvement projects as long as Santa Claus’s.
There’s no advantage I can see to this approach to traffic lighting. It encourages us to react knee-jerk to data, to tamper with business processes, or to blame something we don’t have to do anything about. Time gets wasted chasing problems that aren’t there, and we miss the problems that are.
Approach 2: up and down, good and bad
For some performance measures, an increase is a good thing (like revenue, satisfaction and on-time performance). For others, a decrease is a good thing (like rework, cycle time and pollution). Combine this with whether the actual performance values moved up or down, and you get a complex range of traffic light signals to deal with: an upward change that is good, an upward change that is bad, a downward change that is good, a downward change that is bad. This “solution” probably resulted from the confusion that erupted when upward and downward arrows were chosen as the traffic light symbols.
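The four-way combination described above boils down to two yes/no questions. A minimal Python sketch (names are my own, purely for illustration) makes the mapping explicit:

```python
def classify(increase_is_good, previous, current):
    """Combine the direction of change with the measure's 'good' direction.

    increase_is_good: True for measures like revenue, False for
    measures like rework, where a decrease is the good direction.
    """
    went_up = current > previous
    good = went_up == increase_is_good  # change matches the good direction?
    direction = "upward" if went_up else "downward"
    return f"{direction} change that is {'good' if good else 'bad'}"
```

Spelling it out this way shows how little the up/down half of the label adds: once you know whether the change is good or bad, the arrow direction is mostly decoration.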
When we sort out the confusion, these multi-faceted traffic lights encourage us to ask questions like “what’s behind the trend?” – a trend concluded from maybe 3 consecutive points of data. That’s marginally better than approach #1, but only just.
Any system of traffic lighting that moves us away from point-to-point comparisons (the essence of approach #1) is a step in the right direction. But we still risk drawing the wrong conclusions from trend analysis based on far too little data to be valid. And do upward and downward really matter as much as good and bad?
Approach 3: statistically valid signals
Statistical process control is an analysis method that distinguishes variation that is typical from variation that signifies a change has occurred. It’s like filtering the signals from the noise, something the other two approaches don’t do (they assume that any arbitrary difference is a signal, irrespective of the typical size of differences over time). The signals are defined by a set of rules that test whether a difference is more likely due to normal variability (no change) or to atypical variability (change). Signals include sudden shifts in performance, gradual shifts in performance and instability in performance.
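To make this concrete, here is a short Python sketch of one common form of this idea: an individuals (XmR) chart, which sets natural process limits from the average moving range (using the standard 2.66 constant) and flags two widely used signals. It’s an illustrative sketch, not a full SPC implementation, and the function and variable names are my own.

```python
def xmr_signals(values):
    """Separate routine variation from signals using an XmR-style chart.

    Natural process limits are the centre line +/- 2.66 times the
    average moving range. Two common signal rules are checked:
    a point outside the limits (sudden shift / special cause), and
    a run of 8 consecutive points on one side of the centre line
    (gradual shift). Returns (centre, (lower, upper), signals).
    """
    n = len(values)
    centre = sum(values) / n
    moving_ranges = [abs(values[i] - values[i - 1]) for i in range(1, n)]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    lower = centre - 2.66 * avg_mr
    upper = centre + 2.66 * avg_mr

    signals = []
    # Rule 1: any point outside the natural process limits.
    for i, v in enumerate(values):
        if v > upper or v < lower:
            signals.append((i, "outside natural process limits"))
    # Rule 2: a run of 8 consecutive points on one side of the centre.
    run_side, run_len = None, 0
    for i, v in enumerate(values):
        side = "above" if v > centre else "below" if v < centre else None
        if side and side == run_side:
            run_len += 1
        else:
            run_side, run_len = side, (1 if side else 0)
        if run_len == 8:
            signals.append((i, f"run of 8 points {side} the centre line"))
    return centre, (lower, upper), signals
```

Unlike fixed percentage cut-offs, the limits here come from how much the measure itself typically varies, so a traffic light based on these signals only changes colour when something genuinely changed.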
When our attention is moved from point to point variations to patterns in variation over time, we ask questions like “what caused that shift in performance to occur at that time?” and “why is performance so chaotic and unstable?” and “what do we have to focus on improving to improve the overall average level of performance?”.
These questions seek root causes, not symptomatic causes. They lead us to find the solutions that don’t just fix next month’s performance, but fundamentally improve the baseline performance level further into the future.