Performance-Measurement Systems -- What are the Common Design Mistakes?

In October, Dennis Sherwood published a book entitled Strategic Thinking Illustrated: Strategy Made Visual Using Systems Thinking. The book is about the behavior of systems. Systems matter because we interact with them all the time, and many of the actions we take are influenced by a system: the system of performance measures in an organization, for example, often very strongly influences how individuals within that organization behave. Systems thinking, the main subject of the book, is the disciplined study of systems, and causal loop diagrams are an insightful way to represent the connectedness of the entities from which any system is composed, so taming that system's complexity.

When I spoke with Dennis this month, I asked him: “What are the most common mistakes made when designing performance-measurement systems?” Here is his complete answer:

Let me choose just one. The BIG ONE. Unintended consequences.

A (real) example. A number of years ago, the UK government was becoming increasingly concerned about how long people had to wait in the reception area of hospital Accident and Emergency Departments before they were seen by a nurse or a doctor. To address this, it introduced a performance measure: at least 98% of those arriving must be satisfactorily treated and discharged, or transferred elsewhere within the hospital, within four hours.

That all sounds eminently sensible, and the intention was to spur hospitals to speed things up.

Not long after this performance measure was introduced, a story appeared in a national newspaper with this opening paragraph:

"Hospitals were last night accused of keeping thousands of seriously ill patients in ambulance 'holding patterns' outside accident and emergency units to meet a government pledge that all patients are treated within four hours of admission."

It turns out that the "4-hour clock" only starts running when the patient arrives at the front desk of the hospital. So if the patient is still in the ambulance, that doesn't count…

The "holding pattern" enables the hospital to meet its performance measure, but with two "unintended consequences." From the patient’s point of view, nothing has changed, nor has the government’s objective been met. And as a by-product, whilst the ambulance is acting as an "external waiting room," it remains stranded, and can no longer fulfill its primary purpose.

Not only did the performance measure not drive the intended outcome, it made matters worse by reducing ambulance capacity. Oh, dear.
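The gaming mechanism in this anecdote can be sketched as a toy model (my own illustration, not from the book): because the clock only starts at the front desk, any wait beyond the target can be served in the ambulance, capping the *measured* wait while leaving the *actual* wait unchanged.

```python
# Toy model: a 4-hour target that counts only in-hospital waiting time.
# Holding patients in ambulances caps the measured wait at the target,
# so the metric is "met" even though true waits are unchanged.

def measured_and_actual_waits(true_waits_hours, hold_in_ambulance):
    """For each patient's true wait (hospital arrival to treatment),
    return (measured_wait, actual_wait). If hold_in_ambulance is True,
    any wait beyond 4 hours happens in the ambulance, so the measured
    (in-hospital) portion never exceeds 4."""
    pairs = []
    for w in true_waits_hours:
        measured = min(w, 4.0) if hold_in_ambulance else w
        pairs.append((measured, w))
    return pairs

def pct_within_target(pairs, target=4.0):
    """Share of patients whose *measured* wait meets the target."""
    return 100.0 * sum(1 for m, _ in pairs if m <= target) / len(pairs)

true_waits = [1.0, 2.5, 3.0, 5.0, 6.5]  # hypothetical true waits, in hours

honest = measured_and_actual_waits(true_waits, hold_in_ambulance=False)
gamed = measured_and_actual_waits(true_waits, hold_in_ambulance=True)

print(pct_within_target(honest))  # 60.0  -- target missed
print(pct_within_target(gamed))   # 100.0 -- target "met"; true waits identical
```

The actual waits in the two scenarios are identical; only the number that gets reported changes, which is exactly the "unintended consequence" Dennis describes.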

This is by no means uncommon. Performance measures are introduced to influence behaviors so as to achieve some intended result. The problem, however, is that those behaviors take place within a complex context, and unless that complexity is well understood, it is very easy to fall into the trap of a well-intentioned intervention having an unintended, and perverse, effect.

How can this trap be avoided?

"Systems thinking" can really help, for systems thinking is a very powerful, and pragmatic, way to tame the complexity of real systems, so ensuring that, for example, performance measures can be introduced that really work. That designs out "unintended consequences." Indeed, in my view, that term is just a mask to disguise the fact that whoever designed the performance measures that failed just didn’t understand the corresponding system. Don’t let that happen to you!

What do you think of Dennis Sherwood's answer? Has your organization ever instituted a measure or process designed for improvement that drove incorrect or "gaming" behavior? His book is available here.