Tools, Technologies and Training for Healthcare Laboratories

Questioning Six Sigma metrics for the Laboratory

It is always good to challenge conventional wisdom. In revisiting the assumptions of the past, we sometimes find we have evolved beyond old practices and can adopt new ones. A recent series of questions posted on the AACC listserve deserves a broader audience.

Questioning Everything: Why Sigma? Why the lab?

Sten Westgard, MS
September 2021


1. Why are labs reporting sigma?

We are not aware of labs that report the Sigma-metric of each test out to clinicians and patients. However, a minority of labs (we estimate from our surveys between 10% and 15%) are calculating Sigma-metrics. There is probably no single answer as to why they are doing so, but it probably has something to do with benchmarking performance and optimizing QC.
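For background, the Sigma-metric these labs calculate comes from the allowable total error (TEa), the method's bias, and its imprecision (CV). A minimal sketch, with hypothetical example values:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric = (TEa - |bias|) / CV, with all terms in percent units."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: a test with TEa = 10%, bias = 1%, CV = 1.5%
print(sigma_metric(10.0, 1.0, 1.5))  # 6.0
```

A method whose combined bias and imprecision leave six CVs of room inside the allowable error is a "world class" 6 Sigma method; at 3 Sigma, only three CVs fit, and errors escape far more often.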


2. Why do medical labs assume a shift of 1.5SD over time and automatically subtract that from the measured value - when there is not one shred of evidence that this shift exists in medical labs?

Here we bump up against Six Sigma theory. It's perfectly fine to reject Six Sigma theory and whatever GE, Motorola, and all those management people came up with. Even if it's been widely accepted everywhere else for decades, if you believe it doesn't work in the laboratory, don't do it. If you disagree with Six Sigma theory, you are not compelled to use it. That's one nice feature of Six Sigma metrics: there are no regulatory mandates forcing you to use them. It's optional. If you think it adds value to the laboratory, you can do it. If you're not getting anything out of it, no inspector is going to flunk your laboratory if you don't.
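For context, the 1.5 SD shift is exactly where the familiar Six Sigma defect rates come from: the long-term defect rate is modeled as the one-sided normal tail beyond (sigma - 1.5) standard deviations. A quick sketch of the arithmetic:

```python
import math

def defects_per_million(sigma, shift=1.5):
    """Long-term defect rate per Six Sigma convention: the one-sided
    normal tail probability beyond (sigma - shift) standard deviations."""
    z = sigma - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z) for standard normal
    return tail * 1_000_000

print(round(defects_per_million(3)))     # 66807 dpm, i.e. ~6.7%
print(round(defects_per_million(6), 1))  # 3.4 dpm
```

Set `shift=0` and the same function gives the short-term rates; the 1.5 SD allowance is simply the convention's way of budgeting for drift between the time a process is characterized and the long run.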

Error rates today impact the patients whose results you report today - what is the value of an average error rate over the long term instead?

Six Sigma metrics can give you a useful estimate of your error rate at the time you calculate it. If you find out you have a 2 Sigma test method today, you don't have to wait for the long term to fix it.


3. Why are labs told that 3 sigma = 6.7% error rate - and that is OK - when most lab methods actually produce less than 0.005% errors? 

If "most lab methods" are producing 0.005% errors, they have good Sigma metrics. 3 Sigma is the minimum acceptable performance according to theory, but if you believe that is too low, too error-prone, you are free to insist on higher Sigma metrics. 3 Sigma is a situation where we recommend using all the Westgard Rules and increasing the number of control measurements to 4 or 6, so labs that want to use fewer rules and make only 2 or 3 control measurements definitely need to demand better quality performance than 3 Sigma.
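The principle that lower Sigma demands more rules and more control measurements can be sketched as a simple lookup. This is a deliberately simplified reading of the general Westgard Sigma Rules pattern; actual QC designs also depend on the number of runs and the lab's quality goals:

```python
def qc_design(sigma):
    """Simplified sketch: map a Sigma-metric to an illustrative QC rule
    set and number of control measurements (N). Thresholds follow the
    general pattern of the Westgard Sigma Rules, not an exact design."""
    if sigma >= 6:
        return ("1:3s", 2)
    if sigma >= 5:
        return ("1:3s / 2:2s / R:4s", 2)
    if sigma >= 4:
        return ("1:3s / 2:2s / R:4s / 4:1s", 4)
    return ("all Westgard Rules", 6)

print(qc_design(3.2))  # a low-Sigma method: all rules, N = 6
```

The point of the lookup is the direction of the relationship: as Sigma falls, the QC burden rises, which is exactly why a 3 Sigma method is expensive to control and a 6 Sigma method is cheap.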

We don't disagree that there are many fantastic, world class methods and laboratories. Great quality is out there, but errors in the laboratory are not evenly distributed. There are some instruments, some methods, some labs, where errors are much higher than 0.005%. Unfortunately, a lot of existing laboratory metrics don't make it easy to pinpoint which method is good or bad or great or terrible.

4.  Why are we told that 6 sigma is always good when there is no correlation between sigma and cost?

If you don't believe fewer errors in a process are always better, then 6 Sigma is not always good. It's perfectly reasonable to accept some errors, even more errors than 6 Sigma allows (even 6 Sigma permits 3.4 defects per million). There are diminishing returns in chasing very high Sigma metrics; that's why no one set the bar at 7 Sigma, or 8 Sigma, and so on. If you believe that 4 Sigma is all you need in your laboratory, don't worry about chasing and acquiring instruments with higher quality.

As to the correlation between sigma and cost, we disagree. There are tools that can reduce costs, waste, and time spent troubleshooting outliers based on the Sigma-metric. Ever since the publication of Cost-Effective Quality Control, there has been good advice on how to turn better methods into less QC and, thus, less expense for the laboratory. There are also publications in the literature that detail how higher Sigma metrics can be leveraged into savings.

So it's possible to reduce costs, but not inevitable. If you have a high Sigma metric but don't take advantage of it, and you keep running QC the same way you did 10, 20, or 30 years ago, those wasteful practices mean you won't save any money. There are many labs out there that continue to run more controls than necessary, implement more rules than necessary, and chase troubleshooting ghosts when they don't have to.
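As a rough illustration of the savings argument (all prices and frequencies here are hypothetical, purely for the arithmetic):

```python
# Hypothetical figures: QC material at $5 per control measurement,
# QC run every day of the year on one instrument.
cost_per_control = 5.00
legacy_controls_per_day = 6      # old habit: many controls, many rules
high_sigma_controls_per_day = 2  # high-Sigma design: fewer controls suffice

annual_savings = ((legacy_controls_per_day - high_sigma_controls_per_day)
                  * cost_per_control * 365)
print(f"${annual_savings:,.0f}")  # $7,300 per test, per instrument, per year
```

Multiply a figure like that across a full test menu and several analyzers, and the connection between a high Sigma-metric and lower cost stops being theoretical, provided the lab actually redesigns its QC.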


5. Would it make more sense to monitor the number and cost of patient results that fail your allowable error limit each year?

For those who are curious, feel free to try Zoe Brooks' new software, training, and approach. If Six Sigma really isn't good for the laboratory, another idea will supplant it.