Tools, Technologies and Training for Healthcare Laboratories

Un-conventional Answers to Un-conventional Questions



Sten Westgard, MS
September 2021

On September 14th, Technopath sponsored a LabRoots webinar on the Un-conventional way to do QC.


We ran out of time to answer all the questions that came in from nearly 1000 live attendees. Here, as promised, are the questions that were submitted during and after the session, along with our answers.

The questions fall into 7 broad categories:

  • QC Potpourri (starting up, troubleshooting, etc.)
  • Bringing up Bias
  • Taking on Total Allowable Error
  • Starting out with Six Sigma
  • The What's and Why's of Westgard Sigma Rules
  • Changing your Tune on QC Frequency
  • The Qualitative Conundrum

QC Potpourri (the miscellany of means, SDs, CVs and LJCs)

How often do you suggest adjusting QC means and SD?

If by adjusting QC means and SD, you mean arbitrarily widening your control limits until your control can hit the side of a barn and remain “in,” the frequency of that should be zero, never, never ever.

If you mean updating your mean and SD with the latest set of performance you are witnessing in your own laboratory, you should do that as often as necessary. We typically see labs that start out with 1 month of data to determine mean and SD, then as additional months elapse, they update every month. The CLSI C24 guideline recommends using a cumulative mean and SD built on 3-6 months of your performance. Obviously, that assumes you have stability over 3 to 6 months. For controls with short shelf lives, and large changes in reagent lot to lot behavior, you may not be able to accumulate as much data on performance.
That said, there are special circumstances that may require you to re-establish your mean and SD, for example, whenever the instrument undergoes a major maintenance, or a part gets replaced, or any significant disruption of “normal” operation.
Some additional tips can be found here:
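To make the cumulative approach concrete, here is a minimal sketch (with made-up QC values, not real laboratory data) of pooling several months of results into one cumulative mean and SD:

```python
from statistics import mean, stdev

# Hypothetical monthly QC results for one control level;
# the numbers are illustrative only.
monthly_results = [
    [101.2, 99.8, 100.5, 100.9],   # month 1
    [100.1, 101.4, 99.6, 100.3],   # month 2
    [100.8, 99.9, 101.1, 100.2],   # month 3
]

# Pool every observation so the cumulative statistics capture
# month-to-month variation, in the spirit of CLSI C24.
pooled = [x for month in monthly_results for x in month]

cum_mean = mean(pooled)
cum_sd = stdev(pooled)
print(f"cumulative mean = {cum_mean:.2f}, cumulative SD = {cum_sd:.2f}")
```

As more months accumulate, you simply extend the pooled data, and you start over when a major maintenance or other disruption forces you to re-establish the mean and SD.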

Do you suggest setting a %CV for QC data from lot to lot or a set SD from lot to lot?

We suggest determining the actual CV and SD for each QC lot. At the beginning of a lot switch, you might temporarily use the package insert range, but as soon as you have your own performance data, you should base your mean and SD on that observation.

When configuring QC data on the instrument, some instruments calculate the SD by dividing the range by 4. Is it recommended to validate QC results with wide limits and a big SD?

We advise against doing this. Using the manufacturer’s range is problematic: that range or SD is built on the combined variation of multiple laboratories, so by definition it contains more variation than your own laboratory will experience. The attraction of the big SD is that everything will fall “in”. (Of course, if you use that SD to determine your sigma metric, you will see a very low sigma metric.) The best practice is to use your own mean and your own SD.
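To see why a package-insert SD inflates limits and deflates the sigma metric, here is a small sketch using the common sigma-metric formula; the TEa, bias, and CV numbers are hypothetical:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric = (TEa - |bias|) / CV, all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical analyte: TEa = 10%, bias = 1%
own_cv = 1.5      # CV observed in your own laboratory
insert_cv = 4.0   # wider CV implied by a multi-lab package-insert range

print(sigma_metric(10, 1, own_cv))     # 6.0  (world class)
print(sigma_metric(10, 1, insert_cv))  # 2.25 (poor, as the answer warns)
```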

For a complete blood count in hematology, how many parameters should we check on the QC L-J chart?
(There are many parameters on an automated analyzer.)

The theory of QC states that every parameter should have its own QC. Just because there are a lot of tests doesn’t mean you shouldn’t QC them. There are a lot of cars on the road, but the rules of the road still apply. There may be a valid argument that a ratio or an index built on the performance of other analytes doesn’t really need its own QC (just so long as the QC of all the component analytes are good). However, the regulations basically don’t care about that – every test needs QC.

We have some hematology tests whose values are almost the same every day; after we plot the L-J chart, we see systematic error. How can we solve it?

This is a different challenge from the usual assay situation, and it’s not necessarily confined to hematology parameters. A longer discussion of it can be found here:

How do you calculate the range for each test on your QC?

This is a fundamental step in QC.

Is QC needed daily on backup equipment?

Yes. If you want the ability to run that test today, it must have QC run today.

Can we use samples of external proficiency testing as a QC material?

This is a can of worms. It can get you into trouble in the US under CLIA regulations, because it may appear that you are “testing” the PT in a different manner and thus giving it treatment unlike a normal patient specimen. In most cases, the PT material is too expensive to be used as a QC material anyway.

We are having instability in measuring chloride because of the salicylate concentration in a QC material. Can we use Six Sigma to compare it with another QC material to determine which to use?

Using different control materials to determine the source of an error is an acceptable trouble-shooting technique. If the control is the source of the QC problem, it’s definitely a reason not to use that control.

Is it appropriate to correct the control results with factors to bring them closer to the mean? For example, correcting the slope and using these factors on patient samples?

Correction factors on patient results can sometimes be problematic. It’s better to fix the method so that you report the number you get rather than modifying it. One of the dangers of modifying how you report the test is that you might move the test into the classification of a Laboratory-Developed Test (for CLIA and CAP, etc.), which carries a much higher burden of statistical validation and staffing skills.

I usually instruct my staff to just rerun that QC. What necessary steps should be taken before rerunning QC?

Playing “double or nothing” is not a great approach to QC. It’s a roll of the dice. Better to design QC so that you aren’t plagued by false rejections, and only get an alert or flag when there’s a real problem. That should reduce the number of outliers you see, but that also means when the alarm goes off, it’s much more likely there’s a real fire and you should not be repeating the control in this situation.

Please, is it appropriate to use the 3 SD Control Limit?

In some cases, yes. The Six Sigma metric process (and Westgard Sigma Rules) helps you determine when this is.

I would appreciate your suggestion on preferred lot lengths for QC, calibrators and reagents. E.g., it would be great to have an 18-month lot for QC, 6 months for calibrators, and if possible 6 months for reagents as well. The number of calibrator and reagent lot combinations per year may affect the frequency of troubleshooting. Thanks!

What is the preferred length of a QC lot to buy (more than 1 year?)? How about lots for reagents and calibrators: 6 months' worth, to reduce the number of calibrator and reagent combinations per year?

There are some quality factors that come into play when choosing lots for QC, reagents, and calibrators. You should not choose reagents solely on the basis of the longest shelf life; you need to confirm in some way that a 1.5-year-old control lot will still function reasonably like a new control lot. But if the quality is reliable over the lifespan of the lot, then other factors will drive your choice, such as cost, the ability to sequester, the storage space you have in your laboratory, the combination of testing and platforms, etc. Most of these decision drivers are outside the scope of this webinar.

How do you handle the 6:x rules in the L-J chart when you have many bottles of the same reagent onboard?

I think what this question is trying to get at is: what happens when there is a confirmed systematic error (6:x) while there is still a lot of the reagent left onboard? If we assume that the 6:x rule is necessary and the culprit of the drift is confirmed to be the reagent (that’s a lot of assumptions), then, to be blunt, it doesn’t matter how much of the reagent there is if it’s wrong. Having more reagent doesn’t make the error smaller; having less reagent may make rejecting it more palatable. If it’s a reagent problem, there may be a case for making the reagent vendor pay for new or replacement lots; they gave you a product that was not acceptable for performance.


Bringing up Bias

Please review again where we can find our test bias.

There are many different ways to estimate your bias, from a reference method or reference materials all the way down to the use of an assayed control target mean. A full discussion of all the possible ways to calculate bias can be found here:

Our recommendation, which is biased by our opinions, is that the use of a peer group software provides you with the most practical, easy to access estimate of bias.

To calculate Six Sigma values, can we use the bias obtained from a single sample in the external quality control program (that is, the bias from the previous month's result) for all three levels of internal quality control until the next inaccuracy data arrives?

You can make your estimate of bias from the EQA/PT survey using your best judgment. If you have a lot of data covering multiple surveys, you may be able to narrow your focus on bias estimates just from the most important levels. You may feel more comfortable using all the levels from as many surveys as possible to get an average bias. That’s acceptable too. This is an area where you get to make the choice based on your best understanding of the data and the needs of your patients.

What is your recommendation for bias estimation? Which guideline?

CLSI EP9 is the best official set of recommendations on determining bias.

If peer QC data is used to determine bias, what is the ideal set of stats--our lab cumulative versus peer cumulative? Or our monthly vs peer cumulative? Our monthly vs peer monthly?

Cumulative means (peer and your lab) will provide the most data and therefore the best confidence in determining your bias. If you only have one month of data, then of course you can only do a monthly mean vs cumulative peer mean comparison, but as soon as you have a cumulative mean, compare that against the cumulative peer mean. All that assumes, of course, that there is reasonable stability in the reagents and QC lots across multiple months for the method. If you are trying to do this on a test where the reagent lots cause dramatic shifts in performance, that may not be helpful. There are some programs that provide the granularity to track by reagent lot and QC lot.
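As a sketch of the peer-comparison arithmetic (with hypothetical means), percent bias is just the difference between your cumulative mean and the peer cumulative mean, relative to the peer mean:

```python
def percent_bias(lab_mean, peer_mean):
    """Bias (%) of your laboratory's mean versus the peer-group mean."""
    return 100 * (lab_mean - peer_mean) / peer_mean

# Hypothetical cumulative means for one control level
lab_cum_mean = 102.3
peer_cum_mean = 100.0
print(f"bias = {percent_bias(lab_cum_mean, peer_cum_mean):+.1f}%")  # +2.3%
```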


Taking on Total Allowable (analytical) Error

How do you calculate TEa, if required? Thank you.

In general you don’t calculate TEa, you look it up.

Where do I find the CLIA guidelines you mentioned?

Our coverage of CLIA can be found here:
Our specific listing of current CLIA PT goals is here:
And our list of the proposed changes to CLIA PT goals (made in 2019 and in limbo ever since) is here:
But the official home of the CLIA regulations is the Federal Register and the CMS website.

How do you determine your Six Sigma metric if there is not a total allowable error value mandated by CLIA/CAP, etc.?

If CLIA or CAP don’t provide a goal, remember there are other sources for allowable total errors; you can see a few of them here:
If you can’t find a TEa anywhere, then you can consider creating one specifically tailored to the clinical use of the test in your hospital for your patient population. After all, that’s what the 2015 Milan Hierarchy advises as the best approach. Of course, that’s also the most fraught approach: hard to get all the clinicians to advise you, even harder to get them to agree.

For some tests, we cannot find TEa (%) in CLIA, how can we get it?

You can see some possible solutions here:

How can we find TEa values for MCV, MCH, and MCHC?

Some of these are not listed in CLIA.
Here is a possible source of additional performance specifications:

I have some problem. My colleague thinks that we now have the concept of TEa and do not need to separately control imprecision and bias. Moreover, he chooses a big TEa in order to get a good sigma. This is how he wants to achieve the best quality. I disagree with him. I think we shouldn’t use a big TEa to get good sigma. What do you think about it?

The choice of a TEa simply to get a good sigma is a problem, to start. TEa does indeed represent the combined impact of imprecision and inaccuracy. You could set individual acceptability targets for CV and bias if you want, but TEa allows you to be more dynamic, allowing a bit more room for bias if your CV is lower, etc. There may be a conflict between the individual CV and bias settings and the TEa, but that can be reconciled by choosing the TEa on the basis of appropriateness for patients, not size.

When comparing measurement uncertainty, should it be compared with TEa?

This is another topic entirely, but we have discussed it recently here:

Should you monitor CV % and compare with an allowable imprecision or should you only compare your TE against your TEa?

You should certainly still monitor your CV; if it looks unacceptable, dig deeper into the reasons. However, what is considered unacceptable may change once you take the TEa into account.


Starting out with Six Sigma

If we shift to using six sigma, do we still need to use L-J charts?

Six Sigma does not replace LJ charts. It informs the set up and use of LJ charts.

Could you explain more about centering your Sigma metric around a medical decision point?

There’s a reason the medical decision point, or critical decision level, is important to us: we want to focus our Sigma metric efforts around that level. Wherever the test has the greatest impact on clinician decisions and patient diagnosis and treatment is obviously where you want to know your performance on the Sigma scale. If you are running multiple controls, you might have different Sigma metrics at different levels. Knowing the critical medical decision level will help you identify which control level and which sigma metric to pay attention to. You can find a rudimentary set of levels here: But it will be best to identify the levels important to your clinicians and your patients to guide you to the best decisions.

Where can I find more information about Six Sigma?

For Six Sigma in general, you can find huge amounts of information on the Internet and in books.
For analytical Six Sigma, there’s a wealth of info here on the Westgard Website. Here’s a starting point:
We have books and courses and structured lab protocols.

What should we do if one level among the three levels has a poor Sigma value?

The question to ask next is: is that level important to patient diagnosis and treatment? If there are no patients there, if no important medical decisions are being made at that level, then the Sigma metric is less worrisome or relevant. Choose the most important decision level, and calculate the sigma metric with the control level closest to it.
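Picking the control level to judge by can be as simple as taking the level whose mean sits closest to the decision concentration. A minimal sketch, with hypothetical glucose control means and the familiar 126 mg/dL fasting-glucose decision level:

```python
# Hypothetical control means (mg/dL) for a three-level glucose control
control_means = [80.0, 140.0, 300.0]
decision_level = 126.0  # fasting-glucose medical decision point

# Judge the test by the sigma metric at the control level
# nearest the medical decision point
closest = min(control_means, key=lambda m: abs(m - decision_level))
print(closest)  # 140.0
```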

What if we change batches and the controls give different results from expected (batch more sensitive)?

If your performance changes, and you have a new mean and SD, that will generate a new Sigma metric, which may drive you to a new implementation of QC.

After the calculation, how do you equate it to the sigma level?

Once you work through the Sigma-metric equation, you can use the simple sigma categories (4, 5, 6 sigma) to match up with the Westgard Sigma Rules. There are more granular tools available, such as the OPSpecs chart, but the simplest tool for QC design is the Westgard Sigma Rules.
For more on OPSpecs, check out these resources:

How do you determine which “sigma level” you're at after performing the sigma calculation?

This is a similar question to the one above. Sigma level is a simplification of the sigma metric. The Sigma-metric equation will calculate, you guessed it, your sigma metric. Then you can simplify that into a category or level that can be used with the Westgard Sigma Rules. All metrics at 6 or above simplify to 6 Sigma. Why? Because at 6 Sigma you only need one rule, the 1:3s with N=2, and if you have a higher Sigma metric, you can’t use “half” a rule, so you’ll still be using 1:3s with N=2. Metrics below 3 sigma simplify down to panic. OK, that’s not serious; but you should be applying all the Westgard Rules plus many other things beyond statistical QC, and ultimately you should be trying to reduce imprecision and bias or, in the most extreme case, move the test to a different method or a different instrument.
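The simplification described above can be sketched as a lookup. This is a rough rendering of the N=2/4/6 version of the Westgard Sigma Rules, not the full published chart, so treat the tier labels as illustrative:

```python
def sigma_rules_category(sigma):
    """Map a sigma metric onto simplified Westgard Sigma Rules tiers
    (a sketch of the N=2/4/6 version discussed in this webinar)."""
    if sigma >= 6:
        return "1:3s, N=2"
    if sigma >= 5:
        return "1:3s / 2:2s / R:4s, N=2"
    if sigma >= 4:
        return "1:3s / 2:2s / R:4s / 4:1s, N=4"
    if sigma >= 3:
        return "1:3s / 2:2s / R:4s / 4:1s / 8:x, N=6"
    return "below 3 Sigma: maximum QC plus measures beyond statistical QC"

print(sigma_rules_category(6.8))  # one rule is enough: "1:3s, N=2"
print(sigma_rules_category(2.5))  # statistical QC alone is not enough
```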

Can I use QC materials that we developed ourselves to calculate the Six Sigma points?

If you build your own QC materials, from pooled patient samples, that’s what we call “old school” QC, what had to be done by all laboratories before commercial vendors existed. The challenge is in the composition and the stability and reproducibility of the control. If there is a lot of extra imprecision added by the patient control, that will give you lower sigma metrics than you might get with more stable, commercial controls.

If different levels of the same analyte have varying sigma levels, is it acceptable to have different rule sets for each level?

While the theory would allow you to design QC on every control level, that way madness lies. I believe most QC software doesn’t really support the ability to have multiple QC designs on a multilevel control. It’s most practical to make one decision about how to QC each test, and drive that decision by the sigma metric measured at or near the most important medical decision level.


The What's and Why's of Westgard Sigma Rules

Are there any written texts that explain 6 sigma rules and westgard rules further please?

A paper in the literature: Westgard JO, Westgard SA. Quality control review: implementing a scientifically based quality control system. Ann Clin Biochem. 2016;53(Pt 1):32-50.

Our QC software is Westgard-based, so how do we switch to using Six Sigma rule values instead of entering SD data?

If you are implementing Westgard Sigma Rules on “normal” QC software, it comes down to the capabilities of the software. Older versions of QC software used to have essentially “on” and “off” switches for Westgard Rules – you either used all of the rules or none of them. More modern, sophisticated QC software will allow you to customize the implementation of your rules and activate and de-activate the Westgard Rules as you see fit. Most of the major vendors of QC have some level of this sophistication in their software, including an ability to calculate sigma metrics and define what rules should be applied.

Are the Westgard Sigma Rules you showed in this presentation for 2 or 3 levels of QC?

This very simplified version showed N’s of 2, 4, and 6. There are other versions that are slightly more complicated that are tailored for N’s of 2 and 4, and N’s of 3 and 6.


Changing your Tune on QC Frequency

How do you control the frequency of running QC based on the sigma calculation if the quality control level covers more than 20 analytes and each has a different sigma?

Ah, this is the sting of multiconstituent controls. If you have 20 analytes in the same control and one of them is reading at 3 Sigma while 90% of them are reading at 6 Sigma, what should you do? Run the QC at the frequency dictated by the 3 Sigma analyte, thus over-running QC for all the 6 Sigma analytes, or run the QC at the 6 Sigma frequency and under-run QC for that one 3 Sigma analyte? Here are a few articles tackling this new frontier:

This challenge of finding the compromise between the efficiency of multiconstituent controls and the customization of QC for assays is one of the newest in our industry.

We are running around 120 samples per shift. What can you recommend we follow: 3 sigma or 4 sigma?

Unfortunately, that’s not how this process works. You don’t start with a patient test volume and decide to run at 3 or 4 sigma. You have to start by determining the Sigma metric, and that metric will drive the testing frequency. The testing frequency may or may not align with your actual patient volume. Getting the best QC design matched up with your testing volume is one of the new laboratory challenges of this decade.

Maybe I have missed it, but could you please explain why you suggest that we run 1 QC per 1000 tests?

We're not saying that all tests should only run QC once per 1000 tests. We're saying that QC frequency can be adjusted to the quality (Sigma metric) of the method.

The particular mathematics of QC Frequency are discussed here:

Is there a reference publication about the QC frequency?

It all goes back to Parvin’s seminal MaxE(nuf) paper:

I did not understand: you said with 6 Sigma you can reduce QC frequency?

6 Sigma quality may allow you to reduce QC frequency. But really only if you are running QC more frequently than the bare minimum or the compliance level. If you are only running the least amount of QC that’s legal, 6 Sigma is not an excuse to violate accreditation or regulatory requirements.


The Qualitative Conundrum: Bringing all the tools of QC to bear on a plus-minus assay

How to measure sigma when qualitative test are performed?

Purely qualitative tests (with only positive/negative answers) obviously don’t have any data for the Sigma-metric equation. You can use the traditional approach to determine the sigma metric, which is “counting defects”.
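The defect-counting route can be sketched as follows. This uses the conventional industrial Six Sigma assumption of a 1.5-sigma long-term shift, which you may or may not want to apply; the defect counts are hypothetical:

```python
from statistics import NormalDist

def sigma_from_defects(defects, opportunities, shift=1.5):
    """Short-term sigma from an observed defect rate ("counting defects").
    The 1.5-sigma shift is the classic industrial Six Sigma convention."""
    dpmo = 1_000_000 * defects / opportunities  # defects per million
    # Inverse-normal of the observed yield, plus the long-term shift
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + shift

# Hypothetical: 5 discordant qualitative results in 10,000 tests
print(round(sigma_from_defects(5, 10_000), 2))  # ≈ 4.79
```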

How can we determine Westgard Six Sigma rules for qualitative tests with quantitative cutoffs?

Semi-quantitative tests, which have numbers inside the lab but only report positive/negative or similar categories outside the laboratory, represent a challenge to work with. For tests that have S/CO ratios, there are usually no formal TEa goals set, so a laboratory has to develop its own. (As an aside, this is a project we, Technopath and Westgard, are working on as we speak.) See for an example of the challenges and possible solutions.


Thanks again for all the questions. Please don't hesitate to ask more.

