
What's the Role of a Rule?

You've learned about control rules. Now that you've got your control chart set up, your controls running, and your data plotting, what do you do with the rules when the dots are out?

Sten Westgard, MS

When it comes to quality control in the medical laboratory, it seems that complexity hides in every step. Defining statistical control rules should be simple enough: throw some lines on a chart, and if the dot is above the line, it's out, right? As other lessons on this website detail, the definitions of the control rules are fairly straightforward, but the selection of which rules to use for a given test is more complicated.

Well, the reality is not quite that simple. While it isn't hard to determine whether the data has violated a particular control rule, what to do after that is often much harder to figure out.

Over the years, we have seen laboratories interpret the out-of-control flag in many ways.  Here's a short list of interpretation styles:

Rejection Rule

The basic idea of the control chart is that if a control rule is violated, your system is out-of-control. You stop the process, determine and fix what's wrong, then resume production.

The general assumption of setting up control charts and QC rules is that you are selecting rejection rules. When you perform QC Design with the OPSpecs chart, either through manual tools or the automated software programs, the recommendations are provided in the form of rejection rules.

The efficiency of a rejection rule is this: it will only alert you when something is wrong. When everything is fine, the rule won't bother you. [Yes, this is an idealized definition - all control rules have performance characteristics with false rejection rates and error detection capabilities. But this is an article written in general terms.]
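As a minimal sketch of what a rejection rule check looks like in practice - assuming a single 13s rule and control results expressed as z-scores against the control mean and SD (the function names here are illustrative, not from any particular QC package):

```python
def z_score(value, mean, sd):
    """Express a control result as a deviation from the control mean, in SD units."""
    return (value - mean) / sd

def violates_13s(value, mean, sd):
    """Rejection rule: flag the run only when a single control exceeds 3 SD."""
    return abs(z_score(value, mean, sd)) > 3.0

# Example: control mean 100, SD 2. A result of 107 is 3.5 SD out, so the run is rejected.
if violates_13s(107, mean=100, sd=2):
    print("Out of control: stop, find and fix the problem, then resume testing.")
```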

Repeat Rule

This is perhaps the most common type of rule interpretation. If you get an out-of-control result, repeat the control until it falls back "in."

There are multiple articles on this website detailing why this is a bad practice. Most often, this type of interpretation ("Repeated, Repeated, Got Lucky") is connected to the use of 2s control limits or similarly tight limits. When the control limits are too tight, more out-of-control events will occur, and many if not most of them will be false rejections. The RRGL behavior is a natural response to that problem, but the long-term consequence is that all results are repeated, both true and false alarms, and the QC system is degraded. Once your bench techs lose confidence in the ability of the QC rules to detect a real error, you have a Cry Wolf scenario. The worst case is that the instrument system may have a real problem, but because the laboratory routinely repeats control results, the error may persist for some time before it is recognized.
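To see why 2s limits generate so many false alarms, here is a quick back-of-the-envelope calculation. It assumes Gaussian control data and independent controls within a run, so the numbers are approximate:

```python
# Chance that a stable, error-free run still gets flagged "out" by 2s limits,
# assuming Gaussian control data and independent controls within the run.
p_single = 0.0455          # probability one control falls outside +/- 2 SD by chance

for n_controls in (1, 2, 3, 4):
    p_false_reject = 1 - (1 - p_single) ** n_controls
    print(f"N = {n_controls} controls per run: ~{p_false_reject:.1%} of good runs flagged")

# Roughly 4.5%, 8.9%, 13.0%, and 17.0% - enough noise to teach techs to just repeat the control.
```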

The Repeat Rule is the one type of rule interpretation that we explicitly discourage in laboratories. A better approach is to use QC Design to find appropriate control limits - rules that will only alert you when there's a real problem.

"Warning" Rule - old style

When the original multirule QC procedure (aka "Westgard Rules") was published, the first rule in the combination was a 2s "warning rule." The idea behind this rule was this: if everything was within 2s, don't apply any of the other rules to the data. However, if a 2s violation did occur, then you were to proceed through the rest of the multirules to see if another rule was violated. In this case, the other rules are rejection rules - if one of them is violated, you reject the run. But if the "warning rule" is violated and no other rule is, then the run is still considered in control.
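A rough sketch of that classic flow, assuming control results already converted to z-scores (newest last) and showing only a subset of the multirules for brevity:

```python
def classic_warning_flow(z_scores):
    """Old-style interpretation: 2s is only a warning that triggers the other checks.

    z_scores holds recent results for one control level, in SD units, newest last.
    Returns "accept" or "reject". This is a sketch of the logic only - the full
    multirule procedure also looks across control levels and uses R4s, 41s, 10x, etc.
    """
    current = z_scores[-1]

    # 12s warning rule: if the newest point is within 2 SD, stop here.
    if abs(current) <= 2.0:
        return "accept"

    # Warning triggered: now work through the rejection rules.
    if abs(current) > 3.0:                                        # 13s
        return "reject"
    if len(z_scores) >= 2 and (all(z > 2.0 for z in z_scores[-2:])
                               or all(z < -2.0 for z in z_scores[-2:])):   # 22s, same side
        return "reject"

    # Warning violated but no rejection rule violated: the run is still in control.
    return "accept"
```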

Historically, this was a balance of sensitivity and workload. Back when the multirule QC procedure was first published, all the QC data was recorded by hand (this was Before Spreadsheet and that kind of personal office software). The "warning rule" alleviated the problem of the excessive false rejections generated by the 2s rule, while still making use of the 2s rule's sensitivity.  It also lessened the workload on the techs - they didn't need to work through all the other rules when the data was within 2s limits.

Today, most QC data is entered and interpreted by computer, either through a program found on-board the instrument, or through software found on the LIS or in middleware.  There is no longer a workload issue (the computer has enough extra processing cycles to do this work continuously), so the old way of using 2s as a warning rule isn't necessary.

"Warning" Rule - modern style

Nevertheless, there are laboratories that don't like to wait until there is an actual problem before confronting it. They would like advance warning of a systematic drift or of a problem caused by a gradually degrading reagent or system component. Mean rules, such as 9x or 10x, are often used for this purpose.

These "warning rules" are also not rejection rules, but they do give the laboratory a chance to sniff out problems before they reach critical size. Potentially, a laboratory could see a warning rule violated, determine the source of the problem, and fix it, before any run violates a rejection rule. In this sense of the rule, y are trying to prevent an error before it occurs.

The drawback of these warning rules is that they require additional effort to monitor, and they can confuse the issue. A warning rule violation may just be noise, particularly with the more sensitive rules. If you treat every warning rule effectively as a rejection rule, you basically return to the problems of the earlier 2s implementations. You may start to get more false rejections, which may in turn erode confidence in the system, causing techs to revert to Repeat Rule behavior.

Very sophisticated laboratories frequently employ "warning rules" in this modern sense, trying to be proactive. But the use of rules for this purpose isn't right for every laboratory.

Trouble-shooting Rule

This is probably a type of interpretation you haven't heard about very often. Here's the scenario: you have a set of rejection rules and one of them is violated. But now that you know there is a rejection, you can apply additional rules retroactively on the data, to see if there are any other control rule violations that might give you a clue to the source of the problem.

For example, if you have simplified your QC design to a single control rule like 13s (perhaps because the Sigma-metric of the method is quite high), there is not much information provided by the control rule when there is a rejection. A 13s violation is typically an indicator of random error. However, if you look back at the recent runs of control data, you may be able to apply and interpret the data with multirules. You might see that a 22s rule was violated over the last two runs, which would point toward a systematic error. Or perhaps a 41s rule was violated, again an indicator of a systematic error.
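As an illustration of that look-back, here is a sketch that re-reads recent z-scores (newest last) with two extra rules after a 13s rejection; the function and its output are illustrative only:

```python
def troubleshoot_lookback(z_scores):
    """After a 13s rejection, re-read recent history with additional rules to
    suggest the likely error type. Returns hints, not a verdict."""
    hints = []
    last2, last4 = z_scores[-2:], z_scores[-4:]

    # 22s: two consecutive results beyond 2 SD on the same side -> systematic error.
    if len(last2) == 2 and (all(z > 2 for z in last2) or all(z < -2 for z in last2)):
        hints.append("22s violated: suspect a systematic error (shift)")

    # 41s: four consecutive results beyond 1 SD on the same side -> systematic error.
    if len(last4) == 4 and (all(z > 1 for z in last4) or all(z < -1 for z in last4)):
        hints.append("41s violated: suspect a systematic error (trend or shift)")

    return hints or ["No additional rules violated: suspect random error"]
```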

This style of interpretation doesn't even have to be a formal operating procedure. You don't have to designate a specific set of rules to use during trouble-shooting. Just be aware that if one rule has been violated, you can check whether other rules were also violated in the current and previous runs. These rules may give you additional information about where the problem is occurring.

The advantage of using a Rejection Rule, paired up with a Trouble-shooting Rule (or Rules), is that you have a simplified primary error detector. The workload is lessened during routine operation. But when an error occurs, you can call upon the full arsenal of control rules and use multiple rules to diagnose what might have gone wrong with the method.

Too many Rules? Too many Roles?

After reading all of these different ways to interpret rules - and taking into account all the different possible rules that can be used - you can be forgiven if you are a bit exhausted.

In practice, this is much simpler than it sounds. Ultimately, laboratories want some kind of Rejection Rule, one that maximizes error detection while minimizing false rejection, with as few rules as possible.  Then, depending on the sophistication of the laboratory, they may want to add in some (modern) warning rules. When an actual out-of-control flag occurs, the laboratory may also want to add some look-back trouble-shooting rules. (And let us repeat:  we strongly recommend avoiding Repeat Rules).

What unifies all of this is a QC Design approach. Using tools like Sigma-metrics and Method Decision Charts, you can select the appropriate Rejection Rules, be they single or multirule. Then you will also be able to determine how much QC is necessary, which will in turn help you decide whether or not you want to add additional warning or trouble-shooting rules.
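As a minimal sketch of the Sigma-metric calculation that drives that design step - the numbers below are illustrative, not recommendations for any particular test:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric = (allowable total error - |bias|) / CV, all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative numbers: TEa 10%, bias 1%, CV 1.5% -> Sigma = 6.0.
# Higher Sigma generally supports a simple single-rule QC design with few controls;
# lower Sigma pushes the design toward multirules and more controls per run.
print(round(sigma_metric(10.0, 1.0, 1.5), 1))
```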

So what are you waiting for? Roll out your Rules...