Basic QC Practices
QC - The out-of-control problem
What do you do when your control is out of control? Conventional wisdom says to repeat the control or try a new one, but that ignores the problem; it doesn't solve anything. Elsa P. Quam, BS, MT(ASCP), explains the bad habits we have and the good habits we can adopt to make our laboratory practice better.
Change Old Bad Habits - Recognize Problems:
- Bad Habit #1: Repeat the control
- Bad Habit #2: Try a new control
Develop Good Habits - Solve Problems:
- Good Habit #1: Inspect control charts or rules violated to determine type of error
- Good Habit #2: Relate type of error to possible causes
- Good Habit #3: Consider factors in common on multitest systems
- Good Habit #4: Relate causes to recent changes
- Good Habit #5: Verify the solution and document the remedy
In routine operation of a quality control (QC) procedure, the control materials are analyzed before or during the analytical run, the control results are recorded and plotted on control charts, and control status is determined by inspecting the control data using the control rules (control limits) selected. If the control results are "in", the run is accepted and patient samples can be assayed or reported. If the control results are "out", the run is rejected, the problem is identified and resolved, and a new run can begin or, in the case of batch assays, the run of patient samples can be repeated. This is the way control procedures are "supposed" to work.
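As a minimal sketch of this decision flow (the rule-evaluation function here is a stand-in for whatever control rules a laboratory has selected, not any particular instrument's logic):

```python
# Minimal sketch of the routine QC accept/reject decision described above.
# evaluate_rules stands in for the laboratory's selected control rules;
# it returns the names of any rules violated (an empty list if none).

def control_status(control_results, evaluate_rules):
    """Decide control status for a run from its control results."""
    violated = evaluate_rules(control_results)
    if not violated:
        return "in: accept the run and report patient results"
    return "out (" + ", ".join(violated) + "): reject the run and troubleshoot"

# Example with a trivial rule set that only checks 3s limits (z-scores)
one_3s = lambda z: ["13s"] if any(abs(x) > 3 for x in z) else []
print(control_status([1.2, -0.8], one_3s))  # in
print(control_status([3.4, -0.8], one_3s))  # out (13s)
```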
Change Old Bad Habits - Recognize Problems
Current QC practice does not always follow these guidelines. All too often, the first response to an out-of-control situation is to automatically repeat the control or "try it again". A 1994 CAP Q-Probe aimed at identifying quality control exception practices found that most laboratories simply reran controls to resolve out-of-control events [1]. Guidelines for corrective actions in out-of-control situations often suggest repeating the controls before inspecting the control charts or considering the type of rule causing the rejection [2]. By automatically repeating the controls, we are saying that we don't trust the control procedure to do its job, i.e., provide a certain level of error detection at an acceptably low rate of false rejections. If QC procedures have been carefully planned on a test-by-test basis, taking into account the quality required for each test and the performance capabilities of the test method, then the error detection capability should have been maximized and the false rejection rate minimized. We can then trust the QC procedure to do its job and detect problems. Our job is to solve the problems and eliminate the causes of errors [3].
Bad Habit #1: Repeat the control
Simply repeating the controls is an outdated practice that can be traced to the use of 2s control limits, or the 12s control rule, whose false rejection rate is about 5% for N=1, 9% for N=2, and 14% for N=3. The false rejection rate for a 13s control rule is only 0.3%, or 3 chances in 1,000! Logic should tell us to believe what the control results are indicating. While repeating the control will often give a value that is "within the limits", careful inspection of the repeat result will often show that we just squeaked by, and what we have really done is delay the troubleshooting and problem-solving until a future run.
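These false rejection rates follow from treating each control result as an independent, Gaussian observation: the chance that at least one of N controls exceeds the limit is 1 - (1 - p)^N, where p is the tail probability for a single result. A quick check (small differences from the quoted figures reflect rounding conventions):

```python
# Quick check of the false rejection rates quoted above, assuming
# independent, Gaussian-distributed control results.
p_2s = 0.0455  # probability a single result falls beyond +/-2 SD
p_3s = 0.0027  # probability a single result falls beyond +/-3 SD

for n in (1, 2, 3):
    print(f"12s rule, N={n}: {1 - (1 - p_2s) ** n:.1%} false rejections")
print(f"13s rule, N=1: {1 - (1 - p_3s) ** 1:.2%} false rejections")
```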
For example, the graphic here shows the expected distribution of control results for a systematic error that shifts the mean by 2 times the standard deviation of the method. For the 1st out-of-control event, the next control point shows that a repeat of the control is okay. For the 2nd out-of-control result, a repeat would again be okay, as it would be for the 3rd and 4th events. For the 5th event, the first repeat is still out, but not quite as far; a second repeat is still out but better; a third repeat is finally in, so everything now is okay. Right? Obviously not! When you see all these control results displayed on the chart, it is clear there is an analytical problem that needs to be identified and fixed. The practice of repeating the control just puts off the resolution of the problem.
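A small simulation (with assumed parameters) shows why the repeat so often appears to work: with the mean shifted by 2 SD, roughly half of all repeat controls still fall within the 2s limits even though the error persists.

```python
# Simulation sketch: a persistent systematic error shifts the mean by 2 SD.
# Control results are drawn in SD units, i.e. Normal(shift, 1); a repeat
# that lands within +/-2s looks "in control" even though nothing was fixed.
import random

random.seed(1)
shift = 2.0
repeats = [random.gauss(shift, 1.0) for _ in range(10_000)]
fraction_in = sum(abs(z) <= 2 for z in repeats) / len(repeats)
print(f"Repeats falling within 2s limits: {fraction_in:.0%}")  # about 50%
```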
Bad Habit #2: Try a new control
Another myth is that the control is "bad". The same QC troubleshooting guidelines that suggest repeating the control often suggest trying a new vial of control if the repeat testing does not restore "in control" status [2]. It is true that controls are sometimes short-sampled, used beyond their stability date, stored improperly, or prepared incorrectly. Why does this happen? Written instructions for the careful reconstitution, mixing, handling, storage, and stability of controls should be included with the QC procedure and covered in QC training and implementation. We should not be cutting corners on such an important process. Cost is another issue: control materials are usually far cheaper than the cost of repeat testing. Automatically repeating controls or blaming the control itself are often attempts to resolve the problem without the hassle and time delay of finding and eliminating the true cause of the QC failure. These practices have become habit because they are easy and because we often do not have, or do not teach, the skills necessary to resolve problems using a more systematic approach.
Develop Good Habits - Solve Problems
Problem-solving or troubleshooting is both a skill and an attitude. It's a skill because it depends on your knowledge and experience. It's an attitude because it depends on having the confidence to investigate the unknown, often under the pressure of delaying reports of critical test results and the stress of having others look over your shoulder while they wait for you to get the analytical system running again.
Good Habit #1: Inspect the control charts or rules violated to determine type of error.
In order to solve a control problem, it is useful to identify the type of error (random or systematic) that is causing the QC failure. Random and systematic errors are described in QC - The Chances of Rejection. Different control rules have different capabilities (sensitivities) for detecting different types of errors because of the nature of the rule. Rules that test the tails or the width of a distribution, such as the 13s and R4s rules, usually indicate increased random error. Rules that look for consecutive control observations exceeding the same control limit, such as the 22s, 41s, and 10x rules, usually indicate systematic error. This is not to say that multirule QC procedures must always be used to determine the type of error present; inspection of control charts will provide the same information from the graphical display of the QC points.
For example, consider the same control results discussed above, where systematic error is present, and now apply the consecutive-observation types of rules. For the 1st event, notice that the point before and the two points after are all above the 1s limit, giving us 4 points in a row, or a 41s violation. The 2nd event again shows 4 points in a row above the 1s limit. Notice that the 3rd event is followed by two points above the 1s limit, again showing a 41s violation. The 4th and 5th events both show two control points in a row above the 2s limit, or 22s violations. Inspection of the control chart shows that nearly all the points are above the mean of the control material, again demonstrating a systematic shift or error.
Systematic trends will be detected in a similar manner because they are systematic changes that occur gradually over time. Random error, on the other hand, will show an increased scatter in the data points about the established mean. The points will be bouncing around on both sides of the mean and will be detected by rules that look for points in the wide tails of the distribution (outside 3s control limits or 13s control rule) or rules that depend on the range or difference between the high and low values in a group of control measurements (R4s control rule, for example).
The type of error should always be determined, if possible, before beginning to identify the cause of the problem. Further classification of systematic error as a shift or a trend is also helpful.
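As a minimal sketch of how these rules separate the two error types, assuming control results are expressed as z-scores (deviations from the mean in SD units; the R4s check is simplified to the range across the whole group):

```python
# Sketch of multirule error-type classification from z-scores.
# Rule names follow the multirule convention; thresholds are in SD units.

def classify_error(z):
    """Return (rule, suggested error type) for each violation found."""
    violations = []
    if any(abs(x) > 3 for x in z):                      # 13s: random error
        violations.append(("13s", "random"))
    if max(z) - min(z) > 4:                             # R4s: random error
        violations.append(("R4s", "random"))
    for a, b in zip(z, z[1:]):                          # 22s: systematic
        if (a > 2 and b > 2) or (a < -2 and b < -2):
            violations.append(("22s", "systematic"))
            break
    for i in range(len(z) - 3):                         # 41s: systematic
        w = z[i:i + 4]
        if all(x > 1 for x in w) or all(x < -1 for x in w):
            violations.append(("41s", "systematic"))
            break
    for i in range(len(z) - 9):                         # 10x: systematic
        w = z[i:i + 10]
        if all(x > 0 for x in w) or all(x < 0 for x in w):
            violations.append(("10x", "systematic"))
            break
    return violations

# A 2 SD shift produces consecutive-point (systematic) violations:
print(classify_error([0.3, 1.8, 2.4, 2.2, 1.6, 2.9, 1.4, 2.1]))
```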
Good Habit #2: Relate the type of error to potential causes
The type of error observed provides a clue about the source of error because random and systematic error have different causes. Problems resulting in systematic error are more common than problems resulting in increased random error and are also generally easier to resolve.
Systematic errors may be caused by factors such as:
- a change in reagent lot or calibrator lot
- wrong calibrator values
- improperly prepared reagents
- deterioration of reagents or calibrators
- inadequate storage of reagents or calibrators
- a change in sample or reagent volumes due to pipettor misadjustment or misalignment
- a change in temperature of incubators and reaction blocks
- deterioration of a photometric light source
- a change in procedure from one operator to another
Random errors may be caused by factors such as:
- bubbles in reagents and reagent lines
- inadequately mixed reagents
- unstable temperature and incubation
- an unstable electrical supply
- individual operator variation in pipetting, timing, etc.
Erratic performance due to occasional air bubbles in sample cups or syringes, or to defective unit-test devices, is a different kind of random error, often called "flyers": these results aren't really caused by a change in the imprecision of the method, but rather represent an occasional disaster. It is very difficult to catch flyers by QC. Replicate determinations on patient samples may be a better way of detecting these kinds of events.
Good Habit #3: Consider factors in common on multitest systems
In the case of multitest instruments, problems may occur with only one test or with many tests. When only one test has an error, the problem-identification process is identical to that described above. When many tests on an instrument are displaying QC problems, troubleshooting should be aimed at the things those tests have in common, for example:
- Do the affected tests all use small or large sample sizes?
- Do they use the same filter?
- Do they use the same lamp, while the unaffected tests use a different lamp?
- Do they use the same mode of detection (endpoint vs. rate, MEIA vs. FPIA)?
- Are they all calibrated, or are their calibrations verified?
- Do they share certain mechanical or optical components?
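A sketch of this common-factor reasoning, using a hypothetical mapping from tests to instrument components:

```python
# Sketch of common-factor troubleshooting on a multitest instrument.
# The test-to-component mapping below is entirely hypothetical.
components = {
    "glucose":    {"lamp A", "filter 340nm", "pipettor 1", "endpoint"},
    "ALT":        {"lamp A", "filter 340nm", "pipettor 2", "rate"},
    "creatinine": {"lamp A", "filter 510nm", "pipettor 1", "rate"},
    "sodium":     {"ISE module", "pipettor 1"},
}

failing = ["glucose", "ALT", "creatinine"]

# Components shared by every failing test are the prime suspects...
shared = set.intersection(*(components[t] for t in failing))
# ...especially those not used by the tests that are still in control.
unaffected = set.union(*(components[t] for t in components if t not in failing))
print("Shared by failing tests:", shared)
print("Unique to failing tests:", shared - unaffected)  # here: lamp A
```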
Good Habit #4: Relate causes to recent changes
Systematic errors are most often related to reagent or calibration problems. A sudden shift is usually due to a recent event, such as replacement of a reagent, introduction of a new reagent lot number, a recent calibration, or a change in calibrator lot number. When a shift is identified, the operator should inspect the reagent, calibration, and maintenance records for clues to resolving the problem. For example, if the shift occurred immediately after a reagent replacement, verify that the lot number is correct, that the reagent has been checked out or calibrated, that it has been prepared properly, and that it is indeed the correct reagent.
A systematic trend can be more difficult to resolve than a shift simply because the problem develops over a longer period of time. Review QC records, including documentation of function checks, before taking actions to resolve the cause. Trends can be the result of a slowly deteriorating reagent, a calibration shift, a change in instrument temperature, or a deteriorating filter or lamp. Use a systematic, logical troubleshooting approach to isolate the cause, making only one change at a time and documenting each action taken.
In contrast, problems resulting in increased random error are much more difficult to identify and resolve, largely because random error, unlike systematic error, cannot be predicted or quantified. Random errors are more likely due to bubbles in the reagent, reagent lines, or sampling and reagent syringes; improperly mixed or dissolved reagent; pipet tips not fitting properly; a clog in the pipettor; an imprecise pipettor; or an unstable power supply and power fluctuations. Many sources of random error can be observed by physical inspection of the analytical method during operation. Careful inspection of reagents and of the sampling/reagent pick-up and dispensing activities will often identify the cause of the problem. If nothing is observed during the inspection, consult troubleshooting guides and manufacturer recommendations. If the run is repeated and the controls are "in" but you feel that you didn't really do anything to "fix" the problem, you may want to perform a precision run on a patient sample, making ten back-to-back determinations; this may reveal further imprecision problems. Duplicate analysis of patient specimens is also recommended when monitoring random error problems.
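A sketch of evaluating such a precision run, with hypothetical replicate values and an illustrative (not standardized) decision threshold:

```python
# Sketch: assess imprecision from ten back-to-back determinations on one
# patient sample. Values and the 2x-SD flag below are illustrative only.
import statistics

replicates = [4.1, 4.6, 3.8, 4.3, 4.9, 3.9, 4.5, 4.2, 3.7, 4.8]
established_sd = 0.15  # the method's SD from routine QC (assumed)

mean = statistics.mean(replicates)
observed_sd = statistics.stdev(replicates)
cv = 100 * observed_sd / mean
print(f"mean={mean:.2f}  SD={observed_sd:.3f}  CV={cv:.1f}%")
if observed_sd > 2 * established_sd:
    print("Observed SD well above the established SD: suspect increased random error.")
```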
Good Habit #5: Verify the solution and document the remedy
After the cause of the problem has been identified, it must be corrected and the solution verified by retesting all of the controls. This generally means "loading up" all the controls at the front of a run to assess control status. Once the system is back in control, patient samples from the out-of-control run should be repeated as necessary. The out-of-control event must be documented along with the corrective action. Troubleshooting reports should be completed for unusual problems to aid in future problem-solving.
Summary
Not all analytical methods are subject to the same sources of error. Certain problems occur more frequently with some systems than others. Basic troubleshooting guides, based on the system operating characteristics, should be developed for each analytical method/system. Key operators can often recognize the most common problems with a given system and are more skilled at problem resolution than the infrequent operator. The knowledge of these key operators should be tapped to identify logical troubleshooting approaches which can be used by all operators.
References
- Tetrault GA, Steindel SJ. Q-Probe 94-08: Daily quality control exception practices. Chicago, IL: College of American Pathologists, 1994.
- Seehafer JS. Corrective actions: what to do when control results are out of control. Med Lab Observ 1997;29(3):34-40.
- Hyltoft Petersen P, Ricos C, Stockl D, Libeer JC, Baadenhuijsen H, Fraser C, Thienpont L. Proposed guidelines for the internal quality control of analytical results in the medical laboratory. Eur J Clin Chem Clin Biochem 1996;34:983-999.