Basic QC Practices
What's wrong with traditional quality control?
Dr. Westgard addresses the Frequently-Made-Complaints (FMCs) about statistical quality control. Over the years, a whole host of gripes have accumulated. Learn which of them are valid criticisms and which of them are just plain whining.
Frequently-Made-Complaints (FMCs) about Statistical QC (SQC):
- "SQC is not patient focused"
- "SQC doesn't consider improvements in instrument performance!"
- "'One size fits all' SQC is not appropriate!"
- "SQC is not appropriate for unit devices!"
- "SQC is too expensive!"
- "SQC takes too much time!"
- "SQC is too complicated!"
- "SQC is old!"
- What's right with SQC?
- References
Reprinted with permission from the Australasian Association of Clinical Biochemists Newsletter, June 1999 issue.
There seems to be a lot of sentiment for changing traditional QC practices - getting rid of what we've been doing - statistical QC (SQC) - and doing something different (whatever that might be). I'll take this opportunity to comment on some of the reasons that are given.
"SQC is not patient focused!"
That's true when a laboratory's QC practices are based on regulatory, accreditation, and professional recommendations, instead of selecting control rules and numbers of control measurements on the basis of the quality that is required for the test. Today's control practices are often arbitrary and we should properly call what we're doing "arbitrary control", not quality control.
I have long advocated the use of a QC planning process to select appropriate QC procedures on the basis of the quality required for the test. That was the theme of the Roman Lectures I presented in 1994 [1]. The tools and technology are available to accomplish this, and they have been validated against standard industrial techniques by Australian scientists [2]. The problem is that laboratories haven't defined the test quality that is necessary for patient care! No QC technique can be patient focused until you define the quality needed and select the QC procedure to assure that quality is achieved by the methods in your laboratory.
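To make the link to industrial techniques in [2] concrete: the critical systematic error used in QC planning and the industrial process capability index Cpk are two views of the same quantity. The short sketch below assumes the usual textbook definitions (a two-sided quality requirement of +/- TEa about the target and a 1.65 z-value for a 5% maximum defect rate); the function names and numbers are illustrative, not taken from the cited paper.

```python
# A minimal sketch of the relationship between the critical systematic error
# used in QC planning and the industrial capability index Cpk, assuming:
#   delta_SEcrit = (TEa - |bias|)/s - 1.65   (5% maximum defect rate)
#   Cpk          = (TEa - |bias|)/(3*s)
# which gives delta_SEcrit = 3*Cpk - 1.65. Values below are illustrative only.

def critical_systematic_error(tea: float, bias: float, s: float) -> float:
    """Critical systematic error, in multiples of the method SD."""
    return (tea - abs(bias)) / s - 1.65

def process_capability_index(tea: float, bias: float, s: float) -> float:
    """Cpk for a two-sided quality requirement of +/- TEa about the target."""
    return (tea - abs(bias)) / (3.0 * s)

if __name__ == "__main__":
    tea, bias, s = 10.0, 1.0, 2.0   # percent units, illustrative values
    dse = critical_systematic_error(tea, bias, s)
    cpk = process_capability_index(tea, bias, s)
    print(f"delta SEcrit = {dse:.2f}, Cpk = {cpk:.2f}, 3*Cpk - 1.65 = {3*cpk - 1.65:.2f}")
```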
"SQC doesn't consider improvements in instrument performance!"
We're all aware that later generations of instruments have better precision and better stability than those in the past. It's correct to question whether we should do QC the same old way on these new instrument systems. The answer again is to plan the QC procedure properly and take instrument performance into account. New NCCLS guidelines [3, document C24-A2] outline the steps for planning a QC procedure as follows:
- Define the quality requirement
- Determine method performance - imprecision and bias
- Identify candidate SQC strategies
- Predict QC performance
- Set goals for QC performance
- Select appropriate QC
The second step considers improvements in instrument performance and should lead to simpler SQC procedures. For example, we demonstrated (ten years ago) that it was appropriate to change from multi-rule QC procedures to single-rule procedures with wide limits (a 1-3.5s rule) for 14 out of 18 tests on a multi-test chemistry analyzer [4,5]. The approach was to optimize QC for each individual test on the instrument, rather than use a single QC procedure for all tests.
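To show what that comparison means in practice, here is a deliberately simplified sketch of a single wide-limit rule versus a subset of the classic multirule criteria, applied to control results expressed as z-scores. It illustrates the rule logic only, not QC planning software; the function names are illustrative and the thresholds follow common usage.

```python
# A simplified illustration (not production QC software) of a single
# wide-limit rule versus a multirule procedure. Control results are
# expressed as z-scores: (observed - target mean) / target SD.

from typing import List

def single_rule_1_3_5s(z_scores: List[float]) -> bool:
    """Reject the run if any control exceeds +/- 3.5 SD."""
    return any(abs(z) > 3.5 for z in z_scores)

def multirule_subset(z_scores: List[float]) -> bool:
    """Reject on a subset of the classic multirule criteria:
    1-3s: any control beyond +/- 3 SD
    2-2s: two consecutive controls beyond +/- 2 SD on the same side
    R-4s: range between two consecutive controls exceeds 4 SD
    """
    if any(abs(z) > 3.0 for z in z_scores):
        return True
    for a, b in zip(z_scores, z_scores[1:]):
        if (a > 2.0 and b > 2.0) or (a < -2.0 and b < -2.0):
            return True
        if abs(a - b) > 4.0:
            return True
    return False

if __name__ == "__main__":
    run = [2.4, 2.2]   # illustrative z-scores for N=2 controls
    print("1-3.5s reject:", single_rule_1_3_5s(run))   # False
    print("multirule reject:", multirule_subset(run))  # True (2-2s)
```

As the example run shows, the wide-limit single rule ignores shifts that the multirule criteria flag; that is exactly why it is appropriate only when method performance is good relative to the quality requirement.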
"'One size fits all' SQC is not appropriate!"
True, you should select the control rules and number of control measurements that are appropriate for each test by implementing a QC planning process. You should establish run length and frequency of control analysis on the basis of the stability of the method and its susceptibility to problems. That will allow you to individualize the QC procedure for the quality required by your customers and for the performance observed for the methods in your laboratory. Follow the general CLSI C24-A3 guidelines or the more detailed planning process described in the Quality Control Planning for Healthcare Laboratories training CD.
"SQC is not appropriate for unit devices!"
There are both "business" and technical aspects to this argument. The "business" aspect tends to be the main driving force because unit devices are often used in the expanding Point-of-Care testing applications, where the personnel seldom have laboratory training or any understanding of SQC. The solution offered by CLSI in the proposed guideline on "Quality Management for Unit Use Testing" is to develop a "sources of errors" matrix and identify specific methods for controlling each source of error [6]. One common way of monitoring many of the individual error sources is to depend on operator "training and competency", which is itself difficult to evaluate and verify. SQC may actually be the most quantitative way to monitor operator training and competency, as well as important operator variables [7].
The technical argument is that unit devices can't be monitored by SQC because each device is separate and different. Sampling one device doesn't assure the next one is okay; therefore, it's argued that SQC can't be used. However, all manufacturers claim that these unit devices are uniform; otherwise, they shouldn't be marketing them at all. If they are uniform, then SQC can be applied to monitor the general stability of the devices, as well as operator proficiency in using the devices.
"SQC is too expensive!"
Compared to what? Certainly there are techniques like procedural controls and electronic QC that are less costly to perform, but are they actually less expensive? Procedural controls and electronic QC typically monitor only a few instrument variables and steps in the measurement process. What about the failure costs from errors in other steps that go undetected and impact negatively on patient treatment and outcome? It may be that these techniques are cheap for the laboratory and expensive for the patient. Because traditional QC can monitor many analytical steps and variables in the total testing process, it is a very efficient technique compared to all the separate checks that are needed to comprehensively monitor all the individual variables [8].
"SQC takes too much time!"
What time are we talking about here - the time to analyze controls, the time to interpret control results, or the time for dealing with out-of-control signals? The real concern should be the turnaround time for reporting test results, where the QC problem in many laboratories is due to "false rejections" that require repeat analysis of controls, analysis of new controls, and re-runs of patient samples. Again, proper planning of QC procedures is critical to minimize false rejections and to provide appropriate detection of medically important errors. When this is done, a rejection signal should lead to trouble-shooting that eliminates analytical problems, rather than wasteful re-work that brings no improvement [9]. This is a good investment of time because, in the long run, there will be fewer problems, fewer out-of-control situations, more rapid and effective problem-solving, and fewer delays in reporting patient test results.
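To put a rough number on the false-rejection problem: assuming independent, Gaussian-distributed control results and a stable method, the chance that at least one control exceeds its limits purely by chance grows quickly with the number of control measurements. The sketch below is a back-of-the-envelope illustration, not a substitute for a proper power-function analysis.

```python
# A back-of-the-envelope sketch (assuming independent, Gaussian control
# results and a stable method) of how the chance of a false rejection grows
# with the number of control measurements N for simple limit rules.

from math import erf, sqrt

def p_outside(k_sd: float) -> float:
    """Probability that a single control falls outside +/- k SD by chance."""
    return 1.0 - erf(k_sd / sqrt(2.0))

def false_rejection_prob(k_sd: float, n_controls: int) -> float:
    """Probability that at least one of N controls exceeds +/- k SD."""
    return 1.0 - (1.0 - p_outside(k_sd)) ** n_controls

if __name__ == "__main__":
    for k in (2.0, 2.5, 3.0, 3.5):
        rates = ", ".join(f"N={n}: {false_rejection_prob(k, n):.1%}" for n in (1, 2, 4))
        print(f"limit +/- {k} SD  {rates}")
```

Running this shows, for example, that 2 SD limits produce roughly a 9% false-rejection rate with two controls per run, while 3 SD limits keep it well under 1% - which is why rule selection, not habit, should set the limits.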
"SQC is too complicated!"
There are at least two different issues here - one dealing with the difficulty in training personnel and the other the difficulty in implementing QC in a proper manner. The solutions for both are new technologies - in the first case Internet technology to support training and in the second computer technology to automate the QC process.
Basic QC training is already available on the Internet through the American Society for Clinical Laboratory Science (http://www.ascls.org/basicqc/). These materials are also available in hardcopy and CD formats for laboratories that have limited access to the Internet [11].
Automated QC technology is under development. An example of software that automates the QC selection process is provided by the QC Validator 2.0 program [12,13], which makes use of charts of operating specifications (OPSpecs charts) to show the relationship between the quality required, the imprecision and inaccuracy observed for the measurement procedure, and the error detection capabilities of different QC procedures [14]. An educational version of this computer program is included as part of the training CD for Quality Control Planning.
It would be ideal, of course, to have an automatic QC selection function integrated with a QC flagging function in an instrument system, a QC workstation, or a laboratory information system. The laboratory could then specify the quality required for the test and the automated QC process would select the appropriate QC procedure, load and sample the control materials, acquire the necessary control data, interpret the control data, and release or reject patient test results. Rather than worrying about which rules to use, the laboratory's responsibility would be focused on the quality needed for the application of the tests.
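As a rough illustration of the kind of selection logic such a system might embed, the sketch below maps a quality requirement and observed method performance to a candidate rule set using sigma-metric thresholds that are a commonly cited rule of thumb. It is not the QC Validator or OPSpecs algorithm - a real planning tool would work from power curves or OPSpecs charts - and the names and cut-offs here are illustrative.

```python
# A deliberately simplified sketch (not the QC Validator / OPSpecs algorithm)
# of automated QC selection driven by the quality requirement and observed
# method performance. The sigma-metric thresholds are a commonly cited rule
# of thumb; a real planning tool would use power curves or OPSpecs charts.

from dataclasses import dataclass

@dataclass
class QCStrategy:
    rules: str          # control rules to apply
    n_controls: int     # number of control measurements per run

def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma-metric: how much 'room' the quality requirement leaves, in SDs."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def select_qc(tea_pct: float, bias_pct: float, cv_pct: float) -> QCStrategy:
    sigma = sigma_metric(tea_pct, bias_pct, cv_pct)
    if sigma >= 6.0:
        return QCStrategy("1-3s (single rule, wide limits acceptable)", 2)
    if sigma >= 5.0:
        return QCStrategy("1-3s / 2-2s / R-4s", 2)
    if sigma >= 4.0:
        return QCStrategy("1-3s / 2-2s / R-4s / 4-1s", 4)
    return QCStrategy("full multirule with maximum affordable N; consider improving the method", 6)

if __name__ == "__main__":
    # Illustrative numbers: TEa = 10%, bias = 1.5%, CV = 1.2%
    print(select_qc(10.0, 1.5, 1.2))
```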
"SQC is old!"
I am beginning to take offense at the comment that "old" implies "no longer useful." It may be because of my increasing age, but I think experience is beneficial and leads to dependability. Traditional QC, which was derived from industrial statistical QC, continues to be the cornerstone of industrial production worldwide because it's a dependable, proven technique. It has also been a fundamental technique for improving the quality of test results in healthcare laboratories. We need to recognize that it takes time to become "tried and true" and that a well-established technique should not be discarded without careful evaluation of the new technique. What's the new technique and where's the documentation of effectiveness? You shouldn't have to "trust me" or the manufacturer! You need a dependable, proven, and independent technique for managing and controlling the quality of your work.
What's right with SQC?
There are lots of things that are wrong with the way we use SQC, but here's what's right - SQC is still the best technique available for managing analytical quality in healthcare laboratories! The biggest potential for improving QC systems is to do SQC right.
References:
- Westgard JO. A QC planning process for selecting and validating statistical QC procedures. Reviews in Clinical Biochemistry 1994;40:1909-14.
- Chester D, Burnett L. Equivalence of critical error calculations and process capability index Cpk. Clin Chem 1997;43:1100-1.
- NCCLS C24-A2. Statistical quality control for quantitative measurements: Principles and definitions; Approved guideline - second edition. NCCLS, Wayne, PA, 1999.
- Koch DD, Oryall JJ, Quam EF, Feldbruegge DH, Dowd DE, Barry PL, Westgard JO. Selection of medically useful quality-control procedures for individual tests done in a multitest analytical system. Clin Chem 1990;36:230-3.
- Westgard JO, Oryall JJ, Koch DD. Predicting effects of quality-control practices on the cost-effective operation of a stable, multitest analytical system. Clin Chem 1990;36:1760-4.
- NCCLS EP18-P. Quality management for unit use testing; Proposed guidelines. NCCLS, Wayne, PA, 1999.
- Westgard JO. Taking care of point-of-care QC. Clin Lab News Viewpoint, August 1997.
- Westgard JO. Electronic QC and the total testing process.
- Hyltoft Petersen P, Ricos C, Stockl D, Libeer JC, Baadenhuijsen H, Fraser C, Thienpont L. Proposed guidelines for the internal quality control of analytical results in the medical laboratory. Eur J Clin Chem Clin Biochem. 1996;34:983-99.
- Watkinson L. ACB Newsletter, Mar/Apr 1999.
- Westgard JO. Basic QC Practices. Westgard QC, Madison, WI, 1998.
- Westgard JO, Stein B, Westgard SA, Kennedy R. QC Validator 2.0: a computer program for automatic selection of statistical QC procedures for applications in healthcare laboratories. Comput Methods Programs Biomed 1997;53:175-86.
- Westgard JO, Stein B. Automated selection of statistical quality-control procedures to assure meeting clinical or analytical quality requirements. Clin Chem 1997;43:400-403.
- Westgard JO. OPSpecs Manual - Expanded Edition. Westgard QC, Madison, WI, 1996.