Tools, Technologies and Training for Healthcare Laboratories

QC 2000: What changes are needed?

Forget about Y2K - what about Y2QC? What's going to happen to quality control in the laboratory? Are we going to give it up entirely? Will there be any people left in the automated laboratory? Dr. Westgard attempts to predict and recommend changes for this new millennium.

This article is reprinted from Clinical Laboratory News, July 1999.

Since its introduction by Levey and Jennings in 1950, statistical quality control (QC) has been a fundamental management practice in clinical laboratories (1). Today, some 50 years later, QC still depends on producing Levey-Jennings control charts -- a graphical technique in which the control rules and numbers of control measurements are selected manually, and the control results are often interpreted manually. Most laboratory information systems, automated analytic systems, and some point-of-care devices now provide computer assistance for online data acquisition, display of control data, and flagging of "out-of-control" runs; however, QC is still a problem area in many laboratories. It continues to be one of the most frequently cited deficiencies in laboratory inspections. Clearly, QC practices will have to change in the next century for laboratories to continue to deliver accurate test results.
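To make the Levey-Jennings technique concrete, here is a minimal sketch (not from the article) of the calculation behind a control chart: limits are set from the mean and standard deviation of baseline control results, and a run is flagged when a control result falls outside the mean ± 3 SD limits. The control values are hypothetical.

```python
import statistics

def levey_jennings_limits(baseline):
    """Compute the center line (mean) and SD from baseline control results."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean, sd

def flag_1_3s(value, mean, sd):
    """Flag a control result that falls outside the mean +/- 3 SD limits."""
    return abs(value - mean) > 3 * sd

# Hypothetical glucose control results (mg/dL) from a stable period
baseline = [100.2, 99.8, 101.1, 98.9, 100.5, 99.4, 100.9, 99.7]
mean, sd = levey_jennings_limits(baseline)

print(flag_1_3s(100.8, mean, sd))  # within limits -> False
print(flag_1_3s(104.0, mean, sd))  # outside limits -> True
```

In practice the flagged points are what an analyst would see plotted beyond the control limit lines on the chart.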

Will QC Mean "Quality Compliance"?

Overall, CLIA '88 has equalized QC practices in U.S. laboratories. Almost every laboratory regulated under CLIA now performs only the minimum QC needed to meet the law's requirements. If this trend continues, QC in the future may stand for "quality compliance" rather than "quality control." However, there are other positive influences on standards of practice that I believe will counteract this trend towards compliance with government regulations as the only definition of appropriate QC practices.

International quality standards are being widely advocated, particularly the adaptation of the International Organization for Standardization (ISO) guidelines for health care laboratories (2). Diagnostics manufacturers must now consider the international marketplace when developing new analytical systems. As a result, international standards will take on as much importance as national regulatory requirements.

National guidelines, such as the new National Committee for Clinical Laboratory Standards document on statistical quality control (3), are recommending changes in QC practices. In particular, the guidelines focus on the following: careful planning of QC procedures to minimize cost and maximize quality; eliminating the wasteful practice of repeating runs and controls by reducing false rejections; troubleshooting out-of-control situations to eliminate sources of problems; and providing flexibility for manufacturers and laboratories to develop objective measures of run length to maximize the cost-effectiveness in routine service.

Professional interest in quality requirements is re-emerging in an effort to provide more objective and quantitative evaluation and management of laboratory testing processes. For example, a recent "Information for Authors" section of Clinical Chemistry (1999;45:1-5) recommended that the "results obtained for the performance characteristics [of new analytical methods] be compared objectively with well-documented quality specifications, e.g., published data on the state of the art, performance required by regulatory bodies such as CLIA '88, or recommendations documented by expert professional groups" (4).

How Should QC Evolve in the Future?

In a previous article in Clinical Laboratory News (5), I discussed the evolution of measurement procedures from manual methods to today's fourth and fifth generation automated systems. In that article, I pointed out that QC technology has not kept pace with technological advances. As we approach the year 2000, QC needs to evolve quickly to catch up with the latest generations of automated systems.

Here's a look at how QC could be conducted in the future. An operator steps up to an automated analyzer, turns on the computer, and accesses the instrument set-up program. Next, the instrument asks the operator to identify the tests to be activated for use in the laboratory and to define the level of quality required for the medical application of each of these tests. The operator loads the consumables, starts the analyzer, and the analyzer samples the patient specimens. Within a few minutes, the instrument provides test results that meet the defined requirements for quality. Simply put, with the next generation of analytical systems, operators should only have to define the quality needed, and the analytic system should be able to select and implement the QC procedures that are needed.

Sound too simple? I believe that QC must move in this direction because the skills of analysts are changing -- and not for the better. Statistical QC is a complex process that is not well understood by many of the health care providers who now perform laboratory tests. In today's clinical laboratories, new personnel are not as skilled as they were in the past, staffing is tighter, and less time is available for training and problem solving.

Manufacturers have hard evidence of these changes: the number of calls they receive for help in solving simple and routine problems is increasing dramatically. Additionally, health care organizations seem to be adopting a strategy of purchasing complete systems, then holding the manufacturers responsible for everything, including the quality of the test results. To assume this responsibility, manufacturers will need to develop analytical systems that are 100% dependable and never have problems. Since that scenario is unlikely, QC technology needs to be incorporated into the next generation of analyzers that can accommodate the skills of operators who have a wide variety of education and training.

Is Improved Instrument Performance and Accuracy the Answer?

Improvements in instrument precision, accuracy, and stability will certainly reduce the demands for QC. For example, QC is easier with later-generation, automated clinical chemistry systems. Almost ten years ago, we documented that single-rule QC procedures with limits as wide as 3.5 standard deviations and as few as two control measurements per run provide adequate control for 14 of 18 tests on a multi-test chemistry analyzer (7). However, only five years ago, we described the need for more complicated multi-stage QC designs for tests on an automated immunoassay analyzer (8), whose automation is a generation or two behind the more advanced clinical chemistry and hematology automated systems. By and large, there will always be laboratory methods that are evolving from first generation manual methods to the highly automated fifth generation or later systems; hence, there will always be a need for QC with emerging tests and early generation automated systems.
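The single-rule design described above can be sketched in a few lines. This is an illustration under assumed control targets, not the published procedure: with two control measurements per run, the run is rejected only if either result exceeds its mean ± 3.5 SD limit.

```python
def single_rule_qc(results, means, sds, limit=3.5):
    """Reject the run if any control result exceeds its mean +/- limit*SD.
    results, means, sds: one entry per control measurement (here n=2)."""
    for x, m, s in zip(results, means, sds):
        if abs(x - m) > limit * s:
            return "reject"
    return "accept"

# Hypothetical two-level control material: level 1 and level 2 targets
means = [100.0, 300.0]
sds = [2.0, 6.0]

print(single_rule_qc([102.5, 295.0], means, sds))  # both within 3.5 SD -> accept
print(single_rule_qc([108.0, 295.0], means, sds))  # level 1 exceeds 3.5 SD -> reject
```

The wide 3.5 SD limits are what keep false rejections -- and the wasteful repeat runs they trigger -- to a minimum on stable, high-precision analyzers.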

Is Electronic QC the Answer?

Because statistical QC is difficult to implement, consumes time and reagents, and requires operator training as well as ongoing support to update the QC statistics and maintain appropriate QC charts, there is increasing interest in electronic QC. In general, electronic QC is much less complex because it involves measuring a substitute electric signal instead of generating that signal by detection of the analyte in a sample. And since there are no specimens and no sample handling steps involved, it is easy to build electronic QC into most analyzers.

One way to assess the usefulness of electronic QC is to determine which steps in the total testing process are monitored by the different procedures (6). Keep in mind that regulatory and accreditation guidelines emphasize that laboratories should perform QC procedures to monitor the "total testing process," which includes pre-analytic steps (e.g., specimen acquisition and handling), analytic steps (e.g., sample manipulation, sample processing, measurement of a signal from the sensor, and instrument readout), and post-analytic steps (e.g., recording of test results). Typically, electronic QC checks only the performance of the instrument readout device. On the other hand, traditional QC monitors a multitude of parameters: the readout device; the sensor response; the chemical reaction that generates the product read by the sensor; many of the steps involved in processing the sample; and the proficiency of the operator who is performing the laboratory test.

Somehow, the issue of what steps are being monitored has gotten lost in the discussions of the usefulness of electronic QC. There now seems to be acceptance of electronic QC as a substitute for traditional QC if the electrical signal is available in a form that can be charted, instead of just a yes/no indicator. But focusing on the form of the signal is missing the whole point. The manner of displaying a result from electronic QC doesn't expand the steps of the process that are being monitored! Electronic QC monitors only one step or instrument component involved in the process. Statistical QC is still the most efficient procedure for monitoring the effects of many process steps, analytical factors, and instrument variables that affect the quality of the test result, as well as being the most quantitative way to assess the proficiency of the operator who is performing the test.

Is Automated QC Technology the Answer?

Given that statistical QC will still be useful and probably even essential in the future, the QC process will continue to be complex. Laboratory automation has proven to be a solution for making complex processes easy to use and less dependent on operator skills. I believe that efforts in total laboratory automation will have to expand to include automation of the QC process.

Assuming that the user or laboratory will define the quality required for a test, automated QC technology in the future should have the capability to perform the following functions:

  • integrate instrument function checks as part of ongoing QC;
  • characterize the ongoing precision and accuracy of the method;
  • select control rules and numbers of control measurements to guarantee the quality
    required by the user;
  • set up appropriate control charts for displaying performance;
  • introduce control solutions needed to monitor performance;
  • collect and interpret control data;
  • determine when controls need to be analyzed again;
  • identify problem specimens and problem runs;
  • support trouble-shooting that solves control problems; and
  • release test results automatically that meet the user's or laboratory's requirements for
    quality.

The good news is that all these capabilities are available today -- they're just not available in an integrated package that brings complete automation to the QC process. They will, however, be necessary for the evolution of QC technology into the next millennium because the skills of those performing laboratory tests are changing. Advances in QC technology are coming. In fact, techniques for the automatic selection of QC procedures and measurement of run-length using patient data have already been described in the literature (9, 10). New QC technology will become available to laboratories once customers identify the need for improved technology and manufacturers recognize its advantage in the marketplace.


  1. Levey S, Jennings ER. The use of control charts in the clinical laboratory. Am J Clin
    Pathol 1950;20:1059-66.
  2. International Organization for Standardization. ISO/TC 212: Clinical laboratory testing
    and in vitro diagnostic test systems.
  3. National Committee for Clinical Laboratory Standards. Statistical quality control for
    quantitative measurements: principles and definitions. Wayne, Pa.: NCCLS Document
    C24-A2, 1999. (approved guideline-second edition).
  4. Information for authors. Clin Chem 1999;45:1-5.
  5. Westgard JO. Strategies for cost-effective QC. Clin Lab News 1996;22(10):8-9.
  6. Westgard JO. Electronic QC and the total testing process.
  7. Koch DD, Oryall JJ, Quam EF, et al. Selection of medically useful QC procedures for
    individual tests on a multi-test analytical system. Clin Chem 1990;36:230-3.
  8. Mugan K, Carlson IH, Westgard JO. Planning QC procedures for immunoassays. J Clin
    Immunoassay 1994;17:216-22.
  9. Westgard JO, Stein B, Westgard SA, et al. QC Validator 2.0: a computer program for
    automatic selection of statistical QC procedures in healthcare laboratories. Comput
    Methods Programs Biomed 1997;53:175-86.
  10. Westgard JO, Smith FA, Mountain PJ, et al. Design and assessment of average of normals
    (AON) patient data algorithms to maximize run lengths for automatic process control. Clin
    Chem 1996;42:1683-8.

James O. Westgard, PhD, is a professor of pathology and laboratory medicine at the University of Wisconsin Medical School, Madison. He also is president of Westgard QC, Inc., (Madison, Wis.) which provides tools, technology, and training for laboratory quality management.