An analysis of the scientific rationale provided by CMS to justify "Equivalent" QC procedures in the CLIA Final Rule.
I applaud CMS for responding to Patricia White’s request (1) for an explanation of the Equivalent QC (EQC) recommendations (2).
There clearly is a need for better QC methodology and technology, especially in Point-of-Care applications, which seem to be the focus of the EQC recommendations. My concern is whether there is a scientific basis for EQC, including a scientific basis for the evaluation process recommended by CMS.
CMS recognizes the need for laboratories to evaluate the suitability of applying the EQC option to their tests and methods (2):
“Prior to making any decisions about decreasing the frequency of external control testing, a critical factor repeatedly mentioned was the need to conduct an evaluation to verify the test system’s stability and the ability of internal controls/monitors to reliably detect errors and alert the operator to test system problems.”
CMS justifies its recommendations for the evaluation process by reference to CLSI standards (2), particularly M2-A, “Performance Standards for Antimicrobial Disk Susceptibility Tests.” According to the CMS letter, “performing and documenting these types of evaluations appears to be standard practice in many accredited laboratories.” Maybe this practice exists in microbiology, but to my knowledge it hasn’t become a standard practice in chemistry and other areas. Given that most POC testing is in the chemistry area, I’m also a little concerned with the extrapolation from microbiology to other areas where measurements are both more complicated and more quantitative.
Nonetheless, while I have to admit that I don’t know much about QC in microbiology, I have begun to study the M2-A document in the hope of gaining a better understanding of the EQC guidelines. Unfortunately, this has raised even more questions!
M2-A began as a proposed standard in 1975 and was first approved in 1984. Since then, it has been revised in 1988, 1990, 1993, 1997, 2000, and 2003. The latest document carries the notation “M2-A8” to identify the eighth revision.
Here’s what M2-A8 (3) says about the frequency of quality control testing:
10.5.1 Daily Testing
When testing is performed daily, for each antimicrobial agent/organism combination, 1 out of every 20 consecutive results may be out of the acceptable range (based on 95% confidence limits, 1 out of 20 random results can be out of control). Any more than 1 out-of-control result in 20 consecutive tests requires corrective action (see section 10.6).
10.5.2 Weekly Testing
10.5.2.1 Demonstrating Satisfactory Performance for Conversion from Daily to Weekly Quality Control Testing
10.5.2.2 Implementing Weekly Quality Control Testing
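To make the quoted “1 out of 20” probabilities concrete, here is a rough binomial sketch of the arithmetic. It is my own illustration, not part of the CLSI document, and it assumes each QC result independently has a 5% chance of falling outside the 95% confidence limits:

```python
# Illustrative only: binomial arithmetic behind the "1 out of 20" criterion,
# assuming each result independently has a 5% chance of exceeding the limits.
from math import comb

p = 0.05   # probability a single in-control result falls outside 95% limits
n = 20     # number of consecutive QC results examined

def prob_exactly(k):
    """Probability of exactly k out-of-range results among n."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p0 = prob_exactly(0)        # no out-of-range results
p1 = prob_exactly(1)        # exactly one, still "acceptable" per 10.5.1
p_flag = 1 - p0 - p1        # more than one: triggers corrective action

print(f"P(0 of 20 out)  = {p0:.3f}")   # about 0.36
print(f"P(1 of 20 out)  = {p1:.3f}")   # about 0.38
print(f"P(>1 of 20 out) = {p_flag:.3f}")  # about 0.26
```

Under these simplifying assumptions, roughly a quarter of 20-test sequences from a method that is truly in control would still exceed the “more than 1 in 20” threshold, which is why the statistical reasoning behind the criterion deserves careful documentation.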
If the CLSI M2-A8 document is the basis for the evaluation processes recommended for Equivalent QC, then CMS needs to answer the following questions to justify their recommendations:
In this era of evidence-based medicine, quality and quality control also need to be evidence-based. If laboratory tests are important for evidence-based medical decision making, then the quality of laboratory testing will also need to be based on the best scientific evidence. The need for a scientific assessment is particularly important when making changes in our standard procedures and practices. Such changes are supposed to make things better, not worse! How will we know if we don’t properly evaluate the effects of these changes?
In the scale used for evaluating the “level of evidence,” consensus groups are the lowest form of evidence. All forms of scientific study rank higher. Therefore, reference to the M2-A guidelines is the weakest form of evidence. We have previously recommended that CMS make use of the 30-day evaluation study to characterize the sigma performance of the test methods and then relate the QC recommendations to the observed quality of the methods. There is a scientific way to do this, it can utilize the same data from the 30-day evaluation period, and it provides better evidence of the amount of QC that is needed to assure the quality of each test-method combination (4).
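As a purely illustrative sketch of that recommendation, the precision and bias estimates gathered during a 30-day evaluation could be combined into a sigma metric, Sigma = (%TEa − %Bias)/%CV, which characterizes how much margin the method has against the quality requirement. The control results, target value, and allowable total error below are hypothetical and only show the arithmetic:

```python
# Illustrative sketch: computing a sigma metric from 30-day evaluation data.
# The measurements and the allowable total error (TEa) are hypothetical.
import statistics

target_value = 100.0   # assigned value of the control material (hypothetical)
tea_percent = 10.0     # allowable total error for the analyte, in % (hypothetical)

# Daily control results collected over the 30-day evaluation period (hypothetical)
results = [99.2, 101.5, 100.8, 98.7, 100.1, 101.9, 99.5, 100.4, 98.9, 101.2,
           100.6, 99.8, 100.9, 99.1, 101.0, 100.3, 99.6, 100.7, 98.8, 101.4,
           100.2, 99.9, 100.5, 99.4, 101.1, 100.0, 99.7, 100.8, 99.3, 101.3]

mean = statistics.mean(results)
sd = statistics.stdev(results)

bias_percent = abs(mean - target_value) / target_value * 100
cv_percent = sd / mean * 100

# Sigma metric: how many SDs of "room" remain inside the quality requirement
sigma = (tea_percent - bias_percent) / cv_percent

print(f"Bias  = {bias_percent:.2f}%")
print(f"CV    = {cv_percent:.2f}%")
print(f"Sigma = {sigma:.1f}")
```

A method with high sigma performance can tolerate simpler, less frequent QC, while a method near 3 sigma needs more stringent rules and more controls; that is the kind of evidence-based linkage between measured performance and QC requirements argued for here.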
CMS has yet to demonstrate that the recommendations for evaluation of Equivalent QC procedures are scientifically sound! But we’ll give them another chance and look for a response to these questions about their proposed evaluation methodology, as well as the issue of utilizing Six Sigma methodology to provide a more scientific assessment of method performance and the related appropriateness of QC procedures.