CLIA QC and Q-less Compliance
An updated version of this essay appears in the book Nothing but the Truth about Quality.
We've posted a large number of articles about the specifics of the Final CLIA Rule and the accompanying Interpretive Guidelines. But in addition to covering the specifics, we need to look at the big picture. Why did we get the CLIA Final Rule and Interpretive Guidelines the way that we did? Why did it take over ten years to complete? Why did the manufacturers' QC clearance provision go by the wayside? How did we end up with "electronic QC" and "equivalent QC"?
- In the news: lies, damn lies, and economic indicators
- What's the problem with CLIA QC regulations?
- What's the goal? Quality or Compliance?
- Who's responsible?
February 2004
Now that the CLIA Final Rule is finally final and the CMS Interpretive Guidelines are out, it is worthwhile to take a step back and try to understand the Big Picture of QC. What is the final outcome for QC in laboratory testing?
We’ve posted a number of earlier discussions of the CLIA Final Rule, the CMS Interpretive Guidelines, and the new options for “equivalent QC procedures.” Several of those earlier discussions are particularly relevant to the comments that follow.
Although CMS acknowledges that QC remains one of the major deficiencies observed during laboratory inspections, the new CMS interpretive guidelines open the door for laboratories to do even less QC, reducing the already-minimal practice of running controls once a day to options of running controls only once a week or even once a month. We call these new “equivalent QC options” Eqc, i.e., big E, little q, and little c, meaning they may allow Big Errors and provide little quality and little control. While these Eqc options should make it easier for laboratories to be in compliance with the regulations, improved compliance won’t really mean that QC practice is improving. And the fact that the inspection statistics will show a higher level of compliance will likely be misused and misinterpreted.
For a moment, let's step into a different world to see the effect of government rules, regulations, and indicators. A look at the evolution of economic indicators may give us a better perspective on what is happening to us in healthcare and in the laboratory.
In the News: Lies, damn lies, and economic indicators
If you look at the numbers for early 2004, the US economy is growing, unemployment is down, and stocks are on the rise again. So everyone's happy, right?
Unfortunately, no. Many of the economic indicators that determine the health of the economy are statistics in the worst meaning of the word. That is, they are numbers twisted and turned and manipulated (like an Enron annual report) until they look good.
The unemployment rate is a classic “statistic” that is misused and abused. It is common knowledge that the number of “unemployed” reported by the government is actually the number of unemployed persons still actively looking for work. If a jobless worker gets discouraged and stops looking for employment, then he or she is no longer considered unemployed. Thus in December of 2003, when the unemployment rate went down, it was for the wrong reason – people had given up looking for work.
However, the unemployment rate has become even more of a “statistic” due to relatively recent changes in Social Security policy. Austan Goolsbee, professor of economics at the University of Chicago Graduate School of Business [New York Times, Op-Ed Week in Review, Page 9, November 30th, 2003], notes that in the 1980s and 1990s, the rules for Social Security disability were loosened. The net effect is that people who would normally have been considered unemployed were able to apply for disability under Social Security, and were then considered “out of the workforce.” During the last four years, applications for disability payments rose 50%. Professor Goolsbee points out that this doesn't mean that a large number of people have suddenly become disabled – it's that they have decided to leave the workforce while the economy is bad. If those disability claims were counted, there would be as many as a million more unemployed people than the government currently reports.
Another part of the good news on the US economy is productivity. In the third quarter of 2003, productivity grew by 8.1 percent (in the non-farm business sector, we should note).
The chief economist of Morgan Stanley, Stephen S. Roach [New York Times, Op-Ed Week in Review, Page 9, November 30th, 2003], notes that there are significant problems in how productivity is determined. At best, the figure is more of an estimate than a hard number. Productivity is supposed to be a simple ratio: (work output) / (units of work time). In our increasingly service-based economy, however, it is exceedingly difficult to determine the output of a worker. Currently, the government just estimates that your compensation is roughly equivalent to your output (that is, your average CEO has several hundred times the output of his lowest-paid employee). Furthermore, the denominator of productivity is, in Roach's words, “patently absurd,” because it assumes that everyone is working a 35.5-hour week. As most of you know, there are few workers right now with the luxury of that short a work week; many are working far longer and are showing the strain.
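To see how much the understated denominator alone can inflate the figure, consider a rough, back-of-the-envelope illustration (the 35.5-hour week is the assumption Roach criticizes; the 50-hour actual work week is simply a hypothetical number chosen for the arithmetic):

\[
\text{reported productivity} = \frac{\text{output}}{35.5\ \text{hours}}, \qquad \text{actual productivity} = \frac{\text{output}}{50\ \text{hours}}
\]

For the same output, the reported figure is 50/35.5 ≈ 1.4 times the actual figure, an overstatement of roughly 40 percent, before any error in the compensation-based estimate of output is even considered.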
To summarize, the productivity numerator is really a guess and the denominator is artificially low. So the result is that the productivity figure is more of an arbitrary and inflated estimate than an indicator. Nevertheless, recent US productivity is regularly praised as one of the 'miracles' of the New Economy. These statistics are non-partisan, equal-opportunity offenders. Administrations both Democratic (Clinton) and Republican (Bush) were quite content to tout good productivity figures and low unemployment rates when it suited them. But the numbers have been manipulated to the point that they no longer reflect reality.
It would be nice to say that while economic indicators have been corrupted, we in healthcare are at least honest about our performance. Alas, with the Final CLIA Rule and the just-released Interpretive Guidelines, we are on the same path. Rather than assuring better QC from manufacturers and laboratories, the changes will lead to easier QC and false conclusions about the quality of laboratory testing. Statistics on laboratory compliance to CLIA QC regulations should improve, but that won’t mean that laboratory QC practices are improving.
What’s the problem with the original CLIA QC regulations?
Have you ever wondered why it has taken ten years to issue the CLIA Final Rule? Did CMS take those years to do further research into quality and the best practices for laboratory QC? NO, that's not the answer. What we saw was a series of postponements, delay after delay, while the government tried to figure out how to appease the various interests or stakeholders, as they’re now called.
- Manufacturers wanted the regulations to be as business-friendly as possible. They didn't want additional burdens of proof or support placed on them, such as the need to document the effectiveness of their QC instructions, which was required in the original CLIA rules. They opposed that provision and won that battle. Whether the government was in the hands of the Democrats or Republicans, it has been kinder, gentler, friendlier, and more subject to business interests and lobbying.
- Even though the intent of CLIA was to be sure that all laboratories provide quality services, the regulations were merely meant to be minimum standards. That is, you should probably do more than this, and if you go below this, you could end up in jail. The government probably didn't expect that many laboratories would reduce their QC practices to the minimums, squeezing costs down to the lowest level, quality be damned. And despite reducing QC to a minimal level, the government still finds that laboratories are deficient in their QC practices. Which, of course, leads to the need for Eqc to further reduce the QC in laboratories, right?
- Many laboratories want to do as little QC as possible to stay in business because there is a serious shortage of well-trained laboratory professionals, spurred in great part by the reduction in personnel standards in the CLIA rules. Workload has not diminished; therefore, laboratories must continue to crank out all the tests that the physicians are ordering as fast as possible, with as little expense as possible, while hopefully still boosting the hospital’s bottom line. QC is a hassle, particularly if quality is poor and personnel skills are low. While doing less QC doesn’t make the problems go away, the problems are not as evident. And with less evidence of problems, fewer runs need to be rejected, cost is further reduced, and productivity is further improved. And with Eqc, laboratories will be able to do even more with less QC.
- Perhaps the most direct factor influencing the need for the Eqc guidelines was CMS’s earlier stance on Electronic QC. In effect, CMS put itself between a rock and a hard place. It temporarily allowed the practice of “electronic QC” while the Final Rule was pending. But that also meant that once the Final Rule was issued, a decision would have to be made about allowing Electronic QC or not. Eqc is CMS’s answer to the issue of the acceptability of electronic QC.
To me, it was a foregone conclusion that CMS had to come up with a way to allow Electronic QC as a substitute for traditional QC. The FDA QC clearance process was a fundamental and essential part of the original CLIA-88 implementation strategy. It was the part that made it possible to simplify the requirements for laboratories and allow them to just follow manufacturers’ directions and be in compliance with the CLIA quality regulations.
In my opinion, CMS was left with a hole in the dam that had to be patched. Unfortunately, the dam should have been constructed properly from the original plans so there would be no need to fix any leaks. But a leak did occur when Electronic QC was introduced as a substitute for running external controls in POC applications. While the practical arguments for Electronic QC are generally that (i) regular QC is too expensive (due to the cost of the test or cartridge), and (ii) traditional QC is too difficult (due to the lack of training of operators in POC sites), the real problem is that many organizations have now implemented testing services and would not be in compliance unless Electronic QC was acceptable. So CMS had to create a different, alternate, “equivalent” way to give Electronic QC a quasi-legal justification. Do you think it is just a coincidence that CMS uses the abbreviation EQC to accommodate Electronic QC? I think it reveals the motivation, conscious or unconscious, behind the EQC options!
What’s the goal? Quality or compliance?
Why should we oppose the CMS recommendations on Eqc? After all, Eqc makes it easier for laboratories to be in compliance with the CLIA QC regulations. Isn’t that better? Won’t the improved compliance statistics assure the public that healthcare laboratories are doing a good job?
NO, compliance should not be the goal and compliance statistics should not be misinterpreted to suggest improvements in the quality of laboratory testing! The goal must be to provide quality testing services! Compliance equates to average, being good enough to stay out of jail. Quality is about excellence, being good enough to take proper care of our patients. The Final CLIA QC rules and interpretive guidelines set the standard so low that QC now represents “Quality-less Compliance” rather than Quality Control.
The Eqc options are obviously and absurdly inadequate when judged against CLIA’s own standard that QC “should detect immediate errors that occur due to test system failure, adverse environmental conditions, and operator performance, monitor over time the accuracy and precision of test performance that may be influenced by changes in test system performance and environmental conditions, and variance in operator performance.” The recommended evaluation protocols are just as inadequate. The idea that individual laboratories, rather than manufacturers, can and should take responsibility for documenting the effectiveness and performance of the internal procedural controls developed by the manufacturer is itself absurd, particularly in light of the staffing and skill issues in laboratories today. Manufacturers are best able to do that, not individual laboratories.
Who’s responsible?
While the fatal step in the long process of QC degradation occurred some ten years ago, when the FDA was unable or unwilling to implement a clearance process for manufacturers’ QC labeling instructions, the consequences were only felt recently, when the Final CLIA Rule was published and the QC clearance provision was officially removed. That made it necessary for CMS to fill that hole in the regulations with the Eqc options. Manufacturers were certainly partly responsible for making that happen. They are also suffering the consequences today in terms of what is happening to their analytical systems in the field. Quality is becoming a bigger issue for manufacturers, and maybe they will find it beneficial to develop improved QC technology, document its performance, and voluntarily request FDA approval of their QC claims and instructions. That’s one solution to our QC problems, but it’s voluntary for the manufacturers.
Another solution is for us as laboratory scientists and professionals in this field to personally assert more control over QC practices and avoid the implementation of hazardous approaches, such as the Eqc options. We can take responsibility to avoid the use of Eqc, and if we don’t, we end up being responsible for the consequences of using Eqc. Keep in mind the little footnote that appears in the CMS interpretive guidelines:
“Note. Since the purpose of control testing is to detect immediate errors and monitor performance over time, increasing the interval between control testing (i.e., weekly or monthly) will require a more extensive evaluation of patient test results when a control failure occurs. The director must consider the laboratory’s clinical and legal responsibility for providing accurate and reliable patient test results versus the cost implications of reducing the quality control testing frequency.”
The laboratory is clearly responsible for the clinical and legal consequences of the patient test results it provides, regardless of whether it is in compliance with CLIA QC regulations or not. The bottom line is that the laboratory must be in compliance to assure payment for government patients and services, but it must provide quality testing services to assure proper patient care and treatment. Compliance with CLIA QC regulations is necessary, but not sufficient. Responsibility for quality is essential!
James O. Westgard, PhD, is a professor of pathology and laboratory medicine at the University of Wisconsin Medical School, Madison. He also is president of Westgard QC, Inc., (Madison, Wis.) which provides tools, technology, and training for laboratory quality management.