Trends
Trying your patience while trying out Patient-Based QC
Patient Based Real Time Quality Control (PBRTQC) is the newest trend in the elite laboratories and academic literature. What is it, and is it really ready for prime time?
James O. Westgard, PhD
November 2020
In the September issue of Clinical Chemistry, there was an interesting discussion of Patient Based Real Time Quality Control (PBRTQC) [1]. Presented as a Q&A article on behalf of the IFCC Laboratory Medicine Committee on Analytical Quality, it involved Tony Badrick as moderator and experts Andreas Bietenbeck, Alex Katayev, Huub H. van Rossum, Mark A. Cervinski, and Tze Ping Loh. This group is responsible for several recent papers on planning and implementing PBRTQC procedures [2-4]. Most recently (this November), van Rossum and van den Broek published an evaluation of a 10-month trial of PBRTQC in a hospital laboratory [5].
In the Q&A article in Clinical Chemistry [1], the group enthusiastically promoted the PBRTQC approach:
PBRTQC will become the mainstay of QC in laboratories once the profession sees the advantages of this form of process control, and manufacturers and middleware vendors provide the onboard capability.
The power of these techniques is that they offer exquisite customization to provide very sensitive detection of a change in bias.
Hand in hand with the implementation of PBRTQC is a need to change the mindset from human decision making to AI approaches to QC.
There is a need for large analytical systems to not only use the Hospital Information System to identify patient subgroups, but also for the Laboratory Information System to identify a significant drift, interrogate manufacturers databases regarding calibrator and reagent lot quality, and to initiate recalibration.
PBRTQC is a major step to integrating the laboratory into the hospital information system, and to a bigger dataset with the ultimate goal of better patient outcomes.
The future vs current reality
While it is useful to think about what might be possible in the future, it is also important to maintain a perspective on the current reality of laboratory practice.
First of all, this is about the 4th attempt to revolutionize laboratory QC by use of patient data. This approach is "re-discovered" every 15 years. With increasing computer power, it seems that now is always the right time to implement these new informatics-intensive computations. However, even with increasing computer capabilities, implementation is harder than it seems, as recognized by the IFCC group’s statement that manufacturers and middleware vendors need to provide more onboard capabilities.
Second, planning and optimization of PBRTQC procedures is not a trivial undertaking. Amongst the IFCC group, several different approaches have been utilized involving “trial and error, use of power function curves, and simulation.”[3] The variables to be considered include the PBRTQC algorithm (mean, median, moving average, Exponentially Weighted Moving Average, etc.), truncation limits, exclusion criteria, transformations, block size (number of patient samples or run size), weighting factor (relative importance of recent vs past results), operating mode (continuous or batch), control limits, and target block value (mean, median). These factors are also influenced by the particular quantity being measured, the expected patient distribution (narrow for sodium, wide for cholesterol), possible sub-populations (clinic vs regular hospital patients), and computer capabilities to identify and sort the populations of interest.
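To make those planning variables concrete, here is a minimal sketch of one common PBRTQC algorithm: an Exponentially Weighted Moving Average with truncation limits, a weighting factor, a target value, and control limits. All parameter values (a sodium-like analyte) are hypothetical illustrations chosen only to show how the pieces interact, not recommendations:

```python
# Minimal EWMA-based PBRTQC sketch. Parameter values are hypothetical
# illustrations for a sodium-like analyte, not recommendations.

TRUNCATION = (130.0, 150.0)      # exclude implausible results (mmol/L)
TARGET = 140.0                   # target block value (expected patient mean)
WEIGHT = 0.05                    # weighting factor: recent vs. past results
CONTROL_LIMITS = (138.5, 141.5)  # alarm thresholds around the target

def pbrtqc_ewma(results, target=TARGET, weight=WEIGHT,
                truncation=TRUNCATION, limits=CONTROL_LIMITS):
    """Yield (index, ewma, in_control) for each accepted patient result."""
    ewma = target                      # start the average at the target
    lo, hi = truncation
    for i, x in enumerate(results):
        if not lo <= x <= hi:          # apply truncation limits:
            continue                   # outlying patients are excluded
        ewma = weight * x + (1 - weight) * ewma
        in_control = limits[0] <= ewma <= limits[1]
        yield i, ewma, in_control
```

A sustained shift in patient results gradually pulls the EWMA past a control limit (with these numbers, a run of results at 145 mmol/L trips the upper limit after seven samples), while a single implausible result such as 200 mmol/L is simply excluded by the truncation limits. Every one of those behaviors depends on the parameter choices, which is exactly why the optimization described above is not trivial.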
Third, laboratories already have difficulties in implementing best practice recommendations for their "simple" QC. CLSI C24-Ed4 [6] provides a roadmap for laboratories to plan risk-based SQC strategies that define “the number of QC materials to measure, the number of QC results and QC rule to use at each QC event, and the frequency of QC events.” According to a recent survey of 21 US academic laboratories, however, laboratories predominantly employ 2 SD control limits with 2 QC materials per run, with run lengths varying widely from 2 hours to 24 hours, on their high-volume chemistry analyzers and automated immunoassay systems [7]. Such practices do not conform to the best practices recommended by C24-Ed4. Of the 21 laboratories surveyed, 3 expressed interest in implementing PBRTQC procedures, whereas 1 indicated such procedures had been tried and not found to be useful.
Fourth, the reality is that optimization of traditional SQC strategies is much simpler and more straightforward than it used to be. More efficient, effective QC procedures can be designed by defining the quality required for intended use in the form of an allowable Total Error (ATE, TEa), characterizing the precision and bias of the method, calculating the Sigma-metric as (%TEa - |%bias|)/%CV, then utilizing a graphical tool such as the Sigma Run Size Chart to identify the control rules, the number of control measurements per QC event, and the frequency of QC in terms of run size (number of patient samples between QC events) [8]. Nevertheless, this relatively uncomplicated guidance for good SQC practices is not yet applied in most laboratories around the world. Extrapolating from that to a far more technically demanding QC technique, it may not be realistic to expect implementation of PBRTQC on a large scale. PBRTQC is at least an order of magnitude more complicated than the Sigma-metric approach.
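The Sigma-metric calculation described above is simple enough to express in a few lines; the numeric values in this example are hypothetical, chosen only to show the arithmetic:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma-metric = (%TEa - |%bias|) / %CV."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical method: allowable total error 10%, bias 1%, CV 1.5%
sigma = sigma_metric(10.0, 1.0, 1.5)  # (10 - 1) / 1.5 = 6.0
```

A higher Sigma-metric means a simpler control rule and a longer run size can deliver the needed error detection, which is what a tool like the Sigma Run Size Chart makes visible at a glance.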
Fifth, using PBRTQC doesn’t automatically make the practice patient-focused. That can only happen if the patient's needs - for example, the quality required for intended use - are part of the planning and optimization of PBRTQC.
Sixth, calling it "Real Time" doesn’t necessarily make it so. By definition, a block of patient samples must be analyzed before the process can be known to be in control. That may be more real-time than traditional controls run every 8 or 24 hours, but there are much faster, truly real-time QC approaches. Even when a patient-based technique is implemented, traditional SQC is still needed at the start-up of automated systems, as well as for trouble-shooting out-of-control problems. So the use of traditional controls may be reduced, but seldom eliminated. PBRTQC can augment traditional QC, but it is unlikely to replace it completely.
Finally, everyone can agree that QC is not exactly at the top of today's laboratory priorities. In the US, the reality is that we are guided more by compliance with the regulations (or to put it less charitably, quality theater) than by standards for managing quality to satisfy the intended use of test results for patient care. If we don’t have the 15 to 30 minutes necessary to check for proper design of an SQC strategy, we may never have the days and weeks necessary for planning and optimizing a PBRTQC procedure.
A real world application
The November issue of the Journal of Applied Laboratory Medicine provides an article by van Rossum and van den Broek [5] describing a “Ten-month evaluation of the routine application of patient moving average (MA-QC) for real-time Quality Control in a hospital setting.” The study involved 10 chemistry and 6 hematological tests. According to the authors, “MA-QC was considered for tests when no (stable) control materials were available, tests were associated with pre-analytical instability issues, tests had a history of rapid onset critical error, tests had observed reagent or calibrator lot-to-lot variations or controls were associated with non-commutability issues, or tests had a sigma metric value of ≤ 4.”
The highest alarm rates were found for sodium, creatinine, potassium, and glucose. “The MA QC alarm workup included (a) running internal QC, (b) comparing 4 patients’ samples analyzed on the MA-QC system with results from the other analyzer, and (c) reviewing the recently analyzed samples and patients.”
Note that this application was selective for tests where MA-QC is expected to provide some benefits, e.g., low sigma tests such as sodium that are difficult to monitor by traditional SQC because of the demanding quality requirement for intended use. Note also that traditional SQC was still required for workup of QC alarms. In other words, this was not a replacement for QC, because van Rossum recognizes the complementary capabilities of traditional SQC and PBRTQC [9].
It should also be valuable to read the accompanying editorial on “How to implement patient-based quality control: Trial and error.” [10]
“This report makes clear that translating the theoretical performance of MA into real-time QC practice requires considerable development and conscientious implementation of policies, procedures, and tools. The MA rules themselves require validation and, potentially, the development or purchase of new information system tools... [T]he effort and cost of developing, implementing, and maintaining additional QC systems are substantial. As a result, it is paramount to balance the potential benefits of the system with its costs.”
“The implementation of patient-based QC to effectively improve quality requires highly automated and flexible tools that are well integrated into clinical workflows and prospective pragmatic trials of these tools in clinical practice. This report is a substantial step forward and highlights several remaining theoretical and practical challenges that will require future trial and error.”
What’s the point?
Trial and error may not be the best choice for your QC approach, particularly right now. Until practical tools are available to support the planning, optimization, and implementation of PBRTQC procedures, these approaches may not be as useful as they have been advertised. And even when practical tools become available, as they already have for traditional SQC strategies, laboratories will need to up their game and prioritize improvements in QC. That is the situation today in US laboratories. We can do better, we have the tools that are needed, we just need to get serious about planning and implementing SQC strategies that, in the words of ISO 15189, “verify the attainment of the intended quality of results.”
References
- Patient-Based Real Time QC. Clin Chem 2020; 66:1140-1145.
- Loh TP, Bietenbeck A, Cervinski MA, van Rossum HH, Katayev A, Badrick T. Recommendation for performance verification of patient-based real time quality control. Clin Chem Lab Med 2020; doi: 10.1515/cclm-2019-1024.
- Loh TP, Cervinski MA, Katayev A, Bietenbeck A, van Rossum HH, Badrick T. Recommendations for laboratory informatics specifications needed for the application of patient-based real time quality control. Clin Chim Acta 2019;495:625-629.
- Badrick T, Bietenbeck A, Cervinski MA, Katayev A, van Rossum HH, Loh TP. Patient-based real-time quality control: review and recommendations. Clin Chem 2019;65:962-971.
- Van Rossum HH, van den Broek D. Ten-month evaluation of the routine application of patient moving average for real-time quality control in a hospital setting. JALM 2020;5:1184-1193.
- Clinical Laboratory Standards Institute (CLSI) C24-Ed4 Statistical Quality Control for Quantitative Measurement Procedures: Principles and Definitions. Wayne, PA:CLSI;2016.
- Rosenbaum MW, Flood JG, Melanson SEF, et al. Quality control practices for chemistry and immunochemistry in a cohort of 21 large academic medical centers. Am J Clin Pathol 2018;150:96-104.
- Westgard JO, Westgard SA. Establishing evidence-based statistical quality control practices. Am J Clin Pathol 2019;151:364-370.
- Van Rossum HH. When internal quality control is insufficient or inefficient: Consider patient-based real-time quality control. Ann Clin Biochem 2020; doi:10.1177/0004563220912273
- Ng DP, Herman DS. How to implement patient-based quality control: Trial and error. JALM 2020;5:1153-1155.