Posted by Sten Westgard, MS
[Hat-tip to the AACC Point-of-Care listserv, which first posted a notice about this article]
The Pennsylvania Patient Safety Advisory has a regular electronic newsletter highlighting new science and studies about healthcare safety. Their December 2011 issue has a particularly interesting article for laboratory testing:
Point-of-Care Technology: Glucose Meter's Role in Patient Care, Lea Anne Gardner, PhD, RN, Senior Patient Safety Analyst, Pennsylvania Patient Safety Authority.
This review examined more than 1,300 reports of glucose-meter problems submitted to the Pennsylvania reporting system database from 2004 to 2011. Of those reports, 71 were near misses or adverse events, and 72% of those near misses or adverse events involved high blood glucose results, that is, cases where the glucose meter returned a sudden high value that may or may not have reflected the patient's actual clinical state. Most intriguing are the report excerpts quoted directly in the study. For example:
"A patient's blood sugar was checked using a [glucose meter]. The lunchtime result was 517. A [blood glucose test] was [immediately] retaken to check for accuracy, and the result was greater than 600. A blood [laboratory] test was conducted per protocol, and the [lab] glucose [result] was 136..."
What do you think happened next?
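For a sense of just how discordant those readings were, here is a minimal sketch that checks the meter results against an allowable-error criterion. The limits below follow the familiar CLIA glucose criterion (target value plus or minus 10% or 6 mg/dL, whichever is greater); using it here is an assumption for illustration, not part of the Authority's analysis.

```python
# A minimal sketch (not the Authority's analysis): compare each meter reading
# against the lab result using an assumed allowable-error criterion of
# +/- 10% or +/- 6 mg/dL, whichever is greater.

def allowable_error(lab_mg_dl):
    """Allowable total error around the comparative (lab) glucose result."""
    return max(6.0, 0.10 * lab_mg_dl)

def evaluate(meter_result, lab_result):
    error = meter_result - lab_result
    limit = allowable_error(lab_result)
    verdict = "within limits" if abs(error) <= limit else "OUTSIDE limits"
    print(f"meter {meter_result:>5.0f} vs lab {lab_result:.0f}: "
          f"error {error:+.0f} mg/dL (allowed +/-{limit:.0f}) -> {verdict}")

# The readings from the report excerpt, against the lab glucose of 136 mg/dL
evaluate(517, 136)   # error +381 mg/dL
evaluate(600, 136)   # ">600" treated as 600 for illustration
```

Both meter readings miss the laboratory result by hundreds of mg/dL, far beyond any allowable error.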
-----Posted by Sten Westgard, MS
Now that we know EQC will officially be phased out and labs will instead have to develop QC Plans through Risk Analysis (as explained in CLSI's new guideline EP23A), some of the waiting is over. EQC, which was fatally flawed from the start, is going to go away.
However, the exact regulations about QC Plans and Risk Analysis have yet to be written (or, at least, are not yet known by the general public). What makes this more uncertain is that EP23A is only meant as a guideline, and the Risk Analysis approach discussed in the guideline is only meant as a possible example. Risk Analysis is a long-established technique (outside the medical laboratory) and has many different formats and levels of complexity. Even between EP18 and EP23, there are discrepancies between the Risk Analysis recommendations (EP18 recommends a 4-category ranking of risk, while EP23 recommends a 5-category approach).
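To make that concrete, here is a minimal sketch of the kind of severity-by-probability ranking these guidelines describe. The five category labels and the acceptability cutoff below are illustrative assumptions, not the actual EP23A tables.

```python
# Illustrative severity-by-probability risk ranking (assumed labels and cutoff,
# not the EP23A tables). Each failure mode gets a 1-5 severity of harm and a
# 1-5 probability of occurrence; their product is used to judge acceptability.

SEVERITY = ["negligible", "minor", "serious", "critical", "catastrophic"]      # 1..5
PROBABILITY = ["improbable", "remote", "occasional", "probable", "frequent"]   # 1..5

def risk_score(severity, probability):
    """Combine 1-5 severity and 1-5 probability of harm into a single score."""
    return severity * probability

def acceptable(severity, probability, threshold=8):
    """Illustrative rule: scores above the threshold need added QC or mitigation."""
    return risk_score(severity, probability) <= threshold

# Example failure mode: an undetected calibration shift on a chemistry analyzer
sev, prob = 4, 2   # "critical" harm, "remote" probability (assumed rankings)
print(f"{SEVERITY[sev-1]} x {PROBABILITY[prob-1]}: "
      f"score {risk_score(sev, prob)}, acceptable = {acceptable(sev, prob)}")
```

An EP18-style approach would use four categories instead of five; the mechanics are the same, only the granularity of the ranking changes.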
So while we're waiting for the other shoe to drop (in the form of detailed regulations and accreditation guidelines governing Risk Analysis), we might as well talk about what questions those rules will have to answer...
-----Posted by Sten Westgard, MS
Here's an eye-opening report from the Office of Inspector General at the Department of Health and Human Services: Adverse Events in Hospitals: National Incidence Among Medicare Beneficiaries
So what's your best guess on the frequency of adverse events?
-----Posted by Sten Westgard, MS
A recent Clinical Laboratory Strategies article, Anchoring POC Quality in Clinical Decision-Making, and the related study, Novel analysis of clinically relevant diagnostic errors in point-of-care devices, KM Shermock, MB Streiff, BL Pinto, P Kraus, and PJ Pronovost (J Thromb Haemost 2011;9:1769-1775), make an interesting observation about the use of the correlation coefficient to accept method performance.
They looked at Hemochron POC devices, analyzing 1518 paired INRs. The correlation between the POC and laboratory measurements ranged between 0.84 and 0.91.
The authors stated, "Traditional, quarterly, quality assurance studies emphasize correlation analysis." So this study has good news, right?
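The catch is that a correlation coefficient measures association, not agreement: two methods can correlate strongly and still disagree by clinically important amounts. A minimal simulated illustration (the bias and scatter below are assumptions for the sake of the example, not the Hemochron data):

```python
# Simulated illustration: a POC method with a proportional bias still shows a
# high correlation with the laboratory method. The bias and noise are assumed
# values, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
lab_inr = rng.uniform(1.0, 4.5, size=1518)               # "laboratory" INRs
poc_inr = 1.25 * lab_inr + rng.normal(0, 0.55, 1518)     # proportional bias plus random noise

r = np.corrcoef(lab_inr, poc_inr)[0, 1]
diff = poc_inr - lab_inr
discordant = np.mean(np.abs(diff) > 0.5)                 # arbitrary "clinically important" cutoff

print(f"correlation r = {r:.2f}")                        # high r, looks reassuring
print(f"mean difference = {diff.mean():+.2f} INR units") # yet results run systematically high
print(f"fraction differing by more than 0.5 INR = {discordant:.0%}")
```

A high r says the two methods rise and fall together; it says nothing about whether individual results agree closely enough to guide dosing decisions.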
-----Posted by James O. Westgard, Sten A. Westgard
(Or, if only some surveys are based on accuracy, then what are the other surveys based on?)
Posted by Sten Westgard, MS
There's an article that appeared in the October 2010 issue of CAP Today that probably didn't get enough attention. It covers a subject that's been gnawing at us for a while:
Accuracy-based Surveys carve higher QA Profile, by Anne Paxton
For those of you who thought all proficiency testing was "accuracy-based", this article may give you a bit of a shock. In fact, most PT surveys - indeed most EQA programs and even peer-group programs - are not based on accuracy. Instead, those surveys are based only on "consensus."
What's the difference? What does it mean? And how did it come to be this way?
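The mechanical difference is easy to show with numbers: a consensus survey grades your result against the mean of your method peer group, while an accuracy-based survey grades it against a reference-method target value, so a bias shared by the whole peer group never shows up in consensus grading. A minimal sketch with made-up numbers:

```python
# Made-up illustration of consensus grading vs. accuracy-based grading.
# The reference value, peer results, and acceptance limit are all assumptions.
import statistics

reference_value = 100.0     # target assigned by a reference method (accuracy basis)
acceptance_limit = 0.10     # +/- 10% of the target value, illustrative

# A peer group whose method shares a calibration bias of roughly +15%
peer_results = [114.8, 115.6, 116.2, 115.0, 115.9]
peer_mean = statistics.mean(peer_results)

my_result = 116.0

def passes(result, target, limit=acceptance_limit):
    return abs(result - target) / target <= limit

print(f"consensus grading (target = peer mean {peer_mean:.1f}): "
      f"{'PASS' if passes(my_result, peer_mean) else 'FAIL'}")
print(f"accuracy grading  (target = reference {reference_value:.1f}): "
      f"{'PASS' if passes(my_result, reference_value) else 'FAIL'}")
```

The same result passes comfortably against its peers; only the accuracy-based comparison reveals the 16% bias.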
-----Posted by Sten Westgard, MS
Earlier, we posted an article on the website with a darkly humorous take on the passing of the CLSI EP22 guideline, which voted itself out of existence in late 2010. Other websites have also noted its passing.
But it's worthwhile to take a moment to discuss, in all seriousness, where we are with Risk Information, Risk Management, "Equivalent QC", and the CLIA Final Rules. How did we get here? What drove us to this state? Where are we going next?
-----We see the bailouts of the bankers and Wall Street. We see the cutbacks and austerity in Greece and Ireland as their governments struggle to make good on the debts run up by their out-of-control banks. Private risks made into public losses.
But is the laboratory immune from the problem of "Too Big to Fail"?
-----Posted by Sten Westgard, MS
The National Oil Spill Commission released a preliminary chapter of its report today. This is the commission charged with finding out what went wrong with the Deepwater Horizon / Macondo oil rig in the Gulf of Mexico, which blew up in 2010, killing 11 workers and spilling some 4 million barrels of oil.
Whenever there are big stories in the media, we like to take a look at them to see if we can learn anything, find any connection between the disaster and our own situation in the medical laboratory community. But from a distance, it's hard to see any similarities between oil rigs and labs, right?
Right?
-----Posted by Sten Westgard, MS
Is it time for a tighter quality requirement for glucose meters?
Fresh on the heels of Dr. George Klee's review of setting performance specifications, as well as the recent FDA public meeting on glucose meter quality, Dr. Brad Karon and Dr. George Klee of the Mayo Clinic, together with Dr. James C. Boyd of the University of Virginia, have published a study that used simulation modeling to determine performance criteria for glucose meters:
"Glucose Meter Performance Criteria for Tight Glycemic Control Estimated by Simulation Modeling", Brad S. Karon, James C. Boyd, and George G. Klee, Clinical Chemistry 2010;56(7):1091-1097
-----