
  Letters

CAP TODAY

July 2006

• Laboratory error

Regarding Laura Landro’s Wall Street Journal article of June 14 ("Hospitals Move To Cut Dangerous Lab Errors"), I have to voice my concerns. Specifically, Stephen Raab, MD, is quoted as saying, "Tests fail because things can go wrong at every step of the process, and there are no checks and balances in place in pathology to catch these errors." To me, Dr. Raab’s comments seem disingenuous. There are checks and balances throughout most laboratories, in both clinical and anatomic pathology. We have a quality assurance plan directed at the clinical labs and one directed at anatomic pathology. We check for preanalytic, analytic, and postanalytic problems. Maybe these measures are inadequate—this is a topic worthy of study. Maybe there are better methods. But the blithe assertion that there are "no checks and balances in place in pathology" is simply ludicrous.

To take an example from anatomic pathology: We have all first-time cancers examined by a second pathologist. Many institutions do this. Is that enough? No, but of course we have many other quality assurance measures in place. Are all my measures enough? Perhaps not, but it’s a far cry from nothing. Clinical pathology is simply filled with quality assurance measures—delta checks, for example, flag results that may indicate mislabeled specimens.
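As a purely illustrative sketch of how a delta check works, the snippet below compares a new result with the patient’s previous one and flags any change that exceeds an absolute or percentage limit. The analyte, limits, and function name are assumptions for illustration, not any particular laboratory system’s rules.

```python
def delta_check(current, previous, abs_limit, pct_limit):
    """Flag a result whose change from the prior value exceeds either limit.

    A large, unexplained jump between consecutive results can indicate a
    mislabeled or swapped specimen and prompts review before reporting.
    """
    if previous is None:
        return False  # no prior result to compare against
    delta = abs(current - previous)
    pct_change = (delta / abs(previous) * 100) if previous else float("inf")
    return delta > abs_limit or pct_change > pct_limit

# Hypothetical example: a potassium result jumping from 4.1 to 6.8 mmol/L
print(delta_check(current=6.8, previous=4.1, abs_limit=1.0, pct_limit=25))  # True
```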

The famous comment about one percent of planes crashing makes my blood boil. A crash is more like intending to amputate a gangrenous foot but cutting off the patient’s head in error. It doesn’t happen often. Maybe missing a treatable carcinoma is similar. But that is not the basis for this oft-cited, amusing-but-false analogy: any problem from preanalytic to postanalytic, even minor variations and disagreements, is often counted as an "error."

Let’s apply similar logic to the airlines: Not every mistake the airlines make results in a crash. According to USA Today, the airlines lost 10,000 bags per day, or 3.5 million bags a year. Do we lose 3.5 million surgical specimens each year? I think not. What about delayed flights? Analogous to delayed diagnosis? According to the Department of Transportation, on-time arrival rates were 77.6 percent in February, up slightly from February 2004’s 77.5 percent and well above January 2005’s 71.4 percent. Are pathology reports more than 77 percent on time? The CAP standards are substantially higher.

Don’t get me wrong; there are plenty of opportunities for improvement. But the assertions as published in the Wall Street Journal shouldn’t go unchallenged. Unfortunately, the damage to pathology’s reputation is done. Any response will smack of answers to the question, "Do you still beat your wife?"

Jack Garon, MD
Mt. Sinai Hospital Medical Center
Chicago

• Proficiency testing and statistics

Jonathan Hughes, MD, PhD, Nancy Young, MD, and David Wilbur, MD, are correct ("2005 Regulatory PT Results: What Do They Really Mean?" May 2006). There can be little doubt that some extremely well-qualified cytotechnologists and pathologists will fail the cytology proficiency test if it is based on 10 test slides. Indeed, the probabilities of failure can be calculated accurately using the binomial distribution (Nagy GK, Collins DN. Acta Cytologica. 1991;35:3-7). In the 37-year history of the New York State cytology proficiency test, we have seen failures by internationally known cytopathologists whose expertise was absolutely beyond doubt. Proficiency testing is a heavily statistical subject (Crocker L, Algina J. Introduction to Classical and Modern Test Theory. New York: Holt, Rinehart and Winston; 1986). If this fact is disregarded, a scientifically sound cytology proficiency test will never be available. Adjusting superficial aspects of the cytology proficiency test, such as the scoring grids and the methods used to validate the slides, will only marginally improve the test’s validity and reliability. A more thorough overhaul of the system, based on rational statistical principles, is needed. Lip service is frequently paid to the importance of statistics in medical science, but the use of inferential, rather than merely descriptive, statistics in the practice of anatomic pathology has generally remained wishful thinking. Now, as the cytopathology community attempts to introduce a highly accurate system of cytology proficiency testing, attention to statistical principles is more important than ever.
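To illustrate the binomial point with a simple sketch, the calculation below shows how often a highly competent examinee fails a 10-slide test. The per-slide accuracy and the pass mark are assumptions chosen for illustration; they are not the actual CMS scoring grid, which weights different error types differently.

```python
from math import comb

def prob_fail(n_slides=10, p_correct=0.95, min_correct=9):
    """Chance of failing under a simple binomial model.

    Assumes each slide is independently read correctly with probability
    p_correct and that fewer than min_correct correct readings is a failure.
    These parameters are illustrative, not the real CMS scoring rules.
    """
    p_pass = sum(
        comb(n_slides, k) * p_correct**k * (1 - p_correct) ** (n_slides - k)
        for k in range(min_correct, n_slides + 1)
    )
    return 1 - p_pass

# An examinee who reads 95 percent of slides correctly still fails this
# simplified 10-slide test about 8.6 percent of the time.
print(round(prob_fail(), 3))
```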

The importance of statistical insight for a rational cytology proficiency test can be demonstrated with an example. Data from the National Cytology Proficiency Testing Update [Cheryl Wiseman, MPH, CT(ASCP), of CMS, published online Feb. 8, 2006] show that as of Jan. 31, 2006, nine percent of 12,786 examinees failed the test on their first attempt. On the second attempt, the failure rate among those who had failed initially remained surprisingly similar, at 10 percent, though common sense would dictate a much higher rate among examinees who had already failed once and therefore supposedly have lower professional skills. Yet the passing rate on the second attempt is virtually identical to the passing rate of all participants on the first attempt.

It would be virtually impossible to conclude that the huge decrease in the failure rate, from 100 percent to 10 percent, is attributable to a vast improvement in the skills of the "failed" cytologists during the few weeks that elapsed between the two tests. There is a far simpler explanation: What we are seeing is a statistical phenomenon known as "regression toward the mean," first described by Sir Francis Galton in 1877. There are two groups of examinees who earn failing scores during proficiency testing: those whose skills are genuinely insufficient and those who are competent but achieve low scores because of random variation in the test results, as Drs. Hughes, Young, and Wilbur assume. The latter "misclassified" examinees subsequently regress toward the mean during the second test; that is, their test results become more commensurate with their genuine skills. Since the failure rates of the participants during the first and second attempts are so similar, we have to infer that the majority of the failed examinees fall into the second, misclassified group, and only a minority have truly insufficient skills. The high frequency of misclassification, which is not only theoretically plausible but also supported by Cheryl Wiseman’s data, demonstrates the inherent weakness of a "short" proficiency test based on a small number of test slides: a high misclassification rate. A long, board-examination-type test would therefore be far more effective in assessing competence than the federally mandated short test.
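The regression-toward-the-mean argument can be made concrete with a small simulation. The figures below are assumptions chosen only to show the mechanism (a small fraction of genuinely deficient examinees, and a chance failure rate for competent readers borrowed from the simplified binomial sketch above); they are not estimates derived from the CMS data.

```python
import random

def simulate(n=12_786, frac_deficient=0.005,
             p_fail_competent=0.086, p_fail_deficient=0.60, seed=1):
    """Toy model of a short proficiency test taken twice.

    All parameters are illustrative assumptions, not measured values.
    """
    random.seed(seed)
    examinees = [random.random() < frac_deficient for _ in range(n)]

    def fails(deficient):
        p = p_fail_deficient if deficient else p_fail_competent
        return random.random() < p

    first_failures = [d for d in examinees if fails(d)]
    retake_failures = [d for d in first_failures if fails(d)]
    print(f"first-attempt failure rate: {len(first_failures) / n:.1%}")
    print(f"retake failure rate among first-attempt failures: "
          f"{len(retake_failures) / len(first_failures):.1%}")

simulate()
# Because most first-attempt failures are competent readers who were unlucky,
# the retake failure rate stays close to the overall first-attempt rate,
# much as in the CMS figures quoted above.
```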

George K. Nagy, MD
Cytopathology Laboratory
Wadsworth Center
New York State Department of Health
Albany

• Molecular testing

In this age of genomics, microbiologists who have been in the profession for many years have to heed and act on the message of the bestselling book Who Moved My Cheese? by Spencer Johnson, MD. DNA technology is here to stay. The dramatic and revolutionary shift from phenotypic to genotypic methods was evident at the 2006 meeting of the American Society for Microbiology. Since all areas of the laboratory will eventually have to deal with some form of molecular testing, analysis, or reporting, or all three, we have found that the best approach is to familiarize ourselves with basic molecular principles through classes at local colleges. We cannot blame our employers or the circumstances that have forced these changes upon us. We have to be proactive, and the time to act is now.

Arthur P. Guruswamy, BS, SM(ASCP)SLS
Jody C. Noe, MS, MT(ASCP)SH
Richmond, Va.