
With PT Failures, Spinning Straw Into Gold

May 2005
Originally published in CAP TODAY

Desiree A. Carlson, MD, Editor, Laboratory Accreditation News

When that proficiency testing survey scorecard arrives with some results marked "unacceptable," you might be inclined to swallow hard and forge ahead, resolved to do better next time. Despite laboratories' best efforts, proficiency testing failures occur regularly in the CAP proficiency testing programs as well as those of other PT providers.

But laboratories shouldn't forget to tap the "hidden assets" of a failing grade, says Gerald A. Hoeltge, MD, the CAP Laboratory Accreditation Program special commissioner for nonroutine processes. As the presenter at a CAP teleconference, "What To Do with PT Failures," he detailed how proficiency testing failures can be transformed into improvement strategies. Even though PT failures are unwelcome, Dr. Hoeltge says, they can still be of value to the laboratory.

The purpose of the teleconference (available in audio format at lapaudioac_041404.html) was to make CAP inspectors more aware of how best to use the data in PT evaluation reports and, at the same time, to show laboratories how to put those data to the most productive use, Dr. Hoeltge says.

He defines a PT failure as any deviation in proficiency testing from external requirements or personal expectations that requires investigation or follow-up. "That definition applies both to approved and to alternative PT programs."

To illustrate, he refers to hypothetical examples—based on real-life cases identified by CAP inspectors and Centers for Medicare and Medicaid Services investigators—where there might be an obvious deviation but laboratory managers may not be sure they have a PT failure on their hands.

"Let's say you're a manager and you have great techs and supervisors, but you decide to do PT rounds. You look at a slide at your first stop on PT rounds, and the worksheets and instrument printouts indicate the PT samples are being tested four times each."

"Of course, patient samples at that workstation are tested only once. Is that a PT failure? Yes."

"At another workstation, everything looks fine and the survey scorecard said PT was successful, but the records for three recent proficiency testing events can't be found." That's also a failure, Dr. Hoeltge explains, because under both regulatory and accreditation requirements, testing records must be retained for at least two years, and for five years in transfusion medicine.

PT failures are caused mostly by technical errors, followed by methodological and clerical errors, and occasionally by survey material errors, he says. Technical errors include mechanical or calibration problems and instruments that have failed or aren't in good repair.

Methodological errors would include things wrong with the procedure itself or operator failure. Clerical errors can occur preanalytically or postanalytically and refer to mistakes in the manual transfer of information from one medium to another.

"From the accreditation provider's point of view," Dr. Hoeltge says, "the PT data are fed back to participants so they can use them in their own laboratory for quality improvement purposes. The accreditation provider expects that quality improvement activity to be well documented."

"So for every result that comes back that's 'unacceptable,' and every result that's 'ungraded' because of some special circumstance, the laboratory needs to document how it thought through those issues."

Laboratories accredited under the College's Laboratory Accreditation Program aren't required to use CAP's Surveys program, Dr. Hoeltge says. "But for those choosing to use the CAP's program, if there is a PT failure, the lab can use the code on the evaluation report as a guide for its follow-up actions."

External requirements include not just those defined by regulatory agencies and accrediting bodies, but also those defined by the laboratory for itself, Dr. Hoeltge says.

"Say at another workstation, you find there's no formal PT but there's alternative PT of a particular analyte at this workstation, and all the results are outside the range you define for yourself as acceptable. CLIA's not going to know about it, but that's still a PT failure."

Or take the case of a coagulation workstation that has proficiency testing results for prothrombin time measured in seconds, and they're fine, but the INRs are consistently unacceptable. "Is that a PT failure?"

"Some might say no because INR is a calculated result; it's not an analyte. But we all know INRs are important to the management of patients. So it is a PT failure," Dr. Hoeltge says, "even though most calculated results that are derived from primary measurements are excluded from the LAP requirements for formal PT."

One of the choices that a laboratory responding to a proficiency testing report can make is "No explanation." That means, Dr. Hoeltge says, "we've investigated and can't figure it out. We think it's random error. And some of the results are random error. But if a laboratory keeps using that same explanation time after time, it is ignoring the underlying problem."

Under CAP's accreditation program requirements, "PT events must be regular and designed to detect trends, and the LAP must monitor the events to identify whether intervention is needed when laboratory performance slips between on-site visits," he says.

Even "acceptable" results can lead to improvement, if the laboratory looks beneath the surface. "Most laboratories could do a better job of using their own data for their own improvement purposes," Dr. Hoeltge says.

"There are clues in the PT results that are going to be overlooked unless somebody sits down and says, 'What can I learn from this data beyond what the PT provider is telling me?'

Just because the PT provider has not flagged results as "unacceptable" does not mean a manager can ignore the possibility of systematic bias. Consider, for example, a laboratory with a whole series of "acceptable" results that nevertheless all fall on the same side of the mean, each just a little higher (or lower) than everyone else's but still close enough to the average to earn an "acceptable" score.

"In an event in which there are five challenges, having them all fall on the same side of the mean is only going to happen once in 32 mailings by chance alone," Dr. Hoeltge says. "And the need for investigation is even more obvious when the bias persists over two mailings."

"You still have a problem. It may be that all your results are a little different because you have a methodological bias. In fact, it's only by interlaboratory comparison through PT that you can ever detect that bias.

"Most of us look to see if the results are 'acceptable,' and if they are, we're happy. But in cases of bias or a trend toward bias, our PT is telling us something we can use proactively before we ever start getting unacceptable results."

When reviewing one of the PT evaluation reports, laboratory directors should have data sources on hand, Dr. Hoeltge says—possibly the quality control results for that month, or the schedule for calibrating the instruments on which the results are based.

"You might want to have the original PT submission forms so you can look quickly for clerical errors in interpreting the reports," he adds.

He also recommends asking staff to describe their procedure for managing a PT sample in their work area, to see if there was in fact a possibility for handling errors. NCCLS document GP-27A can help with this part of laboratory improvement.

A useful tool produced by the CAP Laboratory Accreditation Program is the proficiency testing exception summary, or PTES, a report printed automatically when unacceptable results for an analyte cluster in sufficient numbers.

"It's a tool the LAP uses to help monitor laboratory performance between on-site inspections," Dr. Hoeltge explains. "The computer spits out 20,000 PTES reports in the course of a Surveys year, so don't be surprised or alarmed if you get a PTES letter.

"After all, by the time a letter arrives you'll have already investigated the issue and have done what needs to be done, so responding to a PTES letter is actually going to be quite straightforward for you."

Sometimes a routinely graded PT challenge is not graded, he notes. How should the laboratory manager respond? A series of codes on the PTES report flags a reason for ungraded exceptions.

For example, Code 26, Educational Challenge, means the laboratory manager will want to document a review of all statistics provided by the PT provider and, if intervention was needed, to document that corrective action was taken.

"Or Code 33—that means the provider was told the specimen was unsuitable and the College was unable to provide a replacement sample."

CLIA defines "unsatisfactory" and "unacceptable" results, and the CAP uses these terms in exactly the same way as CLIA. But other definitions are unique to the CAP and refer to clusters of failures in nonregulatory PT.

For example, "If it says 'critical' on a PTES letter, you can believe there's been some evidence of systematic problems in, say, three out of four testing events," Dr. Hoeltge says. "Insufficient" is another CAP-defined term. It's used for unregulated analytes in the same way CLIA uses the term "unsatisfactory" for regulated analytes.

He recommends that laboratories that have to investigate a PT failure work through a checklist of possible causes. Based on laboratories' responses to PTES letters, he suggests: "First, check your sample preparation, testing, and reporting records. Review the participants' report that came with the evaluation report.

"Then review the QC, the instrument maintenance data, and reagent performance records for that test before, during, and after the original analysis of that PT. If it doesn't work, you may try switching to a different reagent lot or different instrument and see if that helps.

"You may find the material was processed in the wrong instrument model, or you may end up having to contact the manufacturer of the reagent kit for assistance in sorting through possible causes."

Only after ruling out all possible causes should laboratories retest the sample, he stresses. "Too often, a laboratory retests the sample hoping that if it turns out okay, the problem is solved. But that only delays identification of any systematic source of error."

Laboratories that respond to PTES letters say that two-thirds of the time technical or methodological problems are at the root of the issue. "Most of the other ones are in fact clerical problems that can be easily identified and worked through."

Laboratories often think of PT as being part of a quality control program. But it really isn't, Dr. Hoeltge emphasizes. "PT is a terrible way to do QC. But it's an excellent way to do interlaboratory comparison, to see how you compare to other people using the same methodology. It's the best tool there is for that purpose."

 
