College of American Pathologists

In send-out snafus, it’s right place, wrong kind


CAP Today



April 2007

Feature Story

Anne Ford

Until recently, how accurately institutions order and route send-out tests was more or less anyone’s guess. And since mistakes in send-out testing generally cost laboratories much more than do errors in routine testing, that guess was a potentially expensive one.

“When laboratory staff incorrectly order a locally performed test, the laboratory loses the cost of reagents, but most other expenses are fixed,” says Paul N. Valenstein, MD, president of Pathology and Laboratory Associates, Ann Arbor, Mich., and chair of the CAP Quality Practices Committee. “In contrast, when staff incorrectly order a send-out test, the ordering laboratory generally receives a bill for the full cost of the test from the reference laboratory.”

More succinctly put: “The financial implications…are not trivial.” So wrote Dr. Valenstein and coauthor Ana K. Stankovic, MD, PhD, MSPH, vice president and medical and clinical director for BD Diagnostics’ preanalytical systems, in the recently released results of the CAP Q-Probes study, “Send-Out Test Order Accuracy.”

In the study, participants from 97 institutions reviewed send-out test orders and retrospectively identified errors in order entry or tests that had been routed to the wrong reference laboratory. The study defined “send-out test” as any test sent to a separate CLIA-licensed laboratory that did not use the same computer system as the referring institution. Each participant reviewed at least 50 send-out test orders (with a maximum of five per day). In all, 17,904 send-out orders were reviewed.

Nearly all of the participating institutions were in the United States, and 85 percent of them had been inspected by the CAP within the past two years. Teaching hospitals represented about 34 percent of participants, while 19 percent of participants had a pathology residency program. Slightly more than 88 percent transmitted orders to reference laboratories electronically, with the remainder using computer printouts or handwritten requisitions. Only about 43 percent of participants reported that they routinely (that is, at least quarterly) monitor their send-out test order entry error rate.

First, the good news: The study found that 99.4 percent of the time, participants routed send-out tests to the correct reference laboratory. The findings regarding order entry errors, however, “were more worrisome,” Dr. Valenstein says.

Before conducting the study, he says, he and Dr. Stankovic “speculated that order entry for send-out tests might be less accurate than orders for routine tests, because order entry staff are likely to be less familiar with the names of esoteric tests and because there are so many different esoteric tests to choose from. And that is exactly what we found.” At the median institution, the order entry error rate for send-out tests was two percent, about twice the rate for routine tests.

Not only that, but the error rate varied widely from laboratory to laboratory. “The bottom fourth of laboratories made order entry errors on more than five percent of send-out tests, while the top fourth of laboratories made order entry errors on fewer than 0.3 percent of tests—a 10-fold difference,” the study says.

Some of that variation is simply the result of sampling. “Even if every laboratory had the same error rate, there would be some variation because staff at each laboratory only looked for errors in a sample of send-out orders,” Dr. Valenstein explains. Still, the effects of sampling cannot explain all of the variation among laboratories. Simply put, “Some laboratories order esoteric send-out tests much more accurately than others,” he says.

Why is that? The study authors can’t say. “We would have liked to identify factors that were associated with better performance, because those associations would have allowed us to make data-driven recommendations about how to reduce order entry error rates,” Dr. Valenstein says. “But we didn’t find any institutional practices that were significantly associated with lower order entry error rates. Studies such as this one have only moderate statistical power to detect associations. Unless an effect is large, we can miss it.”

Among the factors the study was unable to link to error rate were training of staff who handle send-out orders, the presence or absence of an electronic interface with the reference laboratory, and the inclusion of esoteric tests in the primary laboratory’s test catalog. However, the authors were surprised to find that one factor was associated with an increase in laboratories’ send-out test error rate: the use of specific, rather than miscellaneous, test codes.

“When a send-out test was ordered with a ‘miscellaneous’ test code—a code used for a variety of send-out tests—there was a 3.9 percent chance that an order entry error would occur. In contrast, 5.6 percent of tests ordered with a ‘specific’ test code—a test code reserved for a particular send-out test—had order entry errors,” says Dr. Valenstein.

“This finding surprised us. We had expected to find exactly the opposite relationship,” he adds.

Dr. Valenstein speculates that front-line order entry staff, who must interpret physician orders and select the correct test code in the computer, aren’t familiar with the names of many esoteric tests. This is especially problematic if the physician hasn’t clearly indicated the esoteric test he or she wants.

“In contrast, when a miscellaneous test code is ordered by front-line staff, along with the text the physician used to describe the test, order entry of the specific test code is performed down the line in a send-out area or at the reference laboratory itself, where staff are more familiar with esoteric tests,” Dr. Valenstein says. “These individuals are likely to make fewer errors. And we know from earlier studies that nonphysicians often modify physician orders to make them better. I suspect this sort of modification is going on when experienced order entry staff order specific test codes for esoteric tests.”

In Dr. Valenstein’s view, the best environment for reducing send-out test order entry errors is one where physicians order esoteric tests directly into a computer from a list that includes specific test codes accompanied by information about the indications and limitations for each test. “But this sort of environment is not present in most health care settings,” he says. Failing that, “ordering a miscellaneous test code with a free text description may, paradoxically, lead to fewer errors than forcing a relatively inexperienced clerk to choose from a long list of rarely ordered specific test codes,” he says.

The study recommends that laboratories with order entry error rates for send-out tests of more than three percent consider improving the education and training of their order entry staff, check all send-out test orders, and educate providers to identify orders with greater specificity. Suggested additional reading includes “Accuracy of Outpatient Order Entry” (Valenstein P, Meier F. Arch Pathol Lab Med. 1999;123:1145–1150) and “Alteration of Physician Orders by Non-physicians” (Finn AF, Valenstein PN, Burke MD. JAMA. 1988;259:2549–2552).

Since, as far as Dr. Valenstein knows, this is the first time anyone has studied error rates in send-out test ordering, “this study will have to be repeated down the road to get a sense of whether order entry errors for send-out tests are getting more or less frequent,” he says.

Meanwhile, this Q-Probe’s counterintuitive finding regarding specific and miscellaneous test codes leaves him somewhat dismayed. “I must confess to wishing that the study results hadn’t come out this way,” he says, “because I have been a champion of using specific test codes at the institutions I serve.” But, he concludes, “when you make an error, it is best to come to terms with it. Humility never killed anyone.”

Anne Ford is a writer in Chicago.