Q. Is it appropriate in a hospital health system—where the same instruments are used throughout—to pool the quality control data and use a single common set of statistics (mean and standard deviation [SD]) to define the range for all the instruments? Or should the mean and SD range be calculated and monitored individually for each instrument?
A. Quality control protocols are designed to evaluate the performance of a single analytical system, with the assumption that any variation in output is due to an expected amount of random error in the system. Ideally, multiple identical analyzers would perform as a single analytical system, so quality control data from these instruments could be aggregated to produce an overall set of statistics valid for each analyzer. In practice, however, no two analyzers are identical. The age of parts, reagents, and calibration curves often differs between instruments, which affects each instrument’s total error in output. This increases the variability of the aggregated data. Quality control statistics established from such data will yield a mean that likely represents no one instrument and a standard deviation that is too large, because it combines the random error of each analyzer with the biases between the instruments. The shifted mean will tend to increase the frequency of apparent systematic errors on individual analyzers, while the inflated standard deviation will tend to mask random error.
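The effect described above can be illustrated numerically. The following sketch uses hypothetical quality control results from three analyzers running the same control material; the values, analyzer names, and magnitudes of bias are illustrative assumptions, not data from any real instrument.

```python
import statistics

# Hypothetical QC results (same control material) from three analyzers.
# Each analyzer has comparable random error (SD well under 1 unit),
# but analyzers B and C are biased high and low, respectively.
analyzer_a = [99.8, 100.5, 99.2, 100.9, 100.1, 99.6]
analyzer_b = [103.1, 102.4, 103.8, 102.9, 103.5, 102.7]
analyzer_c = [97.2, 96.5, 97.9, 96.8, 97.4, 96.9]

pooled = analyzer_a + analyzer_b + analyzer_c

for name, data in [("A", analyzer_a), ("B", analyzer_b), ("C", analyzer_c)]:
    print(f"Analyzer {name}: mean={statistics.mean(data):.1f}, "
          f"SD={statistics.stdev(data):.2f}")

# The pooled SD absorbs the between-analyzer biases, so it is far larger
# than any single analyzer's SD, and the pooled mean matches none of the
# three instruments.
print(f"Pooled:     mean={statistics.mean(pooled):.1f}, "
      f"SD={statistics.stdev(pooled):.2f}")
```

Run against these sample values, the pooled SD is several times larger than any individual analyzer's SD, so a control range built from it would mask random error on every instrument, while the pooled mean sits between the biased analyzers and represents none of them.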
The hope may be that, over time, the analyzers will be brought closer to each other by using the common mean and SD range. However, each analyzer changes over time, relative to its prior performance and relative to the other analyzers in the aggregate. The performance of each analyzer as measured by control material is a constantly changing target, and pooling these values will increase the variation seen in the control ranges.
In a quality control system for multiple similar analyzers, the statistics should be evaluated for each individual analyzer. However, a higher-level review should also be performed to compare the instruments, recognizing that a change in quality control statistics may not reflect a change in the behavior of patient samples because of matrix differences in the control material. Nonetheless, any analyzer that shows a greater standard deviation, or a bias in its mean, when compared with similar instruments deserves attention. It may be necessary to compare fresh patient samples, if possible, to determine whether the discrepancy is clinically significant.
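A higher-level peer review of this kind can be automated as a simple screen. The sketch below compares each analyzer's monthly QC summary against its peer group; the analyzer names, summary values, and flagging thresholds (2 SDs of the peer means for bias, 1.5× the median peer SD for imprecision) are illustrative assumptions and would need to be set locally.

```python
import statistics

# Hypothetical monthly QC summaries (mean, SD) per analyzer for one analyte.
qc_stats = {
    "analyzer_1": (100.1, 0.60),
    "analyzer_2": (100.3, 0.58),
    "analyzer_3": (102.9, 1.40),  # biased high and noisier than its peers
}

means = [m for m, _ in qc_stats.values()]
sds = [s for _, s in qc_stats.values()]
group_mean = statistics.mean(means)
mean_spread = statistics.stdev(means)
median_sd = statistics.median(sds)

# Hypothetical review rule: flag an analyzer whose mean deviates from the
# group mean by more than 2 SDs of the peer means, or whose SD exceeds
# 1.5x the median peer SD.
flagged = [
    name
    for name, (m, s) in qc_stats.items()
    if abs(m - group_mean) > 2 * mean_spread or s > 1.5 * median_sd
]

print("Analyzers needing review:", flagged)
```

A flag from such a screen is only a prompt for attention, not proof of a problem: as noted above, a discrepancy seen in control material may not carry over to patient samples, so comparison of fresh patient specimens may still be needed to judge clinical significance.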
William J. Castellani, MD
Penn State Hershey Medical Center
Department of Pathology
Past Chair, CAP Instrumentation
Q. Our laboratory uses biohazard bags to transport specimens from outpatient clinics via courier runs. The bags typically contain slides in holders, blood tubes, and urine specimens. If there is no visible contamination, the bags are reused to transport new specimens. Our infection control nurse has advised against this. Is this practice acceptable?
A. I can find no law or regulation forbidding reuse of transport bags. Nor can I find any report of infections related to their reuse. Furthermore, there is no CAP checklist question that specifically addresses reuse of the patient specimen transport bag.
However, I agree with your infection control nurse. The biohazard symbol should trigger an automatic response to take precautions and to dispose of items safely in a biohazard container. I fear that allowing individuals to violate that standard way of handling items labeled biohazard will undermine that teaching.
Looking for visible contamination is not foolproof because some body fluids are colorless or nearly colorless. You could argue that not finding evidence of a spill does not eliminate the possibility of contamination.
You also could argue that the bags should be reused for financial reasons or to decrease the quantity of biohazardous trash. While I recognize the validity of these arguments, I don’t believe they outweigh the need for infection control precautions.
Jerry L. Harris, MD
KWB Pathology Associates
Member, CAP Safety Committee