
Punching a hole in specimen ID errors

CAP Today
June 2008
Feature Story

Anne Paxton

On a list of prime candidates for error reduction, a health care procedure that is performed a billion times a year should be near the top. That’s about how often blood draws are done each year in the United States, requiring that hospital laboratories continually shore up their defenses against patient safety lapses in phlebotomy.

Deciding whether blood collection errors are caused by management, phlebotomist training, laboratory procedures, information technology, or just plain human nature has become a somewhat academic question. Health care facilities have been tackling the problem by focusing on what works, and now some hospital laboratories are racking up impressive gains in patient safety.

There’s no one magic answer, they say, but several measures are succeeding in bringing error rates down.

Until recently, the extent of the problem with correct patient identification in phlebotomy wasn’t known, even though the Joint Commission on Accreditation of Healthcare Organizations made accurate patient ID a lab patient safety goal and the CAP Quality Practices Committee makes it clear that accurate specimen ID is critical to quality care.

But a 2007 CAP Q-Probes report from the committee has stepped into the gap. This Q-Probes, “Specimen Labeling Errors,” breaks new and valuable ground by providing a baseline figure. Reviewing 3.4 million specimens collected at 147 institutions, the authors of the study identified 3,043 labeling errors. They conclude that the median specimen labeling error rate of U.S. laboratories is 1.31 per 1,000 labels.

“For the first time, we’ve arrived at something of a consensus as to how frequently specimen identification errors occur out there in the community,” says lead study author Elizabeth Wagar, MD, director of clinical laboratories at the University of California at Los Angeles. “We didn’t even have that kind of baseline information before. We’ve had wristband error review at the bedside, but this was the first study where we’ve looked at specimens received in the laboratory.”

Now, the committee recommends, laboratories can review their specimen error rate relative to the median or 50th percentile. The broad range of error rates reported—from 0.22 errors per 1,000 labels at the 90th percentile to 52.3 errors per 1,000 labels at the 10th percentile—suggests that many laboratories will find the comparison useful.
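For a laboratory that wants to make that comparison with its own monitoring data, the arithmetic is simple: divide labeling errors by labels issued and scale to 1,000. The short Python sketch below is illustrative only; the error and label counts are hypothetical, and the benchmark figures are the ones quoted above from the Q-Probes report.

# Illustrative only: place a laboratory's own error rate against the
# Q-Probes benchmarks quoted above. The counts are hypothetical.
errors_found = 412          # labeling errors logged during the monitoring period (hypothetical)
labels_issued = 310_000     # specimen labels produced in the same period (hypothetical)

rate_per_1000 = errors_found / labels_issued * 1000
print(f"Error rate: {rate_per_1000:.2f} per 1,000 labels")

MEDIAN_RATE = 1.31          # 50th percentile reported in the Q-Probes study
BEST_DECILE = 0.22          # 90th percentile (best-performing laboratories)

if rate_per_1000 <= BEST_DECILE:
    print("At or better than the best-performing decile")
elif rate_per_1000 <= MEDIAN_RATE:
    print("Better than the reported median")
else:
    print("Above the reported median; a candidate for an ongoing quality monitor")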

The study’s exceptionally large sample of 3.4 million specimen labels was necessary to evaluate five types of error, Dr. Wagar explains: mislabeled specimens, unlabeled specimens, partially labeled specimens, specimens with incomplete labels, and illegible labels. “We needed enough to get a feel for the general prevalence of specimen ID errors.”

It’s always hard to be sure that a Q-Probes sample is representative of laboratories nationwide, Dr. Wagar says. “These are voluntary studies, and there is a bias toward those institutions that may be more interested in quality. But we always collect the demographic data, and this sample looks very representative.”

As to specific measures to stem errors, the Q-Probes study’s chief conclusion was that lower rates of specimen labeling errors occurred in two settings: laboratories that have current, ongoing quality monitors for specimen identification, and institutions that have 24/7 phlebotomy services for inpatients.

“Now we have evidence that a certain kind of phlebotomy service and a certain kind of quality program make a difference. And that’s an important piece to communicate to hospital administrators and others, especially when hospitals don’t want to spend the money on programs like that.”

But some laboratories are finding that other innovative strategies are making a dent in specimen collection error rates as well. The reengineering of phlebotomy procedures at Alegent Health in Omaha, Neb., is an example.

Alegent is a six-hospital system that employs 111 phlebotomists and draws 400,000 blood specimens per year. Since 2003, starting with the reference laboratory and moving to the hospitals, it has successfully employed Lean-Six Sigma business process reengineering concepts to bring down its specimen identification error rates and to improve productivity and turnaround time, says phlebotomy manager Sandy Prososki, PBT(ASCP).

Standardized work is one of the key Lean concepts on which Alegent relied. “Before we started this project, everyone had their own trays and used their own supplies. But we standardized both so that every tray at every site is the same. The two Alegent sites where arterial blood gases are performed are slightly different. But you can walk into any hospital site, pick up a tray, and it will be organized in the same way with the exact same supplies. So it’s not ‘Sandy’s tray’ anymore. It’s tray 1, 2, 3, 4, 5, and they’re all the same.”

Another significant change in operations was the elimination of batch processing. The laboratory took the Lean concept of single-piece flow, or “First In, First Out,” and implemented it as “Draw One, Tube One.” In the past, “we would draw by floor, and the phlebotomists would draw all of one certain section, then bring it down to the laboratory or tube it down. So our techs were waiting all that time.”

“We don’t do that anymore at all. When the phlebotomists draw one patient, they walk to the tube station, tube it to the laboratory, then go back to the next patient. So we have constant motion.”

The effect on turnaround time has been striking. “The preanalytical mean time from draw to receipt in the lab three years ago was 15 to 18 minutes. Last year our goal was eight minutes, and we have it at 7.01. So we’ve more than cut it in half.”

The laboratory reassigned its phlebotomy team into “phlebotomy zones,” which brought more turnaround time savings. “Before, the specimen labels would print in the laboratory and the phlebotomists would either be paged, or if in the laboratory they would go to wherever the patient was. Now we say Phlebotomist A is responsible for Zone 1, Phlebotomist B for Zone 2, so it means less traveling.”

The zones were developed by the phlebotomists themselves, and depending on the site, they might be vertical instead of horizontal, Prososki says, since it may be a shorter distance to take the elevator than to walk to the other end of the hall. “The key is it is strategically planned so one person just stays in a geographically positioned area rather than running all over the hospital.”

The blood collection error rates, well below the median reported in the Q-Probes, attest to the success of the project. For lab-collected specimens, Alegent now reports 0.08 errors per 1,000 labels. For non-lab collected specimens: 0.61 errors per 1,000 labels. And for total collected specimens: 0.34 errors per 1,000 labels.

Does she think Lean-Six Sigma is a practical and desirable strategy for other institutions to adopt? “Absolutely,” Prososki says.

Alegent is looking forward to another significant step next year, when it plans to begin bedside bar coding. The Q-Probes study showed that automated ordering and labeling is starting to take hold but has a long way to go in the nation’s hospitals. While bar-code technology in laboratory computers is common, only 9.8 percent of laboratories participating in the Q-Probes study said they use electronic patient wristband identification for order entry.

“Some laboratories will generate a whole stack of label sheets for the phlebotomists,” Dr. Wagar says. “And it’s really easy to mix sheets of labels, although it doesn’t necessarily mean that will affect safety. But a lot of labs don’t even have that—they might still be working with paper requisitions and maybe addressographs or other kinds of labels based on what was available in their hospital.” Nearly half of the institutions surveyed in the Q-Probes study said that handwritten paper requisitions accompany specimens from both inpatient and outpatient areas of the hospitals.

When she teaches phlebotomy to her students, Nancy Erickson, PBT (ASCP), CHI(NHA), uses a strictly manual method of specimen labeling, because many hospitals still use the manual method and others might have to do so when their computer systems are down.

“Errors in specimen identification usually happen in two spots,” says Erickson, who is owner and director of Phlebotomy Education Inc., Allen Park, Mich. “The phlebotomist did not properly identify the patient, went ahead and drew blood, and it really wasn’t the right person.”

“Or another possibility, let’s say you had four labels, and three were the right ones for that patient while the last one was for another person down the hall because they were stapled together incorrectly. You’ve got four tubes but you really needed three. If you don’t take the specimen back to that armband right there to compare the labels, you won’t catch it.”

The proper procedures for the phlebotomist include identifying himself or herself, double-checking the patient identity, labeling the specimen while in the room with the patient, and making sure to take the specimen right back to compare it to the armband, she says.

Starting in 2007, the phlebotomists at Seton Medical Center in Daly City, Calif., transitioned from hard-copy orders to handheld Palm devices for bedside specimen collection, with remarkable results, says LIS coordinator Roberto Dacanay, CLS.

“The old way was, our phlebotomists would print a collection list and bar-code labels from our Sunquest laboratory information system and then go up to the floors. They would take their trays, go up to the patient’s room with a hard copy of the orders, verify the patient’s name, draw the blood, and then label the tubes with the preprinted bar-code labels. For stats, they would have to be paged overhead.” Now, with the Sunquest Collection Manager system, “all the orders go wirelessly into their handheld. They can view and download the patients to draw for the specific floor they’re assigned to.”

Throughout the shift, “when an order comes in from the hospital information system they see it almost instantly because of live streaming, and it has an audio and visual indicator. The handheld beeps and the blinking indicator will be red if it’s a stat order, yellow if a routine draw.” With the old paging system, it might turn out that two phlebotomists show up to do a draw. “Now if they are available to draw, they download the order onto their handheld and it’s locked there; no other phlebotomist can get it.”

At the bedside, phlebotomists still do their verbal checks, then scan the armband for the medical record bar code, and the system checks to make sure no added test has come in from the time of download and displays all the orders along with the tube requirements. “That’s really efficient, because a lot of times when they draw a CBC, they would return to the laboratory and then find that a prothrombin time was added. Then they’d have to go back up to the floor and stick the patient again.”

The key factor in improving patient safety has been producing the label at the bedside. “In the old system, we printed labels before they actually would draw the patient, and so they could mismatch. With this technology they get printed at the moment of scan at the patient’s bedside. So we haven’t had any mislabeling errors.”
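The sequence Dacanay describes, in which the downloaded order is locked to one phlebotomist, the wristband scan is verified against that order, any tests added in the interim are picked up, and labels are generated only at the bedside, can be summarized in a short sketch. The Python below is purely illustrative, with hypothetical names and data structures; it is not the Sunquest Collection Manager implementation.

# A minimal sketch of the bedside-verification sequence described above.
# Hypothetical names and data structures; not vendor code.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Order:
    order_id: str
    medical_record_number: str
    tests: List[str]
    locked_by: Optional[str] = None

def claim_order(order: Order, phlebotomist_id: str) -> bool:
    """Lock the order to one phlebotomist so no one else can download it."""
    if order.locked_by is None:
        order.locked_by = phlebotomist_id
        return True
    return False

def bedside_collect(order: Order, scanned_mrn: str, current_tests: List[str]) -> List[str]:
    """Verify the wristband scan, pick up added tests, and return labels to print at the bedside."""
    if scanned_mrn != order.medical_record_number:
        raise ValueError("Wristband does not match the downloaded order: stop the draw")
    order.tests = list(current_tests)   # e.g., a prothrombin time added after the CBC was ordered
    return [f"{order.medical_record_number} | {test}" for test in order.tests]

The point the sketch preserves is the one Dacanay emphasizes: no label exists until the scan at the patient’s bedside succeeds.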

The few inevitable glitches have mostly been connectivity problems, he notes. “They could be standing at a dead spot, like the kind you might have with a cell phone.” A particular patient order might be “locked” because the order is being queried down in the laboratory, or the system might occasionally freeze on the phlebotomists, Dacanay says. But rebooting usually solves the problem. At $2,000 per Palm device, plus $800 for each printer, the patient safety dividends easily justify the cost, in his view.

Palm technology has been popular at UCLA also, Dr. Wagar says. “Our phlebotomists are enthusiastic and motivated to use this technology, and they’re very appreciative because it’s wireless, the right labels are created at the bedside, and it updates itself automatically, so they always have the most current information without having to go back downstairs to the phlebotomy area.”

The Q-Probes finding that 24/7 phlebotomy services have lower error rates does not just mean that bigger hospitals are likely to do better, Dr. Wagar says, because the study did not find 24/7 phlebotomy service was correlated with the size of the hospital. “You would think it would be larger hospitals, but we didn’t really find that.”

Rather, it implies that a better-organized core group of phlebotomists in the hospital may have an impact, whether the phlebotomists are managed by the laboratory or by some other unit of the hospital.

Institutions without a 24/7 service may have phlebotomy services for the morning draws but no organized phlebotomy service for the rest of the 24-hour period. Or, if they have a more decentralized form of phlebotomy, they may rely entirely on nursing or on care partners to do blood draws, she notes. “So it appears that the availability 24/7 of phlebotomists, hired for that explicit job, regardless of who manages them, is the marker for reduced labeling errors.”

In most hospitals, says Katherine Galagan, MD, director of clinical laboratories at Virginia Mason Medical Center, Seattle, 24/7 service implies that the laboratory is staffing phlebotomy around the clock. “Usually, this means they do as many peripheral draws as are requested, and the nurses do the line draws because phlebotomists aren’t licensed for that.”

But there may be quite a bit of variability out there. “Even though we are 24/7, the phlebotomists are only drawing somewhat more than half of all the draws. We are working with nursing to eliminate line draws for coagulation studies wherever possible, as peripheral draws provide a much better sample,” Dr. Galagan says.

UCLA can point to its own experience to show that the kind of 24/7 phlebotomy service the Q-Probes report refers to can help improve specimen labeling accuracy. In the course of a study completed two years ago (Wagar E, et al. Arch Pathol Lab Med. 2006;130:1662–1668), the clinical laboratories began providing phlebotomy services around the clock, hiring 12 additional phlebotomists to do so, while also starting electronic error reporting, improved training, and automated processing. In two years these measures brought a sharp decrease in mislabeled specimens, often referred to as “wrong blood in tube”—including many months when the error rate was zero.

The Q-Probes study did not find a correlation between laboratory management of phlebotomy and improved labeling. But the tide appears to have shifted against decentralized phlebotomy, a management experiment that gained popularity in the late 1980s and early 1990s, says Dennis J. Ernst, MT(ASCP), founding director of the Center for Phlebotomy Education, Ramsey, Ind.

“Most facilities that try to decentralize switch back to centralized processing when they realize nobody can give specimen collection proper attention unless they do it all day, every day.” More and more hospitals, he says, are reclaiming phlebotomy as a laboratory procedure, realizing that no matter how much cross-training they do, it’s inherently difficult to manage non-laboratory health care professionals with enough oversight to make decentralized phlebotomy work.

Most hospitals under 100 beds, Ernst says, have a printer in the laboratory that connects to nurses’ stations, so the phlebotomists doing morning draws call up the tests that need to be done, print the labels in the laboratory, and hand-carry them to the bedside. In general, the more manual the methods, the higher the error rate in patient ID and labeling.

Regarding the two percent of hospitals that indicated in the Q-Probes study that they had no written procedure for specimen labeling at the bedside, Ernst is both surprised and not surprised. “That’s a small number, but it’s disappointing that those hospitals haven’t taken specimen identification seriously enough to do that. It’s a little shocking to me.”

Ernst says he himself hasn’t seen marked gains in accuracy of specimen identification. “I don’t see that we’re getting any better. I see the same mistakes and the same frequency of mistakes that we have in the last decade. The problem is that errors creep in whenever people are taking shortcuts. Even with technology and cutting-edge management information systems, there are still ways that those who draw blood can obtain specimens that don’t reflect the physiology of the patient.”

For example, for the sake of expediency, the phlebotomist may not draw tubes in the proper order. “Laboratories are pretty good at correcting errors once they find them, but where I think they fall short is with the invisible error—nobody can just look at a specimen and say it’s not drawn in the proper order.” Those errors have to be forestalled proactively by constant monitoring and evaluating of staff, he notes.

Though the Joint Commission’s requirement for two identifiers is a step forward, it does have a “dangerous loophole,” Ernst says, as far as identifying patients before a blood draw. Two bits of information from the patient’s identification bracelet satisfy the requirement. “The problem comes when the bracelet is on the wrong patient. Then both identifiers misidentify the patient,” Ernst says. In his view, the Joint Commission should adopt the Clinical and Laboratory Standards Institute requirement that the patient be asked to say his or her name to confirm the armband is correct (standard H3-A6). If the patient is sedated, comatose, or cognitively impaired, or there is a language barrier, a family member or caregiver is to verbalize the patient’s name on his or her behalf, he says.

Of those hospitals participating in the Q-Probes, 19 percent said they had no current ongoing quality monitor other than participating in the Q-Probes, but the presence of such a monitor was correlated with fewer labeling errors. That’s a finding Dr. Wagar is enthusiastic about. “It indicates that when you pay attention to a problem, you do solve the problem.”

“We do a lot of quality projects in the lab. We churn a lot of paper and sometimes people wonder why,” she says. “This study proves that having a quality monitor, when you continuously collect data on mislabeled or unlabeled specimens and try to take corrective actions, is associated with a lower number of errors. So it proves the activity itself, which some people question, is valuable.”

Sharing the quality monitoring information with hospital administration—which 81.7 percent of the survey participants said they do—is the second piece, she says. “We asked whether they report to the hospital or higher administration, and I think that’s important because it makes the institution aware you care about the problem and are trying to do better.”

The Q-Probes study delved into a controversial area when it asked participants whether they allow relabeling of specimens. The responses show laboratories are divided on this: 58 percent said they do not allow relabeling by primary collecting personnel, while 42 percent do allow it.

“When a specimen comes down that is either unlabeled or has a label that doesn’t match the requisition, some labs will allow relabeling,” Dr. Wagar notes. “They may perhaps have the nurse or physician come down and relabel. Some will restrict relabeling to only difficult-to-recover specimens such as biopsies or cerebrospinal fluid or something not able to be re-drawn easily.”

But Dr. Wagar was a little surprised that a substantial portion of laboratories have a procedural policy in place allowing relabeling. “I consider it to be an unsafe practice. I think it’s a good subject for another Q-Probes, because we just had a survey question in this one. We didn’t investigate it at a detailed level, and we’d certainly like to know more about it and whether it is safe or unsafe.”

Like most hospitals, Virginia Mason Medical Center used to be more accepting about relabeling, Dr. Galagan says. Many years ago, “we used to let them say they recognized it and then we’d put them through.” Then one day she had a kind of epiphany.

“I got a call from the surgical pathology laboratory saying, ‘We have three unlabeled tissue biopsies.’ Each doctor was called and each offered to identify and label it. I said ‘Fine, come on down,’ and I showed them all three and said, ‘Which one is your patient?’ We all realized that it was impossible to distinguish between them, and that got me started on a serious effort to improve patient identification, which eventually became a medical center goal as well.”

Now the laboratory is much tougher. “We don’t allow relabeling,” Dr. Galagan says. “There are a few circumstances where we may accept it if it’s a recognizable specimen, such as a colon, or in certain other circumstances. But usually we either discard it or insist on doing DNA identification if it is a tissue,” a process that may take a week or longer. “So we’ll have to either use a block from a previous case or get blood from the patient and actually verify the DNA—and we had one time where it didn’t match.”

About four years ago, responding to the Joint Commission’s patient safety goals, Virginia Mason Medical Center started a campaign called “It takes Two.” “That means it takes two identifiers—the name and the medical record number, the name and the birthday, or the name and the Social Security number—for every interaction with the patient or any part of the patient, such as the specimen.” In her own office, sitting by herself and dictating a case, she says, “I’ll say out loud into the microphone, ‘John Smith, case #SP-08-100,’ for example, reading off the slide, then I read the same name and case number off the paperwork, and that way I have both the visual and auditory cue that it is the right patient.”

Standard quality assurance measures, however, may not be sufficiently effective in preventing specimen ID errors, Dr. Galagan says. “We have our staff turn in QA reports on every mislabeling or misidentification, and we have collected that data and with each error interacted with the people who made it and looked for opportunities to improve the process. And that was all great, but what I found was it didn’t help that much.”

“To keep improvement going, we needed to both mistake-proof the process and to really go after the outliers.” Lean concepts were helpful. For example, one outlier department was a major customer that produced a lot of biopsy specimens. The department manager looked at the flow of work and identified a problem: “They were basically getting the specimen on one side of the room and the labels were on the other side. That’s why they were tending to get mislabeled. So they changed the flow and how things were arranged and that allowed them to really cut their error rate,” Dr. Galagan says.

Another mistake-proofing tool was putting a sticker on every outpatient order. “The phlebotomist has to sign off after asking the name and the date of birth, and then initial the sticker to show that both the patient and the order were checked,” Dr. Galagan says. When the tube goes into the central processors, phlebotomists also check the tube against the order before putting it into the computer, and they initial the sticker. “As a result of this mistake-proofing, we saw the error rate go down significantly,” she says.

Since the laboratory started tracking the error rates of its 46 phlebotomists in 2006, there has been a significant decline. The 2006 total was 15 errors out of 165,260 draws, an error rate of 0.091 per 1,000 draws. In 2007, there were 14 errors out of 172,450 draws, an error rate of 0.081 per 1,000 draws. The first four months of 2008 have seen only two errors out of 56,546 draws, an error rate of 0.035 per 1,000 draws. “So we’re on track for six errors all year at this rate. Our phlebotomy team is amazing and totally dedicated to eliminating this error completely,” Dr. Galagan says. The laboratory has also made progress in its overall error rate (both laboratory and non-laboratory collected specimens), she notes, dropping from 2.5 per 1,000 collections in 2004 to 0.8 per 1,000 collections in 2008.
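As a quick check of the arithmetic behind those figures, the counts reported above reproduce the quoted rates; the short Python calculation below is illustrative only.

# Illustrative only: reproduce the per-1,000 rates from the counts quoted above.
periods = {
    "2006": (15, 165_260),          # (errors, draws)
    "2007": (14, 172_450),
    "2008 (Jan-Apr)": (2, 56_546),
}
for period, (errors, draws) in periods.items():
    print(f"{period}: {errors / draws * 1000:.3f} errors per 1,000 draws")

# Annualizing the first four months of 2008: 2 errors x (12 / 4) = 6 errors for the year
print(f"Projected 2008 errors at the current pace: {2 * 12 / 4:.0f}")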

Virginia Mason’s phlebotomists hand-carry the specimens they collect from the floor to the laboratory to keep turnaround time down; transporters are used for the nurse draws. “To help identify mislabeling at its source, we started using yellow labels as opposed to white. And that helps—it allows the transporters and others to notice if a tube is unlabeled. They don’t see that yellow and they say to the nurse, ‘This isn’t labeled,’ rather than bring it down to the laboratory unlabeled. So it’s another place we can mistake-proof,” Dr. Galagan says.

The hospital’s next move to lower the error rate will be to add bedside bar coding through Palm technology, which Dr. Galagan hopes will be implemented next year. “We’re a big hospital and medical center with several outlying community clinics, and our IS team covers the whole organization. So we worked with the IS team and identified several other areas where bar coding will be useful to the organization,” she says.

Philosophies differ on whether punitive action toward phlebotomy personnel can have a substantive impact on error rates. At some laboratories, a second error means automatic dismissal, but Dr. Galagan is not sure that such an inflexible policy works. “If someone new had a lot of problems, we might dismiss them,” she says, “but if someone who had been here for years and years suddenly started making errors, we would be looking at the process and circumstances to see if something had changed to create an error-prone situation for our staff. In general, though, we have found a positive approach to be more productive.” The phlebotomy and central processing manager publicly recognizes and reinforces good work. “He regularly throws staff recognition parties, gives the phlebotomists awards, and calls people out for having really low error rates,” Dr. Galagan says.

The push to increase patient safety has boosted awareness of the importance of phlebotomy training and certification, says phlebotomy educator Erickson, pointing out that California and Louisiana passed licensing laws within the past six years, and there are about 18 groups that do the certifying.

Erickson’s own students, after completing classroom education and then clinical training that includes 100 successful venipunctures, earn a certificate from Phlebotomy Education Inc., but they are encouraged to sit for the national certification test. She also runs the Certification Preparedness Agency, a company that prepares test takers and administers the test from the National Healthcareer Association. Unfortunately, Erickson says, attendance for the national test is low, since it is not required in many states and brings little, if any, increase in pay. The American Society for Clinical Pathology and American Medical Technologists are among other groups that offer phlebotomist certification.

Ernst, of the Center for Phlebotomy Education, encourages individuals who are passionate about minimum training standards for specimen collection personnel to lobby their state legislators. “The large percentage of errors committed preanalytically cannot be eradicated without constant training and discipline,” he says, insisting that improved educational standards for phlebotomists will keep many errors from being committed in the first place.

But perfection in specimen collection is not going to happen, Dr. Galagan emphasizes. “Wherever human beings are involved, things are not perfect. When we have bedside bar coding, it will be another huge step for us, but there will still be an occasional error. There will be some creative thing that people will do that produces an error that we’ll then have to look at and keep after.”

“It’s something that has to be always on people’s radar screen, and while it’s not something you’re going to completely fix, it is something you can manage and improve forever.”


Anne Paxton is a writer in Seattle.
 
 