College of American Pathologists
CAP TODAY 2007 Archive

Handing off critical results—before things turn critical

June 2007
Feature Story

Karen Lusky

To drive the potential for sentinel events to as close to zero as it can go, anatomic pathology labs are using novel IT solutions and other strategies to help ensure that what can go wrong doesn’t.

As a case in point, many hospitals rely on e-mail alerts or other computerized messaging to let clinicians know that a patient’s pathology report shows cancer. And as a backup, pathologists might orally hand off cancer diagnoses to the clinicians. That type of redundant system works well more than 99 percent of the time.

But what happens when a clinician inadvertently deletes or fails to heed the electronic communication—and the pathologist hasn’t placed a call to the clinician about it? At the University of Michigan Hospitals in Ann Arbor, a computerized surveillance system is on guard to catch such oversights before they fall through the cracks.

Last fall, pathologists at the university implemented a homegrown information technology tool that automatically scans anatomic pathology reports in the lab information system each day to identify cases where the pathologist failed to discuss a patient’s unexpected or new cancer diagnosis with the clinician at the time the pathologist verified the report.

The IT system includes 20 to 30 key phrases in the search logic indicating a cancer diagnosis and 10 to 15 key phrases of critical text suggesting communication with the clinician has occurred, explains Ulysses J. Balis, MD, director of clinical informatics and co-director of the Division of Pathology Informatics, University of Michigan Health System. “The search terms are built into the algorithms of the reports.”
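The kind of Boolean rule Dr. Balis describes can be sketched in a few lines. The phrase lists below are invented stand-ins, not Michigan’s actual search terms, which the article doesn’t enumerate:

```python
# Illustrative sketch of the surveillance rule: flag a report that contains a
# cancer phrase but none of the phrases documenting clinician communication.
# Both phrase lists are hypothetical examples, not the real search logic.

CANCER_PHRASES = ["carcinoma", "adenocarcinoma", "malignant", "lymphoma"]
COMM_PHRASES = ["discussed with", "communicated to", "notified dr", "called dr"]

def flag_report(report_text: str) -> bool:
    """Return True if the report suggests a cancer diagnosis with no
    documented clinician communication (a potential failed handoff)."""
    text = report_text.lower()
    has_cancer = any(p in text for p in CANCER_PHRASES)
    has_communication = any(p in text for p in COMM_PHRASES)
    return has_cancer and not has_communication
```

As Dr. Balis notes later in the piece, a rule like this is deliberately tuned toward sensitivity: a human reviewer winnows the flagged list each day.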

Each day, Jeffrey L. Myers, MD, director of anatomic pathology, or Dan Visscher, MD, director of surgical pathology, receives a list of reports the system has flagged as having a cancer diagnosis but none of the key terms suggesting the pathologist talked to the clinician about it. Then they decide which cases might be at risk for not having been handed off to the clinician—for example, primary biopsies showing cancer.

Next, either Dr. Myers or Dr. Visscher investigates each selected case, checking whether the diagnosis falls within the critical values alerts policy and, if so, whether the patient’s electronic medical record shows the clinician knows about the cancer diagnosis.

“Prima facie evidence” that a clinician is aware of the cancer and taking action, Dr. Balis says, would include a scheduled followup appointment or definitive procedure. In the absence of that kind of evidence, the pathologist can easily “rectify the situation by contacting the clinician.”

To date, the IT surveillance system has identified 2,079 potential failed handoffs, the vast majority of which involved cancer diagnoses that were no surprise, so the pathologist’s documentation wasn’t required.

Seven of them, however, required action that occurred the day after the surgical pathology report was issued, Dr. Myers says. In one case, the pathologist had diagnosed a first-time hepatocellular carcinoma in a patient whose imaging studies suggested cirrhosis of the liver. “The diagnosis was unexpected and not clear to the practitioner when the pathologist signed the case out,” he says. “We promptly communicated the diagnosis the next day.”

Another recent case involved a patient with breast cancer that had metastasized to the lymph nodes, along with presumed liver metastasis. But the pathologist determined that the liver lesion was actually a metastasis from a new primary cancer, a diagnosis that changes the treatment and prognosis.

What do clinicians say about the approach? “Seven so far have been beneficiaries of the technology” and have been “extremely grateful,” Dr. Balis says. “Our findings show that at baseline not all clinicians look at all of their surgical pathology results. The results are available to them in the patient’s EMR, but it’s up to the clinician to go to that EMR and retrieve the results. Sometimes they don’t, presumably because they think they know the answer in a certain case and there’s no point in looking.”

The most common pattern among the seven failed handoffs the system detected thus far involves an unexpected primary malignancy in a patient with a preexisting cancer. In such cases, Dr. Balis suspects, clinicians may be “desensitized to looking at the report” in the electronic record because they assume the second primary cancer is metastatic disease.

But it’s these cases, he points out, for which a clinician could have potential liability if he or she didn’t know the second tumor was a new primary malignancy.

“That’s a pretty big mistake and legally actionable,” he says. “Most importantly, it harms the patient. The whole reason we are doing this is to protect the patient, and with the modern electronic medical record,” clinicians receive “a sea of information.” The university’s IT approach extracts what’s important from what he calls “the sea of noise.”

The IT surveillance tool also detects a very small percent of cases, Dr. Balis says, where the pathologist simply forgot to document that he or she communicated an unexpected cancer diagnosis to the clinician. Identifying the lack of documentation allows the pathologist to “properly annotate the surgical pathology report with an addendum saying the clinical handoff occurred with a specific communication.”

Dr. Balis notes lawyers’ fondness for the concept that if an event isn’t documented, it never happened. If pathologists apply that standard in practice, he says, the chance of not reporting a case “dwindles down to the sensitivity” of a tool for identifying unexpected diagnoses.

“And you design a tool that errs on the side of sensitivity rather than specificity,” Dr. Balis adds. “We know we have a very high false-positive rate and that’s appropriate.”

That failure to hand off a result occurs so infrequently underscores the importance of using an automated approach, in Dr. Balis’ view. “We recognize the limits of people’s memories,” he says, “and instead of chastising pathologists after the fact for not remembering to contact a clinician about a critical result, we provide a tool to compensate for this intrinsic shortcoming where people don’t always remember to do something each and every time.”

Developing the IT surveillance tool isn’t difficult, Dr. Balis says, nor does it require an elaborate new computer system.

“We use the Cerner PathNet Classic, a 25-year-old system that, for purposes of this type of IT approach, works perfectly well. The type of Boolean rules in terms of text analysis can be carried out on any modern AP LIS without great difficulty.” Their application specialist, Beth Valka, prototyped and validated the tool within a week’s time. It was fully operational a week later, he says.

Of course, broader computerized searches can be created to flag cases other than malignant diagnoses.

“There’s no questioning the utility of this approach,” Dr. Balis says. He and colleagues are preparing a paper about their IT solution for publication in a peer-reviewed journal. “We hope it will become the new standard of practice in AP, which lags behind clinical pathology” in using automation and rules-based approaches, he says.

Routine manual checks of electronic medical records can also flag instances in which clinicians are not acting on a patient’s cancer diagnosis. That’s the approach the Baltimore Veterans Affairs Maryland Health Care System takes. Pathologists there review the computerized patient record system, or CPRS, for evidence of clinical followup one month after the AP laboratory informs a clinician about a cancer diagnosis.

“If there’s nothing in the CPRS record, we call the clinician,” says G. William Moore, MD, PhD, staff pathologist and chief of the quality assurance section for anatomic pathology for the Baltimore VA system.

The pathologists also look at the OR schedule daily to identify cases, such as a colectomy for cancer, so they can make sure the patient has a cancer diagnosis from the VA. “A patient may have received the diagnosis somewhere else,” Dr. Moore says, but the pathology department reviews the biopsy to verify the patient’s identification and diagnosis. “We warn the surgeon if we don’t have the evidence in our files to support the operation,” he says.

Patient care has been known to take a wrong turn when a clinician miscommunicates a pathology diagnosis in writing to a patient.

“The problem occurs where the pathologist correctly reports that a lesion is malignant and the clinician writes a letter to the patient indicating it’s benign or vice versa,” says J. Mark Tuthill, MD, division head of pathology informatics, Henry Ford Health System, Detroit.

To reduce the potential for that type of error, the anatomic pathology lab at Henry Ford built a tool that connects the patient letter function of the electronic medical record to the pathology reports. When the clinician uses the EMR to compose a letter to a patient about a pathology result, the IT function automatically drops the pathology diagnosis and report into the letter. No cutting and pasting or paraphrasing results is allowed. That way, there’s no chance the patient will read, “Mrs. Smith, you have a benign malignant melanoma,” Dr. Tuthill says.
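A minimal sketch of that letter-building behavior: the report text is inserted verbatim rather than retyped or paraphrased. The function name and formatting here are illustrative, not Henry Ford’s actual EMR code:

```python
def compose_patient_letter(clinician_note: str, pathology_report: str) -> str:
    """Build a patient letter by appending the pathology report verbatim,
    so the clinician's wording can never contradict the diagnosis."""
    return (
        f"{clinician_note}\n\n"
        "--- Pathology report (reproduced verbatim) ---\n"
        f"{pathology_report}\n"
    )
```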

The IT tool has “eliminated a huge amount of confusion and errors where clinicians reported things to patients that the pathologist didn’t say” or in a different way than the pathologist said them.

Clinicians can still put text around the pathology report to interpret the information for the patient. They can write, for example, “This is normal, no need to worry. Come back next year as usual,” Dr. Tuthill says.

“But we have found that most patients scan the information in the pathology report to see if the lesion was benign. And they seem to appreciate the fact that the pathology information is in the letter.”

Some AP labs are using systems that help ensure reports contain all the required elements in a structured, checklist format. This tends to eliminate mistakes in dictated reports.

A pathologist might say, for example, that cancer is not present when he or she meant to say cancer is not present at the margin. “The ‘not’s’ and ‘nones’ are really important modifiers in pathology,” Dr. Tuthill says.

David Booker, MD, medical director of Claripath Laboratories, Augusta, Ga., once read a report where a transcriptionist had written “lipoma,” a benign diagnosis, instead of “lymphoma.” It’s his observation that many physicians spend little or no time proofreading their reports. Moreover, he adds, authors tend to “see what they meant to write rather than what is recorded.”

Dr. Tuthill describes as “human behavior” the tendency for a busy person to think a transcribed report looks acceptable without scrutinizing it. And “the problem with using a synoptic text report where the transcriptionist puts in a text line is that it’s very hard to attend to that level of detail in editing,” he says.

To ensure reports are accurate and contain all the required elements, Henry Ford’s AP laboratory has adopted a structured reporting tool using the Misys CoPath system for reporting cancer findings and cases involving transplant rejection and chronic hepatitis in liver biopsies.

Using the tool, the pathologist selects the applicable checklist for a given case, which for a liver biopsy would be the chronic hepatitis checklist. The pathologist has to populate all of the fields on the checklist before CoPath allows him or her to sign the case.

“The pathologist has to pick an option among the available options on the menu,” Dr. Tuthill says. “In some cases, you could choose ‘not applicable’ [as an answer], but for the most part we don’t have people who try to fool the system.”

If the pathologist makes a nonstandard choice, such as “not applicable,” he or she explains it in a comment field.
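The sign-out gate described above can be sketched as a simple completeness check. The field names and the comment requirement for “not applicable” are hypothetical stand-ins for CoPath’s actual checklist behavior:

```python
# Illustrative checklist gate: sign-out is blocked until every required field
# has a value, and a nonstandard choice must carry an explanatory comment.

REQUIRED_FIELDS = ["histologic_type", "grade", "margin_status", "tumor_size_cm"]

def can_sign_out(case: dict) -> tuple[bool, list[str]]:
    """Return (ok, problems); ok is True only when the checklist is complete."""
    problems = []
    for field in REQUIRED_FIELDS:
        value = case.get(field)
        if value in (None, ""):
            problems.append(f"missing: {field}")
        elif value == "not applicable" and not case.get("comment"):
            problems.append(f"'{field}' marked not applicable without a comment")
    return (not problems, problems)
```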

The system isn’t error-proof. “The menu asks the pathologist to enter the tumor dimensions,” Dr. Tuthill says, “so he or she could enter the wrong dimensions—for example, 1.2 rather than 12—and there’s nothing we can do to capture that.”

Since implementing the structured reporting tool in October 2004, the number of amendments to AP reports for omissions in cancer staging has dropped to zero. “We don’t know what the number was before,” Dr. Tuthill says, because it wasn’t tracked. “People could go re-edit the case and add new information. But we know at least internally that the pathologist would find something wrong with about one in three or one in four reports sent to the transcriptionist, requiring the report to be sent back for corrections. We still have to amend reports for a variety of other reasons.”

Because the structured reporting system is so easy to use, pathologists are now “populating the information themselves rather than playing telephone tag where they dictate, receive the report, find an error, fix it, dictate it again, etc.,” he says. “They make the appropriate choices at the time they review the cases, which results in a huge reduction in errors.”

Using this tool, the order of the information pathologists report is always the same, says Dr. Tuthill, who adds, “The standardization of the ordering is fairly important.” Claripath’s Dr. Booker agrees that a standardized, well-formatted report can prevent clinicians from missing a diagnosis. He recounts how a urologist told him of a colleague who overlooked a diagnosis of prostate carcinoma on page seven of a multi-page pathology report. And another example: “A thoracic surgeon who had biopsied a lung tumor overlooked an important prognostic finding because it was buried in the text of a lengthy microscopic description,” he says.

Dr. Booker, who is a member of a CAP ad hoc committee on standardizing AP reports, says there is a debate in pathology and on the ad hoc committee about whether more or less is better in AP reports. Some say the report should include extensive information about quality control, such as the staining of controls for special stains, and analyte-specific reagent disclaimers. But in his experience, “Clinicians expect to read a pathology report very quickly and move on. They aren’t going to spend 30 minutes reading a path report.” Thus, length is important.

However long or short, the report should be formatted such that it is easy to find important diagnostic and prognostic information. At Claripath Laboratories, the top of page one of the prostate biopsy report has the diagnosis (benign, adenocarcinoma, atypical, etc.) in tables with symbols representing key prognostic findings—Gleason score, perineural invasion, periprostatic extension, and vascular invasion, says Dr. Booker. Below the tables is a schematic diagram of the prostate showing the location of each of these findings in the gland.

“We can include up to 16 separately submitted biopsies with all of this information on page one of our reports, and include photomicrographs as well,” Dr. Booker says. Page two, which is usually a half-page, includes the gross description and other comments. But “the clinicians are only interested in page one and can review all of the diagnoses very quickly and have a visual tool for patient education,” he says.

The prostate biopsy reports are generated quickly and easily, Dr. Booker says. Here’s how it works: Claripath uses multiple accession prefixes, each of which correlates to the number and distribution of biopsies the clinician performed. When the specimens arrive in the lab and the patient is registered, the case’s bar-coded accession prefix determines the report template, which has default values. The pathologist edits the template at the microscope, changing the default values if needed. “There is absolutely no dictation and transcription involved,” Dr. Booker says.
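The prefix-to-template lookup might look something like the sketch below. The prefixes and default values are invented for illustration; the article does not describe Claripath’s actual scheme:

```python
# Hypothetical mapping from accession prefix to a report template with
# default values; the pathologist edits only the fields that differ.

TEMPLATES = {
    "PB12": {"sites": 12, "diagnosis": "benign", "gleason": None},
    "PB16": {"sites": 16, "diagnosis": "benign", "gleason": None},
}

def start_report(accession_number: str) -> dict:
    """Select the report template from the bar-coded accession prefix."""
    prefix = accession_number.split("-")[0]
    return dict(TEMPLATES[prefix], accession=accession_number)
```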

Some might say the use of report templates with default values could cause the pathologist to make an error. In Dr. Booker’s experience, he says, the use of “free-text dictation with transcription and proofreading is subject to many more errors.”

To prevent mismatching of specimen containers to blocks, slides, and reports throughout the histology lab, Claripath uses bar codes. Its Clarikit (patent pending), which uses pre-accessioned tissue cassettes with two-dimensional bar codes, prevents the first of these potential errors—transfer of the specimen from container to cassette, Dr. Booker says. The urologists place the biopsies directly into the tissue cassettes and place all cassettes into a single bottle of fixative. “When we receive the kit we can place the cassettes directly into the tissue processor,” he says. “This saves time for the urologist and lab and speeds up turnaround time.”

At the microtome, matching labeled slides to blocks is a common problem, according to Dr. Booker, because usually all the slides in a lab are labeled on one shared instrument and then distributed to the cutting workstations and matched to blocks. But the 2D bar-code system (Stainershield Laser Imaging System, General Data) employs separate slide label printers at each histology workstation. Thus, the histotechnologist scans the 2D bar code to print the appropriate slides for each block he or she cuts—and for that block only. “We can even pull up the report for sign-out using bar codes on our slides,” which eliminates slide-report mismatching, he says. “If you dictate a case and the transcriptionist puts your dictation on the wrong patient’s report, you are unlikely to catch this error, particularly if you have a lot of similar specimens in your workflow.”
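The per-workstation safeguard can be sketched as follows: slide labels print only for the block the histotechnologist has just scanned. The identifiers and data shape are illustrative, not the Stainershield system’s actual interface:

```python
def print_slides_for_block(scanned_block_id: str,
                           protocol: dict[str, list[str]]) -> list[str]:
    """Return slide labels for the scanned block only, using the stains the
    case protocol specifies (defaulting to a single H&E slide)."""
    stains = protocol.get(scanned_block_id, ["H&E"])
    return [f"{scanned_block_id}|{stain}" for stain in stains]
```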

Use of radiofrequency identification, or RFID, can reduce the potential for human error when “shepherding patient specimens” from the point of collection to the pathology laboratory, says Schuyler Sanderson, MD, a Mayo Clinic gastrointestinal and liver pathologist. At Mayo Clinic, phase two of a pilot study is underway using RFID technology (supplied by 3M) in 41 gastrointestinal endoscopy suites. Dr. Sanderson reported on the project in May at the Executive War College, sponsored by The Dark Report.

If the validation pilot—expected to wrap up in March 2008—confirms the RFID system improves patient safety, efficiency, and staff satisfaction, Mayo Clinic will deploy the approach throughout the Rochester pathology system, says Bruce Kline, of the Mayo Clinic Office of Intellectual Property, who co-presented with Dr. Sanderson. In that case, Kline says, Mayo Clinic would also develop an RFID system, partnering with 3M, that any pathology group could use.

In the initial pilot involving five GI endoscopy suites, nurses found the RFID approach gave them a sense of comfort and security, allowing them to focus more on patient care, Dr. Sanderson told War College attendees.

The pilot RFID process works this way: When patients come in for a colonoscopy or any endoscopic procedure, they check in to the unit and all of their information is loaded into the GI database. Then, during the procedure, specimens are collected in bottles, each of which has a blank RFID tag placed on the bottom. The RFID tag used during the pilot is an off-the-shelf product in sticker form from 3M.

As the endoscopist performs the procedure, he or she dictates detailed information to the nurse, including the number of samples and pertinent information for each sample, for example, identifying a polyp or erythematous area. The nurse enters the information into the GI database, and the endoscopist reviews the nurse’s electronic notes at the end of the procedure.

“The RFID system allows one person, and only one person, to enter the patient information with physician oversight at the point of collection, eliminating the need for additional transcription points,” Dr. Sanderson told CAP TODAY.

At the end of the procedure, the nurse uses the IT system to activate each individual RFID tag, assigning them unique identifiers. The nurse also applies an IT-printed human-readable label to each bottle with the patient’s name, identifiers, procedure, and anatomical location of the specimen.

Each RFID tag is unique, Dr. Sanderson says, so that an RFID reader interrogating a tag can discriminate each specimen bottle, telling you it’s bottle C from Mrs. Jones and that Mrs. Jones should have five bottles.

The specimen bottles are scanned at various checkpoints in the handling process to ensure everything matches. When the case is accessioned in the AP lab, a staff person scans the specimens and hits a data-transfer key that imports all of the information from the GI database into the lab’s CoPath system.
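A checkpoint consistency check of that kind might be sketched like this: the reader interrogates every tag on the cart and verifies each patient’s full set of bottles is present before the batch moves on. The data shapes are invented for illustration:

```python
def verify_checkpoint(scanned: list[tuple[str, str]],
                      expected: dict[str, set[str]]) -> list[str]:
    """scanned: (patient_id, bottle_id) pairs read from RFID tags.
    expected: patient_id -> full set of bottle_ids registered at collection.
    Returns a list of discrepancies; empty means it's OK to proceed."""
    seen: dict[str, set[str]] = {}
    for patient, bottle in scanned:
        seen.setdefault(patient, set()).add(bottle)
    problems = []
    for patient in expected:
        if patient not in seen:
            problems.append(f"{patient}: no bottles scanned")
    for patient, bottles in seen.items():
        for b in sorted(expected.get(patient, set()) - bottles):
            problems.append(f"{patient}: bottle {b} missing")
        for b in sorted(bottles - expected.get(patient, set())):
            problems.append(f"{patient}: unexpected bottle {b}")
    return problems
```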

The system can prevent sentinel events where, for example, two Mrs. Smiths have colonoscopies in the same time frame, both of which uncover polyps, one of which is malignant. If the specimen bottles were transported to the laboratory on the same cart and got mixed up, the surgery could be performed on the wrong Mrs. Smith.

In the initial pilot that tested the RFID approach in five GI suites, nurses used the existing requisition-based labeling process parallel to the new RFID process. But Dr. Sanderson and his research team are working to eliminate paper requisitions in the current validation pilot study. “We are hoping to put enough information in the RFID tag to collect all of the data that one would normally put on a pathology lab requisition,” he says.

Could the same be done with bar codes?

Yes and no. “Bar coding requires the staff person to pick up the bottle and align the bar code with the scanner,” Kline says, whereas the RFID system allows you to scan multiple specimen bottles simultaneously to receive an instantaneous read saying “it’s OK to proceed.” And bar codes can become unreadable more easily.

A failed RFID tag would be a problem, Kline says, but of the 8,000 used so far in the pilot, researchers have yet to see one falter.

Manual strategies to preempt specimen misidentification errors during accessioning can and do work, says Stanley Geyer, MD, of Geyer Pathology Services LLC in Pittsburgh and a former site investigator for an Agency for Healthcare Research and Quality grant to study improving patient safety by examining pathology errors.

The Western Pennsylvania Hospital Department of Pathology in Pittsburgh, where Dr. Geyer formerly served as chair, “separates specimens of the same types during the accessioning process specifically to avoid mixups.”

Say five breast specimens arrive in the laboratory in a batch from a mammography clinic. They can be accessioned such that there are “intervening non-breast specimens” between each breast specimen. “That makes it more difficult for pathologists and laboratory techs to mix up specimens,” says Dr. Geyer.
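The interleaving trick Dr. Geyer describes amounts to ordering the batch so that no two specimens of the same type sit next to each other. A simple greedy sketch, with invented specimen identifiers:

```python
from collections import deque

def interleave_by_type(specimens: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """specimens: (specimen_id, tissue_type) pairs. Greedily pick a specimen
    whose type differs from the previous one whenever possible, so same-type
    specimens are separated in the accession order."""
    pending = deque(specimens)
    ordered: list[tuple[str, str]] = []
    while pending:
        for i, item in enumerate(pending):
            if not ordered or item[1] != ordered[-1][1]:
                pending.rotate(-i)        # bring the chosen item to the front
                ordered.append(pending.popleft())
                break
        else:
            ordered.append(pending.popleft())  # only same-type specimens left
    return ordered
```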

Of course, there’s no way to completely error-proof a system as long as humans are involved, Dr. Geyer notes. Reducing errors requires “a devotion on the part of everyone engaged in patient care” to prevent them.


Karen Lusky is a writer in Brentwood, Tenn.
