College of American Pathologists

Tracking troponin TAT, completeness of cancer reports

CAP Today
October 2011
Feature Story

Ed Finkel

The CAP has launched a new program called Q-Monitors to provide quality monitors for rapid chest pain diagnosis and the completeness of cancer reports.

The Society of Chest Pain Centers approached the CAP to develop a performance monitoring program that tracks the timeliness of serum troponin measurement in diagnosing non-ST-elevation myocardial infarction. Named “Monitoring of Troponin Metrics for Chest Pain Centers,” this new Q-Monitor was unveiled at the American Association for Clinical Chemistry’s annual meeting in July and is set to be rolled out during the first half of 2012.

Separately, the CAP has unveiled a second Q-Monitor on completeness of cancer reporting that’s specific to cancer centers and prompts them to fulfill the American College of Surgeons’ Commission on Cancer requirements for self-auditing pathology reports. It, too, will be available in the first half of next year. Enrollment for both has begun.

The two new monitors have become part of the CAP’s roster of quality management tools, which are designed to bolster lab processes in general, make laboratory management more effective, and help labs take a hard look at performance and identify future quality improvement opportunities.

The Society of Chest Pain Centers, or SCPC, reviewed the literature and decided on the CAP for a partnership on behalf of chest pain centers, given the CAP’s past experience in monitoring timeliness of troponin, says Ron B. Schifman, MD, a member of the CAP Quality Practices Committee, which develops the College’s quality monitoring tools.

“This is the first time that the CAP has collaborated with the Society of Chest Pain Centers or any other outside society on a quality performance program. This breaks new ground,” Dr. Schifman says. “It also demonstrates CAP’s experience and leadership in providing the kind of statistical analysis and data collection tools used for these types of timeliness measures, for which they sought us out.”

The troponin monitor will help providers meet the Society of Chest Pain Centers accreditation metrics and the upcoming Centers for Medicare and Medicaid Services requirement for troponin testing turnaround time of 60 minutes or less, measured from patient arrival to result availability.

Members of the CAP’s Quality Practices Committee were enthusiastic about the partnership, says Christine Bashleben, a CAP senior technical specialist who works with the committee on the quality management tools. “We developed the troponin monitor based on the Society of Chest Pain Centers’ requirements. … We tailor-made everything so it would be simple for chest pain centers to participate in our program. They will have everything to show the society, ‘Yes, we’ve met this requirement.’”

The society has aligned itself with the current CMS health care reform measures to ensure hospitals are on track to improve the quality of patient care. Through the SCPC’s multi-specialty accreditation process, hospitals seeking accreditation have to meet rigorous criteria based in a process improvement method, says the SCPC’s Deborah Koeppen, director of marketing and business development.

The SCPC worked closely with the CAP in its creation of a track that would assist hospitals seeking chest pain center accreditation with the required metrics related to troponin turnaround times, Koeppen says. “Our intent is to better educate our facilities on the importance of tracking their lab processes to improve patient outcomes through rapid risk stratification.”

The CMS turnaround time requirement, scheduled to take effect on Jan. 1, 2012, came up concurrently and serendipitously, Bashleben says. “They were moving along the same path,” she says. “There is the ability using this monitor to show that my turnaround time for patient arrival to report is within 60 minutes, if they choose to do that metric. They’ll be able to show that, ‘We meet that requirement, too.’ ”

The Q-Monitor will be aimed primarily at coordinators of chest pain centers—usually the manager or director of the emergency department or the cardiac cath laboratory—as well as the emergency department medical director or hospitalist and the laboratory manager, administrator, or medical director.

“Hospitals need to fast-track [healthy] patients so that resources are used appropriately. If they’re having indigestion, you want to check them out and get them out,” Koeppen says. But without a granular monitoring system, that’s easier said than done.

“There are so many steps involved in the process of drawing blood and ensuring timely results,” she says. “A patient comes in, they may be sweating or exhibiting other signs of a heart attack. You may not have the syringe; you may not have the tube. The ED tech may be responsible for ‘running’ the blood work from the ED to the lab but is busy with another patient. There are all these little steps that we want to help hospitals break down, measure, eliminate if not needed, and improve. Improving the process will improve turnaround times, and the only way to determine if the changes are successful is to measure the steps.”

Until now, the SCPC has pushed chest pain centers to improve turnaround times, but “it was no more detailed than that,” she says. “Now we’re getting more detailed. Additionally, requiring these types of measurements encourages communication between the emergency department and the lab. Departments need to work together to develop better processes.”

The CAP has an extensive collection of quality management tools that make up an external peer-comparison program to address process and outcome quality assurance. The tools include, among others, the in-depth quality assessment programs called Q-Probes and the continuous quality monitoring programs called Q-Tracks. Q-Monitors are customized quality monitoring programs that allow participants to choose which indicators they want to track and measure.

“CAP takes quality seriously and operates out in front of many other medical societies,” says Paul Valenstein, MD, chair of the CAP Council on Scientific Affairs. “The College isn’t satisfied issuing pronouncements about what pathologists and laboratories should be doing. CAP partners with providers and laboratories to assess how well the job is actually being done,” he says, as seen in the CAP Surveys program, in accreditation, and in the Q-Probes and Q-Tracks services.

The Quality Practices Committee fashioned the new troponin Q-Monitor after a preexisting Q-Tracks program that measures troponin turnaround time, Dr. Schifman says, “with modifications so that it specifically targets the accreditation requirements for this particular [SCPC] standard.”

The troponin Q-Monitor will provide monthly data collection and quarterly reports, and enable participants to benchmark against other institutions and choose their metrics based on testing method.

Main laboratories will need to measure either patient arrival to result, specimen collection to result, or test order to result, or any combination, for a randomly selected sample of patients. In addition, they will need to measure either patient arrival to test order, test order to specimen collection, specimen collection to laboratory receipt, laboratory receipt to result availability, or any combination.

Point-of-care settings will be required to measure specimen collection to result availability and will have the option of also measuring patient arrival to result availability.

For both, the performance indicators will be a monthly check on median turnaround time for troponin testing as well as test-order-to-result-availability compliance rate, and specimen-collection-to-result-availability compliance rate.

“You don’t have to measure all five time points; you need to measure at least two or three, depending on whether troponin is measured in the lab or at the point of care,” Dr. Schifman says. “The benefit for chest pain centers to participate in this particular program is to obtain useful benchmark metrics to compare how they’re doing over time. Are they improving or sustaining good performance?”
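The two performance indicators described above—monthly median turnaround time and the rate of results meeting the 60-minute arrival-to-result target—are straightforward to compute from timestamped cases. The sketch below is illustrative only; the timestamps are invented, and the actual Q-Monitor defines its own data collection forms and sampling rules.

```python
from datetime import datetime
from statistics import median

# Hypothetical cases: (patient arrival, result available) timestamp pairs.
# The data are invented for illustration; the 60-minute threshold follows
# the CMS arrival-to-result requirement described in the article.
cases = [
    (datetime(2011, 10, 1, 8, 0),  datetime(2011, 10, 1, 8, 45)),   # 45 min
    (datetime(2011, 10, 1, 9, 10), datetime(2011, 10, 1, 10, 20)),  # 70 min
    (datetime(2011, 10, 1, 11, 5), datetime(2011, 10, 1, 11, 55)),  # 50 min
]

THRESHOLD_MIN = 60  # CMS arrival-to-result target, in minutes

# Turnaround time for each case, in minutes
tat_minutes = [(result - arrival).total_seconds() / 60
               for arrival, result in cases]

median_tat = median(tat_minutes)  # monthly median TAT indicator
compliance = sum(t <= THRESHOLD_MIN for t in tat_minutes) / len(tat_minutes)

print(f"Median TAT: {median_tat:.0f} min, compliance: {compliance:.0%}")
# → Median TAT: 50 min, compliance: 67%
```

The same arithmetic applies to any of the other measured segments (test order to result, specimen collection to result, and so on); only the timestamp pair changes.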

“We help them through that process and help them measure the outcomes,” Koeppen adds. “We’re very excited about the partnership between CAP and the Society of Chest Pain Centers because the relationship is going to improve overall patient care.”

The completeness of cancer reporting Q-Monitor is designed to provide institutions and pathology departments with feedback on their cancer reporting and help in complying with CAP, Commission on Cancer, and other standards.

“More and more, people are realizing that having complete cancer reports is very beneficial to the patient,” says Raouf E. Nakhleh, MD, chair of the CAP’s Quality Practices Committee. “This allows people to have a tool to check themselves to make sure they’re doing the right thing. … You want to make sure you have a consistent product.”

Dr. Valenstein is mindful of how difficult it is to measure pathologist performance in a meaningful way, and references a quote attributed to Albert Einstein: “Not everything that can be counted, counts. And not everything that counts can be counted.”

“That dilemma applies to pathologists’ performance,” he says. “There’s a lot you can measure about pathologist performance that is silly and mostly meaningless. And there are a lot of important aspects of our work that we can’t objectively assess. With this monitor, we are trying to get at the intersection—something that can be measured and that matters.”

The American College of Surgeons Commission on Cancer requires that designated cancer centers show that 90 percent of cancer resection reports contain all the required elements of the CAP cancer protocols. CAP accreditation also calls for the required data elements of the protocols to be in surgical pathology reports. In addition, the CMS provides incentives to report completeness in breast and colon cancer reports through its Physician Quality Reporting Initiative, and other types of cancer likely will follow.

Use of the Q-Monitor will also demonstrate quality performance to patients, hospitals, and the insurance companies that foot a good part of the bill, Dr. Nakhleh says.

“It makes good sense from a patient’s perspective, from an institutional perspective, and very likely from the payer’s perspective,” he says. “You have to demonstrate that you are completing your reports as you’re supposed to. Down the road, and I’m not predicting this, but I don’t think it would be too far-fetched to say some payers will want top quality in everything, and this might be one condition. They may not want to pay unless you demonstrate you’re doing everything according to standards.”

Dr. Nakhleh doesn’t expect the new monitor to change significantly what labs are already doing. “It’s going to facilitate the way you document this,” he says. “It says, yes, you’re in compliance, or no, you’re not in compliance. We’re not implementing any new methodologies. … It’s an ongoing check to make sure you keep up with things and do what you’re supposed to do.”

Institutions and pathology departments that participate in this Q-Monitor will receive quarterly group-level feedback on cancer reporting, drawn from at least a 10 percent random sample of reports eligible for CAP protocols, and they will have the option to benchmark the cancer reporting performance of individual pathologists.
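The audit mechanics—draw at least a 10 percent random sample of eligible reports, then check each against the required data elements—can be sketched as below. Everything here is hypothetical: the report records and the required-element set are stand-ins, since the real required elements come from the CAP cancer protocols for each specimen type.

```python
import random

# Hypothetical required data elements (the real lists come from the
# CAP cancer protocols and vary by specimen type).
REQUIRED = {"histologic_type", "margins", "pT", "pN", "grade"}

# Invented reports: every fifth one is missing "grade" for illustration.
reports = [
    {"id": i,
     "elements": REQUIRED - ({"grade"} if i % 5 == 0 else set())}
    for i in range(100)
]

random.seed(0)  # reproducible sample for this example
sample_size = max(1, len(reports) // 10)  # at least a 10 percent sample
sample = random.sample(reports, sample_size)

# A report is complete if it contains every required element.
complete = sum(REQUIRED <= r["elements"] for r in sample)
rate = complete / len(sample)
print(f"Sampled {len(sample)} of {len(reports)} reports; "
      f"completeness rate: {rate:.0%}")
```

In practice the comparison is against the 90 percent threshold the Commission on Cancer sets, and the Q-Monitor handles the sampling, benchmarking, and quarterly reporting on participants' behalf.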

“Performance is primarily assessed at the group level. Assessment of individual pathologists is entirely optional,” Dr. Valenstein says. “If a group hasn’t adopted checklists or worksheets or a computer result entry tool that ensures reports are complete, there’s not much point in monitoring individual pathologists; the problem is at the group level. But if the practice has put the right tools in place and isn’t performing where it needs to be, it may make sense to look at individual pathologists. Someone may not be using the tools or may be using an old version of a checklist or may be struggling in some other way.”

The Commission on Cancer’s standards apply to the institution—not the individual. “It is up to the practice,” Dr. Valenstein says, “to decide whether it makes sense to track the performance of individual practitioners.”

Ed Finkel is a writer in Evanston, Ill.