  Clinical Abstracts
February 2012

Editors:
Deborah Sesok-Pizzini, MD, MBA; Michael Bissell, MD, PhD, MPH

Use of blood products in trauma resuscitation: plasma deficit versus plasma ratio

Use of red blood cells and plasma in a one-to-one ratio has been associated with better patient outcomes. However, not all publications have consistently reported improved outcomes. A confounding variable, known as survivor bias, arises from the time required to thaw and transfuse previously frozen plasma relative to how quickly a massively bleeding patient may die. In other words, patients who survive long enough are more likely to have received plasma for their severe injuries. While many medical centers have adopted a one-to-one policy for red blood cell-to-plasma transfusion, one large center had not shown a link between implementation of this practice and better outcomes until now. The authors performed a retrospective study of 438 adult trauma patients who had a probability of survival score of 0.010 to 0.975 and received five or more units of red blood cells (RBCs) in the first 24 hours. A unique aspect of the study was the use of a plasma deficit (units of RBCs minus units of plasma) and a plasma ratio (units of plasma divided by units of RBCs) as measures to correlate with survival at each hour of resuscitation. The authors analyzed the data in groups of those receiving five or more, five to nine, and more than nine units of RBCs. The study found that mortality by hour was significantly associated with a worse plasma deficit in the first two hours of resuscitation. The association weakened after the first two hours in the entire group and in the massively transfused group. In the group receiving five to nine units of RBCs, overall mortality was lower, while the moderate plasma deficit group had the most deaths within the first hour. Nonetheless, plasma deficit appeared to be a more sensitive predictor of mortality in the first 24 hours than plasma ratio. Because this was a retrospective study, there are limitations to the interpretation of these data. However, as transfusion services struggle to develop transfusion guidelines for trauma, this study adds to the clinical evidence showing benefit from early plasma replacement. It indicates that the first two to three hours are the most critical for the bleeding patient with regard to plasma replacement.
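
The two measures compared in the study can be computed directly from cumulative transfusion counts. The short Python sketch below is illustrative only; the hourly totals are hypothetical and not drawn from the paper.

# Illustrative sketch, not from the paper: computing the two measures for a
# hypothetical patient's cumulative hourly transfusion totals.
def plasma_deficit(rbc_units, plasma_units):
    # Units of RBCs minus units of plasma; larger values indicate a greater plasma lag.
    return rbc_units - plasma_units

def plasma_ratio(rbc_units, plasma_units):
    # Units of plasma divided by units of RBCs; a 1:1 policy targets a ratio near 1.
    return plasma_units / rbc_units if rbc_units else None

# Hypothetical cumulative totals at the end of each hour of resuscitation.
hourly_totals = [(1, 6, 0), (2, 10, 4), (3, 12, 9)]  # (hour, RBC units, plasma units)
for hour, rbc, plasma in hourly_totals:
    print(f"Hour {hour}: deficit = {plasma_deficit(rbc, plasma)} units, "
          f"ratio = {plasma_ratio(rbc, plasma):.2f}")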

De Biasi AR, Stansbury LG, Dutton RP, et al. Blood product use in trauma resuscitation: plasma deficit versus plasma ratio as predictors of mortality in trauma. Transfusion. 2011;51:1925–1932.

Correspondence: Dr. John R. Hess at jhess@umm.edu


Pathology consultation on reporting of critical values

A significant responsibility of pathology and laboratory medicine services is to report critical values to patient care providers so that the providers can relay this information to the patient and the patient can obtain prompt medical care. Laboratories develop policies and procedures that define which laboratory tests require a critical value callback, how results are called back, who receives patient results, and what to do if the process fails. Hospitals often share best practices, but laboratories understand that a one-model-fits-all approach is not practical because of variations in reference ranges among assays, technical differences, and variation among instruments. A recent publication addressed the many pitfalls laboratories face in communicating critical values to patient care providers. The paper begins with a hypothetical situation, common to many laboratories, in which a laboratory technologist is unable to report a critical result to the ordering clinician. The result is an elevated international normalized ratio that is clearly defined as a critical value by the hospital laboratory. The situation escalates, and the laboratory technologist uses the chain of command to give the result to the on-call pathology resident. The authors used this example to discuss regulations, establishment of a critical call list, and methods for notification, including the personnel responsible for notification and the tools used, such as an automated notification system. The discussion also addressed escalation policies for when patient care providers cannot be contacted, repeat critical values, and the practice of an opt-out policy for receiving critical values. The authors stated that the CAP strongly discourages this “no-call hours” practice and that the practice was prohibited in 80 percent of programs surveyed in a study reported in the article. Critical value audits were cited as a method to ensure safety and reliability. The authors concluded that these audits may be used to identify laboratory process and preanalytic errors that result in unnecessary critical values due, for example, to transport issues. These audits may also prompt quality improvement initiatives with systematic reviews.
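
As a concrete illustration of the callback logic the consultation describes (a laboratory-defined critical value list plus escalation through the chain of command when the ordering clinician cannot be reached), here is a hypothetical Python sketch. The test names, limits, and roles are placeholders for illustration, not values or policies from the paper.

# Hypothetical sketch; limits and roles are illustrative placeholders only.
CRITICAL_LIMITS = {
    # test: (low, high) critical limits; None means no limit on that side.
    "potassium_mmol_l": (2.8, 6.2),
    "inr": (None, 5.0),
}

ESCALATION_CHAIN = ["ordering clinician", "covering provider", "on-call pathology resident"]

def is_critical(test, value):
    low, high = CRITICAL_LIMITS[test]
    return (low is not None and value < low) or (high is not None and value > high)

def report_critical(test, value, reached):
    # Walk the escalation chain until someone acknowledges the result.
    # `reached` maps each role to True/False for this simulation.
    if not is_critical(test, value):
        return "no callback required"
    for role in ESCALATION_CHAIN:
        if reached.get(role, False):
            return f"critical {test}={value} reported to and read back by {role}"
    return "all contacts failed: document attempts and escalate per laboratory policy"

print(report_critical("inr", 7.8, {"ordering clinician": False, "on-call pathology resident": True}))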

Genzen JR, Tormey CA. Pathology consultation on reporting of critical values. Am J Clin Pathol. 2011;135:505–513.

Correspondence: Dr. J. R. Genzen, Weill Cornell Medical College, 525 E. 68th St., F-705, New York, NY 10065


Buprenorphine levels in opioid dependence

The main pharmacological strategy used to treat opioid dependence relies on agonist occupation of µ-opioid receptors, which alleviates withdrawal symptoms, decreases craving, and attenuates the effects of heroin and other opiates. Buprenorphine, a partial µ-opioid receptor agonist, is widely used to treat opioid dependence. It has been shown to decrease opioid use and improve retention in treatment, and it is associated with less severe side effects (for example, QTc prolongation) than methadone, while being less time- and energy-consuming for patients and providers. Buprenorphine has a long half-life (approximately 36 hours), and its plasma level can be measured relatively easily. Pharmacokinetic and pharmacodynamic studies have shown that dose is generally related to plasma concentration, which in turn is associated with opioid receptor blockade and clinical effects. The recommended dose range in the United States is 8 to 32 mg/day. In contrast to methadone, guidelines do not recommend plasma level monitoring to adjust the dose of buprenorphine. Instead, the dose is supposed to be determined based on clinical response, primarily elimination of withdrawal symptoms and craving. While some data indicate the minimal plasma concentration needed to treat withdrawal symptoms, there is a dearth of information regarding the optimal buprenorphine plasma levels that would block the effect of other opioids and possibly improve long-term clinical stability. The authors collected buprenorphine levels from patients in their clinic who were on a stable dose of Suboxone (buprenorphine/naloxone). The patients’ blood was drawn 90 minutes after the morning dose. The authors reported on the three patients whose clinical situation, in their opinion, might have been improved through monitoring of buprenorphine levels. Patient one was a 67-year-old male who had been on a stable dose of 12/3 mg of Suboxone for 10 months. The patient was healthy, sober, and clinically stable and did not take any other medication or have any medical illness that could modulate buprenorphine level. His weight and height were within the normal range (weight, 68 kg; height, 163.8 cm). Patient two was a 63-year-old male who had been taking a dose of 16/4 mg of Suboxone for almost three months. He did not experience any craving or withdrawal symptoms, was sober and otherwise healthy, and did not take any medication other than buprenorphine (weight, 67.1 kg; height, 182.9 cm). Patient three was a 56-year-old male who started taking a dose of 16/4 mg of Suboxone 12 months earlier. He was also abstinent and clinically stable and did not take any medication that could interfere with buprenorphine metabolism. His body mass index (BMI) was also in the normal range (weight, 75.7 kg; height, 175.3 cm). Patients one, two, and three had buprenorphine levels of 7.0 ng/mL, 1.5 ng/mL, and 4.8 ng/mL, respectively. While buprenorphine level is thought to be related to dosage, these cases show that there may be significant variations in buprenorphine plasma concentration, and possibly in opioid receptor blockade and long-term medication efficacy, among patients who take similar doses. This suggests that monitoring buprenorphine levels may generate complementary information for treating patients on Suboxone and adjusting their dosage, especially after the first weeks of treatment, when the acute withdrawal symptoms have been alleviated.
In such cases, it would, therefore, be critical to increase the amount and availability of information and guidelines to provide clinicians with some direction regarding how to use such clinical tools.
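
A quick back-of-the-envelope calculation with the three reported cases makes the variability concrete. The dose and level values are taken from the abstract above; the ng/mL-per-mg ratio is simply illustrative arithmetic, not a metric proposed by the authors.

# Illustrative arithmetic using the three cases reported above.
cases = {  # patient: (buprenorphine dose in mg/day, measured plasma level in ng/mL)
    "patient one": (12, 7.0),
    "patient two": (16, 1.5),
    "patient three": (16, 4.8),
}
for name, (dose_mg, level_ng_ml) in cases.items():
    print(f"{name}: {level_ng_ml / dose_mg:.2f} ng/mL per mg/day")
# Output shows roughly a six-fold spread (about 0.58 versus 0.09 ng/mL per mg/day)
# despite comparable daily doses, which is the variability the authors highlight.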

Jutras-Aswad D, Scimeca MM. Buprenorphine plasma concentration in the management of opioid dependence. Am J Addict. 2011;30:304–305.

Correspondence: Michael M. Scimeca at michael.scimeca@va.gov


Evaluation of whole blood bead immunoassays

The advantages of microfluidic technology for diagnostic applications, which include assay speed, small sample and reagent volumes, automation, small footprint, and low cost, have driven widespread development of point-of-care (POC) testing devices. A promising system for POC assays is the “lab on a disk,” in which fluid flow is driven by centrifugation of a disk containing microfluidic channels. These systems offer the possibility of portable diagnostic instruments the size of a compact disc player that read inexpensive disposable assay disks. Extensive development efforts have yielded multiple types of immunoassay applications, including simple modifications of data compact discs, sophisticated assay platforms, and printed protein arrays. In addition, substantial progress has been made in developing centrifugal systems with self-contained sample-processing capabilities for detecting DNA/RNA and protein. In a notable recent example, researchers at the Samsung Advanced Institute of Technology, in South Korea, developed a fully integrated, portable immunoassay platform capable of directly processing whole blood. However, despite the level of sophistication achieved with these centrifugal immunoassays, there remains room for improvement. Reproducing the entire task sequence of a conventional immunoassay protocol requires several valves and multiple reagent storage compartments. The need for multiple wash steps to remove nonspecifically bound detection agents and proteins is especially taxing on assay design, requiring wash reservoirs, valves, and a voluminous waste reservoir. This complexity increases the disk space needed, reducing the potential for parallelization and multiplexing. Furthermore, the large number of steps increases the assay run time, typically to 30 to 60 minutes, and heightens the probability that a single sequence error will invalidate the assay. For applications requiring rapid POC screening with high fidelity from whole blood samples, a fundamentally new technique for conducting immunoassays is needed. The authors presented proof of principle for a disk-based microfluidic immunoassay technique that processes blood samples without conventional wash steps. Microfluidic disks were fabricated from layers of patterned, double-sided tape and polymer sheets. Sample was mixed on the disk with assay capture beads and labeling antibodies. Following incubation, the assay beads were physically separated from the blood cells, plasma, and unbound label by centrifugation through a density medium. A signal-laden pellet formed at the periphery of the disk was analyzed to quantify the concentration of the target analyte. To demonstrate this technique, the inflammation biomarkers C-reactive protein and interleukin-6 were measured in spiked mouse plasma and human whole blood samples. On-disk processing (mixing, labeling, and separation) facilitated direct assays on 1-µL samples with a 15-minute sample-to-answer time, a limit of detection of less than 100 pmol/L, and a 10 percent coefficient of variation. The authors also used a unique single-channel multiplexing technique based on the differing sedimentation rates of bead populations of different size or density. The authors concluded that this portable microfluidic system is a promising method for rapid, inexpensive, and automated detection of multiple analytes directly from a drop of blood in a point-of-care setting.

Schaff UY, Sommer GJ. Whole blood immunoassay based on centrifugal bead sedimentation. Clin Chem. 2011;57:753–761.

Correspondence: Greg J. Sommer at gsommer@sandia.gov


Study of near-patient platelet function testing

Platelet dysfunction after cardiopulmonary bypass contributes significantly to microvascular bleeding and is associated with excessive blood loss, perioperative blood transfusion, and surgical re-exploration. The effects of cardiopulmonary bypass include dilution (and reduced concentration of platelets) and alteration of platelet structure and function. Use of platelet inhibitors in the perioperative period is associated with platelet inactivation. It is often necessary to administer blood products during excessive bleeding, but doing so is associated with significant morbidity and mortality, including lung injury, immunomodulation, fluid overload, and infection. Transfusion algorithms in common use include established laboratory and near-patient testing to guide clinicians in using blood products. Reduced platelet count is a universal trigger for administering platelet concentrate. Thromboelastography is also commonly performed, and maximum amplitude is used by some as a surrogate for clot strength and, consequently, platelet activity and fibrinogen level. Platelet function, per se, is not routinely assessed, mainly because of difficulty with testing and a lack of agreement between different modalities. Consequently, platelet concentrates are often administered when excessive bleeding is observed because platelet dysfunction is presumed to be implicated, despite normal or near-normal platelet counts. The authors conducted a prospective pilot study to assess whether a new point-of-care analyzer for multiple electrode platelet aggregometry, known as Multiplate (Dynabyte Medical and Instrumentation Laboratory), could provide sensitive, rapid testing of platelet function. The authors compared its performance against light transmission aggregometry, which is considered the gold standard method for assessing platelet function despite a lack of standardization and being difficult and time-consuming to perform. The authors also wanted to determine whether, and at which time points, Multiplate abnormalities would predict excessive bleeding and blood transfusion and might, therefore, be included in transfusion algorithms and guidelines. The authors studied platelet function, as measured by Multiplate and light transmission aggregometry, in 44 patients undergoing routine coronary artery surgery. They found that platelet aggregation, as measured by Multiplate, was reduced during and after cardiopulmonary bypass compared with baseline, with evidence of partial recovery by the time the patient was transferred to the ICU. In patients transfused with blood, platelet aggregation measured by Multiplate during chest closure was reduced with adenosine diphosphate (18 U versus 29 U; P=0.01) and thrombin receptor agonist peptide-6 (65 U versus 88 U; P=0.01) compared with patients not transfused. This suggests that Multiplate can detect platelet dysfunction in this setting.

Reece MJ, Klein AA, Salviz EA, et al. Near-patient platelet function testing in patients undergoing coronary artery surgery: a pilot study. Anaesthesia. 2011;66:97–103.

Correspondence: Dr. Andrew Klein at andrew.klein@papworth.nhs.uk


Erythromycin levels in critical care patients

Adequate nutrition, preferably via the enteral route, is important in critical illness. However, impaired gastric emptying leading to intolerance of nasogastric feeding occurs in approximately half of patients. This not only results in inadequate nutritional support but also constitutes a major risk factor for gastroesophageal reflux and aspiration. These complications adversely affect patient morbidity and mortality. Treatment with a prokinetic agent, either metoclopramide (a dopamine antagonist) or erythromycin (a motilin agonist), is usually regarded as first-line therapy. Although erythromycin is more effective than metoclopramide, the prokinetic effects of both agents diminish rapidly over seven days, limiting their long-term use. There are limited data on the mechanisms underlying tachyphylaxis during prokinetic drug use, but tachyphylaxis related to motilin receptor physiology is best studied. In vitro studies have demonstrated that motilin receptors are rapidly down-regulated or internalized during exposure to native hormone or agonists (for example, erythromycin), with progressive loss of prokinetic effect on re-exposure to the agent. In addition to modulating motilin receptor expression, erythromycin has different effects on gastrointestinal motor activity at different plasma concentrations. Low doses (40 mg) induce a propagated motor activity front starting in the antrum and migrating to the small intestine. High doses (350 to 1,000 mg) induce prolonged and strong but nonpropagated antral contractions. Therefore, plasma concentrations of erythromycin appear to play an important role in determining the type and duration of gastrointestinal motor activity. The authors hypothesized that the loss of clinical prokinetic effect observed in previous clinical trials may reflect high plasma drug concentrations early in treatment, which may favor the development of tachyphylaxis. They conducted an observational comparative study in a tertiary critical care unit to evaluate the relationship between plasma erythromycin concentrations and feeding outcomes in critically ill patients intolerant of enteral nutrition. For the study, 29 feed-intolerant (gastric residual volume greater than 250 mL), mechanically ventilated, critically ill patients received intravenous erythromycin, 200 mg twice daily, for feed intolerance. Plasma erythromycin concentrations were measured one and seven hours after drug administration on day one. Success of enteral feeding, defined as a six-hourly gastric residual volume of 250 mL or less with a feeding rate of 40 mL/hour or more, was recorded over seven days. At day seven, 38 percent (11 of 29) of patients were feed tolerant. Age, Acute Physiology and Chronic Health Evaluation scores, serum glucose concentrations, and creatinine clearance were comparable between successful and failed feeders. Plasma erythromycin concentrations at one and seven hours after drug administration were significantly lower in patients treated successfully than in treatment failures (one hour: 3.7±0.8 mg/L versus 7.0±1.0 mg/L, P=0.02; seven hours: 0.7±0.3 mg/L versus 2.8±0.6 mg/L, P=0.01). There was a negative correlation between the number of days to failure of feeding and the one-hour (r=–0.47; P=0.049) and seven-hour (r=–0.47; P=0.05) plasma erythromycin concentrations.
A one-hour plasma concentration of more than 4.6 mg/L had 72 percent sensitivity and 72 percent specificity, and a seven-hour concentration of 0.5 mg/L or more had 83 percent sensitivity and 72 percent specificity, in predicting loss of response to erythromycin. The authors concluded that in critically ill feed-intolerant patients, there is an inverse relationship between plasma erythromycin concentrations and the time to loss of clinical motor effect. This suggests that erythromycin binding to motilin receptors contributes to variations in the duration of prokinetic response. Using lower doses of erythromycin and tailoring the dose to plasma concentrations may help reduce tachyphylaxis.
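
For readers who want to see how a reported cutoff translates into test characteristics, the sketch below computes sensitivity and specificity for the one-hour 4.6 mg/L threshold on a small set of hypothetical patients. The patient data are invented for illustration; only the cutoff comes from the study.

# Hypothetical data for illustration; only the 4.6 mg/L cutoff comes from the study.
def sensitivity_specificity(levels, lost_response, cutoff):
    # "Test positive" = one-hour erythromycin level above the cutoff,
    # predicting eventual loss of prokinetic response.
    tp = sum(1 for c, lost in zip(levels, lost_response) if c > cutoff and lost)
    fn = sum(1 for c, lost in zip(levels, lost_response) if c <= cutoff and lost)
    tn = sum(1 for c, lost in zip(levels, lost_response) if c <= cutoff and not lost)
    fp = sum(1 for c, lost in zip(levels, lost_response) if c > cutoff and not lost)
    return tp / (tp + fn), tn / (tn + fp)

one_hour_levels = [3.1, 7.2, 4.2, 2.8, 6.4, 5.1, 8.1, 3.5]   # mg/L (hypothetical)
lost_response   = [False, True, True, False, True, False, True, False]
sens, spec = sensitivity_specificity(one_hour_levels, lost_response, cutoff=4.6)
print(f"Sensitivity {sens:.0%}, specificity {spec:.0%} at >4.6 mg/L")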

Nguyen NQ, Grgurinovich N, Bryant LK, et al. Plasma erythromycin concentrations predict feeding outcomes in critically ill patients with feed intolerance. Crit Care Med. 2011;39:868–871.

Correspondence: Nam Q. Nguyen at quoc.nguyen@health.sa.gov.au


Raising cutoff levels for fecal occult blood testing

Screening for colorectal cancer using guaiac-based fecal occult blood tests (G-FOBTs) has been shown to reduce colorectal cancer-related mortality. In recent years, a growing body of literature has lent support to the assertion that fecal immunochemical tests (FITs) are superior to G-FOBTs for colorectal cancer screening. This superiority implies not only higher participation rates and sensitivity for advanced neoplasia but also better reproducibility and quality control resulting from automated analysis and quantitative test output. Quantitative test output allows the threshold for defining a positive test to be adjusted. This is important because several recent studies comparing G-FOBTs and FITs have reported a lower specificity for FITs when a cutoff level of 50 to 100 ng hemoglobin/mL was used. Once such a test is applied in a colorectal cancer screening program, a lower cutoff level will result in more subjects being referred for colonoscopy and, because of the lower specificity, a higher number of futile colonoscopies. Higher FIT cutoff levels will decrease the strain on colonoscopy resources, but they might also lead to a greater number of curable colorectal cancers not being detected. This hypothesis can be tested using a study design in which all FIT-negative individuals undergo the reference test, that is, complete colonoscopy. However, in most population-based screening studies, only FIT-positive individuals undergo colonoscopy. Although these screening studies reflect the target population for screening, sensitivity cannot be calculated. Specificity can be calculated, but only indirectly and based on less accurate rare-disease assumptions. Moreover, these studies often have a low yield of colorectal cancers, which restricts the power to stratify the cancers by stage. When aiming to reduce colorectal cancer mortality, detecting early-stage cancers is obviously much more relevant than detecting late-stage cancers. In a referral population, the higher prevalence of colorectal cancers and their precursors allows quantitative FIT results to be stratified across different phases of the natural history of the disease. To this end, the authors assessed the effect of a higher cutoff level for a quantitative FIT on positivity rates and detection rates of curable, early-stage colorectal cancers and advanced adenomas in a colonoscopy-controlled population. Subjects older than 40 years who were scheduled for colonoscopy in one of five hospitals were asked to sample a single FIT (OC sensor) before colonoscopy. Screen-relevant neoplasia were defined as advanced adenoma or early-stage cancer (stages I and II). Positivity rate, sensitivity, and specificity were evaluated at increasing cutoff levels of 50 to 200 ng/mL. Of 2,145 individuals who underwent total colonoscopy, 79 patients were diagnosed with colorectal cancer, 38 of them with early-stage disease. Advanced adenomas were found in 236 patients. When the cutoff level was varied from more than 50 ng/mL to more than 200 ng/mL, positivity rates ranged from 16.5 percent to 10.2 percent. With increasing cutoff levels, sensitivity for early-stage colorectal cancers and screen-relevant neoplasia ranged from 84.2 percent to 78.9 percent and from 47.1 percent to 37.2 percent, respectively. The authors concluded that higher FIT cutoff levels substantially decrease test positivity rates with only limited effects on detection rates of early-stage colorectal cancers.
However, spectrum bias resulting in higher estimates of sensitivity than would be expected in a screening population may be present. Higher cutoff levels can reduce strain on colonoscopy capacity with only a modest decrease in sensitivity for curable cancers.
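
The trade-off the authors quantify, in which raising the cutoff lowers the positivity rate faster than it lowers sensitivity, can be expressed in a few lines of code. The sketch below uses invented FIT values and colonoscopy outcomes purely to illustrate the calculation; the numbers are not the study's data.

# Invented data for illustration; not the study's dataset.
def evaluate_cutoff(fit_values, has_neoplasia, cutoff_ng_ml):
    # Positivity rate = fraction of all subjects above the cutoff.
    # Sensitivity = fraction of subjects with screen-relevant neoplasia above the cutoff.
    positive = [v > cutoff_ng_ml for v in fit_values]
    positivity_rate = sum(positive) / len(fit_values)
    sensitivity = (sum(1 for pos, disease in zip(positive, has_neoplasia) if pos and disease)
                   / sum(has_neoplasia))
    return positivity_rate, sensitivity

fit_ng_ml = [10, 420, 75, 0, 160, 55, 900, 30, 220, 5]
neoplasia = [False, True, True, False, True, False, True, False, False, False]
for cutoff in (50, 100, 200):
    rate, sens = evaluate_cutoff(fit_ng_ml, neoplasia, cutoff)
    print(f"Cutoff >{cutoff} ng/mL: positivity {rate:.0%}, sensitivity {sens:.0%}")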

Terhaar sive Droste JS, Oort FA, van der Hulst RWM, et al. Higher fecal immunochemical test cutoff levels: lower positivity rates but still acceptable detection rates for early-stage colorectal cancers. Cancer Epidemiol Biomarkers Prev. 2011;20:272–280.

Correspondence: J. S. Terhaar sive Droste at js.terhaar@vumc.nl


Adjusting plasma ferritin concentrations for assessing iron deficiency

Plasma ferritin concentrations reflect the concentration of stored iron in the liver. Most investigators accept that serum ferritin concentrations of less than 12 µg/L in those younger than five years, and less than 12 or 15 µg/L in those older than five years, indicate iron deficiency. Furthermore, plasma ferritin concentrations respond well in iron-intervention studies, and ferritin was the principal indicator recommended by the World Health Organization (WHO) at a 2004 meeting convened to discuss the best way of assessing iron status in populations. However, ferritin is also a positive acute-phase protein (APP) that is elevated in the presence of infection or inflammation. Therefore, a WHO working group recommended that ferritin measurements be accompanied by analysis of one or more APPs to detect infection or inflammation. Yet there is uncertainty about how APPs should be used. Regression analyses of data from African-American infants and Guatemalan school-age children showed that serum ferritin correlated with APP concentrations but found poor positive predictive values. Some investigators have suggested raising ferritin thresholds to higher values in the presence of inflammation to discriminate iron deficiency, but others have suggested that such action is fraught with uncertainty. Likewise, excluding results from subjects with inflammation could bias the results if iron-deficient people are more prone to infection. It is also impractical if the number of people with elevated APP in a study population is high, such as in the Gambia, where more than 90 percent of apparently healthy infants have elevated APP concentrations. The authors assert that regression analysis is poorly predictive of ferritin concentrations because the increase in ferritin after infection follows a different pattern than that of C-reactive protein (CRP) or α1-acid glycoprotein (AGP). The authors conducted a study to estimate the increase in ferritin in 32 studies of apparently healthy people by using CRP and AGP, individually and in combination, and to calculate factors to remove the influence of inflammation from ferritin concentrations. They estimated the increase in ferritin associated with inflammation, defined as CRP greater than 5 mg/L, AGP greater than 1 g/L, or both. The 32 studies comprised infants (five studies), children (seven studies), men (four studies), and women (16 studies) (n=8,796 subjects). In two-group analyses using either CRP or AGP, the authors compared the ratios of log ferritin with and without inflammation in 30 studies. In addition, in 22 studies, the data allowed a comparison of ratios of log ferritin among four subgroups: reference (neither APP elevated), incubation (elevated CRP only), early convalescence (both CRP and AGP elevated), and late convalescence (elevated AGP only). The authors found that in the two-group analysis, inflammation increased ferritin by 49.6 percent (CRP) or 38.2 percent (AGP; both P<0.001). Elevated AGP was more common than elevated CRP, and more so in young people than in adults. In the four-group analysis, ferritin was 30 percent, 90 percent, and 36 percent (all P<0.001) higher in the incubation, early convalescence, and late convalescence subgroups, respectively, with corresponding correction factors of 0.77, 0.53, and 0.75. Overall, inflammation increased ferritin by approximately 30 percent and was associated with a 14 percent (confidence interval, seven to 21 percent) underestimation of iron deficiency.
The authors concluded that measures of both CRP and AGP are needed to estimate the full effect of inflammation and can be used to correct ferritin concentrations. Few differences were observed between age and gender subgroups.
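
Applied in practice, the reported correction amounts to multiplying a measured ferritin value by the factor for the subject's inflammation group before comparing it with an iron-deficiency threshold. The sketch below is a minimal illustration of that arithmetic, using the CRP and AGP cutoffs and correction factors quoted in the abstract; it is not the authors' published algorithm.

# Minimal illustration of the correction arithmetic; not the authors' published algorithm.
CORRECTION_FACTORS = {
    "reference": 1.00,            # CRP <= 5 mg/L and AGP <= 1 g/L
    "incubation": 0.77,           # elevated CRP only
    "early_convalescence": 0.53,  # both CRP and AGP elevated
    "late_convalescence": 0.75,   # elevated AGP only
}

def inflammation_group(crp_mg_l, agp_g_l):
    crp_high, agp_high = crp_mg_l > 5, agp_g_l > 1
    if crp_high and agp_high:
        return "early_convalescence"
    if crp_high:
        return "incubation"
    if agp_high:
        return "late_convalescence"
    return "reference"

def corrected_ferritin(ferritin_ug_l, crp_mg_l, agp_g_l):
    return ferritin_ug_l * CORRECTION_FACTORS[inflammation_group(crp_mg_l, agp_g_l)]

# Example: a ferritin of 20 ug/L with both APPs elevated corrects to about 10.6 ug/L,
# falling below a 12 ug/L iron-deficiency threshold that the raw value would have passed.
print(corrected_ferritin(20, crp_mg_l=12, agp_g_l=1.4))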

Thurnham DI, McCabe LD, Haldar S, et al. Adjusting plasma ferritin concentrations to remove the effects of subclinical inflammation in the assessment of iron deficiency: a meta-analysis. Am J Clin Nutr. 2010;92:546–555.

Correspondence: D. I. Thurnham at di.thurnham@ulster.ac.uk



Clinical pathology abstracts editors: Deborah Sesok-Pizzini, MD, MBA, associate professor, Department of Clinical Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, and medical director, Blood Bank and Transfusion Medicine, Children's Hospital of Philadelphia; Michael Bissell, MD, PhD, MPH, professor, Department of Pathology, Ohio State University, Columbus.