
Clinical Abstracts

June 2007

Editor:
Michael Bissell, MD, PhD

Reticulocyte hemoglobin content as a screening test for iron deficiency
Acid-base implications of free water distribution
Reduction in red blood cell transfusions among preterm infants
Three-dimensional image analysis of peripheral blood smears
Genetic variation in C-reactive protein
Correlating multiple genes with disease risk
Use of an automated closed fluid-management device

• Reticulocyte hemoglobin content as a screening test for iron deficiency

The lack of a simple and reliable screening tool to detect iron deficiency has, in part, made this condition difficult to eradicate. A screening approach for iron deficiency based on reticulocyte analysis is appealing for its consistency in various biological states, direct real-time assessment of iron metabolism, and ease of collection. The optimal reticulocyte hemoglobin content (CHr) threshold for predicting iron deficiency in healthy children has not been determined prospectively, and CHr has yet to be compared with hemoglobin as a screening tool in the pediatric population. The authors conducted a study to establish an optimal CHr threshold for detecting iron deficiency without anemia in nine- to 12-month-old infants and to compare CHr with hemoglobin in screening for iron deficiency in this population. A secondary objective of the study was to explore the association between CHr and subsequent development of anemia. The authors chose the nine- to 12-month-old age group because it is already routinely screened and is at particular risk for iron deficiency and its consequences. The prospective observational cohort study involved 202 healthy nine- to 12-month-old infants from an urban, hospital-based, primary care clinic in Boston. The infants were screened for iron deficiency between June 2000 and April 2003 and followed up for a median of 5.6 months. The main outcome measures were iron deficiency (transferrin saturation, less than 10%) and anemia (hemoglobin, less than 11 g/dL). Of the 202 infants enrolled, 23 (11.4%) had iron deficiency and six (3.0%) had iron deficiency and anemia. Iron-deficient and non-iron-deficient infants had significantly different values for all measured hematological and biochemical markers of iron deficiency. The optimal CHr cutoff for detecting iron deficiency was 27.5 pg (sensitivity, 83%; specificity, 72%). A hemoglobin level of less than 11 g/dL resulted in a sensitivity of 26 percent and a specificity of 95 percent. Reticulocyte hemoglobin content was more accurate overall than hemoglobin for detecting iron deficiency (area under the receiver operating characteristic curve, 0.85 versus 0.73; P=0.007). A CHr of less than 27.5 pg without anemia at initial screening was associated with subsequent anemia when screening was repeated in the second year of life (risk ratio, 9.1; 95% confidence interval, 1.04–78.9; P=0.01). The authors concluded that a CHr of less than 27.5 pg is a more accurate hematological indicator of iron deficiency than a hemoglobin of less than 11 g/dL in healthy nine- to 12-month-old infants. Additional studies are warranted to determine whether CHr should be the preferred screening tool for the early detection of iron deficiency in infants.
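
As an illustration of the screening arithmetic reported above, the following Python sketch computes sensitivity and specificity for a "low CHr indicates iron deficiency" rule at the 27.5-pg cutoff. The data are hypothetical, not the study's; the definitions are the standard ones.

import numpy as np

def sensitivity_specificity(marker, disease, cutoff):
    # Evaluate a "low marker value indicates disease" rule at a given cutoff.
    test_positive = marker < cutoff
    tp = np.sum(test_positive & disease)     # true positives
    fn = np.sum(~test_positive & disease)    # false negatives
    tn = np.sum(~test_positive & ~disease)   # true negatives
    fp = np.sum(test_positive & ~disease)    # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical CHr values (pg) and iron-deficiency status (the study's
# reference standard was transferrin saturation less than 10%)
chr_pg = np.array([30.1, 26.4, 28.8, 25.9, 27.9, 31.2])
deficient = np.array([False, True, False, True, True, False])
sens, spec = sensitivity_specificity(chr_pg, deficient, cutoff=27.5)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")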

Ullrich C, Wu A, Armsby C, et al. Screening healthy infants for iron deficiency using reticulocyte hemoglobin content. JAMA. 2005;294:924–930.

Reprints: Dr. Christina Ullrich, Dept. of Pediatric Oncology, Dana Farber Cancer Institute, 44 Binney St., Boston, MA 02115; christina.ullrich@childrens.harvard.edu

• Acid-base implications of free water distribution

Concentrational alkalosis and dilutional acidosis have been defined as the removal and addition of free water, respectively. They have also been defined, in the context of clinical disease, as the loss of an extracellular-type fluid (a solution with a substantial sodium concentration) whose bicarbonate concentration differs from that of normal extracellular fluid. In the context of fluid administration, they have been defined as the infusion of an extracellular-type replacement fluid whose bicarbonate or bicarbonate-like anion concentration differs from that of normal extracellular fluid. In clinical patients, compensatory responses to volume or electrolyte imbalances are often cited as an important part of the syndrome. These categorizations involve completely different mechanisms and often coexist in a single critically ill patient. A small fraction of water molecules dissociates into hydrogen cations and hydroxyl anions. At 24°C, neutral free water has a pH of 7.0, and at 38°C, a pH of 6.8. Compared with plasma, which has a pH of 7.4, water is a weak acid. Adding free water to plasma, blood, or a patient should therefore have an acidifying effect, and removing it should have an alkalinizing effect. The authors conducted a study to evaluate the acid-base effect of removing free water from, or adding it to, plasma samples in vitro. This information would be useful in interpreting the effect of changes in free water in patients. For the study, plasma samples from goats were evaporated in a tonometer to 80 percent of baseline volume or hydrated by adding distilled water to 120 percent of baseline volume. The authors measured the pH and partial pressure of carbon dioxide and the sodium, potassium, ionized calcium, chloride, lactate, phosphorus, albumin, and total protein concentrations. They calculated actual base excess (ABE), standard bicarbonate, anion gap, strong ion difference, strong ion gap, unmeasured anions, and the effects of sodium, chloride, phosphate, and albumin changes on ABE. Most parameters changed 20 percent in proportion to the magnitude of dehydration or hydration. Bicarbonate concentration, however, increased only 11 percent in the evaporation trial and decreased only two percent in the hydration trial. The evaporation trial was associated with a mild, but significant, metabolic alkalotic effect (ABE increased 3.2 mM/L), whereas the hydration trial was associated with a slight, insignificant metabolic acidotic effect (ABE decreased only 0.6 mM/L). The calculated free water ABE effect (change in sodium concentration) was offset by opposite changes in the calculated chloride, lactate, phosphate, and albumin ABE effects.
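
The pure concentration effect described above follows from conservation of solute mass: when only free water is removed or added, concentration scales inversely with volume. A minimal Python sketch of that arithmetic, using an illustrative (not study-derived) sodium value:

def concentration_after_volume_change(conc, volume_fraction):
    # Solute mass is fixed, so new concentration = old * (V_old / V_new).
    return conc / volume_fraction

sodium = 145.0  # mM/L, illustrative baseline only
print(concentration_after_volume_change(sodium, 0.8))  # evaporation to 80%: ~181 mM/L
print(concentration_after_volume_change(sodium, 1.2))  # hydration to 120%: ~121 mM/L

The study's observed changes of roughly 20 percent are in line with this idealized arithmetic, which predicts a 25 percent rise for evaporation to 80 percent of volume and a 17 percent fall for hydration to 120 percent.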

Haskins SC, Hopper K, Rezende ML. The acid-base impact of free water removal from, and addition to, plasma. J Lab Clin Med. 2006;147:114–120.

Reprints: Steve C. Haskins, Dept. of Surgical and Radiological Sciences, School of Veterinary Medicine, University of California, Tupper Hall, West Health Sciences Drive, Davis, CA 95616; schaskins@ucdavis.edu

• Reduction in red blood cell transfusions among preterm infants

Critically ill preterm infants who weigh 500 to 1,000 g at birth are among the most highly transfused groups of patients because they routinely experience anemia. Weekly phlebotomy loss among preterm infants during the first two weeks of life averages 10 to 30 percent of total blood volume (10–25 mL/kg). The fact that total blood volume removed is highly correlated with volume transfused strongly suggests a causal relationship, providing the rationale for developing strategies to decrease phlebotomy blood loss in the early postnatal period as a way of reducing red blood cell (RBC) transfusions among preterm infants. In this study, the authors hypothesized that extremely low birth weight (ELBW) premature infants treated with an umbilical artery catheter (UAC) attached to an in-line, ex vivo, point-of-care monitor capable of analyzing blood gases and sodium, potassium, and hematocrit levels would experience a 35 percent reduction in RBC volume transfused during the first two weeks of life. To address this hypothesis, the authors undertook a two-center, randomized, open, controlled, clinical trial with an equal number of infants assigned to routine care or to care using an in-line monitor. They uniformly applied standardized RBC transfusion criteria and blood administration procedures. The main outcome measures were total volume and number of RBC transfusions during the first two weeks of life and total volume of blood removed for laboratory testing. The trial was terminated prematurely when one center's neonatal intensive care unit changed its standard method of laboratory testing. In the first two weeks of life for ELBW preterm infants, the authors noted a nonsignificant 17 percent lower cumulative RBC transfusion volume in the monitor group (n=46) compared with the control group (n=47). However, data from the first week only (the period of greater catheter use) demonstrated a significant 33 percent lower cumulative RBC transfusion volume in the monitor group. Cumulative phlebotomy loss was approximately 25 percent less in the monitor group throughout the two-week study period. There was no difference between groups in rates of neonatal mortality, morbidity, or neurodevelopmental outcome at 18 to 24 months. The authors concluded that, as long as an umbilical artery catheter is available for blood sampling with an in-line blood gas and chemistry monitor, significant reductions in neonatal RBC transfusions can be achieved. The patients most likely to benefit from the use of a monitor are the smallest, most critically ill newborns.
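
To make the phlebotomy figures concrete, here is a small Python sketch relating the 10–25 mL/kg weekly loss to fractional blood volume; the assumed neonatal blood volume of 90 mL/kg is a typical textbook value, not a figure from the study:

def phlebotomy_fraction(loss_ml_per_kg, blood_volume_ml_per_kg=90.0):
    # Fraction of total blood volume removed; 90 mL/kg is an assumed
    # typical neonatal blood volume, not study data.
    return loss_ml_per_kg / blood_volume_ml_per_kg

for loss in (10, 25):
    print(f"{loss} mL/kg is about {phlebotomy_fraction(loss):.0%} of total blood volume")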

Widness JA, Madan A, Grindeanu LA, et al. Reduction in red blood cell transfusions among preterm infants: results of a randomized trial with an in-line blood gas and chemistry monitor. Pediatrics. 2005;115:1299–1306.

Reprints: Dr. John A. Widness, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, 8807 JPP, Iowa City, IA 52242-1083; john-widness@uiowa.edu

• Three-dimensional image analysis of peripheral blood smears

It is possible to detect acute infection/inflammation and quantitate its intensity using a simple slide test. This technology might be suitable for clinics that do not have access to low-cost, real-time laboratory testing. The authors described an enhancement to such a system that permits automatic analysis of such slides. This technology could be applied to a point-of-care system to detect and quantitate an acute-phase response within a couple of minutes at a reasonable cost. The authors obtained peripheral venous blood from children with acute inflammation/infection and examined it using an automatic three-dimensional image analyzer to count white blood cells and assess the degree of erythrocyte aggregation, a marker of the humoral acute-phase response. The authors included 66 children with acute bacterial infections and 59 with nonbacterial inflammation/infection (mean age, 4.3±3.9 years and 4.2±3.7 years, respectively; P=0.91). The percentages of correct classifications based on discriminant analysis in predicting bacterial versus nonbacterial inflammation/infection were 61.3 percent using white blood cell count, 64.5 percent using percentage of granulocytes, 61.6 percent using degree of erythrocyte aggregation, and 59.2 percent using number of leukocytes counted on unstained slides. Receiver operating characteristic curve analysis yielded an area under the curve of 0.714 (P<0.001) for number of granulocytes, 0.699 (P<0.001) for white blood cell count, 0.685 (P<0.001) for number of leukocytes on the slides, and 0.685 (P=0.001) for degree of erythrocyte aggregation. The correlation between the number of leukocytes counted by the electronic cell analyzer and the number of cells counted on the slides was highly significant (r=0.85; P<0.001). The authors concluded that it is feasible to use an automatic three-dimensional image analyzer to reveal the different intensities of the acute-phase response in a group of children with acute bacterial infection versus a group with nonbacterial inflammation/infection. These findings might have potential application at the point of care.
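
The areas under the curve reported above can be computed with the rank-based (Mann-Whitney) formulation: the AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal Python sketch with hypothetical data:

import numpy as np

def roc_auc(scores, labels):
    # AUC = P(random positive scores above random negative); ties count 1/2.
    pos = scores[labels]
    neg = scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical marker values (e.g., granulocyte counts) for bacterial (True)
# versus nonbacterial (False) cases
scores = np.array([12.0, 8.5, 9.8, 3.1, 5.7, 6.2])
labels = np.array([True, False, True, False, True, False])
print(roc_auc(scores, labels))  # ~0.78 for this toy data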

Urbach J, Rogowski O, Shapira I, et al. Automatic 3-dimensional visualization of peripheral blood slides: a new approach for the detection of infection/inflammation at the point of care. Arch Pathol Lab Med. 2005;129:645–650.

Reprints: Dr. Itzhak Shapira, Tel-Aviv Sourasky Medical Center, 6 Weizman St., Tel Aviv 64239 Israel; shapiraiz@tasmc.health.gov.il

• Genetic variation in C-reactive protein

Elevated C-reactive protein has been identified as a biomarker of cardiovascular disease risk that is supplementary to traditional risk factors such as body mass index, smoking, diabetes, and cholesterol levels. Family and twin studies suggest that additive genetic factors account for as much as 40 percent of the variance in plasma C-reactive protein (CRP) levels, but the specific genetic determinants remain poorly characterized. Several human genetic association studies of CRP have assessed the relationship between genotype and plasma CRP levels or disease risk. However, these studies generally have suffered from a lack of functional data on DNA-sequence variation. Polymorphisms reported elsewhere as being associated with CRP levels include a polymorphism in the promoter region, a synonymous polymorphism in exon 2, and two polymorphisms in the 3’ UTR. These studies have been restricted to a small number of polymorphisms within the CRP gene locus, without consideration of the patterns of variation across the locus as a whole. Therefore, although the reported associations suggest that variation at the CRP locus is significantly correlated with basal CRP levels, the observed associations could be attributable to other polymorphisms in strong linkage disequilibrium with the reported polymorphisms because none of the past work demonstrated functional differences between alleles. Prior analyses have also been largely restricted to European populations, and no study has included more than 100 African-Americans. To explore whether common genetic variants at the human CRP gene locus influence plasma CRP, the authors used an approach that starts with genomic resequencing to identify all common patterns of nucleotide diversity and to define common haplotypes in European American and African-American populations. They then genotyped a set of polymorphisms representative of the identified patterns of common variation in a much larger population-based study sample and identified associations with specific haplotypes. Finally, they experimentally validated the observed associations with polymorphisms in the promoter region using in vitro CRP promoter analysis in a human hepatocyte cell line. Of the common single-nucleotide polymorphisms (SNPs) identified, several in the CRP promoter region were strongly associated with CRP levels in the large cohort study of cardiovascular risk in European American and African-American young adults. The authors also demonstrated the functional importance of these SNPs in vitro.
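
The following Python sketch shows the kind of single-SNP association test such studies run, regressing log-transformed plasma CRP on allele dosage with ordinary least squares; the data are hypothetical, and this is a generic illustration rather than the authors' exact model:

import numpy as np
from scipy import stats

# Hypothetical data: copies of the minor allele (0/1/2) at one promoter SNP
# and log-transformed plasma CRP (mg/L) for ten subjects
dosage = np.array([0, 1, 2, 0, 1, 2, 0, 0, 1, 2])
log_crp = np.array([0.1, 0.4, 0.9, 0.0, 0.5, 1.1, 0.2, -0.1, 0.3, 0.8])

# The slope estimates the per-allele effect on log CRP
slope, intercept, r, p, se = stats.linregress(dosage, log_crp)
print(f"per-allele effect = {slope:.2f} log(mg/L), P = {p:.3g}")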

Carlson CS, Aldred SF, Lee PK, et al. Polymorphisms within the C-reactive protein (CRP) promoter region are associated with plasma CRP levels. Am J Hum Genet. 2005;77:64–77.

Reprints: Dr. Christopher Carlson, Health Sciences Center, University of Washington, Room K-322, 1705 N.E. Pacific St., Seattle, WA 98195-7730; csc47@u.washington.edu

• Correlating multiple genes with disease risk

The genetic basis of common human diseases is widely studied by evaluating the association of genetic variants with disease status, as in candidate-gene case-control studies. The power of this approach depends on the effect size of the disease locus (typically considered in terms of an odds ratio), the frequency of the disease alleles, the frequency of the marker alleles, and the magnitude of linkage disequilibrium between the marker and disease loci. Although there is debate about whether common diseases are caused by many rare mutations or a few common genetic variants, it is clear that allelic heterogeneity will dilute the power to detect genetic associations. Furthermore, when multiple genes are functionally related (for instance, when their products are related through a cascade of enzymatic reactions), mutations at any of several genes could lead to disease. Moreover, it may not be unusual for genes in a functional pathway to interact in complex ways, given evidence of feedback loops and compensatory enzymatic activities among the protein products of biosynthesis pathways. The genetic basis of many common human diseases is therefore expected to be highly heterogeneous, with multiple causative loci and multiple alleles at some of those loci. Analyzing the association of disease with one genetic marker at a time can be a weak approach because of relatively small genetic effects and the need to correct for multiple tests. Testing the simultaneous effects of multiple markers with multivariate statistics might improve power, but such tests also lose power when there are many markers because of the many degrees of freedom. To overcome some of the limitations of statistical methods for case-control studies of candidate genes, the authors developed a new class of nonparametric statistics that can test the association of multiple markers with disease simultaneously, using only a single degree of freedom. The authors’ approach, which is based on U-statistics, first measures a score over all markers for pairs of subjects and then compares the averages of those scores between cases and controls. Genetic scoring for a pair of subjects is measured by a “kernel” function, which the authors allow to be fairly general; they provide guidelines on how to choose a kernel for different types of genetic effects. Their global statistic has only one degree of freedom and achieves its greatest power advantage when the contrasts of average genotype scores between cases and controls are in the same direction across multiple markers. Simulations illustrate that the proposed methods have the anticipated type I error rate and can be more powerful than standard methods. Applying the methods to a study of candidate genes for prostate cancer illustrates their potential merits and provides guidelines for interpreting the results.
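
The following Python sketch illustrates the general idea of the pairwise-kernel approach described above; it is not the authors' exact statistic. A toy allele-matching kernel is summed over markers for every pair of subjects, the average pairwise score among cases is contrasted with that among controls, and significance is assessed by permutation (giving the single-degree-of-freedom flavor of the global test):

import itertools
import numpy as np

def allele_match_kernel(gi, gj):
    # Toy kernel: similarity of two genotype vectors (0/1/2 allele dosages),
    # summed over markers; identical genotypes score highest.
    return np.sum(2 - np.abs(gi - gj))

def mean_pairwise_score(genos):
    pairs = itertools.combinations(range(len(genos)), 2)
    return np.mean([allele_match_kernel(genos[i], genos[j]) for i, j in pairs])

def u_statistic(genotypes, is_case):
    # Contrast of average pairwise kernel scores: cases versus controls.
    return (mean_pairwise_score(genotypes[is_case])
            - mean_pairwise_score(genotypes[~is_case]))

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(40, 10))  # 40 subjects, 10 markers
is_case = np.repeat([True, False], 20)

observed = u_statistic(genotypes, is_case)
perms = [u_statistic(genotypes, rng.permutation(is_case)) for _ in range(999)]
p = (1 + np.sum(np.abs(perms) >= abs(observed))) / 1000
print(f"U = {observed:.2f}, permutation P = {p:.3f}")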

Schaid DJ, McDonnell SK, Hebbring SJ, et al. Nonparametric tests of association of multiple genes with human disease. Am J Hum Genet. 2005;76:780–793.

Reprints: Dr. Daniel J. Schaid, Dept. of Health Sciences Research, Harwick 7, Mayo Clinic, 200 First St. SW, Rochester, MN 55905; schaid@mayo.edu

• Use of an automated closed fluid-management device

Cryopreserved autologous hematopoietic grafts, usually peripheral blood progenitor cells (PBPCs), are reinfused routinely to reduce the duration of chemotherapy-induced cytopenia. However, a high frequency of adverse events is associated with the infusion of thawed cells. These adverse effects are due in part to dimethyl sulfoxide (DMSO), which is used as a cryoprotectant, but also to free hemoglobin and cellular debris produced when cryopreserved cell suspensions are thawed. It is essential to measure the viability and number of PBPCs in the infused cell product, a level of quality control that cannot easily be achieved when cell products are thawed at the bedside. The authors recently reported preclinical results obtained with a new benchtop device (CytoMate, Baxter Oncology). The user-definable system includes a spinning membrane that filters cells against a counterflow buffer circulation and is connected to different bags in a closed system. The process is largely automated and standardized; the operator, however, can program the device and thereby choose the efficacy of the washing step. Depending on the setup of the system, recovery of CD34+ progenitors ranges from 69 to 99 percent, with DMSO elimination correspondingly ranging from 99 down to 95 percent. Comparable results were recently reported with cord blood. For this study, 304 patients were treated with intensified chemotherapy and autologous transplantation at a single institution. Fifty-four of them received washed cell products, processed with the CytoMate, because three or more bags were to be reinfused. The performance of the device was similar to previously reported results, with greater than 75 percent CD34+ cell recovery. Neutrophil and platelet recoveries were similar in the patients who received washed cells and in those whose cell products were thawed extemporaneously at the bedside. Adverse events typically reported after DMSO infusion were significantly less frequent and less severe in patients who received washed cells. Finally, the nursing staff on the transplant ward reported a decreased workload and a more satisfactory procedure when infusing washed cell products. The authors concluded that the CytoMate device significantly reduces DMSO infusion, with a diminished frequency and severity of immediate side effects, and does not compromise neutrophil or platelet engraftment.
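
To illustrate why the reported DMSO-elimination percentages matter clinically, here is a small Python sketch of the residual DMSO dose after washing; the 10 percent DMSO concentration is a typical cryopreservation figure assumed for illustration, not data from the study:

def residual_dmso_ml(graft_volume_ml, dmso_fraction=0.10, elimination=0.97):
    # DMSO remaining after a wash that removes the given fraction.
    return graft_volume_ml * dmso_fraction * (1 - elimination)

# A 100-mL thawed graft at 10% DMSO, washed with 97% elimination,
# leaves about 0.3 mL of DMSO to be infused
print(residual_dmso_ml(100.0))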

Lemarie C, Calmels B, Malenfant C, et al. Clinical experience with the delivery of thawed and washed autologous blood cells, with an automated closed fluid management device: CytoMate. Transfusion. 2005;45:737–742.

Reprints: Claude Lemarie, Centre de Thérapie Cellulaire et Génique, Institut Paoli-Calmettes, Centre Régional de Lutte Contre le Cancer Provence-Alpes-Côte d’Azur, 232, Boulevard Sainte-Marguerite, 13273 Marseille Cedex 9, France; lemariec@marseille.fnclcc.fr or thercell@marseille.fnclcc.fr


Dr. Bissell is Professor and Director of Clinical Services and Vice Chair, Department of Pathology, Ohio State University Medical Center, Columbus.

 
 