College of American Pathologists

Clinical Abstracts

September 2010

Michael Bissell, MD, PhD, MPH

Pediatric glucose parameters predictive of adult diabetes

Diabetes is a prevalent chronic disease in the United States, with approximately 19 million people having type 2 diabetes and another 54 million having impaired fasting glucose, or pre-diabetes. Type 2 diabetes is preceded by a pre-diabetic state linked to relative insulin resistance associated with mild increases in blood glucose levels, despite hyperinsulinemia. Several studies have indicated that hyperinsulinemia/insulin resistance is associated with cardiometabolic risk factors, including obesity, dyslipidemia, and hypertension, a constellation of disorders characteristic of metabolic syndrome. Previous findings have shown that elevations in insulin and glucose levels persist over time in children and adults. The authors have reported that relatively high or low fasting plasma insulin levels tend to remain unchanged eight years later and that significant clustering of obesity, hypertension, and dyslipidemia occurs primarily among those with persistently elevated levels. However, information is scant regarding whether adverse levels of glucose homeostasis variables (glucose, insulin, and insulin-resistance index) in childhood persist over time and predict pre-diabetes, type 2 diabetes, and other cardiometabolic risk factors in apparently healthy young adults. The authors examined this topic as part of the Bogalusa Heart Study, a biracial (black and white), community-based investigation of the evolution of cardiovascular disease risk beginning in childhood. They conducted a retrospective cohort study of normoglycemic (n=1,058), pre-diabetic (n=37), and type 2 diabetic (n=25) adults aged 19 to 39 years who were followed, on average, for 17 years since childhood. At least 50 percent of the people who ranked highest (top quintile) in childhood for glucose homeostasis variables maintained their high rank by being above the 60th percentile in adulthood.
In a multivariate model, the best predictors of adult glucose homeostasis variables were the change in BMI Z score from childhood to adulthood and childhood BMI Z score, followed by the corresponding childhood levels of glucose, insulin, and HOMA-IR. Furthermore, children in the top decile versus the remaining deciles for insulin and HOMA-IR were 2.85 and 2.55 times, respectively, more likely to develop pre-diabetes. Children in the top decile versus the remaining deciles for glucose, insulin, and HOMA-IR were 3.28, 5.54, and 5.84 times, respectively, more likely to develop diabetes, independent of change in BMI Z score, baseline BMI Z score, and total-to-high-density lipoprotein cholesterol ratio. In addition, children with adverse levels (top quintile versus the remainder) of glucose homeostasis variables displayed significantly higher prevalences of hyperglycemia, hypertriglyceridemia, metabolic syndrome, and other conditions. The authors concluded that adverse levels of glucose homeostasis variables in childhood not only persist into adulthood but also predict adult pre-diabetes and type 2 diabetes and relate to cardiometabolic risk factors.
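The insulin-resistance index used throughout is HOMA-IR (homeostasis model assessment of insulin resistance). The abstract does not state the formula, but the conventional calculation, sketched below, is the product of fasting glucose and fasting insulin divided by a scaling constant (the worked values are illustrative, not taken from the study):

```python
def homa_ir(glucose_mg_dl: float, insulin_uu_ml: float) -> float:
    """Conventional HOMA-IR: (fasting glucose [mg/dL] * fasting insulin [uU/mL]) / 405.

    Equivalent to (glucose [mmol/L] * insulin [uU/mL]) / 22.5.
    """
    return glucose_mg_dl * insulin_uu_ml / 405.0

# Illustrative normoglycemic values: glucose 90 mg/dL, insulin 10 uU/mL
print(round(homa_ir(90, 10), 2))  # 2.22
```

Cutoffs vary by population, but values much above this range are often taken to suggest insulin resistance.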

Nguyen QM, Srinivasan SR, Xu J-H, et al. Utility of childhood glucose homeostasis variables in predicting adult diabetes and related cardiometabolic risk factors: the Bogalusa Heart Study. Diabetes Care. 2010;33:670–675.

Correspondence: Gerald S. Berenson at berenson


Preanalytic variables in nutritional testing

Micronutrient deficiencies are a widespread public health problem in developing countries and, to a lesser extent, in the industrialized world. More than 2 billion people worldwide are at risk for vitamin A, iodine, or iron deficiency. Other micronutrient deficiencies that have received less attention but are also a public health concern include deficiencies of zinc, riboflavin, folate, vitamin B12, calcium, vitamin D, and selenium. To assess the prevalence of micronutrient deficiencies before or after an intervention aimed at improving nutritional status, or both, clinicians have frequently collected blood during nutrition surveys or clinical trials. Unfavorable environmental conditions, such as increased temperatures, a weak infrastructure, and a shortage of adequately trained staff in many developing countries, make it difficult to follow proper procedures for sample processing, shipping, and storage. Inconsistent access to cold packs, dry ice, centrifuges, refrigerators, and freezers, and electricity that is unstable or unavailable, particularly in remote locations, pose challenges to maintaining a proper cold chain and ensuring timely processing. Although such situations are rare in industrialized countries, inadvertent delays in sample processing or shipping and exposure of samples to increased temperatures can occur. These scenarios raise the question of whether analyzing samples exposed to unfavorable preanalytical conditions will produce valid results. Numerous reports have provided information on the stability of nutritional indicators, but only a few have studied analytes and conditions that are relevant for this investigation. Most reported studies have been limited in scope to a single analyte, a few analytes, or particular panels, such as antioxidant (pro)vitamins.
Only a few studies have evaluated the effects of preanalytical factors on a broader list of nutritional indicators, and all but one of these studies have had small sample sizes (12 people or fewer). Of the reports that have investigated delayed processing of whole blood, the most extreme delays were up to one day for whole blood stored at 32°C and up to seven days for whole blood stored at room temperature. Some reports have evaluated delays in the shipping or freezing of serum, or both. Most reports lack a clinical interpretation of any statistically significant changes, making it difficult to evaluate the relevance of the findings. The authors conducted a study to evaluate the stability of commonly measured nutritional biomarkers (representatives of fat- and water-soluble [pro]vitamins and iron-status indicators) under previously unstudied conditions that simulate extreme conditions encountered in a hot environment or a location with a poor infrastructure. To mimic delays in processing or shipping, the authors focused on two preanalytical conditions: a delay in the processing of whole blood stored at 32°C for up to three days and a delay in the freezing of serum samples stored at 11°C for up to 14 days. The authors used acceptability criteria based on combined analytical imprecision and intraindividual biologic variation to evaluate whether changes in concentrations due to unfavorable sample treatment were clinically acceptable. They found that clinically acceptable changes in concentration varied from three percent to 15 percent. Delayed whole blood processing did not negatively affect concentrations of carotenoids and vitamins B12, D, and E. However, the authors obtained clinically unacceptable changes for ferritin (+9%), soluble transferrin receptor (+5%), and folate (–30%) after one day, and for vitamin A (–10%) after three days. 
Delayed freezing of serum did not affect concentrations of ferritin, soluble transferrin receptor, carotenoids, and vitamins A, B12, and E. However, the authors obtained clinically unacceptable changes for vitamins C (–20%) and D (+7%) after seven days and for folate after 14 days (–22%). The authors concluded that despite substantial delays in whole blood processing or in the freezing of serum samples, most nutritional indicators showed remarkable stability. This information is important for designing field studies and when using residual samples subjected to suboptimal preanalytical factors.
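The acceptability criteria described above combine analytical imprecision with intraindividual (within-person) biologic variation. The abstract does not give the exact formula; a common construction in laboratory medicine is the reference change value (RCV), sketched here with illustrative coefficients of variation that are not taken from the study:

```python
import math

def reference_change_value(cv_analytical: float, cv_within_person: float,
                           z: float = 1.96) -> float:
    """RCV (percent): the smallest difference between two serial results that
    exceeds combined analytical and within-person variation at z-level confidence.

    RCV = sqrt(2) * z * sqrt(CVa^2 + CVi^2), with CVs given in percent.
    """
    return math.sqrt(2.0) * z * math.hypot(cv_analytical, cv_within_person)

# Illustrative example: analytical CV 3%, within-person CV 5%
print(round(reference_change_value(3.0, 5.0), 1))  # 16.2
```

The 3-to-15-percent acceptability range the authors report is consistent with criteria of this general form, since analytes with tighter biologic variation tolerate smaller preanalytical shifts.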

Drammeh BS, Schleicher RL, Pfeiffer CM, et al. Effects of delayed sample processing and freezing on serum concentrations of selected nutritional indicators. Clin Chem. 2008;54:1883–1891.

Correspondence: Christine M. Pfeiffer at


Utility of pneumococcal urinary antigen detection in COPD patients

Chronic obstructive pulmonary disease is defined physiologically by the presence of irreversible or partially reversible airway obstruction in patients with chronic bronchitis or emphysema, or both. Some patients with the disease are prone to frequent exacerbations, which are a major cause of morbidity and mortality and an important determinant of health-related quality of life. Bacteria cause a substantial proportion of exacerbations of chronic obstructive pulmonary disease (COPD). Bacteria are isolated from sputum in 40 to 60 percent of acute exacerbations of the disease. The three predominant bacterial species isolated are nontypeable Haemophilus influenzae, Moraxella catarrhalis, and Streptococcus pneumoniae. However, Gram-negative enteric bacilli and Pseudomonas spp. are also frequently isolated in patients with severe COPD. Several new lines of evidence demonstrate that bacterial isolation from sputum during acute exacerbation in many instances reflects a cause-effect relationship. Isolating S. pneumoniae from the sputum samples of chronic bronchitis patients provides only a probable etiological diagnosis of the exacerbation. In addition, pneumococcus is not usually isolated from blood cultures during exacerbation. A reliable method for distinguishing between colonization and clinical infection in COPD patients does not exist. However, an immunochromatographic (ICT) test has been developed to detect polysaccharide C (PnC) in urine samples, as well as in serum and pleural fluid samples. The test has proven rapid, sensitive, and specific for diagnosing pneumococcal pneumonia in adults. Furthermore, concentrating the urine by selective ultrafiltration may enhance the utility of this test because it increases sensitivity. The introduction of an S. pneumoniae urinary antigen assay in clinical practice has increased the rate of etiological diagnosis of pneumococcal pneumonia.
The authors conducted a study to assess the performance of the ICT method in diagnosing pneumococcal bronchial exacerbation of COPD by detecting specific urinary antigen. They assessed 46 patients with S. pneumoniae isolated in sputum culture (29 samples collected in a stable period and 17 during exacerbation). In the 29 patients with samples collected in a stable period, the antigen was detected in three cases (10.3 percent) using nonconcentrated urine (NCU) and 12 cases (41.4 percent) using concentrated urine (CU). For patients recruited during an exacerbation period, the antigen was detected in three cases (17.6 percent) using NCU and 13 cases (76.5 percent) using CU. To evaluate the specificity of the ICT test, the authors also tested 72 cases in which pneumococcus was not isolated in the sputum sample. ICT was positive in one NCU and nine CU samples from these patients. Having had at least one previous exacerbation (P=0.024), at least one exacerbation that required hospitalization (P=0.027), and a pneumonia episode the year before (P=0.010) were statistically significantly associated with the detection of specific antigen in CU. Using NCU, the only significant association was with pneumonia in the previous year (P=0.006). The authors concluded that a positive pneumococcal urinary antigen result in a COPD patient, in both bronchial exacerbation and pneumonia, should be evaluated with caution because the antigen detected could be related to a previous infectious episode.
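The detection rates quoted above follow directly from the raw counts. As a quick arithmetic check (group labels paraphrased from the abstract):

```python
# Antigen detection counts from the abstract: (positives, total tested).
groups = {
    "stable period, nonconcentrated urine": (3, 29),
    "stable period, concentrated urine": (12, 29),
    "exacerbation, nonconcentrated urine": (3, 17),
    "exacerbation, concentrated urine": (13, 17),
}
for label, (positive, total) in groups.items():
    print(f"{label}: {100 * positive / total:.1f}%")
# Prints 10.3%, 41.4%, 17.6%, and 76.5%, matching the reported figures.
```

The same arithmetic applied to the specificity arm (nine CU positives among 72 pneumococcus-negative cases) gives a false-positive rate of 12.5 percent for concentrated urine, which underlies the authors' caution.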

Andreo F, Ruiz-Manzano J, Prat C, et al. Utility of pneumococcal urinary antigen detection in diagnosing exacerbations in COPD patients. Respir Med. 2010;104:397–403.

Correspondence: José Dominguez at


A test panel for prostate cancer screening

Data have shown some of the shortcomings of prostate-specific antigen as a basis for biopsy decisions. The positive predictive value of an elevated PSA is in the 20 to 30 percent range, implying that a large number of men receive unnecessary biopsy. In addition, many of the cancers found by prostate-specific antigen (PSA) constitute overdiagnosis, such that treatment, which is associated with important morbidities, has little, if any, benefit. It is plausible that supplementing PSA with other markers during prostate cancer screening would reduce unnecessary biopsy and overdetection. Using a data set from the Goteborg section of the European Randomized Study of Screening for Prostate Cancer (ERSPC), the authors reported that a panel of four kallikrein markers—total PSA, free PSA, intact PSA, and kallikrein-related peptidase 2 (hK2)—was strongly predictive of biopsy outcome in men with elevated PSA at their first PSA test. The authors estimated that using the model to determine referral to biopsy would reduce biopsy rates by 573 per 1,000 men with elevated PSA and would miss only a small number of cancers (42 per 1,000). Moreover, most of the cancers missed were the low-grade, low-stage cancers most likely to constitute overdiagnosis. The authors subsequently replicated this finding on an independent cohort of unscreened men biopsied in the first round of ERSPC Rotterdam, with very similar results. It is reasonable to suppose that PSA screening history would affect the properties of predictive models for prostate cancer. Accordingly, the authors applied the kallikrein panel to men biopsied in subsequent rounds of ERSPC Goteborg to address whether it retained its value in men with a recent PSA test. They found similar increments in predictive accuracy: Use of the model to determine biopsy would lead to a sharp decrease in the number of biopsies and delay the diagnosis of only one high-grade cancer per 1,000 men with elevated PSA.
To determine whether this finding could be replicated, the authors applied the predictive model from the kallikrein panel to men with a normal PSA at initial screening who were subsequently biopsied in rounds two and three of the Rotterdam section of the ERSPC. A total of 1,501 previously screened men with elevated PSA underwent initial biopsy during rounds two and three of ERSPC Rotterdam, and 388 cancers were diagnosed. Biomarker levels were measured in serum samples taken before biopsy. The prediction model developed on the unscreened cohort was then applied, and predictions were compared with biopsy outcome. The authors found that the previously developed four-kallikrein prediction model had much higher predictive accuracy than PSA and age alone (area under the curve, 0.711 versus 0.585, and 0.713 versus 0.557 with and without digital rectal exam, respectively; both P<0.001). Similar statistically significant enhancements were seen for high-grade cancer. Applying the model with a cutoff of 20 percent cancer risk as the criterion for biopsy would reduce the biopsy rate by 362 for every 1,000 men with elevated PSA. Although diagnosis would be delayed for 47 cancers, these would be predominantly low stage and low grade (83 percent Gleason 6 T1c). The authors concluded that a panel of four kallikreins can help predict the result of initial biopsy in previously screened men with elevated PSA. Use of a statistical model based on the panel would substantially decrease rates of unnecessary biopsy.

Vickers AJ, Cronin AM, Roobol MJ, et al. A four-kallikrein panel predicts prostate cancer in men with recent screening: data from the European Randomized Study of Screening for Prostate Cancer, Rotterdam. Clin Cancer Res. 2010;16:3232–3238.

Correspondence: Andrew Vickers at


Cytogenetics of B lymphoblastic leukemia

B-cell acute lymphoblastic leukemia in children and adults is associated with diverse genetic abnormalities, including, but not limited to, balanced translocations. For example, the t(12;21)(p13;q22) involving TEL (ETV6)/RUNX1 (AML1) and 11q23 translocations involving the mixed-lineage leukemia (MLL) gene represent 22 percent and eight percent of cases, respectively. Other less commonly encountered balanced translocations, such as the t(1;19) involving E2A (TCF3)/PBX1 and the t(9;22) involving BCR/ABL, occur in approximately five percent and three percent of patients, respectively. Identifying and understanding the underlying genetic abnormalities in B-cell acute lymphoblastic leukemia (B-ALL) is important not only for making the diagnosis but also for predicting prognosis and, ultimately, understanding leukemogenesis. The prototypic balanced nonrandom chromosomal translocation t(8;21)(q22;q22), involving RUNX1T1 (formerly known as ETO [eight twenty one] or MTG8 [myeloid translocation gene]) on chromosome 8q22 and RUNX1 (also known as acute myeloid leukemia 1 [AML1]) on chromosome 21q22, is present in approximately five percent to 12 percent of patients with acute myeloid leukemia (AML). A rare variant of t(8;21)(q22;q22), the t(8;20)(q22;q13), has been reported in a case of T lymphoblastic leukemia. However, t(8;21)(q22;q22) has not yet been reported in B-ALL in the English literature. The authors analyzed the t(8;21)(q22;q22) in a 44-year-old female patient diagnosed with B-ALL based on morphology and immunophenotype. Conventional karyotyping revealed complex abnormalities, including t(8;21)(q22;q22), in 10 of 20 cells examined. Interphase and metaphase fluorescence in situ hybridization (FISH) showed a fused RUNX1/RUNX1T1 signal on derivative chromosome 8 but not on chromosome 21, confirming an unbalanced translocation between chromosomes 8q22 and 21q22 involving the RUNX1 and RUNX1T1 genes.
The authors believe this is the first case of B-ALL with t(8;21)(q22;q22) involving RUNX1 and RUNX1T1 genes.

Wang H-Y, Tirado CA. T(8;21)(q22;q22) translocation involving AML1 and ETO in B lymphoblastic leukemia. Hum Pathol. 2010;41:286–292.

Correspondence: Dr. Huan-You Wang at


Culture positivity in hospital-acquired pneumonia

The concept of health care-associated infections sits at the crossroads of community-acquired infections and hospital-acquired infections. Health care-associated pneumonia (HCAP) is an example of a health care-associated infection. Because HCAP has been described only relatively recently in the medical literature, many of the clinical issues investigated for community-acquired pneumonia and hospital-acquired pneumonia have not been evaluated for HCAP. Patients with HCAP have distinct risk factors predisposing them to infection with bacteria that potentially are antibiotic resistant, including methicillin-resistant Staphylococcus aureus and Pseudomonas aeruginosa. Therefore, patients with HCAP may require broader initial empirical antimicrobial therapy to ensure that appropriate treatment is administered. Studies of HCAP have focused on patients with microbiologically confirmed disease. Prior studies of septic shock, endocarditis, and community-acquired pneumonia have suggested that there may be differences between culture-positive and culture-negative patients with these infections. Therefore, the authors carried out a study with two goals. The first goal was to determine whether important demographic differences, including risk factors for HCAP, exist between culture-positive and culture-negative patients. The second goal was to compare the outcomes of these two groups to better understand the implications of initial antimicrobial therapy. The authors conducted a retrospective cohort study in which they examined adult patients with HCAP from Barnes-Jewish Hospital, a 1,200-bed urban teaching hospital. Over a three-year period, from January 2003 through December 2005, they identified 870 patients with HCAP, of whom 431 (49.5 percent) were culture positive. Among the remaining 439 patients, 290 (66.1 percent) had no respiratory cultures obtained, and 149 (33.9 percent) had no growth or nonpathogenic oral flora identified and were classified as culture negative.
The latter group was more likely to have received an initial antibiotic regimen (ceftriaxone ± azithromycin or moxifloxacin) targeting community-acquired pneumonia pathogens compared with culture-positive patients (71.8 versus 25.5 percent; P<0.001). Severity of illness, as assessed by admission to the ICU and mechanical ventilation, was significantly lower in culture-negative than culture-positive patients (ICU admittance, 12.1 versus 48.7 percent; P<0.001; mechanical ventilation, 6.7 versus 44.5 percent; P<0.001). In-hospital mortality and hospital length of stay were also significantly lower for culture-negative patients (mortality, 7.4 versus 24.6 percent; P<0.001; hospital length of stay, 6.7±7.4 days versus 12.1±11.7 days; P<0.001). The authors concluded that patients with culture-negative HCAP had lower severity of illness, hospital mortality, and hospital length of stay compared with culture-positive patients. These data suggest that patients with culture-negative HCAP differ substantially from patients with culture-positive HCAP.

Labelle AJ, Arnold H, Reichley RM, et al. Comparison of culture-positive and culture-negative health-care-associated pneumonia. Chest. 2010;137:1130–1137.

Correspondence: Dr. Marin H. Kollef at


Cost-effectiveness of urine dipsticks in well-child care

Screening dipstick urinalyses are still being performed on school-aged children, even though this practice is no longer recommended by the American Academy of Pediatrics. Supporting the academy’s viewpoint are multiple large-scale studies of healthy schoolchildren that have demonstrated a low incidence of chronic kidney disease (CKD) in this population. Early detection of CKD in asymptomatic children does not appear to alter disease outcome, making dipstick urinalysis an unbeneficial screening tool for this group. The high rate of false-positive screens, and of true-positive screens for benign conditions, such as orthostatic proteinuria, results in further testing, generating additional costs and anxiety for patients and families. The decision to perform routine dipstick urinalysis rests with the primary care practitioner. Therefore, the authors sought to evaluate the cost-effectiveness of dipstick urinalysis from the perspective of such physicians. The authors hypothesized that routine urine dipstick is not cost-effective, which aligns with the updated American Academy of Pediatrics guidelines. The authors used decision analysis to model a screening dipstick urinalysis strategy relative to a no-screening strategy. They derived data on the incidence of hematuria and proteinuria in children from published reports of large cohorts of school-aged children and estimated direct costs from the perspective of the primary care practitioner. The measure of effectiveness was the rate of diagnoses of CKD. The authors calculated an incremental cost-effectiveness ratio. They found that the expected costs and effectiveness for the no-screening strategy were zero because no resources were used and no cases of CKD were diagnosed. The screening strategy involved a cost per dipstick of $3.05.
Accounting for true-positive and false-positive initial screens, 14.2 percent of the patients required a second dipstick as per typical clinical care, bringing the expected cost of the screening strategy to $3.47 per patient. In the screening strategy, one case of CKD was diagnosed per 800 children screened, and the incremental cost-effectiveness ratio was $2,779.50 per case diagnosed. The authors concluded that while urine dipstick is inexpensive, it is a poor screening test for CKD and a cost-ineffective procedure for the primary care provider. These data support the change in the American Academy of Pediatrics’ guidelines on the use of screening dipstick urinalysis. Clinicians must consider the cost-effectiveness of preventive care procedures to make better use of available resources.
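The screening-arm figures can be reproduced from the numbers given; the sketch below recomputes the expected cost per patient and the incremental cost-effectiveness ratio (the recomputed values differ from the published ones by a few dollars, presumably because of intermediate rounding in the original analysis):

```python
cost_per_dipstick = 3.05       # dollars per test, from the abstract
repeat_fraction = 0.142        # 14.2% of patients need a second dipstick
children_per_ckd_case = 800    # one CKD diagnosis per 800 children screened

# Expected cost per child in the screening strategy
expected_cost = cost_per_dipstick * (1 + repeat_fraction)
print(f"${expected_cost:.2f}")  # $3.48 (the abstract reports $3.47)

# ICER versus no screening, which has zero cost and zero cases diagnosed
icer = expected_cost * children_per_ckd_case
print(f"${icer:.2f} per case diagnosed")  # $2786.48 (abstract: $2,779.50)
```

Because the no-screening comparator has zero cost and zero effectiveness, the ICER reduces to the screening strategy's cost per case diagnosed.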

Sekhar DL, Wang L, Hollenbeak CS, et al. A cost-effectiveness analysis of screening urine dipsticks in well-child care. Pediatrics. 2010;125:660–663.

Correspondence: Dr. Deepa L. Sekhar at


Clinical pathology abstracts editor: Michael Bissell, MD, PhD, MPH, professor, Department of Pathology, Ohio State University, Columbus.