DNA methods for identifying pathogenic molds
White blood cell counts in malaria patients
Use of broad-spectrum antibiotics and spread of drug-resistant bacteria
B-type natriuretic peptide in dyspneic patients with atrial fibrillation
Multinational impact of the 1968 Hong Kong influenza pandemic
Interferon production in HIV infection
Secretion of growth hormone in patients with thalassemia
Oxidized LDL and PBMC activation in unstable angina
Aspergillus is the most prevalent infectious mold in immunocompromised patients. However, other molds, such as Fusarium spp. and Zygomycetes, are increasingly causing infection. Phenotypic methods can take weeks, a time frame that is not clinically useful. The ability to rapidly identify molds that cause invasive disease could lead to effective therapy being administered in a more timely fashion. Molecular methods for identifying pathogenic fungi have been validated for use in clinical settings. Ribosomal RNA (rRNA) genes, including the 28S gene (the 26S gene in yeasts), are conserved, accrue single nucleotide changes at a relatively low rate, and provide useful phylogenetic information. In eukaryotes, the rRNA operon includes internal transcribed spacer regions one and two (ITS1 and ITS2), which do not encode functional rRNAs or proteins. ITS DNA sequences may identify closely related isolates and species that cannot be readily distinguished using 26S or 28S rRNA gene sequences. The authors developed a rapid molecular method for identifying pathogenic molds based on the lengths and sequences of ITS1 and ITS2. Analysis of ITS1 and ITS2 DNA sequences unambiguously identified all molds tested to the species level; 44 species were represented in the analysis. The authors analyzed the D1/D2 hypervariable region of the 28S ribosomal gene and the ITS1 and ITS2 regions of the rRNA operon. They examined 201 strains, including 143 clinical isolates and 58 reference and type strains representing 43 recognized species and one possible new species. They then generated a phenotypically validated database of 118 diagnostic alleles. DNA length polymorphisms detected among ITS1 and ITS2 PCR products can differentiate 20 of 33 species of molds tested, and ITS DNA sequence analysis can identify all species tested. For 42 of 44 species tested, conspecific strains displayed greater than 99 percent sequence identity at ITS1 and ITS2; sequevars were detected in two species.
For all 44 species, identification by genotypic and traditional phenotypic methods was 100 percent concordant. Because dendrograms based on ITS sequence analysis are similar in topology to 28S-based trees, the authors concluded that ITS sequences provide phylogenetically valid information and can be used to identify clinically important molds. Additionally, this phenotypically validated database of ITS sequences will be useful for identifying new species of pathogenic molds.
Rakeman JL, Bui U, LaFe K, et al. Multilocus DNA sequence comparisons rapidly identify pathogenic molds. J Clin Microbiol. 2005;43:3324–3333.
Reprints: Brad T. Cookson, Depts. of Laboratory Medicine and Microbiology, University of Washington, Box 357110, Seattle, WA 98195; firstname.lastname@example.org
White blood cell counts during malaria are generally low to normal, a phenomenon that is widely thought to reflect localization of leukocytes in the spleen and other marginal pools rather than depletion or stasis. Leukocytosis is typically reported in a fraction of cases and may be associated with concurrent infections or poor prognosis, or both. However, few published studies have compared white blood cell (WBC) counts in malarial parasite-infected and -uninfected residents of regions in which malaria is endemic. Human malaria can be caused by any of several species of Plasmodium parasites that occur in various combinations in regions of endemicity. Plasmodium falciparum is responsible for almost all mortality attributed directly to malaria and is the focus of almost all research and intervention efforts. Compared with P. falciparum, however, Plasmodium vivax is the source of as much or more morbidity worldwide, despite its extremely low prevalence in sub-Saharan Africa. The tacit assumption that WBC counts are identical during infections with different Plasmodium species has been examined only minimally and tangentially. Although several methods for estimating the densities of blood-stage parasites by microscopy are in use, the most common is to count the number of asexual parasites seen relative to a given count of WBCs (usually 200 or 500 cells) and then to multiply the parasite:WBC ratio by 8,000, the assumed number of WBCs per microliter of blood. These estimates are used in clinical and epidemiological studies and to evaluate the effects of interventions on individuals and communities. The consequences of errors are strongly dependent on context but could be profound, as would be the case in studies that relate malarial symptoms or transmission to parasite densities. The authors conducted a study in which they counted WBCs in 4,697 people who presented to outpatient malaria clinics in Maesod, Tak Province, Thailand, and Iquitos, Peru, between May 28 and Aug. 
28, 1998, and between May 17 and July 9, 1999. At each site and in each year, WBC counts in the P. falciparum-infected patients were lower than those in the P. vivax-infected patients, which, in turn, were lower than those in the uninfected patients. In Thailand, one-sixth of the P. falciparum-infected patients had WBC counts lower than 4,000 cells/µL. The authors concluded that leukopenia may confound population studies that estimate parasite densities on the basis of an assumed WBC count of 8,000 cells/µL. For instance, in this study, use of the conventional approach would have overestimated average asexual parasite densities in the P. falciparum-infected patients in Thailand by nearly one-third.
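The conventional density calculation described above can be sketched as follows. The slide counts and the patient's true WBC value are hypothetical, chosen only to illustrate how the assumed 8,000 cells/µL inflates the estimate in a leukopenic patient:

```python
def estimated_density(parasites_counted, wbcs_counted, wbc_per_ul=8000):
    """Microscopy estimate of asexual parasite density (parasites/uL):
    the parasite:WBC ratio multiplied by an assumed WBC count per microliter."""
    return parasites_counted / wbcs_counted * wbc_per_ul

# Hypothetical slide: 50 parasites counted against 200 WBCs.
conventional = estimated_density(50, 200)                   # 2,000 parasites/uL
# If the patient's true WBC count is 6,000 cells/uL (leukopenia),
# the same parasite:WBC ratio implies a lower true density.
true_density = estimated_density(50, 200, wbc_per_ul=6000)  # 1,500 parasites/uL
overestimate = conventional / true_density - 1              # 0.33, one-third too high
```

With a true count of 6,000 cells/µL, the conventional estimate overshoots the true density by one-third, the same order of error the authors report for the P. falciparum-infected patients in Thailand.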
McKenzie FE, Prudhomme WA, Magill AJ, et al. White blood cell counts and malaria. J Infect Dis. 2005;192:323–330.
Reprints: Dr. F. Ellis McKenzie, Fogarty International Center, Bldg. 16, National Institutes of Health, Bethesda, MD 20892; email@example.com
The authors hypothesized that if physicians were provided with test results suggestive of nonbacterial infection at the time of the initial consultation, unnecessary prescribing of antibiotics might decrease. To test this hypothesis, they analyzed physicians’ antibiotic selection patterns in the presence or absence of C-reactive protein (CRP) and white blood cell (WBC) count test results, paying particular attention to the prescribing of newer broad-spectrum antibiotics. Acutely febrile new outpatients were randomized into two groups. Group one (147 patients) underwent CRP and WBC testing before the initial consultation (advance testing). Prescriptions were compared with those in group two (154 patients; no advance testing). Among patients with nonpneumonic acute respiratory tract infections, antibiotics were prescribed for 61 (58 percent) of the group one patients and 122 (91 percent) of the group two patients. Cefcapene pivoxil (a third-generation cephalosporin) and amoxicillin were the most frequently chosen drugs in groups one and two, respectively. The total number of prescriptions for newer, extended-spectrum antibiotics (cefcapene pivoxil and clarithromycin [an advanced macrolide]) was reduced by 25 percent in group one (41 versus 55 prescriptions), although these drugs accounted for a larger share of prescriptions because of the decrease in amoxicillin prescribing. In group one, cefcapene pivoxil was preferentially selected when WBC values were greater than 9 × 10⁹/L. Macrolides, mainly clarithromycin, were prescribed for patients without leukocytosis. Patient treatment outcome did not differ significantly between the two groups. The authors concluded that availability of CRP and WBC data during the initial consultation greatly reduced the prescribing of amoxicillin but had a lesser effect on newer, potent, broad-spectrum antibiotics.
Takemura Y, Ebisawa K, Kakoi H, et al. Antibiotic selection patterns in acutely febrile new outpatients with or without immediate testing for C reactive protein and leucocyte count. J Clin Pathol. 2005;58:729–733.
Reprints: Dr. Y. Takemura, Dept. of Laboratory Medicine, National Defense Medical College, 3-2 Namiki, Tokorozawa, Saitama 359-8513, Japan; firstname.lastname@example.org
B-type natriuretic peptide is a hormone derived from atrial and ventricular cardiomyocytes. Circulating levels of B-type natriuretic peptide (BNP) increase in conditions characterized by volume overload, including cardiac and renal failure. BNP levels also increase in patients with atrial fibrillation, even after controlling for demographic and clinical variables. However, the levels appear to decrease after successful cardioversion to sinus rhythm. The principal clinical indication for BNP measurement is to diagnose heart failure in patients presenting with acute dyspnea. Using a commercially available, automated, point-of-care device and a single prespecified cutoff of 100 pg/mL, the authors documented that measuring BNP on admission provides valuable diagnostic information in this patient group, complementary and superior to clinical evaluation. Atrial fibrillation is not uncommon in patients presenting with acute dyspnea, but it is not known whether permanent/paroxysmal atrial fibrillation significantly affects circulating levels of BNP in dyspneic patients regardless of whether they experience heart failure. Moreover, it is unknown whether permanent/paroxysmal atrial fibrillation affects the diagnostic performance of BNP in this setting and whether the conventional cutoff of 100 pg/mL provides optimal discrimination in patients with atrial fibrillation. To address these issues, the authors compared circulating levels and the diagnostic performance of BNP in patients who did and did not have atrial fibrillation in the Breathing Not Properly Multinational Study cohort. They studied 1,431 patients drawn from a cohort of patients (n=1,586) with acute dyspnea who had BNP levels measured on arrival. Patients were prospectively classified according to the presence or absence of permanent/paroxysmal atrial fibrillation. In total, 292 patients had permanent/paroxysmal atrial fibrillation. 
In patients who did not have heart failure, permanent/paroxysmal atrial fibrillation was associated with significantly higher BNP levels (P=0.001). Conversely, in patients with heart failure, BNP levels did not differ significantly between patients with and without atrial fibrillation (P=0.533). A BNP cutoff value of 100 pg/mL had specificities of 40 percent and 79 percent for the diagnosis of acute heart failure in patients with and without atrial fibrillation, respectively. The areas under the receiver-operating characteristic curves were 0.84 (95% confidence interval, 0.78 to 0.89) and 0.91 (95% confidence interval, 0.89 to 0.93) for patients with and without atrial fibrillation, respectively. The authors concluded that in patients without heart failure, but not in those with heart failure, atrial fibrillation is associated with higher circulating BNP levels, suggesting that a higher diagnostic threshold should be used in patients with atrial fibrillation.
Knudsen CW, Omland T, Clopton P, et al. Impact of atrial fibrillation on the diagnostic performance of B-type natriuretic peptide concentration in dyspneic patients: an analysis from the Breathing Not Properly Multinational Study. J Am Coll Cardiol. 2005; 46:838–844.
Reprints: Dr. Torbjorn Omland, Dept. of Medicine, Akershus University Hospital, University of Oslo, N-1474 Nordbyhagen, Norway; email@example.com
Annual influenza epidemics are sustained in the human population through gradual mutations in hemagglutinin and neuraminidase, the surface antigens of the virus. The genetic makeup of the influenza virus allows frequent minor antigenic drifts every two to five years in response to selection pressure to evade human immunity. Rarely, reassortment between human and nonhuman viruses results in larger shifts, in which a new virus subtype emerges and replaces the previously circulating virus. A new pandemic virus rapidly invades the human population, which has partial or no immunity, and may cause severe illness worldwide. Although the impact of influenza is not always greater during pandemics than during interpandemic periods, a shift in the age distribution of mortality toward younger age groups distinguishes pandemic from epidemic impact. The influenza virus responsible for the last pandemic, A/Hong Kong/68 (A/H3N2), was first isolated in Hong Kong in July 1968. The new A/H3N2 virus exhibited a shift in hemagglutinin but not in neuraminidase, and it replaced the A/H2N2 viruses that had been circulating in all countries since 1957. Despite rapid and extensive spread by international air travel, the impact of the virus was not the same in all geographical regions. A marked increase in mortality occurred in the United States during the first pandemic season (1968/1969), especially in people younger than 65 years old, but was not seen elsewhere. Conversely, in England, the second pandemic season (1969/1970) of A/H3N2 virus circulation proved to be more severe than the first. The reasons for the delayed severe impact in England are still not understood. Such a delay is counterintuitive because a novel virus introduced into a susceptible population should demonstrate decreasing impact over time as immunity increases. The authors analyzed monthly mortality data from six countries on four continents and reviewed published morbidity and virological studies to extend understanding of the Hong Kong A/H3N2 pandemic.
To explain the epidemiological patterns, they estimated the influenza-related excess mortality using national vital statistics by age for 1967 to 1978. Geographical and temporal pandemic patterns in mortality were compared with the genetic drift of the influenza viruses by analyzing hemagglutinin and neuraminidase sequences from GenBank. In North America, the majority of influenza-related deaths in 1968/1969 and 1969/1970 occurred during the first pandemic season (United States, 70 percent; Canada, 54 percent). Conversely, in Europe and Asia, the pattern was reversed: 70 percent of deaths occurred during the second pandemic season. The second pandemic season coincided with a drift in the neuraminidase antigen. The authors found a consistent pattern of mortality being delayed until the second pandemic season of A/H3N2 circulation in Europe and Asia. They hypothesized that this phenomenon may be explained by higher pre-existing neuraminidase immunity (from the A/H2N2 era) in Europe and Asia than in North America, combined with a subsequent drift in the neuraminidase antigen during 1969/1970.
Viboud C, Grais RF, Lafont BAP, et al. Multinational impact of the 1968 Hong Kong influenza pandemic: evidence for a smoldering pandemic. J Infect Dis. 2005;192:233–248.
Reprints: Dr. Cécile Viboud, Fogarty International Center, National Institutes of Health, 16 Center Drive, Bethesda, MD 20892; firstname.lastname@example.org
Innate immunity is the first line of defense against HIV infection. Type I interferons (IFNs) are important players in innate immunity because of their antiviral activity against HIV and because they enhance T-cell stimulation. Two main types of leukocytes are involved in type I IFN production. Monocytes produce IFN in response to Sendai virus and other enveloped viruses. In HIV infection, their IFN production decreases late in the course of disease and is not correlated with opportunistic infections. Natural IFN-producing cells are approximately 50 times less frequent in peripheral blood than are monocytes, but they can produce 100 times more IFN per cell. They produce type I IFN in vitro in response to a broad range of enveloped viruses, including HIV, and to naked viruses complexed with antibodies. These cells have been identified as plasmacytoid dendritic cells. HIV itself induces IFN secretion from plasmacytoid dendritic cells but not from myeloid dendritic cells. Natural IFN-producing cells are progressively lost during HIV infection in association with progression to disease and the occurrence of opportunistic infections and Kaposi’s sarcoma. Circulating plasmacytoid and myeloid dendritic cell counts, measured ex vivo by flow cytometry and rare-event analysis, are decreased in chronic HIV infection. Several studies have found a correlation between circulating plasmacytoid dendritic cell counts and IFN production from peripheral blood mononuclear cells (PBMCs) in vitro, as well as an inverse correlation with disease progression. However, IFN production in vitro has never been tested during primary infection. The authors longitudinally studied 26 patients during the primary stage of HIV infection. Fifteen patients received highly active antiretroviral therapy (HAART) for 12 months.
At the time of inclusion in the cohort, median type I IFN production in response to herpes simplex virus type 1 stimulation was dramatically impaired in PBMCs from HIV-infected patients compared with that in PBMCs from 31 uninfected donors (180 versus 800 IU/mL; P<0.0001). Median circulating plasmacytoid dendritic cell counts were also significantly decreased (7,300 versus 13,500 cells/mL; P=0.001). Twelve months later, IFN production returned to normal. The data suggested that HAART may help in recovering IFN production by plasmacytoid dendritic cells. The authors concluded that these data underline the potential for early antiretroviral treatment and IFN-α treatment to enhance viral control in a larger proportion of patients during the critical stage of primary infection.
Kamga I, Kahi S, Develioglu L, et al. Type I interferon production is profoundly and transiently impaired in primary HIV-1 infection. J Infect Dis. 2005;192:303–310.
Reprints: Dr. Anne Hosmalin, Dept. of Immunology, Institut Cochin, INSERM U 567, 27 rue du Fg St-Jacques, Bat G. Roussy, 8è étage, 75014 Paris, France; email@example.com
Frequent blood transfusions combined with desferrioxamine chelation therapy have significantly improved the survival rate for patients with beta-thalassemia major. Unfortunately, however, this treatment may lead to dysfunction of various organs as a result of hemosiderosis. Endocrine disorders, related to iron overload in the pituitary gland and other endocrine glands, are major problems in adolescent and adult thalassemic patients and include gonadotropin deficiency, growth hormone deficiency (GHD), hypothyroidism, hypoparathyroidism, impaired glucose tolerance, and diabetes mellitus. Growth retardation and short stature are common clinical features of thalassemic patients and may be related to endocrine and nonendocrine factors, such as dysfunction in growth hormone secretion, hypothyroidism, hypogonadism, delayed puberty, anemia, severely impaired liver function, and bone dysplasia. Reduced growth hormone secretion and low insulin-like growth factor-1 (IGF-1) concentrations are frequently found in growth-retarded thalassemic patients, and many thalassemic patients with short stature have a low growth hormone peak response to a stimulation test. However, the nature of the GH-IGF-1 axis defect in thalassemia remains unclear. Several studies have demonstrated that recombinant growth hormone treatment improves growth velocity in these patients, although response to treatment is variable and not predictable. The GH-IGF-1 axis must be reassessed in young adults with childhood-onset GHD after they attain their final height to select those who are candidates for replacement therapy as adults. To the authors’ knowledge, no data are available on retesting the GH-IGF-1 axis in adult thalassemic patients with childhood-onset GHD. Therefore, the authors conducted a study to investigate growth hormone secretion in adult thalassemic patients.
They performed an arginine plus growth hormone-releasing hormone stimulation test in 16 thalassemic patients (10 males, six females) with a mean age of 24.8±3.6 years. The cutoff level for growth hormone response was set at 9 µg/L, according to the literature. Ferritin, IGF-1, liver enzymes, and lipid levels were also determined. The authors found persisting GHD in three patients; one patient had borderline values (GH peak, 10.4 µg/L), and the others had a normal response. These results are in accordance with data on growth hormone retesting in adult patients with idiopathic partial childhood-onset GHD. The authors concluded that growth hormone status should be retested in adult thalassemic patients with childhood-onset GHD. If a diagnosis of adult GHD is established, doctors may want to consider growth hormone treatment because it could improve heart function and bone mineral density, which are frequently impaired in adult thalassemic patients.
La Rosa C, De Sanctis V, Mangiagli A, et al. Growth hormone secretion in adult patients with thalassaemia. Clin Endocrinol. 2005;62:667–671.
Reprints: Manuela Caruso-Nicoletti, Dept. of Pediatrics, Azienda Policlinico Università di Catania, Via Santa Sofia n° 78, 95123 Catania, Italy; firstname.lastname@example.org
There is increasing evidence that inflammation plays an important role in atherogenesis and might determine plaque vulnerability. Many of the genes involved in the acute inflammatory response that are pivotal in the atherogenic process are activated by nuclear factor-kappa B (NF-kB). NF-kB resides inactive, bound to the inhibitory protein-kappa B (I-kB), in the cytoplasm of many cell types, including T-lymphocytes, monocytes, macrophages, endothelial cells, and smooth muscle cells. Numerous stimuli, including cytokines and oxidants such as oxidized low-density lipoprotein (ox-LDL), alter I-kB, causing nuclear translocation of NF-kB. Recent data indicate that circulating levels of ox-LDL are high in acute coronary syndromes and, in particular, unstable angina. The mechanisms leading to this increase are unclear. However, it is known that plaque instability correlates with the location of macrophages, T cells, and mast cells within the plaque. Moreover, macrophage-rich plaques have been shown to contain a higher concentration of ox-LDL than macrophage-poor plaques and to be associated with elevated levels of ox-LDL in plasma. Taken together, this evidence suggests that plasma and plaque levels of ox-LDL may be correlated with atherosclerotic lesions’ vulnerability to rupture. The authors conducted a study to assess the role of plasma ox-LDL, lectin-like ox-LDL receptor-1 (LOX-1), and circulating NF-kB activation in patients with unstable angina. Levels of plasma ox-LDL and of circulating NF-kB in peripheral blood mononuclear cells (PBMCs), and in separated lymphocytes and monocytes, were measured in 27 control subjects, 29 stable angina patients, and 27 unstable angina patients. The authors also evaluated, through in vitro studies, the effect of ox-LDL and of sera derived from a subgroup of unstable angina patients and control subjects on monocytic NF-kB activation.
They found that the unstable and stable angina patients had higher levels of circulating ox-LDL and of NF-kB in PBMCs than did the control subjects (P<0.001). The increase in circulating NF-kB was mainly due to activation in monocytes. In the in vitro studies, ox-LDL dose-dependently increased the activation of NF-kB in monocytes, but not in lymphocytes, derived from healthy volunteers. This increase was related to the expression of LOX-1 on the monocytes. Incubation of monocytes with sera derived from the unstable angina patients induced a significant increase in NF-kB activation compared with incubation with sera derived from the control subjects. The authors concluded that the data suggest that activation of NF-kB in the monocytes of unstable angina patients is, at least in part, induced by circulating molecules such as ox-LDL, which has been found to be particularly elevated in unstable angina patients.
Cominacini L, Anselmi M, Garbin U, et al. Enhanced plasma levels of oxidized low-density lipoprotein increase circulating nuclear factor-kappa B activation in patients with unstable angina. J Am Coll Cardiol. 2005;46:799–806.
Reprints: Dr. Luciano Cominacini, Dipartimento di Scienze Biomediche e Chirurgiche, Sezione di Medicina Interna D—Università di Verona, Policlinico G.B. Rossi- P. le L.A. Scuro 10, 37134 Verona, Italy; email@example.com
Dr. Bissell is Professor and Director of Clinical Services and Vice Chair, Department of Pathology, Ohio State University Medical Center, Columbus.