
Clinical Abstracts

August 2010

Editor:
Michael Bissell, MD, PhD, MPH

Lower thyroid-stimulating hormone thresholds in neonatal screening

Congenital hypothyroidism is the most common congenital endocrine disease and an avoidable cause of severe mental retardation. L-thyroxine supplementation started within two to three weeks of age can prevent severe neurological damage. In economically advanced countries, neonatal screening programs have been instituted to allow early detection of congenital hypothyroidism (CH) and initiation of therapy. Before newborn screening programs were developed, the incidence of CH was estimated to be one in 7,000. At that time, the diagnosis was frequently delayed, and only 10 percent of children with severe manifestations of CH were diagnosed within the first month of life. In the mid-1970s, a newborn screening program for CH was started in Quebec, Canada, and rapidly developed in other countries. Two principal screening strategies have been followed: a primary thyroid-stimulating hormone (TSH) method, which is more common in Europe, Japan, and Oceania, and a primary T4 method, which is more common in North America. The use of these strategies has allowed a larger number of CH cases to be detected early on, with a reported incidence of one in 3,000 to 4,000 newborns. A recent European survey of approximately 6 million newborns mostly screened with a primary TSH method found an overall incidence of one in 2,709. From 1987 through 2003, the Italian CH Registry reported a national incidence of one in 2,500, for a total of 3,008 CH cases out of about 7,520,000 live newborns. The current understanding of CH indicates that thyroid dysgenesis, including such developmental disorders as athyreosis, ectopy, hemiagenesis, and hypoplasia, accounts for about 75 percent of total cases. The remainder have a thyroid gland in situ (GIS) that may be associated with transient or permanent functional defects. These epidemiological and clinical classifications are based on experience with screening programs using the primary T4 determination or TSH cutoff values of 20 to 40 mU/L in the dry blood spot, corresponding to about 40 to 80 mU/L in serum. These strategies have been followed to avoid excessive recall rates and limit costs. They were justified by the general assumption that milder CH forms are devoid of neurological consequences. Considerable advancements in the analytic performance of TSH measurements have been made in recent years, and highly sensitive TSH assays are now widely employed as first-line tests for thyroid function. Several screening centers in North America have shifted to the primary TSH strategy. As a direct consequence of such analytical improvements, recommendations have been made to lower the upper limit of the normal range for TSH determination. Accordingly, guideline 69 of the recent Laboratory Support for the Diagnosis and Monitoring of Thyroid Disease of the National Academy of Clinical Biochemistry (NACB) recommends no further action in neonates with dry blood spot TSH (b-TSH) of less than 10 mU/L, whereas recall and additional investigations are advocated for values above this cutoff. All of these considerations justify the need for studies to investigate the impact that more sensitive screening strategies may have on the incidence and clinical classification of the disease. The Screening Centre of Lombardy, in the Lombardy region of Italy, adopted a primary TSH method for screening about 89,000 newborns per year. In line with NACB guidelines, this reference center chose to shift the b-TSH cutoff level from 20 mU/L down to 12 mU/L in 1999 and to 10 mU/L in 2002.
The thyroid function parameters and clinical features at birth of all children born during a seven-year period in Lombardy were retrospectively analyzed to verify the impact of lower TSH cutoff values on the epidemiology and clinical classification of CH. These results were compared with those that would have been obtained with the previous cutoff (20 mU/L). Clinical re-evaluation of a representative group of 140 children with CH at three to five years of age, after L-T4 withdrawal, was used to assess the contribution of permanent and transient forms. The authors found that low b-TSH cutoffs allowed the detection of 435 newborns with confirmed CH (incidence, one in 1,446). Forty-five percent of infants with CH, including 12 of 141 with dysgenesis, would have been missed using the 20 mU/L cutoff. In contrast to the current classification, 32 percent of newborns with CH had thyroid dysgenesis and 68 percent had a gland in situ. Premature birth was present in 20 percent of cases and was associated with a three- to fivefold increased risk of gland in situ CH. Re-evaluation at three to five years showed permanent thyroid dysfunction in 78 percent of 59 CH toddlers with GIS. The authors concluded that use of a low b-TSH cutoff allowed the detection of an unsuspected number of children with neonatal hypothyroidism evolving into mild permanent thyroid dysfunction later in life. The incidence of CH in this Italian population appears to be double that previously thought, with a clear-cut prevalence of functional defects over dysgenetic defects.
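
To make the effect of the cutoff choice concrete, the Python sketch below (not part of the original report) applies the recall rule described above at two cutoffs; the function name and the example b-TSH value are illustrative only.

```python
def screening_action(b_tsh_mu_per_l, cutoff_mu_per_l=10.0):
    """Recall decision in primary-TSH newborn screening, as described above:
    a dry blood spot TSH (b-TSH) below the cutoff needs no further action,
    while a value at or above it triggers recall for confirmatory testing.
    The 10 mU/L default reflects the NACB-recommended cutoff cited in the
    abstract; the Lombardy program moved from 20 to 12 and then 10 mU/L.
    """
    if b_tsh_mu_per_l < cutoff_mu_per_l:
        return "no further action"
    return "recall for confirmatory testing"

# A hypothetical b-TSH of 15 mU/L is recalled at the 10 mU/L cutoff
# but would have been missed at the older 20 mU/L cutoff.
print(screening_action(15, cutoff_mu_per_l=10))  # recall for confirmatory testing
print(screening_action(15, cutoff_mu_per_l=20))  # no further action
```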

Corbetta C, Weber G, Cortinovis F, et al. A 7-year experience with low blood TSH cutoff levels for neonatal screening reveals an unsuspected frequency of congenital hypothyroidism (CH). Clin Endocrinol. 2009;71:739–745.

Correspondence: Luca Persani at luca.persani@unimi.it


Laboratory measures of adiponectin and leptin in prostate cancer

Adiposity in prostate cancer patients has been consistently associated with increased risk of biochemical progression, metastasis, and fatal outcomes, but the underlying mechanisms are poorly understood. It has been suggested that these effects may be mediated by adipocytokines, such as adiponectin and leptin. Adiponectin is produced solely by adipose tissue. It is abundantly present in plasma and inversely related to degree of adiposity. Adiponectin activates AMP-activated protein kinase, stimulates fatty acid oxidation, improves insulin sensitivity and glucose metabolism, acts as a direct endogenous inhibitor of inflammation and angiogenesis, and reduces cancer invasiveness. In contrast, circulating concentrations of leptin are closely and directly related to adiposity, and the biological effects of leptin generally are the opposite of those of adiponectin. Isoforms of adiponectin and leptin are expressed in androgen-dependent and androgen-independent prostate cancer cell lines and human prostate cancer tumor tissues. In addition, adiponectin inhibits androgen-dependent and androgen-independent prostate cancer cell growth in vitro at physiological concentrations, whereas leptin stimulates cell proliferation specifically in androgen-independent DU145 and PC-3 prostate cancer cells but not in androgen-dependent LNCaP-FGC cells. Epidemiological data regarding circulating adiponectin and leptin and risk of prostate cancer are limited and inconsistent. Significantly lower concentrations of plasma adiponectin were found among prostate cancer patients compared with healthy men. In addition, prostate cancer patients with higher-grade tumors (Gleason score, 8 or higher) or advanced clinical stage (extraprostatic cancer) tended to have lower adiponectin concentrations. However, in a recent small prospective study, no significant relationship was found between prediagnostic concentrations of adiponectin and overall prostate cancer risk (n=125 cases). A significant association between plasma leptin concentrations and increased risk of prostate cancer was found in the prospective Northern Sweden Health and Disease Cohort. But in a subsequent study of a different cohort, the same research group found no association between leptin and prostate cancer. None of the retrospective studies found significant associations for leptin. Because of the weak or absent link between body mass index and risk of overall incidence of prostate cancer, such null findings might be expected. However, the role of these adipocytokines in association with prostate cancer metastasis and lethal outcomes has not yet been evaluated. Given the close link between obesity and prostate cancer mortality, the authors hypothesized that these adipocyte-derived cytokines could be involved in prostate cancer progression and the development of lethal outcomes. Therefore, they examined the association of baseline circulating concentrations of adiponectin and leptin with future risk of developing incident prostate cancer, especially lethal cancer, in a case-control study nested within the Physicians’ Health Study. For the study participants in whom prostate cancer was diagnosed during follow-up, the authors further assessed whether prediagnostic plasma concentrations of adiponectin and leptin predicted subsequent risk of dying from the cancer. The study involved 654 cases of prostate cancer diagnosed from 1982 to 2000 and 644 age-matched controls.
The authors found that adiponectin concentrations were not associated with risk of overall prostate cancer. However, men with higher adiponectin concentrations had a lower risk of developing high-grade or lethal cancer. The relative risk, determined by comparing the highest quintile with the lowest (Q5 versus Q1), was 0.25 (95 percent confidence interval [CI], 0.07–0.87; P for trend=0.02) for lethal cancer. Among all the cases, higher adiponectin concentrations predicted lower prostate cancer-specific mortality (hazard ratio [HR] for Q5 versus Q1, 0.39; 95 percent CI, 0.17–0.85; P for trend=0.02), independent of body mass index, plasma C-peptide, leptin, clinical stage, and tumor grade. This inverse association was apparent mainly among men with a body mass index of 25 kg/m² or higher (HR for Q5 versus Q1, 0.10; 95 percent CI, 0.01–0.78; P for trend=0.02) but not among men of normal weight (P for trend=0.51). Although the correlation of leptin concentrations with body mass index (r=0.58; P<0.001) was stronger than that for adiponectin (r=–0.17; P<0.001), leptin was unrelated to prostate cancer risk or mortality. The authors concluded that higher prediagnostic adiponectin, but not leptin, concentrations predict a lower risk of developing high-grade prostate cancer and a lower risk of subsequently dying from the cancer, suggesting a mechanistic link between obesity and poor prostate cancer outcome.

Li H, Stampfer MJ, Mucci L, et al. A 25-year prospective study of plasma adiponectin and leptin concentrations and prostate cancer risk and survival. Clin Chem. 2010;56(1):34–43.

Correspondence: Jing Ma at jing.ma@channing.harvard.edu


Antigen excess with serum free light chain assay

The serum free light chain assay was developed in the early 2000s to detect light-chain epitopes that are exposed only when they are not bound to a heavy chain. The assay quantifies kappa (κ) and lambda (λ) free light chains (FLCs) and is routinely used to diagnose and manage several plasma cell proliferative disorders, including monoclonal gammopathy of undetermined significance, light-chain amyloidosis, and multiple myeloma. Despite its utility, the FLC assay has limitations. Lot-to-lot reagent variation, nonlinear dilutions, and falsely high results can occur. Falsely low serum FLC results may be obtained in cases of antigen excess. Nephelometric assays measure light scatter caused by the formation of immune complexes in solution and are subject to limitations inherent in antigen-antibody reactions. This method requires that antigen concentrations fall within a range known as the antibody-excess region of the Heidelberger-Kendall curve. Higher antigen concentrations produce falsely low readings. The authors investigated the incidence of serum FLC antigen excess. During the four-month period from Aug. 1, 2006, to Nov. 30, 2006, all clinical FLC assays ordered in their laboratory were performed at the recommended 100-fold dilution as well as at a second 400-fold dilution. FLC assays were performed on a Dade Behring BN II nephelometer with FLC reagent sets from The Binding Site. Of the 7,538 serum FLC studies that were ordered and assayed in duplicate during this period, no samples exhibited λ FLC antigen excess, but nine patients (0.12 percent) were found to have κ FLC excess. These nine patients had increased κ FLC concentrations and abnormal κ/λ FLC ratios when tested at the initial 100-fold dilution, but the instrument did not indicate a need for additional dilutions. However, when re-tested at a 400-fold dilution, all nine samples gave substantially higher results or indicated that additional dilutions were needed. Four of the nine patients had two samples submitted for testing, and each pair of samples yielded similar results. The sample with the largest change in the κ FLC result went from 77 mg/L to 141,000 mg/L. On average, the concentrations of these samples increased approximately 200-fold when they were diluted. The assay identified all nine samples as abnormal. From a monitoring standpoint, however, the initial κ FLC results were misleadingly lower than the results obtained at a higher starting dilution. Some nephelometers, including the BN II, have automated detection algorithms designed to detect antigen excess. Such algorithms may include monitoring the initial rate of the precipitin reaction as well as a separate pre-reaction step. However, The Binding Site’s Freelite assay does not include a pre-reaction step and relies on whether the result lies beyond the linear portion of the calibration curve and on the initial rate of precipitin formation to determine whether additional dilutions are necessary. This may lead to certain cases of antigen excess being missed. In the authors’ laboratory, all sera with an FLC κ/λ ratio greater than two and a κ FLC concentration not obtained at test dilutions greater than 100-fold are re-tested at a serum dilution of 400-fold. The 400-fold dilutions of samples that are not in antigen excess give concentrations that are, on average, 50 percent higher than those obtained with the 100-fold dilutions. For consistency, the 100-fold dilutions are reported. The authors’ use of the 100-fold dilution for reporting is based on the fact that, for most samples, the 100-fold dilution falls within the linear portion of the calibration curve.
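
As a rough illustration of the laboratory's retest rule described above, the Python sketch below (not from the paper) flags sera for a 400-fold dilution when the κ/λ ratio exceeds two and the κ FLC result came only from dilutions of 100-fold or less; the function name and example values are hypothetical.

```python
def needs_400_fold_retest(kappa_flc_mg_per_l, lambda_flc_mg_per_l, max_dilution_tested):
    """Flag a serum for re-testing at a 400-fold dilution, per the rule described
    above: kappa/lambda FLC ratio greater than two and the kappa FLC result
    obtained only at test dilutions of 100-fold or less (possible antigen excess).
    """
    ratio = kappa_flc_mg_per_l / lambda_flc_mg_per_l
    return ratio > 2 and max_dilution_tested <= 100

# Hypothetical sample reported from the standard 100-fold dilution.
print(needs_400_fold_retest(77.0, 10.0, max_dilution_tested=100))   # True -> retest at 400-fold
# Hypothetical sample already measured at a 400-fold dilution.
print(needs_400_fold_retest(500.0, 10.0, max_dilution_tested=400))  # False
```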

Murata K, Clark RJ, Lockington KS, et al. Sharply increased serum free light-chain concentrations after treatment for multiple myeloma. Clin Chem. 2010;56(1):16–20.

Correspondence: Jerry A. Katzmann at katzmann@mayo.edu


Use of CO-oximetry for postmortem carbon monoxide measurement

Measurement of carboxyhemoglobin is crucial to recognizing carbon monoxide as a contributor in deaths involving fires, exposure to automobile exhaust, aircraft accidents, and residential exposures. Knowing carboxyhemoglobin (COHb) concentrations helps medical examiners determine whether a victim with multiple life-threatening injuries was alive or dead when a fire started. The toxic effects of carbon monoxide (CO) depend on the length of exposure, concentration of CO, and ventilation. Short exposures to high CO concentrations, even greater than those typically associated with death, are often more survivable than exposures to more moderate concentrations over a prolonged period. Blood COHb concentrations of three percent or less are found in nonsmokers, whereas smokers may have concentrations upwards of 10 percent to 15 percent. In toxicologic investigations of cause of death, COHb concentrations greater than 50 percent are considered lethal. A number of other factors, including declining health of the elderly, increased vulnerability of an infant, coronary artery disease, and respiratory insufficiency, can cause death at COHb concentrations of less than 50 percent. The validity of CO-oximetry for COHb measurement in postmortem samples has been examined by comparing CO-oximetry with ultraviolet spectrophotometry and gas chromatography (GC). Interferences, including lipid-caused turbidity, methemoglobin, sulfhemoglobin, microcoagulates, putrefaction, and contamination, have called into question the accuracy of COHb measurements obtained by CO-oximetry. Older CO-oximetry technologies with fewer monitored wavelengths often gave inaccurate COHb measurements in the presence of interferents, a limitation that has been addressed by the availability of CO-oximeters with six or more wavelengths that correct for multiple types of interferents. Treatment with sodium dithionite to reduce methemoglobin and oxyhemoglobin, filtration to remove particulates, and other methods of pretreatment help make postmortem samples more suitable for measurement by CO-oximetry. Use of CO-oximetry technology with more than four wavelengths improves correlations with GC results in postmortem samples, even at very low hemoglobin concentrations (less than 40 g/L [less than 4 g/dL]). COHb measured on a CO-oximeter after treating putrefying blood samples to remove interference caused by high methemoglobin, sulfhemoglobin, turbidity, or low total hemoglobin correlated well with flagged CO-oximetry results before treatment and with GC results. The authors conducted a study in which they compared COHb results obtained with automated CO-oximetry (Diametrics Medical AVOX 4000) and manual UV spectrophotometry (Hewlett Packard 8453 UV spectrophotometer). They analyzed postmortem heart blood samples that were EDTA anticoagulated and obtained from 16 medical examiner cases. Before spectrophotometry, samples were treated with sodium dithionite. The authors measured the absorbance at 540 nm (COHb) and 555 nm (isosbestic point) and calculated the percentage COHb concentration. The postmortem interval between death and blood draw ranged from zero to 25.5 hours. COHb concentrations ranged from 21 percent to 83 percent. Deming regression analysis of COHb data obtained by CO-oximetry and UV spectrophotometry demonstrated an excellent correlation (r=0.983 [y=1.04x–1.21]; Sy/x=3.45).
Neither postmortem interval nor evidence of body decomposition, both of which are known to increase methemoglobin concentration, affected correlation of the CO-oximeter and spectrophotometer results. These results demonstrate that measurement of COHb by CO-oximetry with an appropriate number of wavelengths can be an accurate method to assess CO in postmortem blood samples obtained in forensic toxicology cases.
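
The two-wavelength spectrophotometric step can be sketched as follows. This is a generic illustration, not the authors' exact formula, and the calibration ratios r0 and r100 (which would come from 0 percent and 100 percent COHb standards) are hypothetical placeholder values.

```python
def percent_cohb(a540, a555, r0=1.10, r100=1.50):
    """Estimate percent COHb from absorbances at 540 nm (COHb peak) and
    555 nm (isosbestic point) in a dithionite-treated sample, by linear
    interpolation of the A540/A555 ratio between calibration ratios for
    0 percent (r0) and 100 percent (r100) COHb standards. The abstract does
    not give the exact formula or calibration values the authors used, so
    r0 and r100 here are placeholders.
    """
    r = a540 / a555
    return 100.0 * (r - r0) / (r100 - r0)

# Hypothetical absorbance readings from a dithionite-treated sample.
print(round(percent_cohb(0.62, 0.45), 1))  # about 69.4 percent with the placeholder calibration
```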

Olson KN, Hillyer MA, Kloss JS, et al. Accident or arson: Is CO-oximetry reliable for carboxyhemoglobin measurement postmortem? Clin Chem. 2010;56(4):515–520.

Correspondence: Fred S. Apple at apple004@umn.edu


Use of nuclear magnetic resonance spectroscopy with urinary tract infections

Escherichia coli accounts for about 80 percent of urinary tract infections. The other common uropathogens, including Klebsiella spp., Pseudomonas aeruginosa, Proteus spp., Enterobacter spp., Citrobacter spp., and Staphylococcus saprophyticus, are also implicated in uncomplicated urinary tract infection (UTI). Urine culture provides direct evidence of the presence and significance of UTI. However, inherent reporting delays and a high chance of contamination make it a poor tool for monitoring therapeutic response to treatment. In a cohort study of women with persistent urinary symptoms after treatment of UTI, urine culture correlated poorly with bladder biopsy culture; 24 percent of women with sterile urine cultures had bladder tissue that was culture positive, indicating persistence of UTI. To administer treatment specific to the UTI, it is essential to know the nature and quantity of the infecting organism. Other approaches, such as microscopic, colorimetric, filtration/staining, photometric, and automated methods, and Fourier transform-infrared spectroscopy, have received limited clinical acceptance. Nuclear magnetic resonance (NMR) spectroscopy has shown its value in characterizing, within 12 to 15 minutes, endogenous biochemistry by providing qualitative and quantitative metabolic information from biological fluids, especially urine. Using NMR spectroscopy, it is possible to identify bacterially contaminated specimens from the nonspecific peaks representative of bacterial end-products (acetate, formate, β-hydroxybutyrate, lactate, and trimethylamine) in samples processed immediately; the same contaminants take time to appear in urine. To address the shortcomings of urine culture for diagnosing urinary tract infection, the authors used 1H-nuclear magnetic resonance spectroscopy to identify and quantify Escherichia coli, Pseudomonas aeruginosa, Klebsiella pneumoniae, and Proteus mirabilis. They assessed, between 2003 and 2006, urine samples from patients with suspected UTI (617), healthy volunteers (50), and commercially available standard strains of E. coli, K. pneumoniae, P. aeruginosa, Enterobacter, Acinetobacter, Pr. mirabilis, Citrobacter freundii, Staphylococcus saprophyticus, and Enterococcus faecalis. 1H-NMR spectra were recorded on a 400-MHz spectrometer. To quantify the bacteria, the authors compared the areas under the spectral peaks of the specific metabolic products with that of a known concentration of trimethylsilyl propionic acid (TSP). All urine specimens were cultured as well as assessed by NMR spectroscopy. Preliminary urinary spectroscopy of the unprocessed samples showed peaks of nonspecific metabolites, such as succinate, acetate, lactate, and ethanol, indicating infected samples. Based on the results from processed samples, 93 percent (240 of 256) of E. coli, 92 percent (101 of 110) of K. pneumoniae, 93 percent (56 of 60) of P. aeruginosa, and 80 percent (8 of 10) of Pr. mirabilis infections confirmed by urine culture (denominators) could also be diagnosed by NMR (numerators). The remaining samples were sterile, had a bacterial population of less than 10³ colony-forming units (CFU)/mL, or both. The NMR method diagnosed bacterial densities of more than 10³ CFU/mL. The authors concluded that identification of the common uropathogens E. coli, K. pneumoniae, P. aeruginosa, and Pr. mirabilis by NMR spectroscopy has a shorter reporting time and can be used to differentiate between infected, contaminated, and sterile specimens.
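
The relative quantification against the TSP reference can be illustrated with a short sketch; the peak assignment, proton counts, and concentrations below are illustrative and not taken from the paper.

```python
def metabolite_concentration_mmol_per_l(peak_area, tsp_area, tsp_conc_mmol_per_l,
                                        protons_metabolite, protons_tsp=9):
    """Estimate a metabolite concentration from its 1H-NMR peak area relative to
    an internal trimethylsilyl propionic acid (TSP) reference of known
    concentration. Peak area scales with concentration times the number of
    contributing protons; TSP's methyl groups contribute nine equivalent protons.
    """
    return (peak_area / tsp_area) * (protons_tsp / protons_metabolite) * tsp_conc_mmol_per_l

# Hypothetical example: an acetate singlet (three protons) integrated against 0.5 mmol/L TSP.
print(round(metabolite_concentration_mmol_per_l(2.4, 1.0, 0.5, protons_metabolite=3), 2))  # 3.6
```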

Gupta A, Dwivedi M, Mahdi AA, et al. 1H-nuclear magnetic resonance spectroscopy for identifying and quantifying common uropathogens: a metabolic approach to the urinary tract infection. BJU Int. 2009;104:236–244.

Correspondence: Mahendra Bhandari at mbhanda1@hfhs.org


Anti-hepatitis B core antigen for blood screening

Screening for hepatitis B virus with assays for hepatitis B surface antigen excludes most potentially infectious units and greatly reduces the risk of transfusion-transmitted HBV infection. However, the risk from potentially infectious units that contain HBV DNA without detectable hepatitis B surface antigen (HBsAg) remains. Screening for antibody to hepatitis B core antigen (anti-HBc) has been conducted in the United States since the mid-1980s. It originally was used as a surrogate marker for non-A, non-B hepatitis. After testing for hepatitis C virus (HCV) was introduced, anti-HBc screening was used to intercept potentially infectious HBsAg-negative donations. Published reports have suggested that hepatitis B transmission by blood from HBsAg-negative donors could be prevented by screening for anti-HBc. However, many countries have not implemented anti-HBc screening, so the potential benefit of using anti-HBc screening largely has been inferred from U.S. studies. In Canada, implementation of anti-HBc screening was delayed because of disputes over the safety benefit of the test. Policy decisions in countries with a low prevalence of HBV transmission, such as Canada, are hampered by the absence of empirical data about the benefit of introducing anti-HBc screening in a donor population with a low prevalence of HBV that previously has not been screened for anti-HBc. Published risk estimates have limitations because many HBsAg-negative, anti-HBc-reactive donations do not transmit HBV to recipients. Estimates of the risk of transmitting HBV that can be specifically ascribed to such HBsAg-negative, anti-HBc-reactive units can only be derived from settings in which additional testing for HBV DNA is conducted. In April 2005, Canadian Blood Services commenced screening of all blood donations for anti-HBc to further reduce the risk of HBV-infectious units entering the blood supply. Supplemental testing of HBsAg-negative, anti-HBc-reactive units for HBV DNA was introduced at the same time, which made it possible to quantify the benefit derived from introducing anti-HBc screening of all donations. The authors reported on the rate of potentially infectious donations intercepted by anti-HBc screening based on data collected over 18 months of testing. These data, obtained from a previously unscreened donor population, can help other countries with a low HBV prevalence decide whether to introduce donor screening for anti-HBc. The proportion of potentially infectious donations intercepted by anti-HBc over the initial 18 months of testing was calculated based on three assumptions relating infectivity of HBV DNA-positive units to anti-HBs levels. Lookback was conducted for all DNA-positive donations. Of 493,344 donors, 5,585 (1.13 percent) were repeat-reactive for anti-HBc, and 29 (0.52 percent) of them were HBV DNA positive and HBsAg negative. The proportion of potentially infectious donations intercepted by anti-HBc screening was one in 17,800 if all HBV DNA-positive donations were infectious, one in 26,900 if infectivity was limited to donations with an anti-HBs level of not more than 100 mIU/mL, and one in 69,300 if only donations with undetectable anti-HBs were infectious. For 279 components in the lookback study, no traced recipients were HBsAg positive, and seven recipients were anti-HBc reactive in association with four donors, three of whom had an anti-HBs level of more than 100 mIU/mL and one of whom had a level of 61 mIU/mL.
The authors concluded that implementing anti-HBc screening reduced the risk of transfusing potentially infectious units by at least as much as had been expected based on the literature. The lookback did not provide proof of transfusion transmission of HBV from HBV DNA-positive, anti-HBc-reactive, HBsAg-negative donors, nor did it establish lack of transmission.

O’Brien SF, Fearon MA, Yi Q-L, et al. Hepatitis B virus DNA-positive, hepatitis B surface antigen-negative blood donations intercepted by anti-hepatitis B core antigen testing: the Canadian Blood Services experience. Transfusion. 2007;47:1809–1815.

Correspondence: Sheila F. O’Brien at sheila.o’brien@bloodservices.ca


Two-tiered antibody testing for Lyme disease

Serology is the major laboratory test available to support the clinical diagnosis of Lyme disease. The Centers for Disease Control and Prevention recommends a two-tiered approach for serodiagnostic testing: an enzyme immunoassay (EIA) or immunofluorescent assay as a first test, followed by separate immunoglobulin M (IgM) and IgG Western blots when the first-tier test is positive or equivocal. Sonicates of whole Borrelia burgdorferi are usually used as the antigen preparation in Western blots. This approach has performed well for patients with late manifestations of Lyme disease (LD), particularly those with Lyme arthritis or stage three neuroborreliosis. Such patients have robust IgG antibody responses against many spirochetal antigens. However, antibody responses in Lyme disease develop slowly, so antibody testing for patients with erythema migrans (stage one) is insensitive and depends largely on detecting specific IgM responses. Although the CDC guidelines limit the application of IgM criteria to the first month of illness, these criteria, which require two of three specific bands, have been problematic. Misuse or misinterpretation of IgM blots has been a factor in overdiagnosing LD in patients with other illnesses. In an effort to improve serologic testing, second-generation tests that employ recombinant spirochetal proteins or synthetic peptides have been developed. The most promising tests use the VlsE (variable major protein-like sequence expressed) protein of B. burgdorferi, or a portion of it, as the antigen target in EIA tests. The authors conducted a study to determine whether elements of sonicate EIA and IgG Western blot with an rVlsE band could be combined to produce a two-tiered IgG test that would perform at least as well as standard testing without the drawbacks of IgM Western blot testing. Separate serum sets from Lyme disease patients and control subjects were tested independently at two medical centers using whole-cell enzyme immunoassays and IgM and IgG immunoblots, with recombinant VlsE added to the IgG blots. The results from both centers were combined, and a new second-tier IgG algorithm was developed. The authors found that with standard two-tiered IgM and IgG testing, 31 percent of patients with active erythema migrans (stage one), 63 percent of those with acute neuroborreliosis or carditis (stage two), and 100 percent of those with arthritis or late neurologic involvement (stage three) had positive results. Using the new IgG criteria, in which only the VlsE band was scored as a second-tier test among patients with early Lyme disease (stage one or two) and five or more of 11 IgG bands were required in those with stage three disease, 34 percent of patients with stage one, 96 percent with stage two, and 100 percent with stage three infection had positive responses. New and standard testing achieved 100 percent specificity. The authors concluded that, compared with standard IgM and IgG testing, the new IgG algorithm, with the VlsE band, eliminates the need for IgM testing. It provides comparable or better sensitivity and maintains high specificity.
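
The new second-tier IgG criteria lend themselves to a simple decision rule; the Python sketch below is a simplified rendering of the algorithm as summarized above (it assumes a positive or equivocal first-tier EIA and omits the band-by-band scoring details).

```python
def second_tier_igg_positive(stage, vlse_band_present, igg_band_count):
    """Simplified second-tier IgG interpretation per the criteria summarized above:
    for early Lyme disease (stage one or two) only the VlsE band is scored;
    for stage three disease, five or more of 11 scored IgG bands are required.
    A positive or equivocal first-tier EIA is assumed to have preceded this step.
    """
    if stage in (1, 2):
        return vlse_band_present
    if stage == 3:
        return igg_band_count >= 5
    raise ValueError("stage must be 1, 2, or 3")

# Hypothetical examples.
print(second_tier_igg_positive(1, vlse_band_present=True, igg_band_count=2))   # True
print(second_tier_igg_positive(3, vlse_band_present=False, igg_band_count=6))  # True
```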

Branda JA, Aguero-Rosenfeld ME, Ferraro MJ, et al. 2-tiered antibody testing for early and late Lyme disease using only an immunoglobulin G blot with the addition of a VlsE band as the second-tier test. Clin Infect Dis. 2010;50:20–26.

Correspondence: John A. Branda at branda.john@mgh.harvard.edu



Clinical pathology abstracts editor: Michael Bissell, MD, PhD, MPH, professor, Department of Pathology, Ohio State University, Columbus.