College of American Pathologists

Q and A

CAP TODAY
May 2007
Feature Story

Q. My laboratory recently installed the second of two automated Pap test screening instruments. The first was installed in 2002. Is it necessary to perform the validation process on this new instrument?

A. The CAP Laboratory Accreditation Program standard CYP.05257 reads as follows: “Is there documentation of adherence to the manufacturer’s recommended protocol(s) for implementation and validation of new instruments?” The appended note reads: “Before implementing use of new gynecologic liquid-based instruments, automated preparations, and automated screening instruments, the laboratory must validate and document the functioning of the instrument in its own specific laboratory environment, including the capability of the instrument to replace existing procedure(s), if applicable. If the manufacturer does not provide validation and instrument monitoring recommendations, the laboratory must document the specific validation procedure used.”

Even if the laboratory has already performed this validation process on a previous instrument, a new instrument of the same type must still undergo validation. It is appropriate to use the same protocol used to validate the original instrument. Many manufacturers now assist by providing guidelines for validating their instruments; however, the absence of a formal protocol does not exempt an instrument from validation. If the cytology laboratory lacks experience and knowledge in this area, the clinical laboratory section of a general laboratory can often assist in establishing a protocol. Though CLIA ’88 provides an exemption from validation for instruments installed before April 24, 2003, the CAP allows no exceptions. Any event that may alter the previously established function of the instrument also requires revalidation. This includes major repair, especially if the repair requires shipping the instrument to an off-site location. The Joint Commission on Accreditation of Healthcare Organizations establishes similar requirements for instrument validation.

William D. Tench, MD
Palomar Medical Center
Escondido, Calif.
Member, Cytopathology Committee

Joel S. Bentz, MD
University of Utah
Salt Lake City
Member, Cytopathology Committee

Q. My laboratory uses an automated Pap test screening instrument that has functioned reliably for many months. My general laboratory manager insists that we establish a downtime procedure for this instrument in case it suddenly fails. Is this really necessary?

A. Yes, it is important to develop a procedure that provides guidance for managing work when any laboratory instrument fails. The CAP Laboratory Accreditation Program standard CYP.05285, a phase II standard, reads as follows: “Is there a documented procedure for handling workload during instrument failure and/or downtime?” It includes this note: “This procedure must address: (a) final processing and resulting of any cases/specimens that are within the instrument at the time of failure, and (b) alternative procedures to be used during instrument downtime.” Though much, if not all, of this procedure may appear to be common sense, it is important that it be clearly and thoroughly documented. Furthermore, though this standard was originally developed for newly introduced instrumentation associated with automated Pap test screening (both instruments used to produce Pap test slides and those used for automated computer-assisted evaluation), it should also be applied to other instruments in use within the cytology laboratory, including the laboratory information system.

William D. Tench, MD
Palomar Medical Center
Escondido, Calif.
Member, Cytopathology Committee

Joel S. Bentz, MD
University of Utah
Salt Lake City
Member, Cytopathology Committee

Q. Is there a significant difference in error rates when diagnosing LSIL and HSIL on conventional versus ThinPrep Pap tests?

A. Yes, there is. Many studies have examined the differences between conventional and liquid-based Papanicolaou tests in terms of their overall yield of squamous intraepithelial lesions, or SIL; meta-analyses of these individual reports have shown improved detection rates with liquid-based samples.1–3 But investigators have often combined low-grade and high-grade lesions (LSIL, HSIL) into a single diagnostic group, many have relied on cytologic characteristics alone without confirmatory biopsy, and few have investigated the error rates unique to each category. While these studies are valuable for overall comparisons between conventional and liquid-based samples, the question posed here requires a more stringent investigation of the outcomes of Pap test diagnoses.

The CAP Interlaboratory Comparison Program in Cervicovaginal Cytopathology, or Pap program, affords an excellent opportunity not only to measure the proficiency of individuals but also to analyze thousands of interpretations and investigate errors and their causes. All LSIL and HSIL cases used in the program are biopsy confirmed and have passed through a stringent approval process in which three CAP cytopathologists verify that the lesion is represented on the slide and that technical aspects, such as staining quality, are acceptable. Passage through the program provides additional field-testing, or “validation,” data for each individual slide. An early investigation of conventional CAP Pap slides showed that laboratory responses matched reference (that is, correct, validated) diagnoses of HSIL in 75 percent of cases.4 Subsequently, Renshaw and colleagues, again using conventional-slide CAP Pap data, analyzed all diagnostic categories and showed that cytologic interpretations of LSIL are made with greater precision than those of HSIL.5 Their database consisted of 25,745 validated slides, of which 201 contained LSIL and 211 contained HSIL. Precision was categorized for each diagnostic entity as (a) a 100 percent exact match rate, (b) a 50 to 100 percent exact match rate, and (c) a less than 50 percent exact match rate. Specifically, LSIL versus HSIL respectively demonstrated figures of 43.2 percent versus 33.1 percent for (a); 56.8 percent versus 70.3 percent for (b); and 3.7 percent versus 11.8 percent for (c). All these figures favor more precise categorization of LSIL than HSIL on conventional slides.

With this background experience in conventional smears in mind, the question of error rates for LSIL and HSIL cytodiagnoses in conventional and liquid-based samples can be tackled. Using 2002 data from the CAP Pap program, Renshaw and colleagues compared the performance of conventional slides (n=89,815) and ThinPrep, or TP (Cytyc, Marlborough, Mass.), slides (n=20,886), including both validated and educational slides. (The latter have biopsy confirmation and diagnostic concurrence by three CAP Cytopathology Committee members but have not yet circulated sufficiently to be regarded as fully field validated; that is, they have not yet acquired a 90 percent level of agreement with at least 20 correct responses.) The study accorded with previous publications on the overall performance of TP slides, which accumulated 1.6 percent false-positive responses (versus 3.2 percent for conventional slides, P=0.001) and 1.3 percent false-negative responses (versus 2.1 percent, P=0.02). Specifically, for LSIL, validated TP and conventional slides yielded false-negative rates of 1.5 percent and 3.4 percent, respectively (P=0.009), and for HSIL, 1.1 percent and 1.9 percent, respectively (P=0.10, not significant).6

On the so-called educational slides, however, an interesting reversal was noted: a statistically significant increase in missed HSIL on TP (8.1 percent) over conventional (4.1 percent) smears (P=0.001). It might be argued that these are the samples that would be weeded out in the field; it could also be held that these slides, with as-yet-incomplete validation, might better mirror the routine work of pathology laboratories.

The conclusion of the CAP Pap assessment was that overall TP preparations had significantly lower error rates than conventional slides for all categories (including LSIL), but that participants’ responses indicated some difficulty in recognizing HSIL.6 The authors postulated that this error might be the consequence of HSIL in TP more frequently presenting as isolated smaller cells with decreased nuclear size, as compared with their appearance in conventional smears. An alternative suggestion might be rooted in the finding of yet another CAP Pap analysis, in which researchers found that the hyperchromatic crowded groups of HSIL in conventional smears were more likely to be interpreted as squamous in type; in TP samples, on the contrary, those hyperchromatic crowded groups were more likely to be labeled as glandular lesions.7

The CAP has not yet studied SurePath (TriPath Imaging, Burlington, NC), another liquid-based method, in comparison with conventional slides. A large separate study of 58,580 SurePath slides and 58,988 historical conventional slides found statistically significant increases in the detection of LSIL (107 percent increase, P<0.00001) and HSIL (64 percent increase, P<0.00001) with the liquid-based method; biopsy confirmation was obtained for a proportion of cases.8 Lastly, even a recent Australian meta-analysis, which viewed liquid-based preparations unfavorably in highly selected studies, described an increase in the detection of LSIL cases.9

In conclusion, the question of error rates in detecting LSIL and HSIL on conventional versus liquid-based slides is a pertinent one. There seems to be no argument that the improvement in LSIL diagnosis with TP is real and biopsy confirmable. At the same time, liquid-based HSIL detection may be somewhat more subtle and perhaps troubling: though overall improvement is noted under highly controlled circumstances, such as proficiency testing programs, some HSIL cases could be missed owing to smaller cell size or to misinterpretation of HSIL hyperchromatic crowded groups as glandular in origin. Whether these early findings will prove statistically significant, as the question asks, will depend on further well-controlled large projects, knowledge of potential pitfalls, and increased vigilance by the many centers using the new technology.


  1. Austin RM, Ramzy I. Increased detection of epithelial cell abnormalities by liquid-based gynecologic cytology preparations. A review of accumulated data. Acta Cytol. 1998;42:178–184.
  2. Karnon J, Peters J, Platt J, et al. Liquid-based cytology in cervical screening: an updated rapid and systematic review and economic analysis. Health Technol Assess. 2004;8:1–78.
  3. Abulafia O, Pezzulo JC, Sherer DM. Performance of ThinPrep liquid-based cervical cytology in comparison with conventionally prepared Papanicolaou smears: a quantitative survey. Gynecol Oncol. 2003;90:137–144.
  4. Woodhouse SL, Stastny JF, Styer PE, et al. Interobserver variability in subclassification of squamous intraepithelial lesions. Arch Pathol Lab Med. 1999;123:1079–1084.
  5. Renshaw AA, Davey DD, Birdsong GG, et al. Precision in gynecologic cytologic interpretation: A study from the College of American Pathologists Interlaboratory Comparison Program in Cervical Cytology. Arch Pathol Lab Med. 2003;127:1413–1420.
  6. Renshaw AA, Young NA, Birdsong GG, et al. Comparison of performance of conventional and ThinPrep gynecologic preparations in the College of American Pathologists Gynecologic Cytology Program. Arch Pathol Lab Med. 2004;128:17–22.
  7. Renshaw AA, Mody DR, Wang E, et al. Hyperchromatic crowded groups in cervical cytology—differing appearances and interpretations in conventional and ThinPrep preparations. A study from the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology. Arch Pathol Lab Med. 2006;130:332–336.
  8. Davey E, Barratt A, Irwig L, et al. Effect of study design and quality on unsatisfactory rates, cytology classifications, and accuracy in liquid-based versus conventional cervical cytology: a systematic review. Lancet. 2006;367:122–132.

Gladwyn Leiman
University of Vermont/Fletcher Allen
Burlington, Vt.

Former member,
Cytopathology Committee

Q. I thought that artifact from lubricant was a problem that applies only to conventional Pap smears. I recently read about lubricant affecting liquid cervical cytology specimens. How is this possible?

A. You are correct: before liquid-based cervical cytology, lubricant gels were a common and easily recognized artifact with the potential to significantly obscure the cellular detail of conventional Pap smears. The typical microscopic appearance of lubricant on a Pap smear is a thick, amorphous material that stains intensely purple. The appearance is distinctive, with no look-alikes.

Excess lubricant as a cause of hypocellular ThinPrep Pap Test slides went largely undocumented for several years following FDA approval. In 2002, pathologists and cytotechnologists from Newcastle Laboratory in Newcastle, Australia, reported that they had observed a grainy, basophilic contaminant on some of their ThinPrep Pap Test slides and conventional Pap smears.1 At first, they interpreted this material as bacteria obscuring cells on smears and causing hypocellular, unsatisfactory ThinPrep Pap Test slides. The authors noted that the residual specimen in the ThinPrep Pap Test vial for these hypocellular cases contained flocculent material in solution. The same flocculent material was noted in some SurePath vials; however, SurePath slides did not show the grainy material, and PreservCyt solution processed using TriPath’s PrepStain Slide Processor rendered “clear” slides without the contaminant. The Newcastle investigators were able to duplicate these findings by spiking ThinPrep vials with other brands of commercially available lubricants. They documented two forms of contamination: a grainy, basophilic material that presumably clogs the ThinPrep filter and results in hypocellular, obscured slides, and a pale, lacy, homogeneous background that posed less of a problem.

In August 2004, Cytyc sent a letter to laboratories about a recent rise in ThinPrep unsatisfactory rates due to the use of certain lubricants. The problem developed following a shortage of Johnson & Johnson’s K-Y Lubricating Jelly; clinicians substituted other brands of lubricant, unaware of the potential adverse effects on preparation of the ThinPrep slide. Cytyc recommended that only K-Y Lubricating Jelly be used, only if necessary, and then sparingly, applied along the sides of the speculum while avoiding the tip.

This news from Cytyc prompted investigators at Sonora Quest Laboratories in Tempe, Ariz., to conduct a small controlled study demonstrating the effects of nine commercially available lubricants on ThinPrep slides.2 The results were presented as a poster at the 2005 American Society of Cytopathology annual meeting. Split samples with and without lubricant were prepared. K-Y Lubricating Jelly, Surgilube, Replens, and Walgreen’s Lubricating Jelly had no ill effect on the cellularity of the specimen compared with the matched control slides, prepared from the same vials without lubricant. Specimens containing FemGlide, Triad, Maxilube, Aquagel, or Aquasonic, on the other hand, all resulted in slides that were unsatisfactory for evaluation owing to a scant squamous component; the matched control slides for these specimens were all adequately cellular.

In addition, a few reports have documented no effect of lubricant on cytologic interpretation or HPV testing.3–5 The key factor in these reports is that all tested water-soluble lubricants.


  1. Zardawi IM, Catterall N, Duncan J, et al. Effects of lubricant on conventional and liquid-based cervical smears. Acta Cytol. 2003;47:704–705.
  2. McClure S, Prey M, Chase J. The effect of lubricant jellies on the specimen adequacy of ThinPrep Pap tests. Cancer Cytopathol. 2005;105:375.
  3. Amies AE, Miller L, Lee S, Koutsky L. The effect of vaginal speculum lubrication on the rate of unsatisfactory cervical cytology diagnosis. Obstet Gynecol. 2002;100:889–892.
  4. Hathaway JK, Pathak PK, Maney R. Is liquid-based Pap testing affected by water-based lubricant? Obstet Gynecol. 2006;107:66–70.
  5. Smith-McCune KK, Tuveson JL, Rubin MM, et al. Effect of Replens gel used with a diaphragm on tests for human papillomavirus and other lower genital tract infections. J Lower Gen Tract Dis. 2006;10:213–218.

Marianne Unger Prey, MD
Chesterfield, Mo.
Former member,
Cytopathology Committee

Q. For pathologists who perform their own fine needle aspirations, is there a minimum number of procedures that should be performed weekly or yearly to maintain competency?

A. There is little doubt that experience and training have a significant impact on the diagnostic yield and accuracy of the FNA procedure. Published studies have shown that physicians with formal training in FNA, such as fellowship-trained cytopathologists, have much greater success obtaining diagnostic material than physicians who lack formal training.1

Unfortunately, there is not much published information that specifically addresses the question of how many FNA procedures need to be done to maintain proficiency with the technique. Since “practice makes perfect,” there is little doubt that pathologists who perform FNA procedures on a daily or nearly daily basis probably achieve a level of comfort and accuracy greater than that of individuals who perform the procedure infrequently. However, the “magic number” of procedures required to optimize competency probably varies from operator to operator.

Since the answer to this question is almost certainly operator dependent, it is probably best to approach it in a manner individualized to the laboratory rather than to try to establish a national standard or requirement. We recommend that all pathologists who perform FNAs track their own false-negative and false-positive rates, based on the results of subsequent surgical material or other clinical information. By performing this internal quality assurance function, laboratories should be able to identify trends in their pathologists’ FNA performance and identify individuals whose diagnostic yield is unacceptably low or needs improvement. In the study referenced below, the false-negative rate among pathologists with formal training in the FNA procedure was less than two percent.
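The arithmetic behind such tracking is simple and can be sketched in a few lines. The following is purely illustrative, not a CAP-prescribed tool: the `error_rates` helper, the denominator conventions (false-negatives over all negative FNAs, false-positives over all positive FNAs), and the example numbers are all assumptions a laboratory's own QA plan would replace.

```python
# Illustrative sketch only: computing a pathologist's FNA false-negative and
# false-positive rates from paired cytology/follow-up outcomes. The helper
# name and the denominator conventions are assumptions, not CAP requirements;
# a laboratory's QA plan should define its own conventions.

def error_rates(cases):
    """cases: list of (fna_positive, followup_positive) boolean pairs."""
    fn = sum(1 for fna, fu in cases if not fna and fu)   # lesions missed on FNA
    fp = sum(1 for fna, fu in cases if fna and not fu)   # FNA overcalls
    neg = sum(1 for fna, _ in cases if not fna)          # all negative FNAs
    pos = sum(1 for fna, _ in cases if fna)              # all positive FNAs
    fn_rate = fn / neg if neg else 0.0
    fp_rate = fp / pos if pos else 0.0
    return fn_rate, fp_rate

# Hypothetical year of 50 FNAs with follow-up: one miss among 25 negatives,
# one overcall among 25 positives.
cases = ([(True, True)] * 24 + [(True, False)]
         + [(False, False)] * 24 + [(False, True)])
fn_rate, fp_rate = error_rates(cases)
print(f"false-negative rate: {fn_rate:.1%}, false-positive rate: {fp_rate:.1%}")
# prints "false-negative rate: 4.0%, false-positive rate: 4.0%"
```

In practice the inputs would come from the laboratory's cytology-histology correlation records, and rates well above an agreed benchmark, such as the roughly two percent figure cited above, would prompt review.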


  1. Ljung B-M, Drejet A, Chiampi N, et al. Diagnostic accuracy of fine-needle aspiration biopsy is determined by physician training in sampling technique. Cancer Cytopathol. 2001;93:263–268.

Jonathan H. Hughes, MD, PhD
Laboratory Medicine Consultants, Ltd.
Las Vegas
Member, Cytopathology Committee

