Measuring the significance of field validation in the CAP Interlab Comparison Program

CAP TODAY
June 2005
PAP/NGC Programs Review

Jonathan H. Hughes, MD, PhD

Expert opinion is often used as a gold standard for gynecologic cytology in the evaluation of new technologies, in the medicolegal arena, and in the selection and validation of cases for proficiency testing. In all three of these settings, the stakes are high: Promising developments in technology, million-dollar jury awards, and the careers of medical professionals hang in the balance. In spite of the considerable weight that expert opinion carries in these settings, surprisingly few studies examining the reliability of expert opinion in cytology have been conducted. Specifically, it has not been determined how reliable expert opinion alone is at selecting slides containing cytodiagnostic lesions that can be reproducibly and reliably identified.

In a recent article, Andrew A. Renshaw, MD, and colleagues examined the reliability of expert opinion in cytology by assessing the significance of field validation of slides in the CAP Interlaboratory Comparison Program in Cervicovaginal Cytology (Measuring the significance of field validation in the College of American Pathologists Interlaboratory Comparison Program in Cervicovaginal Cytology: How good are the experts? Arch Pathol Lab Med. 2005;129:609–613). In order for a slide to enter into the CAP program as an “ungraded” or educational slide, three expert cytopathologists must agree that the slide represents a good example of the referral diagnosis made by the submitting pathologist. An “ungraded” slide reaches the status of “graded” only after it has circulated as an educational slide among at least 20 CAP program participants, with at least 90 percent of the respondents agreeing with the original referral diagnosis. Any slide that does not fulfill these strict criteria is not permitted to enter the program as a graded slide. This process of “field validation” permits an assessment of the ability of experts to set guidelines for standard of practice, because the interpretations of the three expert Cytopathology Committee pathologists can be compared with the real-world performance of the slides across a variety of practice settings.
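To make the thresholds in this process concrete, the following sketch (in Python, with a hypothetical function name and argument structure of my own; only the numeric criteria are taken from the description above) checks whether a slide would qualify for graded status:

def qualifies_as_graded(expert_agreements, participant_responses, referral_diagnosis):
    # A slide enters the program as an "ungraded" (educational) slide only if
    # three expert cytopathologists agree it is a good example of the
    # referral diagnosis made by the submitting pathologist.
    if expert_agreements < 3:
        return False
    # The slide must then circulate to at least 20 program participants.
    if len(participant_responses) < 20:
        return False
    # At least 90 percent of respondents must agree with the referral diagnosis.
    concordant = sum(1 for r in participant_responses if r == referral_diagnosis)
    return concordant / len(participant_responses) >= 0.90

Under these criteria, for example, a slide reviewed by 25 participants with 22 concordant responses (88 percent agreement) would fail field validation and could not enter the program as a graded slide.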

In their article, Renshaw and colleagues determined, for each cytodiagnostic category, the percentage of cases originally accepted into the CAP program as ungraded slides by three expert cytopathologists that could not be reliably and reproducibly identified by program participants. Put another way, they examined the percentage of cases that “failed field validation.” Their data set consisted of more than 10,000 conventional smear and ThinPrep cases that were selected by the three-pathologist expert panel. Of those selected slides, 19 percent of conventional smears and 15 percent of ThinPrep specimens failed field validation. Some diagnoses (such as squamous cell carcinoma) validated well, but others had high failure rates. The worst-performing reference diagnoses were unsatisfactory, reparative changes, and low-grade squamous intraepithelial lesion. Renshaw et al conclude that despite rigorous review, the expert reviewers consistently overestimated the number of cases that the cytopathology community at large can reliably identify.

In both Dr. Renshaw’s article and the accompanying editorial by Barbara Ducatman, MD (How expert are the experts? Implications for proficiency testing in cervicovaginal cytology. Arch Pathol Lab Med. 2005;129:604–605), the authors discuss the profound implications that Dr. Renshaw’s data have for cytology proficiency testing. Specifically, because there is considerable field validation failure for the diagnoses “unsatisfactory” and low-grade squamous intraepithelial lesion, it is conceivable that the use of unvalidated slides in a 10-slide proficiency test could result in the failure of a subset of highly qualified participants. Renshaw concludes, “These data show that an expert selection process method may not achieve an acceptable test population of slides, a circumstance that may lead to issues of proficiency failure rates and testing validity.” Dr. Ducatman makes the point that the new proficiency testing programs, which do not use field-validated slides, probably provide a lower degree of quality assurance than do more robust quality assurance programs that are already mandated. She concludes by challenging “CMS and those organizations administering PT, present and future, to mandate the use of only validated slides and to require statistical review of the test.”


Dr. Hughes, a member of the CAP Cytopathology Committee, is staff pathologist at Laboratory Medicine Consultants, Las Vegas.