
  Letters

June 2007

Breast cancer diagnoses

I enjoyed reading your article on communication between pathologists, clinicians, and radiologists (“In breast cancer diagnoses, don’t balk at talk”), and I was impressed that Ira Bleiweiss, MD, took the time to review the radiographic images of all his breast core biopsies. However, I believe he could make his life easier and offer better service if he simply ensured that he adequately sampled his breast core specimens in the first place.1

The literature is clear that to sample all lesions in breast core biopsies, one needs to obtain at least five levels extending at least five-eighths of the way through the block. Many laboratories, including our own, entirely section the block. Thus, it is no surprise that with Dr. Bleiweiss’ practice of taking only four apparently superficial cuts (since he can subsequently obtain five and even 10 sections from what is left), significant lesions are left unsampled. While it is true that reviewing the radiographic image may allow one to determine that some of these lesions have not been sampled, many significant lesions, including ADH, LCIS, and even small incidental invasive carcinomas, cannot be so readily identified.

While there are many ways to skin a cat, it would seem far safer for Dr. Bleiweiss’ patients if he simply ensured that he sampled the entire breast core specimen. Since radiologists will be the first to acknowledge that interpreting radiographic images of breast lesions can be challenging, it might be better to leave this correlation to those physicians who have specific training and experience in doing just that.

Andrew Renshaw, MD
Baptist Hospital
Miami, Fla.

Reference

  1. Renshaw AA. Adequate sampling of breast core needle biopsies. Arch Pathol Lab Med. 2001;125:1055–1057. 

Ira J. Bleiweiss, MD, director of surgical pathology and director of the Division of Breast Pathology, Mount Sinai Medical Center, New York, NY, replies: While I appreciate Dr. Renshaw’s concern for my workload, his alternative, as will become apparent, is a method that suffers from imprecision and inaccuracy. In his single-author paper in the Archives (2001;125:1055–1057), Dr. Renshaw describes his routine of completely sectioning through the paraffin blocks of core biopsies, and in his letter he states that this is a common practice. My colleagues and I have lectured nationally on this topic and have never encountered anyone who adheres to this practice, nor have we ever received in consultation paraffin blocks devoid of tissue. A single-author article does not a standard of practice make, nor does it magically illuminate the literature.

Dr. Renshaw claims we are leaving significant amounts of tissue (and by implication diagnoses) initially unsampled. What he fails to tell us (or fails to realize) is that he is doing the same. A careful reading of his article reveals (buried once in small print) that he skips 50–75 microns of tissue between his slides. This is quite a significant amount of tissue (and possibly of diagnoses) to be discarding. Multiple problems arise from this practice. First, he may be discarding areas of histologic intraductal carcinoma while his slides show only atypical duct hyperplasia. Second, he may unknowingly be discarding tiny invasive carcinomas. Third, in the scenario in which he has diagnosed microinvasive carcinoma, presumably in the setting of DCIS and calcifications, it is our experience that the core biopsy generally removes the entire invasive carcinoma; by sectioning through the block and skipping intervening tissue, Dr. Renshaw has created a situation in which he is unable to test the invasive carcinoma for all the standard breast cancer markers, that is, estrogen and progesterone receptor proteins and HER2/neu. This is clearly a situation to be avoided.

Dr. Renshaw implies that we are missing incidental findings that he finds. Unfortunately, the comparison is invalid because he cannot possibly know which of his findings (save perhaps in situ lobular proliferations) are truly incidental. Calcifications come and go as one cuts levels through breast tissue. His “incidental” atypical duct hyperplasia may have been associated with calcifications in his discarded tissue (and perhaps would even have been upgraded to DCIS), but he will never know. If he is so concerned with finding incidental diagnoses, does Dr. Renshaw also section entirely through surgical breast biopsy blocks? I think not, though the logic would be the same.

One cannot approach these biopsies without consideration of the target. In his Archives article Dr. Renshaw writes: “With calcifications, one knows what one is looking for…”; however, this is not sufficient. One needs to know the number and pattern of calcifications, or else one is working blindly. Radiologists know there are benign, indeterminate, and clearly malignant patterns of calcifications, and that these patterns can often be admixed. Examining the slides alongside the specimen radiographs allows a level of accuracy and precision in diagnosis that is impossible with Dr. Renshaw’s method. Of note with respect to the radiologic findings, the term “correlation” appears nowhere in his article. Apparently it is not important.

Finally, examining specimen radiographs is not the rocket science Dr. Renshaw would have us believe it to be. Our radiographs are, of course, first viewed by the radiologist, who then circles the calcifications with a wax pencil. We are sorry if the CAP TODAY article implied anything otherwise. It does not require any special training to review them in this way with the radiologist’s guidance. The situation is no different from reviewing specimen radiographs received with surgical specimens; I would hope that Dr. Renshaw sees those. Dr. Renshaw believes that correlation should be left to the radiologists. Unfortunately, his method does not provide enough information for them to do just that. Radiologists (and later surgeons) need to know precisely how calcifications relate to specific entities; is there, for example, a one-to-one correlation (determinant calcification) between the calcifications and the DCIS? Better treatment planning and follow-up ensue.

If Dr. Renshaw is so averse to looking at specimen radiographs, I would suggest an alternative: Before cutting any sections, he should radiograph all his paraffin blocks so that he knows his target. He could then proceed to cut sections in whatever way he likes; however, if he insists on cutting entirely through the tissue, he should not discard anything. To avoid the problems outlined above, he would have to hold huge numbers of unstained (and carefully marked) slides in reserve. Whose lab has the bigger workload now?

There may be more than one way to skin a cat; there may even be more than one right way. However, we are convinced that Dr. Renshaw’s way is not one of them. We do appreciate his letter giving us the opportunity to further comment on this important subject.

von Willebrand disease

Your recent article on von Willebrand disease was of considerable interest to our company (“Giving vWD its due, one test at a time”). Though we enjoyed reading this excellent cover story and agree with the vast majority of the information on diagnostic test systems for screening and classifying von Willebrand disease, we would like to clarify some information as it pertains to the use of the Dade PFA-100 system.

In the section on screening methods, the article says the PFA-100 test “emerged some 10 years ago as a possible replacement” for the skin bleeding time test. Although the bleeding time test may detect some conditions that the PFA-100 may not, especially conditions that pertain to subendothelial abnormalities, we would like to emphasize that the PFA-100 has been shown in many publications to detect platelet dysfunction with sensitivities for aspirin-induced platelet dysfunction and vWD far superior to those of the bleeding time test.

Moreover, in routine prospective use, abnormal PFA-100 results were shown by Koscielny et al. to identify patients at risk for bleeding and requiring significantly more transfusion measures compared with a control group with normal PFA-100 results (Clin Appl Thromb Hemost. 2004;10(3):195–204 and Clin Appl Thromb Hemost. 2004;10(2):155–166).

In fact, the data published in the above-mentioned publications illustrate the high sensitivity of the PFA-100 system for detecting the most frequent causes of platelet function deficiency, such as aspirin use and vWD. The presurgical use of the PFA-100 system described in these articles also confirms the clinical value of the system in bleeding risk management. Accordingly, an extension of our product claims, recently cleared by the FDA, is under way.

The CAP TODAY article included a comment that the PFA-100 test is “not as good as you’d like for an ideal screening test [for vWD].” The PFA-100 is not classified as a general screening test for vWD but as an aid in detecting platelet function deficiency. Aside from low vWF levels, many other disorders can cause abnormally low platelet function, resulting in abnormal PFA-100 test results. Nevertheless, the PFA-100 has been shown to be an excellent screening test for vWD and aspirin-induced platelet dysfunction.

On the other hand, the PFA-100 will not detect a disorder when that disorder is, at least at the time of the sample draw, not affecting platelet function. Typical examples are patients with vWD type 2N, or with vWD type 1 under DDAVP (1-deamino-8-D-arginine vasopressin) therapy; in both situations the underlying disorder is present but is not clinically causing platelet function deficiency (as confirmed by normal PFA-100 results).

In other words, most vWD patients will show a prolonged PFA-100 closure time. In those cases in which a patient with mild vWD type 1 is not detected by the PFA-100 system, vWF levels are often borderline or within the normal range at the time of sample collection (often stress related). In fact, it is very uncommon for abnormal vWF activity levels not to trigger an abnormal PFA-100 result.

The CAP TODAY article refers to the Favaloro review (Semin Thromb Hemost. 2006;32:537–545), which cites a number of articles reporting the sensitivity of the PFA-100 for vWD type 1. The cited paper from Quiroga et al. (J Thromb Haemost. 2006;4:1426–1427) reports the lowest value, 61.5 percent, while others report sensitivities as high as 100 percent. Combining all the knowledge we have on PFA-100 sensitivity for vWF deficiency, we can safely assume the system’s sensitivity for clinically manifest vWD is well above 90 percent.

Finally, your article says the PFA-100 is not useful for monitoring response to vWF replacement therapy. In fact, the PFA-100 is not intended for monitoring purposes but for assessing patient response to certain treatments, such as DDAVP. Because one effect of DDAVP is an increased level of circulating vWF, which can dramatically shorten the PFA closure time, we can at least anticipate that a similar effect on closure time will be observed in many patients treated with other vWF-affecting therapies.

Jacob de Haan, PhD
Marketing Manager PFA Global
Dade Behring Marburg GmbH
Marburg, Germany