College of American Pathologists

Q & A

February 2010

Editor:
Fredrick L. Kiechle, MD, PhD

A diagnostic puzzle for the laboratory

A CAP TODAY reader wrote to ask if it is medically feasible for a WBC count to fall from 69.0 to less than 10.0 in about six hours. She provided the CBC results (Figs. 1 and 2) and the following clinical information: The patient was an 88-year-old female who presented to the ER with shortness of breath at about 2230 on June 24, 2008. After the initial laboratory results in the ER were reported, the medical technologist asked if there was an underlying illness that could produce a WBC of 69.0. There was no history of lymphoma, leukemia, or cancer. The patient was admitted to the medical floor. Morning laboratory tests were ordered and performed. When the drastic change in laboratory values was noticed, re-collection and repeat testing began. All results were validated. Pathology slide review showed that active T lymphocytes were present.

For this question, CAP TODAY called on Katherine A. Galagan, MD, director of clinical laboratories, Virginia Mason Medical Center, Seattle, and here is what Dr. Galagan had to say:

I’ve reviewed the CBC, chemistry, and clinical information from this case, and discussed it with my hematology manager as well. We agree that this presents a very confusing picture. There are several things to consider here:

1. My first concern is that one of these may represent a misidentified sample, and, in fact, this would be my favored interpretation. I say this because of the MCV, which really should be constant over six hours; in this case it starts at 92 and ends at 88. Although the MCV might change with a substantial blood transfusion, there was no reason for this patient to receive blood since the beginning hematocrit was 44.6 percent. However, it is possible that the patient was actively bleeding and was transfused. Sometimes contaminant dilution can affect the MCV, but usually the MCV will go up and the HCT will go down. In this case, both the MCV and the HCT went down.

Other parameters, aside from just the WBC, also changed as follows:

HGB: 14.6 g/dL on 6/24, 12.6 on 6/25
HCT: 44.6% on 6/24, 36.4 on 6/25
PLT: 311,000/µL on 6/24, 194,000 on 6/25
MCV: 92 fL on 6/24, 88 on 6/25
ABS Polys: 10,350/µL on 6/24, 3,456 on 6/25
ABS Lymphs: 53,820/µL on 6/24, 5,472 on 6/25
ABS Monos: 3,450/µL on 6/24, 672 on 6/25
ABS Eos: 1,380/µL on 6/24, 0 on 6/25
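The reasoning here (an MCV should be nearly constant over a few hours) is the same logic a laboratory information system applies in an automated delta check. A minimal sketch in Python, with analytes and thresholds that are illustrative assumptions rather than validated limits:

```python
# Illustrative delta check comparing two CBC draws attributed to one patient.
# The limits below are hypothetical examples, not CAP-mandated thresholds;
# each laboratory validates its own delta-check rules.
DELTA_LIMITS = {
    "MCV": 3,    # fL; MCV should be nearly constant over hours
    "HGB": 1.5,  # g/dL
    "PLT": 100,  # x10^3/uL
}

def delta_check(prev, curr, limits=DELTA_LIMITS):
    """Return the analytes whose change between draws exceeds its limit."""
    flags = []
    for analyte, limit in limits.items():
        if analyte in prev and analyte in curr:
            if abs(curr[analyte] - prev[analyte]) > limit:
                flags.append(analyte)
    return flags

# Values from the case above (PLT in x10^3/uL):
draw1 = {"MCV": 92, "HGB": 14.6, "PLT": 311}
draw2 = {"MCV": 88, "HGB": 12.6, "PLT": 194}
print(delta_check(draw1, draw2))  # ['MCV', 'HGB', 'PLT']
```

All three parameters exceed these illustrative limits, which is exactly the situation that should trigger a review for possible misidentification before results are released.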

2. If these samples represent the same individual, one would be concerned that the person has had an acute myocardial injury event, since the CPK, CK-MB, myoglobin, and troponin have all increased over the six-hour time frame. There is also evidence of congestive heart failure, with an elevated BNP and shortness of breath. One might postulate a stress reaction in this scenario, but stress reactions usually manifest as a neutrophilia (which is present) or a profoundly left-shifted differential; this degree of lymphocytosis would not be expected. In fact, an absolute lymphocyte count of 53,820/µL raises the concern of a lymphoproliferative disorder (infection would be less likely), and it is hard to imagine a scenario that would decrease the absolute lymphocyte count so rapidly (chemotherapy or even just prednisone should not act this quickly). Thus, I am pushed back to my original interpretation that these results represent two different patients.

3. The third interpretation would be that one of these samples was characterized inaccurately, either because of a machine malfunction or something inherently wrong with the sample. The results seem to be internally coherent, and so it is difficult to postulate a sample integrity problem. However, it is possible that the machine malfunctioned on one of these samples. The CAP TODAY reader mentioned that the values were verified and that a slide review acknowledged the presence of active T lymphocytes, but I’m wondering if both samples were reviewed microscopically.

The CAP TODAY reader in whose laboratory this puzzle arose sent the following after reading Dr. Galagan’s reply:

Indeed, we had formed the same possible scenarios, with No. 1 being the logical selection. Our pathologist compared both slides and agreed that the first slide (+69 WBC) appeared to be from a chronic lymphocytic leukemia patient. So we screened all ED patients within a 12-hour time frame. We consulted with the personnel who collected the samples, and both individuals insisted they had followed the two-patient identifier system in place and labeled appropriately.

After exhausting all other information avenues, we decided to send both blood samples for DNA testing. What do you think? Yes, both samples belonged to the same patient! This was a very interesting case: No physician consulted, save the hematology oncologist, could reconcile the values for a valid diagnosis. As you suggested in No. 2, this patient did have a cardiac event that had stressed the body, such that there was a violent action in cell response.

So, after a lengthy root-cause analysis, the human body remains triumphant. This patient was 88 years old and still able to fool the books.

Editor’s note: Dr. Galagan said her laboratory, too, would have followed up with DNA testing.

Q. I work as a trainee in a hematology laboratory in the United Arab Emirates. We have been having QC problems with our fibrinogen level N control. We use Stago’s Compact for analysis. We checked issues such as pipetting, lot number, and storage temperature, but the problem persists. As a solution, my supervisor wants to run QC for fibrinogen 20 times. Are there other suggestions or guidelines for such a problem?

A. The specific nature of the problem is not provided in the question, which makes it difficult to directly address the concern. When quality control material is “not in control,” certain steps should be followed, some of which this laboratory has done. However, if a laboratory can identify from the QC data the type of error (variation) in the system, then it can minimize the troubleshooting time required to discover the problem. Is the error one of accuracy (systematic) or precision (random)?

Systematic errors may be caused by factors such as a change in reagent or calibrator lot number, incorrect inputting of calibrator values, improperly prepared reagents, deterioration of reagents or calibrator because of improper storage, volume changes in reagents because of instrument pipettor problems, and temperature changes in instrument heating blocks. With systematic errors, one can see a shift or trend in the QC data points on a Levey-Jennings chart. A systematic shift in the mean (all control values have moved in one direction, higher or lower) is usually caused by an event that occurred on the first day of the shift, such as a change in reagent lot numbers or calibration or an instrument modification (problem or maintenance). A systematic trend (a change that occurs gradually over time) requires that at least six or more consecutive data points fall above or below the mean.

Random errors, on the other hand, are caused by factors such as bubbles in reagent vials and instrument dilutor reagent lines; clogs in dilutor or reagent pipettes, or both; inadequately mixed reagents; unstable electrical supply; and operator variation. With random error (imprecision), the standard deviation increases. That is, the scatter (or spread) of data points about the established mean will be wider. Therefore, it is important that a laboratory, before troubleshooting a “QC problem,” critically review its Levey-Jennings charts, identify the type of error from the information provided by those charts, and then “plot” a corrective course of action. And before a laboratory proclaims its QC is “out,” it should consider the possibility that variation in the system is not the issue at all but instead that its QC data may be falsely rejected because the control limits or rules used may not be appropriate or sufficiently robust for analyzing the QC data.
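The shift and trend definitions above translate directly into simple checks over a series of QC results. A minimal sketch, assuming the six-point runs described in the text; the QC values and target mean are made-up illustrations:

```python
# Simple Levey-Jennings screening rules: a shift is a run of consecutive
# points on one side of the target mean; a trend is a run of points moving
# steadily in one direction. Six-point runs per the discussion above.

def has_shift(values, mean, run=6):
    """True if the last `run` points all fall on the same side of the mean."""
    tail = values[-run:]
    return len(tail) == run and (
        all(v > mean for v in tail) or all(v < mean for v in tail)
    )

def has_trend(values, run=6):
    """True if the last `run` points rise or fall monotonically."""
    tail = values[-run:]
    if len(tail) < run:
        return False
    diffs = [b - a for a, b in zip(tail, tail[1:])]
    return all(d > 0 for d in diffs) or all(d < 0 for d in diffs)

# Hypothetical fibrinogen control values (g/L) with a target mean of 3.0:
qc = [2.9, 3.1, 3.0, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7]
print(has_shift(qc, mean=3.0))  # True: last six points all above the mean
print(has_trend(qc))            # True: last six points steadily increasing
```

A positive trend check like this would point toward a systematic cause (for example, gradual reagent deterioration) rather than random imprecision.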

Reference

Statistical Quality Control for Quantitative Measurement Procedures: Principles and Definitions; Approved Guideline. CLSI document C24-A3. Wayne, Pa.: Clinical and Laboratory Standards Institute; 2006.

Marlies Ledford-Kraemer, MBA, BS, MT(ASCP)SH
Islamorada, Fla.

Q. When there is a low platelet count, can we use a platelet estimate from the blood smear as an alternative method, or is it necessary to use the Unopette method? How can we report an estimated platelet count by reviewing the slide? What are the criteria for reporting the platelet count?

A. Platelet counting in general is challenging because of the small size of the platelets, their overlapping characteristics with cells and cellular debris, and their tendency to activate and clump. Fortunately, modern automated cell counters have become much more reliable in counting platelets, with lower limits of linearity approaching zero. These counts tend to be much more accurate than manual platelet counts, where the coefficient of variation, or CV, is high.1 The manual platelet count or direct smear estimate is still needed in some situations, however, particularly when the count is outside the linearity limits of the analyzer or the results are questionable and flagged.

There are several situations in which the platelet count may be inaccurate. For example, platelet clumping in EDTA anticoagulants, platelet satellitism, giant platelets, and partially clotted specimens can all lead to falsely low counts. Falsely high counts can be seen with RBC fragments, microcytic RBCs, cellular debris, malaria, and RBC inclusions such as Howell-Jolly, Pappenheimer, and Heinz bodies and when bacteria are present. Cryoglobulinemia can lead to falsely high or falsely low counts.2

Therefore, all abnormal or flagged platelet counts need to be verified in some way. A blood film can be examined first to determine whether any of the items listed above are present. In addition, any related abnormalities—such as blasts in a case of acute leukemia—can be noted. If the count is grossly inaccurate, inspection of the blood film will reveal it.2 If the count is closer to the visual appearance on the blood film, then a platelet estimate technique can be used.

Most laboratories use one of two methods to make a platelet estimate. Before starting, it is a good idea to check the feathered edge for fibrin strands because “microclots” may erroneously lower the platelet count. If these strands are absent, one can proceed with the platelet estimate. One method involves counting the number of platelets in 10 or more oil immersion fields (100×), in thick and thin areas of the smear if the distribution is uneven. The average number of platelets per oil immersion field is calculated and multiplied by the established field factor for the microscope being used to determine the platelet estimate. Under normal conditions, these estimates should agree within 15 percent of the automated counter result.3 The second technique is to count 1,000 RBCs on the smear and note the number of platelets seen while doing this count. The platelet count per liter then equates to the number of platelets counted, divided by 1,000, and multiplied by the red cell count per liter.4
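The two estimate calculations above can be sketched as follows. The field factor, platelet counts, and RBC count below are hypothetical illustrations; each laboratory establishes its own field factor for each microscope:

```python
# Two platelet-estimate calculations, as described in the text.
# All numeric inputs below are hypothetical illustrations.

def estimate_by_field(platelets_per_field, field_factor):
    """Method 1: average platelets per 100x oil field x microscope field factor.
    Units depend on how the field factor was established (here, x10^3/uL)."""
    return platelets_per_field * field_factor

def estimate_by_rbc_ratio(platelets_counted, rbc_count, rbcs_counted=1000):
    """Method 2: platelets seen while counting `rbcs_counted` RBCs,
    scaled by the patient's red cell count (same volume units as rbc_count)."""
    return platelets_counted * rbc_count / rbcs_counted

# An average of 12 platelets/field with a hypothetical field factor of 20:
print(estimate_by_field(12, 20))         # 240 (x10^3/uL)

# 50 platelets seen per 1,000 RBCs, with an RBC count of 4.5 x10^6/uL:
print(estimate_by_rbc_ratio(50, 4.5e6))  # 225000.0 platelets/uL
```

In practice the method 1 result would then be compared against the analyzer count, flagging any disagreement beyond the 15 percent tolerance cited above.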

Review of the film may also lead to the determination that a manual count is necessary. A manual count is done using a hemocytometer. Phase contrast microscopy can greatly improve the recognition of platelets in this circumstance.2 Although the CV is high, this may be the most accurate approach when factors are present to confound the automated count.

In summary, automated platelet counts have become much more reliable, surpassing the phase contrast hemocytometer count. However, unexpectedly abnormal and flagged automated platelet counts must be validated by reviewing a blood film. The specimen should be checked for clots and redrawn if necessary, and, at times, a manual count may be required.2

References

1. Henry JB, ed. Clinical Diagnosis and Management by Laboratory Methods. 20th ed. Philadelphia, Pa.: WB Saunders; 2001.

2. Tien SL. Validating the platelet count. Singapore Med J. 1995; 36:255–256.

3. The Peripheral Blood Film. College of Physicians and Surgeons of Alberta. Alberta Laboratory Quality Enhancement Program; 2004.

4. HealthSeva. Platelet count or thrombocyte count. http://www.healthseva.com/content/testproc/Platelet_count.php3. Accessed March 31, 2009.

Katherine A. Galagan, MD
Director of Clinical Laboratories
Virginia Mason Medical Center
Seattle

Q. I would like to know whether tumor burden has anything to do with hypercoagulability status. That is, is occult malignancy likely to lead to hypercoagulability? This question is based on my personal history. For me (age 75), acute pulmonary embolism and extensive deep vein thrombosis were ruled out. Malignancy has not been detected by cancer markers, and a high-resolution CT scan and PET were not done.

A. Individuals with underlying malignancy, whether occult or overt, are at greater risk for developing venous thrombosis. Venous thromboembolic disease (VTE) is an important cause of morbidity and mortality in cancer patients, and up to 15 percent of patients with malignancy experience VTE at some point during the course of their disease. The basis of the development of a hypercoagulable state in malignancy is multifactorial. Tumor cells have the capacity to express and release procoagulant factors. Cells from many types of cancers have been shown to constitutively express tissue factor (TF) on their surfaces, or in microvesicles, or to secrete TF into the tumor microenvironment, which activates the coagulation system, generating thrombin.

Thrombin generation is also promoted directly by tumor cell production of coagulation factor X activator, a cysteine protease known as “cancer procoagulant,” and by expression of surface sialic acid residues that can promote nonenzymatic activation of factor X. Indirect thrombin generation may occur via tumor synthesis and release of proinflammatory cytokines (including TNF-α and IL-1β), eliciting tissue factor expression from monocytes and endothelial cells and downregulating endothelial cell expression of thrombomodulin, a potent anticoagulant.

Impaired fibrinolysis is another procoagulant mechanism described repeatedly in patients with solid tumors. Other mechanisms that promote the development of thrombosis in malignancy include platelet, endothelial cell, and leukocyte activation. A complete review of these mechanisms is beyond the scope of this question and answer.

Cancers that are aggressive in inducing thrombotic events include mucin-producing visceral malignancies (for example, colorectal, gastric, gall bladder, and pancreatic cancers), pulmonary carcinomas, hematologic malignancies, ovarian carcinomas, and some brain cancers, though associated malignancies may involve virtually all organ systems.

Several studies suggest an association between tumor burden and hypercoagulability. The larger the tumor burden, the larger the reservoir of tumor cells to express tissue factor and cancer procoagulant, thereby inducing a thrombotic diathesis. In turn, these procoagulants can act as promoters of cancer growth.

Individuals over 40 years of age who present with idiopathic VTE have an approximately four to 10 percent prevalence of occult cancer. Cancers diagnosed subsequent to VTE presentation are associated with poor prognosis. Patients diagnosed with cancer within one year of VTE have been found to be more likely to have advanced disease and shorter life expectancy than those without VTE.

It is important to bear in mind, however, that the incidence of idiopathic thrombosis increases exponentially with age above 40 years. It has been reported that the hazard ratios for the development of VTE increase dramatically in the later decades of life. Risk is further increased in men more than women, in blacks more than whites, and in those with increased body mass index. These factors can appropriately identify populations at risk but have a low predictive value for an individual.

Bibliography

1. Blood. 2004;104:3516. Abstract.

2. Gouin-Thibault I, Achkar A, Samama MM. The thrombophilic state in cancer patients. Acta Haematol. 2001;106(1-2):33-42.

3. Gupta PK, Charan VD, Kumar H. Cancer related thrombophilia: clinical importance and management strategies. J Assoc Physicians India. 2005;53:877–882.

4. Monreal M, Trujillo-Santos J. Screening for occult cancer in patients with acute venous thromboembolism. Curr Opin Pulm Med. 2007;13:368–371.

5. Nierodzik M, Karpatkin S. Hypercoagulability preceding cancer. Does hypercoagulability awaken dormant cells in the host? J Thromb Haemost. 2005;3:577–580.

6. Otten HM, Prins MH. Venous thromboembolism and occult malignancy. Thromb Res. 2001;102:V187–194.

7. Piccioli A, Prandoni P. Venous thromboembolism as first manifestation of cancer. Acta Haematol. 2001;106(1–2):13–17.

8. Tsai AW, Cushman M, Rosamond WD, et al. Cardiovascular risk factors and venous thrombosis incidence: the longitudinal investigation of thromboembolism etiology. Arch Intern Med. 2002;162:1182–1189.

A. Victoria McKane, MD
Pathologist
Minneapolis

Dorothy M. (Adcock) Funk, MD
Medical Director of Coagulation
Esoterix Laboratory Services
Englewood, Colo.

Member, CAP Coagulation Resource Committee


Dr. Kiechle is medical director of clinical pathology, Memorial Healthcare, Hollywood, Fla.
 
 