
Voice of choice for lab transcriptions

 

CAP Today
January 2011
Feature Story

Anne Ford

Remember when there was only one kind of chocolate sandwich cookie? Whether you favored Oreo, Hydrox, or Famous Amos, you got essentially the same snack: two crispy chocolate discs married by a layer of sweet white cream. Those were the days before manufacturers began draping the cookies in white fudge or dyeing the cream seasonal colors—yes, even before plumping up the filling and dubbing the result a “Double-Stuf.” Depending on how long you’ve been wearing that lab coat (Nabisco introduced the Double-Stuf in 1975), you may or may not recall the days when for any given type of product, there was often just one option. With customers long used to a highly diversified marketplace, the idea of having access to only one kind of good in a category seems almost unthinkable.

And yet for at least one type of product—voice recognition software designed for pathologists—there appears to be just one kind of cookie on the plate. That would be Voicebrook’s VoiceOver Enterprise for Pathology, which uses as its engine Nuance Communications’ Dragon Medical or Dragon Medical Enterprise Network Edition and customizes the products’ vocabulary and functionality to suit pathology reporting workflows.

“Nuance is the 800-pound gorilla when it comes to speech recognition technologies. They have acquired almost every single speech recognition product that was available for health care, like IBM’s ViaVoice and Philips speech recognition,” says Voicebrook CEO E. Ross Weinstein. “There are two legacy products that still may be used in a limited capacity in pathology, Kurzweil’s VoicePath and Nuance’s PowerScribe for Pathology, which they sunsetted about a year and a half ago.” But as far as new offerings, “really there is no competition for the Nuance speech product, which is built into our product.” VoiceOver has been implemented in just under 150 sites, he says.

To be clear, laboratories in search of new pathology-specific voice recognition software are not limited to VoiceOver. It is also possible to buy Dragon Medical or Dragon Medical Enterprise Network Edition and customize it yourself. However, that’s not an option experts recommend.

“You can go and buy the medical version of Dragon off the shelf and install it, but it gets pretty clunky,” says John Fallon, MD, who has overseen the implementation of voice recognition technology in two institutions. Dr. Fallon is director of laboratory services at Westchester Medical Center and chair of pathology at New York Medical College, Valhalla.

Even blunter are the words of Steven Drexler, MD, anatomic pathologist and neuropathologist at Winthrop University Hospital, Mineola, NY: “People who try to institute speech recognition by buying Dragon off the shelf are doomed to disaster,” he says. “It doesn’t come with any bells and whistles, and you have to put a tremendous amount of effort in to get that to work. I’ve talked to a number of sites where that has failed.”

In other words, unless you have a voice recognition expert on staff with a lot of time on his or her hands, VoiceOver appears to be the only serious option available. So for pathologists considering the move to speech recognition, the question isn’t “What are my options?” but “How good is the one option I’ve got?” For now, at least, that’s just the way the cookie crumbles.

When Meenakshi Singh, MD, assumed her current position in October 2008, voice recognition technology was already in use at her new institution—State University of New York at Stony Brook’s School of Medicine and Stony Brook University Medical Center—but in a limited fashion. “One of my responsibilities was to see what new technologies we could bring on board to improve our workflow and our efficiency, as well as our quality monitors,” says Dr. Singh, professor and vice chair of anatomic pathology. “So I was looking at various technologies within the lab, one of which was voice recognition. One of the things I did was to see whether it should be used in a more generalized manner, not just sporadically for some gross descriptions.”

At the time, Stony Brook was using Dragon Medical as a standalone product. “That was not specific for pathology, and so it could have its own interesting idiosyncrasies,” Dr. Singh says. “I said, ‘Well, let’s see what exists out there. Is there anything specific for pathology? Because otherwise, what’s the point of bringing this on board?’”

Dr. Singh was skeptical. “Pathology uses such complicated terminology. I thought to myself: ‘Which voice recognition program will be able to do this better than an experienced transcriptionist? To compound the issue, will it, for somebody like me, recognize my accent? Some people have even stronger accents.’” She ultimately decided what she needed were hard data comparing the performance of human transcriptionists to VoiceOver. She recalls thinking, “‘If this pilot study can convince me that, one, voice recognition can be fast; two, it will not have errors beyond what a transcriptionist would make; three, it will be easy to use; and four, it will be simple enough that you don’t need to be a PhD from MIT to use it, then we’ll think about adopting it.’”

Dr. Singh proceeded to pilot VoiceOver, not just for grossing but for her entire reports, complete through sign-out. Getting the technology up and running on her computer took between 45 minutes and an hour, she says. “You read pathology-based scripts into the system, so it starts recognizing your voice. Voicebrook has created it so that it will recognize words like ‘carcinoma infiltrating into the submucosa,’ whereas the regular Dragon NaturallySpeaking might only pick up one or two words correctly. The training period is pretty short.” (Nuance used the NaturallySpeaking name before replacing it with Enterprise Network Edition.)

At the same time that she was collecting data on VoiceOver’s time savings and accuracy rate, Dr. Singh was collecting the same data on similar cases transcribed manually. “It was quite fantastic,” she says. “This was like a before-and-after picture. The time savings were huge, going into hundreds of minutes.” As for VoiceOver’s accuracy rate, it was better than that of traditional transcription, she says.

Convinced that the technology would work, she says, she shared her experience with the faculty members, residents, and pathologists’ assistant on her service and had them begin using VoiceOver as well. “And after that, we adopted the technology for complete surgical pathology reports. The staff at Voicebrook worked with us to put in all the aspects we wanted.” For example, at the end of the pathology reports there is an attestation statement, and Dr. Singh wanted even that to be a voice command. She says: “We were even able to create templates, and VoiceOver is then able to insert the templates by voice command straight into the report. So this helps with large cases where we have templates from which we work.”

“We have data from many months now showing that this has improved our turnaround time, made our work so very easy, and stopped the hurry-up-and-wait situation that we used to be in earlier with transcription,” she concludes. “Any kind of waiting for transcription makes no sense anymore.”

Some of VoiceOver’s benefits can’t be quantified, in Dr. Singh’s opinion. For one thing, she is particularly pleased that moving to VoiceOver did not require laying off any transcriptionists. “I have huge respect for them, and they have been in the department for a long time. We were able to retrain them, and they are now helping us in other aspects of the department to improve our efficiencies there,” she says happily. For another, “VoiceOver has had a huge positive impact on our training program for the residents, because now the residents can themselves dictate reports into the LIS before they even come to the pathologists. When they sit with their attending pathologist to look at these slides, they can now see what changes the pathologist is making to the report in real time, and they can be part of the entire process, from dictation of gross descriptions to sign-out.” When Dr. Singh thinks about how much time she spent as a resident scribbling notes while signing out cases with her attendings, she says, “I wish I had had this technology then.”

Michael Feldman, MD, PhD, doesn’t need to be convinced that speech recognition for pathology is a good thing. “We’ve been using it since 1998,” he says. “You’re preaching to the choir.” Dr. Feldman is associate professor of pathology and laboratory medicine at the Hospital of the University of Pennsylvania and director of pathology informatics at the University of Pennsylvania Health System, Philadelphia. The first product his hospital implemented: Kurzweil’s Clinical Reporter. After a company called Lernout & Hauspie bought out the product, then went bankrupt, the hospital switched to Nuance’s PowerScribe. Now that PowerScribe is being sunsetted, “we’re in the process of putting in VoiceOver,” Dr. Feldman says. It is being implemented not only in his own hospital but in the health system’s other two hospitals, which have not used speech recognition before.

In the 13 years Dr. Feldman has been using speech recognition, he’s seen significant improvements. “In large part, it’s gotten better and better and better,” he says. “Higher accuracies, shorter training times, better abilities to deal with accents.” Workflow too is important. “It’s not just about the voice engine, but about how you use it in day-to-day operations. VoiceOver gives us the ability to accommodate multiple workflows, because we have different people dictating in different ways.”

For example? “A pathology assistant dictates every day, five days a week. So they get very facile with the technology, and they use it in a dictate-correct mode right in front of them. A resident dictates only once every three or four days, and they do that for a month, and then they go off surg-path to one of the other rotations. They’re not seeing the application frequently enough to become real masters at it. VoiceOver allows them to use the system, does the transcription on the back end of the server, and then sends it to a transcriptionist for correction.” That person can listen to the recorded voice, see the automatic transcription, correct the few errors that have occurred, and then put them into the laboratory information system. For anyone wondering why a transcriptionist is on hand when voice recognition is in place, Dr. Feldman says, “The reality is that we need very, very few transcriptionists compared to what we would need in standard tape-recording type stuff.”

As for the pathologists themselves, “It’s up to them whether they want to use the voice system or a keyboard,” he says. Dr. Feldman doesn’t dictate which one they have to use. “Since we don’t do the cutting—we’re only doing the diagnostics—almost all of them use the keyboard. If you look at the totality of the pathology report and you look at where all the words are, 90 percent of the words are not in the diagnosis; they’re in all the other parts, and those are handled by the residents and the pathology assistants. So that’s where the biggest bang for the buck is in our practice.

“I don’t choose to spend my time convincing doctors to use voice recognition, because that’s not where the value is,” he adds. “I can’t pay for my system by having pathologists use it; I can pay for the cost of the system by eliminating transcriptionists if my PAs and residents use it.”

Will VoiceOver be the product that finally brings voice recognition to the pathology masses? “If I had a crystal ball, I’d be at the racetrack,” Dr. Feldman says. “I think it will depend on their financial priorities, their comfort in using technology. If you’re dead-set against using technology, then this technology will not work for you, and neither will any technology.” That said, “I know the technology is robust enough to implement and utilize if you choose to really make a go of it.”

Dr. Drexler is another early adopter of voice recognition technology for pathology. His institution, Winthrop University Hospital, began using IBM’s MedSpeak Pathology in 1998, then switched to Talk Technologies’ TalkStation before adopting VoiceOver in 2008, “mainly because we wanted to get a full-fledged anatomic pathology laboratory information system, which was PowerPath,” he says. “We found that Voicebrook was a partner with [Elekta’s] PowerPath, and it’s very well integrated with PowerPath.”

That integration has been a major selling point for VoiceOver. “At the time that we were using MedSpeak, the problem was that we were also using a Sunquest transcription workstation. It was very complicated,” he remembers. “We would have to dictate in MedSpeak, and then the text would have to be copied over into WordPerfect, which is what the Sunquest workstation was using at the time. You had to go back and forth between two systems. It was very cumbersome.” Furthermore, “The speech recognition was not fantastic at that time; it was good enough to use, but not fantastic.” With TalkStation, there was more integration. “We had an interface where the documents were automatically retrieved into TalkStation. But we still had to sign them out in the transcriptionist workstation.”

Everything is now done in PowerPath. The VoiceOver software is customized, Dr. Drexler says, so it’s almost like an integral part of PowerPath. At the same time, “you could, for instance, change out your LIS,” he says. “VoiceOver is very adaptable. If we decided to stop using PowerPath and switch to another product, we could still use VoiceOver and have them come in and customize the context.”

Unlike the Hospital of the University of Pennsylvania, Winthrop uses no transcriptionists. “We haven’t used transcriptionists since 1998,” he says. “At that time, we had about eight full-time transcriptionists; now we have none. We have a medical secretary and two clerks. That’s it.” The medical secretary still does a small amount of typing, usually consults that come back or outside reports. The pathologists, by and large, dictate everything into VoiceOver.

As for cost, “the initial buy-in to the system can be a little pricey, but once you get past that, it’s really not bad,” Dr. Drexler says. (Voicebrook CEO Weinstein declines to publicly share pricing information, but he says return on investment versus transcription is typically five to 18 months.) “We spend a little over $15,000 a year in support and probably about another $2,000 a year because we have our new residents trained each year. The savings is probably easily half a million dollars a year, but that’s just a guesstimate.”

Even the older or less technologically savvy members of Dr. Drexler’s department have become comfortable with VoiceOver. “Our department’s been successful in getting a number of people who were initially resistant to it up and running, and they’ve done very well on it,” he says. “We had a pathologist in his late 70s who came in and was doing temporary work. He was very hesitant when we first started, but we spent some time with him and he became a big convert to the system, and he actually adapted to it very well. If you spend time with these people [who aren’t initially comfortable with the technology], they will adapt to it at some point.”

A growing scarcity of medical transcriptionists will drive more and more pathologists to adopt voice recognition software, predicts pathologist and VoiceOver user Steven Jobst, MD, of Central Coast Pathology, San Luis Obispo, Calif. “Transcriptionists are getting harder to find,” he notes. “There’s not a huge future for it because voice recognition keeps getting better and better. Our transcriptionists are all older; we see no young people going into the field. So maybe even older pathologists are going to have to use voice recognition.”

That’s not to say Dr. Jobst has abandoned transcriptionists altogether now that his laboratory has adopted VoiceOver. But “the only thing I really use a transcriptionist for now is to do some demographic shifting—say, if I need to add a physician to a case,” he says. “Stuff I’ve never learned how to fix myself because [other] people have always fixed it.”

That said, he can’t envision returning to the pre-VoiceOver days. “When I used to dictate on the transcription system, there was never a case signed out before 10 o’clock in the morning,” he says. “Now I sign cases out as soon as I see the slides. It could be at 7, could be at 8, could be at 9. In the old days, cases were dictated in batches, and by the time you dictated four or five cases and got interrupted with phone calls and then went back to see whether they were ready to sign out, it was always at least an hour. That just doesn’t happen anymore.”

About half of Dr. Jobst’s colleagues have traded transcription for VoiceOver. “I think it’s quite easy to make the transition,” he says. “The commands are pretty straightforward. There’s probably a bigger dictionary of commands than I use, but I’ve settled into the ones that I use for every case. The nice thing is that a very simple command puts me in exactly the right place in the case that I’m in, so I don’t have to page down. If I just were using Dragon alone, I’d have to use commands to get to the right place in the report each time and get the cursor where I want it.

“There’s a separate command for a two-part specimen or three-part specimen,” he continues. “But for a one-part specimen, like a gallbladder or most skin cases, the cursor goes automatically to that place. If it’s multi-part, then there’s a separate command and you just start dictating. We don’t use microscopic description so much anymore, and that’s more of a prose dictation, but it’s still excellent with that.” Dr. Jobst does have a couple of colleagues who don’t like it for that kind of case. “I’m not judging the way they dictate or don’t dictate,” he says, “but from what I can tell, they just haven’t taken the time to teach the program their voice, their idiosyncrasies.”

All in all, for Dr. Jobst, the immediate turnaround time is the clincher. He says: “For cases I dictate in the afternoon, I don’t have to wait around, saying, ‘Gee, I’ve got these 10 cases that haven’t been transcribed yet and I can’t sign them out.’ We haven’t done a time study, but I can tell you, just by the way my day is at the end of the day, that it’s quicker for me to do my cases by voice recognition. It makes my day a little shorter.”


Anne Ford is a writer in Evanston, Ill.