Where is Voice Recognition in EHR Headed?

UPDATE: See the latest version of NoteSwift here.

I’ve long been interested in voice recognition paired with EHR software. In many ways it just makes sense to use voice recognition in healthcare. Dictation was so common in healthcare that you’d think the shift to voice recognition would be the obvious move. The reality, however, has been quite different. There are those who love voice recognition and those who hate it.

One of the major problems with voice recognition is how you integrate the popular EHR template documentation methods with voice. Sure, almost every EHR vendor offers free-text boxes as well, but capturing all the granular data has meant that doctors do a mix of clicking through a lot of boxes along with some voice recognition.

A few years ago, I started to see how EHR voice recognition could be different when I saw the Dragon Medical enabled Chart Talk EHR. It was a night-and-day difference between Dragon running on top of other EHR software and the Dragon embedded into Chart Talk. You could see so much more potential for voice documentation when it was deeply embedded in the EHR software.

Needless to say, I was intrigued when I was approached by the people at NoteSwift. They’d taken a number of EHR systems (Allscripts Pro, Allscripts TouchWorks, Amazing Charts, and Aprima) and deeply integrated voice into the EHR documentation experience. From my perspective, it brought Chart Talk-style voice capabilities to a wide variety of EHR vendors.

To see what I mean, check out this demo video of NoteSwift integrated with Allscripts Pro:

You can see a similar voice recognition demo with Amazing Charts if you prefer. No doubt, one of the biggest complaints about EHR software is the number of clicks required. I’ve argued a number of times that the number of clicks is not the issue people make it out to be, or at least that it can be offset with proper training and an EHR that responds to clicks quickly and consistently (see my piano analogy and Not All EHR Clicks Are Evil posts). However, I’m still interested in ways to improve a doctor’s efficiency, and voice recognition is one possibility.

I talked with a number of NoteSwift customers about their experience with the product. First, I was intrigued that the EHR vendors themselves are telling their customers about NoteSwift. That’s a pretty rare thing. Looking at adoption of NoteSwift in these practices, it seemed that doctors’ existing perceptions of voice recognition carry over to NoteSwift. I’ll be interested to see how this changes over time. Will the voice recognition doctors using NoteSwift start going home early with their charts done while the other doctors are still clicking away? Once that happens enough times, you can be sure the other doctors will take note.

One of the NoteSwift customers I talked to did note the following: “It does require them to take the time up front to set it up correctly and my guess is that this is the number one reason that some do not use NoteSwift.” I asked NoteSwift about this, and they pointed to the Dragon training that has long been required for voice recognition to be effective (although Dragon has come a long way in this regard as well). While I think NoteSwift still has some learning curve, it’s likely easier to learn than Dragon because of how deeply it’s integrated with the EHR software’s terminology.

I didn’t dig into the details, but NoteSwift suggested that it’s also less likely to break during an EHR upgrade. Experienced Dragon users will find this intriguing, since they’ve likely had a macro break after an EHR upgrade.

I’ll be interested to watch this space evolve. I won’t be surprised if Nuance buys up NoteSwift once it has integrated with enough EHR vendors. Then the tight NoteSwift voice integrations would come native with Dragon Medical. Seems like a good win-win all around.

Looking into the future, I’ll be watching to see how new doctors approach documentation. Most of them can touch type and are used to clicking a lot. Will those new “digital native” doctors be interested in learning voice? Then again, many of them use Siri and other voice recognition on their phones as well, so you could make the case that they’re ready for voice-enabled technologies.

My gut tells me that the majority of EHR users still won’t opt for a voice-enabled solution. Some just don’t feel comfortable with the technology at all. However, advances like what NoteSwift is doing may open voice up to a new set of users, along with those who miss the days of dictation.

About the author

John Lynn

John Lynn is the Founder of HealthcareScene.com, a network of leading healthcare IT resources. The flagship blog, Healthcare IT Today, contains over 13,000 articles, more than half of them written by John. These EMR and healthcare IT articles have been viewed over 20 million times.

John manages Healthcare IT Central, the leading career health IT job board. He also organizes the first-of-its-kind conference and community focused on healthcare marketing, the Healthcare and IT Marketing Conference, as well as EXPO.health, a healthcare IT conference focused on practical healthcare IT innovation. John is an advisor to multiple healthcare IT companies. He is highly involved in social media, and in addition to his blogs he can be found on Twitter: @techguy.

10 Comments

  • NoteSwift looks very sexy, for an EHR, but the total time is still much slower than the “old days” of just dictating. This is one of the frustrations of this whole EHR movement: for a very simple visit such as this, the documentation is still time consuming no matter how you do it. No wonder doctors are turning to scribes, but so much for that 1-on-1 relationship/trust in an exam room.

  • Both the voice recognition and the tool’s knowledge of the EHR are impressive. But I wonder how familiar the doctor must be with the EHR. During dictation, it sounded like the doctor was calling up each page of the EHR explicitly, which would require a deep familiarity that the doctor would build up over time. But the announcer’s summary suggested that the tool can sometimes determine from the text where that text should go, which could greatly improve consistency in structured data.

  • Andy,
    It’s a good question about the learning curve. I asked them a bit about this, and they said it was easier to learn than Dragon because it uses the terminology of the EHR. If you’re new to the EHR, then I guess you have to learn that terminology either way (whether you’re clicking or speaking).

  • While I agree that the key to dictation remaining vital is its ability to register as structured data, this relatively simple note for entering information about a patient coming in with a “cough” requires a lot of clicks and commands. I agree with the first comment that this would be suited for a scribe, but if this is the face of integration with dictation in EHRs, it still has a long way to go. Try doing a note like this on a complex surgical consult and see how long that would take. It is almost as if the technology is being developed completely outside the clinical sphere, and providers are then forced to adapt their methods to fit the technology. When these two worlds align more closely, I think you will find users more eager to incorporate voice recognition.

  • I’ve been working with Dragon since 2009. I also work closely with an integrated ortho-specific EHR/PM that has dramatically simplified the process. First, “clicking” is obsolete. It has been replaced by “touching”. Get rid of the PCs, tablets, and iPads and replace them with easy-to-use all-in-one 23″ or larger touch-screen computers. They are fast, powerful, and most importantly, information on the screen is easy to see! Instead of a cumbersome mouse, everything is laid out so you can use your finger or a stylus. By combining touch screens and voice recognition, physicians, nurses, and staff can work twice as fast as with traditional PCs or tablets.

  • Great points and thoughts as always, John, and some good follow-up discussion.
    Yup – dictating and abdicating the content to another person removed from care, who tidies it up and returns a nicely formatted document, is a hard process to beat from a time perspective... but in the age of the EMR this means a delay in the availability of information, and a delay in the delivery of care as a result (you can mitigate this with calls and minor updates through the care chain, but this is a suboptimal answer at best).
    If you accept that the current solution of the EMR is better than paper (which btw I do think it is – the prevailing view I hear when I ask any doctor whether they would like to go back to the old paper notes and files is “no thanks”), then we have to adapt to this new medium.
    There is a familiarity requirement, as some suggest above: knowing the application. But that is true with or without speech. I am not deeply familiar with the NoteSwift application, but they have done an impressive job of integrating and easing that burden.
    The potential for helping this interaction with voice is clear, as you can see from this piece written by one of my colleagues:
    http://whatsnext.nuance.com/healthcare/health-it-innovation-helping-hospitals-future/
    this article:
    http://www.bizjournals.com/boston/blog/techflash/2013/10/nuance-to-launch-virtual-assistant-for.html

    and this concept demo of Florence.

    The rise of the intelligent assistant integrated into the complex EMR environment will, I believe, offer some alleviation of the time and complexity challenges of these tools and allow clinicians to return to the art of medicine, focusing on the patient rather than the technology.

  • Thanks for commenting, Dr. Nick. I really like this part of what you said:
    “If you accept that the current solution of the EMR is better than paper (which btw I do think it is – the prevailing view I hear when I ask any doctor whether they would like to go back to the old paper notes and files is ‘no thanks’), then we have to adapt to this new medium.”

  • I really love the capabilities associated with voice recognition and implemented it last year at our hospital. It’s a huge time saver, especially when you program the microphone smart keys to import common phrases, lab values, allergies, the medication list, and the problem list from the EHR. It reduced our dictation costs for those providers by anywhere from 25 to 100%. Certain workflows had limitations, but with the advancements and mobility platforms I expect adoption will skyrocket. It’s good to see our providers get excited about a product, and they definitely love voice recognition, especially our ED providers.

  • I would also comment that the use of virtual scribes may be the next wave of adoption for voice technologies. Virtual scribes allow the freedom and speed of dictation, with the structured data entry of point and click. These systems use human transcriptionists/scribes as well as voice recognition and NLP technology to assist data entry. The ROI may be compelling for high-volume practitioners, and virtual scribes are significantly less costly than in-person scribes.
