
Where is Voice Recognition in EHR Headed?

Posted on August 22, 2014 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I’ve long been interested in voice recognition paired with EHR software. In many ways it just makes sense in healthcare: there was so much dictation in medicine that the move to voice recognition would seem obvious. The reality, however, has been quite different. There are those who love voice recognition and those who hate it.

One of the major problems with voice recognition is integrating it with the popular EHR template documentation methods. Sure, almost every EHR vendor offers free-text boxes as well, but capturing all the granular data has meant that doctors do a mix of clicking lots of boxes together with some voice recognition.
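To see why templates and dictation are at odds, here's a minimal sketch (all function and field names are my own invention, not any vendor's API) of the core problem: a dictated sentence has to be parsed back into the discrete fields a template would have captured with checkboxes and dropdowns.

```python
import re

def extract_vitals(dictation: str) -> dict:
    """Pull a few structured fields out of a free-text dictation.

    Hypothetical illustration only: real EHR/NLP pipelines are far more
    robust than a handful of regular expressions.
    """
    patterns = {
        "bp_systolic": r"blood pressure (\d{2,3}) over \d{2,3}",
        "bp_diastolic": r"blood pressure \d{2,3} over (\d{2,3})",
        "pulse": r"pulse (?:of )?(\d{2,3})",
        "temp_f": r"temperature (?:of )?(\d{2,3}(?:\.\d)?)",
    }
    fields = {}
    for name, pat in patterns.items():
        m = re.search(pat, dictation, re.IGNORECASE)
        if m:
            fields[name] = float(m.group(1))
    return fields

note = "Blood pressure 120 over 80, pulse of 72, temperature 98.6."
print(extract_vitals(note))
```

The point of the sketch is that free text alone doesn't give you granular data; something, whether a clinician clicking boxes or software parsing speech, has to do the structuring.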

A few years ago, I started to see how EHR voice recognition could be different when I saw the Dragon Medical-enabled Chart Talk EHR. It was literally a night-and-day difference between Dragon layered on other EHR software and Dragon embedded into Chart Talk. You could see so much more potential for voice documentation when it was deeply embedded in the EHR software.

Needless to say, I was intrigued when I was approached by the people at NoteSwift. They'd taken a number of EHR systems: Allscripts Pro, Allscripts TouchWorks, Amazing Charts, and Aprima, and deeply integrated voice into the EHR documentation experience. From my perspective, it was providing Chart Talk-like voice capabilities across a wide variety of EHR vendors.

To see what I mean, check out this demo video of NoteSwift integrated with Allscripts Pro:

You can see a similar voice recognition demo with Amazing Charts if you prefer. No doubt, one of the biggest complaints with EHR software is the number of clicks required. I've argued a number of times that the number of clicks is not the issue people make it out to be, or at least that it can be offset with proper training and an EHR that provides quick and consistent responses to clicks (see my piano analogy and Not All EHR Clicks Are Evil posts). However, I'm still interested in ways to improve a doctor's efficiency, and voice recognition is one possibility.

I talked with a number of NoteSwift customers about their experience with the product. First, I was intrigued that the EHR vendors themselves are telling their customers about NoteSwift. That's a pretty rare thing. When looking at adoption of NoteSwift by these practices, it seemed that doctors' perceptions of voice recognition are carrying over to NoteSwift. I'll be interested to see how this changes over time. Will the voice recognition doctors using NoteSwift start going home early with their charts done while the other doctors are still clicking away? Once that happens enough times, you can be sure the other doctors will take note.

One of the NoteSwift customers I talked to did note the following: “It does require them to take the time up front to set it up correctly, and my guess is that this is the number one reason that some do not use NoteSwift.” I asked NoteSwift about this same point, and they pointed to the Dragon training that's long been required for voice recognition to be effective (although Dragon has come a long way in this regard as well). While NoteSwift still has some learning curve, it's likely easier to learn than Dragon because of how deeply integrated it is into the EHR software's terminology.

I didn’t dig into the details of this, but NoteSwift suggested that it was less likely to break during an EHR upgrade as well. Master Dragon users will find this intriguing since they’ve likely had a macro break after their EHR gets upgraded.

I'll be interested to watch this space evolve. I won't be surprised if Nuance buys up NoteSwift once they've integrated with enough EHR vendors. Then, the tight NoteSwift voice integrations would come native with Dragon Medical. Seems like a good win-win all around.

Looking into the future, I'll be watching to see how new doctors approach documentation. Most of them can touch type and are used to clicking a lot. Will those new “digital native” doctors be interested in learning voice? Then again, many of them are using Siri and other voice recognition on their phones as well. So, you could make the case that they're ready for voice-enabled technologies.

My gut tells me that the majority of EHR users will still not opt for a voice enabled solution. Some just don’t feel comfortable with the technology at all. However, with advances like what NoteSwift is doing, it may open voice to a new set of users along with those who miss the days of dictation.

Dragon Medical Enabled EHR – Chart Talk

Posted on July 12, 2011 | Written By


I was recently asked by Deanna from Mighty Oak to check out a demo of their Chart Talk EHR software (previously called DC talk). It's always a challenge for me, since there are only so many hours in a day to demo the more than 300 EHR companies out there. So, instead of doing a full demo, I asked Deanna to highlight a feature of Chart Talk that sets them apart from other EHR software companies.

She told me that Chart Talk's killer feature was its integration with Dragon NaturallySpeaking's voice recognition software. I was very familiar with DNS and other voice recognition software, so I was interested to see whether they really had created a deeper integration of Dragon Medical than the other EHR software I'd seen that integrated it.

I have to admit that I was pretty impressed by the demo. It was really quite amazing how many things you could do with your voice in the Chart Talk EHR software. Standard transcription-style documentation certainly worked out well in Chart Talk. However, the impressive part was how you could navigate the EHR with your voice. Here's a demo video that does a decent job illustrating it:

What made the documentation even more interesting (and is partially shown in the above video) is the use of various DNS macros and the even more powerful built-in macros for pulling in vital signs, past history, etc. Plus, I like that when you have an issue with Dragon Medical, you don't get someone at your EHR company who doesn't really know much about Dragon. Since Chart Talk is completely focused on Dragon integration, you know they know how to support it properly.
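The macro idea above can be sketched roughly as follows. This is a hypothetical illustration of the general technique (a spoken trigger phrase expanding into a template populated from the patient's chart), not Chart Talk's or NoteSwift's actual implementation; all names here are invented.

```python
# Hypothetical voice-macro expansion: a trigger phrase expands into a
# note template filled from chart data. Names are illustrative only.
VITALS_TEMPLATE = "Vitals: BP {bp}, HR {hr} bpm, Temp {temp} F, Wt {wt} lb."

MACROS = {
    "insert vitals": VITALS_TEMPLATE,
    "insert chief complaint": "Chief complaint: {cc}.",
}

def expand_macro(phrase: str, chart: dict) -> str:
    """Expand a spoken macro phrase, or pass plain dictation through."""
    template = MACROS.get(phrase.lower())
    if template is None:
        return phrase  # not a macro; treat as ordinary dictation
    return template.format(**chart)

chart = {"bp": "120/80", "hr": 72, "temp": 98.6, "wt": 180, "cc": "back pain"}
print(expand_macro("Insert vitals", chart))
```

The practical appeal is obvious: a two-word utterance produces a fully populated line of the note instead of several clicks or a full sentence of dictation.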

Of course, I only saw a partial demo of the Chart Talk software, so I'm only commenting on the Dragon Medical integration in this post. It would take a much longer and more in-depth evaluation to assess the software's other features and challenges.

Plus, there's no doubt that voice recognition isn't for everyone. They tell me that some people do their charting with their voice right in front of the patient. That feels awkward to me, but I guess it works for some people. Then there are the people who don't want to go through the learning curve of voice recognition. However, I'd guess that Chart Talk can make a case for being among the best at helping people overcome that learning curve, since every one of their users uses it.

I also know that Chart Talk originally started as DC talk. So, anyone considering Chart Talk should take a good look at how well the software fits their specialty. I know the people at Mighty Oak have been making a big effort to work for any specialty. However, like every EHR software out there, they just work better for some specialties than others.

It’s also worth noting that Chart Talk is a client server EHR. I guess the web browser isn’t quite ready for the processing power that’s required to have a nice voice enabled user experience.

Needless to say, I was impressed by the voice recognition integration and by how pretty much every command can be performed with your voice. I'd be interested to know of other EHR companies that are striving for that type of deep integration. I'm not just talking about being able to dictate into a text field. I'm talking about actually navigating the EMR with your voice.
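That distinction, dictation versus navigation, can be made concrete with a small sketch. This is my own hypothetical model (command names and screen identifiers are invented), just to show the idea of routing an utterance either to a UI action or into the note:

```python
# Hypothetical voice-navigation routing: some utterances map to UI
# actions, everything else is appended to the note as dictation.
NAV_COMMANDS = {
    "open chart": "screen:patient_chart",
    "go to medications": "screen:med_list",
    "sign note": "action:sign_note",
}

def handle_utterance(utterance: str, dictation_buffer: list) -> str:
    """Return the UI target for a command, or 'dictated' for plain text."""
    target = NAV_COMMANDS.get(utterance.lower().strip())
    if target is not None:
        return target                        # navigate or act
    dictation_buffer.append(utterance)       # ordinary dictation
    return "dictated"

buf = []
print(handle_utterance("Go to medications", buf))
print(handle_utterance("Patient denies chest pain.", buf))
```

A dictation-only integration implements just the second branch; the deep integrations I'm describing implement the first as well.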