
EHR Charting in Another Language


I recently started to think about some of the implications of multiple languages in an EHR. One of my readers asked me how EHR vendors correlate data from those charting in Spanish and those charting in English. My first response to this question was, “How many doctors chart in Spanish?” Yes, this was a very US-centric response, since obviously almost all of the doctors in Latin America and other Spanish-speaking countries chart in Spanish, but I wonder how many doctors in the US do. I expect the answer is A LOT more than I realize.

Partial evidence of this is that about a year ago HIMSS announced a Latino Health IT Initiative. Out of that initiative there is now a HIMSS Latino Community web page and also a HIMSS Latino Community Workshop at the HIMSS Annual Conference in Las Vegas. I’m going to have to find some time to learn more about the HIMSS Latino Community. My Español is terrible, but I know enough that I think I could enjoy the event.

After my initial reaction, I started wondering how you would correlate data from another language. So much for coordinated care. I wonder what a doctor does if he asks for his patient’s record and it is all in Spanish. That’s great if all of your doctors know Spanish, but in the US at least I don’t know of any community that has Spanish-speaking doctors in every specialty. How do they get around it? I don’t think those translation services you can call are much help.

Once we start talking about automated patient records, the language issue becomes more of a problem. Part of that problem might be solved if you could use standards like ICD-10, SNOMED, etc. A code is a code is a code regardless of language, and computers are great at matching up those codes. If these standards are not used, though, then forget trying to connect the data even through Natural Language Processing (NLP). Sure, the NLP could be bilingual, but has anyone done that? My guess is not.
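A quick sketch of why codes bridge the language gap: the ICD-10 code E11.9 (type 2 diabetes mellitus without complications) is real, but the patients and charts below are invented for illustration.

```python
# Two charts for the same condition, one documented in English, one in Spanish.
# The narrative text differs, but the ICD-10 code is language-independent.
records = [
    {"patient": "A", "lang": "en", "note": "Type 2 diabetes mellitus, well controlled", "icd10": "E11.9"},
    {"patient": "B", "lang": "es", "note": "Diabetes mellitus tipo 2, bien controlada", "icd10": "E11.9"},
]

def group_by_code(records):
    """Aggregate charts by diagnosis code, ignoring the charting language."""
    groups = {}
    for r in records:
        groups.setdefault(r["icd10"], []).append(r["patient"])
    return groups

print(group_by_code(records))  # {'E11.9': ['A', 'B']}
```

The matching never looks at the narrative at all, which is exactly why it breaks down when the standard codes aren’t captured in the first place.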

All of this might start to really matter more when we’re talking about public health issues as we aggregate data internationally. Language becomes a much larger issue in this context and so it begs for an established set of standards for easy comparison.

I’d be interested to hear other stories and experiences with EHR charting in Spanish or another language. I bet the open source EHRs have some interesting solutions, similar to the open source projects I know well. I look forward to learning more about the challenge of multiple languages.

January 13, 2012 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 15 blogs containing almost 5000 articles with John having written over 2000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 9.3 million times. John also recently launched two new companies: InfluentialNetworks.com and Physia.com, and is an advisor to docBeat. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and Google Plus. Healthcare Scene can be found on Google+ as well.

Clinical Data Abstraction to Meet Meaningful Use – Meaningful Use Monday


Many of our Meaningful Use Monday posts have focused on the details of the meaningful use regulations. In this post I want to highlight one of the strategies that I’ve seen a number of EHR vendors and other EHR-related companies employing to meet Meaningful Use. It’s an interesting concept that will be exciting to see play out.

The idea is what many are calling clinical data abstraction. I’ve actually heard some people refer to it as other names as well, but clinical data abstraction is the one that I like most.

I’ve seen two main types of clinical data abstraction: automated and manual. The first type is where a computer or server goes through the clinical content and, using some combination of natural language processing (NLP) and other technology, identifies the important clinical data elements in a narrative passage. The second type is where a trained medical professional pulls out the various clinical data elements.
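As a minimal sketch of the automated type, here is a toy abstraction pass that pulls a discrete smoking-status element out of a narrative note. Real NLP engines handle negation, synonyms, and context far more robustly than these invented regex rules.

```python
import re
from typing import Optional

def abstract_smoking_status(note: str) -> Optional[str]:
    """Toy automated abstraction: map a narrative note to a discrete smoking status."""
    text = note.lower()
    if re.search(r"\b(never smoked|non-?smoker)\b", text):
        return "never smoker"
    if re.search(r"\b(former smoker|quit smoking|ex-?smoker)\b", text):
        return "former smoker"
    if re.search(r"\bsmok(es|er|ing)\b", text):
        return "current smoker"
    return None  # nothing found; a human abstractor would need to review

note = "Patient is a former smoker, quit 10 years ago. Denies alcohol use."
print(abstract_smoking_status(note))  # former smoker
```

Note the `None` case at the end: that last line is essentially where the manual abstractor, or at least a human checker, still comes into the picture.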

I asked one vendor that is working on clinical data abstraction whether they thought automated, computer-generated abstraction would be the predominant means or whether some manual abstraction will always be necessary. They were confident that we could get there with automated computer abstraction of the clinical data. I’m not so confident. I think that, as with transcription, the computer could help speed up the abstraction, but there might still need to be someone who checks and verifies the results.

Why does this matter for meaningful use?
One of the challenges for meaningful use is that it really wants to know that you’ve documented certain discrete data elements. It’s not enough for you to just document the smoking status in a narrative paragraph. You have to not only document the smoking status, but your EMR has to have a way to report that you have documented the various meaningful use measures. In comes clinical data abstraction.

Proponents of clinical data abstraction argue that clinical data abstraction provides the best of both worlds: narrative with discrete data elements. It’s an interesting argument to make since many doctors love to see and read the narrative. However, all indications are that we need discrete data elements in order to improve patient care and see some of the other benefits of capturing all this healthcare data. In fact, the future Smart EMR that I wrote about before won’t be possible without these discrete healthcare data elements.

So far I believe that most people who have shown meaningful use haven’t used clinical data abstraction to meet the various measures. Still, it’s an intriguing story to tell and could be an interesting way for doctors to meet meaningful use while minimizing changes to their workflow.

Side Note: Clinical data abstraction is also becoming popular for scanning old paper charts into your EHR, but that’s a topic for a future post.

November 21, 2011


Study Shows Value of NLP in Pinpointing Quality Defects


For years, we’ve heard about how much clinical information is locked away in payer databases. Payers have offered to provide clinical summaries, electronic and otherwise. The problem is, it’s potentially inaccurate clinical information because it’s all based on billing claims. (Don’t believe me? Just ask “E-Patient” Dave de Bronkart.) It is for this reason that I don’t much trust “quality” ratings based on claims data.

Just how much of a difference there was between claims data and true clinical data hasn’t been so clear, though. Until today.

A paper just published online in the Journal of the American Medical Association found that searching EMRs with natural-language processing identified up to 12 times the number of pneumonia cases and twice the rate of kidney failure and sepsis as did searches based on billing codes—ironically called “patient safety indicators” in the study—for patients admitted for surgery at six VA hospitals. That means that hundreds of the nearly 3,000 patients whose records were reviewed had postoperative complications that didn’t show up in quality and performance reports.
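To make the gap concrete, here is a hypothetical comparison of a claims-based indicator against a simple text search of the notes. The patients, the keyword search, and the simplified code set are all made up for illustration; the study’s NLP is far more sophisticated than a keyword match.

```python
# Hypothetical illustration of the JAMA finding: a complication documented in
# the note narrative but never billed won't appear in claims-based "quality"
# measures. All patient data here are invented.
cases = [
    {"id": 1, "billing_codes": {"J18.9"}, "note": "Postoperative pneumonia treated with antibiotics."},
    {"id": 2, "billing_codes": set(),     "note": "Developed pneumonia on day 3, resolved before discharge."},
    {"id": 3, "billing_codes": set(),     "note": "Uncomplicated recovery."},
]

PNEUMONIA_CODES = {"J18.9"}  # simplified stand-in for a claims-based indicator

claims_hits = [c["id"] for c in cases if c["billing_codes"] & PNEUMONIA_CODES]
text_hits = [c["id"] for c in cases if "pneumonia" in c["note"].lower()]

print(claims_hits)  # [1]
print(text_hits)    # [1, 2] -- the text search surfaces the unbilled case
```

Case 2 is the whole story: the complication lives only in the narrative, so any measurement built on claims alone never sees it.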

Just think of the implications of that as we move toward Accountable Care Organizations and outcomes-based reimbursement. If healthcare continues to rely on claims data for “quality” measurement, facilities that don’t take steps to prevent complications and reduce hospital-acquired infections could score just as high—and earn just as much bonus money—as those hospitals truly committed to patient safety. If so, quality rankings will remain false, subjective measures of true performance.

So how do we remedy this? It may not be so easy. As Cerner’s Dr. David McCallie told Bloomberg News, it will take a lot of reprogramming to embed natural-language search into existing EMRs, and doing so could, according to the Bloomberg story, “destabilize software systems” and necessitate a lot more training for physicians.

I’m no technical expert, so I don’t know how NLP could destabilize software. From a layman’s perspective, it almost sounds as if vendors don’t want to put the time and effort into redesigning their products. Could it be?

I suppose there is still a chance that HHS could require NLP in Stage 3 of meaningful use—it’s not gonna happen for Stage 2—but I’m sure vendors and providers alike will say it’s too difficult. They may even say there just isn’t enough evidence; this JAMA study certainly would have to be replicated and corroborated. But are you willing to take the chance that the hospital you visit for surgery doesn’t have any real incentive to take steps to prevent complications?


August 25, 2011

Jeopardy!’s Watson Computer and Healthcare


I’m sure like many of you, I was completely intrigued by the demonstration of the Watson computer competing against the best Jeopardy! stars. It was amazing to watch not only how Watson was able to come up with the answer, but also how quickly it was able to reach the correct answer.

The hype at the IBM booth at HIMSS was really strong, since it had been announced that healthcare was one of the first places IBM wanted to implement the “Watson” technology (read more about the Watson technology in healthcare in this AP article). The most interesting conversation about Watson, though, was in the Nuance booth, where I was talking to Dr. Nick van Terheyden. The idea of combining the Watson technology with the voice recognition and natural language processing technologies that Nuance has available makes for a really compelling product offering.

One key point from the AP article above, also mentioned by Dr. Nick from Nuance, was that the Watson technology in healthcare would be applied differently than it was on Jeopardy!. In healthcare it wouldn’t try to make the decision and provide the correct answer for you. Instead, the Watson technology would provide a number of possible answers along with the likelihood that each one is the issue.
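As a rough sketch of that “possible answers with likelihoods” style of output (not how Watson actually works under the hood), imagine scoring candidate conditions by how many of their cues appear in the findings. The conditions, cue sets, and scoring rule below are invented for illustration.

```python
def rank_hypotheses(findings, knowledge):
    """Score each candidate condition by the fraction of its cues present in the findings."""
    scores = {}
    for condition, cues in knowledge.items():
        scores[condition] = len(findings & cues) / len(cues)
    # Return every candidate with its likelihood, best first -- the doctor decides.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

knowledge = {
    "influenza": {"fever", "cough", "myalgia"},
    "strep throat": {"fever", "sore throat", "no cough"},
}
findings = {"fever", "cough"}
for condition, score in rank_hypotheses(findings, knowledge):
    print(f"{condition}: {score:.2f}")  # influenza: 0.67, then strep throat: 0.33
```

The key design point is the return value: a ranked list with scores rather than a single answer, which is exactly what keeps the decision with the doctor.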

Some of this takes me back to Neil Versel’s posts about Clinical Decision Support (CDS) and doctors’ resistance to it. There’s no doubt that the Watson technology is another form of CDS, but there’s little about it that takes power away from the doctor’s decision making. It certainly could influence how a doctor provides care, but that’s a great thing. Not that I want doctors constantly second guessing themselves, or relying solely on the information that Watson or some related technology provides. It’s like most clinical tools: used properly, they can provide a great benefit to the doctor using them; used improperly, they can lead to issues. However, it’s quite clear that the Watson technology does little to take away from doctors’ decision making. In fact, I’d say it empowers doctors to do what they do better.

Personally I’m very excited to see technologies like Watson implemented in healthcare. Plus, I think we’re just at the beginning of what will be possible with this type of computing.

May 25, 2011


“I use EMR and so I am MY OWN transcriptionist.” – Doc at AAFP


I’m currently in Denver attending the AAFP conference. So far I’m really glad that I’ve come to the conference. It’s really fantastic to be surrounded by providers. It’s a stark contrast to HIMSS where you’re mostly surrounded by industry insiders and not that many providers. The practical questions the doctors ask are fascinating.

Of course, the comments they make are also fascinating. The title of this post is a comment one lady made in the David Kibbe session on Meaningful Use:
“I use EMR and so I am MY OWN transcriptionist.”

The problem with this comment is that it just doesn’t have to be true. It could be true depending on which EMR software you selected and how you implemented the EMR. However, that’s a choice you make when you choose and implement an EMR without any transcription.

I’ve actually seen a number of EMR vendors that have some really nice and deep integration between their software and transcription companies. There are even transcription companies that are building their own EMR software which obviously leverages the power of transcription.

Plus, many doctors happily use voice recognition like Dragon Naturally Speaking to still do what essentially amounts to transcription with their EMR.

Add in developments around natural language processing and the idea of preserving the narrative that is so valuable and interesting while capturing the granular data elements is a really interesting area of EMR development.

Of course, one of the problems with this idea is that many people like to use the savings on transcription costs as a way to justify the cost of purchasing and implementing an EMR. Obviously, you’ll need to look for other EMR benefits if you choose to continue transcription.

Just to round out the conversation, there are a wide variety of EMR vendors which each have their own unique style of documentation. Part of the problem is that many people don’t look much past the big “Jabba the Hutt” EMR vendors which are these ugly click interfaces that spit out a huge chunk of text that nobody wants to see. There’s plenty of EMR vendor options out there. Keep looking if you don’t like an EMR vendor’s documentation method.

September 30, 2010


Nuance and MModal – Natural Language Processing Expertise


Many of you might remember that one of the most interesting things I saw at HIMSS this year was the natural language processing that was being done by MModal. In case you don’t know what I’m talking about, check out this video interview of MModal that I did at HIMSS. I still think there really could be something to the idea of retaining the narrative that dictation provides while also pulling out the granular data elements in that narrative.

With that background, I found it really interesting when I was on LinkedIn the other day and saw that Dr. Nick van Terheyden, the same guy I interviewed in the video linked above, had switched companies. Nick’s profile on LinkedIn listed him as working for Nuance instead of MModal. I guess this shouldn’t have been a surprise. Nuance has a lot of skin in the natural language processing game, and it seemed to me that MModal had the technology that would make it a reality. So, now Dr. Nick van Terheyden is the Chief Medical Information Officer for Nuance.

I’d say this is a really good move by Nuance and I’m sure Nick is being richly rewarded as well. Nick was one of the most interesting people that I met at HIMSS this year. I’ll be certain to search him out at next year’s event to hear the whole story. Luckily, I also found out that Nick is blogging about voice recognition in healthcare on his blog Voice of the Doctor. I always love it when smart people like Nick start blogging.

July 23, 2010


“Practical Use” of an EHR Using Transcription


In a post on EMR and EHR about Transcriptionists Partnering with an EMR Vendor, I got an interesting comment from George Catuogno of StenTel about the various technologies that the Medical Transcription (MT) industry is using alongside EMR software. George called the use of transcription with an EHR “practical use” while still showing “meaningful use.” I think it’s a mistake for any EMR company to ignore the transcription industry.

Here’s George’s description of the medical transcription technologies which I think people will find interesting:

The Medical Transcription (MT) industry actually has done a lot to advance itself amidst HIT, particularly EHR technologies, while supporting narrative dictation, which for many physicians is still the preferred method of information capture because it’s fast and easy (efficient) and it tends to more comprehensively capture the patient “story”. DRT, BESR and NLP are three examples of this. I’ll save the best for last.

1. Discrete Reportable Transcription (DRT) is the process of converting narrative dictation into text documents with discrete data elements that can be easily imported into the appropriate placeholders inside an EMR.

2. Backend Speech Recognition (BESR) has been in play for years and allows physicians to dictate without engaging the computer for realtime correction. The correction is instead done retrospectively by a medical transcriptionist. Some speech rec technologies (like M*Modal) support data structuring. The gap remains, however, in getting applications written that readily move that structured information into EHRs like DRT can.

3. Natural Language Processing (NLP) trumps both of these solutions because it takes a narrative report, regardless of how it was created, and codifies it (SNOMED) for a number of extraction, analytics and reporting applications: Patient Summary, DRT feed into an EMR, Core Measures and PQRI, coding automation, interoperability, and support for the majority of Meaningful Use requirements. Secondary use opens up to clinical trials and other applications as well.

Overall, if the transcription industry can market itself and get its messaging out through the right channels regarding these innovations that augment transcription and keep physicians dictating, then transcription is a terrific EHR adoption facilitator, enables “practical use” along with Meaningful Use, and will remain relevant for the foreseeable future.
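George’s DRT description (item 1) can be sketched as a tiny pipeline: transcribed narrative in, discrete elements out, imported into the EMR’s placeholders. The field names, extraction rules, and chart structure below are invented for illustration; production DRT systems are far more thorough.

```python
import re

def abstract_elements(narrative):
    """Pull a couple of discrete elements out of a transcribed narrative."""
    elements = {}
    bp = re.search(r"(\d{2,3})/(\d{2,3})", narrative)
    if bp:
        elements["bp_systolic"], elements["bp_diastolic"] = map(int, bp.groups())
    if "no known drug allergies" in narrative.lower():
        elements["allergies"] = "NKDA"
    return elements

def import_into_emr(chart, elements):
    """Fill only the EMR placeholders that the abstraction produced."""
    chart.update(elements)
    return chart

narrative = "Blood pressure 128/82. No known drug allergies."
chart = {"bp_systolic": None, "bp_diastolic": None, "allergies": None}
print(import_into_emr(chart, abstract_elements(narrative)))
```

The narrative itself is kept alongside the chart, which is the point of DRT: the physician keeps dictating while the EMR still gets its discrete, reportable fields.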

May 12, 2010
