
The Pain of Recording Patient Risk Factors as Illuminated by Apixio (Part 2 of 2)

Posted on October 28, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous section of this article introduced Apixio’s analytics for payers in the Medicare Advantage program. Now we’ll step through how Apixio extracts relevant diagnostic data.

The technology of PDF scraping
Providers usually submit SOAP notes to the Apixio web site in the form of PDFs. This came as a surprise to me, after hearing about the extravagant efforts that have gone into CCDs and other formats such as the Blue Button project launched by the VA. Normally provided in an XML format, these documents claim to adhere to standards and offer a relatively gentle face to a computer program. In contrast, a PDF is one of the most challenging formats to parse: words and other characters are reduced to graphical symbols, while layout bears little relation to the human meaning of the data.
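To see why XML offers that gentler face, consider how little code it takes to pull a coded diagnosis out of an XML fragment. This is a toy illustration, not a conformant C-CDA parser, and the fragment below is a simplified stand-in for a real CCD:

```python
# Toy illustration: a coded diagnosis can be queried directly from an
# XML document. The fragment is a simplified stand-in, not a real CCD.
from xml.etree import ElementTree

ccd_fragment = """
<observation>
  <code code="233604007" codeSystem="2.16.840.1.113883.6.96"
        displayName="Pneumonia"/>
</observation>
"""

root = ElementTree.fromstring(ccd_fragment)
code = root.find("code")
print(code.get("code"), code.get("displayName"))  # 233604007 Pneumonia
```

No OCR, no layout analysis: the meaning of each element is declared right in the markup.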

Structured documents such as CCDs contain only about 20% of what CMS requires, and often are formatted in idiosyncratic ways, so that even the best CCDs would be no more informative than a Word document or PDF. But the main barrier to getting information, according to Schneider, is that Medicare Advantage works through the payers, and providers can be reluctant to give payers direct access to their EHR data. This reluctance springs from a variety of reasons, including worries about security, the feeling of being deluged by requests from payers, and a belief that the providers’ IT infrastructure cannot handle the burden of data extraction. Their stance has nothing to do with protecting patient privacy, because HIPAA explicitly allows providers to share patient data for treatment, payment, and operations, and that is exactly what they are doing when they give sensitive data to Apixio in PDF form. Thus, Apixio had to master OCR and text processing to serve that market.

Processing a PDF requires several steps, integrated within Apixio’s platform:

  1. Optical character recognition (OCR) to re-create the text from the page images in the PDF.

  2. Further structuring to recognize, for instance, when the PDF contains a table that needs to be broken up horizontally into columns, or constructs such as the field name “Diagnosis” followed by the desired data.

  3. Natural language processing to find the grammatical patterns in the text. This processing naturally must understand medical terminology, common abbreviations such as CHF, and codings.

  4. Analytics that pull out the data relevant to risk and present it in a usable format to a human coder.
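To make the pipeline concrete, here is a minimal sketch of steps 1, 2, and 4 in Python. Apixio’s actual implementation is proprietary; the library choices (pdf2image, pytesseract) and the simple regex standing in for real structuring and analytics are my own assumptions:

```python
# A minimal, hypothetical sketch of the pipeline: OCR each page of a
# scanned PDF, then pull out "Diagnosis: <value>" constructs for a
# human coder to review. Real structuring and analytics go far beyond
# this regex; negation and context handling are omitted entirely.
import re
from pdf2image import convert_from_path  # renders PDF pages as images
import pytesseract                       # Tesseract OCR bindings

def extract_diagnoses(pdf_path):
    """Return the text following any 'Diagnosis' field label."""
    diagnoses = []
    for page in convert_from_path(pdf_path):
        text = pytesseract.image_to_string(page)        # step 1: OCR
        # step 2 (crudely): recognize "Diagnosis: ..." constructs
        diagnoses += re.findall(r"Diagnosis[:\s]+(.+)", text)
    return diagnoses

if __name__ == "__main__":
    for dx in extract_diagnoses("soap_note.pdf"):
        print(dx)   # step 4: surface candidates to a human coder
```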

Apixio can accept dozens of notes covering the patient’s history. It often turns up diagnoses that “fell through the cracks,” as Schneider puts it. The diagnostic information Apixio returns can be used by medical professionals to generate reports for Medicare, but it has other uses as well. Apixio tells providers when they are treating a patient for an illness that does not appear in their master database. Providers can use that information to deduce when patients are left out of key care programs that can help them. In this way, the information can improve patient care. One coder they followed tripled her rate of reviewing patient charts using Apixio’s service.

Caught between past and future
If the Apixio approach to culling risk factors appears round-about and overwrought, like bringing in a bulldozer to plant a rosebush, think back to the role of historical factors in health care. Given the ways doctors have been taught to record medical conditions, and the tools available to them, Apixio plays a small part in promoting the progressive role of accountable care.

Hopefully, changes to the health care field will permit more direct ways to deliver accountable care in the future. Medical schools will convey the requirements of accountable care to their students and teach them how to record data that satisfies these requirements. Technologies will make it easier to record risk factors the first time around. Quality measures and the data needed by policy-makers will be clarified. And most of all, the advantages of collaboration will lead providers and payers to form business agreements or even merge, at which point the EHR data will be opened to the payer. The contortions providers currently go through in trying to achieve 21st-century quality remind us of where the field needs to go.

The Pain of Recording Patient Risk Factors as Illuminated by Apixio (Part 1 of 2)

Posted on October 27, 2016 | Written By Andy Oram

Many of us strain against the bonds of tradition in our workplace, harboring a secret dream that the industry could start afresh, streamlined and free of hampering traditions. But history weighs on nearly every field, including my own (publishing) and the one I cover in this blog (health care). Applying technology in such a field often involves the legerdemain of extracting new value from the imperfect records and processes with deep roots.

Along these lines, when Apixio aimed machine learning and data analytics at health care, they unveiled a business model based on measuring risk more accurately so that Medicare Advantage payments to health care payers and providers reflect their patient populations more appropriately. Apixio’s tools permit improvements to patient care, as we shall see. But the core of the platform they offer involves uploading SOAP notes, usually in PDF form, and extracting diagnostic codes that coders may have missed or that may not be supportable. Machine learning techniques extract the diagnostic codes for each patient over the entire history provided.

Many questions jostled in my mind as I talked to Apixio CTO John Schneider. Why are these particular notes so important to the Centers for Medicare & Medicaid Services (CMS)? Why don’t doctors keep track of relevant diagnoses as they go along in an easy-to-retrieve manner that could be pipelined straight to Medicare? Can’t modern EHRs, after seven years of Meaningful Use, provide better formats than PDFs? I asked him these things.

A mini-seminar ensued on the evolution of health care and its documentation. A combination of policy changes and persistent cultural habits has tangled up the various sources of information over many years. In the following sections, I’ll look at each aspect of the documentation bouillabaisse.

The financial role of diagnosis and risk
Accountable care, in varying degrees of sophistication, calculates the risk of patient populations in order to gradually replace fee-for-service with payments that reflect how adeptly the health care provider has treated the patient. Accountable care lay behind the Affordable Care Act and got an extra boost at the beginning of 2016 when CMS took on the “goal of tying 30 percent of traditional, or fee-for-service, Medicare payments to alternative payment models, such as ACOs, by the end of 2016 — and 50 percent by the end of 2018.”

Although many accountable care contracts–like those of the much-maligned 1970s Managed Care era–ignore differences between patients, more thoughtful programs recognize that accurate and fair payments require measurement of how much risk the health care provider is taking on–that is, how sick their patients are. Thus, providers benefit from scrupulously complete documentation (having learned that upcoding and sloppiness will no longer be tolerated and will lead to significant fines, according to Schneider). And this would seem to provide an incentive for the provider to capture every nuance of a patient’s condition in a clearly coded, structured way.

But this is not how doctors operate, according to Schneider. They rebel when presented with dozens of boxes to check off, as crude EHRs tend to present things. They stick to the free-text SOAP note (fields for subjective observations, objective observations, assessment, and plan) that has been taught for decades. It’s often up to post-processing tools to code exactly what’s wrong with the patient. Sometimes the SOAP notes don’t even distinguish the four parts in electronic form, but exist as free-flowing Word documents.
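The gap between the two forms is easy to picture. In this illustrative sketch (the clinical content is invented), the free text is what a doctor writes, while the structured record is what post-processing tools must derive from it:

```python
# Illustrative only: the same SOAP note as free text (what the doctor
# writes) versus a structured record (what coding tools must derive).
# The clinical content here is invented.
from dataclasses import dataclass

free_text_note = """
S: Pt reports increased shortness of breath on exertion.
O: BP 142/90, bilateral ankle edema, crackles at lung bases.
A: Congestive heart failure, worsening.
P: Increase furosemide to 40 mg daily; follow up in 2 weeks.
"""

@dataclass
class SoapNote:
    subjective: str
    objective: str
    assessment: str   # where coders hunt for supportable diagnoses
    plan: str

structured = SoapNote(
    subjective="increased shortness of breath on exertion",
    objective="BP 142/90; bilateral ankle edema; crackles at lung bases",
    assessment="Congestive heart failure, worsening",
    plan="Increase furosemide to 40 mg daily; follow up in 2 weeks",
)
```

When the note arrives as a free-flowing Word document or PDF, even the S/O/A/P boundaries in the sketch above may be missing.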

A number of key diagnoses come from doctors who have privileges at the hospital but come in only sporadically to do consultations, and who therefore neither understand the layout of the EHR nor attempt to use what little structure it provides. Another reason codes get missed or don’t easily surface is that doctors are overwhelmed, so that accurately recording diagnostic information in a structured way is a significant extra burden, an essentially clerical function loaded onto these highly skilled healthcare professionals. Thus, extracting diagnostic information often involves “reading between the lines,” as Schneider puts it.

For Medicare Advantage payments, CMS wants a precise delineation of properly coded diagnoses in order to discern the risk presented by each patient. This is where Apixio comes in: by mining the free-text SOAP notes for information that can enhance such coding. We’ll see what they do in the next section of this article.

EHR Charting in Another Language

Posted on January 13, 2012 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I recently started to think about some of the implications associated with multiple languages in an EHR. One of my readers asked me how EHR vendors correlated data from those charting in Spanish and those charting in English. My first response to this question was, “How many doctors chart in Spanish?” Yes, this was a very US-centric response, since obviously I know that almost all of the doctors in Latin America and other Spanish-speaking countries chart in Spanish, but I wonder how many doctors in the US chart in Spanish. I expect the answer is A LOT more than I realize.

Partial evidence of this is that about a year ago HIMSS announced a Latino Health IT Initiative. Out of that initiative there is now a HIMSS Latino Community web page and also a HIMSS Latino Community Workshop at the HIMSS Annual Conference in Las Vegas. I’m going to have to find some time to try and learn more about the HIMSS Latino Community. My Espanol is terrible, but I know enough that I think I could enjoy the event.

After my initial reaction, I then started wondering how you would correlate data from another language. So much for coordinated care. I wonder what a doctor does if he asks for his patient’s record and it is all in Spanish. That’s great if all of your doctors know Spanish, but in the US at least I don’t know of any community that has doctors who know Spanish in every specialty. How do they get around it? I don’t think those translation services you can call are much help.

Once we start talking about automated patient records, the language issue becomes more of a problem. Then again, maybe part of that problem is solved if you could use standards like ICD-10, SNOMED, etc. A code is a code is a code regardless of what language it is, and computers are great at matching up those codes. If these standards are not used, though, then forget trying to connect the data even through Natural Language Processing (NLP). Sure, the NLP could be bilingual, but has anyone done that? My guess is not.
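As a toy illustration of that point, matching on codes sidesteps the language of the narrative entirely. The lookup table below stands in for a real terminology service:

```python
# Toy illustration (not a real terminology service): the same ICD-10
# code matches no matter which language the chart label was written in.
TERM_TO_ICD10 = {
    "type 2 diabetes mellitus": "E11",   # English charting
    "diabetes mellitus tipo 2": "E11",   # Spanish charting
}

def codes_match(term_a, term_b):
    """True if two chart labels resolve to the same ICD-10 code."""
    code_a = TERM_TO_ICD10.get(term_a.lower())
    code_b = TERM_TO_ICD10.get(term_b.lower())
    return code_a is not None and code_a == code_b

print(codes_match("Type 2 diabetes mellitus", "Diabetes mellitus tipo 2"))  # True
```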

All of this might start to really matter more when we’re talking about public health issues as we aggregate data internationally. Language becomes a much larger issue in this context and so it begs for an established set of standards for easy comparison.

I’d be interested to hear about other stories and experiences with EHR charting in Spanish or another language. I bet the open source EHRs have some interesting solutions, similar to the open source projects I know well. I look forward to learning more about the challenge of multiple languages.

Clinical Data Abstraction to Meet Meaningful Use – Meaningful Use Monday

Posted on November 21, 2011 | Written By John Lynn

In many posts of our Meaningful Use Monday series, we focused on the details of the meaningful use regulations. In this post I want to highlight one of the strategies that I’ve seen a bunch of EHR vendors and other EHR related companies employing to meet Meaningful Use. It’s an interesting concept that will be exciting to see play out.

The idea is what many are calling clinical data abstraction. I’ve actually heard some people refer to it as other names as well, but clinical data abstraction is the one that I like most.

I’ve seen two main types of clinical data abstraction. One is automated clinical data abstraction. The other is manual clinical data abstraction. The first type is where your computer or server goes through the clinical content and, using some combination of natural language processing (NLP) and other technologies, identifies the important clinical data elements in a narrative passage. The second type is where a trained medical professional pulls out the various clinical data elements.

I asked one vendor that is working on clinical data abstraction whether they thought that automated, computer-generated clinical abstraction would be the predominant means or whether some manual abstraction will always be necessary. They were confident that we could get there with automated computer abstraction of the clinical data. I’m not so confident. I think that, as with transcription, the computer could speed up the abstraction, but there might still need to be someone who checks and verifies the results.

Why does this matter for meaningful use?
One of the challenges for meaningful use is that it really wants to know that you’ve documented certain discrete data elements. It’s not enough for you to just document the smoking status in a narrative paragraph. You not only have to document the smoking status; your EMR has to have a way to report that you have documented the various meaningful use measures. In comes clinical data abstraction.
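Here is a minimal sketch of what automated abstraction of that one element might look like. Real systems use full NLP with negation and context handling; these patterns and categories are purely illustrative assumptions:

```python
# Minimal sketch of automated abstraction for one Meaningful Use
# element: deriving a discrete, reportable smoking status from a
# narrative note. The patterns and categories are illustrative only;
# real systems handle negation, context, and many more phrasings.
import re

SMOKING_PATTERNS = [
    (r"never smok|non-?smoker", "never smoker"),
    (r"former smoker|quit smoking", "former smoker"),
    (r"current(ly)? smok", "current smoker"),
]

def abstract_smoking_status(note_text):
    """Return a discrete smoking status, or None to flag human review."""
    text = note_text.lower()
    for pattern, status in SMOKING_PATTERNS:
        if re.search(pattern, text):
            return status
    return None  # a human abstractor would review this chart

print(abstract_smoking_status(
    "Patient is a former smoker, quit smoking 5 years ago."))
# -> former smoker
```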

Proponents argue that clinical data abstraction provides the best of both worlds: narrative with discrete data elements. It’s an interesting argument to make, since many doctors love to see and read the narrative. However, all indications are that we need discrete data elements in order to improve patient care and see some of the other benefits of capturing all this healthcare data. In fact, the future Smart EMR that I wrote about before won’t be possible without these discrete healthcare data elements.

So far I believe that most people who have demonstrated meaningful use haven’t used clinical data abstraction to meet the various meaningful use measures. Still, it’s an intriguing story to tell and could be an interesting way for doctors to meet meaningful use while minimizing changes to their workflow.

Side Note: Clinical data abstraction is also becoming popular when scanning old paper charts into your EHR, though that’s a topic for a future post.

Study Shows Value of NLP in Pinpointing Quality Defects

Posted on August 25, 2011 | Written By

For years, we’ve heard about how much clinical information is locked away in payer databases. Payers have offered to provide clinical summaries, electronic and otherwise. The problem is, it’s potentially inaccurate clinical information because it’s all based on billing claims. (Don’t believe me? Just ask “E-Patient” Dave de Bronkart.) It is for this reason that I don’t much trust “quality” ratings based on claims data.

Just how much of a difference there was between claims data and true clinical data hasn’t been so clear, though. Until today.

A paper just published online in the Journal of the American Medical Association found that searching EMRs with natural-language processing identified up to 12 times the number of pneumonia cases and twice the rate of kidney failure and sepsis as did searches based on billing codes (ironically called “patient safety indicators” in the study) for patients admitted for surgery at six VA hospitals. That means that hundreds of the nearly 3,000 patients whose records were reviewed had postoperative complications that didn’t show up in quality and performance reports.
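The mechanics of the comparison are easy to picture. In the toy sketch below, a keyword match stands in for real NLP (which also handles negation and context), and the codes and note text are invented examples:

```python
# Toy contrast between claims-based and note-based detection of a
# postoperative complication. The keyword match stands in for real
# NLP, which must handle negation ("no evidence of pneumonia") and
# context; the codes and note text here are invented examples.
BILLING_CODES_FOR_PNEUMONIA = {"J18.9"}  # ICD-10: pneumonia, unspecified

def found_in_claims(claim_codes):
    return bool(claim_codes & BILLING_CODES_FOR_PNEUMONIA)

def found_in_notes(note_text):
    return "pneumonia" in note_text.lower()

note = "POD 3: new infiltrate on CXR, treating for hospital-acquired pneumonia."
claims = {"Z98.890"}  # the complication never made it into the bill

print(found_in_claims(claims))  # False: invisible to claims-based quality scores
print(found_in_notes(note))     # True: visible to an NLP chart review
```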

Just think of the implications of that as we move toward Accountable Care Organizations and outcomes-based reimbursement. If healthcare continues to rely on claims data for “quality” measurement, facilities that don’t take steps to prevent complications and reduce hospital-acquired infections could score just as high—and earn just as much bonus money—as those hospitals truly committed to patient safety. If so, quality rankings will remain false, subjective measures of true performance.

So how do we remedy this? It may not be so easy. As Cerner’s Dr. David McCallie told Bloomberg News, it will take a lot of reprogramming to embed natural-language search into existing EMRs, and doing so could, according to the Bloomberg story, “destabilize software systems” and necessitate a lot more training for physicians.

I’m no technical expert, so I don’t know how NLP could destabilize software. From a layman’s perspective, it almost sounds as if vendors don’t want to put the time and effort into redesigning their products. Could it be?

I suppose there is still a chance that HHS could require NLP in Stage 3 of meaningful use—it’s not gonna happen for Stage 2—but I’m sure vendors and providers alike will say it’s too difficult. They may even say there just isn’t enough evidence; this JAMA study certainly would have to be replicated and corroborated. But are you willing to take the chance that the hospital you visit for surgery doesn’t have any real incentive to take steps to prevent complications?


Jeopardy!’s Watson Computer and Healthcare

Posted on May 25, 2011 | Written By John Lynn

I’m sure like many of you, I was completely intrigued by the demonstration of the Watson computer competing against the best Jeopardy! stars. It was amazing to watch not only how Watson was able to come up with the answer, but also how quickly it was able to reach the correct answer.

The hype at the IBM booth at HIMSS was really strong, since it had been announced that healthcare was one of the first places that IBM wanted to work on implementing the “Watson” technology (read more about the Watson technology in healthcare in this AP article). However, I found the most interesting conversation about Watson in the Nuance booth when I was talking to Dr. Nick van Terheyden. The idea of combining the Watson technology with the voice recognition and natural language processing technologies that Nuance has available makes for a really compelling product offering.

One of the key points in the AP article above, also mentioned by Dr. Nick from Nuance, was that the Watson technology in healthcare would be applied differently than it was on Jeopardy!. In healthcare it wouldn’t try to make the decision and provide the correct answer for you. Instead, the Watson technology would provide a number of possible answers along with the likelihood that each one is the issue.
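In other words, the output would look less like a verdict and more like a ranked differential. A hypothetical sketch, with invented candidates and scores (a real system would compute them from the patient’s findings):

```python
# Hypothetical sketch of decision support in the style described for
# Watson: ranked candidate diagnoses with likelihoods, not a single
# answer. The candidates and scores are invented; a real system would
# derive them from the findings rather than ignore them as this stub does.
def differential(findings):
    candidates = [
        ("Congestive heart failure", 0.62),
        ("Pneumonia", 0.21),
        ("Pulmonary embolism", 0.09),
    ]
    return sorted(candidates, key=lambda c: c[1], reverse=True)

for diagnosis, likelihood in differential(["dyspnea", "ankle edema"]):
    print(f"{likelihood:.0%}  {diagnosis}")   # the doctor makes the final call
```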

Some of this takes me back to Neil Versel’s posts about Clinical Decision Support and doctors’ resistance to CDS. There’s no doubt that the Watson technology is another form of Clinical Decision Support, but there’s little about the Watson technology which takes power away from the doctor’s decision making. It certainly could have an influence on a doctor’s ability to provide care, but that’s a great thing. Not that I want doctors constantly second-guessing themselves. Not that I want doctors relying solely on the information that Watson or some other related technology provides. It’s like most clinical tools: when used properly, they can provide a great benefit to the doctor using them; when used improperly, they can lead to issues. However, it’s quite clear that Watson technology does little to take away from the decision making of doctors. In fact, I’d say it empowers doctors to do what they do better.

Personally I’m very excited to see technologies like Watson implemented in healthcare. Plus, I think we’re just at the beginning of what will be possible with this type of computing.

“I use EMR and so I am MY OWN transcriptionist.” – Doc at AAFP

Posted on September 30, 2010 | Written By John Lynn

I’m currently in Denver attending the AAFP conference. So far I’m really glad that I’ve come to the conference. It’s really fantastic to be surrounded by providers. It’s a stark contrast to HIMSS where you’re mostly surrounded by industry insiders and not that many providers. The practical questions the doctors ask are fascinating.

Of course, the comments they make are also fascinating. The title of this post is a comment one lady made in the David Kibbe session on Meaningful Use:
“I use EMR and so I am MY OWN transcriptionist.”

The problem with this comment is that it just doesn’t have to be true. It may be true depending on which EMR software you selected and how you implemented the EMR, but that’s a choice you make when you choose and implement an EMR without any transcription.

I’ve actually seen a number of EMR vendors that have some really nice and deep integration between their software and transcription companies. There are even transcription companies that are building their own EMR software which obviously leverages the power of transcription.

Plus, many doctors happily use voice recognition like Dragon NaturallySpeaking to still do what essentially amounts to transcription with their EMR.

Add in developments around natural language processing, and preserving the narrative that is so valuable and interesting while capturing the granular data elements becomes a really promising area of EMR development.

Of course, one of the problems with this idea is that many people like to use the savings on transcription costs as a way to justify the cost of purchasing and implementing an EMR. Obviously, you’ll need to look for other EMR benefits if you choose to continue transcription.

Just to round out the conversation, there are a wide variety of EMR vendors which each have their own unique style of documentation. Part of the problem is that many people don’t look much past the big “Jabba the Hutt” EMR vendors, with their ugly click interfaces that spit out a huge chunk of text that nobody wants to read. There are plenty of EMR vendor options out there. Keep looking if you don’t like an EMR vendor’s documentation method.

Nuance and MModal – Natural Language Processing Expertise

Posted on July 23, 2010 | Written By John Lynn

Many of you might remember that one of the most interesting things I saw at HIMSS this year was the natural language processing that was being done by MModal. In case you don’t know what I’m talking about, check out this video interview of MModal that I did at HIMSS. I still think there really could be something to the idea of retaining the narrative that dictation provides while also pulling out the granular data elements in that narrative.

With that background, I found it really interesting when I was on LinkedIn the other day and saw that Dr. Nick van Terheyden, the same guy I interviewed in the video linked above, had switched companies. Nick’s profile on LinkedIn had him listed as working for Nuance instead of MModal. I guess this shouldn’t have been a surprise. Nuance has a lot of skin in the natural language processing game, and it seemed to me that MModal had the technology that would make it a reality. So, now Dr. Nick van Terheyden is the Chief Medical Information Officer for Nuance.

I’d say this is a really good move by Nuance and I’m sure Nick is being richly rewarded as well. Nick was one of the most interesting people that I met at HIMSS this year. I’ll be certain to search him out at next year’s event to hear the whole story. Luckily, I also found out that Nick is blogging about voice recognition in healthcare on his blog Voice of the Doctor. I always love it when smart people like Nick start blogging.

“Practical Use” of an EHR Using Transcription

Posted on May 12, 2010 | Written By John Lynn

In a post on EMR and EHR about Transcriptionists Partnering with an EMR Vendor, I got an interesting comment by George Catuogno from StenTel about the various technologies that the Medical Transcription (MT) industry is using alongside EMR software. George called the use of transcription with an EHR “practical use” while still showing “meaningful use.” I think it’s a mistake for any EMR company to ignore the transcription industry.

Here’s George’s description of the medical transcription technologies which I think people will find interesting:

The Medical Transcription (MT) industry actually has done a lot to advance itself amidst HIT, particularly EHR technologies, while supporting narrative dictation, which for many physicians is still the preferred method of information capture because it’s fast and easy (efficient) and it tends to more comprehensively capture the patient “story”. DRT, BESR and NLP are three examples of this. I’ll save the best for last.

1. Discrete Reportable Transcription (DRT) is the process of converting narrative dictation into text documents with discrete data elements that can be easily imported into the appropriate placeholders inside an EMR.

2. Backend Speech Recognition (BESR) has been in play for years; it allows physicians to dictate without engaging the computer for real-time correction. The correction is instead done retrospectively by a medical transcriptionist. Some speech rec technologies (like M*Modal) support data structuring. The gap remains, however, in getting applications written that readily move that structured information into EHRs like DRT can.

3. Natural Language Processing (NLP) trumps both of these solutions because it takes a narrative report, regardless of how it was created, and codifies it (SNOMED) for a number of extraction, analytics and reporting applications: Patient Summary, DRT feed into an EMR, Core Measures and PQRI, coding automation, interoperability, and support for the majority of Meaningful Use requirements. Secondary use opens up to clinical trials and other applications as well.

Overall, if the transcription industry can market itself and get its messaging out through the right channels regarding these innovations that augment transcription and keep physicians dictating, then transcription is a terrific EHR adoption facilitator, enables “practical use” along with Meaningful Use, and will remain relevant for the foreseeable future.
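To make George’s third point concrete, here is a toy sketch of NLP-style codification: narrative phrases mapped to SNOMED CT codes that downstream extraction and reporting can consume. The lookup table is an illustrative stand-in; a real engine uses a terminology service and linguistic analysis:

```python
# Toy codification in the spirit of George's NLP point: map narrative
# phrases to SNOMED CT codes for downstream extraction and reporting.
# A real engine uses a terminology service, not a hand-made lookup table.
import re

SNOMED = {
    "hypertension": "38341003",
    "type 2 diabetes": "44054006",
    "asthma": "195967001",
}

def codify(narrative):
    """Return (phrase, SNOMED code) pairs found in a narrative report."""
    text = narrative.lower()
    return [(term, code) for term, code in SNOMED.items()
            if re.search(r"\b" + re.escape(term) + r"\b", text)]

print(codify("History of hypertension and type 2 diabetes, well controlled."))
# -> [('hypertension', '38341003'), ('type 2 diabetes', '44054006')]
```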