
Alexa Can Truly Give Patients a Voice in Their Health Care (Part 3 of 3)

Posted on October 20, 2017 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Earlier parts of this article set the stage for understanding what the Alexa Diabetes Challenge is trying to achieve and how some finalists interpreted the mandate. We examine three more finalists in this final section.

DiaBetty from the University of Illinois-Chicago

DiaBetty focuses on a single, important aspect of diabetes: the effect of depression on the course of the disease. This project, developed by the Department of Psychiatry at the University of Illinois-Chicago, does many of the things that other finalists in this article do–accepting data from EHRs, dialoguing with the individual, presenting educational materials on nutrition and medication, etc.–but with the emphasis on inquiring about mood and handling the impact that depression-like symptoms can have on behavior that affects Type 2 diabetes.

Olu Ajilore, Associate Professor and co-director of the CoNECt lab, told me that his department benefited greatly from close collaboration with bioengineering and computer science colleagues who, before DiaBetty, worked on another project that linked computing with clinical needs. Although they used some built-in capabilities of Alexa, they may move to Lex or another AI platform and build a stand-alone device. Their next step is to conduct clinical trials, checking the effect of DiaBetty on health outcomes such as medication compliance, visits, and blood sugar levels, as well as on cost reductions.

T2D2 from Columbia University

Just as DiaBetty explores the impact of mood on diabetes, T2D2 (which stands for “Taming Type 2 Diabetes, Together”) focuses on nutrition. Far more than sugar intake is involved in the health of people with diabetes. Elliot Mitchell, a PhD student who led the T2D2 team under Assistant Professor Lena Mamykina in the Department of Biomedical Informatics, told me that the balance of macronutrients (carbohydrates, fat, and protein) is important.

T2D2 is currently a prototype, developed as a combination of an Alexa Skill and a chatbot based on Lex. The Alexa Skills Kit handles voice interactions. Both the Skill and the chatbot communicate with a back end that handles accounts and logic. Although T2D2 draws on related Columbia University technology in diabetes self-management, both the NLP and the voice interface were developed specifically for the Alexa Diabetes Challenge. The T2D2 team included people from the disciplines of human-computer interaction, data science, nursing, and behavioral nutrition.

The user invokes Alexa to tell it blood sugar values and the contents of meals. T2D2, in response, offers recipe recommendations and other advice. Like many of the finalists in this article, it looks back at meals over time, sees how combinations of nutrients matched changes in blood sugar, and personalizes its food recommendations.
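
To make the division of labor concrete, here is a minimal sketch of how a blood-sugar logging interaction could look as an AWS Lambda handler behind an Alexa Skill. The intent name, slot name, and store_reading helper are illustrative assumptions, not T2D2's actual code.

```python
# Sketch of an Alexa Skill back end for logging a blood sugar reading.
# "LogBloodSugarIntent" and the "Reading" slot are hypothetical names.
import json

def lambda_handler(event, context):
    """Handle an Alexa Skills Kit request and acknowledge a blood sugar reading."""
    request = event.get("request", {})
    if request.get("type") == "IntentRequest" and \
            request.get("intent", {}).get("name") == "LogBloodSugarIntent":
        # Slot values arrive as strings; a real skill would validate and persist them.
        reading = request["intent"].get("slots", {}).get("Reading", {}).get("value")
        store_reading(reading)          # hypothetical call to the account back end
        speech = f"Got it. I recorded a blood sugar of {reading}."
    else:
        speech = "You can tell me a blood sugar reading or what you ate."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

def store_reading(value):
    # Placeholder: a production service would write to the shared back end.
    print(json.dumps({"blood_sugar": value}))
```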

For each patient, before it gets to know that patient’s diet, T2D2 can make food recommendations based on what is popular in their ZIP code. It can change these as it watches the patient’s choices and records the patient’s reactions to its recommendations (for instance, “I don’t like that food”).

Data is also anonymized and aggregated for both recommendations and future research.

The care team and family caregivers are also involved, although less intensively than in some of the other finalists’ designs. The patient can offer caregivers a one-page report showing a plot of blood sugar by time and day for the previous two weeks, along with goals, progress made, and questions. The patient can also connect her account and share key medical information with family and friends, a feature called the Supportive Network.

The team’s next phase is to run studies evaluating some of the assumptions they made while developing T2D2, and to improve it for eventual release into the field.

Sugarpod from Wellpepper

I’ll finish this article with the winner of the challenge, already covered by an earlier article. Since the publication of the article, according to the founder and CEO of Wellpepper, Anne Weiler, the company has integrated some of Sugarpod’s functions into a bathroom scale. When a person stands on the scale, it takes an image of their feet and uploads it to sites that both the individual and their doctor can view. A machine learning image classifier can check the photo for problems such as diabetic foot ulcers. The scale interface can also ask the patient for quick information such as whether they took their medication and what their blood sugar is. Extended conversations are avoided, under the assumption that people don’t want to have them in the bathroom. The company designed its experiences to be integrated throughout the person’s day: stepping on the scale and answering a few questions in the morning, interacting with the care plan on a mobile device at work, and checking notifications and messages with an Echo device in the evening.
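
For readers curious about the image-classification step, here is a rough sketch of the kind of check described above. It is not Wellpepper’s model; it assumes a binary ulcer/no-ulcer classifier trained elsewhere and exported as TorchScript at a hypothetical path.

```python
# Illustrative foot-photo screening step; the model file and threshold are assumptions.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def screen_foot_photo(image_path, model_path="foot_ulcer_classifier.pt"):
    """Return the probability that a foot photo shows a possible ulcer."""
    model = torch.jit.load(model_path)      # hypothetical trained binary classifier
    model.eval()
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(batch)               # assumed shape: [1, 1]
        prob = torch.sigmoid(logits)[0, 0].item()
    return prob

# A score above a clinically tuned threshold would flag the image for clinician review.
```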

Any machine that takes pictures can arouse worry when installed in a bathroom. During the challenge, after talking to people with diabetes, Wellpepper learned to add a light that goes on when the camera is taking a picture.

This kind of responsiveness to patient representatives in the field will determine the success of each of the finalists in this challenge. They all strive for behavioral change through connected health, and this strategy is completely reliant on engagement, trust, and collaboration by the person with a chronic illness.

The potential of engagement through voice is just beginning to be tapped. There is evidence, for instance, that serious illnesses can be diagnosed by analyzing voice patterns. As we come up on the annual Connected Health Conference this month, I will be interested to see how many participating developers share the common themes that turned up during the Alexa Diabetes Challenge.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 2 of 3)

Posted on October 19, 2017 | Written By Andy Oram

The first part of this article introduced the problems of computer interfaces in health care and mentioned some current uses for natural language processing (NLP) for apps aimed at clinicians. I also summarized the common goals, problems, and solutions I found among the five finalists in the Alexa Diabetes Challenge. This part of the article shows the particular twist given by each finalist.

My GluCoach from HCL America in Partnership With Ayogo

There are two levels from which to view My GluCoach. On one level, it’s an interactive tool exemplifying one of the goals I listed earlier–intense engagement with patients over daily behavior–as well as the theme of comprehensiveness. The interactions that My GluCoach offers were divided into three types by Abhishek Shankar, a Vice President at HCL Technologies America:

  • Teacher: the service can answer questions about diabetes and pull up stored educational materials

  • Coach: the service can track behavior by interacting with devices and prompt the patient to eat differently or go out for exercise. In addition to asking questions, a patient can set up Alexa to deliver alarms at particular times, a feature My GluCoach uses to deliver advice.

  • Assistant: the service can provide conveniences to the patient, such as ordering a cab to take her to an appointment.

On a higher level, My GluCoach fits into broader services offered to health care institutions by HCL Technologies as part of a population health program. In creating the service HCL partnered with Ayogo, which develops a mobile platform for patient engagement and tracking. HCL has also designed the service as a general health care platform that can be expanded over the next six to twelve months to cover medical conditions besides diabetes.

Two other themes I discussed earlier, interactions with outside data and the use of machine learning, are key to My GluCoach. For its demo at the challenge, My GluCoach took data about exercise from a Fitbit. It can potentially work with any device that shares information, and HCL plans to integrate the service with common EHRs. As My GluCoach gets to know the individual who uses it over months and years, it can tailor its responses more and more intelligently to the learning style and personality of the patient.
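
As a purely illustrative sketch (not HCL’s or Ayogo’s code), a device-driven coaching prompt might look something like this, assuming daily step counts and glucose readings have already been pulled from the integrations described above.

```python
# Toy coaching logic driven by recent activity and glucose data; thresholds are invented.
from statistics import mean

def coaching_prompt(step_history, glucose_history, step_goal=7000):
    """Pick a short coaching message from the last week of device data."""
    recent_steps = mean(step_history[-7:])
    recent_glucose = mean(glucose_history[-7:])
    if recent_steps < step_goal and recent_glucose > 180:
        return ("Your readings have been running high and activity is down. "
                "How about a 20-minute walk after lunch today?")
    if recent_steps >= step_goal:
        return "Nice work staying active this week. Keep it up!"
    return "A short walk today would help you reach your step goal."

print(coaching_prompt([4000, 5200, 4800, 6100, 3900, 4500, 5000],
                      [190, 185, 200, 178, 192, 188, 195]))
```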

Patterns of eating, medication compliance, and other data are not the only input to machine learning. Shankar pointed out that different patients require different types of interventions. Some simply want to be given concrete advice and told what to do. Others want to be presented with information and then make their own decisions. My GluCoach will hopefully adapt to whatever style works best for the particular individual. This affective response, together with a general tone of humor and friendliness, is meant to win the trust of the individual.

PIA from Ejenta

PIA, which stands for “personal intelligent agent,” manages care plans, delivering information to the affected patients as well as their care teams and concerned relatives. It collects medical data and draws conclusions that allow it to generate alerts if something seems wrong. Patients can also ask PIA how they are doing, and the agent will respond with personalized feedback and advice based on what the agent has learned about them and their care plan.

I talked to Rachna Dhamija, founder and CEO of Ejenta, who worked on the team that developed PIA. (The name Ejenta is a version of the word “agent” that entered the Bengali language as slang.) She said that the AI technology had been licensed from NASA, which had developed it to monitor astronauts’ health and other aspects of flights. Ejenta turned it into a care coordination tool, with web and mobile interfaces, used at a major HMO to treat patients with chronic heart failure and high-risk pregnancies. Ejenta expanded the platform to include an Alexa interface for the diabetes challenge.

As a care management tool, PIA records targets such as glucose levels, goals, medication plans, nutrition plans, and action parameters such as how often to take measurements using the devices. Each caregiver, along with the patient, has his or her own agent, and caregivers can monitor multiple patients. The patient has very granular control over sharing, telling PIA which kind of data can be sent to each caretaker. Access rights must be set on the web or a mobile device, because allowing Alexa to be used for that purpose might let someone trick the system into thinking he was the patient.
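
A minimal sketch of what such granular sharing rules could look like follows; the data categories and caregiver names are invented for illustration and are not Ejenta’s schema.

```python
# Per-caregiver sharing rules: each caregiver sees only the data types the patient allows.
SHARING_RULES = {
    "dr_lee":    {"glucose", "weight", "blood_pressure", "medications"},
    "nurse_kim": {"glucose", "medications"},
    "spouse":    {"weight"},
}

def share_update(caregiver, readings):
    """Return only the readings this caregiver is allowed to see."""
    allowed = SHARING_RULES.get(caregiver, set())
    return {kind: value for kind, value in readings.items() if kind in allowed}

today = {"glucose": 142, "weight": 183.5, "blood_pressure": "128/82"}
print(share_update("nurse_kim", today))   # {'glucose': 142}
```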

Besides Alexa, PIA takes data from devices (scales, blood glucose monitors, blood pressure monitors, etc.) and from EHRs in a HIPAA-compliant manner. Because the service cannot wake up Alexa, it currently delivers notifications, alerts, and reminders by sending a secure message to the provider’s agent. The provider can then contact the patient by email or mobile phone. The team plans to integrate PIA with an Alexa notifications feature in the future, so that PIA can proactively communicate with the patient via Alexa.

PIA goes beyond the standard rules for alerts, allowing alerts and reminders to be customized based on what it learns about the patient. PIA uses machine learning to discover what is normal activity (such as weight fluctuations) for each patient and to make predictions based on the data, which can be shared with the care team.
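
Ejenta has not published its models, but the general idea of learning a patient’s own normal range can be illustrated with a simple rolling-statistics check like the sketch below; the thresholds and data are invented.

```python
# Simplified stand-in for per-patient baseline learning: flag readings that
# drift well outside a patient's own recent history.
from statistics import mean, stdev

def is_unusual(history, new_value, num_sd=2.5, window=30):
    """Flag new_value if it falls outside the patient's recent normal range."""
    recent = history[-window:]
    if len(recent) < 5:              # not enough data to judge yet
        return False
    mu, sigma = mean(recent), stdev(recent)
    return abs(new_value - mu) > num_sd * max(sigma, 0.1)

weights = [182.0, 181.5, 182.3, 181.8, 182.1, 181.9, 182.4]
print(is_unusual(weights, 188.0))   # True: worth an alert to the care team
print(is_unusual(weights, 182.2))   # False: within this patient's normal range
```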

The final section of this article covers DiaBetty, T2D2, and Sugarpod, the remaining finalists.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 1 of 3)

Posted on October 16, 2017 | Written By Andy Oram

The leading pharmaceutical and medical company Merck, together with Amazon Web Services, has recently been exploring the potential health impacts of voice interfaces and natural language processing (NLP) through an Alexa Diabetes Challenge. I recently talked to the five finalists in this challenge. This article explores the potential of new interfaces to transform the handling of chronic disease, and what the challenge reveals about currently available technology.

Alexa, of course, is the ground-breaking system that brings everyday voice interaction with computers into the home. Most of its uses are trivial (you can ask about today’s weather or change channels on your TV), but one must not underestimate the immense power of combining artificial intelligence with speech, one of the most basic and essential human activities. The potential of this interface for disabled or disoriented people is particularly intriguing.

The diabetes challenge is a nice focal point for exploring the more serious contribution made by voice interfaces and NLP. Because of the alarming global spread of this illness, the challenge also presents immediate opportunities that I hope the participants succeed in productizing and releasing into the field. Using the challenge’s published criteria, the judges today announced Sugarpod from Wellpepper as the winner.

This article will list some common themes among the five finalists, look at the background about current EHR interfaces and NLP, and say a bit about the unique achievement of each finalist.

Common themes

Overlapping visions of goals, problems, and solutions appeared among the finalists I interviewed for the diabetes challenge:

  • A voice interface allows more frequent and easier interactions with at-risk individuals who have chronic conditions, potentially achieving the behavioral health goal of helping a person make the right health decisions on a daily or even hourly basis.

  • Contestants seek to integrate many levels of patient intervention into their tools: responding to questions, collecting vital signs and behavioral data, issuing alerts, providing recommendations, delivering educational background material, and so on.

  • Services in this challenge go far beyond interactions between Alexa and the individual. The systems commonly anonymize and aggregate data in order to perform analytics that they hope will improve the service and provide valuable public health information to health care providers. They also facilitate communication of crucial health data between the individual and her care team.

  • Given the use of data and AI, customization is a big part of the tools. They are expected to determine the unique characteristics of each patient’s disease and behavior, and adapt their advice to the individual.

  • In addition to Alexa’s built-in language recognition capabilities, Amazon provides the Lex service for sophisticated text processing. Some contestants used Lex, while others drew on other research they had done building their own natural language processing engines.

  • Alexa never initiates a dialog, merely responding when the user wakes it up. The device can present a visual or audio notification when new material is present, but it still depends on the user to request the content. Thus, contestants are using other channels to deliver reminders and alerts such as messaging on the individual’s cell phone or alerting a provider.

  • Alexa is not HIPAA-compliant, but may achieve compliance in the future. This would help health services turn their voice interfaces into viable products and enter the mainstream.

Some background on interfaces and NLP

The poor state of current computing interfaces in the medical field is no secret–in fact, it is one of the loudest and most insistent complaints by doctors, voiced on sites such as KevinMD. You can visit Healthcare IT News or JAMA regularly and read the damning indictments.

Several factors can be blamed for this situation, including unsophisticated electronic health records (EHRs) and arbitrary reporting requirements by the Centers for Medicare & Medicaid Services (CMS). Natural language processing may provide one of the technical solutions to these problems. The NLP services by Nuance are already famous. An encouraging study finds substantial time savings through using NLP to enter doctors’ insights. And on the other end–where doctors are searching the notes they previously entered for information–a service called Butter.ai uses NLP for intelligent searches. Unsurprisingly, the American Health Information Management Association (AHIMA) looks forward to the contributions of NLP.

Some app developers are now exploring voice interfaces and NLP on the patient side. I covered two such companies, including the one that ultimately won the Alexa Diabetes Challenge, in another article. In general, developers using these interfaces hope to eliminate the fuss and abstraction in health apps that frustrate many consumers, thereby reaching new populations and interacting with them more frequently, with deeper relationships.

The next two parts of this article turn to each of the five finalists, to show the use they are making of Alexa.

Wellpepper and SimplifiMed Meet the Patients Where They Are Through Modern Interaction Techniques

Posted on August 9, 2017 | Written By Andy Oram

Over the past few weeks I found two companies seeking out natural and streamlined ways to connect patients with their doctors. Many of us have started using web portals for messaging–a stodgy communication method that involves logins and lots of clicking, often just to see a message such as “Test submitted. No further information available.” Web portals are better than unnecessary office visits or days of playing phone tag, and so are the various secure messaging apps (incompatible with one another, unfortunately) found in the online app stores. But Wellpepper and SimplifiMed are trying to bring us a bit further into the twenty-first century, through voice interfaces and natural language processing.

Wellpepper’s Sugarpod

Wellpepper recently ascended to finalist status in the Alexa Diabetes Challenge, which encourages research into the use of Amazon.com’s popular voice-activated device, Alexa, to improve the lives of people with Type 2 Diabetes. For this challenge, Wellpepper enhanced its existing service to deliver messages over Amazon Echo and interview patients. Wellpepper’s entry in the competition is an integrated care plan called Sugarpod.

The Wellpepper platform is organized around a care plan, and covers the entire cycle of treatment, such as delivering information to patients, managing their medications and food diaries, recording information from patients in the health care provider’s EHR, helping them prepare for surgery, and more. Messages adapt to the patient’s condition, attempting to present the right tone for adherent versus non-adherent patients. The data collected can be used for analytics benefitting both the provider and the patient–valuable alerts, for instance.

It must be emphasized at the outset that Wellpepper’s current support for Alexa is just a proof of concept. It cannot be rolled out to the public until Alexa itself is HIPAA-compliant.

I interviewed Anne Weiler, founder and CEO of Wellpepper. She explained that using Alexa would be helpful for people who have mobility problems or difficulties using their hands. The prototype proved quite popular, and people seem willing to open up to the machine. Alexa has some modest affective computing features; for instance, if the patient reports feeling pain, the device may respond with “Ouch!”

Wellpepper is clinically validated. A study of patients with Parkinson’s found that those using Wellpepper achieved a 9% improvement in mobility, whereas those without it declined by 12%. Wellpepper patients adhered to treatment plans 81% of the time.

I’ll end this section by mentioning that EHR integration offers limited information of value to Wellpepper. Most EHRs don’t yet accept patient data, for instance. And how can you tell whether a patient was admitted to a hospital? It should be in the EHR, but Sugarpod has found the information to be unavailable. It’s especially hidden if the patient is admitted to a different health care provider; interoperability is a myth. Weiler said that Sugarpod doesn’t depend on the EHR for much information, using a much more reliable source of information instead: it asks the patient!

SimplifiMed

SimplifiMed is a chatbot service that helps clinics automate routine tasks such as appointments, refills, and other aspects of treatment. CEO Chinmay A. Singh emphasized to me that it is not an app, but a natural language processing tool that operates over standard SMS messaging. The service enables a doctor’s landline phone to communicate via text messages and routes patients’ messages to a chatbot capable of understanding natural language and partial sentences. The bot interacts with the patients to understand their needs, and helps them accomplish the task quickly. The result is round-the-clock access to the service with no waiting on the phone, a huge convenience for busy patients.
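
SimplifiMed’s production system uses its own natural language engine, so the following is only a toy illustration of the general idea: an incoming text, possibly a partial sentence, is mapped to a routine task or handed to staff.

```python
# Toy SMS intent router; the keyword rules here stand in for a real NLP engine.
import re

INTENTS = {
    "refill":      re.compile(r"\b(refill|renew|prescription|rx)\b", re.I),
    "appointment": re.compile(r"\b(appointment|appt|schedule|reschedule|book)\b", re.I),
    "billing":     re.compile(r"\b(bill|billing|invoice|payment)\b", re.I),
}

def route_sms(message):
    """Guess what the patient wants from a free-form (possibly partial) text."""
    for intent, pattern in INTENTS.items():
        if pattern.search(message):
            return intent
    return "handoff_to_staff"

print(route_sms("need my lisinopril refill pls"))    # refill
print(route_sms("can i move my appt to friday"))     # appointment
```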

SimplifiMed also collects insurance information when the patient signs up, and the patient can use the interface to change the information. Eventually, they expect the service to analyze a patient’s symptoms in light of data from the EHR and help the patient decide whether to come in to the doctor.

SMS is not secure, but HIPAA is not violated because the patient can choose what to send to the doctor, and the chatbot’s responses contain no personally identifiable information. Between the doctor and the SimplifiMed service, data is sent in encrypted form. Singh said that the company built its own natural language processing engine, because it didn’t want to share sensitive patient data with an outside service.

Due to the complexity of care, insurance requirements, and regulations, a doctor today needs support from multiple staff members: front desk, MA, biller, etc. MACRA and value-based care will increase the burden on staff without providing the income to hire more. Automating routine activities adds value to clinics without breaking the bank.

Earlier this year I wrote about another company, HealthTap, that had added Alexa integration. This trend toward natural voice interfaces, which the Alexa Diabetes Challenge finalists are also pursuing, along with the natural language processing that they and SimplifiMed are implementing, could put health care on track to a new era of meeting patients where they are now. The potential improvements to care are considerable, because patients are more likely to share information, take educational interventions seriously, and become active participants in their own treatment.

Tips on Implementing Text Analytics in Healthcare

Posted on July 6, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

Most of us would agree that extracting clinical data from unstructured physician notes would be great. At present, few organizations have deployed such tools, nor have EMR vendors come to the rescue en masse, and the conventional wisdom holds that text analytics would be crazy expensive. I’ve always suspected that digging out and analyzing this data may be worth the trouble, however.

That’s why I really dug a recent article from HealthCatalyst’s Eric Just, which seemed to offer some worthwhile ideas on how to use text analytics effectively. Just, who is senior vice president of product development, made a good case for giving this approach a try. (Note: HealthCatalyst and partner Regenstrief Institute offer solutions in this area.)

The article includes an interesting case study explaining how healthcare text analytics performed head-to-head against traditional research methods.

It tells the story of a team of analysts in Indiana that set out to identify peripheral artery disease (PAD) patients across two health systems. At first, things weren’t going well. When researchers looked at EMR and claims data, they found that those sources failed to identify over 75% of patients with this condition, but text analytics improved their results dramatically.

Using ICD and CPT codes for PAD, and standard EMR data searches, team members had identified less than 10,000 patients with the disorder. However, once they developed a natural language processing tool designed to sift through text-based data, they discovered that there were at least 41,000 PAD patients in the population they were studying.

To get these kinds of results, Just says, a medical text analytics tool should have three key features:

  • The medical text analytics software should tailor results to a given user’s needs. For example, he notes that if the user doesn’t have permission to view PHI, the analytics tool should display only nonprivate data.
  • Medical text analytics tools should integrate medical terminology to improve the scope of searches. For example, when a user does a search on the term “diabetes” the search tool should automatically be capable of displaying results for “NIDDM,” as this broadens the search to include more relevant content.
  • Text analytics algorithms should do more than just find relevant terms — they should provide context as well as content. For example, a search for patients with “pneumonia,” done without considering context, would also bring up phrases like “no history of pneumonia.” A better tool would be able to rule out phrases like “no history of pneumonia” or “family history of pneumonia” from a search for patients who have been treated for this illness. (A simplified sketch of such negation handling appears below.)
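
To make the negation point concrete, here is a deliberately simplified, NegEx-style check; real clinical NLP tools are far more sophisticated, and the cue list below is only illustrative.

```python
# Rule out negated or family-history mentions when searching notes for a condition.
import re

NEGATION_CUES = r"\b(no|denies|without|negative for|no history of|family history of)\b"

def positive_mention(note_text, term):
    """Return True if the note mentions the term without a nearby negation cue."""
    for match in re.finditer(re.escape(term), note_text, re.I):
        window = note_text[max(0, match.start() - 40):match.start()]
        if not re.search(NEGATION_CUES, window, re.I):
            return True
    return False

print(positive_mention("Patient admitted with pneumonia, started on antibiotics.",
                       "pneumonia"))                                   # True
print(positive_mention("No history of pneumonia or COPD.", "pneumonia"))  # False
```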

The piece goes into far more detail than I can summarize here, so I recommend you read it in full if you’re interested in leveraging text analytics for your organization.

But for what it’s worth, I came away from the piece with the sense that analyzing your clinical textual information is well worth the trouble — particularly if EMR vendors begin to add such tools to their systems. After all, when it comes to improving outcomes, we need all the help we can get.

The Pain of Recording Patient Risk Factors as Illuminated by Apixio (Part 2 of 2)

Posted on October 28, 2016 | Written By Andy Oram

The previous section of this article introduced Apixio’s analytics for payers in the Medicare Advantage program. Now we’ll step through how Apixio extracts relevant diagnostic data.

The technology of PDF scraping
Providers usually submit SOAP notes to the Apixio web site in the form of PDFs. This came as a surprise to me, after hearing about the extravagant efforts that have gone into new CCDs and other formats such as the Blue Button project launched by the VA. Normally provided in an XML format, these documents claim to adhere to standards and offer a relatively gentle face to a computer program. In contrast, a PDF is one of the most challenging formats to parse: words and other characters are reduced to graphical symbols, while layout bears little relation to the human meaning of the data.

Structured documents such as CCDs contain only about 20% of what CMS requires, and often are formatted in idiosyncratic ways so that even the best CCDs would be no more informative than a Word document or PDF. But the main barrier to getting information, according to Schneider, is that Medicare Advantage works through the payers, and providers can be reluctant to give payers direct access to their EHR data. This reluctance springs from a variety of reasons, including worries about security, the feeling of being deluged by requests from payers, and a belief that the providers’ IT infrastructure cannot handle the burden of data extraction. Their stance has nothing to do with protecting patient privacy, because HIPAA explicitly allows providers to share patient data for treatment, payment, and operations, and that is what they are doing when they give sensitive data to Apixio in PDF form. Thus, Apixio had to master OCR and text processing to serve that market.

Processing a PDF requires several steps, integrated within Apixio’s platform:

  1. Optical character recognition (OCR) to re-create the text from the page images in the PDF.

  2. Further structuring to recognize, for instance, when the PDF contains a table that needs to be broken up horizontally into columns, or constructs such as the field name “Diagnosis” followed by the desired data.

  3. Natural language processing to find the grammatical patterns in the text. This processing naturally must understand medical terminology, common abbreviations such as CHF, and codings.

  4. Analytics that pull out the data relevant to risk and present it in a usable format to a human coder. (A rough sketch of the first two steps appears below.)
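
Here is a very rough sketch of steps 1 and 2 using common open-source tools (pdf2image and Tesseract). Apixio’s production pipeline is proprietary and far more robust; treat this only as an illustration of the idea.

```python
# OCR a scanned SOAP note and pull out simple "Diagnosis:" field constructs.
import re
from pdf2image import convert_from_path   # renders PDF pages to images
import pytesseract                        # binding to the Tesseract OCR engine

def extract_diagnoses(pdf_path):
    """OCR each page of a scanned note and collect lines labeled 'Diagnosis'."""
    diagnoses = []
    for page in convert_from_path(pdf_path):
        text = pytesseract.image_to_string(page)          # step 1: OCR
        for match in re.finditer(r"Diagnosis:\s*(.+)", text):  # step 2: field parsing
            diagnoses.append(match.group(1).strip())
    return diagnoses

# Steps 3 and 4 (medical NLP and risk analytics) would run over this recovered text.
```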

Apixio can accept dozens of notes covering the patient’s history. It often turns up diagnoses that “fell through the cracks,” as Schneider puts it. The diagnostic information Apixio returns can be used by medical professionals to generate reports for Medicare, but it has other uses as well. Apixio tells providers when they are treating a patient for an illness that does not appear in their master database. Providers can use that information to deduce when patients are left out of key care programs that can help them. In this way, the information can improve patient care. One coder they followed could triple her rate of reviewing patient charts with Apixio’s service.

Caught between past and future
If the Apixio approach to culling risk factors appears round-about and overwrought, like bringing in a bulldozer to plant a rosebush, think back to the role of historical factors in health care. Given the ways doctors have been taught to record medical conditions, and the tools available to them, Apixio plays a small part in promoting the progressive role of accountable care.

Hopefully, changes to the health care field will permit more direct ways to deliver accountable care in the future. Medical schools will convey the requirements of accountable care to their students and teach them how to record data that satisfies these requirements. Technologies will make it easier to record risk factors the first time around. Quality measures and the data needed by policy-makers will be clarified. And most of all, the advantages of collaboration will lead providers and payers to form business agreements or even merge, at which point the EHR data will be opened to the payer. The contortions providers currently need to go through, in trying to achieve 21st-century quality, remind us of where the field needs to go.

The Pain of Recording Patient Risk Factors as Illuminated by Apixio (Part 1 of 2)

Posted on October 27, 2016 | Written By Andy Oram

Many of us strain against the bonds of tradition in our workplace, harboring a secret dream that the industry could start afresh, streamlined and free of hampering traditions. But history weighs on nearly every field, including my own (publishing) and the one I cover in this blog (health care). Applying technology in such a field often involves the legerdemain of extracting new value from the imperfect records and processes with deep roots.

Along these lines, when Apixio aimed machine learning and data analytics at health care, they unveiled a business model based on measuring risk more accurately so that Medicare Advantage payments to health care payers and providers reflect their patient populations more appropriately. Apixio’s tools permit improvements to patient care, as we shall see. But the core of the platform they offer involves uploading SOAP notes, usually in PDF form, and extracting diagnostic codes that coders may have missed or that may not be supportable. Machine learning techniques extract the diagnostic codes for each patient over the entire history provided.

Many questions jostled in my mind as I talked to Apixio CTO John Schneider. Why are these particular notes so important to the Centers for Medicare & Medicaid Services (CMS)? Why don’t doctors keep track of relevant diagnoses as they go along in an easy-to-retrieve manner that could be pipelined straight to Medicare? Can’t modern EHRs, after seven years of Meaningful Use, provide better formats than PDFs? I asked him these things.

A mini-seminar ensued on the evolution of health care and its documentation. A combination of policy changes and persistent cultural habits have tangled up the various sources of information over many years. In the following sections, I’ll look at each aspect of the documentation bouillabaisse.

The financial role of diagnosis and risk
Accountable care, in varying degrees of sophistication, calculates the risk of patient populations in order to gradually replace fee-for-service with payments that reflect how adeptly the health care provider has treated the patient. Accountable care lay behind the Affordable Care Act and got an extra boost at the beginning of 2016 when CMS took on the “goal of tying 30 percent of traditional, or fee-for-service, Medicare payments to alternative payment models, such as ACOs, by the end of 2016 — and 50 percent by the end of 2018.”

Although many accountable care contracts–like those of the much-maligned 1970s Managed Care era–ignore differences between patients, more thoughtful programs recognize that accurate and fair payments require measurement of how much risk the health care provider is taking on–that is, how sick their patients are. Thus, providers benefit from scrupulously complete documentation (having learned that upcoding and sloppiness will no longer be tolerated and will lead to significant fines, according to Schneider). And this would seem to provide an incentive for the provider to capture every nuance of a patient’s condition in a clearly coded, structured way.

But this is not how doctors operate, according to Schneider. They rebel when presented with dozens of boxes to check off, as crude EHRs tend to present things. They stick to the free-text SOAP note (fields for subjective observations, objective observations, assessment, and plan) that has been taught for decades. It’s often up to post-processing tools to code exactly what’s wrong with the patient. Sometimes the SOAP notes don’t even distinguish the four parts in electronic form, but exist as free-flowing Word documents.

A number of key diagnoses come from doctors who have privileges at the hospital but come in only sporadically to do consultations, and who therefore don’t understand the layout of the EHR or make attempts to use what little structure it provides. Another reason codes get missed or don’t easily surface is that doctors are overwhelmed, so that accurately recording diagnostic information in a structured way is a significant extra burden, an essentially clerical function loaded onto these highly skilled healthcare professionals. Thus, extracting diagnostic information many times involves “reading between the lines,” as Schneider puts it.

For Medicare Advantage payments, CMS wants a precise delineation of properly coded diagnoses in order to discern the risk presented by each patient. This is where Apixio comes in: by mining the free-text SOAP notes for information that can enhance such coding. We’ll see what they do in the next section of this article.

EHR Charting in Another Language

Posted on January 13, 2012 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I recently started to think about some of the implications associated with multiple languages in an EHR. One of my readers asked me how EHR vendors correlated data from those charting in Spanish and those charting in English. My first response to this question was, “How many doctors chart in Spanish?” Yes, this was a very US centric response since obviously I know that almost all of the doctors in Latin America and other Spanish speaking countries chart in Spanish, but I wonder how many doctors in the US chart in Spanish. I expect the answer is A LOT more than I realize.

Partial evidence of this is that about a year ago HIMSS announced a Latino Health IT Initiative. From that initiative there is now a HIMSS Latino Community web page and also a HIMSS Latino Community Workshop at the HIMSS Annual Conference in Las Vegas. I’m going to have to find some time to try and learn more about the HIMSS Latino Community. My Espanol is terrible, but I know enough that I think I could enjoy the event.

After my initial reaction, I then started wondering how you would correlate data from another language. So much for coordinated care. I wonder what a doctor does if he asks for his patient’s record and it is all in Spanish. That’s great if all of your doctors know Spanish, but in the US at least I don’t know of any community that has doctors who know Spanish in every specialty. How do they get around it? I don’t think those translation services you can call are much help.

Once we start talking about automated patient records the language issue becomes more of a problem. Although, maybe part of that problem is solved if you could use standards like ICD-10, SNOMED, etc. A code is a code is a code regardless of what language it is and computers are great at matching up those codes. Although, if these standards are not used, then forget trying to connect the data even through Natural Language Processing (NLP). Sure the NLP could be bi-lingual, but has anyone done that? My guess is not.
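
To illustrate the point: if both charts carry standard codes, matching across charting languages reduces to matching codes. The records below are invented examples; the ICD-10 codes are real categories, and the descriptions are just display text in each language.

```python
# Matching problem-list entries across English- and Spanish-charted records by code.
RECORD_EN = [{"code": "E11.9", "text": "Type 2 diabetes mellitus without complications"}]
RECORD_ES = [{"code": "E11.9", "text": "Diabetes mellitus tipo 2 sin complicaciones"},
             {"code": "I10",   "text": "Hipertensión esencial (primaria)"}]

def shared_problems(record_a, record_b):
    """Find diagnoses documented in both charts, regardless of charting language."""
    codes_a = {entry["code"] for entry in record_a}
    return [entry for entry in record_b if entry["code"] in codes_a]

print(shared_problems(RECORD_EN, RECORD_ES))   # the E11.9 entry matches
```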

All of this might start to really matter more when we’re talking about public health issues as we aggregate data internationally. Language becomes a much larger issue in this context and so it begs for an established set of standards for easy comparison.

I’d be interested to hear about other stories and experiences with EHR charting in Spanish or another language. I bet the open source EHRs have some interesting solutions similar to the open source projects I know well. I look forward to learning more about the challenge of multiple languages.

Clinical Data Abstraction to Meet Meaningful Use – Meaningful Use Monday

Posted on November 21, 2011 | Written By John Lynn

In many posts in our Meaningful Use Monday series we have focused on the details of the meaningful use regulations. In this post I want to highlight one of the strategies that I’ve seen a bunch of EHR vendors and other EHR related companies employing to meet Meaningful Use. It’s an interesting concept that will be exciting to see play out.

The idea is what many are calling clinical data abstraction. I’ve actually heard some people refer to it as other names as well, but clinical data abstraction is the one that I like most.

I’ve seen two main types of clinical data abstraction. One is automated clinical data abstraction. The other is manual clinical data abstraction. The first type is where your computer or server goes through the clinical content and, using some combination of natural language processing (NLP) and other technology, identifies the important clinical data elements in a narrative passage. The second type is where a trained medical professional pulls out the various clinical data elements.

I asked one vendor that is working on clinical data abstraction whether they thought that the automated, computer generated clinical abstraction would be the predominant means or whether some manual abstraction will always be necessary. They were confident that we could get there with the automated computer abstraction of the clinical data. I’m not so confident. I think that, as with transcription, the computer could help speed up the abstraction, but there might still need to be someone who checks and verifies the data abstraction.

Why does this matter for meaningful use?
One of the challenges for meaningful use is that it really wants to know that you’ve documented certain discrete data elements. It’s not enough for you to just document the smoking status in a narrative paragraph. You have to not only document the smoking status, but your EMR has to have a way to report that you have documented the various meaningful use measures. In comes clinical data abstraction.
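
As a toy illustration of automated abstraction (not any vendor’s actual engine), a few regular expressions can already turn narrative text into a discrete, reportable smoking status; the patterns below are illustrative only.

```python
# Pull a discrete smoking status out of a free-text narrative note.
import re

PATTERNS = [
    (re.compile(r"\b(never smoked|non-?smoker|denies (tobacco|smoking))\b", re.I), "never smoker"),
    (re.compile(r"\b(former|quit|ex-?)\s*smok", re.I), "former smoker"),
    (re.compile(r"\b(current(ly)? smok|\d+\s*pack[- ]?years?)", re.I), "current smoker"),
]

def abstract_smoking_status(note):
    """Map free-text narrative to a discrete smoking status for reporting."""
    for pattern, status in PATTERNS:
        if pattern.search(note):
            return status
    return "unknown"

print(abstract_smoking_status("Pt reports 20 pack-year history, currently smoking half a pack daily."))
print(abstract_smoking_status("Denies tobacco use. Social drinker."))
```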

Proponents of clinical data abstraction argue that clinical data abstraction provides the best of both worlds: narrative with discrete data elements. It’s an interesting argument to make since many doctors love to see and read the narrative. However, all indications are that we need discrete data elements in order to improve patient care and see some of the other benefits of capturing all this healthcare data. In fact, the future Smart EMR that I wrote about before won’t be possible without these discrete healthcare data elements.

So far I believe that most people who have shown meaningful use haven’t used clinical data abstraction to meet the various meaningful use measures. Although, it’s an intriguing story to tell and could be an interesting way for doctors to meet meaningful use while minimizing changes to their workflow.

Side Note: Clinical data abstraction is also becoming popular when scanning old paper charts into your EHR. Although, that’s a topic for a future post.

Study Shows Value of NLP in Pinpointing Quality Defects

Posted on August 25, 2011 | Written By

For years, we’ve heard about how much clinical information is locked away in payer databases. Payers have offered to provide clinical summaries, electronic and otherwise. The problem is, it’s potentially inaccurate clinical information because it’s all based on billing claims. (Don’t believe me? Just ask “E-Patient” Dave de Bronkart.) It is for this reason that I don’t much trust “quality” ratings based on claims data.

Just how much of a difference there was between claims data and true clinical data hasn’t been so clear, though. Until today.

A paper just published online in the Journal of the American Medical Association found that searching EMRs with natural-language processing identified up to 12 times the number of pneumonia cases and twice the rate of kidney failure and sepsis as did searches based on billing codes—ironically called “patient safety indicators” in the study—for patients admitted for surgery at six VA hospitals. That means that hundreds of the nearly 3,000 patients whose records were reviewed had postoperative complications that didn’t show up in quality and performance reports.

Just think of the implications of that as we move toward Accountable Care Organizations and outcomes-based reimbursement. If healthcare continues to rely on claims data for “quality” measurement, facilities that don’t take steps to prevent complications and reduce hospital-acquired infections could score just as high—and earn just as much bonus money—as those hospitals truly committed to patient safety. If so, quality rankings will remain false, subjective measures of true performance.

So how do we remedy this? It may not be so easy. As Cerner’s Dr. David McCallie told Bloomberg News, it will take a lot of reprogramming to embed natural-language search into existing EMRs, and doing so could, according to the Bloomberg story, “destabilize software systems” and necessitate a lot more training for physicians.

I’m no technical expert, so I don’t know how NLP could destabilize software. From a layman’s perspective, it almost sounds as if vendors don’t want to put the time and effort into redesigning their products. Could it be?

I suppose there is still a chance that HHS could require NLP in Stage 3 of meaningful use—it’s not gonna happen for Stage 2—but I’m sure vendors and providers alike will say it’s too difficult. They may even say there just isn’t enough evidence; this JAMA study certainly would have to be replicated and corroborated. But are you willing to take the chance that the hospital you visit for surgery doesn’t have any real incentive to take steps to prevent complications?