
A Learning EHR for a Learning Healthcare System

Posted on January 24, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Can the health care system survive the adoption of electronic health records? When the HITECH Act mandated the installation of EHRs in 2009, we all hoped they would propel hospitals and clinics into a 21st-century consciousness. Instead, EHRs threaten to destroy those who have adopted them: the doctors whose work environment they degrade and the hospitals that they are pushing into bankruptcy. But the revolution in artificial intelligence that’s injecting new insights into many industries could also create radically different EHRs.

Here I define AI as software that, instead of dictating what a computer system should do, undergoes a process of experimentation and observation that creates a model to control the system, hopefully with far greater sophistication, personalization, and adaptability. Breakthroughs achieved in AI over the past decade now enable things that seemed impossible a bit earlier, such as voice interfaces that can both recognize and produce speech.

AI has famously been used by IBM Watson to make treatment recommendations. Analyses of big data (which may or may not qualify as AI) have saved hospitals large sums of money and even–finally, what we’ve been waiting for!–made patients healthier. But I’m talking in this article about a particular focus: the potential for changing the much-derided EHR. As many observers have pointed out, current EHRs are mostly billion-dollar file cabinets in electronic form. That epithet doesn’t even characterize them well enough–imagine instead a file cabinet that repeatedly screamed at you to check what you’re doing as you thumb through the papers.

How can AI create a new electronic health record? Major vendors have announced virtual assistants (see also John’s recent interview with MEDITECH, which mentions their interest in virtual assistants) to make their interfaces more intuitive and responsive, so there is hope that they’re watching other industries and learning from machine learning. I don’t know what the vendors are basing these assistants on, but in this article I’ll describe how some vanilla AI techniques could be applied to the EHR.

How a Learning EHR Would Work

An AI-based health record would start with the usual dashboard-like interface. Each record consists of hundreds of discrete pieces of data, such as age, latest blood pressure reading, a diagnosis of chronic heart failure, and even ZIP code and family status–important public health indicators. Each field of data would be called a feature in traditional AI. The goal is to find which combination of features–and their values, such as 75 for age–most accurately predict what a clinician does with the EHR. With each click or character typed, the AI model looks at all the features, discards the bulk of them that are not useful, and uses the rest to present the doctor with fields and information likely to be of value.
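To make this concrete, here is a minimal sketch of the idea in Python, using only the standard library. The field names (`diagnosis`, `age_band`) and form names are illustrative assumptions, not anything a real EHR vendor uses; the simple vote-counting model stands in for whatever learning algorithm an actual system would employ.

```python
from collections import defaultdict, Counter

# Hypothetical training data: each record pairs a few patient features with
# the form the clinician actually opened for that patient.
TRAINING = [
    ({"diagnosis": "heart_failure", "age_band": "70-80"}, "cardiology_form"),
    ({"diagnosis": "heart_failure", "age_band": "60-70"}, "cardiology_form"),
    ({"diagnosis": "lung_cancer",   "age_band": "60-70"}, "oncology_form"),
    ({"diagnosis": "lung_cancer",   "age_band": "70-80"}, "oncology_form"),
]

def train(records):
    """Count how often each (feature, value) pair co-occurs with each form."""
    counts = defaultdict(Counter)
    for features, form in records:
        for pair in features.items():
            counts[pair][form] += 1
    return counts

def predict_form(counts, features):
    """Let each feature vote for the forms it has been seen with.

    Features never seen in training contribute nothing -- the model
    effectively discards them, as described above.
    """
    votes = Counter()
    for pair in features.items():
        votes.update(counts.get(pair, Counter()))
    best = votes.most_common(1)
    return best[0][0] if best else None

model = train(TRAINING)
print(predict_form(model, {"diagnosis": "heart_failure", "age_band": "70-80"}))
```

A production system would use far richer models, but the shape is the same: observe which features predict clinician behavior, ignore the rest, and surface the likely next form.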

The EHR will probably learn that the forms pulled up by a doctor for a heart patient differ from those pulled up for a cancer patient. One case might focus on behavior, another on surgery and medication. Clinicians certainly behave differently in the hospital from how they behave in their home offices, or even how they behave in another hospital across town with different patient demographics. A learning EHR will discover and adapt to these differences, while also capitalizing on the commonalities in the doctor’s behavior across all settings, as well as how other doctors in the practice behave.

Clinicians like to say that every patient is different: well, with AI tracking behavior, the interface can adapt to every patient.

AI can also make use of messy and incomplete data, the well-known weaknesses of health care. But it’s crucial, to maximize predictive accuracy, for the AI system to have access to as many fields as possible. Privacy rules, however, dictate that certain fields be masked and others made fuzzy (for instance, specifying age as a range from 70 to 80 instead of precisely 75). Although AI can still make use of such data, it might be possible to provide more precise values through data sharing agreements strictly stipulating that the data be used only to improve the EHR–not for competitive strategizing, marketing, or other frowned-on exploitation.
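The masking and fuzzing described above can be sketched in a few lines. This is an illustrative assumption about how a de-identification step might look, not a compliant HIPAA implementation; the field names are hypothetical.

```python
def fuzz_age(age, width=10):
    """Generalize an exact age into a range, e.g. 75 -> '70-80'."""
    lower = (age // width) * width
    return f"{lower}-{lower + width}"

def mask_record(record, masked_fields=("name", "ssn")):
    """Drop identifying fields outright and fuzz quasi-identifiers
    before the record is shared with the learning system."""
    shared = {k: v for k, v in record.items() if k not in masked_fields}
    if "age" in shared:
        shared["age"] = fuzz_age(shared.pop("age"))
    return shared
```

Under a data sharing agreement of the kind suggested above, the `fuzz_age` step could be relaxed and the exact value passed through, improving the model’s precision.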

A learning EHR would also be integrated with other innovations that increase available data and reduce labor–for instance, devices worn by patients to collect vital signs and exercise habits. This could free doctors to spend less time collecting statistics and more time treating the patient.

Potential Impacts of AI-Based Records

What we hope for is interfaces that give the doctor just what she needs, when she needs it. A helpful interface includes autocompletion for data she enters (one feature of a mobile solution called Modernizing Medicine, which I profiled in an earlier article), clear and consistent displays, and prompts that are useful instead of distracting.

Abrupt and arbitrary changes to interfaces can be disorienting and create errors. So perhaps the EHR will keep the same basic interface but use cues such as changes in color or highlighted borders to suggest to the doctor what she should pay attention to. Or it could occasionally display a dialog box asking the clinician whether she would like the EHR to upgrade and streamline its interface based on its knowledge of her behavior. This intervention might be welcome because a learning EHR should be able to drastically reduce the number of alerts that interrupt the doctors’ work.

Doctors’ burdens should be reduced in other ways too. Current blind and dumb EHRs require doctors to enter the same information over and over, and even to resort to the dangerous practice of copy and paste. Naturally, observers who write about this problem take the burden off of the inflexible and poorly designed computer systems, and blame the doctors instead. But doing repetitive work for humans is the original purpose of computers, and what they’re best at doing. Better design will make dual entries (and inconsistent records) a thing of the past.

Liability

Current computer vendors disclaim responsibility for errors, leaving it up to the busy doctor to verify that the system carried out the doctor’s intentions accurately. Unfortunately, it will be a long time (if ever) before AI-driven systems are accurate enough to give vendors the confidence to take on risk. However, AI systems have an advantage over conventional ones: they can assign a confidence level to each decision they make. Therefore, they could show the doctor how much the system trusts itself, and a high degree of doubt could let the doctor know she should take a closer look.
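Surfacing that self-reported confidence could be as simple as flagging any suggestion below a cutoff for closer review. The threshold value here is purely illustrative, not a clinical or regulatory standard.

```python
CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff, not a clinical standard

def review_flag(suggestion, confidence, threshold=CONFIDENCE_THRESHOLD):
    """Package a model suggestion with its confidence, marking low-confidence
    outputs so the interface can prompt the doctor to take a closer look."""
    return {
        "suggestion": suggestion,
        "confidence": confidence,
        "needs_review": confidence < threshold,
    }
```

The interface could render `needs_review` items with the color or border cues discussed earlier, rather than as yet another interruptive alert.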

One of the popular terms that have sprung up over the past decade to describe health care reform is the “learning healthcare system.” A learning system requires learning on every level and at every stage. Because nobody likes the designs of current EHRs, clinicians should be happy to try a new EHR with a design based directly on their behavior.

Learning Health Care System

Posted on March 27, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

In a recent post by Andy Oram on EMR and EHR titled “Exploring the Role of Clinical Documentation: a Step Toward EHRs for Learning” he introduced me to the idea of what he called a Learning Health Care System. Here’s his description:

Currently a popular buzzword, a learning health care system collects data from clinicians, patients, and the general population to look for evidence and correlations that can improve the delivery of health care. The learning system can determine the prevalence of health disorders in an area, pick out which people are most at risk, find out how well treatments work, etc. It is often called a “closed loop system” because it can draw on information generated from within the system to change course quickly.

I really love the concept and description of a learning healthcare system. Unfortunately, I see so very little of this in our current EHR technology and that’s a travesty. However, it’s absolutely the way we need to head. Andy adds this insight into why we don’t yet have a learning health care system:

“Vendors need to improve the ability of systems to capture and manage structured data.” We need structured data for our learning health care system, and we can’t wait for natural language processing to evolve to the point where it can reliably extract the necessary elements of a document.

While I agree that managed structured data would be helpful in reaching the vision of a learning healthcare system, I don’t think we have to wait for that to happen. We can already use the data that’s available to make our EHRs smarter than they are today. Certainly we can’t do everything that we’d like to do with them, but we can do something. We shouldn’t do nothing just because we can’t do everything.

Plus, I’ve written about this a number of times before, but we need to create a means for the healthcare system to learn and for healthcare systems to be able to easily share that learning. This might be a different definition of learning than what Andy described. I think he was referencing a learning system that learns about the patient. I’m taking it one step further: we need a healthcare system that learns something about technology or data and can easily share that learning with other, outside healthcare systems. That would be powerful.

What are your thoughts on what Andy calls a popular buzzword: A Learning Health Care System? Are we heading that direction? What’s holding us back?