A Learning EHR for a Learning Healthcare System

Can the health care system survive the adoption of electronic health records? When the HITECH Act mandated the installation of EHRs in 2009, we all hoped they would propel hospitals and clinics into a 21st-century consciousness. Instead, EHRs threaten to destroy those who have adopted them: the doctors whose work environment they degrade and the hospitals that they are pushing into bankruptcy. But the revolution in artificial intelligence that’s injecting new insights into many industries could also create radically different EHRs.

Here I define AI as software that, instead of being explicitly programmed with what a computer system should do, goes through a process of experimentation and observation to build a model that controls the system, hopefully with far greater sophistication, personalization, and adaptability. Breakthroughs achieved in AI over the past decade now enable things that seemed impossible only a few years earlier, such as voice interfaces that can both recognize and produce speech.

IBM Watson has famously used AI to make treatment recommendations. Analyses of big data (which may or may not qualify as AI) have saved hospitals large sums of money and even–finally, what we’ve been waiting for!–made patients healthier. But I’m talking in this article about a particular focus: the potential for changing the much-derided EHR. As many observers have pointed out, current EHRs are mostly billion-dollar file cabinets in electronic form. That epithet doesn’t even characterize them well enough–imagine instead a file cabinet that repeatedly screamed at you to check what you’re doing as you thumb through the papers.

How can AI create a new electronic health record? Major vendors have announced virtual assistants (see also John’s recent interview with MEDITECH, which mentions their interest in virtual assistants) to make their interfaces more intuitive and responsive, so there is hope that they’re watching other industries and learning from machine learning. I don’t know what the vendors are basing these assistants on, but in this article I’ll describe how some vanilla AI techniques could be applied to the EHR.

How a Learning EHR Would Work

An AI-based health record would start with the usual dashboard-like interface. Each record consists of hundreds of discrete pieces of data, such as age, latest blood pressure reading, a diagnosis of chronic heart failure, and even ZIP code and family status–important public health indicators. In traditional AI, each field of data is called a feature. The goal is to find which combination of features–and their values, such as 75 for age–most accurately predicts what a clinician does with the EHR. With each click or character typed, the AI model looks at all the features, discards the bulk of them that are not useful, and uses the rest to present the doctor with fields and information likely to be of value.
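As a toy illustration of that loop (score the features, discard the unhelpful ones, and predict the clinician's next action from the rest), here is a minimal sketch in Python. All field names, values, and form names are invented, and the scoring rule is a crude stand-in for real feature selection such as information gain:

```python
from collections import Counter, defaultdict

# Invented training data: each record's features plus the form the doctor opened.
records = [
    ({"age": 75, "dx": "CHF", "zip": "02138"}, "cardiology_form"),
    ({"age": 68, "dx": "CHF", "zip": "02139"}, "cardiology_form"),
    ({"age": 54, "dx": "lung_ca", "zip": "02138"}, "oncology_form"),
    ({"age": 61, "dx": "lung_ca", "zip": "02139"}, "oncology_form"),
]

def feature_scores(data):
    """Score each feature by how consistently its values map to one action,
    with a penalty for high-cardinality features that match trivially."""
    scores = {}
    for name in data[0][0]:
        by_value = defaultdict(Counter)
        for feats, action in data:
            by_value[feats[name]][action] += 1
        # Count records whose action agrees with the majority action for their value.
        agree = sum(c.most_common(1)[0][1] for c in by_value.values())
        scores[name] = (agree - (len(by_value) - 1)) / len(data)
    return scores

def predict(data, useful, patient):
    """Majority action among records that match the patient on the useful features."""
    votes = Counter(
        action for feats, action in data
        if all(feats[f] == patient.get(f) for f in useful)
    )
    return votes.most_common(1)[0][0] if votes else None

useful = [f for f, s in feature_scores(records).items() if s >= 0.5]
print(useful)                                              # ['dx']: age and zip are noise here
print(predict(records, useful, {"age": 80, "dx": "CHF"}))  # cardiology_form
```

A real learning EHR would train on millions of interactions and use established statistical models rather than this hand-rolled scoring, but the pipeline is the same: featurize, select, predict.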

The EHR will probably learn that the forms pulled up by a doctor for a heart patient differ from those pulled up for a cancer patient. One case might focus on behavior, another on surgery and medication. Clinicians certainly behave differently in the hospital from how they behave in their home offices, or even how they behave in another hospital across town with different patient demographics. A learning EHR will discover and adapt to these differences, while also capitalizing on the commonalities in the doctor’s behavior across all settings, as well as how other doctors in the practice behave.

Clinicians like to say that every patient is different: well, with AI tracking behavior, the interface can adapt to every patient.

AI can also make use of messy and incomplete data, the well-known weaknesses of health care data. But it’s crucial, to maximize predictive accuracy, for the AI system to have access to as many fields as possible. Privacy rules, however, dictate that certain fields be masked and others made fuzzy (for instance, specifying age as a range from 70 to 80 instead of precisely 75). Although AI can still make use of such data, it might be possible to provide more precise values through data sharing agreements strictly stipulating that the data be used only to improve the EHR–not for competitive strategizing, marketing, or other frowned-on exploitation.
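To make the masking-and-fuzzing step concrete, here is a minimal sketch (field names are invented) of how a record might be de-identified before reaching the model, dropping identifying fields and generalizing a precise age into a range:

```python
def fuzz_record(record, masked=("name", "ssn"), age_bucket=10):
    """Return a de-identified copy: drop masked fields, coarsen age to a range."""
    out = {k: v for k, v in record.items() if k not in masked}
    if "age" in out:
        low = (out["age"] // age_bucket) * age_bucket  # round down to bucket start
        out["age"] = f"{low}-{low + age_bucket}"
    return out

print(fuzz_record({"name": "Jane Doe", "age": 75, "dx": "CHF"}))
# {'age': '70-80', 'dx': 'CHF'}
```

Real de-identification follows formal rules (HIPAA names eighteen identifier categories), but the trade-off is exactly the one described above: the coarser the values, the safer the data and the weaker the predictions.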

A learning EHR would also be integrated with other innovations that increase available data and reduce labor–for instance, devices worn by patients to collect vital signs and exercise habits. This could free doctors to spend less time collecting statistics and more time treating the patient.

Potential Impacts of AI-Based Records

What we hope for is interfaces that give the doctor just what she needs, when she needs it. A helpful interface includes autocompletion for data she enters (one feature of a mobile solution called Modernizing Medicine, which I profiled in an earlier article), clear and consistent displays, and prompts that are useful instead of distracting.

Abrupt and arbitrary changes to interfaces can be disorienting and create errors. So perhaps the EHR will keep the same basic interface but use cues such as changes in color or highlighted borders to suggest to the doctor what she should pay attention to. Or it could occasionally display a dialog box asking the clinician whether she would like the EHR to upgrade and streamline its interface based on its knowledge of her behavior. This intervention might be welcome because a learning EHR should be able to drastically reduce the number of alerts that interrupt the doctors’ work.

Doctors’ burdens should be reduced in other ways too. Current blind and dumb EHRs require doctors to enter the same information over and over, and even to resort to the dangerous practice of copy and paste. Naturally, observers who write about this problem shift the blame away from the inflexible and poorly designed computer systems and place it on the doctors instead. But doing repetitive work for humans is the original purpose of computers, and what they’re best at doing. Better design will make dual entries (and inconsistent records) a thing of the past.

Liability

Current computer vendors disclaim responsibility for errors, leaving it up to the busy doctor to verify that the system carried out the doctor’s intentions accurately. Unfortunately, it will be a long time (if ever) before AI-driven systems are accurate enough to give vendors the confidence to take on that risk. However, AI systems have an advantage over conventional ones: they can assign a confidence level to each decision they make. Therefore, they could show the doctor how much the system trusts itself, and a high degree of doubt could let the doctor know she should take a closer look.
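As a minimal sketch of that idea (the action names and probabilities are invented), the model's predicted probabilities can be turned into a "please double-check" flag whenever the top prediction falls below a threshold:

```python
def review_flag(probabilities, threshold=0.85):
    """Return the top prediction, its confidence, and a flag when confidence is low."""
    action, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    return action, confidence, confidence < threshold

probs = {"refill_rx": 0.55, "order_labs": 0.30, "refer_cardiology": 0.15}
action, confidence, needs_review = review_flag(probs)
print(action, confidence, needs_review)  # refill_rx 0.55 True
```

In the interface, such a flag could drive one of the subtle visual cues discussed earlier, a highlighted border rather than yet another interruptive alert.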

One of the popular terms that have sprung up over the past decade to describe health care reform is the “learning healthcare system.” A learning system requires learning on every level and at every stage. Because nobody likes the designs of current EHRs, clinicians should be happy to try a new EHR with a design based directly on their behavior.

About the author

Andy Oram

Andy is a writer and editor in the computer field. His editorial projects have ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. A correspondent for Healthcare IT Today, Andy also writes often on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM (Brussels), DebConf, and LibrePlanet. Andy participates in the Association for Computing Machinery's policy organization, named USTPC, and is on the editorial board of the Linux Professional Institute.
