
Google And Fitbit Partner On Wearables Data Options

Posted on May 7, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Fitbit and Google have announced plans to work together, in a deal intended to “transform the future of digital health and wearables.” While the notion of transforming digital health is hyperbole even for companies the size of Google and Fitbit, the pairing does have plenty of potential.

In a nutshell, Fitbit and Google expect to take on both consumer and enterprise health projects that integrate data from EMRs, wearables and other sources of patient information. Given the players involved, it’s hard to doubt that at least something neat will emerge from their union.

One of the first things the pair plans to do is use Google’s new Cloud Healthcare API to connect Fitbit data with EMRs. Of course, readers will know that it’s one thing to say this and another to actually do it, but gross oversimplifications aside, the idea is worth pursuing.
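To make that integration idea concrete, here is a hedged sketch (not a documented Fitbit or Google workflow) of how a day of step data might be shaped as a FHIR Observation, the resource format a FHIR store such as the one behind Google’s Cloud Healthcare API would expect. The patient ID and date are invented placeholders:

```python
import json

# Hypothetical sketch: representing a day of Fitbit step data as a FHIR
# Observation. The IDs and values are illustrative placeholders, not a
# documented Fitbit or Google schema.
def steps_to_fhir_observation(patient_id, date, step_count):
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{
            "coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/observation-category",
                "code": "activity",
            }]
        }],
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "55423-8",  # LOINC: number of steps in 24 hours, measured
                "display": "Number of steps in 24 hour Measured",
            }]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": date,
        "valueQuantity": {"value": step_count, "unit": "steps"},
    }

obs = steps_to_fhir_observation("example-patient", "2018-05-07", 9321)
print(json.dumps(obs, indent=2))
```

In practice, pushing such a resource into an EMR would also involve authentication, patient matching, and a real FHIR endpoint, none of which the announcement specifies.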

Also, using services such as those offered by Twine Health– a recent Fitbit acquisition — the two companies will work to better manage chronic conditions such as diabetes and hypertension. Twine offers a connected health platform which leverages Fitbit data to offer customized health coaching.

Of course, as part of the deal Fitbit is moving to the Google Cloud Platform, which will supply the expected cloud services and engineering support.

The two say that moving to the Cloud Platform will offer Fitbit advanced security capabilities, which should help speed the growth of its Fitbit Health Solutions business. They also expect to make inroads in population health analysis. For its part, Google notes that it will bring its AI, machine learning capabilities and predictive analytics algorithms to the table.

It might be worth a small caution here. Google makes a point of saying it is “committed” to meeting HIPAA standards, and that most Google Cloud products already do. That “most” qualifier would make me a little nervous as a provider, though I understand why such niceties get overlooked when big deals are afoot. Fair warning: when a company makes general statements like this about meeting HIPAA standards, it probably already employs security standards that are likely stronger than HIPAA’s. However, it also probably doesn’t comply with HIPAA itself, since HIPAA is about more than security: it requires a contractual relationship between provider and business associate, along with the liability that comes with being a business associate.

Anyway, to round out all of this good stuff, Fitbit and Google said they expect to “innovate and transform” the future of wearables, pairing Fitbit’s brand, community, data and high-profile devices with Google’s extreme data management and cloud capabilities.

You know folks, it’s not that I don’t think this is interesting. I wouldn’t be writing about it if I didn’t. But I do think it’s worth pointing out how little this news announcement really says.

Yes, I realize that when partnerships begin, they are by definition all big ideas and plans. But when giants like Google, much less Fitbit, have to fall back on words like innovate and transform (yawn!), the whole thing is still pretty speculative. Just sayin’.

Machine Learning and AI in Healthcare – #HITsm Chat Topic

Posted on February 28, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

We’re excited to share the topic and questions for this week’s #HITsm chat happening Friday, 3/2 at Noon ET (9 AM PT). This week’s chat will be hosted by Corinne Stroum (@healthcora) on the topic “Machine Learning and AI in Healthcare.”

Machine learning is hitting a furious pace in the consumer world, where AI estimates how long your food will take to arrive and targets you with the purchases you can’t resist. This week, we’ll discuss the implications of this technology as we translate it to the healthcare ecosystem.

Current machine learning topics of interest to healthcare range from adaptive, behavior-based care delivery pathways to the regulation of so-called “black box” systems: those that cannot easily explain the reasons behind their predictions.

Please join us for this week’s #HITsm chat as we discuss the following questions:

T1: The Machine Learning community is currently discussing FAT: Fairness, Accountability, & Transparency. What does this mean in healthIT? #HITsm

T2: How can machine learning integrate naturally in clinical and patient facing workflows? #HITsm

T3: What consumer applications of machine learning are best suited for transition to the healthcare setting? #HITsm

T4: The FDA regulates software AS a medical device and IN a medical device. How do you envision this distinction today, and do you foresee it changing? #HITsm

T5: What successes have you seen in healthcare machine learning? Are particular care settings better suited for ML? Where do you see that alignment? #HITsm

Bonus: Is there a place for machine learning black box predictions? #HITsm

Upcoming #HITsm Chat Schedule
3/9 – HIMSS Break – No #HITsm Chat

3/16 – TBD

3/23 – TBD

We look forward to learning from the #HITsm community! As always, let us know if you’d like to host a future #HITsm chat or if you know someone you think we should invite to host.

If you’re searching for the latest #HITsm chat, you can always find it and the full schedule of chats here.

Radiology Centers Poised To Adopt Machine Learning

Posted on February 8, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years.

As with most other sectors of the healthcare industry, it seems likely that radiology will be transformed by the application of AI technologies. Of course, given the euphoric buzz around AI it’s hard to separate talk from concrete results. Also, it’s not clear who’s going to pay for AI adoption in radiology and where it is best used. But clearly, AI use in healthcare isn’t going away.

This notion is underscored by a new study by Reaction Data suggesting that both technology vendors and radiology leaders believe that widespread use of AI in radiology is imminent. The researchers argue that radiology AI applications are a “have to have” rather than a novel experiment, though survey respondents seem a little less enthusiastic.

The study, which included 133 respondents, focused on the use of machine learning in radiology. Researchers connected with a variety of relevant professionals, including directors of radiology, radiologists, techs, chiefs of radiology and PACS administrators.

It’s worth noting that the survey population was a bit lopsided. For example, 45% of respondents were PACS admins, while the rest of the respondent types represented less than 10%. Also, 90% of respondents were affiliated with hospital radiology centers. Still, the results offer an interesting picture of how participants in the radiology business are looking at machine learning.

When asked how important machine learning was for the future of radiology, one-quarter of respondents said that it was extremely important, and another 59% said it was very or somewhat important. When the data was sorted by job title, it showed that roughly 90% of imaging directors said that machine learning would prove very important to radiology, followed by just over 75% of radiology chiefs. Radiology managers came in at around 60%. Clearly, the majority of radiology leaders surveyed see a future here.

About 90% of radiology chiefs were extremely familiar with machine learning, as were 75% of techs and roughly 60% of radiologists. A bit counterintuitively, less than 10% of PACS administrators reported being that familiar with the technology, though this does follow from earlier results indicating that only about half of them were enthusiastic about machine learning’s importance.

All of this is fine, but adoption is where the rubber meets the road. Reaction Data found that 15% of respondents said they’d been using machine learning for a while and 8% said they’d just gotten started.

Many more centers were preparing to jump in. Twelve percent reported that they were planning on adopting machine learning within the next 12 months, 26% of respondents said they were 1 to 2 years away from adoption and another 24% said they were 3+ years out.  Just 16% said they don’t think they’ll ever use machine learning in their radiology center.

For those who do plan to implement machine learning, top uses include analyzing lung imaging (66%), chest x-rays (62%), breast imaging (62%), bone imaging (41%) and cardiovascular imaging (38%). Meanwhile, among those who are actually using machine learning in radiology, breast imaging is by far the most common use, with 75% of respondents saying they used it in this case.

Clearly, applying the use of machine learning or other AI technologies will be tricky in any sector of medicine. However, if the survey results are any indication, the bulk of radiology centers are prepared to give it a shot.

A Learning EHR for a Learning Healthcare System

Posted on January 24, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Can the health care system survive the adoption of electronic health records? When the HITECH act mandated the installation of EHRs in 2009, we all hoped they would propel hospitals and clinics into a 21st-century consciousness. Instead, EHRs threaten to destroy those who have adopted them: the doctors whose work environment they degrade and the hospitals that they are pushing into bankruptcy. But the revolution in artificial intelligence that’s injecting new insights into many industries could also create radically different EHRs.

Here I define AI as software that, instead of dictating what a computer system should do, undergoes a process of experimentation and observation that creates a model to control the system, hopefully with far greater sophistication, personalization, and adaptability. Breakthroughs achieved in AI over the past decade now enable things that seemed impossible a bit earlier, such as voice interfaces that can both recognize and produce speech.

AI has famously been used by IBM Watson to make treatment recommendations. Analyses of big data (which may or may not qualify as AI) have saved hospitals large sums of money and even–finally, what we’ve been waiting for!–made patients healthier. But I’m talking in this article about a particular focus: the potential for changing the much-derided EHR. As many observers have pointed out, current EHRs are mostly billion-dollar file cabinets in electronic form. That epithet doesn’t even characterize them well enough–imagine instead a file cabinet that repeatedly screamed at you to check what you’re doing as you thumb through the papers.

How can AI create a new electronic health record? Major vendors have announced virtual assistants (See also John’s recent interview with MEDITECH which mentions their interest in virtual assistants) to make their interfaces more intuitive and responsive, so there is hope that they’re watching other industries and learning from machine learning. I don’t know what the vendors are basing these assistants on, but in this article I’ll describe how some vanilla AI techniques could be applied to the EHR.

How a Learning EHR Would Work

An AI-based health record would start with the usual dashboard-like interface. Each record consists of hundreds of discrete pieces of data, such as age, latest blood pressure reading, a diagnosis of chronic heart failure, and even ZIP code and family status–important public health indicators. Each field of data would be called a feature in traditional AI. The goal is to find which combination of features–and their values, such as 75 for age–most accurately predicts what a clinician does with the EHR. With each click or character typed, the AI model looks at all the features, discards the bulk of them that are not useful, and uses the rest to present the doctor with fields and information likely to be of value.
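As a rough illustration of the idea (entirely synthetic data and invented field labels, not any vendor’s actual model), a standard classifier can learn from logged behavior which form a clinician tends to open next:

```python
# A minimal sketch: treat each chart form as a label and learn, from
# synthetic usage logs, which form a clinician is likely to open next
# given patient features. A real EHR would log far richer context.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
age = rng.integers(20, 90, n)
has_chf = rng.integers(0, 2, n)      # chronic heart failure diagnosis
has_cancer = rng.integers(0, 2, n)

# Synthetic rule standing in for observed clinician behavior:
# heart patients -> medication form (1), cancer patients -> surgery form (2),
# otherwise -> general notes (0).
next_form = np.where(has_chf == 1, 1, np.where(has_cancer == 1, 2, 0))

X = np.column_stack([age, has_chf, has_cancer])
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, next_form)

# Suggest which form to surface for a 75-year-old heart-failure patient.
pred = int(model.predict([[75, 1, 0]])[0])
print(pred)
```

A deployed system would retrain continuously on each clinician’s own clicks, which is the “learning” part of a learning EHR.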

The EHR will probably learn that the forms pulled up by a doctor for a heart patient differ from those pulled up for a cancer patient. One case might focus on behavior, another on surgery and medication. Clinicians certainly behave differently in the hospital from how they behave in their home offices, or even how they behave in another hospital across town with different patient demographics. A learning EHR will discover and adapt to these differences, while also capitalizing on the commonalities in the doctor’s behavior across all settings, as well as how other doctors in the practice behave.

Clinicians like to say that every patient is different: well, with AI tracking behavior, the interface can adapt to every patient.

AI can also make use of messy and incomplete data, the well-known weaknesses of health care. But it’s crucial, to maximize predictive accuracy, for the AI system to have access to as many fields as possible. Privacy rules, however, dictate that certain fields be masked and others made fuzzy (for instance, specifying age as a range from 70 to 80 instead of precisely 75). Although AI can still make use of such data, it might be possible to provide more precise values through data sharing agreements strictly stipulating that the data be used only to improve the EHR–not for competitive strategizing, marketing, or other frowned-on exploitation.
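A small sketch of the masking and fuzzing described above, with invented field names: direct identifiers are dropped and age is generalized to a decade range before the data leaves the organization.

```python
# Illustrative de-identification: drop direct identifiers and generalize
# age to a range. Field names here are invented for the example.
def deidentify(record, drop=("name", "ssn"), fuzz_age=True):
    out = {k: v for k, v in record.items() if k not in drop}
    if fuzz_age and "age" in out:
        lo = (out["age"] // 10) * 10
        out["age"] = f"{lo}-{lo + 10}"   # e.g. 75 becomes "70-80"
    return out

record = {"name": "Jane Doe", "ssn": "000-00-0000", "age": 75, "zip": "02139"}
print(deidentify(record))
```

As the paragraph notes, a model can still learn from the fuzzed range, just less precisely than from the exact value.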

A learning EHR would also be integrated with other innovations that increase available data and reduce labor–for instance, devices worn by patients to collect vital signs and exercise habits. This could free up doctors to spend less time collecting statistics and more time treating the patient.

Potential Impacts of AI-Based Records

What we hope for is interfaces that give the doctor just what she needs, when she needs it. A helpful interface includes autocompletion for data she enters (one feature of a mobile solution called Modernizing Medicine, which I profiled in an earlier article), clear and consistent displays, and prompts that are useful instead of distracting.

Abrupt and arbitrary changes to interfaces can be disorienting and create errors. So perhaps the EHR will keep the same basic interface but use cues such as changes in color or highlighted borders to suggest to the doctor what she should pay attention to. Or it could occasionally display a dialog box asking the clinician whether she would like the EHR to upgrade and streamline its interface based on its knowledge of her behavior. This intervention might be welcome because a learning EHR should be able to drastically reduce the number of alerts that interrupt the doctors’ work.

Doctors’ burdens should be reduced in other ways too. Current blind and dumb EHRs require doctors to enter the same information over and over, and even to resort to the dangerous practice of copy and paste. Naturally, observers who write about this problem take the burden off of the inflexible and poorly designed computer systems, and blame the doctors instead. But doing repetitive work for humans is the original purpose of computers, and what they’re best at doing. Better design will make dual entries (and inconsistent records) a thing of the past.

Liability

Current computer vendors disclaim responsibility for errors, leaving it up to the busy doctor to verify that the system carried out the doctor’s intentions accurately. Unfortunately, it will be a long time (if ever) before AI-driven systems are accurate enough to give vendors the confidence to take on risk. However, AI systems have an advantage over conventional ones by assigning a confidence level to each decision they make. Therefore, they could show the doctor how much the system trusts itself, and a high degree of doubt could let the doctor know she should take a closer look.
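The confidence-level idea is easy to sketch: most scikit-learn classifiers expose predicted probabilities, and a simple threshold can flag low-confidence suggestions for closer review. The data here is synthetic and the threshold arbitrary.

```python
# Sketch of surfacing model confidence: flag any suggestion whose top
# class probability falls below a review threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 0, 1, 1, 1])
model = LogisticRegression().fit(X, y)

def suggest(x, threshold=0.8):
    proba = model.predict_proba([[x]])[0]
    label = int(proba.argmax())
    confident = bool(proba.max() >= threshold)
    return label, round(float(proba.max()), 2), confident

print(suggest(5.0))   # far from the decision boundary: should be confident
print(suggest(2.5))   # right at the boundary: should be flagged for review
```

In the EHR scenario, the flagged cases are exactly where the interface would prompt the doctor to take a closer look.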

One of the popular terms that have sprung up over the past decade to describe health care reform is the “learning healthcare system.” A learning system requires learning on every level and at every stage. Because nobody likes the designs of current EHRs, clinicians should be happy to try a new EHR with a design based directly on their behavior.

Key Articles in Health IT from 2017 (Part 2 of 2)

Posted on January 4, 2018 | Written By

Andy Oram is an editor at O'Reilly Media who specializes in open source, software engineering, and health IT.

The first part of this article set a general context for health IT in 2017 and started through the year with a review of interesting articles and studies. We’ll finish the review here.

A thoughtful article suggests a positive approach toward health care quality. The author stresses the value of organic change, although using data for accountability has value too.

An article extolling digital payments actually said more about the out-of-control complexity of the US reimbursement system. It may or may not be coincidental that her article appeared one day after the CommonWell Health Alliance announced an API whose main purpose seems to be to facilitate payment and other data exchanges related to law and regulation.

A survey by KLAS asked health care providers what they want in connected apps. Most apps currently just display data from a health record.

A controlled study revived the concept of Health Information Exchanges as stand-alone institutions, examining the effects of emergency departments using one HIE in New York State.

In contrast to many leaders in the new Administration, Dr. Donald Rucker received positive comments upon acceding to the position of National Coordinator. More alarm was raised about the appointment of Scott Gottlieb as head of the FDA, but a later assessment gave him high marks for his first few months.

Before Dr. Gottlieb got there, the FDA was already loosening up. The 21st Century Cures Act instructed it to keep its hands off many health-related digital technologies. After kneecapping consumer access to genetic testing and then allowing it back into the ring in 2015, the FDA advanced consumer genetics another step this year with approval for 23andMe tests about risks for seven diseases. A close look at another DNA site’s privacy policy, meanwhile, warns that their use of data exploits loopholes in the laws and could end up hurting consumers. Another critique of the Genetic Information Nondiscrimination Act has been written by Dr. Deborah Peel of Patient Privacy Rights.

Little noticed was a bill authorizing the FDA to be more flexible in its regulation of digital apps. Shortly after, the FDA announced its principles for approving digital apps, stressing good software development practices over clinical trials.

No improvement has been seen in the regard clinicians have for electronic records. Subjective reports condemned the notorious number of clicks required. A study showed they spend as much time on computer work as they do seeing patients. Another study found the ratio to be even worse. Shoving the job onto scribes may introduce inaccuracies.

The time spent might actually pay off if the resulting data could generate new treatments, increase personalized care, and lower costs. But the analytics that are critical to these advances have stumbled in health care institutions, in large part because of the perennial barrier of interoperability. But analytics are showing scattered successes, being used to:

Deloitte published a guide to implementing health care analytics. And finally, a clarion signal that analytics in health care has arrived: WIRED covers it.

A government cybersecurity report warns that health technology will likely soon contribute to the stream of breaches in health care.

Dr. Joseph Kvedar identified fruitful areas for applying digital technology to clinical research.

The Government Accountability Office, terror of many US bureaucracies, came out with a report criticizing the sloppiness of quality measures at the VA.

A report by leaders of the SMART platform listed barriers to interoperability and the use of analytics to change health care.

To improve the lower outcomes seen by marginalized communities, the NIH is recruiting people from those populations to trust the government with their health data. A policy analyst calls on digital health companies to diversify their staff as well. Google’s parent company, Alphabet, is also getting into the act.

Specific technologies

Digital apps are part of most modern health efforts, of course. A few articles focused on the apps themselves. One study found that digital apps can improve depression symptoms. Another found that an app can help with ADHD.

Lots of intriguing devices are being developed:

Remote monitoring and telehealth have also been in the news.

Natural language processing and voice interfaces are becoming a critical part of spreading health care:

Facial recognition is another potentially useful technology. It can replace passwords or devices to enable quick access to medical records.

Virtual reality and augmented reality seem to have some limited applications to health care. They are useful foremost in education, but also for pain management, physical therapy, and relaxation.

A number of articles hold out the tantalizing promise that interoperability headaches can be cured through blockchain, the newest hot application of cryptography. But one analysis warned that blockchain will be difficult and expensive to adopt.

3D printing can be used to produce models for training purposes as well as surgical tools and implants customized to the patient.

A number of other interesting companies in digital health can be found in a Fortune article.

We’ll end the year with a news item similar to one that began the article: serious good news about the ability of Accountable Care Organizations (ACOs) to save money. I would also like to mention three major articles of my own:

I hope this review of the year’s articles and studies in health IT has helped you recall key advances or challenges, and perhaps flagged some valuable topics for you to follow. 2018 will continue to be a year of adjustment to new reimbursement realities touched off by the tax bill, so health IT may once again languish somewhat.

Machine Learning, Data Science, AI, Deep Learning, and Statistics – It’s All So Confusing

Posted on November 30, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network.

It seems like these days every healthcare IT company out there is saying they’re doing machine learning, AI, deep learning, etc. So many companies are using these terms that they’ve started to lose meaning. The problem is that people are using these labels regardless of whether they really apply. Plus, we all have different definitions for these terms.

As I search to understand the differences myself, I found this great tweet from Ronald van Loon that looks at this world and tries to better define it:

In that tweet, Ronald also links to an article that looks at some of the differences. I liked this part he took from Quora:

  • AI (Artificial intelligence) is a subfield of computer science, that was created in the 1960s, and it was (is) concerned with solving tasks that are easy for humans, but hard for computers. In particular, a so-called Strong AI would be a system that can do anything a human can (perhaps without purely physical things). This is fairly generic, and includes all kinds of tasks, such as planning, moving around in the world, recognizing objects and sounds, speaking, translating, performing social or business transactions, creative work (making art or poetry), etc.
  • Machine learning is concerned with one aspect of this: given some AI problem that can be described in discrete terms (e.g. out of a particular set of actions, which one is the right one), and given a lot of information about the world, figure out what is the “correct” action, without having the programmer program it in. Typically some outside process is needed to judge whether the action was correct or not. In mathematical terms, it’s a function: you feed in some input, and you want it to produce the right output, so the whole problem is simply to build a model of this mathematical function in some automatic way. To draw a distinction with AI, if I can write a very clever program that has human-like behavior, it can be AI, but unless its parameters are automatically learned from data, it’s not machine learning.
  • Deep learning is one kind of machine learning that’s very popular now. It involves a particular kind of mathematical model that can be thought of as a composition of simple blocks (function composition) of a certain type, and where some of these blocks can be adjusted to better predict the final outcome.
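The “composition of simple blocks” in that deep learning definition can be made concrete in a few lines of NumPy: a two-layer network is literally one function applied to the output of another, with adjustable weights. This toy uses random, untrained weights purely to show the structure.

```python
# Deep learning as function composition: each "block" is a linear map
# followed by a nonlinearity, and the network chains blocks together.
import numpy as np

rng = np.random.default_rng(42)

def layer(x, W, b):
    # One simple block: linear transform, then a tanh nonlinearity.
    return np.tanh(x @ W + b)

# Two composed blocks with randomly initialized (not yet learned) weights.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

def network(x):
    return layer(layer(x, W1, b1), W2, b2)   # composition of blocks

out = network(np.ones((1, 3)))
print(out.shape)  # (1, 2)
```

Training, which this sketch omits, is the process of adjusting W1, b1, W2, b2 so the composed function better predicts the desired output.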

Is that clear for you now? Would you suggest different definitions? Where do you see people using these terms correctly and where do you see them using them incorrectly?

Can Machine Learning Tame Healthcare’s Big Data?

Posted on September 20, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years.

Big data is both a blessing and a curse. The blessing is that if we use it well, it will tell us important things we don’t know about patient care processes, clinical improvement, outcomes and more. The curse is that if we don’t use it, we’ve got a very expensive and labor-hungry boondoggle on our hands.

But there may be hope for progress. One article I read today suggests that another technology may hold the key to unlocking these blessings — that machine learning may be the tool which lets us harvest the big data fields. The piece, whose writer, oddly enough, was cited only as “Mauricio,” lead cloud expert at Cloudwards.net, argues that machine learning is “the most effective way to excavate buried patterns in the chunks of unstructured data.” While I am an HIT observer rather than a techie, what limited tech knowledge I possess suggests that machine learning is going to play an important role in the future of taming big data in healthcare.

In the piece, Mauricio notes that big data is characterized by the high volume of data, including both structured and non-structured data, the high velocity of data flowing into databases every working second, the variety of data, which can range from texts and email to audio to financial transactions, complexity of data coming from multiple incompatible sources and variability of data flow rates.

Though his is a general analysis, I’m sure we can agree that healthcare big data specifically matches his description. I don’t know if those of you reading this include wild cards like social media content or video in your big data repositories, but even if you don’t, you may well in the future.

Anyway, for the purposes of this discussion, let’s summarize by saying that in this context, big data isn’t just made of giant repositories of relatively normalized data, it’s a whirlwind of structured and unstructured data in a huge number of formats, flooding into databases in spurts, trickles and floods around the clock.

To Mauricio, an obvious choice for extracting value from this chaos is machine learning, which he defines as a data analysis method that automates extrapolated model-building algorithms. In machine learning models, systems adapt independently, without any human interaction, automatically applying customized algorithms and mathematical calculations to big data. “Machine learning offers a deeper insight into collected data and allows the computers to find hidden patterns which human analysts are bound to miss,” he writes.
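A toy illustration of that “hidden patterns” claim: unsupervised clustering can recover groups from unlabeled data without a human specifying the rule. The data here is synthetic, with two well-separated groups baked in.

```python
# k-means clustering finds two groups in unlabeled synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
group_b = rng.normal(loc=5.0, scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Each well-separated group should land entirely in one cluster.
print(len(set(labels[:50])), len(set(labels[50:])))
```

Real healthcare data is of course messier than this, which is exactly why the silo and normalization problems discussed below matter.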

According to the author, there are already machine learning models in place which help predict the appearance of genetically-influenced diseases such as diabetes and heart disease. Other possibilities for machine learning in healthcare – which he doesn’t mention but are referenced elsewhere – include getting a handle on population health. After all, an iterative learning technology could be a great choice for making predictions about population trends. You can probably think of several other possibilities.

Now, like many other industries, healthcare suffers from a data silo problem, and we’ll have to address that issue before we create the kind of multi-source, multi-format data pool that Mauricio envisions. Leveraging big data effectively will also require people to cooperate across departmental and even organizational boundaries, as John Lynn noted in a post from last year.

Even so, it’s good to identify tools and models that can help get the technical work done, and machine learning seems promising. Have any of you experimented with it?

The Value of Machine Learning in Value-based Care

Posted on August 4, 2016 | Written By

The following is a guest blog post by Mary Hardy, Vice President of Healthcare for Ayasdi.

Variation is a natural element in most healthcare delivery. After all, every patient is unique. But unwarranted clinical variation—the kind that results from a lack of systems and collaboration or the inappropriate use of care and services—is another issue altogether.

Healthcare industry thought leaders have called for the reduction of such unwarranted variation as the key to improving the quality and decreasing the cost of care. They have declared, quite rightly, that the quality of care an individual receives should not depend on geography. In response, hospitals throughout the United States are taking on the significant challenge of understanding and managing this variation.

Most hospitals recognize that the ability to distill the right insights from patient data is the catalyst for eliminating unwarranted clinical variation and is essential to implementing care models based on value. However, the complexity of patient data—a complexity that will only increase with the impending onslaught of data from biometric and personal fitness devices—can be overwhelming to even the most advanced organizations. There aren’t enough data scientists or analysts to make sense of the exponentially growing data sets within each organization.

Enter machine learning. Machine learning applications combine algorithms from computational biology and other disciplines to find patterns within billions of data points. The power of these algorithms enables organizations to uncover the evidence-based insights required for success in the value-based care environment.

Machine Learning and the Evolutionary Leap in Clinical Pathway Development
Since the 1990s, provider organizations have attempted to curb unwarranted variation by developing clinical pathways. A multi-disciplinary team of providers uses peer-reviewed literature and patient population data to develop and validate best-practice protocols and guidance for specific conditions, treatments, and outcomes.

However, the process is burdened by significant limitations. Pathways often require months or years to research, build, and validate. Additionally, today’s clinical pathways are typically one-size-fits-all. Health systems that have the resources to do so often employ their own experts, who review research, pull data, run tables and come to a consensus on the ideal clinical pathway, but are still constrained by the experts’ inability to make sense of billions of data points.

Additionally, once the clinical pathway has been established, hospitals have few resources for tracking the care team’s adherence to the agreed-upon protocol. This alone is enough to derail years of efforts to reduce unwarranted variation.

Machine learning is the evolutionary leap in clinical pathway development and adherence. Acceleration is certainly a positive: high-performance machines and algorithms can examine complex, continuously growing data far faster, and capture insights more comprehensively, than traditional or homegrown analytics tools. (Imagine reducing the development of a clinical pathway from months or years to weeks or days.)

But the true value of machine learning is enabling provider organizations to leverage patient population data from their own systems of record to develop clinical pathways that are customized to the organization’s processes, demographics, and clinicians.

Additionally, machine learning applications empower organizations to precisely track care team adherence, improving communication and organizational effectiveness. By guiding clinicians to follow best practices through each step of care delivery, clinical pathways that are rooted in machine learning help ensure that all patients receive the same level of high-quality care at the lowest possible cost.
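As a rough illustration of what adherence tracking might look like in code, here is a hedged Python sketch. The pathway steps and the scoring scheme are hypothetical, invented for this example, and do not represent any vendor's actual implementation.

```python
# Hypothetical adherence check: compare the steps actually recorded for a
# patient against the agreed-upon pathway. Step names are invented.
PATHWAY = ["pre-op education", "prophylactic antibiotics",
           "surgery", "day-1 ambulation", "discharge planning"]

def adherence(recorded, pathway=PATHWAY):
    """Return (fraction of pathway steps completed, whether the completed
    steps occurred in the pathway's prescribed order)."""
    done = [step for step in pathway if step in recorded]
    completed = len(done) / len(pathway)
    positions = [recorded.index(step) for step in done]
    return completed, positions == sorted(positions)

# A patient who skipped day-1 ambulation: 4 of 5 steps completed, in order
print(adherence(["pre-op education", "prophylactic antibiotics",
                 "surgery", "discharge planning"]))  # → (0.8, True)
```

A production system would of course pull these events from the EMR and handle timing windows, exceptions, and documented deviations, but the core comparison, recorded care versus protocol, is this simple at heart.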

Machine Learning Proves its Value
St. Louis-based Mercy, one of the most innovative health systems in the world, used a machine-learning application to recreate and improve upon a clinical pathway for total knee replacement surgery.

Drawing from Mercy’s integrated electronic medical record (EMR), the application grouped data from a highly complex series of events related to the procedure and segmented it. It was then possible to adapt other methods from biology and signals processing to the problem of determining the optimal way to perform the procedure—which drugs, tests, implants and other processes contribute to that optimal outcome. It also was possible to link predictive machine learning methods like regression or classification to perform real-time pathway editing.

The application revealed that Mercy’s patients naturally divided into clusters or groups with similar outcomes. The primary metric of interest to Mercy as an indicator of high quality was length of stay (LOS). The system highlighted clusters of patients with the shortest LOS and quickly discerned what distinguished this cluster from patients with the longest LOS.
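Ayasdi's actual application is built on topological data analysis, which is not reproduced here; but the cluster-then-compare step described above can be sketched generically in Python with k-means on invented data. The cohort, the LOS numbers, and the drug flag below are all fabricated for illustration.

```python
# Generic stand-in for the clustering step described above: group patients
# by outcome-related features, then compare clusters on length of stay.
# Assumes scikit-learn and NumPy are installed; data is invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n = 300

# Column 0 = LOS in days, column 1 = 1 if a (hypothetical) pre-op drug was
# given. In this synthetic cohort, drug recipients skew toward shorter stays.
drug = rng.integers(0, 2, n)
los = np.where(drug == 1, rng.normal(2.0, 0.5, n), rng.normal(4.0, 0.8, n))
X = np.column_stack([los, drug])

# Partition the cohort into two clusters and summarize each one
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for k in range(2):
    members = X[labels == k]
    print(f"cluster {k}: mean LOS {members[:, 0].mean():.1f} days, "
          f"drug given in {members[:, 1].mean():.0%} of cases")
```

Run on data shaped like this, the short-LOS cluster surfaces with a much higher drug-administration rate, which is exactly the kind of "what distinguishes this cluster?" signal the Mercy analysis turned into a care-pathway insight.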

What this analysis revealed was an unforeseen and groundbreaking care pathway for high-quality total knee replacement. The common denominator among all patients with the shortest LOS and best outcomes was administration of pregabalin, a drug generally prescribed for nerve pain such as that which follows shingles. A group of four physicians had seen something in the medical literature that led them to believe that administering the drug prior to surgery would inhibit postoperative pain, reduce opiate usage and produce faster ambulation. It did.

This innovation was happening in Mercy’s own backyard, and it was undeniably a best practice—the data revealed that each of the best outcomes included administration of this drug. Using traditional approaches, it is highly unlikely that Mercy would have asked the question, “What if we use a shingles drug to improve total knee replacement?” The superior outcomes of four physicians would have remained hidden in a sea of complex data.

This single procedure was worth over $1 million per year to Mercy in direct costs.

What Mercy’s experience demonstrates is that the most difficult, persistent and complex problems in healthcare can be resolved through data. The key lies in having the right tools to navigate that data’s complexity. The ability to determine at a glance what differentiates good outcomes from bad ones is incredibly powerful, and it will transform care delivery.

Mary Hardy is the Vice President of Healthcare for Ayasdi, a developer of machine intelligent applications for health systems and payer organizations.