
Key Articles in Health IT from 2017 (Part 2 of 2)

Posted on January 4, 2018 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The first part of this article set a general context for health IT in 2017 and started through the year with a review of interesting articles and studies. We’ll finish the review here.

A thoughtful article suggests a positive approach toward health care quality. The author stresses the value of organic change, although using data for accountability has value too.

An article extolling digital payments actually said more about the out-of-control complexity of the US reimbursement system. It may or may not be coincidental that the article appeared one day after the CommonWell Health Alliance announced an API whose main purpose seems to be to facilitate payment and other data exchanges related to law and regulation.

A survey by KLAS asked health care providers what they want in connected apps. Most apps currently just display data from a health record.

A controlled study revived the concept of Health Information Exchanges as stand-alone institutions, examining the effects of emergency departments using one HIE in New York State.

In contrast to many leaders in the new Administration, Dr. Donald Rucker received positive comments upon acceding to the position of National Coordinator. More alarm was raised about the appointment of Scott Gottlieb as head of the FDA, but a later assessment gave him high marks for his first few months.

Before Dr. Gottlieb got there, the FDA was already loosening up. The 21st Century Cures Act instructed it to keep its hands off many health-related digital technologies. After kneecapping consumer access to genetic testing and then allowing it back into the ring in 2015, the FDA advanced consumer genetics another step this year with approval for 23andMe tests about risks for seven diseases. A close look at another DNA site’s privacy policy, meanwhile, warns that their use of data exploits loopholes in the laws and could end up hurting consumers. Another critique of the Genetic Information Nondiscrimination Act has been written by Dr. Deborah Peel of Patient Privacy Rights.

Little noticed was a bill authorizing the FDA to be more flexible in its regulation of digital apps. Shortly after, the FDA announced its principles for approving digital apps, stressing good software development practices over clinical trials.

No improvement has been seen in the regard clinicians have for electronic records. Subjective reports condemned the notorious number of clicks required. A study showed they spend as much time on computer work as they do seeing patients. Another study found the ratio to be even worse. Shoving the job onto scribes may introduce inaccuracies.

The time spent might actually pay off if the resulting data could generate new treatments, increase personalized care, and lower costs. But the analytics that are critical to these advances have stumbled in health care institutions, in large part because of the perennial barrier of interoperability. Still, analytics are showing scattered successes, being used to:

Deloitte published a guide to implementing health care analytics. And finally, a clarion signal that analytics in health care has arrived: WIRED covers it.

A government cybersecurity report warns that health technology will likely soon contribute to the stream of breaches in health care.

Dr. Joseph Kvedar identified fruitful areas for applying digital technology to clinical research.

The Government Accountability Office, terror of many US bureaucracies, came out with a report criticizing the sloppiness of quality measures at the VA.

A report by leaders of the SMART platform listed barriers to interoperability and the use of analytics to change health care.

To improve the lower outcomes seen by marginalized communities, the NIH is recruiting people from those populations to trust the government with their health data. A policy analyst calls on digital health companies to diversify their staff as well. Google’s parent company, Alphabet, is also getting into the act.

Specific technologies

Digital apps are part of most modern health efforts, of course. A few articles focused on the apps themselves. One study found that digital apps can reduce symptoms of depression. Another found that an app can improve ADHD symptoms.

Lots of intriguing devices are being developed:

Remote monitoring and telehealth have also been in the news.

Natural language processing and voice interfaces are becoming a critical part of spreading health care:

Facial recognition is another potentially useful technology. It can replace passwords or devices to enable quick access to medical records.

Virtual reality and augmented reality seem to have some limited applications to health care. They are useful foremost in education, but also for pain management, physical therapy, and relaxation.

A number of articles hold out the tantalizing promise that interoperability headaches can be cured through blockchain, the newest hot application of cryptography. But one analysis warned that blockchain will be difficult and expensive to adopt.

3D printing can be used to produce models for training purposes as well as surgical tools and implants customized to the patient.

A number of other interesting companies in digital health can be found in a Fortune article.

We’ll end the year with a news item similar to one that began the article: serious good news about the ability of Accountable Care Organizations (ACOs) to save money. I would also like to mention three major articles of my own:

I hope this review of the year’s articles and studies in health IT has helped you recall key advances or challenges, and perhaps flagged some valuable topics for you to follow. 2018 will continue to be a year of adjustment to new reimbursement realities touched off by the tax bill, so health IT may once again languish somewhat.

Machine Learning, Data Science, AI, Deep Learning, and Statistics – It’s All So Confusing

Posted on November 30, 2017 | Written By

John Lynn is the Founder of the blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of and John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

It seems like these days every healthcare IT company out there is saying they’re doing machine learning, AI, deep learning, etc. So many companies are using these terms that they’ve started to lose meaning. The problem is that people are using these labels regardless of whether they really apply. Plus, we all have different definitions for these terms.

While searching to understand the differences myself, I found this great tweet from Ronald van Loon that looks at this world and tries to better define it:

In that tweet, Ronald also links to an article that looks at some of the differences. I liked this part he took from Quora:

  • AI (Artificial intelligence) is a subfield of computer science, that was created in the 1960s, and it was (is) concerned with solving tasks that are easy for humans, but hard for computers. In particular, a so-called Strong AI would be a system that can do anything a human can (perhaps without purely physical things). This is fairly generic, and includes all kinds of tasks, such as planning, moving around in the world, recognizing objects and sounds, speaking, translating, performing social or business transactions, creative work (making art or poetry), etc.
  • Machine learning is concerned with one aspect of this: given some AI problem that can be described in discrete terms (e.g. out of a particular set of actions, which one is the right one), and given a lot of information about the world, figure out what is the “correct” action, without having the programmer program it in. Typically some outside process is needed to judge whether the action was correct or not. In mathematical terms, it’s a function: you feed in some input, and you want it to produce the right output, so the whole problem is simply to build a model of this mathematical function in some automatic way. To draw a distinction with AI, if I can write a very clever program that has human-like behavior, it can be AI, but unless its parameters are automatically learned from data, it’s not machine learning.
  • Deep learning is one kind of machine learning that’s very popular now. It involves a particular kind of mathematical model that can be thought of as a composition of simple blocks (function composition) of a certain type, and where some of these blocks can be adjusted to better predict the final outcome.
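The distinction in the second bullet — parameters learned automatically from data, rather than hard-coded by a programmer — can be shown in miniature. This is a hedged sketch with made-up numbers, using only numpy’s least-squares solver; it is not drawn from the tweet or the Quora answer:

```python
import numpy as np

# Hypothetical training data: inputs x and noisy outputs y,
# secretly generated from roughly y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# "Machine learning" in miniature: fit the parameters (slope a,
# intercept b) of the model y = a*x + b from the data via least
# squares, instead of a programmer hard-coding a=2, b=1.
A = np.vstack([x, np.ones_like(x)]).T
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(a, b)  # close to the generating values 2 and 1
```

The same shape scales up: a deep-learning model is just a far larger parameterized function whose blocks are adjusted from data in the same spirit.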

Is that clear for you now? Would you suggest different definitions? Where do you see people using these terms correctly and where do you see them using them incorrectly?

Can Machine Learning Tame Healthcare’s Big Data?

Posted on September 20, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

Big data is both a blessing and a curse. The blessing is that if we use it well, it will tell us important things we don’t know about patient care processes, clinical improvement, outcomes and more. The curse is that if we don’t use it, we’ve got a very expensive and labor-hungry boondoggle on our hands.

But there may be hope for progress. One article I read today suggests that another technology may hold the key to unlocking these blessings — that machine learning may be the tool which lets us harvest the big data fields. The piece, whose writer, oddly enough, was cited only as “Mauricio,” lead cloud expert at, argues that machine learning is “the most effective way to excavate buried patterns in the chunks of unstructured data.” While I am an HIT observer rather than a techie, what limited tech knowledge I possess suggests that machine learning is going to play an important role in the future of taming big data in healthcare.

In the piece, Mauricio notes that big data is characterized by the high volume of data, including both structured and non-structured data, the high velocity of data flowing into databases every working second, the variety of data, which can range from texts and email to audio to financial transactions, complexity of data coming from multiple incompatible sources and variability of data flow rates.

Though his is a general analysis, I’m sure we can agree that healthcare big data specifically matches his description. I don’t know whether those of you reading this include wild cards like social media content or video in your big data repositories, but even if you don’t, you may well in the future.

Anyway, for the purposes of this discussion, let’s summarize by saying that in this context, big data isn’t just made of giant repositories of relatively normalized data, it’s a whirlwind of structured and unstructured data in a huge number of formats, flooding into databases in spurts, trickles and floods around the clock.

To Mauricio, an obvious choice for extracting value from this chaos is machine learning, which he defines as a data analysis method that automates extrapolated model-building algorithms. In machine learning models, systems adapt independently without human intervention, automatically applying customized algorithms and mathematical calculations to big data. “Machine learning offers a deeper insight into collected data and allows the computers to find hidden patterns which human analysts are bound to miss,” he writes.
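As a rough illustration of “finding hidden patterns” without a human writing the rules, here is a minimal k-means clustering pass on synthetic two-measurement “patient” records — my own sketch in numpy, not anything from Mauricio’s piece:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic records: two hidden groups in two measurements.
group_a = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(50, 2))
group_b = rng.normal(loc=[4.0, 4.0], scale=0.2, size=(50, 2))
data = np.vstack([group_a, group_b])

# Minimal k-means: alternately assign each point to its nearest
# centroid, then move each centroid to the mean of its points,
# until the centroids stop changing.
centroids = data[[0, 50]]  # deterministic start: one seed point from each region
for _ in range(100):
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    new_centroids = np.array([data[labels == k].mean(axis=0) for k in range(2)])
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids

print(centroids)  # the two rows end up near (1, 1) and (4, 4)
```

Nobody told the algorithm there were two groups at those locations; it recovered them from the data, which is the “hidden patterns” claim in its simplest form.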

According to the author, there are already machine learning models in place which help predict the appearance of genetically-influenced diseases such as diabetes and heart disease. Other possibilities for machine learning in healthcare – which he doesn’t mention but are referenced elsewhere – include getting a handle on population health. After all, an iterative learning technology could be a great choice for making predictions about population trends. You can probably think of several other possibilities.

Now, like many other industries, healthcare suffers from a data silo problem, and we’ll have to address that issue before we create the kind of multi-source, multi-format data pool that Mauricio envisions. Leveraging big data effectively will also require people to cooperate across departmental and even organizational boundaries, as John Lynn noted in a post from last year.

Even so, it’s good to identify tools and models that can help get the technical work done, and machine learning seems promising. Have any of you experimented with it?

The Value of Machine Learning in Value-based Care

Posted on August 4, 2016 | Written By

The following is a guest blog post by Mary Hardy, Vice President of Healthcare for Ayasdi.

Variation is a natural element in most healthcare delivery. After all, every patient is unique. But unwarranted clinical variation—the kind that results from a lack of systems and collaboration or the inappropriate use of care and services—is another issue altogether.

Healthcare industry thought leaders have called for the reduction of such unwarranted variation as the key to improving the quality and decreasing the cost of care. They have declared, quite rightly, that the quality of care an individual receives should not depend on geography. In response, hospitals throughout the United States are taking on the significant challenge of understanding and managing this variation.

Most hospitals recognize that the ability to distill the right insights from patient data is the catalyst for eliminating unwarranted clinical variation and is essential to implementing care models based on value. However, the complexity of patient data—a complexity that will only increase with the impending onslaught of data from biometric and personal fitness devices—can be overwhelming to even the most advanced organizations. There aren’t enough data scientists or analysts to make sense of the exponentially growing data sets within each organization.

Enter machine learning. Machine learning applications combine algorithms from computational biology and other disciplines to find patterns within billions of data points. The power of these algorithms enables organizations to uncover the evidence-based insights required for success in the value-based care environment.

Machine Learning and the Evolutionary Leap in Clinical Pathway Development
Since the 1990s, provider organizations have attempted to curb unwarranted variation by developing clinical pathways. A multi-disciplinary team of providers uses peer-reviewed literature and patient population data to develop and validate best-practice protocols and guidance for specific conditions, treatments, and outcomes.

However, the process is burdened by significant limitations. Pathways often require months or years to research, build, and validate. Additionally, today’s clinical pathways are typically one-size-fits-all. Health systems that have the resources to do so often employ their own experts, who review research, pull data, run tables and come to a consensus on the ideal clinical pathway, but are still constrained by the experts’ inability to make sense of billions of data points.

Additionally, once the clinical pathway has been established, hospitals have few resources for tracking the care team’s adherence to the agreed-upon protocol. This alone is enough to derail years of efforts to reduce unwarranted variation.

Machine learning is the evolutionary leap in clinical pathway development and adherence. Acceleration is certainly a positive. High-performance machines and algorithms can examine complex, continuously growing data elements far faster and capture insights more comprehensively than traditional or homegrown analytics tools. (Imagine reducing the development of a clinical pathway from months or years to weeks or days.)

But the true value of machine learning is enabling provider organizations to leverage patient population data from their own systems of record to develop clinical pathways that are customized to the organization’s processes, demographics, and clinicians.

Additionally, machine learning applications empower organizations to precisely track care team adherence, improving communication and organization effectiveness. By guiding clinicians to follow best practices through each step of care delivery, clinical pathways that are rooted in machine learning ensure that all patients receive the same level of high-quality care at the lowest possible cost.
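The adherence-tracking idea reduces to a simple comparison between the agreed-upon protocol and what was actually delivered. The sketch below uses illustrative step names I invented for the example — it is not Ayasdi’s product or API:

```python
# Hypothetical clinical pathway: the agreed-upon required steps.
PATHWAY = [
    "risk_assessment",
    "pre_op_medication",
    "surgery",
    "ambulation_day_1",
    "discharge_planning",
]

def missed_steps(care_record):
    """Return the pathway steps absent from a patient's care record,
    in pathway order."""
    delivered = set(care_record)
    return [step for step in PATHWAY if step not in delivered]

record = ["risk_assessment", "surgery", "discharge_planning"]
print(missed_steps(record))  # → ['pre_op_medication', 'ambulation_day_1']
```

A real system would of course pull these events from the EMR and handle ordering and timing constraints, but the core check — protocol minus delivered care — is this small.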

Machine Learning Proves its Value
St. Louis-based Mercy, one of the most innovative health systems in the world, used a machine-learning application to recreate and improve upon a clinical pathway for total knee replacement surgery.

Drawing from Mercy’s integrated electronic medical record (EMR), the application grouped data from a highly complex series of events related to the procedure and segmented it. It was then possible to adapt other methods from biology and signals processing to the problem of determining the optimal way to perform the procedure—which drugs, tests, implants and other processes contribute to that optimal outcome. It also was possible to link predictive machine learning methods like regression or classification to perform real-time pathway editing.

The application revealed that Mercy’s patients naturally divided into clusters or groups with similar outcomes. The primary metric of interest to Mercy as an indicator of high quality was length of stay (LOS). The system highlighted clusters of patients with the shortest LOS and quickly discerned what distinguished this cluster from patients with the longest LOS.

What this analysis revealed was an unforeseen and groundbreaking care pathway for high-quality total knee replacement. The common denominator among all patients with the shortest LOS and best outcomes was administration of pregabalin—a drug generally prescribed for nerve pain caused by shingles. A group of four physicians had seen something in the medical literature that led them to believe that administering the drug prior to surgery would inhibit postoperative pain, reduce opiate usage and produce faster ambulation. It did.
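The comparison described above — a treatment present in every short-stay record but absent from the long-stay ones — comes down to set arithmetic once patients are clustered. This toy sketch uses invented treatment names, not Mercy’s actual data:

```python
# Hypothetical treatment sets per patient, grouped by length of stay.
short_los = [
    {"spinal_block", "pregabalin", "early_ambulation"},
    {"pregabalin", "early_ambulation"},
    {"spinal_block", "pregabalin"},
]
long_los = [
    {"spinal_block", "opioid_pca"},
    {"opioid_pca", "early_ambulation"},
]

def distinguishing_treatments(good_records, bad_records):
    """Treatments in every 'good' record that appear in no 'bad' record."""
    common_to_good = set.intersection(*good_records)
    seen_in_bad = set.union(*bad_records)
    return common_to_good - seen_in_bad

print(distinguishing_treatments(short_los, long_los))  # → {'pregabalin'}
```

The hard part Mercy’s application solved is upstream of this — segmenting billions of EMR events into comparable patient groups — but once the clusters exist, surfacing the common denominator is exactly this kind of intersection-and-difference.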

This innovation was happening in Mercy’s own backyard, and it was undeniably a best practice—the data revealed that each of the best outcomes included administration of this drug. Using traditional approaches, it is highly unlikely that Mercy would have asked the question, “What if we use a shingles drug to improve total knee replacement?” The superior outcomes of four physicians would have remained hidden in a sea of complex data.

This single procedure was worth over $1 million per year to Mercy in direct costs.

What Mercy’s experience demonstrates is that the most difficult, persistent and complex problems in healthcare can resolve themselves through data. The key lies in having the right tools to navigate that data’s complexity. The ability to determine at a glance what differentiates good outcomes from bad outcomes is incredibly powerful—and will transform care delivery.

Mary Hardy is the Vice President of Healthcare for Ayasdi, a developer of machine intelligent applications for health systems and payer organizations.