
Key Articles in Health IT from 2017 (Part 2 of 2)

Posted on January 4, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The first part of this article set a general context for health IT in 2017 and started through the year with a review of interesting articles and studies. We’ll finish the review here.

A thoughtful article suggests a positive approach toward health care quality. The author stresses the value of organic change, although using data for accountability has value too.

An article extolling digital payments actually said more about the out-of-control complexity of the US reimbursement system. It may or may not be coincidental that the article appeared one day after the CommonWell Health Alliance announced an API whose main purpose seems to be to facilitate payment and other data exchanges related to law and regulation.

A survey by KLAS asked health care providers what they want in connected apps. Most apps currently just display data from a health record.

A controlled study revived the concept of Health Information Exchanges as stand-alone institutions, examining the effects of emergency departments using one HIE in New York State.

In contrast to many leaders in the new Administration, Dr. Donald Rucker received positive comments upon acceding to the position of National Coordinator. More alarm was raised about the appointment of Scott Gottlieb as head of the FDA, but a later assessment gave him high marks for his first few months.

Before Dr. Gottlieb got there, the FDA was already loosening up. The 21st Century Cures Act instructed it to keep its hands off many health-related digital technologies. After kneecapping consumer access to genetic testing and then allowing it back into the ring in 2015, the FDA advanced consumer genetics another step this year with approval for 23andMe tests about risks for seven diseases. A close look at another DNA site’s privacy policy, meanwhile, warns that their use of data exploits loopholes in the laws and could end up hurting consumers. Another critique of the Genetic Information Nondiscrimination Act has been written by Dr. Deborah Peel of Patient Privacy Rights.

Little noticed was a bill authorizing the FDA to be more flexible in its regulation of digital apps. Shortly after, the FDA announced its principles for approving digital apps, stressing good software development practices over clinical trials.

No improvement has been seen in the regard clinicians have for electronic records. Subjective reports condemned the notorious number of clicks required. A study showed they spend as much time on computer work as they do seeing patients. Another study found the ratio to be even worse. Shoving the job onto scribes may introduce inaccuracies.

The time spent might actually pay off if the resulting data could generate new treatments, increase personalized care, and lower costs. But the analytics that are critical to these advances have stumbled in health care institutions, in large part because of the perennial barrier of interoperability. Still, analytics are showing scattered successes in a range of applications.

Deloitte published a guide to implementing health care analytics. And finally, a clarion signal that analytics in health care has arrived: WIRED covers it.

A government cybersecurity report warns that health technology will likely soon contribute to the stream of breaches in health care.

Dr. Joseph Kvedar identified fruitful areas for applying digital technology to clinical research.

The Government Accountability Office, terror of many US bureaucracies, came out with a report criticizing the sloppiness of quality measures at the VA.

A report by leaders of the SMART platform listed barriers to interoperability and the use of analytics to change health care.

To improve the lower outcomes seen by marginalized communities, the NIH is recruiting people from those populations to trust the government with their health data. A policy analyst calls on digital health companies to diversify their staff as well. Google’s parent company, Alphabet, is also getting into the act.

Specific technologies

Digital apps are part of most modern health efforts, of course. A few articles focused on the apps themselves. One study found that digital apps can help relieve depression. Another found that an app can help with ADHD.

Lots of intriguing devices are being developed.

Remote monitoring and telehealth have also been in the news.

Natural language processing and voice interfaces are becoming a critical part of spreading health care.

Facial recognition is another potentially useful technology. It can replace passwords or devices to enable quick access to medical records.

Virtual reality and augmented reality seem to have some limited applications to health care. They are useful foremost in education, but also for pain management, physical therapy, and relaxation.

A number of articles hold out the tantalizing promise that interoperability headaches can be cured through blockchain, the newest hot application of cryptography. But one analysis warned that blockchain will be difficult and expensive to adopt.

3D printing can be used to produce models for training purposes as well as surgical tools and implants customized to the patient.

A number of other interesting companies in digital health can be found in a Fortune article.

We’ll end the year with a news item similar to one that began the article: serious good news about the ability of Accountable Care Organizations (ACOs) to save money. I would also like to mention three major articles of my own.

I hope this review of the year’s articles and studies in health IT has helped you recall key advances or challenges, and perhaps flagged some valuable topics for you to follow. 2018 will continue to be a year of adjustment to new reimbursement realities touched off by the tax bill, so health IT may once again languish somewhat.

Health IT Leaders Spending On Security, Not AI And Wearables

Posted on December 18, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

While breakout technologies like wearables and AI are hot, health system leaders don’t seem to be that excited about adopting them, according to a new study which reached out to more than 20 US health systems.

Nine out of 10 health systems said they increased their spending on cybersecurity technology, according to research by the Center for Connected Medicine (CCM) in partnership with the Health Management Academy.

However, many other emerging technologies don’t seem to be making the cut. For example, despite the publicity it’s received, two-thirds of health IT leaders said using AI was a low or very low priority. It seems that they don’t see a business model for using it.

The same goes for many other technologies that fascinate analysts and editors. For example, while many observers expect otherwise, few respondents were paying much attention to wearables (17%) or making any bets on mobile health apps (21%).

When it comes to telemedicine, hospitals and health systems noted that they were in a bind. Less than half said they receive reimbursement for virtual consults (39%) or remote monitoring (46%). Things may resolve next year, however. Seventy-one percent of those not getting paid right now expect to be reimbursed for such care in 2018.

Despite all of this pessimism about the latest emerging technologies, health IT leaders were somewhat optimistic about the benefits of predictive analytics, with more than half of respondents using or planning to begin using genomic testing for personalized medicine. The study reported that many of these episodes will be focused on oncology, anesthesia and pharmacogenetics.

What should we make of these results? After all, many seem to fly in the face of predictions industry watchers have offered.

Well, for one thing, it’s good to see that hospitals and health systems are engaging in long-overdue beefing up of their security infrastructure. As we’ve noted here in the past, hospital spending on cybersecurity has been meager at best.

Another thing is that while a few innovative hospitals are taking patient-generated health data seriously, many others are taking a rather conservative position here. While nobody seems to disagree that such data will change the business, it seems many hospitals are waiting for somebody else to take the risks inherent in investing in any new data scheme.

Finally, it seems that we are seeing a critical mass of influential hospitals that expect good things from telemedicine going forward. We are already seeing some large, influential academic medical centers treat virtual care as a routine part of their service offerings and a way to minimize gaps in care.

All told, it seems that at the moment, study respondents are less interested in sexy new innovations than the VCs showering them with money. That being said, it looks like many of these emerging strategies might pay off in 2018. It should be an interesting year.

Machine Learning, Data Science, AI, Deep Learning, and Statistics – It’s All So Confusing

Posted on November 30, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

It seems like these days every healthcare IT company out there is saying they’re doing machine learning, AI, deep learning, etc. So many companies are using these terms that they’ve started to lose meaning. The problem is that people are using these labels regardless of whether they really apply. Plus, we all have different definitions for these terms.

As I searched to understand the differences myself, I found this great tweet from Ronald van Loon that looks at this world and tries to better define it:

In that tweet, Ronald also links to an article that looks at some of the differences. I liked this part he took from Quora:

  • AI (Artificial intelligence) is a subfield of computer science, that was created in the 1960s, and it was (is) concerned with solving tasks that are easy for humans, but hard for computers. In particular, a so-called Strong AI would be a system that can do anything a human can (perhaps without purely physical things). This is fairly generic, and includes all kinds of tasks, such as planning, moving around in the world, recognizing objects and sounds, speaking, translating, performing social or business transactions, creative work (making art or poetry), etc.
  • Machine learning is concerned with one aspect of this: given some AI problem that can be described in discrete terms (e.g. out of a particular set of actions, which one is the right one), and given a lot of information about the world, figure out what is the “correct” action, without having the programmer program it in. Typically some outside process is needed to judge whether the action was correct or not. In mathematical terms, it’s a function: you feed in some input, and you want it to produce the right output, so the whole problem is simply to build a model of this mathematical function in some automatic way. To draw a distinction with AI, if I can write a very clever program that has human-like behavior, it can be AI, but unless its parameters are automatically learned from data, it’s not machine learning.
  • Deep learning is one kind of machine learning that’s very popular now. It involves a particular kind of mathematical model that can be thought of as a composition of simple blocks (function composition) of a certain type, and where some of these blocks can be adjusted to better predict the final outcome.
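To make that last distinction concrete, here is a toy sketch in Python: nothing below hard-codes the rule relating input to output; the model’s parameters are estimated automatically from example data. (The task and data are invented purely for illustration.)

```python
# Learn parameters from data instead of programming the rule in by hand.

def fit_line(xs, ys):
    """Least-squares fit of y = w*x + b -- the 'learned from data' part."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    return w, mean_y - w * mean_x

# Inputs and "correct" outputs, as judged by some outside process.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]   # roughly y = 2x, but we never say so

w, b = fit_line(xs, ys)
print(f"learned model: y = {w:.2f}*x + {b:.2f}")
```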

Is that clear for you now? Would you suggest different definitions? Where do you see people using these terms correctly and where do you see them using them incorrectly?

Health IT Continues To Drive Healthcare Leaders’ Agenda

Posted on October 23, 2017 | Written By

Anne Zieger

A new study laying out opportunities, challenges and issues in healthcare likely to emerge in 2018 demonstrates that health IT is very much top of mind for healthcare leaders.

The 2018 HCEG Top 10 list, which is published by the Healthcare Executive Group, was created based on feedback from executives at its 2017 Annual Forum in Nashville, TN. Participants included health plans, health systems and provider organizations.

The top item on the list was “Clinical and Data Analytics,” which the list describes as leveraging big data with clinical evidence to segment populations, manage health and drive decisions. The second-place slot was occupied by “Population Health Services Organizations,” which, it says, operationalize population health strategy and chronic care management, drive clinical innovation and integrate social determinants of health.

The list also included “Harnessing Mobile Health Technology,” which included improving disease management and member engagement in data collection/distribution; “The Engaged Digital Consumer,” which by its definition includes HSAs, member/patient portals and health and wellness education materials; and cybersecurity.

Other hot issues named by the group include value-based payments, cost transparency, total consumer health, healthcare reform and addressing pharmacy costs.

So, readers, do you agree with HCEG’s priorities? Has the list left off any important topics?

In my case, I’d probably add a few items to the list. For example, I may be getting ahead of the industry, but I’d argue that healthcare AI-related technologies might belong there. While there’s a whole separate article to be written here, in short, I believe that both AI-driven data analytics and consumer-facing technologies like medical chatbots have tremendous potential.

Also, I was surprised to see that care coordination improvements didn’t top respondents’ list of concerns. Admittedly, some of the list items might involve taking coordination to the next level, but the executives apparently didn’t identify it as a top priority.

Finally, as unsexy as the topic is for most, I would have thought that some form of health IT infrastructure spending or broader IT investment concerns might rise to the top of this list. Even if these executives didn’t discuss it, my sense from looking at multiple information sources is that providers are, and will continue to be, hard-pressed to allocate enough funds for IT.

Of course, if the executives involved can address even a few of their existing top 10 items next year, they’ll be doing pretty well. For example, we all know that providers’ ability to manage value-based contracting is minimal in many cases, so making progress would be worthwhile. Participants like hospitals and clinics still need time to get their act together on value-based care, and many are unlikely to be on top of things by 2018.

There are also challenges, like population health management, that involve ongoing processes rather than a destination. Providers will be struggling to address them well beyond 2018. That being said, it’d be great if healthcare execs could improve their results next year.

Nit-picking aside, HCEG’s Top 10 list is largely dead-on. The question is whether the industry will be able to step up and address all of these things. Fingers crossed!

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 3 of 3)

Posted on October 20, 2017 | Written By

Andy Oram

Earlier parts of this article set the stage for understanding what the Alexa Diabetes Challenge is trying to achieve and how some finalists interpreted the mandate. We examine three more finalists in this final section.

DiaBetty from the University of Illinois-Chicago

DiaBetty focuses on a single, important aspect of diabetes: the effect of depression on the course of the disease. This project, developed by the Department of Psychiatry at the University of Illinois-Chicago, does many of the things that other finalists in this article do–accepting data from EHRs, dialoguing with the individual, presenting educational materials on nutrition and medication, etc.–but with the emphasis on inquiring about mood and handling the impact that depression-like symptoms can have on behavior that affects Type 2 diabetes.

Olu Ajilore, Associate Professor and co-director of the CoNECt lab, told me that his department benefited greatly from close collaboration with bioengineering and computer science colleagues who, before DiaBetty, worked on another project that linked computing with clinical needs. Although they used some built-in capabilities of the Alexa, they may move to Lex or another AI platform and build a stand-alone device. Their next step is to develop reliable clinical trials, checking the effect of DiaBetty on health outcomes such as medication compliance, visits, and blood sugar levels, as well as cost reductions.

T2D2 from Columbia University

Just as DiaBetty explores the impact of mood on diabetes, T2D2 (which stands for “Taming Type 2 Diabetes, Together”) focuses on nutrition. Far more than sugar intake is involved in the health of people with diabetes. Elliot Mitchell, a PhD student who led the T2D2 team under Assistant Professor Lena Mamykina in the Department of Biomedical Informatics, told me that the balance of macronutrients (carbohydrates, fat, and protein) is important.

T2D2 is currently a prototype, developed as a combination of an Alexa Skill and a chatbot based on Lex. The Alexa Skills Kit handles voice interactions. Both the Skill and the chatbot communicate with a back end that handles accounts and logic. Although related Columbia University technology in diabetes self-management is used, both the NLP and the voice interface were developed specifically for the Alexa Diabetes Challenge. The T2D2 team included people from the disciplines of computer interaction, data science, nursing, and behavioral nutrition.

The user invokes Alexa to tell it blood sugar values and the contents of meals. T2D2, in response, offers recipe recommendations and other advice. Like many of the finalists in this article, it looks back at meals over time, sees how combinations of nutrients matched changes in blood sugar, and personalizes its food recommendations.

For each patient, before it gets to know that patient’s diet, T2D2 can make food recommendations based on what is popular in their ZIP code. It can change these as it watches the patient’s choices and records comments to recommendations (for instance, “I don’t like that food”).
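As a rough illustration of how such a cold-start scheme might work, here is a short Python sketch. T2D2’s actual algorithms and data are not public, so every name and data structure below is hypothetical.

```python
# Hypothetical cold-start recommender: fall back on foods popular in the
# patient's ZIP code until there is meal history to personalize from, and
# always respect recorded dislikes ("I don't like that food").

POPULAR_BY_ZIP = {
    "10027": ["rice and beans", "grilled chicken", "plantains"],
    "60612": ["grilled fish", "roasted vegetables", "lentil soup"],
}

def recommend(zip_code, disliked, history):
    """Personalize when history exists; otherwise use local popularity."""
    if history:   # history maps food -> how well blood sugar responded
        candidates = sorted(history, key=history.get, reverse=True)
    else:         # cold start for a new patient
        candidates = POPULAR_BY_ZIP.get(zip_code, [])
    return [food for food in candidates if food not in disliked]

# New patient: no meal history yet, one recorded dislike.
print(recommend("10027", disliked={"plantains"}, history={}))
```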

Data is also anonymized and aggregated for both recommendations and future research.

The care team and family caregivers are also involved, although less intensely than in some other finalists’ projects. The patient can offer caregivers a one-page report listing a plot of blood sugar by time and day for the previous two weeks, along with goals and progress made, and questions. The patient can also connect her account and share key medical information with family and friends, a feature called the Supportive Network.

The team’s next phase is to run studies to evaluate some of the assumptions they made when developing T2D2, and to improve it for eventual release into the field.

Sugarpod from Wellpepper

I’ll finish this article with the winner of the challenge, already covered by an earlier article. Since the publication of that article, according to the founder and CEO of Wellpepper, Anne Weiler, the company has integrated some of Sugarpod’s functions into a bathroom scale. When a person stands on the scale, it takes an image of their feet and uploads it to sites that both the individual and their doctor can view. A machine learning image classifier can check the photo for problems such as diabetic foot ulcers. The scale interface can also ask the patient for quick information such as whether they took their medication and what their blood sugar is. Extended conversations are avoided, under the assumption that people don’t want to have them in the bathroom. The company designed its experiences to be integrated throughout the person’s day: stepping on the scale and answering a few questions in the morning, interacting with the care plan on a mobile device at work, and checking notifications and messages with an Echo device in the evening.
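That morning weigh-in flow might look something like the sketch below. Wellpepper’s actual model, image handling, and thresholds are not public, so the classifier here is a crude stand-in and every name is invented.

```python
# Hypothetical sketch of a scale-based check-in: weigh, screen the foot
# image, collect two quick answers, and flag anything worth escalating.

def classify_foot_image(pixels):
    """Stand-in for a trained classifier returning an ulcer-risk score."""
    dark = sum(1 for p in pixels if p < 60)   # pretend dark spots = lesions
    return dark / len(pixels)

def morning_weigh_in(weight_kg, pixels, took_medication, blood_sugar):
    record = {
        "weight_kg": weight_kg,
        "ulcer_risk": classify_foot_image(pixels),
        "took_medication": took_medication,
        "blood_sugar": blood_sugar,
    }
    if record["ulcer_risk"] > 0.3:            # invented threshold
        record["flag"] = "share foot image with care team"
    return record

print(morning_weigh_in(82.5, [50, 200, 40, 180, 30], True, 6.2))
```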

Any machine that takes pictures can arouse worry when installed in a bathroom. While taking the challenge and talking to people with diabetes, Wellpepper learned to add a light that goes on when the camera is taking a picture.

This kind of responsiveness to patient representatives in the field will determine the success of each of the finalists in this challenge. They all strive for behavioral change through connected health, and this strategy is completely reliant on engagement, trust, and collaboration by the person with a chronic illness.

The potential of engagement through voice is just beginning to be tapped. There is evidence, for instance, that serious illnesses can be diagnosed by analyzing voice patterns. As we come up on the annual Connected Health Conference this month, I will be interested to see how many participating developers share the common themes that turned up during the Alexa Diabetes Challenge.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 2 of 3)

Posted on October 19, 2017 | Written By

Andy Oram

The first part of this article introduced the problems of computer interfaces in health care and mentioned some current uses for natural language processing (NLP) for apps aimed at clinicians. I also summarized the common goals, problems, and solutions I found among the five finalists in the Alexa Diabetes Challenge. This part of the article shows the particular twist given by each finalist.

My GluCoach from HCL America in Partnership With Ayogo

There are two levels from which to view My GluCoach. On one level, it’s an interactive tool exemplifying one of the goals I listed earlier–intense engagement with patients over daily behavior–as well as the theme of comprehensiveness. The interactions that My GluCoach offers were divided into three types by Abhishek Shankar, a Vice President at HCL Technologies America:

  • Teacher: the service can answer questions about diabetes and pull up stored educational materials

  • Coach: the service can track behavior by interacting with devices and prompt the patient to eat differently or go out for exercise. In addition to asking questions, a patient can set up Alexa to deliver alarms at particular times, a feature My GluCoach uses to deliver advice.

  • Assistant: the service can provide conveniences to the patient, such as ordering a cab to take her to an appointment.

On a higher level, My GluCoach fits into broader services offered to health care institutions by HCL Technologies as part of a population health program. In creating the service HCL partnered with Ayogo, which develops a mobile platform for patient engagement and tracking. HCL has also designed the service as a general health care platform that can be expanded over the next six to twelve months to cover medical conditions besides diabetes.

Other themes I discussed earlier, interactions with outside data and the use of machine learning, are key to My GluCoach. For its demo at the challenge, My GluCoach took data about exercise from a Fitbit. It can potentially work with any device that shares information, and HCL plans to integrate the service with common EHRs. As My GluCoach gets to know the individual who uses it over months and years, it can tailor its responses more and more intelligently to the learning style and personality of the patient.

Patterns of eating, medical compliance, and other data are not the only input to machine learning. Shankar pointed out that different patients require different types of interventions. Some simply want to be given concrete advice and told what to do. Others want to be presented with information and then make their own decisions. My GluCoach will hopefully adapt to whatever style works best for the particular individual. This affective response–together with a general tone of humor and friendliness–will win the trust of the individual.

PIA from Ejenta

PIA, which stands for “personal intelligent agent,” manages care plans, delivering information to the affected patients as well as their care teams and concerned relatives. It collects medical data and draws conclusions that allow it to generate alerts if something seems wrong. Patients can also ask PIA how they are doing, and the agent will respond with personalized feedback and advice based on what the agent has learned about them and their care plan.

I talked to Rachna Dhamija, who worked on a team that developed PIA as the founder and CEO of Ejenta. (The name Ejenta is a version of the word “agent” that entered the Bengali language as slang.) She said that the AI technology had been licensed from NASA, which had developed it to monitor astronauts’ health and other aspects of flights. Ejenta helped turn it into a care coordination tool with interfaces for the web and mobile devices at a major HMO to treat patients with chronic heart failure and high-risk pregnancies. Ejenta expanded their platform to include an Alexa interface for the diabetes challenge.

As a care management tool, PIA records targets such as glucose levels, goals, medication plans, nutrition plans, and action parameters such as how often to take measurements using the devices. Each caregiver, along with the patient, has his or her own agent, and caregivers can monitor multiple patients. The patient has very granular control over sharing, telling PIA which kind of data can be sent to each caretaker. Access rights must be set on the web or a mobile device, because allowing Alexa to be used for that purpose might let someone trick the system into thinking he was the patient.

Besides Alexa, PIA takes data from devices (scales, blood glucose monitors, blood pressure monitors, etc.) and from EHRs in a HIPAA-compliant method. Because the service cannot wake up Alexa, it currently delivers notifications, alerts, and reminders by sending a secure message to the provider’s agent. The provider can then contact the patient by email or mobile phone. The team plans to integrate PIA with an Alexa notifications feature in the future, so that PIA can proactively communicate with the patient via Alexa.

PIA goes beyond the standard rules for alerts, allowing alerts and reminders to be customized based on what it learns about the patient. PIA uses machine learning to discover what is normal activity (such as weight fluctuations) for each patient and to make predictions based on the data, which can be shared with the care team.
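A minimal sketch of that per-patient learning idea appears below: learn what “normal” fluctuation looks like from one patient’s own history, then alert only on readings outside it. The statistics and thresholds are illustrative, not Ejenta’s.

```python
# Flag readings that fall outside this particular patient's learned range,
# rather than applying one fixed rule to every patient.

from statistics import mean, stdev

def weight_alert(history, new_reading, z_threshold=2.5):
    """True if the new reading is far outside the patient's own baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False                      # no variation learned yet
    return abs(new_reading - mu) / sigma > z_threshold

history = [81.2, 81.5, 80.9, 81.1, 81.4, 81.0]  # this patient's normal, kg
print(weight_alert(history, 81.3))  # False: ordinary fluctuation
print(weight_alert(history, 85.0))  # True: notify the care team's agent
```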

The final section of this article covers DiaBetty, T2D2, and Sugarpod, the remaining finalists.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 1 of 3)

Posted on October 16, 2017 | Written By

Andy Oram

The leading pharmaceutical and medical company Merck, together with Amazon Web Services, has recently been exploring the potential health impacts of voice interfaces and natural language processing (NLP) through an Alexa Diabetes Challenge. I recently talked to the five finalists in this challenge. This article explores the potential of new interfaces to transform the handling of chronic disease, and what the challenge reveals about currently available technology.

Alexa, of course, is the ground-breaking system that brings everyday voice interaction with computers into the home. Most of its uses are trivial (you can ask about today’s weather or change channels on your TV), but one must not underestimate the immense power of combining artificial intelligence with speech, one of the most basic and essential human activities. The potential of this interface for disabled or disoriented people is particularly intriguing.

The diabetes challenge is a nice focal point for exploring the more serious contribution made by voice interfaces and NLP. Because of the alarming global spread of this illness, the challenge also presents immediate opportunities that I hope the participants succeed in productizing and releasing into the field. Using the challenge’s published criteria, the judges today announced Sugarpod from Wellpepper as the winner.

This article will list some common themes among the five finalists, look at the background about current EHR interfaces and NLP, and say a bit about the unique achievement of each finalist.

Common themes

Overlapping visions of goals, problems, and solutions appeared among the finalists I interviewed for the diabetes challenge:

  • A voice interface allows more frequent and easier interactions with at-risk individuals who have chronic conditions, potentially achieving the behavioral health goal of helping a person make the right health decisions on a daily or even hourly basis.

  • Contestants seek to integrate many levels of patient intervention into their tools: responding to questions, collecting vital signs and behavioral data, issuing alerts, providing recommendations, delivering educational background material, and so on.

  • Services in this challenge go far beyond interactions between Alexa and the individual. The systems commonly anonymize and aggregate data in order to perform analytics that they hope will improve the service and provide valuable public health information to health care providers. They also facilitate communication of crucial health data between the individual and her care team.

  • Given the use of data and AI, customization is a big part of the tools. They are expected to determine the unique characteristics of each patient’s disease and behavior, and adapt their advice to the individual.

  • In addition to Alexa’s built-in language recognition capabilities, Amazon provides the Lex service for sophisticated text processing. Some contestants used Lex, while others drew on other research they had done building their own natural language processing engines.

  • Alexa never initiates a dialog, merely responding when the user wakes it up. The device can present a visual or audio notification when new material is present, but it still depends on the user to request the content. Thus, contestants are using other channels to deliver reminders and alerts such as messaging on the individual’s cell phone or alerting a provider.

  • Alexa is not HIPAA-compliant, but may achieve compliance in the future. This would help health services turn their voice interfaces into viable products and enter the mainstream.

Some background on interfaces and NLP

The poor state of current computing interfaces in the medical field is no secret–in fact, it is one of the loudest and most insistent complaints by doctors, such as on sites like KevinMD. You can visit Healthcare IT News or JAMA regularly and read the damning indictments.

Several factors can be blamed for this situation, including unsophisticated electronic health records (EHRs) and arbitrary reporting requirements by Centers for Medicare & Medicaid Services (CMS). Natural language processing may provide one of the technical solutions to these problems. The NLP services by Nuance are already famous. An encouraging study finds substantial time savings through using NLP to enter doctors’ insights. And on the other end–where doctors are searching the notes they previously entered for information–a service called Butter.ai uses NLP for intelligent searches. Unsurprisingly, the American Health Information Management Association (AHIMA) looks forward to the contributions of NLP.

Some app developers are now exploring voice interfaces and NLP on the patient side. I covered two such companies, including the one that ultimately won the Alexa Diabetes Challenge, in another article. In general, developers using these interfaces hope to eliminate the fuss and abstraction in health apps that frustrate many consumers, thereby reaching new populations and interacting with them more frequently, with deeper relationships.

The next two parts of this article turn to each of the five finalists, to show the use they are making of Alexa.

Searching EMR For Risk-Related Words Can Improve Care Coordination

Posted on September 18, 2017 | Written By

Anne Zieger

Though healthcare organizations are working on the problem, they’re still not as good at care coordination as they should be. It’s already an issue and will only get worse under value-based care schemes, in which the ability to coordinate care effectively could be critical for providers.

Admittedly, there’s no easy way to solve care coordination problems, but new research suggests that basic health IT tools might be able to help. The researchers found that digging out important words from EMRs can help providers target patients needing extra care management and coordination.

The article, which appears in JMIR Medical Informatics, notes that most care coordination programs have a blind spot when it comes to identifying cases demanding extra coordination. “Care coordination programs have traditionally focused on medically complex patients, identifying patients that qualify by analyzing formatted clinical data and claims data,” the authors wrote. “However, not all clinically relevant data reside in claims and formatted data.”

For example, they say, relying on formatted records may cause providers to miss psychosocial risk factors such as social determinants of health, mental health disorder, and substance abuse disorders. “[This data is] less amenable to rapid and systematic data analyses, as these data are often not collected or stored as formatted data,” the authors note.

To address this issue, the researchers set out to identify psychosocial risk factors buried within a patient’s EHR using word recognition software. They used a tool known as the Queriable Patient Inference Dossier (QPID) to scan EHRs for terms describing high-risk conditions in patients already in care coordination programs.

After going through the review process, the researchers found 22 EHR-available search terms related to psychosocial high-risk status. When they were able to find nine or more of these terms in a patient’s EHR, this predicted that the patient would meet criteria for participation in a care coordination program. Presumably, this approach allowed care managers and clinicians to find patients who hadn’t been identified by existing care coordination outreach efforts.
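The rule is simple enough to sketch in a few lines of Python. The term list below is an invented stand-in (the study validated 22 specific terms), but the thresholding logic matches what the article describes.

```python
# Screen a patient's free-text notes for high-risk terms; flag the patient
# for care coordination when enough distinct terms appear.

HIGH_RISK_TERMS = {
    "homeless", "eviction", "substance abuse", "depression", "anxiety",
    "food insecurity", "missed appointment", "nonadherent", "isolation",
    # ...the real study used 22 validated terms
}

def flag_for_care_coordination(notes, threshold=9):
    """Return (flagged, matching terms) for one patient's notes."""
    text = notes.lower()
    hits = {term for term in HIGH_RISK_TERMS if term in text}
    return len(hits) >= threshold, sorted(hits)

flagged, found = flag_for_care_coordination(
    "Patient reports anxiety and food insecurity; history of eviction."
)
print(flagged, found)   # False here: only 3 of the 9 required terms
```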

I think this article is valuable, as it outlines a way to improve care coordination programs without leaping over tall buildings. Obviously, we’re going to see a lot more emphasis on harvesting information from structured data, tools like artificial intelligence, and natural language processing. That makes sense. After all, these technologies allow healthcare organizations to enjoy both the clear organization of structured data and analytical options available when examining pure data sets. You can have your cake and eat it too.

Still, it’s good to know that you can get meaningful information from EHRs using a comparatively simple tool. In this case, parsing patient medical records for a couple dozen keywords helped the authors find patients that might have otherwise been missed. This can only be good news.

Yes, there’s no doubt we’ll keep on pushing the limits of predictive analytics, healthcare AI, machine learning and other techniques for taming wild databases. In the meantime, it’s good to know that we can make incremental progress in improving care using simpler tools.

Analytics Take an Unusual Turn at PeraHealth

Posted on August 17, 2017 | Written By

Andy Oram

Data scientists in all fields have learned to take data from unusual places. You’d think that monitoring people in a hospital for changes in their conditions would be easier than other data-driven tasks, such as tracking planets in far-off solar systems, but in all cases some creativity is needed. That’s what PeraHealth, a surveillance system for hospital patients, found out while developing alerts for clinicians.

It’s remarkably hard to identify at-risk patients in hospitals, even with so many machines and staff busy monitoring them. For instance, a nurse on each shift may note in the patient’s record that certain vital signs are within normal range, and no one might notice that the vital signs are gradually trending worse and worse–until a crisis occurs.

PeraHealth identifies at-risk patients through analytics and dashboards that doctors and nurses can pull up. They can see trends over a period of several shifts, and quickly see which patients in the ward are the most at risk. PeraHealth is a tool for both clinical surveillance and communication.

Michael Rothman, co-founder and Chief Science Officer, personally learned the dangers of insufficient monitoring in 2003 when a low-risk operation on his mother led to complications and her unfortunate death. Rothman and his brother decided to make something positive from the tragedy. They got permission from the hospital to work there for three weeks, applying Michael’s background in math and data analysis (he has worked in the AI department of IBM’s Watson research labs, among other places) and his brother’s background in data visualization. Their goal, arguably naive: to find a single number that summarizes patient risk, and expose that information in a usable way to clinicians.

Starting with 70 patients from the cardiac unit, they built a statistical model that they tested repeatedly with 1,200 patients, 6,000 patients, and finally 25,000 patients. At first they hoped to identify extra data that the nurse could enter into the record, but the chief nurse made clear, in no uncertain terms, that the staff was already too busy and that collecting more data was out of the question. It came time to get creative with data that was already being collected and stored.

The unexpected finding was that vital signs were not a reliable basis for assessing a patient’s trends. Even though they’re “hard” (supposedly objective) data, they bounce around too much.

Instead of relying on just vital signs, PeraHealth also pulls in nursing assessments–an often under-utilized source of information. On each shift, a nurse records information on a dozen different physical systems as well as essential facts such as whether a patient stopped eating or was having trouble walking. It turns out that this sort of information reliably indicates whether there’s a problem. Many of the assessments are simple, yes/no questions.

Rothman analyzed hospital data to find variables that predicted risk. For instance, he compared the heart rates of 25,000 patients before they left the hospital and checked who lived for a year longer. The results formed a U-shaped curve, showing that heart rates above a certain level or below a certain level predicted a bad outcome. It turns out that this measure works equally well within the hospital, helping to predict admission to the ICU, readmission to the ICU, and readmission after discharge.
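The shape of that analysis is easy to sketch: bucket patients by heart rate and compare outcome rates per bucket, looking for risk rising at both extremes. The data below is fabricated purely to show the U shape, not the study’s actual values.

```python
# Toy version of the U-shaped curve analysis: outcome rates by heart-rate
# bucket, using fabricated (heart_rate, survived_one_year) pairs.

def bucket(rate, width=20):
    return (rate // width) * width

patients = [(45, False), (52, True), (62, True), (70, True), (75, True),
            (80, True), (95, True), (118, False), (125, False), (130, False)]

outcomes = {}
for rate, survived in patients:
    bad, total = outcomes.get(bucket(rate), (0, 0))
    outcomes[bucket(rate)] = (bad + (not survived), total + 1)

for b in sorted(outcomes):
    bad, total = outcomes[b]
    print(f"{b}-{b + 19} bpm: {bad}/{total} poor outcomes")
```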

The PeraHealth team integrated their tool with the hospital’s EHR and started producing graphs for the clinicians in 2007. Now they can point to more than 25 peer-reviewed articles endorsing their approach, some studies comparing before-and-after outcomes, and others comparing different parts of the hospital with some using PeraHealth and others not using it. The service is now integrated with major EHR vendors.

PeraHealth achieved Rothman’s goal of producing a single meaningful score to rate patient risk. Each new piece of data that goes into the EHR triggers a real-time recalculation of the score and a new dot on a graph presented to the nurses. In order to save the nurses from signing into the EHR, PeraHealth put a dashboard on the nurse’s kiosk with all the patients’ graphs. Color-coding denotes which patients are sickest. PeraHealth also shows which patients to attend to first. In case no one looks at the screen, at some hospitals the system sends out text alerts to doctors about the patients of most concern.
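That event-driven flow might look like the following sketch. The scoring formula and color cutoffs here are invented (the actual Rothman Index is proprietary); the point is the recalculate-on-every-new-entry structure.

```python
# Recompute a single risk score whenever any new EHR entry arrives, then
# color-code it for the nurses' dashboard.

def compute_score(entry):
    # invented stand-in: penalize abnormal heart rate and failed assessments
    penalty = abs(entry["heart_rate"] - 75) + 10 * entry["failed_assessments"]
    return max(0, 100 - penalty)

def color_for(score):
    if score < 40:
        return "red"       # sickest: attend to first
    if score < 70:
        return "yellow"
    return "green"

def on_new_ehr_entry(patient, entry, scores):
    scores[patient] = compute_score(entry)   # real-time recalculation
    return patient, scores[patient], color_for(scores[patient])

scores = {}
print(on_new_ehr_entry("bed 12",
                       {"heart_rate": 120, "failed_assessments": 3},
                       scores))
```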

PeraHealth is now expanding. In an experiment, they did phone interviews with people in a senior residential facility, and identified many of those who were deteriorating. So the basic techniques may be widely applicable to data-driven clinical decision support. But without analytics, one never knows which data is most useful.

Wellpepper and SimplifiMed Meet the Patients Where They Are Through Modern Interaction Techniques

Posted on August 9, 2017 | Written By

Andy Oram

Over the past few weeks I found two companies seeking out natural and streamlined ways to connect patients with their doctors. Many of us have started using web portals for messaging–a stodgy communication method that involves logins and lots of clicking, often just for an outcome such as the message “Test submitted. No further information available.” Web portals are better than unnecessary office visits or days of playing phone tag, and so are the various secure messaging apps (incompatible with one another, unfortunately) found in the online app stores. But Wellpepper and SimplifiMed are trying to bring us a bit further into the twenty-first century, through voice interfaces and natural language processing.

Wellpepper’s Sugarpod

Wellpepper recently ascended to finalist status in the Alexa Diabetes Challenge, which encourages research into the use of Amazon.com’s popular voice-activated device, Alexa, to improve the lives of people with Type 2 Diabetes. For this challenge, Wellpepper enhanced its existing service to deliver messages over Amazon Echo and interview patients. Wellpepper’s entry in the competition is an integrated care plan called Sugarpod.

The Wellpepper platform is organized around a care plan, and covers the entire cycle of treatment, such as delivering information to patients, managing their medications and food diaries, recording information from patients in the health care provider’s EHR, helping them prepare for surgery, and more. Messages adapt to the patient’s condition, attempting to present the right tone for adherent versus non-adherent patients. The data collected can be used for analytics benefitting both the provider and the patient–valuable alerts, for instance.

It must be emphasized at the outset that Wellpepper’s current support for Alexa is just a proof of concept. It cannot be rolled out to the public until Alexa itself is HIPAA-compliant.

I interviewed Anne Weiler, founder and CEO of Wellpepper. She explained that using Alexa would be helpful for people who have mobility problems or difficulties using their hands. The prototype proved quite popular, and people seem willing to open up to the machine. Alexa has some modest affective computing features; for instance, if the patient reports feeling pain, the device may respond with “Ouch!”

Wellpepper is clinically validated. A study of patients with Parkinson’s found that those using Wellpepper showed a 9% improvement in mobility, whereas those without it showed a 12% decline. Wellpepper patients adhered to treatment plans 81% of the time.

I’ll end this section by mentioning that integration with EHRs offers limited information of value to Wellpepper. Most EHRs don’t yet accept patient data, for instance. And how can you tell whether a patient was admitted to a hospital? It should be in the EHR, but Sugarpod has found the information to be unavailable. It’s especially hidden if the patient is admitted to a different health care provider; interoperability is a myth. Weiler said that Sugarpod doesn’t depend on the EHR for much information, using a much more reliable source of information instead: it asks the patient!

SimplifiMed

SimplifiMed is a chatbot service that helps clinics automate routine tasks such as appointments, refills, and other aspects of treatment. CEO Chinmay A. Singh emphasized to me that it is not an app, but a natural language processing tool that operates over standard SMS messaging. They enable a doctor’s landline phone to communicate via text messages and route patients’ messages to a chatbot capable of understanding natural language and partial sentences. The bot interacts with the patients to understand their needs, and helps them accomplish the task quickly. The result is round-the-clock access to the service with no waiting on the phone, a huge convenience to busy patients.
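SimplifiMed built its own NLP engine (more on why below), so the sketch that follows is only a crude stand-in: a keyword matcher showing the general shape of intent routing over inbound SMS text. All names are invented.

```python
# Route an inbound SMS (possibly a partial sentence) to a clinic task.

INTENTS = {
    "appointment": {"appointment", "appt", "schedule", "see the doctor"},
    "refill":      {"refill", "prescription", "meds", "medication"},
    "billing":     {"bill", "insurance", "copay", "payment"},
}

def route_sms(message):
    """Match a patient's text to a task, else hand off to a human."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "handoff_to_staff"

print(route_sms("need a refill on my meds"))        # -> refill
print(route_sms("can i schedule for tuesday?"))     # -> appointment
```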

SimplifiMed also collects insurance information when the patient signs up, and the patient can use the interface to change the information. Eventually, they expect the service to analyze a patient’s symptoms in light of data from the EHR and help the patient make the decision about whether to come in to the doctor.

SMS is not secure, but HIPAA is not violated because the patient can choose what to send to the doctor, and the chatbot’s responses contain no personally identifiable information. Between the doctor and the SimplifiMed service, data is sent in encrypted form. Singh said that the company built its own natural language processing engine because it didn’t want to share sensitive patient data with an outside service.

Due to the complexity of care, insurance requirements, and regulations, a doctor today needs support from multiple staff members: front desk, MA, biller, etc. MACRA and value-based care will increase the burden on staff without providing the income to hire more. Automating routine activities adds value to clinics without breaking the bank.

Earlier this year I wrote about another company, HealthTap, that had added Alexa integration. This trend toward natural voice interfaces, which the Alexa Diabetes Challenge finalists are also pursuing, along with the natural language processing that they and SimplifiMed are implementing, could put health care on track to a new era of meeting patients where they are now. The potential improvements to care are considerable, because patients are more likely to share information, take educational interventions seriously, and become active participants in their own treatment.