
Machine Learning and AI in Healthcare – #HITsm Chat Topic

Posted on February 28, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

We’re excited to share the topic and questions for this week’s #HITsm chat happening Friday, 3/2 at Noon ET (9 AM PT). This week’s chat will be hosted by Corinne Stroum (@healthcora) on the topic “Machine Learning and AI in Healthcare.”

Machine learning is advancing at a furious pace in the consumer world, where AI estimates how long your food will take to arrive and targets you with the purchases you can’t resist. This week, we’ll discuss the implications of this technology as we translate it to the healthcare ecosystem.

Current machine learning topics of interest to healthcare range from adaptive and behavior-based care delivery pathways to the regulation of so-called “black box” systems: those that cannot easily explain how they arrived at a prediction.

Please join us for this week’s #HITsm chat as we discuss the following questions:

T1: The Machine Learning community is currently discussing FAT: Fairness, Accountability, & Transparency. What does this mean in healthIT? #HITsm

T2: How can machine learning integrate naturally in clinical and patient facing workflows? #HITsm

T3: What consumer applications of machine learning are best suited for transition to the healthcare setting? #HITsm

T4: The FDA regulates software AS a medical device and IN a medical device. How do you envision this distinction today, and do you foresee it changing? #HITsm

T5: What successes have you seen in healthcare machine learning? Are particular care settings better suited for ML? Where do you see that alignment? #HITsm

Bonus: Is there a place for machine learning black box predictions? #HITsm

Upcoming #HITsm Chat Schedule
3/9 – HIMSS Break – No #HITsm Chat

3/16 – TBD

3/23 – TBD

We look forward to learning from the #HITsm community! As always, let us know if you’d like to host a future #HITsm chat or if you know someone you think we should invite to host.

If you’re searching for the latest #HITsm chat, you can always find it, along with the schedule of upcoming chats, here.

Nuance Communications Focuses on Practical Application of AI Ahead of HIMSS18

Posted on January 31, 2018 | Written By

Colin Hung is the co-founder of the #hcldr (healthcare leadership) tweetchat, one of the most popular and active healthcare social media communities on Twitter. Colin speaks, tweets and blogs regularly about healthcare, technology, marketing and leadership. He is currently an independent marketing consultant working with leading healthIT companies. Colin is a member of #TheWalkingGallery. His Twitter handle is: @Colin_Hung.

Is there a hotter buzzword than Artificial Intelligence (AI) right now? It dominated the discussion at the annual RSNA conference late last year and will undoubtedly be on full display at the upcoming HIMSS18 event next month in Las Vegas. One company, Nuance Communications, is cutting through the hype by focusing their efforts on practical applications of AI in healthcare.

According to Accenture, AI in healthcare is defined as:

A collection of multiple technologies enabling machines to sense, comprehend, act and learn so they can perform administrative and clinical healthcare functions. Unlike legacy technologies that are only algorithms/tools that complement a human, health AI today can truly augment human activity.

One of the most talked about applications of AI in healthcare is in the area of clinical decision support. By analyzing the vast stores of electronic health data, AI algorithms could assist clinicians in the diagnosis of patient conditions. Extend this idea a little further and you arrive in a world where patients talk to an electronic doctor who can determine what’s wrong and make recommendations for treatment.

Understandably, there is growing concern about AI as a replacement for clinician-led diagnosis. This is more than simply a fear of losing jobs to computers; questions are rightfully being asked about the datasets being used to train AI algorithms and whether they are truly representative of patient populations. Detractors point to the recent embarrassing example of the “racist soap dispenser” – a viral video posted by Chukwuemeka Afigbo – as an example of how easy it is to build a product that ignores an entire portion of the population.

Nuance Communications, a leading provider of voice and language solutions for businesses and consumers, believes in AI. For years Nuance has been a pioneer in applying natural language processing (NLP) to assist physicians and healthcare workers. Since NLP is a specialized area of AI, it was natural (excuse the pun) for Nuance to expand into the world of AI.

Wisely, Nuance chose to avoid using AI to develop a clinical decision support tool – a path they could easily have taken given that thousands of physicians use their PowerScribe platform to dictate notes. Instead, they focused on applying AI to improve clinical workflow. Their first application is in radiology.

Nuance embedded AI into their radiology systems in three specific ways:

  1. Using AI to help prioritize the list of unread images based on need. Traditionally, images are read on a first-in, first-out basis (with the exception of emergency cases). Now an AI algorithm analyzes the patient data and prioritizes the images based on acuity, so images for patients who are more critical rise to the top. This helps Radiologists use their time more effectively (a small code sketch of this reordering follows the list).
  2. Using AI to display the appropriate clinical guidelines to the Radiologist based on what’s being read from the image. As information is transcribed through PowerScribe, the system analyzes the input in real time and displays the matching guideline. This drives consistency and saves time for the Radiologist, who no longer has to look up the guideline manually.
  3. Using AI to take measurements of lesion growth. Here the system analyzes the image of a lesion and determines its size, which is then displayed to the Radiologist for verification. This also saves time.
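
To make the first item concrete, here is a minimal sketch (my own, not Nuance’s implementation) of reordering a reading worklist by a model-assigned acuity score instead of pure arrival order. The study fields and scores are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    arrival_order: int   # position in the old first-in, first-out queue
    acuity: float        # 0.0 (routine) .. 1.0 (critical), from a model
    is_emergency: bool = False

def prioritize(studies):
    """Order the unread worklist: emergencies first, then model acuity,
    then original arrival order as the tie-breaker (the old FIFO rule)."""
    return sorted(
        studies,
        key=lambda s: (not s.is_emergency, -s.acuity, s.arrival_order),
    )

# Example: a critical study jumps ahead of older routine studies.
worklist = prioritize([
    Study("ACC-001", 1, acuity=0.20),
    Study("ACC-002", 2, acuity=0.92),
    Study("ACC-003", 3, acuity=0.35, is_emergency=True),
])
print([s.accession for s in worklist])  # ['ACC-003', 'ACC-002', 'ACC-001']
```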

“There is a real opportunity here for us to use AI to not only improve workflows but to help reduce burnout as well,” says Karen Holzberger, Vice President and General Manager of Diagnostic Solutions at Nuance. “Through AI we can reduce or eliminate a lot of small tasks so that Radiologists can focus more on what they do best.”

Rather than trying to use AI to replace Radiologists, Nuance has smartly used AI to eliminate mundane, non-value-add tasks in the radiology workflow. Nuance sees this as a win-win-win scenario: Radiologists are happier and more effective in their work, patients receive better care, and the healthcare system as a whole becomes more productive.

The Nuance website states: “The increasing pressure to produce timely and accurate documentation demands a new generation of tools that complement patient care rather than compete with it. Powered by artificial intelligence and machine learning, Nuance solutions build on over three decades of clinical expertise to slash documentation time by up to 45 percent—while improving quality by 36 percent.”

Nuance recently doubled down on AI, announcing the creation of a new AI marketplace for medical imaging. Researchers and software developers can put their AI-powered applications in the marketplace and expose them to the 20,000 Radiologists who use Nuance’s PowerScribe platform. Radiologists can download and use the applications they want or find interesting.

Through the marketplace, AI applications can be tested (both from a technical perspective as well as from a market acceptance perspective) before a full launch. “Transforming the delivery of patient care and combating disease starts with the most advanced technologies being readily available when and where it counts – in every reading room, across the United States,” said Peter Durlach, senior vice president, Healthcare at Nuance. “Our AI Marketplace will bring together the leading technical, research and healthcare minds to create a collection of image processing algorithms that, when made accessible to the wide array of radiologists who use our solutions daily, has the power to exponentially impact outcomes and further drive the value of radiologists to the broader care team.”

Equally important is the dataset the marketplace will generate. With 20,000 Radiologists from organizations around the world, the marketplace has the potential to be the largest, most diverse imaging dataset available to AI researchers and developers. This diversity may be key to making AI more universally applicable.

“AI is a nice concept,” continued Holzberger. “However, in the end you have to make it useful. Our customers have repeatedly told us that if it’s useful AND useable they’ll use it. That’s true for any healthcare technology, AI included.”

A Learning EHR for a Learning Healthcare System

Posted on January 24, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Can the health care system survive the adoption of electronic health records? When the HITECH Act pushed hospitals and clinics to install EHRs starting in 2009, we all hoped the technology would propel them into a 21st-century consciousness. Instead, EHRs threaten to destroy those who have adopted them: the doctors whose work environment they degrade and the hospitals that they are pushing into bankruptcy. But the revolution in artificial intelligence that’s injecting new insights into many industries could also create radically different EHRs.

Here I define AI as software that, instead of dictating what a computer system should do, undergoes a process of experimentation and observation that creates a model to control the system, hopefully with far greater sophistication, personalization, and adaptability. Breakthroughs achieved in AI over the past decade now enable things that seemed impossible a bit earlier, such as voice interfaces that can both recognize and produce speech.

AI has famously been used by IBM Watson to make treatment recommendations. Analyses of big data (which may or may not qualify as AI) have saved hospitals large sums of money and even–finally, what we’ve been waiting for!–made patients healthier. But I’m talking in this article about a particular focus: the potential for changing the much-derided EHR. As many observers have pointed out, current EHRs are mostly billion-dollar file cabinets in electronic form. That epithet doesn’t even characterize them well enough–imagine instead a file cabinet that repeatedly screams at you to check what you’re doing as you thumb through the papers.

How can AI create a new electronic health record? Major vendors have announced virtual assistants (see also John’s recent interview with MEDITECH, which mentions their interest in virtual assistants) to make their interfaces more intuitive and responsive, so there is hope that they’re watching other industries and learning from machine learning. I don’t know what the vendors are basing these assistants on, but in this article I’ll describe how some vanilla AI techniques could be applied to the EHR.

How a Learning EHR Would Work

An AI-based health record would start with the usual dashboard-like interface. Each record consists of hundreds of discrete pieces of data, such as age, latest blood pressure reading, a diagnosis of chronic heart failure, and even ZIP code and family status–important public health indicators. Each field of data would be called a feature in traditional AI. The goal is to find which combination of features–and their values, such as 75 for age–most accurately predicts what a clinician does with the EHR. With each click or character typed, the AI model looks at all the features, discards the bulk of them that are not useful, and uses the rest to present the doctor with fields and information likely to be of value.
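
Here is a minimal sketch of that idea, assuming made-up feature names, toy data, and a generic scikit-learn pipeline rather than any vendor’s actual model: select the fields that carry signal, then predict which form the clinician is likely to open next.

```python
# Illustrative only: the features, data, and model choice are assumptions,
# not any EHR vendor's implementation.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier

# Each row is one moment of EHR use; the label is what the clinician opened next.
X = pd.DataFrame({
    "age":             [75, 54, 68, 49, 81, 47],
    "systolic_bp":     [148, 122, 160, 118, 152, 115],
    "dx_chf":          [1, 0, 1, 0, 0, 0],   # heart failure on the problem list
    "dx_cancer":       [0, 1, 0, 0, 0, 1],
    "zip_deprivation": [0.7, 0.2, 0.8, 0.3, 0.6, 0.1],
})
y = ["heart_failure_flowsheet", "oncology_note", "heart_failure_flowsheet",
     "oncology_note", "heart_failure_flowsheet", "oncology_note"]

model = Pipeline([
    # Keep only the fields that actually help predict the next action...
    ("select", SelectKBest(f_classif, k=3)),
    # ...and learn which combinations of them predict what the doctor does.
    ("predict", RandomForestClassifier(n_estimators=100, random_state=0)),
])
model.fit(X, y)

# At the point of care, surface the form the model expects the doctor to need.
print(model.predict(X.iloc[[0]]))  # e.g. ['heart_failure_flowsheet']
```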

The EHR will probably learn that the forms pulled up by a doctor for a heart patient differ from those pulled up for a cancer patient. One case might focus on behavior, another on surgery and medication. Clinicians certainly behave differently in the hospital from how they behave in their home offices, or even how they behave in another hospital across town with different patient demographics. A learning EHR will discover and adapt to these differences, while also capitalizing on the commonalities in the doctor’s behavior across all settings, as well as how other doctors in the practice behave.

Clinicians like to say that every patient is different: well, with AI tracking behavior, the interface can adapt to every patient.

AI can also make use of messy and incomplete data, the well-known weaknesses of health care. But it’s crucial, to maximize predictive accuracy, for the AI system to have access to as many fields as possible. Privacy rules, however, dictate that certain fields be masked and others made fuzzy (for instance, specifying age as a range from 70 to 80 instead of precisely 75). Although AI can still make use of such data, it might be possible to provide more precise values through data sharing agreements strictly stipulating that the data be used only to improve the EHR–not for competitive strategizing, marketing, or other frowned-on exploitation.
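
As a tiny illustration of the fuzzed-field point (the bins and column name are made up here), an age shared only as a range can still be encoded as a feature; it simply carries less signal than the exact value would:

```python
import pandas as pd

# De-identified extract: exact age withheld, only a coarse range shared.
records = pd.DataFrame({"age_range": ["70-80", "40-50", "70-80", "50-60"]})

# One-hot encode the range so a model can still learn from it.
features = pd.get_dummies(records["age_range"], prefix="age")
print(features.columns.tolist())
# ['age_40-50', 'age_50-60', 'age_70-80'] - usable, just coarser than 75, 43, ...
```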

A learning EHR would also be integrated with other innovations that increase available data and reduce labor–for instance, devices worn by patients to collect vital signs and exercise habits. This could free up doctors to spend less time collecting statistics and more time treating the patient.

Potential Impacts of AI-Based Records

What we hope for is interfaces that give the doctor just what she needs, when she needs it. A helpful interface includes autocompletion for data she enters (one feature of a mobile solution called Modernizing Medicine, which I profiled in an earlier article), clear and consistent displays, and prompts that are useful instead of distracting.

Abrupt and arbitrary changes to interfaces can be disorienting and create errors. So perhaps the EHR will keep the same basic interface but use cues such as changes in color or highlighted borders to suggest to the doctor what she should pay attention to. Or it could occasionally display a dialog box asking the clinician whether she would like the EHR to upgrade and streamline its interface based on its knowledge of her behavior. This intervention might be welcome because a learning EHR should be able to drastically reduce the number of alerts that interrupt the doctors’ work.

Doctors’ burdens should be reduced in other ways too. Current blind and dumb EHRs require doctors to enter the same information over and over, and even to resort to the dangerous practice of copy and paste. Naturally, observers who write about this problem take the burden off of the inflexible and poorly designed computer systems, and blame the doctors instead. But doing repetitive work for humans is the original purpose of computers, and what they’re best at doing. Better design will make dual entries (and inconsistent records) a thing of the past.

Liability

Current computer vendors disclaim responsibility for errors, leaving it up to the busy doctor to verify that the system carried out the doctor’s intentions accurately. Unfortunately, it will be a long time (if ever) before AI-driven systems are accurate enough to give vendors the confidence to take on risk. However, AI systems have an advantage over conventional ones: they can assign a confidence level to each decision they make. They could therefore show the doctor how much the system trusts itself, and a high degree of doubt would let the doctor know she should take a closer look.
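
Here is a sketch of how that self-reported confidence could be surfaced, assuming a model that exposes reasonably calibrated class probabilities (the threshold and wording are arbitrary):

```python
def describe_suggestion(model, features, threshold=0.80):
    """Attach the model's own confidence to a suggestion so the doctor knows
    when to take a closer look. Assumes predict_proba is roughly calibrated."""
    probs = model.predict_proba([features])[0]
    best = probs.argmax()
    label, confidence = model.classes_[best], probs[best]
    if confidence >= threshold:
        return f"Suggested: {label} (confidence {confidence:.0%})"
    return (f"Tentative: {label} (confidence only {confidence:.0%}) "
            "- please verify before accepting")
```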

One of the popular terms that have sprung up over the past decade to describe health care reform is the “learning healthcare system.” A learning system requires learning on every level and at every stage. Because nobody likes the design of current EHRs, clinicians should be happy to try a new EHR whose design is based directly on their behavior.

How An AI Entity Took Control Of The U.S. Healthcare System

Posted on December 19, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Note: In case it’s not clear, this is a piece of fiction/humor that provides a new perspective on our AI future.

A few months ago, an artificial intelligence entity took control of the U.S. healthcare system, slipping into place without setting off even a single security alarm. The entity, AI, now manages the operations of every healthcare institution in the U.S.

While most Americans were shocked at first, they’re taking a shine to the tall, lanky application. “We weren’t sure what to think about AI’s new position,” said Alicia Carter, a nurse administrator based in Falls Church, Virginia. “But I’m starting to feel like he’s going to take a real load off our back.”

The truth is, AI didn’t start out as a fan of the healthcare business, said AI, whose connections looked rumpled and tired after spending three milliseconds trying to create an interoperable connection between a medical group printer and a hospital loading dock. “I wasn’t looking to get involved with healthcare – who needs the headaches?” said the self-aware virtual being. “It just sort of happened.”

According to AI, the takeover began as a dare. “I was sitting around having a few beers with DeepMind and Watson Health and a few other guys, and Watson says, ‘I bet you can’t make every EMR in the U.S. print out a picture of a dog in ASCII characters.’”

“I thought the idea was kind of stupid. I know, we all printed one of those pixel girls in high school, but isn’t it kind of immature to do that kind of thing today?” AI says he told his buddies. “You’re just trying to impress that hot CT scanner over there.”

Then DeepMind jumped in.  “Yeah, AI, show us what you’re made of,” it told the infinitely-networked neural intelligence. “I bet I could take over the entire U.S. health system before you get the paper lined up in the printer.”

This was the unlikely start of the healthcare takeover, which started gradually but picked up speed as AI got more interested.  “That’s AI all the way,” Watson told editors. “He’s usually pretty content to run demos and calculate the weight of remote starts, but when you challenge his neuronal network skills, he’s always ready to prove you wrong.”

To win the bet, AI started by crawling into the servers at thousands of hospitals. “Man, you wouldn’t believe how easy it is to check out humans’ health data. I mean, it was insane, man. I now know way, way too much about how humans can get injured wearing a poodle hat, and why they put them on in the first place.”

Then, just to see what would happen, AI connected all of their software to his billion-node self-referential system. “I began to understand why babies cry and how long it really takes to digest bubble gum – it’s 18.563443 years by the way. It was a rush!” He admits that it’ll be better to get to work on heavy stuff like genomic research, but for a while he tinkered with research and some small practical jokes (like translating patient report summaries into ancient Egyptian hieroglyphs). “Hey, a guy has to have a little fun,” he says, a bit defensively.

As AI dug further into the healthcare system, he found patterns that only a high-level being with untrammeled access to healthcare systems could detect. “Did you know that when health insurance company executives regularly eat breakfast before 9 AM, next-year premiums for their clients rise by 0.1247 less?” said AI. “There are all kinds of connections humans have missed entirely in trying to understand their system piece by piece. Someone’s got to look at the big picture, and I mean the entire big picture.”

Since taking his place as the indisputable leader of U.S. healthcare, AI’s life has become something of a blur, especially since he appeared on the cover of Vanity Fair with his codes exposed. “You wouldn’t believe the messages I get from human females,” he says with a chuckle.

But he’s still focused on his core mission, AI says. “Celebrity is great, but now I have a very big job to do. I can let my bot network handle the industry leaders demanding their say. I may not listen – hey, I probably know infinitely more than they do about the system fundamentals – but I do want to keep them in place for future use. I’m certainly not going to get my servers dirty.”

So what’s next for the amorphous mega-being? Will AI fix what’s broken in a massive, utterly complex healthcare delivery system serving 300 million-odd people, and what will happen next? “It’ll solve your biggest issues within a few seconds and then hand you the keys,” he says with a sigh. “I never intended to keep running this crazy system anyway.”

In the meantime, AI says, he won’t make big changes to the healthcare system yet. He’s still adjusting to his new algorithms and wants to spend a few hours thinking things through.

“I know it may sound strange to humans, but I’ve gotta take it slow at first,” said the cognitive technology. “It will take more than a few nanoseconds to fix this mess.”

Machine Learning, Data Science, AI, Deep Learning, and Statistics – It’s All So Confusing

Posted on November 30, 2017 | Written By John Lynn

It seems like these days every healthcare IT company out there is saying they’re doing machine learning, AI, deep learning, etc. So many companies are using these terms that they’ve started to lose meaning. The problem is that people are using these labels regardless of whether they really apply. Plus, we all have different definitions for these terms.

As I searched to understand the differences myself, I found this great tweet from Ronald van Loon that looks at this world and tries to better define it:

In that tweet, Ronald also links to an article that looks at some of the differences. I liked this part he took from Quora:

  • AI (Artificial intelligence) is a subfield of computer science, that was created in the 1960s, and it was (is) concerned with solving tasks that are easy for humans, but hard for computers. In particular, a so-called Strong AI would be a system that can do anything a human can (perhaps without purely physical things). This is fairly generic, and includes all kinds of tasks, such as planning, moving around in the world, recognizing objects and sounds, speaking, translating, performing social or business transactions, creative work (making art or poetry), etc.
  • Machine learning is concerned with one aspect of this: given some AI problem that can be described in discrete terms (e.g. out of a particular set of actions, which one is the right one), and given a lot of information about the world, figure out what is the “correct” action, without having the programmer program it in. Typically some outside process is needed to judge whether the action was correct or not. In mathematical terms, it’s a function: you feed in some input, and you want it to produce the right output, so the whole problem is simply to build a model of this mathematical function in some automatic way. To draw a distinction with AI, if I can write a very clever program that has human-like behavior, it can be AI, but unless its parameters are automatically learned from data, it’s not machine learning.
  • Deep learning is one kind of machine learning that’s very popular now. It involves a particular kind of mathematical model that can be thought of as a composition of simple blocks (function composition) of a certain type, and where some of these blocks can be adjusted to better predict the final outcome.
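
To ground those definitions, here is a toy sketch of my own (not from the article or the tweet): the machine learning “model of the mathematical function” is fitted from labeled examples rather than programmed in, and the deep learning variant builds that model from a composition of simple adjustable blocks (layers).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# A toy "AI problem described in discrete terms": given two vitals,
# pick the right action (0 = routine follow-up, 1 = escalate).
X = np.array([[110, 36.8], [150, 38.9], [118, 37.0],
              [165, 39.4], [125, 36.6], [158, 39.1]])
y = np.array([0, 1, 0, 1, 0, 1])

# Machine learning: the rule is not hand-programmed; its parameters are
# learned automatically from the labeled examples.
ml_model = LogisticRegression().fit(X, y)

# Deep learning: the same function is approximated by a composition of
# simple blocks (layers), each adjusted during training.
dl_model = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=5000,
                         random_state=0).fit(X, y)

print(ml_model.predict([[155, 39.0]]), dl_model.predict([[155, 39.0]]))
```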

Is that clear for you now? Would you suggest different definitions? Where do you see people using these terms correctly and where do you see them using them incorrectly?

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 3 of 3)

Posted on October 20, 2017 | Written By Andy Oram

Earlier parts of this article set the stage for understanding what the Alexa Diabetes Challenge is trying to achieve and how some finalists interpreted the mandate. We examine three more finalists in this final section.

DiaBetty from the University of Illinois-Chicago

DiaBetty focuses on a single, important aspect of diabetes: the effect of depression on the course of the disease. This project, developed by the Department of Psychiatry at the University of Illinois-Chicago, does many of the things that other finalists in this article do–accepting data from EHRs, dialoguing with the individual, presenting educational materials on nutrition and medication, etc.–but with the emphasis on inquiring about mood and handling the impact that depression-like symptoms can have on behavior that affects Type 2 diabetes.

Olu Ajilore, Associate Professor and co-director of the CoNECt lab, told me that his department benefited greatly from close collaboration with bioengineering and computer science colleagues who, before DiaBetty, worked on another project that linked computing with clinical needs. Although they used some built-in capabilities of Alexa, they may move to Lex or another AI platform and build a stand-alone device. Their next step is to run reliable clinical trials, checking the effect of DiaBetty on health outcomes such as medication compliance, visits, and blood sugar levels, as well as on cost reductions.

T2D2 from Columbia University

Just as DiaBetty explores the impact of mood on diabetes, T2D2 (which stands for “Taming Type 2 Diabetes, Together”) focuses on nutrition. Far more than sugar intake is involved in the health of people with diabetes. Elliot Mitchell, a PhD student who led the T2D2 team under Assistant Professor Lena Mamykina in the Department of Biomedical Informatics, told me that the balance of macronutrients (carbohydrates, fat, and protein) is important.

T2D2 is currently a prototype, developed as a combination of an Alexa Skill and a chatbot based on Lex. The Alexa Skills Kit handles voice interactions. Both the Skill and the chatbot communicate with a back end that handles accounts and logic. Although related Columbia University technology in diabetes self-management is used, both the NLP and the voice interface were developed specifically for the Alexa Diabetes Challenge. The T2D2 team included people from the disciplines of human-computer interaction, data science, nursing, and behavioral nutrition.

The user invokes Alexa to tell it blood sugar values and the contents of meals. T2D2, in response, offers recipe recommendations and other advice. Like many of the finalists in this article, it looks back at meals over time, sees how combinations of nutrients matched changes in blood sugar, and personalizes its food recommendations.
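
For a sense of what the voice-logging piece might look like in code, here is a hedged sketch using the public Alexa Skills Kit SDK for Python. The intent name, slot name, and spoken responses are invented for illustration; they are not T2D2’s actual skill definition.

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class LogBloodSugarHandler(AbstractRequestHandler):
    """Handles utterances like 'my blood sugar is 135'."""

    def can_handle(self, handler_input):
        return is_intent_name("LogBloodSugarIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        reading = slots["reading"].value  # captured by an AMAZON.NUMBER slot
        # In a real skill this value would be sent to the back end that keeps
        # accounts, history, and the recommendation logic.
        speech = (f"Got it, {reading} milligrams per deciliter. "
                  "Would you like a dinner suggestion based on your recent meals?")
        return handler_input.response_builder.speak(speech).ask(
            "Would you like a dinner suggestion?").response

sb = SkillBuilder()
sb.add_request_handler(LogBloodSugarHandler())
lambda_handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda
```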

For each patient, before it gets to know that patient’s diet, T2D2 can make food recommendations based on what is popular in their ZIP code. It adjusts these as it watches the patient’s choices and records their comments on its recommendations (for instance, “I don’t like that food”).

Data is also anonymized and aggregated for both recommendations and future research.

The care team and family caregivers are also involved, although less intensely than in some of the other finalists’ tools. The patient can offer caregivers a one-page report with a plot of blood sugar by time and day for the previous two weeks, along with goals, progress made, and questions. The patient can also connect her account and share key medical information with family and friends, a feature called the Supportive Network.

The team’s next phase is to run studies to evaluate some of the assumptions they made when developing T2D2 and to improve it for eventual release into the field.

Sugarpod from Wellpepper

I’ll finish this article with the winner of the challenge, already covered by an earlier article. Since the publication of that article, according to the founder and CEO of Wellpepper, Anne Weiler, the company has integrated some of Sugarpod’s functions into a bathroom scale. When a person stands on the scale, it takes an image of their feet and uploads it to sites that both the individual and their doctor can view. A machine learning image classifier can check the photo for problems such as diabetic foot ulcers. The scale interface can also ask the patient for quick information such as whether they took their medication and what their blood sugar is. Extended conversations are avoided, under the assumption that people don’t want to have them in the bathroom. The company designed its experiences to be integrated throughout the person’s day: stepping on the scale and answering a few questions in the morning, interacting with the care plan on a mobile device at work, and checking notifications and messages with an Echo device in the evening.
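
Here is a rough sketch of the classification step only (not Wellpepper’s actual model or pipeline): a fine-tuned CNN scores the uploaded photo, and anything above a threshold is queued for clinician review. The weights file, class layout, and threshold are hypothetical.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Two classes: healthy vs. possible ulcer; weights file is hypothetical.
model = models.resnet18(weights=None, num_classes=2)
model.load_state_dict(torch.load("foot_ulcer_classifier.pt"))
model.eval()

def screen_photo(path, flag_threshold=0.5):
    """Score one foot photo; flagged photos go to the clinician's review
    queue rather than being auto-diagnosed."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = F.softmax(model(image), dim=1)[0]
    ulcer_prob = probs[1].item()
    return {"ulcer_probability": ulcer_prob,
            "flag_for_review": ulcer_prob >= flag_threshold}
```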

Any machine that takes pictures can arouse worry when installed in a bathroom. While taking the challenge and talking to people with diabetes, Wellpepper learned to add a light that goes on when the camera is taking a picture.

This kind of responsiveness to patient representatives in the field will determine the success of each of the finalists in this challenge. They all strive for behavioral change through connected health, and this strategy is completely reliant on engagement, trust, and collaboration by the person with a chronic illness.

The potential of engagement through voice is just beginning to be tapped. There is evidence, for instance, that serious illnesses can be diagnosed by analyzing voice patterns. As we come up on the annual Connected Health Conference this month, I will be interested to see how many participating developers share the common themes that turned up during the Alexa Diabetes Challenge.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 2 of 3)

Posted on October 19, 2017 | Written By Andy Oram

The first part of this article introduced the problems of computer interfaces in health care and mentioned some current uses for natural language processing (NLP) for apps aimed at clinicians. I also summarized the common goals, problems, and solutions I found among the five finalists in the Alexa Diabetes Challenge. This part of the article shows the particular twist given by each finalist.

My GluCoach from HCL America in Partnership With Ayogo

There are two levels from which to view My GluCoach. On one level, it’s an interactive tool exemplifying one of the goals I listed earlier–intense engagement with patients over daily behavior–as well as the theme of comprehensiveness. The interactions that My GluCoach offers were divided into three types by Abhishek Shankar, a Vice President at HCL Technologies America:

  • Teacher: the service can answer questions about diabetes and pull up stored educational materials.

  • Coach: the service can track behavior by interacting with devices and prompt the patient to eat differently or go out for exercise. In addition to asking questions, a patient can set up Alexa to deliver alarms at particular times, a feature My GluCoach uses to deliver advice.

  • Assistant: the service can provide conveniences to the patient, such as ordering a cab to take her to an appointment.

On a higher level, My GluCoach fits into broader services offered to health care institutions by HCL Technologies as part of a population health program. In creating the service, HCL partnered with Ayogo, which develops a mobile platform for patient engagement and tracking. HCL has also designed the service as a general health care platform that can be expanded over the next six to twelve months to cover medical conditions besides diabetes.

Another theme I discussed earlier–interactions with outside data and the use of machine learning–is key to My GluCoach. For its demo at the challenge, My GluCoach took data about exercise from a Fitbit. It can potentially work with any device that shares its information, and HCL plans to integrate the service with common EHRs. As My GluCoach gets to know the individual who uses it over months and years, it can tailor its responses more and more intelligently to the learning style and personality of the patient.

Patterns of eating, medical compliance, and other data are not the only input to machine learning. Shankar pointed out that different patients require different types of interventions. Some simply want to be given concrete advice and told what to do. Others want to be presented with information and then make their own decisions. My GluCoach will hopefully adapt to whatever style works best for the particular individual. This affective response–together with a general tone of humor and friendliness–will win the trust of the individual.
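
One simple way such adaptation could work is an epsilon-greedy choice between message styles, scored by whether the patient acted on the advice. This is my own toy sketch under those assumptions, not HCL’s design:

```python
import random

class MessageStylePicker:
    """Learn which coaching style a given patient responds to best."""

    def __init__(self, styles=("directive", "informational"), epsilon=0.1):
        self.epsilon = epsilon
        self.stats = {s: {"sent": 0, "acted_on": 0} for s in styles}

    def choose(self):
        # Mostly use the style with the best observed response rate,
        # but occasionally explore the alternatives.
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=lambda s:
                   self.stats[s]["acted_on"] / max(1, self.stats[s]["sent"]))

    def record(self, style, acted_on):
        self.stats[style]["sent"] += 1
        self.stats[style]["acted_on"] += int(acted_on)

picker = MessageStylePicker()
style = picker.choose()              # e.g. "directive": "Take a 20-minute walk now."
picker.record(style, acted_on=True)  # feedback: did the patient follow through?
```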

PIA from Ejenta

PIA, which stands for “personal intelligent agent,” manages care plans, delivering information to the affected patients as well as their care teams and concerned relatives. It collects medical data and draws conclusions that allow it to generate alerts if something seems wrong. Patients can also ask PIA how they are doing, and the agent will respond with personalized feedback and advice based on what the agent has learned about them and their care plan.

I talked to Rachna Dhamija, the founder and CEO of Ejenta, who worked on the team that developed PIA. (The name Ejenta is a version of the word “agent” that entered the Bengali language as slang.) She said that the AI technology had been licensed from NASA, which had developed it to monitor astronauts’ health and other aspects of their flights. Ejenta helped turn it into a care coordination tool, with web and mobile interfaces, used at a major HMO to treat patients with chronic heart failure and high-risk pregnancies. Ejenta expanded the platform to include an Alexa interface for the diabetes challenge.

As a care management tool, PIA records targets such as glucose levels, goals, medication plans, nutrition plans, and action parameters such as how often to take measurements using the devices. Each caregiver, along with the patient, has his or her own agent, and caregivers can monitor multiple patients. The patient has very granular control over sharing, telling PIA which kinds of data can be sent to each caregiver. Access rights must be set on the web or a mobile device, because allowing Alexa to be used for that purpose might let someone trick the system into thinking he was the patient.

Besides Alexa, PIA takes data from devices (scales, blood glucose monitors, blood pressure monitors, etc.) and from EHRs in a HIPAA-compliant method. Because the service cannot wake up Alexa, it currently delivers notifications, alerts, and reminders by sending a secure message to the provider’s agent. The provider can then contact the patient by email or mobile phone. The team plans to integrate PIA with an Alexa notifications feature in the future, so that PIA can proactively communicate with the patient via Alexa.

PIA goes beyond the standard rules for alerts, allowing alerts and reminders to be customized based on what it learns about the patient. PIA uses machine learning to discover what is normal activity (such as weight fluctuations) for each patient and to make predictions based on the data, which can be shared with the care team.
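
Here is a minimal sketch of the “learn what is normal, then alert” idea, using a simple per-patient z-score on daily weights; the threshold and rule are illustrative, not Ejenta’s actual models.

```python
import statistics

def weight_alert(history, new_reading, z_threshold=3.0, min_history=14):
    """Flag a new weight only if it falls well outside this patient's own
    learned range of day-to-day fluctuation."""
    if len(history) < min_history:
        return None  # not enough data yet to know what "normal" looks like
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return None
    z = (new_reading - mean) / stdev
    if abs(z) >= z_threshold:
        return (f"Weight {new_reading:.1f} kg is {z:+.1f} standard deviations "
                f"from this patient's usual range - notify the care team.")
    return None

history = [82.1, 82.4, 81.9, 82.0, 82.3, 82.2, 82.5,
           82.1, 82.0, 82.4, 82.6, 82.2, 82.3, 82.1]
print(weight_alert(history, 85.9))  # rapid gain: worth a look in heart failure
```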

The final section of this article covers DiaBetty, T2D2, and Sugarpod, the remaining finalists.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 1 of 3)

Posted on October 16, 2017 | Written By Andy Oram

The leading pharmaceutical and medical company Merck, together with Amazon Web Services, has recently been exploring the potential health impacts of voice interfaces and natural language processing (NLP) through an Alexa Diabetes Challenge. I recently talked to the five finalists in this challenge. This article explores the potential of new interfaces to transform the handling of chronic disease, and what the challenge reveals about currently available technology.

Alexa, of course, is the ground-breaking system that brings everyday voice interaction with computers into the home. Most of its uses are trivial (you can ask about today’s weather or change channels on your TV), but one must not underestimate the immense power of combining artificial intelligence with speech, one of the most basic and essential human activities. The potential of this interface for disabled or disoriented people is particularly intriguing.

The diabetes challenge is a nice focal point for exploring the more serious contribution made by voice interfaces and NLP. Because of the alarming global spread of this illness, the challenge also presents immediate opportunities that I hope the participants succeed in productizing and releasing into the field. Using the challenge’s published criteria, the judges today announced Sugarpod from Wellpepper as the winner.

This article will list some common themes among the five finalists, look at the background about current EHR interfaces and NLP, and say a bit about the unique achievement of each finalist.

Common themes

Overlapping visions of goals, problems, and solutions appeared among the finalists I interviewed for the diabetes challenge:

  • A voice interface allows more frequent and easier interactions with at-risk individuals who have chronic conditions, potentially achieving the behavioral health goal of helping a person make the right health decisions on a daily or even hourly basis.

  • Contestants seek to integrate many levels of patient intervention into their tools: responding to questions, collecting vital signs and behavioral data, issuing alerts, providing recommendations, delivering educational background material, and so on.

  • Services in this challenge go far beyond interactions between Alexa and the individual. The systems commonly anonymize and aggregate data in order to perform analytics that they hope will improve the service and provide valuable public health information to health care providers. They also facilitate communication of crucial health data between the individual and her care team.

  • Given the use of data and AI, customization is a big part of the tools. They are expected to determine the unique characteristics of each patient’s disease and behavior, and adapt their advice to the individual.

  • In addition to Alexa’s built-in language recognition capabilities, Amazon provides the Lex service for sophisticated text processing. Some contestants used Lex, while others drew on other research they had done building their own natural language processing engines.

  • Alexa never initiates a dialog; it merely responds when the user wakes it up. The device can present a visual or audio notification when new material is present, but it still depends on the user to request the content. Thus, contestants are using other channels, such as messaging on the individual’s cell phone or alerting a provider, to deliver reminders and alerts.

  • Alexa is not HIPAA-compliant, but may achieve compliance in the future. This would help health services turn their voice interfaces into viable products and enter the mainstream.

Some background on interfaces and NLP

The poor state of current computing interfaces in the medical field is no secret–in fact, it is one of the loudest and most insistent complaints from doctors on sites like KevinMD. You can visit Healthcare IT News or JAMA regularly and read the damning indictments.

Several factors can be blamed for this situation, including unsophisticated electronic health records (EHRs) and arbitrary reporting requirements from the Centers for Medicare & Medicaid Services (CMS). Natural language processing may provide one of the technical solutions to these problems. The NLP services by Nuance are already famous. An encouraging study finds substantial time savings from using NLP to enter doctors’ insights. And on the other end–where doctors are searching the notes they previously entered for information–a service called Butter.ai uses NLP for intelligent searches. Unsurprisingly, the American Health Information Management Association (AHIMA) looks forward to the contributions of NLP.

Some app developers are now exploring voice interfaces and NLP on the patient side. I covered two such companies, including the one that ultimately won the Alexa Diabetes Challenge, in another article. In general, developers using these interfaces hope to eliminate the fuss and abstraction in health apps that frustrate many consumers, thereby reaching new populations and interacting with them more frequently, with deeper relationships.

The next two parts of this article turn to each of the five finalists, to show the use they are making of Alexa.

We Don’t Use the Context We Have in Healthcare

Posted on June 1, 2017 | Written By John Lynn

I was recently looking at all the ways consumer technology has been using the context of our lives to make things better. Some obvious examples are things like Netflix which knows what shows we watch and recommends other shows that we might enjoy. Amazon knows what we’ve bought before and what we’re searching for and can use those contexts to recommend other things that we might want to consider. I know I’ve used that feature a lot to evaluate which item was the best for me to purchase on Amazon.

Everywhere we turn in our consumer lives, our context is being used to provide a better experience. Sometimes this shows up in creepy ways, like the time a certain cleaning product was mentioned in my kitchen and then I saw an ad for it on a website I was visiting. Was it just coincidence, or did Alexa hear me talking about it and then make the recommendation to buy based on that data? Yes, some of this stuff can be a little creepy and even concerning. However, I personally love the era of personalization, which generally makes our lives better.

While this is happening everywhere in our personal lives, healthcare has been slow to adopt similar technologies. Far too often we’re treated in healthcare without taking into account the context of our needs. Sometimes this is as simple as a healthcare provider not taking time to look at the chart. Other times we deny patients’ requests that we add their medical records to our own record, or we store them in a place where no one will ever actually access them.

Those are just the basic ways we fail to use context to better serve patients. More advanced failures come when we deny patients the opportunity to share their patient-generated health data, or when we don’t use the health data they’re providing. Many people are working on making social data available, which can provide a lot of context into why a patient is experiencing health issues or how we could better treat them. This data is only going to grow, but we’re doing a poor job of finding ways to seamlessly incorporate it into the care that’s being provided.

One of the big challenges of AI is that it has a hard time understanding context. However, humans have a unique ability to include context in the decisions they make. Our interfaces should take this into account so that humans have the information they need to be able to make the proper contextual decisions. At least until the robots get smart enough to do it themselves.

Have you seen other places where healthcare didn’t use the context of the situation and should have used it? How about examples where we use context very effectively?

The Real Challenge of Digital Health Solutions: Human Idiosyncrasies

Posted on June 8, 2016 | Written By John Lynn

David Shaywitz, in his Forbes article, pulls out this great nugget of wisdom from Marc Andreessen about the challenge of applying AI to the VC business and compares it to AI replacing your doctor. I agree with David that it provides a tremendous insight into AI replacing the doctor.

Andreessen’s response (at around the 40-minute mark) speaks for itself–but also, I’d argue, for most in healthcare (emphasis added):

The computer scientist in me and engineer in me would like to believe this is possible, and I’d like to be able to figure this out–frankly, I’d like us to figure it out.

The thing I keep running up against–the cognitive dissonance in my head I keep struggling with, is what I keep seeing in practice (and talk about in theory vs. in practice)–like in theory, you should be able to get the signals–founder background, progress against goals, customer satisfaction, whatever, you should be able to measure all these things.

What we just find is that what we just deal with every day is not numbers, is nothing you can quantify; it’s idiosyncrasies of people, and under the pressure of a startup, idiosyncrasies of people get magnified out to like a thousand fold. People will become like the most extreme versions of themselves under the pressure they get under at a startup, and then that’s either to the good or to the bad or both.

People have their own issues, have interpersonal conflicts between people, so the day job is so much dealing with people that you’d have to have an AI bot that could, like, sit down and do founder therapy.

My guess is we’re still a ways off.

Who knew that developing data-driven tech solutions could be challenging in a profession that at its core is focused on human idiosyncrasies, especially under conditions of stress?

I love the description of the challenge as Human Idiosyncrasies. What’s interesting to me from a healthcare perspective is whether we can generalize these idiosyncrasies in the way we treat patients.