
Alexa Can Truly Give Patients a Voice in Their Health Care (Part 3 of 3)

Posted on October 20, 2017 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Earlier parts of this article set the stage for understanding what the Alexa Diabetes Challenge is trying to achieve and how some finalists interpreted the mandate. We examine three more finalists in this final section.

DiaBetty from the University of Illinois-Chicago

DiaBetty focuses on a single, important aspect of diabetes: the effect of depression on the course of the disease. This project, developed by the Department of Psychiatry at the University of Illinois-Chicago, does many of the things that other finalists in this article do–accepting data from EHRs, conversing with the individual, presenting educational materials on nutrition and medication, etc.–but with an emphasis on asking about mood and addressing the impact that depression-like symptoms can have on behavior that affects Type 2 diabetes.

Olu Ajilore, Associate Professor and co-director of the CoNECt lab, told me that his department benefited greatly from close collaboration with bioengineering and computer science colleagues who, before DiaBetty, worked on another project linking computing with clinical needs. Although the team used some of Alexa’s built-in capabilities, it may move to Lex or another AI platform and build a stand-alone device. The next step is to conduct rigorous clinical trials, checking the effect of DiaBetty on health outcomes such as medication compliance, visits, and blood sugar levels, as well as on cost reductions.

T2D2 from Columbia University

Just as DiaBetty explores the impact of mood on diabetes, T2D2 (which stands for “Taming Type 2 Diabetes, Together”) focuses on nutrition. Far more than sugar intake is involved in the health of people with diabetes. Elliot Mitchell, a PhD student who led the T2D2 team under Assistant Professor Lena Mamykina in the Department of Biomedical Informatics, told me that the balance of macronutrients (carbohydrates, fat, and protein) is important.

T2D2 is currently a prototype, developed as a combination of an Alexa Skill and a chatbot based on Lex. The Alexa Skills Kit handles voice interactions. Both the Skill and the chatbot communicate with a back end that handles accounts and logic. Although the project draws on related Columbia University work in diabetes self-management, both the NLP and the voice interface were developed specifically for the Alexa Diabetes Challenge. The T2D2 team included people from the disciplines of human-computer interaction, data science, nursing, and behavioral nutrition.
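To make that architecture concrete, here is a minimal sketch of the kind of handler such a design implies: a Lambda function that parses the spoken intent and forwards the structured report to a shared back end. It is illustrative only; the back-end URL, payload shape, and advice field are hypothetical, not T2D2’s actual interface.

```python
# Minimal sketch of an Alexa Skill handler that forwards meal and glucose
# reports to a shared back end, in the spirit of the architecture described
# above. The back-end URL and payload shape are hypothetical.
import json
import urllib.request

BACKEND_URL = "https://example.org/t2d2/api/log"  # hypothetical endpoint

def lambda_handler(event, context):
    """Entry point for the Alexa Skills Kit request (AWS Lambda)."""
    request = event.get("request", {})
    if request.get("type") != "IntentRequest":
        return _speak("Welcome. You can tell me your blood sugar or what you ate.",
                      end=False)

    intent = request["intent"]
    slots = {name: slot.get("value") for name, slot in intent.get("slots", {}).items()}

    # Forward the structured report to the back end that holds accounts and logic.
    payload = json.dumps({"intent": intent["name"], "slots": slots}).encode("utf-8")
    req = urllib.request.Request(BACKEND_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        advice = json.load(resp).get("advice", "Thanks, I logged that.")

    return _speak(advice)

def _speak(text, end=True):
    """Wrap text in the standard Alexa response envelope."""
    return {"version": "1.0",
            "response": {"outputSpeech": {"type": "PlainText", "text": text},
                         "shouldEndSession": end}}
```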

The user invokes Alexa to tell it blood sugar values and the contents of meals. T2D2, in response, offers recipe recommendations and other advice. Like many of the finalists in this article, it looks back at meals over time, sees how combinations of nutrients matched changes in blood sugar, and personalizes its food recommendations.

For each patient, before it gets to know that patient’s diet, T2D2 can make food recommendations based on what is popular in their ZIP code. It can adjust these as it watches the patient’s choices and records the patient’s comments on its recommendations (for instance, “I don’t like that food”).
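A toy sketch of that cold-start-plus-feedback idea appears below. It is not T2D2’s algorithm; the data structures and weighting scheme are assumptions made for illustration.

```python
# Illustrative sketch (not T2D2's actual algorithm) of the cold-start idea
# described above: start from foods popular in the patient's ZIP code, then
# down-weight items the patient rejects.
from collections import defaultdict

class FoodRecommender:
    def __init__(self, popular_by_zip):
        # popular_by_zip: {"10027": ["rice and beans", "roast chicken", ...], ...}
        self.popular_by_zip = popular_by_zip
        self.scores = defaultdict(lambda: 1.0)  # per-food preference weights

    def recommend(self, zip_code, n=3):
        candidates = self.popular_by_zip.get(zip_code, [])
        ranked = sorted(candidates, key=lambda f: self.scores[f], reverse=True)
        return ranked[:n]

    def record_feedback(self, food, liked):
        # "I don't like that food" halves its weight; positive feedback boosts it.
        self.scores[food] *= 1.5 if liked else 0.5
```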

Data is also anonymized and aggregated for both recommendations and future research.

The care team and family caregivers are also involved, although less intensively than in some of the other finalists’ projects. The patient can offer caregivers a one-page report showing a plot of blood sugar by time and day for the previous two weeks, along with goals, progress made, and questions. The patient can also connect her account and share key medical information with family and friends, a feature called the Supportive Network.

The team’s next phase is to run studies to evaluate some of the assumptions made in developing T2D2, and to improve it for eventual release into the field.

Sugarpod from Wellpepper

I’ll finish this article with the winner of the challenge, already covered by an earlier article. Since that article’s publication, according to Wellpepper founder and CEO Anne Weiler, the company has integrated some of Sugarpod’s functions into a bathroom scale. When a person stands on the scale, it takes an image of their feet and uploads it to sites that both the individual and their doctor can view. A machine learning image classifier can check the photo for problems such as diabetic foot ulcers. The scale interface can also ask the patient for quick information, such as whether they took their medication and what their blood sugar is. Extended conversations are avoided, on the assumption that people don’t want to have them in the bathroom. The company designed its experiences to be integrated throughout the person’s day: stepping on the scale and answering a few questions in the morning, interacting with the care plan on a mobile device at work, and checking notifications and messages with an Echo device in the evening.
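As a rough illustration of the screening step, a trained image classifier could be applied to the scale’s photo along these lines. The model file, input size, and labels are hypothetical; Wellpepper’s actual pipeline has not been published.

```python
# Generic sketch of applying a trained image classifier to the scale's foot
# photo, in the spirit of Sugarpod's screening step. The model file, input
# size, and labels are hypothetical assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("foot_screen_model.h5")  # hypothetical model
LABELS = ["no finding", "possible ulcer"]

def screen_foot_image(path):
    img = tf.keras.preprocessing.image.load_img(path, target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img) / 255.0
    probs = model.predict(np.expand_dims(x, axis=0))[0]
    label = LABELS[int(np.argmax(probs))]
    # A flagged image would be routed to the patient's and doctor's view for review.
    return label, float(np.max(probs))
```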

Any machine that takes pictures can arouse worry when installed in a bathroom. After talking to people with diabetes during the challenge, Wellpepper added a light that turns on when the camera is taking a picture.

This kind of responsiveness to patient representatives in the field will determine the success of each of the finalists in this challenge. They all strive for behavioral change through connected health, and this strategy is completely reliant on engagement, trust, and collaboration by the person with a chronic illness.

The potential of engagement through voice is just beginning to be tapped. There is evidence, for instance, that serious illnesses can be diagnosed by analyzing voice patterns. As we come up on the annual Connected Health Conference this month, I will be interested to see how many participating developers share the common themes that turned up during the Alexa Diabetes Challenge.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 2 of 3)

Posted on October 19, 2017 | Written By Andy Oram

The first part of this article introduced the problems of computer interfaces in health care and mentioned some current uses for natural language processing (NLP) for apps aimed at clinicians. I also summarized the common goals, problems, and solutions I found among the five finalists in the Alexa Diabetes Challenge. This part of the article shows the particular twist given by each finalist.

My GluCoach from HCL America in Partnership With Ayogo

There are two levels from which to view My GluCoach. On one level, it’s an interactive tool exemplifying one of the goals I listed earlier–intense engagement with patients over daily behavior–as well as the theme of comprehensiveness. The interactions that My GluCoach offers were divided into three types by Abhishek Shankar, a Vice President at HCL Technologies America:

  • Teacher: the service can answer questions about diabetes and pull up stored educational materials.

  • Coach: the service can track behavior by interacting with devices and prompt the patient to eat differently or go out for exercise. In addition to asking questions, a patient can set up Alexa to deliver alarms at particular times, a feature My GluCoach uses to deliver advice.

  • Assistant: the service can provide conveniences to the patient, such as ordering a cab to take her to an appointment.

On a higher level, My GluCoach fits into broader services offered to health care institutions by HCL Technologies as part of a population health program. In creating the service, HCL partnered with Ayogo, which develops a mobile platform for patient engagement and tracking. HCL has also designed the service as a general health care platform that can be expanded over the next six to twelve months to cover medical conditions besides diabetes.

Two other themes I discussed earlier–interaction with outside data and the use of machine learning–are key to My GluCoach. For its demo at the challenge, My GluCoach took data about exercise from a Fitbit. It can potentially work with any device that shares information, and HCL plans to integrate the service with common EHRs. As My GluCoach gets to know the individual who uses it over months and years, it can tailor its responses more and more intelligently to the learning style and personality of the patient.
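For the Fitbit piece specifically, device data of this kind is typically pulled from the Fitbit Web API. The sketch below shows a plausible daily-activity fetch; the access token is a placeholder obtained through Fitbit’s OAuth flow, and this is not HCL’s actual integration code.

```python
# Minimal sketch of pulling a day's activity summary from the Fitbit Web API,
# the kind of device data My GluCoach consumed in its demo. OAuth handling is
# omitted; ACCESS_TOKEN is a placeholder.
import json
import urllib.request

ACCESS_TOKEN = "..."  # obtained via Fitbit's OAuth 2.0 flow

def fetch_daily_activity(date_str):
    """date_str in YYYY-MM-DD form; returns steps and calories for that day."""
    url = f"https://api.fitbit.com/1/user/-/activities/date/{date_str}.json"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        summary = json.load(resp)["summary"]
    return {"steps": summary["steps"], "calories_out": summary["caloriesOut"]}
```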

Patterns of eating, medication compliance, and other data are not the only input to machine learning. Shankar pointed out that different patients require different types of interventions. Some simply want to be given concrete advice and told what to do. Others want to be presented with information and then make their own decisions. My GluCoach will hopefully adapt to whatever style works best for the particular individual. This affective response–together with a general tone of humor and friendliness–is meant to win the trust of the individual.

PIA from Ejenta

PIA, which stands for “personal intelligent agent,” manages care plans, delivering information to the affected patients as well as their care teams and concerned relatives. It collects medical data and draws conclusions that allow it to generate alerts if something seems wrong. Patients can also ask PIA how they are doing, and the agent will respond with personalized feedback and advice based on what the agent has learned about them and their care plan.

I talked to Rachna Dhamija, founder and CEO of Ejenta, who worked on the team that developed PIA. (The name Ejenta is a version of the word “agent” that entered the Bengali language as slang.) She said that the AI technology had been licensed from NASA, which had developed it to monitor astronauts’ health and other aspects of flights. Ejenta helped turn it into a care coordination tool, with web and mobile interfaces, used at a major HMO to treat patients with chronic heart failure and high-risk pregnancies. Ejenta expanded the platform to include an Alexa interface for the diabetes challenge.

As a care management tool, PIA records targets such as glucose levels, goals, medication plans, nutrition plans, and action parameters such as how often to take measurements using the devices. Each caregiver, along with the patient, has his or her own agent, and caregivers can monitor multiple patients. The patient has very granular control over sharing, telling PIA which kinds of data can be sent to each caregiver. Access rights must be set on the web or a mobile device, because allowing Alexa to be used for that purpose might let someone trick the system into thinking he was the patient.

Besides Alexa, PIA takes data from devices (scales, blood glucose monitors, blood pressure monitors, etc.) and from EHRs in a HIPAA-compliant manner. Because the service cannot wake up Alexa, it currently delivers notifications, alerts, and reminders by sending a secure message to the provider’s agent. The provider can then contact the patient by email or mobile phone. The team plans to integrate PIA with an Alexa notifications feature in the future, so that PIA can proactively communicate with the patient via Alexa.

PIA goes beyond the standard rules for alerts, allowing alerts and reminders to be customized based on what it learns about the patient. PIA uses machine learning to discover what is normal activity (such as weight fluctuations) for each patient and to make predictions based on the data, which can be shared with the care team.
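As a simple illustration of that per-patient baseline idea, the sketch below flags a weight reading that deviates strongly from the patient’s own recent history. It is a stand-in for Ejenta’s machine learning, which is not public; the threshold and window are assumptions.

```python
# Illustrative sketch (not Ejenta's model) of learning what "normal" weight
# fluctuation looks like for one patient and flagging readings that fall
# outside it, as the alerting behavior above describes.
import statistics

def weight_alert(history, new_reading, z_threshold=2.5):
    """history: recent daily weights for this patient; returns True if the
    new reading deviates enough from the patient's own baseline to alert."""
    if len(history) < 7:
        return False  # not enough data to personalize yet
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 0.1  # avoid divide-by-zero
    z = abs(new_reading - mean) / stdev
    return z > z_threshold
```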

The final section of this article covers DiaBetty, T2D2, and Sugarpod, the remaining finalists.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 1 of 3)

Posted on October 16, 2017 | Written By Andy Oram

The leading pharmaceutical and medical company Merck, together with Amazon Web Services, has recently been exploring the potential health impacts of voice interfaces and natural language processing (NLP) through an Alexa Diabetes Challenge. I recently talked to the five finalists in this challenge. This article explores the potential of new interfaces to transform the handling of chronic disease, and what the challenge reveals about currently available technology.

Alexa, of course, is the ground-breaking system that brings everyday voice interaction with computers into the home. Most of its uses are trivial (you can ask about today’s weather or change channels on your TV), but one must not underestimate the immense power of combining artificial intelligence with speech, one of the most basic and essential human activities. The potential of this interface for disabled or disoriented people is particularly intriguing.

The diabetes challenge is a nice focal point for exploring the more serious contribution made by voice interfaces and NLP. Because of the alarming global spread of this illness, the challenge also presents immediate opportunities that I hope the participants succeed in productizing and releasing into the field. Using the challenge’s published criteria, the judges today announced Sugarpod from Wellpepper as the winner.

This article will list some common themes among the five finalists, look at the background about current EHR interfaces and NLP, and say a bit about the unique achievement of each finalist.

Common themes

Overlapping visions of goals, problems, and solutions appeared among the finalists I interviewed for the diabetes challenge:

  • A voice interface allows more frequent and easier interactions with at-risk individuals who have chronic conditions, potentially achieving the behavioral health goal of helping a person make the right health decisions on a daily or even hourly basis.

  • Contestants seek to integrate many levels of patient intervention into their tools: responding to questions, collecting vital signs and behavioral data, issuing alerts, providing recommendations, delivering educational background material, and so on.

  • Services in this challenge go far beyond interactions between Alexa and the individual. The systems commonly anonymize and aggregate data in order to perform analytics that they hope will improve the service and provide valuable public health information to health care providers. They also facilitate communication of crucial health data between the individual and her care team.

  • Given the use of data and AI, customization is a big part of the tools. They are expected to determine the unique characteristics of each patient’s disease and behavior, and adapt their advice to the individual.

  • In addition to Alexa’s built-in language recognition capabilities, Amazon provides the Lex service for sophisticated text processing. Some contestants used Lex, while others drew on other research they had done building their own natural language processing engines.

  • Alexa never initiates a dialog, merely responding when the user wakes it up. The device can present a visual or audio notification when new material is available, but it still depends on the user to request the content. Thus, contestants are using other channels to deliver reminders and alerts, such as messaging on the individual’s cell phone or alerting a provider.

  • Alexa is not HIPAA-compliant, but may achieve compliance in the future. This would help health services turn their voice interfaces into viable products and enter the mainstream.

Some background on interfaces and NLP

The poor state of current computing interfaces in the medical field is no secret–in fact, it is one of the loudest and most insistent complaints from doctors on sites like KevinMD. You can visit Healthcare IT News or JAMA regularly and read the damning indictments.

Several factors can be blamed for this situation, including unsophisticated electronic health records (EHRs) and arbitrary reporting requirements from the Centers for Medicare & Medicaid Services (CMS). Natural language processing may provide one of the technical solutions to these problems. The NLP services from Nuance are already famous. An encouraging study finds substantial time savings when NLP is used to enter doctors’ insights. And on the other end–where doctors are searching the notes they previously entered for information–a service called Butter.ai uses NLP for intelligent searches. Unsurprisingly, the American Health Information Management Association (AHIMA) looks forward to the contributions of NLP.

Some app developers are now exploring voice interfaces and NLP on the patient side. I covered two such companies, including the one that ultimately won the Alexa Diabetes Challenge, in another article. In general, developers using these interfaces hope to eliminate the fuss and abstraction in health apps that frustrate many consumers, thereby reaching new populations and interacting with them more frequently, with deeper relationships.

The next two parts of this article turn to each of the five finalists, to show the use they are making of Alexa.

Health Data Standardization Project Proposes “One Record Per Person” Model

Posted on October 13, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

When we sit around the ol’ HIT campfire and swap interoperability stories, many of us have little to do but gripe.

Is FHIR going to solve all of our interoperability problems? Definitely not right away, and who knows if it ever will? Can we get the big EMR vendors to share and share alike? They’ll try, but there’s always a catch. And so on: there’s always a major catch involved.

I don’t know if the following offers a better story than any of the others, but at least it’s a new one, or at least new to me. Folks, I’m talking about the Standard Health Record, an approach to health data sharing that doesn’t fall precisely into any of the other buckets I’m aware of.

SHR is based at The MITRE Corporation, which also hosts the virtual patient generator Synthea. Rather than paraphrase, let’s let the MITRE people behind SHR tell you what they’re trying to accomplish:

The Standard Health Record (SHR) provides a high quality, computable source of patient information by establishing a single target for health data standardization… Enabled through open source technology, the SHR is designed by, and for, its users to support communication across homes and healthcare systems.

Generalities aside, what is an SHR? According to the project website, the SHR specification will contain all information critical to patient identification, emergency care and primary care along with background on social determinants of health. In the future, the group expects the SHR to support genomics, microbiomics and precision medicine.

Before we dismiss this as another me-too project, it’s worth giving the collaborative’s rationale a look:

The fundamental problem is that today’s health IT systems contain semantically incompatible information. Because of the great variety of the data models of EMR/EHR systems, transferring information from one health IT system to another frequently results in the distortion or loss of information, blocking of critical details, or introduction of erroneous data. This is unacceptable in healthcare.

The approach of the Standard Health Record (SHR) is to standardize the health record and health data itself, rather than focusing on exchange standards.

As a less-technical person, I’m not qualified to say whether this can be done in a way that will be widely accepted, but the idea certainly seems intuitive.

In any event, no one is suggesting that the SHR will change the world overnight. The project seems to be at the beginning stages, with collaborators currently prototyping health record specifications leveraging existing medical record models. (The current SHR spec can be found here.)

Still, I’d love for this to work, because it is at least a fairly straightforward idea. Creating a single source of health data truth seems like it might work.

A Hospital CIO Perspective on Precision Medicine

Posted on July 31, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

#Paid content sponsored by Intel.

In this video interview, I talk with David Chou, Vice President, Chief Information and Digital Officer with Kansas City, Missouri-based Children’s Mercy Hospital. In addition to his work at Children’s Mercy, he helps healthcare organizations transform themselves into digital enterprises.

Chou previously served as a healthcare technology advisor with law firm Balch & Bingham and Chief Information Officer with the University of Mississippi Medical Center. He also worked with the Cleveland Clinic to build a flagship hospital in Abu Dhabi, as well as working in for-profit healthcare organizations in California.

Precision Medicine and Genomic Medicine are important topics for every hospital CIO to understand. In my interview with David Chou, he provides the hospital CIO perspective on these topics and offers insights into what a hospital organization should be doing to take part in and be prepared for precision medicine and genomic medicine.

Here are the questions I asked him, if you’d like to skip to a specific topic in the video or check out the full video interview embedded below:

What are you doing in your organization when it comes to precision medicine and genomic medicine?

Will Data Aggregation For Precision Medicine Compromise Patient Privacy?

Posted on April 10, 2017 | Written By Anne Zieger

Like anyone else who follows medical research, I’m fascinated by the progress of precision medicine initiatives. I often find myself explaining to relatives that in the (perhaps far distant) future, their doctor may be able to offer treatments customized specifically for them. The prospect is awe-inspiring even for me, someone who’s been researching and writing about health data for decades.

Be that as it may, bringing so much personal information together into a giant database poses problems, suggests Jennifer Kulynych in an article for OUPblog, which is published by Oxford University Press. In particular, assembling a massive trove of individual medical histories and genomes may have serious privacy implications, she says.

In arguing her point, she makes a sobering observation that rings true for me:

“A growing number of experts, particularly re-identification scientists, believe it simply isn’t possible to de-identify the genomic data and medical information needed for precision medicine. To be useful, such information can’t be modified or stripped of identifiers to the point where there’s no real risk that the data could be linked back to a patient.”

As she points out, norms in the research community make it even more likely that patients could be individually identified. For example, while a doctor might need your permission to test your blood for care, in some states it’s quite legal for a researcher to take possession of blood not needed for that care, she says. Those researchers can then sequence your genome and place the data in a research database, and you may never have consented to this, or even know that it happened.

And there are other, perhaps even more troubling ways in which existing laws fail to protect the privacy of patients in researchers’ data stores. For example, current research and medical regulations let review boards waive patient consent or even allow researchers to call DNA sequences “de-identified” data, relying on the conventional wisdom that there’s no re-identification risk, she writes.

On top of all of this, the technology already exists to leverage this information for personal identification. For example, genome sequences can potentially be re-identified through comparison with a database of identified genomes. Law enforcement organizations have already used genomic data to predict key aspects of an individual’s face (such as eye color and race).

Then there’s the issue of what happens with EMR data storage. As the author notes, healthcare organizations are increasingly adding genomic data to their stores, and sharing it widely with individuals on their network. While such practices are largely confined to academic research institutions today, this type of data use is growing, and could also expose patients to involuntary identification.

Not everyone is as concerned as Kulynych about these issues. For example, a group of researchers recently concluded that a single patient anonymization algorithm could offer a “standard” level of privacy protection to patients, even when the organizations involved are sharing clinical data. They argue that larger clinical datasets that use this approach could protect patient privacy without generalizing or suppressing data in a manner that would undermine its usefulness.

But if nothing else, it’s hard to argue with Kulynych’s central concern: that too few rules have been updated to reflect the realities of big genomic and medical data stores. Clearly, state and federal rules need to address the emerging problems associated with big data and privacy. Otherwise, by the time a major privacy breach occurs, neither patients nor researchers will have any recourse.

AMIA Asks NIH To Push For Research Data Sharing

Posted on January 23, 2017 | Written By Anne Zieger

The American Medical Informatics Association is urging leaders at the NIH to take researchers’ data sharing plans into account when considering grant proposals.

AMIA is responding to an NIH Request for Information (topic: “Strategies for NIH Data Management, Sharing and Citation”) published in November 2016. In the RFI, the agency asked for feedback on how digital scientific data generated by NIH-funded research should be managed and disclosed to the public. It also asked for input on how to set standards for citing shared data and software.

In its response, AMIA said that the agency should give researchers “institutional incentives” designed to boost data sharing and strengthen data management. Specifically, the trade group suggested that NIH make data sharing plans a “scoreable” part of grant applications.

“Data sharing has become such an important proximal output of research that we believe the relative value of a proposed project should include consideration of how its data will be shared,” AMIA said in its NIH response. “By using the peer-review process, we will make incremental improvements to interoperability, while identifying approaches to better data sharing practices over time.”

To help the agency implement this change, AMIA recommended that applicants earmark funds for data curation and sharing as part of the grants’ direct costs. Doing so will help assure that data sharing becomes part of research ecosystems.

AMIA also recommends that NIH offer rewards to scholars who either create or contribute to publicly-available datasets and software. The trade group argues that such incentives would help those who create and analyze data advance their careers. (And this, your editor notes, would help foster a virtuous cycle in which data-oriented scientists are available to foster such efforts.)

Right now, to my knowledge, few big data integration projects include the kind of front-line research data we’re talking about here.  On the other hand, while few community hospitals are likely to benefit from research data in the near term, academic medical organizations are having a bit more luck, and offer us an attractive picture of how things could be.

For example, look at this project at Vanderbilt University Medical Center, which collects and manages translational and clinical research data via an interface with its EMR system.

At Vanderbilt, research data collection is integrated with clinical EMR use. Doctors there use a module within the research platform (known as REDCap) to collect data for prospective clinical studies. Once they get their research project approved, clinicians use menus to map health record data fields to REDCap. Then, REDCap automatically retrieves health record data for selected patients.
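For readers unfamiliar with REDCap, records collected this way can later be pulled out programmatically through its standard export API, roughly as sketched below. The URL and token are placeholders, and the field names depend on each project’s data dictionary.

```python
# Sketch of pulling study records through REDCap's standard export API, the
# kind of retrieval the Vanderbilt workflow automates. The URL and token are
# placeholders; field names are hypothetical examples.
import json
import urllib.parse
import urllib.request

REDCAP_URL = "https://redcap.example.edu/api/"  # placeholder
API_TOKEN = "..."                                # project-specific token

def export_records(fields):
    data = urllib.parse.urlencode({
        "token": API_TOKEN,
        "content": "record",
        "format": "json",
        "type": "flat",
        "fields": ",".join(fields),
    }).encode("utf-8")
    with urllib.request.urlopen(REDCAP_URL, data=data) as resp:
        return json.load(resp)

# e.g. export_records(["record_id", "hba1c", "bp_systolic"])
```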

My feeling is that if NIH starts pushing grantees to share data effectively, we’ll see more projects like REDCap, and in turn, better clinical care supported by such research. It looks to me like everybody wins here. So I hope the NIH takes AMIA’s proposal seriously.

IBM Watson Partners With FDA On Blockchain-Driven Health Sharing

Posted on January 16, 2017 | Written By Anne Zieger

IBM Watson Health has partnered with the FDA in an effort to create scalable exchange of health data using blockchain technology. The two will research the exchange of owner-mediated data from a variety of clinical data sources, including EMRs, clinical trial data and genomic health data. The researchers will also incorporate data from mobiles, wearables and the Internet of Things.

The initial project planned for IBM Watson and the FDA will focus on oncology-related data. This makes sense, given that cancer treatment involves complex communication between multispecialty care teams, transitions between treatment phases, and potentially, the need to access research and genomic data for personalized drug therapy. In other words, managing the communication of oncology data is a task fit for Watson’s big brain, which can read 200 million pages of text in 3 seconds.

Under the partnership, IBM and the FDA plan to explore how the blockchain framework can benefit public health by supporting information exchange use cases across varied data types, including both clinical trials and real-world data. They also plan to look at new ways to leverage the massive volumes of diverse data generated by biomedical and healthcare organizations. IBM and the FDA have signed a two-year agreement, but they expect to share initial findings this year.

The partnership comes as IBM works to expand its commercial blockchain efforts, including initiatives not only in healthcare, but also in financial services, supply chains, IoT, risk management and digital rights management. Big Blue argues that blockchain networks will spur “dramatic change” for all of these industries, but clearly has a special interest in healthcare.  According to IBM, Watson Health’s technology can access the 80% of unstructured health data invisible to most systems, which is clearly a revolution in the making if the tech giant can follow through on its potential.

According to Scott Lundstrom, group vice president and general manager of IDC Government and Health Insights, blockchain may solve some of the healthcare industry’s biggest data management challenges by providing a distributed, immutable patient record which can be secured and shared. In fact, this idea–building a distributed, blockchain-based EMR–seems to be gaining traction among many health IT thinkers.
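For readers new to the concept, the “immutable record” property comes from hash chaining: each entry stores the hash of the previous one, so any alteration breaks the chain. The toy sketch below shows only that core mechanism; real health-data blockchains add consensus, permissions, and encryption, and this is not IBM’s or the FDA’s design.

```python
# Toy sketch of the hash-chaining at the core of a blockchain-style record:
# each entry carries the hash of the previous one, so tampering anywhere
# breaks the chain. Real systems add consensus, permissions, and encryption.
import hashlib
import json
import time

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode("utf-8")).hexdigest()
    return block

def verify_chain(chain):
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != prev["hash"]:
            return False  # a record was altered or reordered
    return True
```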

As readers may know, I’m neither an engineer nor a software developer, so I’m not qualified to judge how mature blockchain technologies are today, but I have to say I’m a bit concerned about the rush to adopt it nonetheless.  Even companies with a lot at stake  — like this one, which sells a cloud platform backed by blockchain tech — suggest that the race to adopt it may be a bit premature.

I’ve been watching tech fashions come and go for 25 years, and they follow a predictable pattern. Or rather, they usually follow one of two paths. Go down one, and the players who are hot for a technology put so much time and money into it that they force-bake it into success. (Think, for example, of the ERP revolution.) Go down the other road, however, and the new technology crumbles in a haze of bad results and lost investments. Let’s hope we go down the former path, for everyone’s sake.

A New Meaning for Connected Health at 2016 Symposium (Part 4 of 4)

Posted on November 8, 2016 | Written By Andy Oram

The previous section of this article continued our exploration of the integration of health care into daily life. This section wraps up the article with related insights, including some thoughts about the future.

Memorable moments
I had the chance to meet with Casper de Clercq, who has built a venture capital practice devoted to health as a General Partner at Norwest Venture Partners. He recommends that manufacturers and clinicians give patients a device that collects data while doing something else they find useful, so that they are motivated to keep wearing it. As an example, he cited the Beddit sleep tracker, which works through sensors embedded (no pun intended) in the user’s bed.

He has found that successful companies pursue gradual, incremental steps toward automated programs. It is important to start with a manual process that works (such as phoning or texting patients from the provider), then move to semi-automation and finally, if feasible, full automation. The product must also be field-tested; one cannot depend on a pilot. This advice matches what Glen Tullman, CEO of Livongo Health, said in his keynote: instead of doing a pilot, try something out in the field and change quickly if it doesn’t work.

Despite his call for gradual change, de Clercq advises that companies show an ROI within one year–otherwise, the field of health care may have evolved and the solution may be irrelevant.

He also recommends a human component in any health program. The chief barrier to success is getting the individual to go along with both the initial activation and continuing motivation. Gamification, behavioral economics, and social connections can all enhance this participation.

A dazzling keynote on videogames for health was delivered by Adam Gazzaley, who runs neuroscience labs at the University of California at San Francisco. He pointed out that conventional treatments get feedback on patient reactions far too slowly–sometimes months after the reaction has occurred. In the field of mental health, his goal is to supplement (not replace) medications with videogames, and to provide instant feedback to game players and their treatment staff alike. Videogames not only provide a closed-loop system (meaning that feedback is instantaneous), but also engage patients by being fun and offering real-time rewards. Attention spans, anxiety, and memory are among the issues he expects games to improve. Education and wellness are also on his game plan. This is certainly one talk where I did not multitask (which is correlated with reduced performance)!

A future, hopefully bigger symposium
The Connected Health symposium has always been a production of the Boston-based Partners Health Care conglomerate, as part of its Connected Health division. The leader of the division, Dr. Joseph Kvedar, introduced the symposium by expressing satisfaction that so many companies and organizations are taking various steps to make connected health a reality, then identified three areas where leadership is still required:

  • Reassuring patients that the technologies and practices work for them. Most people will be willing to adopt these practices when urged by their doctors. But their privacy must be protected. This requires low-cost solutions to the well-known security problems in EHRs and devices–the latter being part of the Internet of Things, whose vulnerability was exposed by the recent attack on Dyn and other major Internet sites.

  • Relieving the pressures on clinicians. Kvedar reported that 45 percent of providers would like to adopt connected health practices, but only 12 percent do so. One of the major concerns holding them back is the possibility of data overload, along with liability for some indicator of ill health that they miss in the flood of updates. Partners Connected Health will soon launch a provider adoption initiative that deals with their concerns.

  • Scaling. Pilot projects in connected health invest a lot of researcher time and offer a lot of incentives to develop engagement among their subjects. Because engagement is the whole goal of connected health, the pilot may succeed but prove hard to turn into a widespread practice. Another barrier to scaling is consumers’ lack of tolerance for the smallest glitches or barriers to adoption. Providers, also, insist that new practices fit their established workflows.

Dr. Kvedar announced at this symposium that they would be doing future symposia in conjunction with the Personal Connected Health Alliance (formerly the mHealth Summit owned by HIMSS), a collaboration that makes sense. Large as Partners Health Care is, the symposium reaches much farther into the health care industry. The collaboration should bring more resources and more attendees, establishing the ideals of connected health as a national and even international movement.

A New Meaning for Connected Health at 2016 Symposium (Part 3 of 4)

Posted on November 7, 2016 | Written By Andy Oram

The previous section of this article paused during a discussion of the accuracy and uses of devices. At a panel on patient-generated data, a speaker said that one factor holding back the use of patient data was the lack of sophistication in EHRs. They must be enhanced to preserve the provenance of data: whether it came from a device or from a manual entry by the patient, and whether the device was consumer-grade or a well-tested medical device. Doctors invest different levels of trust in different methods of collecting data: devices can provide more objective information than other ways of asking patients for data. A participant in the panel also pointed out that devices are more reliable in the lab than under real-world conditions. Consumers must be educated about the proper use of devices, such as whether to sit down and how to hold their arms when taking their blood pressure.

Costantini decried the continuing silos in both data sharing and health care delivery. She said only half of doctors share patient data with other doctors or caretakers. She also praised the recent collaboration between Philips and Qualcomm to make it easier for device data to get into medical records. Other organizations that have been addressing that issue for some time include Open mHealth, which I reviewed in an earlier article, and Validic.

Oozing into workflow
The biggest complaint I hear from clinicians about EHRs–aside from the time wasted in their use, which may be a symptom of the bigger problem–is that the EHRs disrupt workflow. Just as connected health must integrate with patient lives as seamlessly as possible, it should recognize how teams work and provide them with reasonable workflows. This includes not only entering existing workflows as naturally as capillary action, but helping providers adopt better ones.

The Veterans Administration is forging into this area with a new interface called the Enterprise Health Management Platform (eHMP). I mentioned it in a recent article on the future of the VA’s EHR. A data integration and display tool, eHMP is agnostic as to data source. It can be used to extend the VistA EHR (or potentially replace it) with other offerings. Although eHMP currently displays a modern dashboard format, as described in a video demo by Shane Mcnamee, the tool aims to be much more than that. It incorporates Business Process Modeling Notation (BPMN) and the WS-Human Task Specification to provide workflow support. The Activity Management Service in eHMP puts Clinical Best Practices directly into the workflow of health care providers.

Clinicians can use eHMP to determine where a consultation request goes; currently, the system is based on Red Hat’s BPMN engine. If one physician asks another to examine the patient, that task turns up on the receiving physician’s dashboard. Teams as well as individuals can be alerted to a patient need, and alerts can be marked as routine or urgent. The alerts can also be associated with time-outs, so that their importance is elevated if no one acts on them in the chosen amount of time.
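To make the routing and time-out behavior concrete, here is a hypothetical sketch of a consult task that escalates from routine to urgent if nobody acknowledges it within a chosen window. It illustrates the idea only and is not the eHMP implementation.

```python
# Illustrative sketch (not the eHMP implementation) of the routing behavior
# described above: a consult task lands on a clinician's or team's work list,
# and its priority is raised if nobody acts on it within the chosen time-out.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class ConsultTask:
    patient_id: str
    assignee: str                 # a clinician or a team queue
    priority: str = "routine"     # "routine" or "urgent"
    created: datetime = field(default_factory=datetime.utcnow)
    timeout: timedelta = timedelta(hours=24)
    acknowledged: bool = False

    def escalate_if_stale(self, now=None):
        now = now or datetime.utcnow()
        if not self.acknowledged and now - self.created > self.timeout:
            self.priority = "urgent"   # elevate importance after the time-out
        return self.priority
```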

eHMP is just in the beginning stages of workflow support. Developers are figuring out how to increase the sophistication of alerts, so that they offer a higher signal-to-noise ratio than most hospital CDS systems, and add intelligence to choose the best person to whom an alert should be directed. These improvements will hopefully free up time in the doctor’s session to discuss care in depth–what both patients and providers have long said they most want from the health care field.

At the Connected Health symposium, I found companies working on workflow as well. Dataiku (whose name is derived from “haiku”) has been offering data integration and analytics in several industries for the past three years. Workflows, including conditional branches and loops, can be defined through a graphical interface. Thus, a record may trigger a conditional inquiry: does a lab value exceed normal limits? If not, it is merely recorded, but if so, someone can be alerted to follow up.

Dataiku illustrates an all-in-one, comprehensive approach to analytics that remains open to extensions and integration with other systems. On the one hand, it covers the steps of receiving and processing data pretty well.

To clean incoming data (the biggest task on most data projects), their DSS system can use filters and even cluster data to find patterns. For instance, if 100 items list “Ohio” for their location and one lists “Oiho”, the system can determine that the outlier is probably a misspelling. The system can also assign data to broad categories (string or integer) as well as more narrowly defined categories (such as Social Security number or ZIP code).
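The misspelling example can be illustrated with a few lines of fuzzy matching: rare values are compared against frequent ones and flagged as likely typos. This is a generic sketch of the technique, not Dataiku’s DSS internals; the support threshold and similarity cutoff are assumptions.

```python
# Sketch of the cleanup step described above, using simple fuzzy matching to
# flag a low-frequency value ("Oiho") as a likely misspelling of a dominant
# one ("Ohio"). Thresholds here are arbitrary illustrations.
from collections import Counter
from difflib import get_close_matches

def suggest_corrections(values, min_support=10):
    counts = Counter(values)
    common = [v for v, c in counts.items() if c >= min_support]
    fixes = {}
    for value, count in counts.items():
        if count < min_support:
            match = get_close_matches(value, common, n=1, cutoff=0.7)
            if match:
                fixes[value] = match[0]   # e.g. {"Oiho": "Ohio"}
    return fixes
```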

For analysis, Dataiku offers generic algorithms that are in wide use, such as linear regressions, and a variety of advanced machine learning (artificial intelligence) algorithms in the visual backend of the program–so the users don’t need to write a single line of code. Advanced users can also add their own algorithms coded in a variety of popular languages such as Python, R, and SQL. The software platform offers options for less technically knowledgeable users, pre-packaged solutions for various industries such as health care, security features such as audits, and artificial intelligence to propose an algorithm that works on the particular input data.

Orbita Health handles workflows between patients and providers to help with such issues as pain management and medication adherence. The company addresses ease of use by supporting voice-activated devices such as the Amazon Echo, as well as some 250 other devices. Thus, a patient can send a message to a provider through a single statement to a voice-activated device or over another Internet-connected device. For workflow management, the provider can load a care plan into the system and use Orbita’s orchestration engine (similar to the Business Process Modeling Notation mentioned earlier) to set up activities, such as sending a response to a patient’s device or comparing a measurement to the patient’s other measurements over time. Orbita’s system supports conditional actions, nesting, and trees.

CitiusTech, founded in 2005, integrates data from patient devices and apps into providers’ data, allowing enterprise tools and data to be used in designing communications and behavioral management in the patient’s everyday life. The company’s Integrated Analytix platform supports more than 100,000 apps and devices from third-party developers. Industry studies have shown effective use of devices; one study showed a 40% reduction in emergency room admissions among congestive heart failure patients through the use of scales, which engaged the patients in following health protocols at home.

In a panel on behavior change and the psychology of motivation, participants pointed out that long-range change requires multiple, complex incentives. At the start, the patient may be motivated by a zeal to regain lost functioning, or even by extrinsic rewards such as lower insurance premiums. But eventually the patient needs to enfold the exercise program or other practice into his life as a natural activity. Rewards can include things like having a beer at the end of a run, or sharing daily activities with friends on social media.

In his keynote on behavioral medicine, the Co-founder & CEO of Omada Health, Sean Duffy, put up a stunningly complex chart showing the incentives, social connections, and other factors that go into the public’s adoption of health practices. At a panel called “Preserving the Human Touch in the Expanding World of Digital Therapies”, a speaker also gave the plausible advice that we tell patients what we can give back to them when collecting data.

The next section of this article offers some memorable statements at the conference, and a look toward the symposium’s future.