
LTPAC – A Vibrant Hidden World

Posted on November 20, 2017 | Written By

Colin Hung is the co-founder of the #hcldr (healthcare leadership) tweetchat, one of the most popular and active healthcare social media communities on Twitter. Colin speaks, tweets and blogs regularly about healthcare, technology, marketing and leadership. He is currently an independent marketing consultant working with leading health IT companies. Colin is a member of #TheWalkingGallery. His Twitter handle is: @Colin_Hung.

PointClickCare, makers of a cloud-based suite of applications designed for long-term and post-acute care (LTPAC), recently held its annual user conference (PointClickCare SUMMIT) in sunny Orlando, Florida. The conference quite literally shone a light on the LTPAC world – a world that is often overlooked by those of us who focus on the acute care side of healthcare. It was an eye-opening experience.

This year’s SUMMIT was the largest in the company’s history, attracting over 1,800 attendees from skilled nursing providers, senior living facilities, home health agencies and Continuing Care Retirement Communities. Over the three days of SUMMIT I managed to speak to about 100 attendees and every one of them had nothing but praise for PointClickCare.

“I couldn’t imagine doing my work without PointClickCare. I wouldn’t even know where to start if I had to use paper.”

“I don’t want to go back to the days before we had PointClickCare. We had so much paperwork back then and I used to spend an hour or two after my shift just documenting. Now I don’t have to. I track everything in the system as I go.”

“PointClickCare lets us focus more on the people in our care. We have the ability to do things that would have been impossible if we weren’t on an electronic system. We’re even starting to share data with some of our community partners.”

Contrary to what many believe, not every skilled nursing provider and senior living facility operates with clipboards and fax machines. “That’s one of the biggest misconceptions that people have of the LTPAC market,” says Dave Wessinger, Co-Founder and CTO at PointClickCare. “Almost everyone assumes that LTPAC organizations use nothing but paper or a terrible self-built electronic solution. The reality is that many have digitized their operations and are every bit as modern as their acute care peers.”

According to a recent Black Book survey, 19 percent of LTPAC providers have now adopted some form of an Electronic Health Record (EHR) system. In 2016, Black Book found the adoption rate was 15 percent. The Office of the National Coordinator recently published a data brief that showed adoption of EHRs by Skilled Nursing Facilities (SNFs) had reached 64% in 2016.

Although these numbers are low compared to the 90%+ EHR adoption rate among US hospitals, they do indicate that many pioneering LTPAC providers have already jumped into the digital world.

“It’s fun to be asked by our clients to work with their acute care partners,” explains BJ Boyle, Director of Product Management at PointClickCare. “First of all, they are surprised that a company like PointClickCare even exists. They are even more surprised when we work with them to exchange health information via CCD.”

Boyle’s statement was one of many during SUMMIT that opened my eyes to the innovative technology ecosystem that exists in LTPAC. Further proof came from the SUMMIT exhibit hall, where no fewer than 72 partners had booths set up.

Among the exhibitors were several that focus exclusively on the LTPAC market:

  • Playmaker. A CRM/Sales solution for post-acute care.
  • Hymark. A technical consultancy that helps LTPAC organizations implement and optimize PointClickCare.
  • Careserv. An LTPAC cloud-hosting and managed services provider.

And some with specialized LTPAC offerings:

  • Care.ly. An app that helps families coordinate the care of their elderly loved ones with senior care facilities.
  • McBee Associates. Financial and revenue cycle consultants that help LTPAC organizations.

I came away from SUMMIT with a newfound respect for the people that work in LTPAC. I also have a new appreciation for the innovative solutions being developed for LTPAC by companies like PointClickCare, Care.ly and Playmaker. This is a vibrant hidden world that is worth paying attention to.

Note: PointClickCare covered Healthcare Scene’s travel expenses to attend the conference.

Health IT Continues To Drive Healthcare Leaders’ Agenda

Posted on October 23, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

A new study laying out opportunities, challenges and issues in healthcare likely to emerge in 2018 demonstrates that health IT is very much top of mind for healthcare leaders.

The 2018 HCEG Top 10 list, which is published by the Healthcare Executive Group, was created based on feedback from executives at its 2017 Annual Forum in Nashville, TN. Participants included health plans, health systems and provider organizations.

The top item on the list was “Clinical and Data Analytics,” which the list describes as leveraging big data with clinical evidence to segment populations, manage health and drive decisions. The second-place slot was occupied by “Population Health Services Organizations,” which, it says, operationalize population health strategy and chronic care management, drive clinical innovation and integrate social determinants of health.

The list also included “Harnessing Mobile Health Technology,” which included improving disease management and member engagement in data collection/distribution; “The Engaged Digital Consumer,” which by its definition includes HSAs, member/patient portals and health and wellness education materials; and cybersecurity.

Other hot issues named by the group include value-based payments, cost transparency, total consumer health, healthcare reform and addressing pharmacy costs.

So, readers, do you agree with HCEG’s priorities? Has the list left off any important topics?

In my case, I’d probably add a few items to the list. For example, I may be getting ahead of the industry, but I’d argue that healthcare AI-related technologies might belong there. While there’s a whole separate article to be written here, in short, I believe that both AI-driven data analytics and consumer-facing technologies like medical chatbots have tremendous potential.

Also, I was surprised to see that care coordination improvements didn’t top respondents’ list of concerns. Admittedly, some of the list items might involve taking coordination to the next level, but the executives apparently didn’t identify it as a top priority.

Finally, as unsexy as the topic is for most, I would have thought that some form of health IT infrastructure spending or broader IT investment concerns might rise to the top of this list. Even if these executives didn’t discuss it, my sense from looking at multiple information sources is that providers are, and will continue to be, hard-pressed to allocate enough funds for IT.

Of course, if the executives involved can address even a few of their existing top 10 items next year, they’ll be doing pretty well. For example, we all know that providers’ ability to manage value-based contracting is minimal in many cases, so making progress would be worthwhile. Participants like hospitals and clinics still need time to get their act together on value-based care, and many are unlikely to be on top of things by 2018.

There are also challenges, like population health management, that are ongoing processes rather than destinations. Providers will be struggling to address them well beyond 2018. That being said, it’d be great if healthcare execs could improve their results next year.

Nit-picking aside, HCEG’s Top 10 list is largely dead-on. The question is whether the industry will be able to step up and address all of these things. Fingers crossed!

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 3 of 3)

Posted on October 20, 2017 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Earlier parts of this article set the stage for understanding what the Alexa Diabetes Challenge is trying to achieve and how some finalists interpreted the mandate. We examine three more finalists in this final section.

DiaBetty from the University of Illinois-Chicago

DiaBetty focuses on a single, important aspect of diabetes: the effect of depression on the course of the disease. This project, developed by the Department of Psychiatry at the University of Illinois-Chicago, does many of the things that other finalists in this article do–accepting data from EHRs, dialoguing with the individual, presenting educational materials on nutrition and medication, etc.–but with the emphasis on inquiring about mood and handling the impact that depression-like symptoms can have on behavior that affects Type 2 diabetes.

Olu Ajilore, Associate Professor and co-director of the CoNECt lab, told me that his department benefited greatly from close collaboration with bioengineering and computer science colleagues who, before DiaBetty, worked on another project that linked computing with clinical needs. Although they used some built-in capabilities of the Alexa, they may move to Lex or another AI platform and build a stand-alone device. Their next step is to design and run clinical trials, checking the effect of DiaBetty on health outcomes such as medication compliance, visits, and blood sugar levels, as well as on cost reductions.

T2D2 from Columbia University

Just as DiaBetty explores the impact of mood on diabetes, T2D2 (which stands for “Taming Type 2 Diabetes, Together”) focuses on nutrition. Far more than sugar intake is involved in the health of people with diabetes. Elliot Mitchell, a PhD student who led the T2D2 team under Assistant Professor Lena Mamykina in the Department of Biomedical Informatics, told me that the balance of macronutrients (carbohydrates, fat, and protein) is important.

T2D2 is currently a prototype, developed as a combination of an Alexa Skill and a chatbot based on Lex. The Alexa Skills Kit handles voice interactions. Both the Skill and the chatbot communicate with a back end that handles accounts and logic. Although the project draws on related Columbia University technology in diabetes self-management, both the NLP and the voice interface were developed specifically for the Alexa Diabetes Challenge. The T2D2 team included people from the disciplines of human-computer interaction, data science, nursing, and behavioral nutrition.
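To make that division of labor concrete, here is a minimal sketch of the kind of AWS Lambda handler an Alexa Skill in this space might use to accept a spoken blood sugar reading and hand it to a back end. The intent name, slot name, and log_reading helper are illustrative assumptions, not T2D2’s actual code.

```python
# Minimal sketch of an Alexa Skill handler (AWS Lambda, Python).
# "LogBloodSugarIntent", the "BloodSugar" slot, and log_reading() are
# hypothetical names for illustration, not T2D2's actual implementation.

def log_reading(user_id, value):
    """Placeholder for the back end that stores readings and applies logic."""
    print(f"storing blood sugar {value} for user {user_id}")

def build_response(speech, end_session=True):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Hi, you can tell me a blood sugar reading.", end_session=False)
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "LogBloodSugarIntent":
        value = request["intent"]["slots"]["BloodSugar"]["value"]
        user_id = event["session"]["user"]["userId"]
        log_reading(user_id, value)
        return build_response(f"Got it, I logged a blood sugar of {value}.")
    return build_response("Sorry, I didn't catch that.")
```

In a split like this, the Skill only parses the utterance; everything personalized lives behind the back end, which is what allows the same logic to also serve a Lex chatbot.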

The user invokes Alexa to tell it blood sugar values and the contents of meals. T2D2, in response, offers recipe recommendations and other advice. Like many of the finalists in this article, it looks back at meals over time, sees how combinations of nutrients matched changes in blood sugar, and personalizes its food recommendations.

For each patient, before it gets to know that patient’s diet, T2D2 can make food recommendations based on what is popular in their ZIP code. It can change these as it watches the patient’s choices and records the patient’s reactions to its recommendations (for instance, “I don’t like that food”).

Data is also anonymized and aggregated for both recommendations and future research.

The care team and family caregivers are also involved, although less intensely than in some of the other finalists’ projects. The patient can offer caregivers a one-page report with a plot of blood sugar by time and day for the previous two weeks, along with goals, progress made, and questions. The patient can also connect her account and share key medical information with family and friends, a feature called the Supportive Network.

The team’s next phase is to run studies to evaluate some of the assumptions they made when developing T2D2, and to improve it for eventual release into the field.

Sugarpod from Wellpepper

I’ll finish this article with the winner of the challenge, already covered in an earlier article. Since the publication of that article, according to the founder and CEO of Wellpepper, Anne Weiler, the company has integrated some of Sugarpod’s functions into a bathroom scale. When a person stands on the scale, it takes an image of their feet and uploads it to sites that both the individual and their doctor can view. A machine learning image classifier can check the photo for problems such as diabetic foot ulcers. The scale interface can also ask the patient for quick information such as whether they took their medication and what their blood sugar is. Extended conversations are avoided, under the assumption that people don’t want to have them in the bathroom. The company designed its experiences to be integrated throughout the person’s day: stepping on the scale and answering a few questions in the morning, interacting with the care plan on a mobile device at work, and checking notifications and messages with an Echo device in the evening.
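As a rough illustration of the image-classification step Weiler describes, a sketch like the following could score a foot photo with a fine-tuned convolutional network. The model file, labels, and threshold are assumptions for illustration; Wellpepper has not published the details of its classifier.

```python
# Illustrative sketch only: scoring a foot photo with a fine-tuned CNN.
# "foot_ulcer_resnet18.pt", the two labels, and the 0.5 threshold are
# hypothetical; Sugarpod's actual classifier has not been published.
import torch
from torchvision import models, transforms
from PIL import Image

LABELS = ["no_ulcer", "possible_ulcer"]  # assumed binary labels

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_foot_photo(path, weights="foot_ulcer_resnet18.pt"):
    """Return a probability per label for one foot photo."""
    model = models.resnet18(num_classes=len(LABELS))
    model.load_state_dict(torch.load(weights, map_location="cpu"))
    model.eval()
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    return dict(zip(LABELS, probs.tolist()))

# Example use: flag the photo for clinician review if the ulcer score is high.
# scores = classify_foot_photo("left_foot.jpg")
# if scores["possible_ulcer"] > 0.5:
#     ...  # surface the image to the doctor-facing site mentioned above
```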

Any machine that takes pictures can arouse worry when installed in a bathroom. While taking the challenge and talking to people with diabetes, Wellpepper learned to add a light that goes on when the camera is taking a picture.

This kind of responsiveness to patient representatives in the field will determine the success of each of the finalists in this challenge. They all strive for behavioral change through connected health, and this strategy is completely reliant on engagement, trust, and collaboration by the person with a chronic illness.

The potential of engagement through voice is just beginning to be tapped. There is evidence, for instance, that serious illnesses can be diagnosed by analyzing voice patterns. As we come up on the annual Connected Health Conference this month, I will be interested to see how many participating developers share the common themes that turned up during the Alexa Diabetes Challenge.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 2 of 3)

Posted on October 19, 2017 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The first part of this article introduced the problems of computer interfaces in health care and mentioned some current uses for natural language processing (NLP) for apps aimed at clinicians. I also summarized the common goals, problems, and solutions I found among the five finalists in the Alexa Diabetes Challenge. This part of the article shows the particular twist given by each finalist.

My GluCoach from HCL America in Partnership With Ayogo

There are two levels from which to view My GluCoach. On one level, it’s an interactive tool exemplifying one of the goals I listed earlier–intense engagement with patients over daily behavior–as well as the theme of comprehensiveness. The interactions that My GluCoach offers were divided into three types by Abhishek Shankar, a Vice President at HCL Technologies America:

  • Teacher: the service can answer questions about diabetes and pull up stored educational materials.

  • Coach: the service can track behavior by interacting with devices and prompt the patient to eat differently or go out for exercise. In addition to asking questions, a patient can set up Alexa to deliver alarms at particular times, a feature My GluCoach uses to deliver advice.

  • Assistant: the service can provide conveniences to the patient, such as ordering a cab to take her to an appointment.

On a higher level, My GluCoach fits into broader services offered to health care institutions by HCL Technologies as part of a population health program. In creating the service HCL partnered with Ayogo, which develops a mobile platform for patient engagement and tracking. HCL has also designed the service as a general health care platform that can be expanded over the next six to twelve months to cover medical conditions besides diabetes.

Another theme I discussed earlier, interactions with outside data and the use of machine learning, is key to My GluCoach. For its demo at the challenge, My GluCoach took data about exercise from a Fitbit. It can potentially work with any device that shares information, and HCL plans to integrate the service with common EHRs. As My GluCoach gets to know the individual who uses it over months and years, it can tailor its responses more and more intelligently to the learning style and personality of the patient.
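For a sense of what taking data from a Fitbit can look like in practice, here is a minimal sketch that pulls a day’s step count from the public Fitbit Web API. The OAuth access token and the coaching rule at the end are assumptions for illustration; HCL has not described My GluCoach’s actual integration.

```python
# Minimal sketch: fetching a day's activity summary from the Fitbit Web API.
# ACCESS_TOKEN stands in for an OAuth 2.0 bearer token the user has granted;
# the coaching rule is a made-up example, not My GluCoach's actual logic.
import requests

ACCESS_TOKEN = "user-granted-oauth-token"  # placeholder

def daily_steps(date_str):
    """Return the step count for a given date (YYYY-MM-DD)."""
    url = f"https://api.fitbit.com/1/user/-/activities/date/{date_str}.json"
    resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    return resp.json()["summary"]["steps"]

# A coaching rule might then prompt the user:
# if daily_steps("2017-10-01") < 5000:
#     suggest_a_walk()  # hypothetical nudge delivered via an Alexa alarm or phone
```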

Patterns of eating, medical compliance, and other data are not the only input to machine learning. Shankar pointed out that different patients require different types of interventions. Some simply want to be given concrete advice and told what to do. Others want to be presented with information and then make their own decisions. My GluCoach will hopefully adapt to whatever style works best for the particular individual. This affective response–together with a general tone of humor and friendliness–will win the trust of the individual.

PIA from Ejenta

PIA, which stands for “personal intelligent agent,” manages care plans, delivering information to the affected patients as well as their care teams and concerned relatives. It collects medical data and draws conclusions that allow it to generate alerts if something seems wrong. Patients can also ask PIA how they are doing, and the agent will respond with personalized feedback and advice based on what the agent has learned about them and their care plan.

I talked to Rachna Dhamija, founder and CEO of Ejenta, who worked on the team that developed PIA. (The name Ejenta is a version of the word “agent” that entered the Bengali language as slang.) She said that the AI technology had been licensed from NASA, which had developed it to monitor astronauts’ health and other aspects of flights. Ejenta helped turn it into a care coordination tool, with web and mobile interfaces, used at a major HMO to treat patients with chronic heart failure and high-risk pregnancies. Ejenta expanded their platform to include an Alexa interface for the diabetes challenge.

As a care management tool, PIA records targets such as glucose levels, goals, medication plans, nutrition plans, and action parameters such as how often to take measurements using the devices. Each caregiver, along with the patient, has his or her own agent, and caregivers can monitor multiple patients. The patient has very granular control over sharing, telling PIA which kinds of data can be sent to each caregiver. Access rights must be set on the web or a mobile device, because allowing Alexa to be used for that purpose might let someone trick the system into thinking he was the patient.

Besides Alexa, PIA takes data from devices (scales, blood glucose monitors, blood pressure monitors, etc.) and from EHRs in a HIPAA-compliant manner. Because the service cannot wake up Alexa, it currently delivers notifications, alerts, and reminders by sending a secure message to the provider’s agent. The provider can then contact the patient by email or mobile phone. The team plans to integrate PIA with an Alexa notifications feature in the future, so that PIA can proactively communicate with the patient via Alexa.

PIA goes beyond the standard rules for alerts, allowing alerts and reminders to be customized based on what it learns about the patient. PIA uses machine learning to discover what is normal activity (such as weight fluctuations) for each patient and to make predictions based on the data, which can be shared with the care team.
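To illustrate the kind of “what is normal for this patient” logic Dhamija describes, here is a small sketch that flags a weight reading as unusual when it deviates sharply from the patient’s own recent history. The window size and threshold are arbitrary assumptions, not Ejenta’s actual model.

```python
# Illustrative sketch: flag a reading that deviates from the patient's own
# recent baseline. The 14-day window and 2-sigma threshold are arbitrary
# assumptions, not Ejenta's actual model.
from statistics import mean, stdev

def is_unusual(history, new_reading, window=14, threshold=2.0):
    """Return True if new_reading deviates sharply from the recent baseline."""
    recent = history[-window:]
    if len(recent) < 3:
        return False  # not enough data to establish a baseline yet
    baseline, spread = mean(recent), stdev(recent)
    if spread == 0:
        return new_reading != baseline
    return abs(new_reading - baseline) / spread > threshold

# Example: daily weights in kg; a sudden jump like this could signal fluid
# retention and would be surfaced to the care team as an alert.
weights = [82.1, 82.3, 81.9, 82.0, 82.2, 82.1, 82.4]
print(is_unusual(weights, 85.3))  # True
```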

The final section of this article covers DiaBetty, T2D2, and Sugarpod, the remaining finalists.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 1 of 3)

Posted on October 16, 2017 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The leading pharmaceutical and medical company Merck, together with Amazon Web Services, has recently been exploring the potential health impacts of voice interfaces and natural language processing (NLP) through an Alexa Diabetes Challenge. I recently talked to the five finalists in this challenge. This article explores the potential of new interfaces to transform the handling of chronic disease, and what the challenge reveals about currently available technology.

Alexa, of course, is the ground-breaking system that brings everyday voice interaction with computers into the home. Most of its uses are trivial (you can ask about today’s weather or change channels on your TV), but one must not underestimate the immense power of combining artificial intelligence with speech, one of the most basic and essential human activities. The potential of this interface for disabled or disoriented people is particularly intriguing.

The diabetes challenge is a nice focal point for exploring the more serious contribution made by voice interfaces and NLP. Because of the alarming global spread of this illness, the challenge also presents immediate opportunities that I hope the participants succeed in productizing and releasing into the field. Using the challenge’s published criteria, the judges today announced Sugarpod from Wellpepper as the winner.

This article will list some common themes among the five finalists, look at the background about current EHR interfaces and NLP, and say a bit about the unique achievement of each finalist.

Common themes

Overlapping visions of goals, problems, and solutions appeared among the finalists I interviewed for the diabetes challenge:

  • A voice interface allows more frequent and easier interactions with at-risk individuals who have chronic conditions, potentially achieving the behavioral health goal of helping a person make the right health decisions on a daily or even hourly basis.

  • Contestants seek to integrate many levels of patient intervention into their tools: responding to questions, collecting vital signs and behavioral data, issuing alerts, providing recommendations, delivering educational background material, and so on.

  • Services in this challenge go far beyond interactions between Alexa and the individual. The systems commonly anonymize and aggregate data in order to perform analytics that they hope will improve the service and provide valuable public health information to health care providers. They also facilitate communication of crucial health data between the individual and her care team.

  • Given the use of data and AI, customization is a big part of the tools. They are expected to determine the unique characteristics of each patient’s disease and behavior, and adapt their advice to the individual.

  • In addition to Alexa’s built-in language recognition capabilities, Amazon provides the Lex service for sophisticated text processing. Some contestants used Lex (a minimal sketch of calling it follows this list), while others drew on their own prior research and built their own natural language processing engines.

  • Alexa never initiates a dialog, merely responding when the user wakes it up. The device can present a visual or audio notification when new material is present, but it still depends on the user to request the content. Thus, contestants are using other channels to deliver reminders and alerts such as messaging on the individual’s cell phone or alerting a provider.

  • Alexa is not HIPAA-compliant, but may achieve compliance in the future. This would help health services turn their voice interfaces into viable products and enter the mainstream.
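As a concrete example of the Lex route mentioned above, here is a minimal sketch of sending a user’s typed or transcribed utterance to a Lex bot through the AWS SDK for Python. The bot name, alias, and the way the reply is used are assumptions for illustration, not any finalist’s actual configuration.

```python
# Minimal sketch: sending an utterance to an Amazon Lex (V1) bot with boto3.
# "DiabetesCoachBot" and its "prod" alias are hypothetical names, not any
# finalist's actual configuration.
import boto3

lex = boto3.client("lex-runtime")

def ask_bot(user_id, text):
    """Send one utterance to the bot; return its reply, intent, and slots."""
    result = lex.post_text(
        botName="DiabetesCoachBot",
        botAlias="prod",
        userId=user_id,
        inputText=text,
    )
    return result.get("message"), result.get("intentName"), result.get("slots")

# Example:
# reply, intent, slots = ask_bot("patient-123", "my sugar was 140 after oatmeal")
# The slots dict carries the extracted values (meal, reading) back to the service.
```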

Some background on interfaces and NLP

The poor state of current computing interfaces in the medical field is no secret–in fact, it is one of the loudest and most insistent complaints from doctors, as seen on sites like KevinMD. You can visit Healthcare IT News or JAMA regularly and read the damning indictments.

Several factors can be blamed for this situation, including unsophisticated electronic health records (EHRs) and arbitrary reporting requirements from the Centers for Medicare & Medicaid Services (CMS). Natural language processing may provide one of the technical solutions to these problems. The NLP services by Nuance are already famous. An encouraging study finds substantial time savings from using NLP to enter doctors’ insights. And on the other end–where doctors are searching the notes they previously entered for information–a service called Butter.ai uses NLP for intelligent searches. Unsurprisingly, the American Health Information Management Association (AHIMA) looks forward to the contributions of NLP.

Some app developers are now exploring voice interfaces and NLP on the patient side. I covered two such companies, including the one that ultimately won the Alexa Diabetes Challenge, in another article. In general, developers using these interfaces hope to eliminate the fuss and abstraction in health apps that frustrate many consumers, thereby reaching new populations and interacting with them more frequently, with deeper relationships.

The next two parts of this article turn to each of the five finalists, to show the use they are making of Alexa.

E-Patient Update: I Was A Care Coordination Victim

Posted on June 12, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

Over the past few weeks, I’ve been recovering from a shoulder fracture. (For the record, I wasn’t injured engaging in some cool athletic activity like climbing a mountain; I simply lost my footing on the tile floor of a beauty salon and frightened a gaggle of hair stylists. At least I got a free haircut!)

During the course of my treatment for the injury, I’ve had a chance to sample both the strengths and weaknesses of coordinated treatment based around a single EMR. And unfortunately, the weaknesses have shown up more often than the strengths.

What I’ve learned, first hand, is that templates and shared information may streamline treatment, but also pose a risk of creating a “groupthink” environment that inhibits a doctor’s ability to make independent decisions about patient care.

At the same time, I’ve concluded that centralizing treatment across a single EMR may provide too little context to help providers frame care issues appropriately. My sense is that my treatment team had enough information to be confident they were doing the right thing, but not enough to really understand my issues.

Industrial-style processes

My insurance carrier is Kaiser Permanente, which both provides insurance and delivers all of my care. Kaiser, which reportedly spent $4 billion on the effort, rolled out Epic roughly a decade ago, and has made it the backbone of its clinical operations. As you can imagine, every clinician who touches a Kaiser patient has access to that patient’s full treatment history with Kaiser providers.

During the first few weeks with Kaiser, I found that physicians there made good use of the patient information they were accumulating, and used it to handle routine matters quite effectively. For example, my primary care physician had no difficulty getting an opinion on a questionable blood test from a hematologist colleague, probably because the hematologist had access not only to the test result but also my medical history.

However, the system didn’t serve me so well when I was being treated for the fracture, an injury which, given my other issues, may have responded better to a less standardized approach.  In this case, I believe that the industrial-style process of care facilitated by the EMR worked to my disadvantage.

Too much information, yet not enough

After the fracture, as I worked my way through my recovery process, I began to see that the EMR-based process used to make Kaiser efficient may have discouraged providers from inquiring more deeply into my particular circumstances.

And yes, this could have happened in a paper world, but I believe the EMR intensified the tendency to treat me as “the fracture in room eight” rather than as an individual with unique needs.

For example, at each step of the way I informed physicians that the sling they had provided was painful to use, and that I needed some alternative form of arm support. As far as I can tell, each physician who saw me looked at other providers’ notes, assumed that the predecessor had a good reason for insisting on the sling, and simply followed suit. Worse, none seemed to hear me when I insisted that it would not work.

While this may sound like a trivial concern, the lack of a sling alternative seemed to raise my level of pain significantly. (And let me tell you, a shoulder fracture is a very painful event already.)

At the same time, otherwise very competent physicians seemed to assume that I’d gotten information that I hadn’t, particularly education on my prognosis. At each stage, I asked questions about the process of recovery, and for whatever reason didn’t get the information I needed. Unfortunately, in my pain-addled state I didn’t have the fortitude to insist they tell me more.

My sense is that my care would’ve benefited from both a more flexible process and more information on my general situation, including the fact that I was missing work and really needed reassurance that I would get better soon. Instead, it was care by data point.

Dealing with exceptions

All that being said, I know that the EMR itself isn’t solely to blame for the problems I encountered. Kaiser physicians are no doubt constrained by treatment protocols which exist whether or not they’re relying on EMR-based information.

I also know that there are good reasons that organizations like Kaiser standardize care, such as improving outcomes and reducing care costs. And on the whole, my guess is that these protocols probably do improve outcomes in many cases.

But in situations like mine, I believe they fall short. If nothing else, Kaiser perhaps should have a protocol for dealing with exceptions to the protocols. I’m not talking about an informal, seat-of-the-pants judgment call, but about an actual process for dealing with exceptions to the usual care flow.

Three weeks into healing, my shoulder is doing much better, thank you very much. But though I can’t prove it, I strongly suspect that I might have hurt less if physicians were allowed to make exceptions and address my emerging needs. And while I can’t blame the EMR for this experience entirely, I believe it played a critical role in consolidating opinion and effectively limiting my options.

While I have as much optimism about the role of EMRs as anyone, I hope they don’t serve as a tool to stifle dissension and oversimplify care in the future. I, for one, don’t want to suffer because someone feels compelled to color inside of the lines.

E-Patient Update: Patients Need Better Care Management Workflows

Posted on March 10, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

Now and then, I get a little discouraged by the state of my health data. Like providers, I’m frustrated as heck by the number of independent data sources I must access to get a full picture of my medications, care and health status. These include:

* The medication tracker on my retail pharmacy’s site
* My primary care group’s portal
* My hospital’s Epic MyChart portal
* A medication management app to track my compliance with my regimen
* A health tracker app in which I track my blood pressure
* My Google calendar, to keep up with my health appointments
* Email clients to exchange messages with some providers

That’s not all – I’m sure I could think of other tools, interfaces and apps – but it offers a good idea of what I face. And I’m pretty sure I’m not unusual in this regard, so we’re talking about a big issue here.

By the way, bear in mind I’m not just talking about hyperportalotus – a fun term for the state of having too many portals to manage – but rather, a larger problem of data coordination. Even if all of my providers came together and worked through a shared single portal, I’d still have to juggle many tools for tracking and documenting my care.

The bottom line is that given the obstacles I face, my self-care process is very inefficient. And while we spend a lot of time talking about clinician workflow (which, of course, is quite important) we seldom talk about patient/consumer health workflow. But it’s time that we did.

Building a patient workflow

A good initial step in addressing this problem might be to create a patient self-care workflow builder and make it accessible as a website. Using such a tool, I could list all of the steps I need to take to manage my conditions, and the tool would help me develop a process for doing so effectively.

For example, I could “tell” the software that I need to check the status of my prescriptions once a week, visit certain doctors once a month, check in about future clinical visits on specific days and enter my data in my medication management app twice a day. As I did this, I would enter links to related sites, which would display in turn as needed.
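As a rough sketch of what such a workflow builder might store, each self-care task could be captured as a small record with a frequency and a link, and the tool could list what is due on a given day. Every task, frequency, and URL below is a made-up illustration of the idea, not an existing product.

```python
# Rough sketch of the data a patient self-care workflow builder might keep.
# Every task, frequency, and link below is a made-up illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class SelfCareTask:
    name: str
    every_n_days: int   # how often the task recurs
    link: str           # where the patient goes to do it
    last_done: date

    def is_due(self, today):
        return (today - self.last_done).days >= self.every_n_days

tasks = [
    SelfCareTask("Check prescription status", 7, "https://pharmacy.example.com", date(2017, 3, 1)),
    SelfCareTask("Log blood pressure", 1, "healthtracker://bp", date(2017, 3, 9)),
    SelfCareTask("Review upcoming appointments", 3, "https://mychart.example.com", date(2017, 3, 8)),
]

def due_today(task_list, today):
    """Return the names of tasks the patient should see on today's to-do list."""
    return [t.name for t in task_list if t.is_due(today)]

print(due_today(tasks, date(2017, 3, 10)))
# ['Check prescription status', 'Log blood pressure']
```

Even this simple structure would let the tool surface the right links in the right order, instead of leaving me to juggle seven separate apps and portals.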

This tool could also display critical web data, such as the site compiling the blood sugar readings from my husband’s connected blood glucose monitor, giving patients like me the ability to review trends at a glance.

I haven’t invented the wheel here, of course. We’re just talking about an alternate approach to a patient portal. Still, even this relatively crude approach – displaying various web-based sources under one “roof” along with an integrated process – could be quite helpful.

Eventually, health IT wizards could build much more sophisticated tools, complete with APIs to major data sources, which would integrate pretty much everything patients need first-hand. This next-gen data wrangler would be able to create charts and graphs and even issue recommendations if the engine behind it was sophisticated enough.

Just get started

All that being said, I may be overstating how easy it would be to make such a solution work. In particular, I’m aware that integrating a tool with such disparate data sources is far, far easier said than done. But why not get started?

After all, it’s hard to overestimate how much such an approach would help patients, at least those who are comfortable working with digital health solutions. Having a coordinated, integrated tool in place to help me manage my care needs would certainly save me a great deal of time, and probably improve my health as well.

I urge providers to consider this approach, which seems like a crying need to me. The truth is, most of the development money is going towards enabling the professionals to coordinate and manage care. And while that’s not a bad thing, don’t forget us!

Can Interoperability Drive Value-Based Care?

Posted on December 14, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

As the drive to interoperability has evolved over the last few decades — and those of you who are HIT veterans know that these concerns go at least that far back — open data sharing has gone from being a “nice to have” to a presumed necessity for providing appropriate care.

And along the way, backers of interoperability efforts have expanded their goals. While the need to support coordinated care has always been a basis for the discussion, today the assumption is that value-based care simply isn’t possible without data interoperability between providers.

I don’t disagree with the premise. However, I believe that many providers, health systems and ACOs have significant work to do before they can truly benefit from interoperability. In fact, we may be putting the cart before the horse in this case.

A fragmented system

At present, our health system is straining to meet the demand for care coordination among the populations it serves. That may be in part because the level of chronic illness in the US is particularly high. According to one Health Affairs study, two out of three Americans will have a chronic condition by the year 2030. Add that to the need to care for patients with episodic care needs and the problem becomes staggering.

While some health organizations, particularly integrated systems like the Cleveland Clinic and staff-model managed care plans like Kaiser Permanente, plan for and execute well on care coordination, most others have too many siloes in place to do the job correctly. Though many health systems have installed enterprise EMRs like Epic and Cerner, and share data effectively while the patient remains within their system, they may do very little to integrate information from community providers, pharmacies, laboratories or diagnostic imaging centers.

I have no doubt that when needed, individual providers collect records from these community organizations. But collecting records on the fly is no substitute for following patients in a comprehensive way.

New models required

Given this history, I’d argue that many health systems simply aren’t ready to take full advantage of freely shared health data today, much less under value-based care payment models of the future.

Before they can use interoperable data effectively, provider organizations will need to integrate outside data into their workflow. They’ll need to put procedures in place for how care coordination works in their environment. This will include not only deciding who integrates outside data and how, but also how their organizations will respond as a whole.

For example, hospitals and clinics will need to figure out who handles care coordination tasks, how many resources to pour into this effort, how this care coordination effort fits into the larger population health strategy and how to measure whether they are succeeding or failing in their care coordination efforts. None of these are trivial tasks, and the questions they raise won’t be answered overnight.

In other words, even if we achieved full interoperability across our health system tomorrow, providers wouldn’t necessarily be able to leverage it right away, and unfettered health data sharing won’t necessarily help them win at value-based care. In fact, I’d argue that it’s dangerous to act as though interoperability can magically make this happen. Even if full interoperability is necessary, it’s not sufficient. (And of course, even getting there seems like a quixotic goal to some, including myself.)

Planning ahead

That being said, health organizations probably do have time to get their act together on this front. The move to value-based care is happening quickly, but not at light speed, so they do have time to make plans to leverage interoperable health data.

But unless they acknowledge the weaknesses of their current system, which in many cases is myopic, siloed and rigid, interoperability may do little to advance their long-term goals. They’ll have to admit that their current systems are far too inward-looking, and that the problem will only go away if they take responsibility for fixing it.

Otherwise, even full interoperability may do little to advance value-based care. After all, all the data in the world won’t change anything on its own.

Early Attestation Results: Some Observations – Meaningful Use Monday

Posted on August 8, 2011 | Written By

Lynn Scheps is Vice President, Government Affairs at EHR vendor SRSsoft. In this role, Lynn has been a Voice of Physicians and SRSsoft users in Washington during the formulation of the meaningful use criteria. Lynn is currently working to assist SRSsoft users interested in showing meaningful use and receiving the EHR incentive money. Check out Lynn’s previous Meaningful Use Monday posts.

At last week’s HIT Policy Committee meeting, Robert Tagalicod (the new director of the Office of E-Health Standards & Services) presented an analysis of the attestation experience to date [See John’s previous Meaningful Use Details post for the slides and report]. The results lend themselves to some interesting observations—admittedly preliminary findings, but revealing nonetheless:

  • The average performance levels were quite high—on those measures that have thresholds to be met, providers attested to results considerably above the level required for successful accomplishment. This is a positive sign that once providers commit to an EHR and to meaningful use, they try to use the EHR on a routine basis, not just to satisfy the minimum requirements. True, these initial attesters represent early EHR adopters who have had time to become successful EHR users, but hopefully this trend will be sustained.
  • Care coordination measures seem to present a challenge for many providers—the most commonly deferred (i.e., not selected) menu measures were medication reconciliation and summary of care at transitions.
  • Very few providers were actually able to conduct a test of their ability to electronically submit syndromic surveillance information to public health agencies or submit immunization data to registries (5% and 28% of attesters, respectively). Not surprisingly, most EPs either excluded or deferred these public health measures.

Of the 2,383 EPs that attested, 137 were unsuccessful. I’d be interested to know where they stumbled and if they will succeed in another reporting period.