HL7 Backs Effort To Boost Patient Data Exchange

Posted on December 8, 2014 | Written By

Katherine Rourke is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Standards group Health Level Seven has kicked off a new project intended to increase the adoption of tech standards designed to improve electronic patient data exchange. The initiative, the Argonaut Project, includes just five EMR vendors and four provider organizations, but it seems to have some interesting and substantial goals.

Participating vendors include Athenahealth, Cerner, Epic, McKesson and MEDITECH, while providers include Beth Israel Deaconess Medical Center, Intermountain Healthcare, Mayo Clinic and Partners HealthCare. In an interesting twist, the group also includes SMART, Boston Children’s Hospital Informatics Program’s federally-funded mobile app development project. (How often does mobile get a seat at the table when interoperability is being discussed?) Consulting firm The Advisory Board Company is also involved.

Unlike the activity around the much-bruited CommonWell Alliance, which still feels like vaporware to industry watchers like myself, this project seems to have a solid technical footing. On the recommendation of a group of science advisors known as JASON, the group is working to create a public API to advance EMR interoperability.

The springboard for its efforts is HL7’s Fast Healthcare Interoperability Resources (FHIR). FHIR takes a RESTful API approach which, the standards group notes, makes it easier to share data not only across traditional networks and modular EMR components, but also with mobile devices, web-based applications and cloud communications.
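In practice, “RESTful” means a FHIR server exposes resources like Patient at predictable URLs that any authorized client can query. Here is a minimal client-side sketch of what that looks like; the server base URL is a hypothetical placeholder, and the Bundle below is a heavily trimmed stand-in for what a real FHIR server would return:

```python
import json

FHIR_BASE = "https://fhir.example.org/r4"  # hypothetical FHIR server base URL

def patient_search_url(base, family_name, birthdate):
    """Build a RESTful FHIR Patient search URL (GET /Patient?family=...&birthdate=...)."""
    return f"{base}/Patient?family={family_name}&birthdate={birthdate}"

def extract_patient_names(bundle):
    """Pull display names out of a FHIR searchset Bundle."""
    names = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        for name in resource.get("name", []):
            names.append(" ".join(name.get("given", []) + [name.get("family", "")]).strip())
    return names

# Trimmed-down example of the JSON a FHIR server returns for a Patient search.
sample_bundle = json.loads("""
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "Patient",
                  "name": [{"given": ["Jane"], "family": "Doe"}]}}
  ]
}
""")

print(patient_search_url(FHIR_BASE, "Doe", "1950-01-01"))
print(extract_patient_names(sample_bundle))
```

The same HTTP-plus-JSON pattern works from a mobile app, a web page, or a cloud service, which is exactly why the group sees FHIR as a better fit than traditional point-to-point interfaces.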

According to David McCallie, Cerner’s president of medical informatics, who served on the JASON task force, the group has an intriguing goal. Members intend to develop a health IT operating system analogous to Apple’s iOS or Google’s Android. Once that is created, providers could use both built-in apps resident in the OS and others created by independent developers. While the devices a “health IT OS” would have to embrace would be far more diverse than those running Android or iOS, the concept is still a fascinating one.

It’s also neat to hear that the collective has committed itself to a fairly aggressive timeline, promising to accelerate current FHIR development to provide hands-on FHIR profiles and implementation guides to the healthcare world by spring of next year.

Lest I seem too critical of CommonWell, which has been soldiering along for quite some time now, it’s only fair to note that its goals are, if anything, even more ambitious than the Argonauts’. CommonWell hopes to accomplish nothing less than managing a single identity for every person/patient, locating the person’s records in the network and managing consent. And CommonWell member Cerner recently announced that it would provide CommonWell services to its clients for free until Jan. 1, 2018.

But as things stand, I’d wager that the Argonauts (I love that name!) will get more done, more quickly. I’m truly eager to see what emerges from their efforts.

Are You A Sitting Duck for HIPAA Data Breaches? – Infographic

Posted on November 18, 2014 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

The people at DataMotion, a cloud-based HISP provider, sent me the following infographic covering HIPAA data breaches. It’s a good reminder of the potential for data breaches in healthcare. As Marc Probst recently suggested, we should be focusing as much attention on things like security as we are on meaningful use, since the penalties for a HIPAA violation are larger than the meaningful use penalties.

Are You A Sitting Duck for HIPAA Data Breaches Infographic

Healthcare Interoperability Series Outline

Posted on November 7, 2014 | Written By


Interoperability is one of the major priorities of ONC. Plus, I hear many doctors complaining that their EHR doesn’t live up to its potential because the EHR is not interoperable. I personally believe that healthcare would benefit immeasurably from interoperable healthcare records. The problem is that healthcare interoperability is a really hard nut to crack.

With that in mind, I’ve decided to do a series of blog posts highlighting some of the many challenges and issues with healthcare interoperability. Hopefully this will provide a deeper dive into what’s really happening with healthcare interoperability, what’s holding us back from interoperability and some ideas for how we can finally achieve interoperable healthcare records.

As I started thinking through the subject of healthcare interoperability, here are some of the topics, challenges, issues, and discussions that are worth including in the series:

  • Interoperability Benefits
  • Interoperability Risks
  • Unique Identifier (Patient Identification)
  • Data Standards
  • Government vs Vendor vs Healthcare Organization Efforts and Motivations
  • When Should You Share The Data and When Not?
  • Major Complexities (Minors, Mental Health, etc)
  • Business Model

I think this is a good start, but I’m pretty sure this list is not comprehensive. I’d love to hear from readers about other issues, topics, questions, discussion points, barriers, etc to healthcare interoperability that I should include in this discussion. If you have some insights into any of these topics, I’d love to hear it as well. Hopefully we can contribute to a real understanding of healthcare interoperability.

Karen DeSalvo and Jacob Reider Leave ONC

Posted on October 24, 2014 | Written By


UPDATE: It seems that DeSalvo will still be National Coordinator of Healthcare IT along with her new position.

It’s been a tumultuous few months for ONC, and it’s just gotten even more tumultuous. We previously reported on the departures of Doug Fridsma, MD, ONC’s Chief Science Officer; Joy Pritts, the first Chief Privacy Officer at ONC; Lygeia Ricciardi, Director of the Office of Consumer eHealth; and Judy Murphy, Chief Nursing Officer (CNO). Yesterday, the news dropped that Karen DeSalvo, ONC’s National Coordinator, and Jacob Reider, ONC’s Deputy National Coordinator, are both leaving ONC as well.

Karen DeSalvo has been tapped by HHS Secretary Sylvia Mathews Burwell to replace Wanda K. Jones as assistant secretary for health, a role that oversees the surgeon general’s office; she will be working on Ebola and other pressing health issues. I think DeSalvo’s letter to staff describes it well:

As you know, I have deep roots and a belief in public health and its critical value in assuring the health of everyone, not only in crisis, but every day, and I am honored to be asked to step in to serve.

DeSalvo’s always been a major public health advocate, and that’s where her passion lies. Her passion isn’t healthcare technology. So this change isn’t surprising, although it is a little surprising that it comes only 10 months into her time at ONC.

The obvious choice as Acting National Coordinator would have been Jacob Reider who was previously Acting National Coordinator when Farzad Mostashari left. However, Reider also announced his decision to leave ONC:

In light of the events that led to Karen’s announcement today–it’s appropriate now to be clear about my plans, as well. With Jon White and Andy Gettinger on board, and a search for a new Deputy National Coordinator well underway, I am pleased that much of this has now fallen into place–with only a few loose ends yet to be completed. I’ll remain at ONC until late November, working closely with Lisa as she assumes her role as Acting National Coordinator.

As Reider mentions, Lisa Lewis, who is currently ONC’s COO, will serve as Acting National Coordinator at ONC.

What’s All This Mean?
There’s a lot of speculation as to why all of these departures are happening at ONC. Many people believe that ONC is a sinking ship and people are doing everything they can to get off the ship before it sinks completely. Others have suggested that these people see an opportunity to make a lot more money working for a company. The government certainly doesn’t pay market wages for the skills these people have. Plus, their connections and experience at ONC give them some unique qualifications that many companies are willing to pay to get. Some have suggested that the meaningful use work is mostly done and so these people want to move on to something new.

My guess is that it’s a mix of all of these things. It’s always hard to make broad generalizations about topics like this. For example, I already alluded to the fact that I think Karen DeSalvo saw an opportunity to move to a position that was more in line with her passions. Hard to fault someone for making that move. We’d all do the same.

What is really unclear is the future of ONC. They still have a few years of meaningful use which they’ll have to administer including the EHR penalties which could carry meaningful use forward for even longer than just a few years. I expect ONC will still have money to work on things like interoperability. We’ll see if ONC can put together the patient safety initiative they started or if that will get shut down because it’s outside their jurisdiction.

Beyond those things, what’s the future of ONC?

The Medication List Said, “Raised toilet seat daily”

Posted on September 25, 2014 | Written By

The following is a guest blog post by Lisa Pike, CEO of Versio.
With over a third of healthcare organizations switching to a new EHR in 2014, there is a lot of data movement going on. With the vast amount of effort it took to create that data, it’s a valuable asset to the organization. It can mean life or death; it can keep a hospital out of the courtroom; and it can mean the difference between a smooth-running organization and an operational nightmare.

But when that important data needs to be converted and moved to a new EHR, you realize just how complex it really is.

During a recent conversion of legacy data over to a new EHR, we came across this entry in the Medication List: “Raised toilet seat, daily.”

Uh, come again??

How about this one?  “Dignity Plus XXL [adult diapers]; take one by mouth daily.”  What does the patient have, potty mouth?

Now, while we may snicker at the visual, it’s really no joke. These are actual entries encountered in source systems during clinical data migration projects. Some entries are comical; some are just odd; and some are downright frightening. But all of them are a conversion nightmare when you are migrating data.

Patient clinical data is unlike any other kind of data, for many reasons. It’s massive. It requires near-perfect accuracy. It’s also extremely complex, especially when you are not just migrating, but also converting from one system “language” to another.

Automated conversion is a common choice for healthcare organizations when moving data from legacy systems to newly adopted EHRs. It can be a great choice for some of the data, but not all. If your source says “hypertension, uncontrolled,” but your target system only has “uncontrolled hypertension,” that’s a simple enough inconsistency to overcome, but how would you predict every non-standard or incorrect entry you will encounter?
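The simple word-order case above can be handled with normalization before matching. Here is a minimal sketch of that technique; the two-term target vocabulary is a hypothetical stand-in for a real code set:

```python
def normalize(term):
    """Canonicalize a free-text entry: lowercase, drop commas, and sort the
    words so 'hypertension, uncontrolled' and 'uncontrolled hypertension'
    map to the same key."""
    words = term.lower().replace(",", " ").split()
    return " ".join(sorted(words))

# Hypothetical target-system vocabulary, keyed by normalized form.
target_terms = {normalize(t): t for t in
                ["uncontrolled hypertension", "type 2 diabetes mellitus"]}

def map_entry(source_entry):
    """Return the matching target term, or None when the entry can't be
    resolved automatically (a conversion 'exception')."""
    return target_terms.get(normalize(source_entry))

print(map_entry("hypertension, uncontrolled"))  # matches despite word order
print(map_entry("346.71D Chm gr wo ara w nt wo st"))  # no automated match
```

Normalization rescues the predictable inconsistencies; it does nothing for entries that are simply wrong or garbled, which is exactly the author’s point.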

Here are some more actual examples. If you’re considering automated conversion, consider how your software would tangle up over these:

What the source system said, and why it was a problem:

  • “346.71D Chm gr wo ara w nt wo st” – ???
  • “levothyroxine 100 mg” – should be mcg. Yikes!
  • “Proventil” – target system has 20 choices
  • “NKDA (vomiting)” – NKDA = no known drug allergies. Having no allergies causes vomiting?
  • “Massage Therapy, take one by mouth twice weekly” – ???
  • “Tylenol suppositories; take 1 by mouth daily” – maybe not life-threatening, but certainly unpleasant
  • “PMD (Pelizaeus-Merzbacher disease)” – should have been PMDD (premenstrual dysphoric disorder)
  • “Allergy: Reglan 5 mg” – is the patient allergic only to that dosage, or should this have been in the med list? Confusing allergies and meds can be deadly.
  • “Height 60” – centimeters or inches? Convert carefully!

These just scratch the surface of the myriad complexities, entry errors, and inconsistencies that exist in medical records across the industry. No matter how diligent your staff is, I guarantee your charts contain entries like these!

When an automated conversion program encounters data it can’t convert, it falls out as an “exception.” If the exception can’t be resolved, the data is simply left behind. Even with admirable effort, almost no one in the industry can capture more than 80% of the data. Some report as low as 50%.
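The fallout process described above can be sketched in a few lines. The “known” vocabulary and the sample entries here are toy stand-ins; a real conversion matches against full code sets:

```python
# Toy target vocabulary: the new system only recognizes these entries.
KNOWN = {"uncontrolled hypertension", "asthma"}

def convert(entries):
    """Split source entries into converted data and 'exceptions' that fall out
    of an automated conversion, and report the capture rate."""
    converted = [e for e in entries if e in KNOWN]
    exceptions = [e for e in entries if e not in KNOWN]
    capture_rate = len(converted) / len(entries)
    return converted, exceptions, capture_rate

source = ["uncontrolled hypertension", "asthma", "Raised toilet seat, daily",
          "Massage Therapy, take one by mouth twice weekly",
          "346.71D Chm gr wo ara w nt wo st"]
converted, exceptions, rate = convert(source)
print(f"captured {rate:.0%}; {len(exceptions)} entries left behind for human review")
```

The exceptions list is where human review has to take over; anything that stays in it when the project ends is data left behind.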

How safe would you feel if your doctor didn’t know about 20% of your allergies? What if one of those left behind was the one that could kill you? What if a medication left behind was one you absolutely shouldn’t take with a new medication your doctor prescribed? Consider the woman whose aneurysm history was omitted during a conversion to a new EHR, so her specialist was unaware of it. She later died during a procedure when her aneurysm burst. I would say her family considered that data left behind pretty important, as did the treating physician, who could be found liable.

Liable, you say?

That’s right. The specialist could be found liable for the information in the legacy record because it was available, even if it was archived in an old EHR or paper chart.

You can begin to see the enormity of the problem and the potentially dangerous ramifications. Certainly every patient deserves an accurate record, and healthcare providers’ effectiveness, if not their very livelihood, depends on it. But maintaining the integrity of the data, especially during an EHR conversion, is no trivial task. Unfortunately, too many healthcare organizations underestimate it, and clearly it deserves more attention.

There is good news, however. With a well-planned conversion, using a system that combines robust technology with human expertise, it is possible to achieve 100% data capture with 99.8% accuracy. We’ve done it with well over a million patient charts. It isn’t easy, but the results are worth it. Patients and doctors deserve no less.

Lisa Pike is the CEO of Versio, a healthcare technology company specializing in legacy data migration, with a proven track record of 100% data capture and 99.8% quality. We call it “No Data Left Behind.” For more information on Versio’s services or to schedule an introductory conversation, please visit us at www.MyVersio.com or email sales@myversio.com.

Comprehensive Patient View, Social Media Time, and Linking Millions of EMR

Posted on August 10, 2014 | Written By



You don’t really need to click on the link above. The answer is no. The answer is that it probably won’t ever happen. There are just too many source systems where our health data is stored and it’s getting more complicated, not less.


If the social media maven Mandi has a challenge getting her social media on, now you can understand why many others “don’t have the time.” It takes a commitment and many don’t want to make that commitment. It doesn’t make them bad people. We all only have so many hours in a day.


No need to read this link either, although I found it telling that they described the challenge as linking millions of EMRs. Let’s be generous and say there are 700 EHR vendors. Unfortunately, vendor count doesn’t describe what it takes to make EMRs interoperable. To use a cliché phrase: if you’ve connected with one Epic installation, you’ve connected with one Epic installation. I know it’s getting better, but it’s not there yet. If you want interoperable EMR data, you need to connect a lot of different installs.

Eyes Wide Shut – Patient Engagement Pitfalls Prior to Meaningful Use Reporting Period

Posted on June 30, 2014 | Written By

Mandi Bishop is a healthcare IT consultant and a hardcore data geek with a Master's in English and a passion for big data analytics, who fell in love with her PCjr at 9 when she learned to program in BASIC. Individual accountability zealot, patient engagement advocate, innovation lover and ceaseless dreamer. Relentless in pursuit of answers to the question: "How do we GET there from here?" More byte-sized commentary on Twitter: @MandiBPro.

July 1, 2014 – the start of the Meaningful Use Stage 1 Year 2 reporting period for the hospital facilities within this provider integrated delivery network (IDN). The day the 50% online access measure gets real. The day the inpatient summary CCDA MUST be made available online within 36 hours of discharge. The day we must overcome a steady 65% patient portal decline rate.

A quick recap for those who haven’t followed this series (and refresher for those who have): this IDN has multiple hospital facilities, primary care, and specialty practices, on disparate EMRs, all connecting to an HIE and one enterprise patient portal. There are 8 primary EMRs and more than 20 distinct patient identification (MRN) pools. And many entities within this IDN are attempting to attest to Meaningful Use Stage 2 this year.

For the purposes of this post, I’m ignoring CMS and the ONC’s new proposed rule that would, if adopted, allow entities to attest to Meaningful Use Stage 1 OR 2 measures, using 2011 OR 2014 CEHRT (or some combination thereof). Even if the proposed rule were sensible, it came too late for the hospitals which must start their reporting period in the third calendar quarter of 2014 in order to complete before the start of the fiscal year on October 1. For this IDN, the proposed rule isn’t changing anything.

Believe me, I would have welcomed change.

The purpose of the so-called “patient engagement” core measures is just that: engage patients in their healthcare, and liberate the data so that patients are empowered to have meaningful conversations with their providers, and to make informed health decisions. The intent is a good one. The result of releasing the EMR’s compilation of chart data to recently-discharged patients may not be.

I answered the phone on a Saturday, while standing in the middle of a shopping mall with my 12 year-old daughter, to discover a distraught man and one of my help desk representatives on the line. The man’s wife had been recently released from the hospital; they had been provided patient portal access to receive and review her records, and they were bewildered by the information given. The medications listed on the document were not the same as those his wife regularly takes, the lab section did not have any context provided for why the tests were ordered or what the results mean, there were a number of lab results missing that he knew had been performed, and the problems list did not seem to have any correlation to the diagnoses provided for the encounter.

Just the kind of call an IT geek wants to receive.

How do you explain to an 84 year-old man that his wife’s inpatient summary record contains only a snapshot of the information that was captured during that specific hospital encounter, by resources at each point in the patient experience, with widely-varied roles and educational backgrounds, with varied attention to detail, and only a vague awareness of how that information would then be pulled together and presented by technology that was built to meet the bare minimum standards for perfect-world test scenarios required by government mandates?

How do you tell him that the lab results are only what was available at time of discharge, not the pathology reports that had to be sent out for analysis and would not come back in time to meet the 36-hour deadline?

How do you tell him that the reasons there are so many discrepancies between what he sees on the document and what is available on the full chart are data entry errors, new workflow processes that have not yet been widely adopted by each member of the care team, and technical differences between EMRs in the interpretation of the IHE’s XML standards for how these CCDA documents were to be created?
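To get a feel for why those vendor-to-vendor XML differences bite, here is a minimal, defensive sketch of pulling medication names out of a CCDA section. The fragment is heavily trimmed and hypothetical; real documents nest and populate this structure differently from EMR to EMR, which is why the parser has to tolerate missing pieces:

```python
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}  # the CDA/CCDA XML namespace

# Heavily trimmed, hypothetical fragment of a CCDA medications section.
ccda_fragment = """
<section xmlns="urn:hl7-org:v3">
  <entry>
    <substanceAdministration>
      <consumable><manufacturedProduct><manufacturedMaterial>
        <code displayName="levothyroxine 100 mcg tablet"/>
      </manufacturedMaterial></manufacturedProduct></consumable>
    </substanceAdministration>
  </entry>
  <entry><substanceAdministration/></entry><!-- missing consumable: tolerate it -->
</section>
"""

def medication_names(section_xml):
    """Extract each medication's displayName, silently skipping entries that
    lack the expected nesting rather than crashing on them."""
    root = ET.fromstring(section_xml)
    names = []
    for code in root.findall(".//hl7:manufacturedMaterial/hl7:code", NS):
        name = code.get("displayName")
        if name:
            names.append(name)
    return names

print(medication_names(ccda_fragment))
```

Every EMR that serializes this structure slightly differently forces another branch like the one that skips the empty entry, and every skipped branch is data the patient never sees.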

EMR vendors have responded to that last question with, “If you use our tethered portal, you won’t have that problem. Our portal can present the data from our CCDA just fine.” But this doesn’t take into account the patient experience. As a consumer, I ask you: would you use online banking if you had to sign on to a different website, with a different username and password, for each account within the same bank? Why should it be acceptable for managing health information online to be less convenient than managing financial information?

How do hospital clinical and IT staff navigate this increasingly frequent scenario: explaining the data that patients now see?

I’m working hard to establish a clear delineation between answering technical and clinical questions, because I am not – by any stretch of the imagination – a clinician. I can explain deviations in the records presentation, I can explain the data that is and is not available – and why (which is NOT generally well-received), and I can explain the logical processes for patients to get their clinical questions answered.

Solving the other half of this equation – clinicians who understand the technical nuances which have become patient-facing, and who incorporate that knowledge into regular patient engagement to ensure patients understand the limitations of their newly-liberated data – proves more challenging. In order to engage patients in the way the CMS Meaningful Use program mandates, have we effectively created a new hybrid role requirement for our healthcare providers?

And what fresh new hell have we created for some patients who seek wisdom from all this information they’ve been given?

Caveat – if you’re reading this, it’s likely you’re not the kind of patient who needs much explaining. You’re likely to do your own research on the data that’s presented on your CCDA outputs, and you have the context of the entire Meaningful Use initiative to understand why information is presented the way it is. But think – can your grandma read it and understand it on HER own?

Not So Open: Redefining Goals for Sharing Health Data in Research

Posted on June 24, 2014 | Written By

The following is a guest blog post by Andy Oram, writer and editor at O’Reilly Media.

One couldn’t come away with more enthusiasm for open data than at this month’s Health Datapalooza, the largest conference focused on using data in health care. The whole 2000-strong conference unfolds from the simple concept that releasing data publicly can lead to wonderful things, like discovering new cancer drugs or intervening with patients before they have to go to the emergency room.

But look more closely at the health care field, and open data is far from the norm. The demonstrated benefits of open data sets in other fields–they permit innovation from any corner and are easy to combine or “mash up” to uncover new relationships–may turn into risks in health care. There may be better ways to share data.

Let’s momentarily leave the heady atmosphere of the Datapalooza and take a subway a few stops downtown to the Health Privacy Summit, where fine points of patient consent, deidentification, and the data map of health information exchange were discussed the following day. Participants here agree that highly sensitive information is traveling far and wide for marketing purposes, and perhaps even for more nefarious uses to uncover patient secrets and discriminate against them.

In addition to outright breaches–which seem to be reported at least once a week now, and can involve thousands of patients in one fell swoop–data is shared in many ways that arguably should be up to patients to decide. It flows from hospitals, doctors, and pharmacies to health information exchanges, researchers in both academia and business, marketers, and others.

Debate has raged for years between those who trust deidentification and those who claim that reidentification is too easy. This is not an arcane technicality–the whole industry of analytics represented at the Datapalooza rests on the result. Those who defend deidentification tend to be researchers in health care and the institutions who use their results. In contrast, many computer scientists outside the health care field cite instances where people have been reidentified, usually by combining data from various public sources.

Latanya Sweeney of Harvard and MIT, who won a privacy award this year at the summit, can be credited both with a historic reidentification of the records of Massachusetts Governor William Weld in 1997 and a more recent exposé of state practices. The first research led to the current HIPAA regime for deidentification, while the second showed that states had not learned the lessons of anonymization. No successful reidentifications have been reported against data sets that use recommended deidentification techniques.

I am somewhat perplexed by the disagreement, but have concluded that it cannot be resolved on technical grounds. Those who look at the current state of reidentification are satisfied that health data can be secured. Those who look toward an unspecified future with improved algorithms find reasons to worry. In a summit lunchtime keynote, Adam Tanner reported his own efforts as a non-expert to identify people online–a fascinating and sometimes amusing tale he has written up in a new book, What Stays in Vegas. So deidentification is like encryption–we all use encryption even though we expect that future computers will be able to break current techniques.

But another approach has flown up from the ashes of the “privacy is dead” nay-sayers: regulating the use of data instead of its collection and dissemination. This has been around for years, most recently in a federal PCAST report on big data privacy. One of the authors of that report, Craig Mundie of Microsoft, also published an article with that argument in the March/April issue of Foreign Affairs.

A simple application of this doctrine in health care is the Genetic Information Nondiscrimination Act of 2008. A more nuanced interpretation of the doctrine could let each individual determine who gets to use his or her data, and for what purpose.

Several proposals have been aired to make it easier for patients to grant blanket permission for certain data uses, one being the “patient privacy bundles” in a recent report commissioned by AHRQ. Many people look forward to economies of data, where patients can make money by selling their data (how much is my blood pressure reading worth to you?).

Medyear treats personal health data like Twitter feeds, letting you control the dissemination of individual data fields through hash tags. You could choose to share certain data with your family, some with your professional care team, and some with members of your patient advocacy network. This offers an alternative to using services such as PatientsLikeMe, which use participants’ data behind the scenes.
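Conceptually, field-level sharing of that sort might look like this toy sketch (my own illustration of the idea, not Medyear’s actual implementation; the field and audience names are made up):

```python
# Each field carries its own tags naming the audiences allowed to see it.
record = {
    "blood_pressure":     {"value": "120/80",       "share_with": {"family", "care_team"}},
    "mental_health_note": {"value": "(private note)", "share_with": {"care_team"}},
    "step_count":         {"value": 8200,
                           "share_with": {"family", "care_team", "advocacy_network"}},
}

def view_for(audience):
    """Return only the fields the patient has tagged for this audience."""
    return {field: data["value"] for field, data in record.items()
            if audience in data["share_with"]}

print(view_for("family"))  # blood pressure and steps, but not the private note
```

The point of the model is that consent attaches to each field rather than to the record as a whole, so one disclosure decision never drags the rest of the chart along with it.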

Open data can be simulated by semi-open data sets that researchers can use under license, as with the Genetic Association Information Network that controls the Database of Genotypes and Phenotypes (dbGaP). Many CMS data sets are actually not totally open, but require a license to use.

And many data owners create relationships with third-party developers that allow them access to data. Thus, the More Disruption Please program run by athenahealth allows third-party developers to write apps accessing patient data through an API, once the developers sign a nondisclosure agreement and a Code of Conduct promising to use the data for legitimate purposes and respect privacy. These apps can then be offered to athenahealth’s clinician clients to extend the system’s capabilities.

Some speakers went even farther at the Datapalooza, asking whether raw data needs to be shared at all. Adriana Lukas of London Quantified Self and Stephen Friend of Sage Bionetworks suggested that patients hold on to all their data and share just “meanings” or “methods” they’ve found useful. The future of health analytics, it seems to me, will use relatively few open data sets, and lots of data obtained through patient consent or under license.

Understanding Apple Health

Posted on June 17, 2014 | Written By

Kyle is Founder and CEO of Pristine, a company in Austin, TX that develops telehealth communication tools optimized for Google Glass in healthcare environments. Prior to founding Pristine, Kyle spent years developing, selling, and implementing electronic medical records (EMRs) into hospitals. He also writes for EMR and HIPAA, TechZulu, and Svbtle about the intersections of healthcare, technology, and business. All of his writing is reproduced at kylesamani.com

Apple recently announced Health and HealthKit as part of iOS 8, and initial responses have been mixed.

At one extreme, the (highly biased) CEO of Mayo Clinic called Apple Health “revolutionary.” At the other, cynical health IT pundits claim that Apple Health is a consumer novelty and won’t crack the enigmatic healthcare system. As a cynical health IT pundit myself, I’m more inclined towards the latter, but have some optimism about Apple’s first steps into healthcare.

For the uninitiated, Apple Health is a central dashboard for health-related information, packaged for consumers as an iOS app. Consumers open the app and see a broad array of clinical indicators (e.g., physical activity, blood pressure, blood glucose, sleep data). You can learn more about Health and HealthKit from Apple.

The rest of this post assumes significant understanding of modern health IT challenges such as data silos, EMPIs, and HIEs, and of what Health and HealthKit can and can’t do. I’ll address what Apple Health does well, ask some questions, and then provide some commentary.

Apple Health does a few things well:

1) Apple Health acts as a central dashboard for consumers. Rather than switching between five different apps, Health provides a central view of all clinical indicators. In time, Health could help patients understand the nuances of their own data. By removing friction to seeing a variety of indicators in a single view, patients may discover correlations that they wouldn’t have observed before. With that information, consumers should be able to adjust behaviors to lead healthier lifestyles.

2) Apple Health provides a robust mechanism for health apps to share data with one another. Until now, health app developers needed to form partnerships with one another and develop custom code to share information; now they can do this in a standardized way with minimal technical or administrative overhead. This reduces app lock-in by enabling data liquidity, empowering consumers to switch to the best health app or device and carry data between apps. This is a big win for consumers.
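To make that sharing mechanism concrete, here is a rough conceptual model of a central health store with per-data-type read and write permissions, in the spirit of what HealthKit does for apps. This is a hedged sketch in Python, not Apple’s actual Swift/Objective-C API; the class, app names, and data types are all invented for illustration.

```python
# Conceptual sketch (NOT Apple's API): a central store where each app must be
# granted per-data-type read/write permission before it can touch patient data,
# loosely modeling how HealthKit mediates sharing between health apps.

class HealthStore:
    def __init__(self):
        self._samples = {}     # data_type -> list of (timestamp, value, source_app)
        self._read_auth = {}   # app -> set of data types it may read
        self._write_auth = {}  # app -> set of data types it may write

    def authorize(self, app, read=(), write=()):
        # The user grants permissions per data type, not all-or-nothing.
        self._read_auth.setdefault(app, set()).update(read)
        self._write_auth.setdefault(app, set()).update(write)

    def write(self, app, data_type, timestamp, value):
        if data_type not in self._write_auth.get(app, set()):
            raise PermissionError(f"{app} may not write {data_type}")
        self._samples.setdefault(data_type, []).append((timestamp, value, app))

    def read(self, app, data_type):
        if data_type not in self._read_auth.get(app, set()):
            raise PermissionError(f"{app} may not read {data_type}")
        return list(self._samples.get(data_type, []))

# Hypothetical apps: a glucose meter's app writes, a coaching app reads.
store = HealthStore()
store.authorize("GlucoseTracker", write={"blood_glucose"})
store.authorize("DiabetesCoach", read={"blood_glucose"})
store.write("GlucoseTracker", "blood_glucose", "2014-06-17T08:00", 105)
print(store.read("DiabetesCoach", "blood_glucose"))
```

The point of the model is the decoupling: neither app needs a partnership or custom integration with the other, only a grant from the user against the shared store, which is what reduces app lock-in.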

Unanswered questions:

1) How does Apple Health actually work? Apple provided virtually no details. Does the patient need the Epic MyChart app on their phone? Is there custom code integrating iOS to Epic MyChart? Is there a Mayo Clinic app that is separate from Epic MyChart? If not, how does Apple Health know that the consumer is a Mayo patient? Or a Kaiser Permanente patient? Or a Sutter Health patient?

2) Does the patient give consent per data value, or is it all or nothing? How long does consent last? Must consent be taken at the hospital, or can the patient opt in or out any time on their phone? Who within the health system can access the consented data?

3) Given that there are hundreds of EpicCare silos and dozens of CareEverywhere silos, how does Apple Health decide which silo(s) to interface with? Does data go to an HIE or to an EMR? If to an HIE, can all eligible connected providers access the data with consent? If a patient has records in multiple HIEs and EMRs (which they likely do), how does Apple Health determine which HIE(s) to push and pull data from?

4) Does Apple Health support non-numerical data such as CCDAs? What about unstandardized data? For example, PatientIO allows providers to develop customized care plans for patients that can include almost any behavioral prescription. Examples include water intake, exercising at a certain time of the day, taper schedules, etc.

5) Can providers write back to a patient’s Health profile? Given that open.epic doesn’t allow Epic to send data out, how could Apple Health receive data from Epic?

6) How will Apple handle competing health apps installed on the same consumer’s phone? For example, if I tap “more diabetes info” in Apple Health, will it open Mayo Clinic’s app (and if so, to the right place in the Mayo Clinic app?) or the blood glucose tracking app that came with my blood glucose meter? Or my iTriage or WebMD app?

7) Is Apple Health intended to function as a patient-centric HIE? If so, what standards does it support? CCDA? FHIR? Direct?

Comments:

1) The Apple-Epic partnership is obviously built on open.epic, which Epic announced in September of 2013. It’s likely that Apple and Epic reached an agreement around that time, and asked the public for ideas on how to shape the program to get a sense of what developers wanted.

2) The only way to succeed in health IT is to force the industry to conform to one’s standards, or to support a hybrid of hybrids approach. Early indicators show Apple (predictably) trending toward the former. Unfortunately, Apple’s perennially Apple-centric approach inhibits supporting the level of interoperability necessary to power an effective consumer health strategy. Although Apple provides a great foundation for some basic functions, the long term potential based on the current offering is limited. What Apple has produced to date provides for sexy screenshots, but appears to fall short of addressing the core interoperability and connectivity issues that plague chronic disease management and coordination of care.

3) In a hypothetical world at some indeterminate point in the future, there would be a patient-facing, DNS-like lookup system for provider organizations (Direct eventually?). Patients should be able to look up provider organizations and share their data with them selectively. Apple Health provides a great first step towards that dream world by empowering patients to see and, to some extent, control their own data.

Population Health Management and Business Process Management

Posted on June 13, 2014 | Written By

Chuck Webster, MD, MSIE, MSIS has degrees in Accountancy, Industrial Engineering, Intelligent Systems, and Medicine (from the University of Chicago). He designed the first undergraduate program in medical informatics, was a software architect in a hospital MIS department, and also VP and CMIO for an EHR vendor for over a decade. Dr. Webster helped three healthcare organizations win the HIMSS Davies Award and is a judge for the annual Workflow Management Coalition Awards for Excellence in BPM and Workflow and Awards for Case Management. Chuck is a ceaseless evangelist for process-aware technologies in healthcare, including workflow management systems, Business Process Management, and dynamic and adaptive case management. Dr. Webster tweets from @wareFLO and maintains numerous websites, including EHR Workflow Management Systems (http://chuckwebster.com), Healthcare Business Process Management (http://HCBPM.com) and the People and Organizations improving Healthcare with Health Information Technology (http://EHRworkflow.com). Please join with Chuck to spread the message: Viva la workflow!

This is the fifth and final of my five guest blog posts covering Health IT and EHR Workflow.

Way back in 2009 I penned a research paper with a long and complicated title that could also have been, simply, Population Health Management and Business Process Management. In 2010 I presented it at MedInfo10 in Cape Town, South Africa. Check out my travelogue!

Since then, some of what I wrote has become reality, and much of the rest is on the way. Before I dive into the weeds, let me set the stage. The Affordable Care Act added tens of millions of new patients to an already creaky and dysfunctional healthcare and health IT system. Accountable Care Organizations were conceived as virtual enterprises to be paid to manage the clinical outcome and costs of care of specific populations of individuals. Population Health Management has become the dominant conceptual framework for proceeding.

I looked at a bunch of definitions of population health management and created the following as a synthesis: “Proactive management of clinical and financial risks of a defined patient group to improve clinical outcomes and reduce cost via targeted, coordinated engagement of providers and patients across all care settings.”

You can see obvious places in this definition to apply trendy SMAC tech (social, mobile, analytics, and cloud): social maps to patient engagement; mobile to provider and patient settings; analytics to costs and outcomes; cloud to spanning care settings. But here I want to focus on the “targeted, coordinated.” Increasingly, it is self-developed and vendor-supplied care coordination platforms that target and coordinate, filling a gap between EHRs and day-to-day provider and patient workflows.

The best technology on which to build care coordination platforms is workflow technology, AKA business process management (BPM) and adaptive/dynamic case management software. In fact, when I drill down on the most sophisticated, scalable population health management and care coordination solutions, I usually find one of two things: either the health IT organization or vendor is, in essence, reinventing the workflow tech wheel, or it embeds or builds on third-party BPM technology.

Let me direct you to the section Patient Class Event Hierarchy Intermediates Patient Event Stream and Automated Workflow in that MedInfo10 paper. First of all, you have to target the right patients for intervention. Increasingly, ideas from Complex Event Processing are used to react quickly and appropriately to patient events. A Patient Class Event Hierarchy is a decision tree mediating between low-level events (patient state changes) and higher-level clinical concepts such as “on-protocol,” “compliant,” “measured,” and “controlled.”

Examples include patients who aren’t on protocol but should be, aren’t being measured but should be, or whose clinical values are not controlled. Execution of appropriate automatic policy-based workflows (in effect, intervention plans) moves patients from off-protocol to on-protocol, non-compliance to compliance, unmeasured to measured, and from uncontrolled to controlled state categories.
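The classify-then-intervene pattern above can be sketched in a few lines. This is an illustrative toy, not the decision tree from the paper: the patient fields, the A1c threshold, and the intervention text are all assumptions made up for the example.

```python
# Hedged sketch of a tiny Patient Class Event Hierarchy: raw patient state is
# mapped to a higher-level category (off-protocol, unmeasured, uncontrolled,
# controlled), and each off-nominal category triggers an intervention workflow.
# Field names and the 7.0 A1c threshold are illustrative assumptions.

def classify(patient):
    """Map low-level patient state to a clinical category, most urgent first."""
    if not patient.get("on_protocol"):
        return "off-protocol"
    if patient.get("last_a1c") is None:
        return "unmeasured"
    if patient["last_a1c"] > 7.0:           # illustrative control threshold
        return "uncontrolled"
    return "controlled"

# Policy-based workflows that move patients toward the controlled state.
INTERVENTIONS = {
    "off-protocol": "enroll patient in protocol",
    "unmeasured":   "order measurement / schedule lab",
    "uncontrolled": "adjust therapy and follow up",
    "controlled":   None,                    # no workflow needed
}

def react(patient):
    category = classify(patient)
    return category, INTERVENTIONS[category]

print(react({"on_protocol": True, "last_a1c": 8.2}))
# -> ('uncontrolled', 'adjust therapy and follow up')
```

The ordering of the checks is the “hierarchy”: a patient is only evaluated for control once they are on protocol and measured, which mirrors how these systems move patients stepwise from off-protocol toward controlled.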

Population health management and care coordination products and services may use different categories, terminology, and so on, but they all tend to focus on sensing and reacting to untoward changes in patient state. Simply detecting these changes, however, is insufficient. These systems need to cause actions. And those actions need to be monitored, managed, and improved, all of which are classic sterling qualities of business process management software systems and suites.

I’m reminded of several tweets about Accountable Care Organization IT systems I display during presentations. One summarizes an article about ACOs. The other paraphrases an ACO expert speaking at a conference. The former says ACOs must tie together many disparate IT systems. The latter says ACOs boil down to lists: actionable lists of items delivered to the right person at the right time. If you put these requirements together with system-wide care pathways delivered safely and conveniently to the point of care, you get my three previous blog posts on interoperability, usability, and safety.

I’ll close here with my seven advantages of BPM-based care coordination technology. It…

  • More granularly distinguishes workflow steps
  • Captures more meaningful time-stamped task data
  • More actively influences point-of-care workflow
  • Helps model and understand workflow
  • Better coordinates patient care task handoffs
  • Monitors patient care task execution in real-time
  • Systematically improves workflow effectiveness & efficiency

Distinguishing among workflow steps is important to collecting data about which steps provide value to providers and patients, as well as the time-stamps necessary to estimate true costs. Further, since these steps are executed, or at least monitored, at the point-of-care, there’s more opportunity to facilitate and influence at the point-of-care. Modeling workflow contributes to understanding workflow, in my view an intrinsically valuable state of affairs. These workflow models can represent and compensate for interruptions to necessary care task handoffs. During workflow execution, “enactment” in BPM parlance, workflow state is made transparently visible. Finally, workflow data “exhaust” (particularly time-stamped evidence-based process maps) can be used to systematically find bottlenecks and plug care gaps.
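Mining that time-stamped “exhaust” for bottlenecks can be as simple as computing per-step durations from an event log. Here is a minimal sketch; the step names, visit identifiers, and times are invented for illustration, not real clinic data.

```python
# Illustrative sketch: compute the average duration of each workflow step from
# time-stamped task data, then rank steps to surface bottleneck candidates.
# All step names and timestamps below are made up.

from collections import defaultdict
from datetime import datetime

# (case id, workflow step, start time, end time)
events = [
    ("visit-1", "rooming", "2014-06-13T09:00", "2014-06-13T09:05"),
    ("visit-1", "exam",    "2014-06-13T09:05", "2014-06-13T09:25"),
    ("visit-2", "rooming", "2014-06-13T09:10", "2014-06-13T09:30"),
    ("visit-2", "exam",    "2014-06-13T09:30", "2014-06-13T09:45"),
]

durations = defaultdict(list)
for _case, step, start, end in events:
    t0 = datetime.fromisoformat(start)
    t1 = datetime.fromisoformat(end)
    durations[step].append((t1 - t0).total_seconds() / 60)  # minutes

# Rank steps by mean duration; the slowest step is the bottleneck candidate.
for step, mins in sorted(durations.items(),
                         key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{step}: {sum(mins) / len(mins):.1f} min average")
```

Real process-mining tools do far more (variant analysis, handoff delays, conformance checking), but the raw material is exactly this kind of log, which a BPM engine emits as a side effect of executing the workflow.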

In light of the fit between complex event processing detecting changes in patient state, and BPM’s automated, managed workflow at the point-of-care, I see no alternative to what I predicted in 2010. Regardless of whether it’s rebranded as care or healthcare process management, business process management is the most mature, practical, and scalable way to create the care coordination and population health management IT systems required by Accountable Care Organizations and the Affordable Care Act. A bit dramatically, I’d even say business process management’s royal road to healthcare runs through care coordination.

This was my fifth and final blog post in this series on healthcare and workflow technology, solicited by John Lynn for the week he’s on vacation. Here was the outline:

If you missed one of my previous posts, I hope you’ll still check it out. Finally, thank you, John, for allowing me to temporarily share your bully pulpit.