How Do We Achieve Continuous Healthcare Interoperability?

Today I had a really interesting chat about healthcare interoperability with Mario Hyland, Founder of AEGIS. I’m looking at a number of ways that Mario and I can work together to move healthcare interoperability forward. We’ll probably start with a video hangout with Mario and then expand from there.

Until then, I was struck by something Mario said in our conversation: “Healthcare interoperability is not a point in time. You can be interoperable today and then not be tomorrow.”

This really resonated with me, and it no doubt resonates with doctors and hospitals who maintain an interface with some other medical organization. You know how easy it is for your interface to break. It’s never intentional, but these software systems are so large and complex that someone will make a change without realizing the impact it will have across all your connections. As I wrote on Hospital EMR and EHR, API’s are Hard!

Currently we don’t even have a bunch of complex APIs with hundreds of companies connecting to the EHR. We’re lucky if an EHR has a lab interface, ePrescribing, maybe a radiology interface, and maybe a connection to a local hospital. Now imagine the issues that crop up when you’re connecting to hundreds of companies and systems. Mario was right when he told me, “Healthcare thinks we’re dealing with the complex challenges of healthcare interoperability. Healthcare doesn’t know the interoperability challenges that still wait for them and they’re so much more complex than what we’re dealing with today.”

I don’t say this to discourage anyone, but it should push us to be really thoughtful about how we handle healthcare interoperability so it can scale. The title of this post asks a tough question that isn’t being answered by our current one-time approach to certification: how do we achieve continuous healthcare interoperability that won’t break on the next upgrade cycle?

I asked Mario why the current EHR certification process hasn’t been able to solve this problem, and he said that current EHR certification is more of a one-time visual inspection of interoperability. Unfortunately, it doesn’t include a single testing platform that really tests an EHR against a specific interoperability standard, let alone ongoing testing to make sure that later changes to the EHR don’t break that interoperability.

I’ve long chewed on how we can have a “standard” for interoperability and yet EHR systems still can’t actually interoperate. People tell me that there are flavors of the standard and that each organization has its own flavor. I’ve seen this, but what I’ve really found is different interpretations of the same standard. When you dig into the details of any standard, you can see how easy it is for an organization to interpret it in multiple ways.
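To make that concrete, here is a deliberately invented example of how two systems can both claim to follow the same standard and still fail to interoperate. Nothing below comes from a real EHR or a real specification; the field names and layouts are assumptions made up purely for illustration.

```python
# Hypothetical: two EHRs both claim to implement the standard's
# "patient demographics" element, but interpret it differently.
ehr_a_patient = {
    "name": "Garcia, Maria",   # a single "Last, First" string
    "dob": "04/03/1976",       # April 3rd or March 4th? Depends on the vendor's convention.
    "phone": "555-0134",
}

ehr_b_patient = {
    "name": {"family": "Garcia", "given": "Maria"},  # a structured name
    "dob": "1976-04-03",                              # ISO 8601 date
    "phone": {"home": "555-0134", "mobile": None},
}

# A receiving system written against EHR A's interpretation breaks on EHR B's
# data, even though both vendors can point at the same written standard.
def last_name(patient):
    return patient["name"].split(",")[0]

print(last_name(ehr_a_patient))   # "Garcia"
# last_name(ehr_b_patient)        # AttributeError: 'dict' object has no attribute 'split'
```

Both payloads could pass a visual inspection against the written standard, yet a consumer coded to one interpretation quietly breaks on the other.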

In my post API’s are Hard, the linked article talks about the written promise and the behavioral promise of an API. The same distinction applies to a healthcare interoperability standard: there’s the documented standard (the written promise), and then there’s the way each EHR actually implements it (the behavioral promise).

In the API world, one company creates the API, so you have a single behavioral promise for those who use it. Even with one company, tracking that behavioral promise can be a challenge. In the EHR world, each EHR vendor has implemented interoperability according to its own interpretation of the standard, so there are 300+ behavioral promises that have to be tracked and considered, one from each company, and heaven help us if and when a company changes its behavioral promise. That’s impossible to keep up with, and it explains one reason why healthcare interoperability isn’t a reality today.

What’s the solution? One solution is to create a set of standard test scripts that any EHR vendor can run against its system on an ongoing basis. That way a vendor can test the interoperability functionality of its application throughout the development cycle. Ideally these test scripts would be offered as open source, which would allow multiple contributors to keep providing feedback and to improve the scripts as errors in them are found. It’s simply the nature of any standard, and of testing against that standard, that exceptions and errors will surface and need to be addressed.
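As a rough illustration of what one of those shared test scripts might look like, here is a minimal Python sketch. The endpoint URL, the FHIR-style Patient read, and the required fields are all assumptions made for this example, not pieces of any real certification program; real scripts would encode the actual requirements of the published standard.

```python
# Minimal sketch of a shared conformance test script (assumed endpoint and fields).
import requests

EHR_BASE_URL = "https://ehr-under-test.example.com/fhir"  # hypothetical test server


def test_patient_resource_conforms():
    """Run on every build so an upgrade that breaks the interface fails fast."""
    resp = requests.get(f"{EHR_BASE_URL}/Patient/123", timeout=10)

    assert resp.status_code == 200
    assert "json" in resp.headers.get("Content-Type", "")

    patient = resp.json()
    # The required elements would come from the published standard;
    # these three are placeholders for illustration.
    for field in ("resourceType", "name", "birthDate"):
        assert field in patient, f"missing required element: {field}"
    assert patient["resourceType"] == "Patient"


if __name__ == "__main__":
    test_patient_resource_conforms()
    print("conformance checks passed")
```

Because a script like this is cheap to run, a vendor could wire it into every build, so a change that breaks the behavioral promise gets caught before the upgrade ships rather than discovered through a customer’s broken interface.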

I mentioned that I was really interested in diving deeper into healthcare interoperability. I still have a lot deeper to go, but consider this the first toe dip into the healthcare interoperability waters. I really want to see this problem solved.

About the author

John Lynn

John Lynn is the Founder of HealthcareScene.com, a network of leading Healthcare IT resources. The flagship blog, Healthcare IT Today, contains over 13,000 articles with over half of the articles written by John. These EMR and Healthcare IT related articles have been viewed over 20 million times.

John manages Healthcare IT Central, the leading career Health IT job board. He also organizes the Healthcare and IT Marketing Conference, a first-of-its-kind conference and community focused on healthcare marketing, along with EXPO.health, a healthcare IT conference focused on practical healthcare IT innovation. John is an advisor to multiple healthcare IT companies. John is highly involved in social media, and in addition to his blogs can be found on Twitter: @techguy.

4 Comments

  • When a publisher of data adds a new data element, it’s less of a problem than when they rename a data element or delete a data element from their data stream.

    The way to look at new data elements is that what you don’t know won’t hurt you, except that you may be missing out on information you would find useful or important (e.g. a change in legislation makes a new data element mandatory but you are not set up to receive it).

    Renaming causes waves because, if you are trending, your graph suddenly stops (i.e. no more data points). It can take months to discover this, but your import functions will hopefully reject the “new” data element, and that may help you with early detection.

    It’s worse when a data element is deleted and you have an algorithm that relies on it: now you get bad results or an outright failure of the algorithm.

    Failure is the preferred problem, because it can take years to discover that results are bad.

    Some assistance can be provided by Design by Contract, an invention of Dr. Bertrand Meyer, creator of the Eiffel programming language.

    He saw the need for pre-conditions before engaging functions and post-conditions on exit from functions. Sort of like assigning a QA inspector on the way into a function and another one on the way out.

    When you apply pre- and post-conditions to clusters of data like “prescriptions,” you get a halt if one of dose/frequency/start/end, etc. is missing (see the sketch below).
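    A minimal Python sketch of that idea, applied to the prescription cluster described above. The field names follow the comment (dose, frequency, start, end, plus a drug name); everything else is an illustrative assumption rather than any real system’s schema.

    ```python
    # Pre-/post-conditions on a "prescription" cluster (illustrative field names).
    REQUIRED_FIELDS = ("drug", "dose", "frequency", "start", "end")

    def check_prescription(rx: dict) -> None:
        """Pre-condition: halt immediately if a required element is missing or empty."""
        missing = [f for f in REQUIRED_FIELDS if not rx.get(f)]
        if missing:
            raise ValueError(f"prescription rejected, missing: {missing}")

    def import_prescription(rx: dict) -> dict:
        check_prescription(rx)                        # the QA inspector on the way in
        record = {f: rx[f] for f in REQUIRED_FIELDS}
        # Post-condition check on the way out: we stored exactly what we checked.
        assert all(record[f] for f in REQUIRED_FIELDS)
        return record

    # A feed that silently dropped "dose" fails loudly at import time,
    # which is the "preferred problem" above: noisy and immediate,
    # instead of quietly wrong results that can go unnoticed for years.
    try:
        import_prescription({"drug": "amoxicillin", "frequency": "TID",
                             "start": "2015-06-01", "end": "2015-06-10"})
    except ValueError as e:
        print(e)
    ```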
