tranSMART and i2b2 Show that Open Source Software Can Fuel Precision Medicine

Posted on April 19, 2017 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Medical reformers have said for years that the clinic and the research center have to start working closely together. The reformers’ ideal–rarely approached by any current institution–is for doctors to stream data about treatments and outcomes to researchers, who in turn inject the insights their analytics uncover back into the clinic, creating a learning health system. But clinicians and researchers have trouble getting on the same page culturally, and difficulties in data exchange exacerbate the problem.

On the data exchange front, software developers have long seen open source software as the solution. Proprietary companies are stingy in their willingness to connect. They parcel out gateways to other providers as expensive favors, and the formats often fail to mesh anyway (as we’ve always seen in electronic health records) because they are kept secret. In contrast, open source formats are out for everyone to peruse, and they tend to be simpler and more intuitive. As open source, the software can be enhanced by anyone with programming skill in order to work with other open source software.

Both of these principles are on display in the recent merger announced by two open source projects, the tranSMART Foundation and i2b2. As an organizational matter, this is perhaps a minor historical note–a long-awaited rectification of some organizational problems that have kept apart two groups of programmers who should always have been working together. But as a harbinger of progress in medicine, the announcement is very significant.

Here’s a bit about what these two projects do, to catch up readers who haven’t been following their achievements.

  • i2b2 allows doctors to transform clinical data into a common format suitable for research. The project started in 2004 in response to an NIH Roadmap initiative. It was the brainchild of medical researchers trying to overcome the frustrating barriers to extracting and sharing patient data from EHRs. The nugget from which i2b2 grew was a project of the major Boston hospital consortium, Partners HealthCare. As described in another article, the project was housed at the Harvard Medical School and mostly funded by NIH.

  • The “trans” in tranSMART stands for translational research, the scientific effort that turns chemistry and biology into useful cures. It was a visionary impulse among several pharma companies that led them to create the tranSMART Foundation in 2013 from a Johnson & Johnson project, as I have documented elsewhere, and then to keep it open source and turn it into a model of successful collaboration. Their software helps researchers represent clinical and research data in ways that facilitate analytics and visualizations. In an inspired moment, the founders of the tranSMART project chose the i2b2 data format (sketched just below) as the basis for their project. So the tranSMART and i2b2 foundations have always worked on joint projects and coordinated their progress, working also with the SMART open source API.
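
For readers who haven’t seen it, the i2b2 format is essentially a star schema: one tall fact table of coded observations surrounded by dimension tables for patients and concepts. Here is a minimal sketch in Python, using the real i2b2 table names (observation_fact, concept_dimension, patient_dimension) but invented sample data and an in-memory database:

    # Toy version of the i2b2 star schema. Table and column names follow
    # i2b2's actual model; the data and in-memory database are invented.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript(r"""
    CREATE TABLE patient_dimension (patient_num INTEGER, birth_date TEXT);
    CREATE TABLE concept_dimension (concept_cd TEXT, concept_path TEXT,
                                    name_char TEXT);
    CREATE TABLE observation_fact  (patient_num INTEGER, concept_cd TEXT,
                                    start_date TEXT, nval_num REAL);
    INSERT INTO patient_dimension VALUES (1, '1970-01-01');
    INSERT INTO concept_dimension VALUES
        ('LOINC:2345-7', '\i2b2\Labs\Chem\Glucose\', 'Glucose');
    INSERT INTO observation_fact VALUES (1, 'LOINC:2345-7', '2017-01-05', 108.0);
    """)

    # A typical research query: every result under the Labs hierarchy.
    rows = con.execute(r"""
        SELECT o.patient_num, c.name_char, o.start_date, o.nval_num
        FROM observation_fact o
        JOIN concept_dimension c ON c.concept_cd = o.concept_cd
        WHERE c.concept_path LIKE '\i2b2\Labs\%'
    """).fetchall()
    for patient_num, concept, when, value in rows:
        print(patient_num, concept, when, value)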

Why, then, have tranSMART and i2b2 remained separate organizations for the past three or four years? I talked recently with Keith Elliston, CEO of the tranSMART Foundation, who pointed to cultural differences as the factor that kept them apart. A physician culture drove i2b2, whereas a pharma and biochemistry research culture drove tranSMART. In addition, as development shops, they evolved in very different ways from the start.

tranSMART, as I said, adopted a robust open source strategy early on. They recognized the importance of building a community, and the whole point of creating a foundation–just like other stalwarts of the free software community, such as the Apache Foundation, the OpenStack Foundation, and the Linux Foundation–was to provide a nurturing but neutral watering hole from which many different companies and contributors could draw what they need. The tranSMART code base now benefits from 125 individual contributors.

In contrast, i2b2 started as, and remained, a small, closely knit team. Although the software was under an open source license, the project operated on a more conservative model, though it did accept external contributions.

Elliston says the two projects have been talking for the last two and a half years about improving integration and more recently merging, and that each has learned the best of what the other has to offer in order to meet in the middle. tranSMART is adopting some of i2b2’s planning, while i2b2 is learning how to organize a community around its work.

Together they believe their projects can improve more quickly, and fund-raising and partnerships will be easier. Ultimately, they’ll contribute to the movement to target cures to patients, proceeding now under the name Precision Medicine.

I have written repeatedly about these organizations to show the power that free and open source software brings to medicine. Their timely merger shows that open source overcomes cultural and institutional barriers. What it did for these two organizations it can do for the fractured landscape of hospitals, clinics, long-term care facilities, behavioral health centers, and other medical institutions struggling to work together. My hope is that the new foundation’s model for collaboration, as well as the results of its research, can slay the growing monster of health care costs and make us all healthier.

AMIA Shares Recommendations On Health IT-Friendly Policymaking

Posted on April 17, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

The American Medical Informatics Association has released a new paper addressing health IT policy, including recommendations on how policymakers can support patient access to health data, interoperability for clinicians, and patient care-related research and innovation.

As the group accurately notes, the US healthcare system has transformed itself into a digital industry at astonishing speed, largely during the past five years. Nonetheless, many healthcare organizations haven’t unlocked the value of these new tools, in part because their technical infrastructure is largely a collection of disparate systems which don’t work together well.

The paper, published in the Journal of the American Medical Informatics Association, offers several policy recommendations intended to help health IT better support value-based health care and research. It argues that governments should implement specific policies to:

  • Enable patients to have better access to clinical data by standardizing data flow
  • Improve access to patient-generated data compiled by mHealth apps and related technologies
  • Engage patients in research by improving ways to alert clinicians and patients about research opportunities, while seeing to it that researchers manage consent effectively
  • Enable patient participation in and contribution to care delivery and health management by harmonizing standards for various classes of patient-generated data
  • Improve interoperability using APIs, which may demand that policymakers require adherence to chosen data standards
  • Develop and implement a documentation-simplification framework to fuel an overhaul of quality measurement, ensure the availability of coded EHR clinical data and support a redesign of reimbursement requirements
  • Develop and implement an app-vetting process emphasizing safety and effectiveness, to include creating a knowledgebase of trusted sources, possibly as part of clinical practice improvement under MIPS
  • Create a policy framework for research and innovation, to include policies to aid data access for research conducted by HIPAA-covered entities and increase needed data standardization
  • Foster an ecosystem connecting safe, effective and secure health applications

To meet these goals, AMIA issued a set of “Policy Action Items” which address immediate, near-term and future policy initiatives. They include:

  • Clarifying a patient’s HIPAA “right to access” to include a right to all data maintained in a covered entity’s designated record set;
  • Encouraging continued adoption of 2015 Edition Certified Health IT, so that standards-based APIs published in the public domain can continue to be deployed by providers; and
  • Making effective the Common Rule revisions finalized in the January 19, 2017 issue of the Federal Register

In looking at this material, I noted with interest AMIA’s thinking on the appropriate premises for current health IT policy. The group offered some worthwhile suggestions on how health IT leaders can leverage health data effectively, such as giving patients easy access to their mHealth data and engaging them in the research process.

Given that they overlap with suggestions I’ve seen elsewhere, we may be getting somewhere as an industry. In fact, it seems to me that we’re approaching industry consensus on some issues which, despite seeming relatively straightforward, have been the subject of professional disputes.

As I see it, AMIA stands as good a chance as any other healthcare entity at getting these policies implemented. I look forward to seeing how much progress it makes in drawing attention to these issues.

Study: “Information Blocking” By Vendors And Providers Persists

Posted on April 6, 2017 | Written By Anne Zieger

A newly-released study suggests that both EHR vendors and providers may still be interfering with the free exchange of patient healthcare data. The researchers concluded that despite hearty disapproval from Congress and from within the industry, both vendors and providers still consider “information blocking” to be in their financial interest.

To conduct the study, which appears in this month’s issue of The Milbank Quarterly, researchers fielded a national survey between October 2015 and January 2016, reaching out to leaders driving HIE efforts among provider organizations. The study focused on how often information blocking took place, what forms it took and how effective various policy strategies might be at stopping the practice.

It certainly seems that the practice continues to be a major concern for HIE leaders. Eighty-three percent of respondents said they were very familiar with information blocking, while 12 percent reported some familiarity with the practice and 5 percent minimal familiarity. The respondents offered a good cross-industry view, having worked with 18 EHR vendors and 31 hospitals or health systems on average.

Forms of blocking:

If the research is accurate, information blocking is a widespread and persistent problem.

When questioned about specific forms of information blocking by EHR vendors, 29 percent of respondents said that vendors often or routinely roll out products with limited interoperability capabilities. Meanwhile, 47 percent said that vendors routinely or often charge high fees for sharing data across HIEs, and 42 percent said that vendors routinely or often make third-party access to standardized data harder than it needs to be. (For some reason, the study didn’t mention what types of information blocking providers have engaged in.)

Frequency of blocking:

It’s hardly surprising that most of the respondents were familiar with information blocking issues, given how often the issue comes up.

In fact, a full 50 percent said that EHR vendors routinely engaged in information blocking, 33 percent said that the vendors blocked information occasionally, and only 17 percent said that EHR vendors rarely did so.

Interestingly, the HIE managers said that providers also engaged in information blocking, though less often than vendors. Twenty-five percent reported that providers routinely engage in information blocking, and 34 percent said that providers did so occasionally. Meanwhile, 41 percent said information blocking by providers was rare.

Motivations for blocking:

Why do HIE participants block the flow of health data? It seems that at present they get something important out of it, and unless somebody stops them it makes sense to continue.

When it came to EHR vendors, the respondents felt that their motivations included a desire to maximize short-term revenue, with 41 percent reporting that this was a routine motivation and 28 percent that it was an occasional motivation. They also felt EHR vendors blocked information to improve the chances that providers would choose their platform over competing products, with 44 percent of respondents saying this was routine and 11 percent that it was occasional.

Meanwhile, for hospitals and health systems, respondents believed the most common motivation was to improve revenue by strengthening their competitive advantage, with 47 percent seeing this as routine and 30 percent as occasional. Respondents also said providers wanted to accommodate priorities other than data exchange, with 29 percent seeing this as routine and 31 percent as occasional.

Solutions:

So what can be done about vendor and provider information blocking? There are a number of ways policymakers can get involved, but few have done so as of yet.

When given a choice of policy-based strategies, 67 percent said that making the practice illegal would be very effective. Respondents also rated three other strategies as very or moderately effective: prohibiting gag clauses and encouraging public reporting and comparisons of vendors and their products (93 percent); requiring stronger demonstrations of product interoperability (92 percent); and setting national policies defining standards for core aspects of information exchange.

Meanwhile, when it came to reducing information blocking by providers, respondents recommended that CMS roll out stronger incentives for care coordination and risk-based contracts (97 percent) and public reporting or other efforts shining a spotlight on provider business practices (93 percent).

HL7 Releases New FHIR Update

Posted on April 3, 2017 | Written By Anne Zieger

HL7 has announced the release of a new version of FHIR designed to link it with real-world concepts and players in healthcare, marking the third of five planned updates. It’s also issuing the first release of the US Core Implementation Guide.

FHIR release 3 was produced with the cooperation of hundreds of contributors, and the final product incorporates the input of more than 2,400 suggested changes, according to project director Grahame Grieve. The release is known as STU3 (Standard for Trial Use, release 3).

Key changes to the standard include additional support for clinical quality measures and clinical decision support, as well as broader functionality to cover key clinical workflows.

In addition, the new FHIR version includes incremental improvements and increased maturity of the RESTful API, further development of terminology services and new support for financial management. It also defines an RDF format and spells out how FHIR relates to linked data.
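
To make the RESTful API concrete: a FHIR read is just an HTTP GET against a typed resource endpoint. Here is a minimal sketch in Python; the base URL is a hypothetical placeholder for whatever STU3 server you have access to, and it assumes the server requires no authentication.

    # Minimal FHIR STU3 read. The endpoint below is hypothetical;
    # substitute a real server before running.
    import requests

    BASE = "https://fhir.example.org/baseDstu3"

    resp = requests.get(
        f"{BASE}/Patient/123",                        # read one resource by id
        headers={"Accept": "application/fhir+json"},  # FHIR's JSON media type
    )
    resp.raise_for_status()
    patient = resp.json()
    print(patient["resourceType"])   # -> "Patient"
    print(patient.get("birthDate"))  # fields follow the FHIR Patient schema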

HL7 is already gearing up for FHIR’s next version. It plans to publish the first draft of version 4 for comment in December 2017, review the comments, ballot the version in April 2018, and publish the new standard by October 2018.

Among those contributing to the development of FHIR is the Argonaut project, which brings together major US EHR vendors to drive industry adoption of FHIR forward. Grieve calls the project a “particularly important” part of the FHIR community, though it’s hard to tell how far along its vendor members have come with the standard so far.

To date, few EHR vendors have offered concrete support for FHIR, but that’s changing gradually. For example, in early 2016 Cerner released an online sandbox for developers designed to help them interact with its platform. And earlier this month, Epic announced the launch of a new program, helping physician practices to build customized apps using FHIR.

In addition to the vendors, which include athenahealth, Cerner, Epic, MEDITECH and McKesson, several large providers are participating. Beth Israel Deaconess Medical Center, Intermountain Healthcare, the Mayo Clinic and Partners HealthCare System are on board, as well as the SMART team at the Boston Children’s Hospital Informatics Program.

Meanwhile, the work of developing and improving FHIR will continue. For release 4, the participants will focus on record-keeping and data exchange for the healthcare process. This will encompass clinical data such as allergies, problems and care plans; diagnostic data such as observations, reports and imaging studies; medication functions such as order, dispense and administration; workflow features like task, appointment schedule and referral; and financial data such as claims, accounts and coverage.

Eventually, when release 5 of FHIR becomes available, developers should be able to help clinicians reason about the healthcare process, the organization says.

Epic and other EHR vendors caught in dilemmas by APIs (Part 2 of 2)

Posted on March 16, 2017 | Written By Andy Oram

The first section of this article reported some news about Epic’s Orchard, a new attempt to provide an “app store” for health care. In this section we look over the role of APIs as seen by EHR vendors such as Epic.

The Roles of EHRs

Dr. Travis Good, with whom I spoke for this article, pointed out that EHRs glom together two distinct functions: a canonical, trusted store for patient data and an interface that becomes a key part of the clinician workflow. They are being challenged in both these areas, for different reasons.

As a data store, EHRs satisfied user needs for many years. The records organized the data for billing, treatment, and compliance with regulations. If there were problems with the data, they stemmed not from the EHRs but from how they were used. We should not blame the EHR if the doctor upcoded clinical information in order to charge more, or if coding was too primitive to represent the complexity of patient illness. But clinicians and regulators are now demanding functions that EHRs are fumbling at fulfilling:

  • More and more regulatory requirements, which intelligent software would calculate on its own from data already in the record, but which most EHRs require the physician to fill out manually

  • Patient-generated data, which may be entered by the patient manually or taken from devices

  • Data in streamlined formats for large-scale data analysis, for which institutions are licensing new forms of databases

Therefore, while the EHR still stores critical data, it is not the sole source of truth and is having to leave its borders porous in order to work with other data sources.

The EHR’s second function, as an interface that becomes part of the clinicians’ hourly workflow, has never been fulfilled well. EHRs are the most hated software among their users. And that’s why users are calling on them to provide APIs that permit third-party developers to compete at the interface level.

So if I were to write a section titled “The Future of Current EHRs” it could conceivably be followed by a blank page. But EHRs do move forward, albeit slowly. They must learn to be open systems.

With this perspective, Orchard looks like doubling down on an obsolete strategy. The limitations and terms of service give the impression that Epic wants to remain a one-stop shopping service for customers. But if Epic adopted the SMART approach, with more tolerance for failure and options for customers, it would start to reap the benefits promised by FHIR and foster health care innovation.

Epic and Other EHR Vendors Caught in Dilemmas by APIs (Part 1 of 2)

Posted on March 15, 2017 | Written By Andy Oram

The HITECH Act of 2009 (part of the American Recovery and Reinvestment Act) gave an unprecedented boost to an obscure corner of the IT industry that produced electronic health records. For the next eight years, vendors were given the opportunity to bring health care into the 21st century and implement common-sense reforms in data sharing and analytics. They largely squandered this opportunity, amassing hundreds of millions of dollars while watching health care costs ascend into the stratosphere, and preening themselves over modest improvements in their poorly functioning systems.

This was not solely a failure of EHR vendors, of course. Hospitals and clinicians also needed to adopt agile methods of collaborating and using data to reduce costs, and failed to do so. They’re sweating profusely now, as shown in protests by the American Medical Association and major health care providers over legislative changes that will drastically reduce their revenue through cuts to insurance coverage and Medicaid. EHR vendors will feel the pain of a thousand cuts as well.

I recently talked to Dr. Travis Good, CEO of Datica, which provides data integration and storage to health care providers. We discussed the state of EHR interoperability, the roles of third-party software vendors, and in particular the new “app store” offered by Epic under the name Orchard. Although Datica supports integration with a dozen EHRs, 70% of its business involves Epic. So we’ll start with the new Orchard initiative.

The Epic App Store

Epic, like most vendors, has offered an API over the past few years that gives programmers at hospitals access to patient data in the EHR. This API now complies with the promising new standard for health data, FHIR, and uses the resources developed by the Argonaut Project. So far, this is all salutary and positive. Dr. Good points out, however, that EHR vendors such as Epic offer the API mostly to extract data. They are reluctant to allow data to be inserted programmatically, claiming it could allow errors into the database. The only change one can make, usually, is to add an annotation.

This seriously hampers the ability of hospitals or third-party vendors to add new value to the clinical experience. Analytics benefit from a read-only data store, but to reach in and improve the doctor’s workflow, an application must be able to write new data into the database.
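
Here is what that asymmetry looks like from a developer’s chair, sketched in Python against a purely hypothetical FHIR endpoint (every URL is an invented placeholder, and vendor-specific authentication is omitted):

    # Reads are broadly supported; programmatic writes often are not.
    import requests

    BASE = "https://ehr.example.org/api/FHIR/DSTU2"  # hypothetical endpoint
    HEADERS = {"Accept": "application/fhir+json"}

    # Extracting data: a search for a patient's observations.
    obs = requests.get(f"{BASE}/Observation?patient=123", headers=HEADERS)
    print(obs.status_code)  # typically 200, returning a searchset Bundle

    # Inserting data: a write-restricted API refuses the POST outright.
    new_obs = {"resourceType": "Observation", "status": "final"}
    resp = requests.post(f"{BASE}/Observation", json=new_obs,
                         headers={"Content-Type": "application/fhir+json"})
    print(resp.status_code)  # e.g. 403 or 405 where writes are disallowed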

More risk springs from the controls that Epic is putting on the apps uploaded to Orchard. Like Apple’s App Store, which inspired Orchard, Epic’s store vets every app and admits only those it finds useful. For a while, the terms of service allowed Epic access to the data structures of the app. What this would mean in practice is hard to guess, but it suggests a prurient interest on the part of Epic in what its competitors are doing. We can’t tell where Epic’s thinking is headed, though, because the public link to the terms of service was recently removed, leaving a 404 message.

Good explained that Epic potentially could track all the transactions between the apps and their users, and in particular will know which ones are popular. This raises fears among third-party developers that Epic will adopt their best ideas and crowd them out of the market by adding the features to its own core system, as Microsoft notoriously did during the 1980s when it dominated the consumer software market.

Epic’s obsession with control can be contrasted with the SMART project, an open platform for health data developed by researchers at Harvard Medical School. They too offer an app gallery (not a store), but their goal is to open the repository to as wide a collection of contributors as possible. This maximizes the chances for innovation. As described at one of their conferences, control over quality and fitness for use would be exerted by the administrator of each hospital or other institution using the gallery. This administrator would choose which apps to make available for clinical staff to download.
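
What makes such a gallery workable is the SMART on FHIR launch protocol: an app authenticates to any conformant EHR through a standard OAuth2 handshake and standard scopes. Below is a sketch of building the authorization request in Python; the client_id and all URLs are hypothetical placeholders of the sort assigned when an app registers.

    # SMART on FHIR standalone launch, step one: send the user to the
    # EHR's authorization endpoint. All identifiers here are invented.
    from urllib.parse import urlencode

    AUTHORIZE = "https://ehr.example.org/oauth2/authorize"
    params = {
        "response_type": "code",
        "client_id": "my-smart-app",               # issued at registration
        "redirect_uri": "https://app.example.org/callback",
        "scope": "launch/patient patient/*.read",  # standard SMART scopes
        "state": "af0ifjsldkj",                    # anti-CSRF token
        "aud": "https://ehr.example.org/fhir",     # the target FHIR server
    }
    print(f"{AUTHORIZE}?{urlencode(params)}")
    # The EHR authenticates the user, redirects back with a code, and the
    # app trades the code for an access token scoped to one patient.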

Of course, SMART apps also work seamlessly cross-platform, which distinguishes them from the apps provided by individual vendors. Eventually–ideally–FHIR support will allow the apps in Orchard and from other vendors to work on all EHRs that support FHIR. But the standard is not firm enough to allow this–there are too many possible variations. People who have followed the course of HITECH implementation know the history of interoperability, and how years of interoperability showcases at HIMSS have been mocked by the real incompatibilities between EHRs out in the field.

To understand how EHRs are making use of APIs, we should look more broadly at their role in health care. That will be the topic of the next section of this article.

Are Healthcare Integration Engines Needed?

Posted on March 13, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

In a perfect world, we might ask why health systems need to purchase an integration engine. The standards that integration engines speak are widely adopted, and virtually every EHR and healthcare IT vendor supports them. Why then do we need an integration engine in the middle?

I’m sure there are a lot of reasons, but two stand out most to me: integration costs and flavors of standards.

Integration Costs
It’s amazing how expensive it is to build integrations with EHR and other healthcare IT software. I still look back on the first lab interface integration I did. I couldn’t believe how expensive it was and how the vendors were happy to nickel and dime you all along the way. Many of them look at integration as a secondary business model.

While an integration engine can’t eliminate all of these costs, if you have a large number of integrations it can save you a lot of money. The savings come partly from the engine vendor’s experience integrating with multiple vendors, but also because you often pay your EHR vendor once, for a single interface to the engine, instead of getting charged for every integration.

Flavors of Standards
If you’ve ever managed an integration, you know how miserable it can be. Each side of the integration implements its own “flavor” of the standard (which makes no sense, but is reality), and that flavor can change as the various software gets updated. It’s no fun to manage and often leads to interface downtime. You know the impact interface downtime can have on your providers, who don’t understand the intricacies of an interface. No one likes it when something that previously just worked stops working.
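
As a small illustration of what a “flavor” difference looks like, here are two invented HL7 v2 message headers that carry the same timestamp in different formats, plus the kind of normalization shim an engine maintains so downstream systems never notice. The segment grammar is standard HL7 v2; the messages and the function are made up.

    # Two vendors, one standard, two "flavors" of the MSH-7 timestamp.
    # Fields in an HL7 v2 segment are separated by '|'; because MSH-1 is
    # the separator itself, MSH-7 lands at split index 6.
    MSG_VENDOR_A = "MSH|^~\\&|LIS_A|LAB|EHR|HOSP|20170313120000||ORU^R01|1|P|2.3"
    MSG_VENDOR_B = "MSH|^~\\&|LIS_B|LAB|EHR|HOSP|2017-03-13 12:00||ORU^R01|2|P|2.3"

    def normalize_msh7(msg: str) -> str:
        """Return the message datetime as YYYY-MM-DD HH:MM, whatever the flavor."""
        raw = msg.split("|")[6]
        d = "".join(ch for ch in raw if ch.isdigit())
        return f"{d[0:4]}-{d[4:6]}-{d[6:8]} {d[8:10]}:{d[10:12]}"

    print(normalize_msh7(MSG_VENDOR_A))  # 2017-03-13 12:00
    print(normalize_msh7(MSG_VENDOR_B))  # 2017-03-13 12:00 -- identical downstream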

This is where integration engines definitely shine. Their whole job is to manage these types of changes and to be prepared for them. If they can’t do that, you should search for a new integration engine. Plus, integration engines usually have tools to help you manage and update these mappings as vendors change (and they will change).

Will Integration Engines Survive?
In a perfect world, we wouldn’t need an integration engine. But healthcare is not a perfect world. In fact, it’s far from it, so I see integration engines sticking around for a long while to come. They’re quite entrenched in the business processes of most large healthcare organizations.

While at the HIMSS Conference, I was talking with Summit Healthcare and they noted that they have one client that’s sending 5 million messages per day (yes, per day!). That’s a lot of messages, and that’s only one client of one integration engine. Hearing that number illustrated how valuable these integration engines are to an organization. It also flies in the face of claims that healthcare isn’t interoperable at all. At the same time, it hints at how much data would need to move under true interoperability, since those 5 million messages include only a small portion of the health data that could be shared.

We’ll dive into integration engines in more detail in future posts. They’re an important backbone of what’s happening in healthcare IT, and many people don’t realize it.

Consumers Fear Theft Of Personal Health Information

Posted on February 15, 2017 | Written By Anne Zieger

Probably fueled by constant news about breaches – duh! – consumers continue to worry that their personal health information isn’t safe, according to a new survey.

As the press release for the 2017 Xerox eHealth Survey notes, last year more than one data breach was reported each day. So it’s little wonder that the survey – which was conducted online by Harris Poll in January 2017 among more than 3,000 U.S. adults – found that 44% of Americans are worried about having their PHI stolen.

According to the survey, 76% of respondents believe that it’s more secure to share PHI between providers through a secure electronic channel than to fax paper documents. This belief is certainly a plus for providers. After all, they’re already committed to sharing information as effectively as possible, and it doesn’t hurt to have consumers behind them.

Another positive finding from the study is that Americans also believe better information sharing across providers can help improve patient care. Xerox/Harris found that 87% of respondents believe that wait times to get test results and diagnoses would drop if providers securely shared and accessed patient information from varied providers. Not only that, 87% of consumers also said that they felt that quality of service would improve if information sharing and coordination among different providers was more common.

Looked at one way, these stats offer providers an opportunity. If you’re already spending tens or hundreds of millions of dollars on interoperability, it doesn’t hurt to let consumers know that you’re doing it. For example, hospitals and medical practices can put signs in their lobby spelling out what they’re doing by way of sharing data and coordinating care, have their doctors discuss what information they’re sharing and hand out sheets telling consumers how they can leverage interoperable data. (Some organizations have already taken some of these steps, but I’d argue that virtually any of them could do more.)

On the other hand, if nearly half of consumers are afraid that their PHI is insecure, providers have to do more to reassure them. Though few consumers would understand how your security program works, letting them know how seriously you take the matter is a step forward. Also, it’s good to educate them on what they can do to keep their health information secure, as people tend to be less fearful when they focus on what they can control.

That being said, the truth is that healthcare data security is a mixed bag. According to a study conducted last year by HIMSS, while most organizations conduct IT security risk assessments, many IT execs have only occasional interactions with top-level leaders. Also, many are still planning out their medical device security strategy. Worse, provider security spending is often minimal. HIMSS notes that few organizations spend more than 6% of their IT budgets on data security, and 72% have five or fewer employees allocated to security.

Ultimately, it’s great to see that consumers are getting behind the idea of health data interoperability, and see how it will benefit them. But until health organizations do more to protect PHI, they’re at risk of losing that support overnight.

ONC Takes Another Futile Whack At Interoperability

Posted on January 2, 2017 | Written By Anne Zieger

With the New Year on its way, ONC has issued its latest missive on how to move the healthcare industry towards interoperability. Its Interoperability Standards Advisory for 2017, an update from last year’s version, offers a collection of standards and implementation specs the agency has identified as important to health data sharing.

I want to say at the outset that this seems a bit, well, strange to me. It really does seem like a waste of time to create a book of curated standards when the industry’s take on interoperability changes every five minutes. In fact, it seems like an exercise in futility.

But I digress. Let’s talk about this.

About the ISA

The Advisory includes four technical sections, covering a) vocabulary/code sets/terminology, b) content/structure standards and implementation specs, c) standards and implementation specs for services and d) models and profiles, plus a fifth section listing ONC’s questions and requesting feedback. This year’s version takes into account the detailed feedback ONC received on last year’s edition.

According to ONC leader Vindell Washington, releasing the ISA is an important step toward achieving the goals the agency has set out in the Shared Nationwide Interoperability Roadmap, as well as the Interoperability Pledge announced earlier this year. There’s little doubt, at minimum, that it represents the consensus thinking of some very smart and thoughtful people.

In theory, ONC would appear to be steaming ahead toward meeting its interoperability goals. And one can hardly disagree that its overarching goal, set forth in the Roadmap, of creating a “learning health system” by 2024, sounds attractive and perhaps doable.

Not only that, at first glance it might seem that providers are getting on board. As ONC notes, companies which provide 90% of EHRs used by hospitals nationwide, as well as the top five healthcare systems in the country, have agreed to the Pledge. Its three core requirements are that participants make it easy for consumers to access their health information, refrain from interfering with health data sharing, and implement federally recognized national interoperability standards.

Misplaced confidence

But if you look at the situation more closely, ONC’s confidence seems a bit misplaced. While there’s much more to its efforts, let’s consider the Pledge as an example of how slippery the road ahead is.

So let’s look at element one, consumer access to data. While agreeing to give patients access is a nice sentiment, to me it seems inevitable that there will be as many forms of data access as there are providers. Sure, ONC or other agencies could attempt to regulate this, but it’s like trying to nail down jello given the circumstances. And what’s more, as soon as we define what adequate consumer access is, some new technology, care model or consumer desire will change everything overnight.

What about information blocking? Will those who took the Pledge be able to avoid interfering with data flows? I’d guess that if nothing else, they won’t be able to support the kind of transparency and sharing ONC would like to see. And then when you throw in those who just don’t think full interoperability is in their interests – but want to seem as though they play well with others – you’ve pretty much got a handful o’ nothing.

And consider the third point of the Pledge, which asks providers to implement “federally recognized” standards. OK, maybe the ISA’s curated specs meet this standard, but as the Advisory is considered “non-binding” perhaps they don’t. OK, so what if there were a set of agreed-upon federal standards? Would the feds be able to keep up with changes in the marketplace (and technology) that would quickly make their chosen models obsolete? I doubt it. So we have another swing and a miss.

Given how easy the Pledge is to challenge, how much weight can we assign to efforts like the ISA or even ONC’s long-term interoperability roadmap? I’d argue that the answer is “not much.” The truth is that at least in its current form, there’s little chance the ONC can do much to foster a long-term, structural change in how health organizations share data. It’d be nice to think that, but thinking doesn’t make it so.

The Case For Accidental Interoperability

Posted on December 22, 2016 | Written By Anne Zieger

Many of us who follow #HITsm on Twitter have encountered the estimable standards guru Keith Boone (known there as @motorcycle_guy). Keith always has something interesting to share, and his recent article, on “accidental” interoperability, is no exception.

In his article, he describes an aha moment: “I had a recent experience where I saw a feature one team was demonstrating in which I could actually integrate one of my interop components to supply additional functionality,” he writes. “When you build interop components right, this kind of accidental interop shows up all the time.”

In his piece, he goes on to argue that this should happen a lot more often because, by doing so, “you can create a lot of value through it with very little engineering investment.”

In an ideal world, such unplanned instances of interoperability would happen often, allowing developers and engineers to connect solutions with far less trouble and effort. And the more often that happened, the more resources everyone involved would have to invest in solving other types of problems.

But in his experience, it can be tough to get dev teams into the “component-based” mindset that would allow for accidental interoperability. “All too often I’ve been told those more generalized solutions are ‘scope expansions,’ because they don’t fit the use case,” and any talk of future concerns is dropped, he says.

While focusing on a particular use case can save time, as it allows developers to take shortcuts which optimize their work for that use case, this approach also limits the value of their work, he argues. Unfortunately, this intense focus prevents developers from creating more general solutions that might have broader use.
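
A toy contrast makes the trade-off visible. Below, the first function is locked to a single use case, while the second does the same work as a reusable component; all the names are invented for illustration.

    # Use-case-bound: only knows how to push a discharge summary to one HIE.
    def send_discharge_summary_to_state_hie(doc: bytes) -> None:
        _transmit("https://hie.example.org/submit", doc,
                  metadata={"type": "discharge-summary"})

    # Component-based: destination and document type are parameters, so the
    # next team can reuse it for a new endpoint with no new engineering.
    def send_document(doc: bytes, doc_type: str, endpoint: str) -> None:
        _transmit(endpoint, doc, metadata={"type": doc_type})

    def _transmit(url: str, doc: bytes, metadata: dict) -> None:
        # Shared plumbing (HTTP, retries, audit logging) would live here.
        print(f"sending {metadata['type']} ({len(doc)} bytes) to {url}")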

Instead of focusing solely on their short-term goals, he suggests, health IT leaders may want to look at the bigger picture. “My own experience tells me that the value I get out of more general solutions is well worth the additional engineering attention,” he writes. “It may not help THIS use case, but when I can supply the same solution to the next use case that comes along, then I’ve got a clear win.”

Keith’s article points up an obstacle to interoperability that we don’t think much about right now. While most of what I read about interoperability — including on this blog — focuses on creating overarching standards that can tie all providers together, we seldom discuss the smaller, day-to-day decisions that stand in the way of health data sharing.

If he’s right (and I have little doubt that he is) health IT interoperability will become a lot more feasible, a lot more quickly, if health organizations take a look at the bigger purposes an individual development project can meet. Otherwise, the next project may just be another silo in the making.