
Don’t Yell FHIR in a Hospital … Yet

Posted on November 30, 2016 | Written By

The following is a guest blog post by Richard Bagdonas, CTO and Chief Healthcare Architect at MI7.
The Fast Healthcare Interoperability Resources standard, commonly referred to as FHIR (pronounced “fire”), has a lot of people in the healthcare industry hopeful for interoperability between electronic health record (EHR) systems and external systems, enabling greater information sharing.

As we move into value-based healthcare and away from fee-for-service healthcare, one thing becomes clear: care is no longer siloed to one doctor and most certainly not to one facility. Think of the numerous locations a patient must visit when getting a knee replaced. They start at their general practitioner’s office, then go to the orthopedic surgeon, followed by the radiology center, then to the hospital, often back to the ortho’s office, and finally to one or more physical therapists.

Currently the doctor’s incentives are not aligned with the patient’s. If the surgery needs to be repeated, the insurance company and patient pay for it again. In the future the doctor will be judged, and rewarded or penalized, for their performance across what is called the patient’s “episode of care.” All of this coordination between providers requires that the parties involved become intimately aware of everything happening at each step in the process.

This all took off back in 2011 when Medicare began an EHR incentive program providing $27B in incentives to doctors at the 5,700 hospitals and 235,000 medical practices to adopt EHR systems. Hospitals would receive $2M and doctors would receive $63,750 when they put in the EHR system and performed some basic functions proving they were using it under what has been termed “Meaningful Use” or MU.

EHR manufacturers made a lot of money selling systems leveraging the MU incentives. The problem most hospitals ran into is that their EHR didn’t come with integrations to external systems. Integration is typically done using a 30-year-old standard called Health Level 7, or HL7. The EHR can talk to outside systems using HL7, but only if the interface is turned on and both systems use the same version. EHR vendors typically charge thousands of dollars, and sometimes tens of thousands, to turn on each interface. This is why interface engines have been all the rage: they turn one interface into many.

The great part of HL7 is that it is a standard. The bad parts of HL7 are that a) there are 11 versions of the standard, b) not all vendors support all of them, c) most EHRs are still using version 2.3, which was released in 1997, and d) each EHR vendor messes up the HL7 standard in its own unique way, causing untold headaches for integration project managers across the country. The joke in the industry is if you have seen one EHR integration, you’ve seen “just one.”
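
To see why those vendor-specific quirks hurt, here is a minimal sketch, in Python, of what an HL7 v2.x message looks like and how an integration engine starts to pick it apart. The sample ADT message and the field positions shown are illustrative only, not taken from any particular EHR:

# HL7 v2.x messages are pipe-and-caret delimited text; every vendor populates
# (or skips) fields a little differently, which is where the integration pain starts.
sample = (
    "MSH|^~\\&|SENDING_EHR|HOSPITAL|RECEIVER|CLINIC|20161130120000||ADT^A01|MSG00001|P|2.3\r"
    "PID|1||123456^^^HOSPITAL^MR||DOE^JANE||19800101|F\r"
)

def parse_hl7(message):
    """Split an HL7 v2 message into {segment name: list of field lists}."""
    segments = {}
    for line in filter(None, message.split("\r")):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

msg = parse_hl7(sample)
# PID-5 is the patient name; whether and how a given EHR fills fields like this
# is exactly the per-vendor variation described above.
print(msg["PID"][0][4])  # -> DOE^JANE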

[Image: HL7 versions over the years]

HL7 version 3.0, which was released in 2005, was supposed to clear up a lot of this integration mess. It used the Extensible Markup Language (XML) to make it easier for software developers to parse healthcare messages from the EHR, and it had places to stick just about all of the data a modern healthcare system needs for care coordination. Unfortunately HL7 3.0 didn’t take off, and many EHRs never built support for it.

FHIR is the new instantiation of HL7 3.0, using JavaScript Object Notation (JSON), and optionally XML, to do similar things with more modern technology concepts such as Representational State Transfer (REST): HTTP requests that GET, PUT, POST, and DELETE these resources. Developers love JSON.
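
To make the REST idea concrete, here is a minimal sketch, in Python, of the kind of request a developer would make against a FHIR server. The base URL and patient ID are hypothetical, and a real EHR endpoint would also require authentication (for example, OAuth2 via SMART on FHIR):

# A simplified illustration of FHIR's RESTful pattern, not a production client.
import requests

FHIR_BASE = "https://ehr.example.com/fhir"  # hypothetical FHIR server

# Read one Patient resource: GET [base]/Patient/[id], returned as JSON
resp = requests.get(
    FHIR_BASE + "/Patient/12345",
    headers={"Accept": "application/fhir+json"},
)
resp.raise_for_status()
patient = resp.json()
print(patient.get("resourceType"), patient.get("id"))

# Searching is just another GET with query parameters; the result is a Bundle
bundle = requests.get(
    FHIR_BASE + "/Patient",
    params={"family": "Smith"},
    headers={"Accept": "application/fhir+json"},
).json()
print(bundle.get("resourceType"), bundle.get("total"))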

FHIR is not ready for prime time, and based on how HL7 versions have been rolled out over the years, it will not be used in a very large percentage of medical facilities for several years. The problem the FHIR standard creates for EHR manufacturers is that it gives a medical facility a method to port EHR data from one manufacturer’s system to another’s. EHR manufacturers don’t want to let this happen, so it is doubtful they will completely implement FHIR, especially since it is not a requirement of MU.

And FHIR is still not hardened. Fifteen versions of FHIR have been released over the last two years, six of them incompatible with earlier versions. We are a year away at best from the standard going from draft to release, so plan on there being even more changes.

[Image: 15 versions of FHIR since 2014, 6 of which are incompatible with earlier versions]

Another reason for questioning FHIR’s impact is that the standard has several ways to transmit and receive data besides HTTP requests. One EHR may use sockets, another file-folder delivery, and another HTTP requests. This means the need for integration engines still exists, and as such the value of moving to FHIR may be reduced.

Lastly, the implementation of FHIR’s query-able interface means hospitals will have to decide whether to host all of their data in a cloud-based system for outside entities to use, or to become a massive data center running the numerous servers it will take to keep patients with mobile devices from taking down the EHR when physicians need it for mission-critical use.

While the data geek inside me loves the idea of FHIR, my decades of experience performing healthcare integrations with EHRs tell me there is more smoke than there is FHIR right now.

My best advice when it comes to FHIR is to keep using the technologies you have today and if you are not retired by the time FHIR hits its adoption curve, look at it with fresh eyes at that time. I will be eagerly awaiting its arrival, someday.

About Richard Bagdonas
Richard Bagdonas has over 12 years of experience integrating software with more than 40 electronic health record system brands. He is an expert witness on HL7 and EDI-based medical billing. Richard served as a technical consultant to the US Air Force and Pentagon in the mid-1990s and authored 4 books on telecom/data network design and engineering. Richard is currently the CTO and Chief Healthcare Architect at MI7, a healthcare integration software company based in Austin, TX.

What Would A Community Care Plan Look Like?

Posted on November 16, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Recently, I wrote an article about the benefits of a longitudinal patient record and community care plan to patient care. I picked up the idea from a piece by an Orion Health exec touting the benefits of these models. Interestingly, I couldn’t find a specific definition for a community care plan in the article — nor could I dig anything up after doing a Google search — but I think the idea is worth exploring nonetheless.

Presumably, if we had a community care plan in place for each patient, it would have interlocking patient-specific and population health-level elements to it. (To my knowledge, current population health models don’t do this.) Rather than simply handing patients off from one provider to another, in the hope that the rare patient-centered medical home could manage their care effectively on its own, it might set care goals for each patient as part of the larger community strategy.

With such a community care strategy, groups of providers would have a better idea where to allocate resources. It would simultaneously meet the goals of traditional medical referral patterns, in which clinicians consult with one another on strategy, and help them decide who to hire (such as a nurse-practitioner to serve patient clusters with higher levels of need).

As I envision it, a community care plan would raise the stakes for everyone involved in the care process. Right now, for example, if a primary care doctor refers a patient to a podiatrist, on a practical level the issue of whether the patient can walk pain-free is not the PCP’s problem. But in a community-based care plan, which would help hold all of the individual actors accountable, that podiatrist couldn’t just examine the patient, do whatever they did and punt. They might even be held to quantitative goals, if they were appropriate to the situation.

I also envision a community care plan as involving a higher level of direct collaboration between providers. Sure, providers and specialists coordinate care across the community, minimally, but they rarely talk to each other, and unless they work for the same practice or health system virtually never collaborate beyond sharing care documentation. And to be fair, why should they? As the system exists today, they have little practical or even clinical incentive to get in the weeds with complex individual patients and look at their future. But if they had the right kind of community care plan in place for the population, this would become more necessary.

Of course, I’ve left the trickiest part of this for last. This system I’ve outlined, basically a slight twist on existing population health models, won’t work unless we develop new methods for sharing data collaboratively — and for reasons I’d be glad to go into elsewhere, I’m not bullish about anything I’ve seen. But as our understanding of what we need to get done evolves, perhaps the technology will follow. A girl can hope.

A Look At Nursing Home Readiness For HIE Participation

Posted on October 12, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A newly released study suggests that nursing homes have several steps to go through before they are ready to participate in health information exchanges. The study, which appeared in the AHIMA-backed Perspectives in Health Information Management, was designed to help researchers understand the challenges nursing homes faced in health information sharing, as well as what successes they had achieved to date.

As the study write-up notes, the U.S. nursing home population is large — nearly 1.7 million residents spread across 15,600 nursing homes as of 2014. But unlike other settings that care for a high volume of patients, nursing homes haven’t been eligible for the EMR incentive programs that might have helped them participate in HIEs.

Not surprisingly, this has left the homes at something of a disadvantage, with very few participating in networked health data sharing. And this is a problem in caring for residents adequately, as their care is complex, involving nurses, physicians, physicians’ offices, pharmacists and diagnostic testing services. So understanding what potential these homes have to connect is a worthwhile topic of study. That’s particularly the case given that little is known about HIE implementation and the value of shared patient records across multiple community-based settings, the study notes.

To conduct the study, researchers conducted interviews with 15 nursing home leaders representing 14 homes in the midwestern United States that participated in the Missouri Quality Improvement Initiative (MOQI) national demonstration project.  Studying MOQI participants helped researchers to achieve their goals, as one of the key technology goals of the CMS-backed project is to develop knowledge of HIE implementations across nursing homes and hospital boundaries and determine the value of such systems to users.

The researchers concluded that incorporating HIE technology into existing work processes would boost use and overall adoption. They also found that encouraging participation inside and outside of the facility, providing employees with appropriate training and retraining, and getting others to use the HIE would have a positive effect on health data sharing projects. Meanwhile, getting the HIE operational and putting policies for technology use in place remained challenges on the table for these institutions.

Ultimately, the study concluded that nursing homes considering HIE adoption should look at three areas of concern before getting started.

  • One area was the home’s readiness to adopt technology. Without the right level of readiness to get started, any HIE project is likely to fail, and nursing home-based data exchanges are no exception. This would be particularly important for a project in a niche like this one, which never got the outside boost to technology culture that hospitals and doctors enjoyed under Meaningful Use.
  • Another area identified by researchers was the availability of technology resources. The researchers didn’t specify whether they meant access to technology itself or the internal staff or consultants to execute the project, but both seem like important considerations in light of this study.
  • The final area researchers identified as critical for making a success of HIE adoption in nursing homes was the ability to match new clinical workflows to the work already getting done in the homes. This, of course, is important in any setting where leaders are considering major new technology initiatives.

Too often, discussions of health data sharing leave out major sectors of the healthcare economy like this one. It’s good to take a look at what full participation in health data sharing with nursing homes could mean for healthcare.

The Variables that Need Adjusting to Make Health Data Sharing a Reality

Posted on October 7, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

During today’s #HITsm chat, John Trader offered this fascinating quote from Susannah Fox, CTO at HHS:

I quickly replied with the following:

This concept is definitely worth exploring. There are a lot of things in life that we want. However, that doesn’t mean we want them enough to actually do them. I want to be skinny and muscular. I don’t want it enough to stop eating the way I do and start working out in a way that would help me lose weight and become a chiseled specimen of a man. The problem is that there are different levels of “want.”

This applies so aptly to data sharing in healthcare. Most of us want the data sharing to happen. I’ve gone so far as to say that I think most patients think that the data sharing is already happening. Most patients probably don’t realize that it’s not. Most caregivers want the data shared as well. What doctor wants to see a patient with limited information? The more high-quality information a doctor has, the better they can do their job. So, yes, they want to share patients’ data so they can help others (i.e., their patients).

The problem is that most patients and caregivers don’t want it enough. They’re ok with data sharing. They think that data sharing is beneficial. They might even think that data sharing is the right thing to do. However, they don’t want it enough to make it a reality.

It’s worth acknowledging that there’s a second part of this equation: Difficulty. If something is really difficult to do, then your level of “want” needs to be extremely high to overcome those difficulties. If something is really easy to do, then your level of want can be much lower.

For the programmer geeks out there:

if difficulty > want:
    end()                # nothing happens

if difficulty < want:
    result_achieved()    # the data sharing actually gets done

When we talk about healthcare data sharing, it’s really difficult to do and people’s “want” is generally low. There are a few exceptions. Chronically ill patients have a much bigger “want” to solve the problem of health data sharing. So, some of them overcome the difficulty and are able to share the data. Relatively healthy patients don’t have a big desire to get and share their health data, so they don’t do anything to overcome the challenge of getting and sharing that data.

If we want health data sharing, we have to change the variables. We can either make health data sharing easier (something many are working to accomplish) or we can provide (or create the perception of) more value to patients and caregivers so that they “want” it more. Until that happens, we’re unlikely to see things change.

CommonWell and Healthcare Interoperability

Posted on October 6, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Healthcare Scene sat down with Scott Stuewe, Director at Cerner Network, and Daniel Cane, CEO & Co-Founder at Modernizing Medicine, to talk about Cerner’s participation in CommonWell and Modernizing Medicine’s announcement that it is joining CommonWell. This was a great opportunity to learn more about the progress CommonWell has made.

During our discussion, we talk about where CommonWell is today and where it’s heading in the future. Plus, we look at some of the use cases where CommonWell works today and where it hasn’t yet built out that capability. We also talk about how the CommonWell member companies are working together to make healthcare interoperability a reality, and a bit about the array of interoperability solutions that will be needed beyond CommonWell. Finally, we look at where healthcare interoperability is headed.

In the “After Party” video we continued our conversation with Scott and Daniel where we talked about the governance structure for CommonWell and how it made decisions. We also talked about the various healthcare standards that are available and where we’re at in the development of those standards. Plus, we talk about the potential business model for EHR vendors involved in CommonWell. Scott and Daniel finish off by talking about what we really need to know about CommonWell and where it’s heading.

CommonWell is a big part of many large EHR vendors’ interoperability plans. Being familiar with what they’re doing is going to be important to understanding how healthcare data sharing will or won’t happen in the future.

Validic Survey Raises Hopes of Merging Big Data Into Clinical Trials

Posted on September 30, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Validic has been integrating medical device data with electronic health records, patient portals, remote patient monitoring platforms, wellness challenges, and other health databases for years. On Monday, they highlighted a particularly crucial and interesting segment of their clientele by releasing a short report based on a survey of clinical researchers. And this report, although it doesn’t go into depth about how pharmaceutical companies and other researchers are using devices, reveals great promise in their use. It also opens up discussions of whether researchers could achieve even more by sharing this data.

The survey broadly shows two trends toward the productive use of device data:

  • Devices can report changes in a subject’s condition more quickly and accurately than conventional subject reports (which involve marking observations down by hand or coming into the researcher’s office). Of course, this practice raises questions about the device’s own accuracy. Researchers will probably splurge for professional or “clinical-grade” devices that are more reliable than consumer health wearables.

  • Devices can keep the subject connected to the research for months or even years after the end of the clinical trial. This connection can turn up long-range side effects or other impacts from the treatment.

Together these advances address two of the most vexing problems of clinical trials: their cost (and length) and their tendency to miss subtle effects. The cost and length of trials form the backbone of the current publicity campaign by pharma companies to justify price hikes that have recently brought them public embarrassment and opprobrium. Regardless of the relationship between the cost of trials and the cost of the resulting drugs, everyone would benefit if trials could demonstrate results more quickly. Meanwhile, longitudinal research with massive amounts of data can reveal the kinds of problems that led to the Vioxx scandal–but also new off-label uses for established medications.

So I’m excited to hear that two-thirds of the respondents are using “digital health technologies” (which covers mobile apps, clinical-grade devices, and wearables) in their trials, and that nearly all respondents plan to do so over the next five years. Big data benefits are not the only ones they envision. Some of the benefits have more to do with communication and convenience–and these are certainly commendable as well. For instance, if a subject can transmit data from her home instead of having to come to the office for a test, the subject will be much more likely to participate and provide accurate data.

Another trend hinted at by the survey was a closer connection between researchers and patient communities. Validic announced the report in a press release that is quite informative in its own right.

So over the next few years we may enter the age that health IT reformers have envisioned for some time: a merger of big data and clinical trials in a way to reap the benefits of both. Now we must ask the researchers to multiply the value of the data by a whole new dimension by sharing it. This can be done in two ways: de-identifying results and uploading them to public or industry-maintained databases, or providing identifying information along with the data to organizations approved by the subject who is identified. Although researchers are legally permitted to share de-identified information without subjects’ consent (depending on the agreements they signed when they began the trials), I would urge patient consent for all releases.

Pharma companies are already under intense pressure for hiding the results of trials–but even the new regulations cover only results, not the data that led to those results. Organizations such as Sage Bionetworks, which I have covered many times, are working closely with pharmaceutical companies and researchers to promote both the software tools and the organizational agreements that foster data sharing. Such efforts allow people in different research facilities and even on different continents to work on different aspects of a target and quickly share results. Even better, someone launching a new project can compare her data to a project run five years before by another company. Researchers will have millions of data points to work with instead of hundreds.

One disappointment in the Validic survey was that only a minority of respondents saw a return on investment in their use of devices. With responsible data sharing, the next Validic survey may raise this response rate considerably.

Please, No More HIE “Coming Of Age” Stories

Posted on September 29, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Today I read a Modern Healthcare story suggesting that health information exchanges are “coming of age,” and after reading it, I swear my eyes almost rolled back into my head. (An ordinary eye roll wouldn’t do.)

The story leads with the assertion that a new health data sharing deal, in which Texas Health Resources agreed to share data via a third-party HIE, suggests that such HIEs are becoming sustainable.

Author Joseph Conn writes that the 14-hospital system is coming together with 32 other providers sending data to Healthcare Access San Antonio, an entity which supports roughly 2,400 HIE users and handles almost 2.2 million patient records. He notes that the San Antonio exchange is one of about 150 nationwide, hardly a massive number for a country the size of the U.S.

In partial proof of his assertion that HIEs are finding their footing, he notes that from 2010 to 2015, the number of HIEs in the U.S. fluctuated but saw a net gain of 41%, according to federal stats. And he attributes this growth to pressure on providers to improve care, lower costs and strengthen medical research, or risk getting Medicare or Medicaid pay cuts.

I don’t dispute that there is increased pressure on providers to meet some tough goals. Nor am I arguing that many healthcare organizations believe that healthcare data sharing via an HIE can help them meet these goals.

But I would argue that even given the admittedly growing pressure from federal regulators to achieve certain results, history suggests that an HIE probably isn’t the way to get this done, as we don’t seem to have found a business model for them that works over the long term.

As Conn himself notes, seven recipients of federal, state-wide HIE grants issued by the ONC — awarded in Connecticut, Illinois, Montana, Nevada, New Hampshire, Puerto Rico and Wyoming — went out of business after the federal grants dried up. So we’re not just talking about HIEs’ ignoble history of sputtering out; we’re talking about fairly recent failures.

He also notes that a commercially-funded model, MetroChicago HIE, which connected more than 30 northeastern Illinois hospitals, went under earlier this year. This HIE failed because its most critical technology vendor suddenly went out of business with 2 million patient records in its hands.

As for HASA, the San Antonio exchange discussed above, it’s not just a traditional HIE. Conn’s piece notes that most of the hospitals in the Dallas-Fort Worth area have already implemented or plan to use an Epic EMR and share clinical messages using its information exchange capabilities. Depending on how robust the Epic data-sharing functions actually are, this might offer something of a solution.

But what seems apparent to me, after more than a decade of watching HIEs flounder, is that a data-sharing model relying on a third-party platform probably isn’t financially or competitively sustainable.

The truth is, a veteran editor like Mr. Conn (who apparently has 35 years of experience under his belt) must know that his reporting doesn’t sustain the assertion that HIEs are coming into some sort of golden era. A single deal undertaken by even a large player like Texas Health Resources doesn’t prove that HIEs are seeing a turnaround. It seems that some people think the broken clock that is the HIE model will be right at least once.

P.S. All of this being said, I admit that I’m intrigued by the notion of a “public utility” HIE. Are any of you associated with such a project?

The Burden of Structured Data: What Health Care Can Learn From the Web Experience (Part 2 of 2)

Posted on September 23, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The first part of this article summarized what Web developers have done to structure data, and started to look at the barriers presented by health care. This part presents more recommendations for making structured data work.

The Grand Scheme of Things
Once you start classifying things, it’s easy to become ensnared by grandiose pipe dreams and enter a free fall trying to design the perfect classification system. A good system is distinguished by knowing its limitations. That’s why microdata on the Web succeeded. In other areas, the field of ontology is littered with the carcasses of projects that reached too far. And health care ontologies always teeter on the edge of that danger.

Let’s take an everyday classification system as an example of the limitations of ontology. We all use genealogies. Imagine being able to sift information about a family quickly, navigating from father to son and along the trail of siblings. But even historical families, such as royal ones, introduce difficulties right away. For instance, children born out of wedlock should be shown differently from legitimate heirs. Modern families present even bigger headaches. How do you represent blended families where many parents take responsibilities of different types for the children, or people who provided sperm or eggs for artificial insemination?

The human condition is a complicated one not subject to easy classification, and that naturally extends to health, which is one of the most complex human conditions. I’m sure, for instance, that the science of mosquito-borne diseases moves much faster than the ICD standard for disease. ICD itself should be replaced with something that embodies semantic meaning. But constant flexibility must be the hallmark of any ontology.

Transgender people present another enormous challenge to ontologies and EHRs. They’re a test case for every kind of variation in humanity. Their needs and status vary from person to person, with no classification suiting everybody. These needs can change over time as people make transitions. And they may simultaneously need services defined for male and female, with the mix differing from one patient to the next.

Getting to the Point
As the very term “microdata” indicates, those who wish to expose semantic data on the Web can choose just a few items of information for that favored treatment. A movie theater may have text on its site extolling its concession stand, its seating, or its accommodations for the disabled, but these are not part of the microdata given to search engines.

A big problem in electronic health records is their insistence that certain things be filled out for every patient. Any item that is of interest for any class of patient must appear in the interface, a problem known in the data industry as a Cartesian explosion. Many observers counsel a “less is more” philosophy in response. It’s interesting that a recent article that complained of “bloated records” and suggested a “less is more” approach goes on to recommend the inclusion of scads of new data in the record, to cover behavioral and environmental information. Without mentioning the contradiction explicitly, the authors address it through the hope that better interfaces for entering and displaying information will ease the burden on the clinician.

The various problems with ontologies that I have explained throw doubt on whether EHRs can attain such simplicity. Patients are not restaurants. To really understand what’s important about a patient–whether to guide the clinician in efficient data entry or to display salient facts to her–we’ll need systems embodying artificial intelligence. Such systems always feature false positives and negatives. They also depend on continuous learning, which means they’re never perfect. I would not like to be the patient whose data gets lost or misclassified during the process of tuning the algorithms.

I do believe that some improvements in EHRs can promote the use of structured data. Doctors should be allowed to enter the data in the order and the manner they find intuitive, because that order and that manner reflect their holistic understanding of the patient. But suggestions can prompt them to save some of the data in structured format, without forcing them to break their trains of thought. Relevant data will be collected and irrelevant fields will not be shown or preserved at all.

The resulting data will be less messy than what we have in unstructured text currently, but still messy. So what? That is the nature of data. Analysts will make the best use of it they can. But structure should never get in the way of the information.

The Burden of Structured Data: What Health Care Can Learn From the Web Experience (Part 1 of 2)

Posted on September 22, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Most innovations in electronic health records, notably those tied to the Precision Medicine initiative that has recently raised so many expectations, operate by moving clinical information into structure of one type or another. This might be a classification system such as ICD, or a specific record such as “medications” or “lab results” with fixed units and lists of names to choose from. There’s no arguing against the benefits of structured data. But its costs are high as well. So we should avoid repeating old mistakes. Experiences drawn from the Web may have something to teach the health care field in respect to structured data.

What Works on the Web
The Web grew out of a structured data initiative. The dream of organizing information goes back decades, and was embodied in Standard Generalized Markup Language (SGML) years before Tim Berners-Lee stole its general syntax to create HTML and present information on the Web. SGML could let a firm mark in its documents that FR927 was a part number whereas SG1 was a building. Any tags that met the author’s fancy could be defined. This put semantics into documents. In other words, the meaning of text could be abstracted from the text and presented explicitly. Semantics got stripped out of HTML. Although the semantic goals of SGML were re-introduced into the HTML successor XML, it found only niche uses. Another semantic Web tool, JSON, was reserved for data storage and exchange, not text markup.
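
As a toy illustration of that idea (mine, not drawn from the SGML specification), semantic tags let a program recover meaning that purely presentational markup throws away; Python’s standard library is enough to show it:

# Semantic markup: the tags say what the text means, so software can query it.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<memo>Part <part-number>FR927</part-number> is stored in "
    "building <building>SG1</building>.</memo>"
)
print(doc.find("part-number").text)  # -> FR927
print(doc.find("building").text)     # -> SG1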

Since the Web got popular, people have been trying to reintroduce semantics into it. There was Dublin Core, then RDF, then microdata in places like schema.org–just to list a few. Two terms denoting structured data on the Web, the Semantic Web and Linked Data, have been enthusiastically taken up by the World Wide Web Consortium and Tim Berners-Lee himself.

But none of these structured data initiatives are widely known among the Web-browsing public, probably because they all take a lot of work to implement. Furthermore, they run into the bootstrapping problem faced by nearly all standards: if your web site uses semantics that aren’t recognized by the browser, they’re just dropped on the ground (or even worse, the browser mangles your web pages).

Even so, recent years have seen an important form of structured data take off. When you look up a movie or restaurant on a major search engine such as Google, Yahoo!, or Bing, you’ll see a summary of the information most people want to see: local showtimes for the movie, phone number and ratings for a restaurant, etc. This is highly useful (particularly on mobile devices) and can save you the trouble of visiting the web site from which the data comes. Google calls these summaries Rich Cards and Rich Snippets.
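
For a sense of what sits behind those summaries, here is a small illustrative sketch (mine, not Google’s documentation) of schema.org structured data for a restaurant, expressed as JSON-LD, the JSON-based sibling of the microdata format mentioned above; a site would embed this in a script tag of type application/ld+json:

import json

# Hypothetical business details; the property names come from schema.org.
restaurant = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Example Bistro",
    "telephone": "+1-512-555-0100",
    "servesCuisine": "Tex-Mex",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": 4.6,
        "reviewCount": 213,
    },
}

# This JSON is the kind of payload a search engine reads to build a rich snippet.
print(json.dumps(restaurant, indent=2))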

If my memory serves me right, the basis for these snippets didn’t come from standards committees involving years of negotiation between stakeholders. Google just decided what would be valuable to its users and laid out the standard. It got adopted because it was a win-win. The movie theaters and restaurants got their information right into the viewer’s face, and the search engine became instantly more valuable and more likely to be used again. The visitors doing the search obviously benefitted too. Everyone found it worth their time to implement the standards.

Interestingly, as structure moves into metadata, HTML itself is getting less semantic. The most recent standard, HTML5, did add a few modest tags such as header and footer. But many sites are replacing meaningful HTML markup, such as p for paragraph, with two ultra-generic tags: div for a division that is set off from other parts of the page, and span for a piece of text embedded within another. Formatting is expressed through CSS, a separate language.

Having reviewed a bit of Web history, let’s see what we can learn from it and apply to health care.

Make the Customer Happy
Win-win is the key to getting a standard adopted. If your clinician doesn’t see any benefit from the use of structured data, she will carp and bristle at any attempt to get her to enter it. One of the big reasons electronic health records are so notoriously hard to use is, “All those fields to fill out.” And while lists of medications or other structured data can help the doctor choose the right one, they can also help her enter serious errors–perhaps because she chose the one next to the one she meant to choose, or because the one she really wanted isn’t offered on the list.

Doctors’ resentment gets directed against every institution implicated in the structured data explosion: the ONC and CMS, who demand quality data and other fields of information for their own inscrutable purposes; the vendor who designs the clunky system; and the hospital or clinic that forces doctors to use it. But the Web experience suggests that doctors would fill out fields that would help them in their jobs. The use of structured data should be negotiated, not dictated, just like other innovations such as hand-washing protocols or checklists. Is it such a radical notion to put technology at the service of the people using it?

I know it’s frustrating to offer that perspective, because many great things come from collecting data that is used in analytics and can turn up unexpected insights. If we fill out all those fields, maybe we’ll find a new cure! But the promised benefit is too far off and too speculative to justify the hourly drag upon the doctor’s time.

We can fall back on the other hope for EHR improvement: an interface that makes data entry so easy that doctors don’t mind using structured fields. I have some caveats to offer about that dream, which will appear in the second part of this article.

ONC’s Interoperability Standards Advisory Twitter Chat Summary

Posted on September 2, 2016 | Written By

The following is a guest blog post by Steve Sisko (@ShimCode and www.shimcode.com).

Yesterday the Office of the National Coordinator for Health Information Technology (ONC) hosted an open chat to discuss its DRAFT 2017 Interoperability Standards Advisory (ISA) artifacts. The chat was moderated by Steven Posnack, Director of the Office of Standards and Technology at the ONC, and used the #ISAchat hashtag under the @HealthIT_Policy account. The @ONC_HealthIT Twitter account also weighed in.

It was encouraging to see the ONC host a tweetchat to share information and solicit feedback and questions from interested parties. After a little bit of a rough start and some clarification of the chat’s objectives, the pace of interaction picked up and some good information and ideas were exchanged. In addition, some questions were raised, some of which were answered by Steven Posnack and some of which went unaddressed.

What’s This All About?

This post summarizes all of the tweets from the #ISAchat. I’ve organized the tweets as best as I could and I’ve excluded re-tweets and most ‘salutatory’ and ‘thank you’ tweets.

Note: The @hitechanswers  account shared a partial summary of the #ISAchat on 8/31/16 but it included less than half of the tweets shared in this post. So you’re getting the complete scoop here.

Topic 1: Tell us about the ISA (Interoperability Standards Advisory)
Account Tweet Time
@gratefull080504 Question: What is the objective of #ISAchat?   12:04:35
@onc_healthit To spread the word and help people better understand what the ISA is about 12:05:00
@gratefull080504 Question: What are today’s objectives, please? 12:08:43
@onc_healthit Our objective is to educate interested parties. Answer questions & hear from the creators 12:11:02
@johnklimek “What’s this I hear about interoperability?” 12:12:00
@cperezmha What is #PPDX? What is #HIE? What is interoperability? What is interface? #providers need to know the differences. Most do not. 12:14:41
@techguy Who is the target audience for these documents? 12:44:06
@healthit_policy HITdevs, CIOs, start-ups, fed/state gov’t prog admins. Those that have a need to align standards 4 use #ISAchat 12:46:18
@ahier No one should have to use proprietary standards to connect to public data #ISAchat 12:46:19
Reference Materials on ISA
@shimcode Ok then, here’s the “2016 Interoperability Standards Advisory” https://t.co/5QkmV3Yc6w 12:07:19
@shimcode And here’s “Draft 2017 Interoperability Standards Advisory” https://t.co/TUFidMXk0j 12:07:38
@stephenkonya #ICYMI Here’s the link to the @ONC_HealthIT 2017 DRAFT Interoperability Standards Advisory (ISA): https://t.co/VTqdZHUjBW 12:10:57
@techguy Question: Do you have a good summary blog post that summarizes what’s available in the ISA? 12:52:15
@onc_healthit We do! https://t.co/vVW6BM5TFW Authored by @HealthIT_Policy and Chris Muir – both of whom are in the room for #ISAchat 12:53:15
@healthit_policy Good? – The ISA can help folks better understand what standards are being implemented & at what level 12:06:29
@healthit_policy Getting more detailed compared to prior versions due largely to HITSC & public comments 12:29:48
@healthit_policy More work this fall on our side to make that come to fruition. In future, we’re aiming for a “standards wikipedia” approach 12:33:03
@survivorshipit It would be particularly helpful to include cited full documents to facilitate patient, consumer participation 12:40:22
@davisjamie77 Seeing lots of references to plans to “explore inclusion” of certain data. Will progress updates be provided? 12:50:00
@healthit_policy 1/ Our next milestone will be release of final 2017 ISA in Dec. That will rep’snt full transition to web 12:51:15
@healthit_policy 2/ after that future ISA will be updated more regularly & hopefully with stakeholder involved curation 12:52:21
@bjrstn Topic:  How does the ISA link to the Interoperability Roadmap? 12:51:38
@cnsicorp How will #ISA impact Nationwide Interoperability Roadmap & already established priorities? 12:10:49
@healthit_policy ISA was 1st major deliverable concurrent w/ Roadmap. Will continue to b strong/underlying support to work 12:13:49
@healthit_policy ISA is 1 part of tech & policy section of Roadmap. Helps add transparency & provides common landscape 12:53:55
@healthit_policy Exciting thing for me is the initiated transition from PDF to a web-based/interactive experience w/ ISA 12:30:51
@onc_healthit Web-based version of the ISA can be found here: https://t.co/F6KtFMjNA1 We welcome comments! 12:32:04
Little <HSML> From a Participant on the Ease of Consuming ISA Artifacts
@techguy So easy to consume! 12:40:57
@healthit_policy If I knew you better I’d sense some sarcasm :) that said, working on better nav approaches too 12:43:36
@techguy You know me well. It’s kind of like the challenge of EHRs. You can only make it so usable given the reqs. 12:45:36
@shimcode I think John forgot to enclose his tweet with <HSML> tags (Hyper Sarcasm Markup Language) 12:46:48
Don’t Use My Toothbrush!
@ahier OH (Overheard) at conference: “Standards are like toothbrushes, everyone has one and no one wants to use yours” 13:15:43
Topic 2: What makes this ISA different than the previous drafts you have issued?
Account Tweet Time
@cnsicorp #Interoperability for rural communities priority 12:32:40
@healthit_policy Rural, underserved, LTPAC and other pieces of the interoperability puzzle all important #ISAchat 12:35:33
@cnsicorp “more efficient, closer to real-time updates and comments…, hyperlinks to projects…” 12:47:15
@shimcode Question: So you’re not providing any guidance on the implementation of interoperability standards? Hmm… 12:21:10
@gratefull080504 Question: Are implementation pilots planned? 12:22:51
@healthit_policy ISA reflects what’s out there, being used & worked on. Pointer to other resources, especially into future #ISAchat 12:24:10
@ahier The future is here it’s just not evenly distributed (yet) #ISAchat 12:25:15
@healthit_policy Yes, we put out 2 FOAs for High Impact Pilots & Standards Exploration Awards 12:25:56
@healthit_policy HHS Announces $1.5 Million in Funding Opportunities to Advance Common Health Data Standards. Info here: https://t.co/QLo05LfsLw
Topic 3: If you had to pick one of your favorite parts of the ISA, what would it be?
Account Tweet Time
@shimcode The “Responses to Comments Requiring Additional Consideration” section. Helps me understand ONC’s thinking. 12:45:32
@healthit_policy Our aim is to help convey forward trajectory for ISA, as we shift to web, will be easier/efficient engagement 12:47:47
@healthit_policy Depends on sections. Some, like #FHIR, @LOINC, SNOMED-CT are pointed to a bunch. 12:49:15
@gratefull080504 Question: What can patients do to support the objectives of #ISAchat ? 12:07:02
@gratefull080504 Question: Isn’t #ISAChat for patients? Don’t set low expectations for patients 12:10:44
@gratefull080504 I am a patient + I suffer the consequences of lack of #interoperability 12:12:26
@healthit_policy Certainly want that perspective, would love thoughts on how to get more feedback from patients on ISA 12:12:35
@gratefull080504 What about patients? 12:13:03
@gratefull080504 First step is to ensure they have been invited. I am happy to help you after this chat 12:13:57
@survivorshipit Think partly to do w/cascade of knowledge–>as pts know more about tech, better able to advocate 12:15:21
@healthit_policy Open door, numerous oppty for comment, and representation on advisory committees. #MoreTheMerrier 12:15:52
@gratefull080504 I am currently on @ONC_HealthIT Consumer Advisory Task Force Happy to contribute further 12:17:08
@healthit_policy 1 / The ISA is technical in nature, & we haven’t gotten any comments on ISA before from patient groups 12:08:54
@healthit_policy 2/ but as we look to pt generated health data & other examples of bi-directional interop, we’d like to represent those uses in ISA 12:09:51
@resultant TYVM all! Trying to learn all i can about #interoperability & why we’re not making progress patients expect 13:09:22
@shimcode Question: Are use cases being developed in parallel with the Interoperability Standards? 12:13:28
@shimcode Value of standards don’t lie in level of adoption of std as a whole, but rather in implementation for a particular use case. 12:16:33
@healthit_policy We are trying to represent broader uses at this point in the “interoperability need” framing in ISA 12:18:58
@healthit_policy 2/ would be great into the future to have more detailed use case -> interop standards in the ISA with details 12:19:49
@healthit_policy Indeed, royal we will learn a lot from “doing” 12:20:40
@shimcode IHE Profiles provide a common language to discuss integration needs of healthcare sites and… Info here: https://t.co/iBt2m8F9Ob 12:29:12
@techguy I’d love to see them take 1 section (say allergies) and translate where we’d see the standards in the wild. 12:59:04
@techguy Or some example use cases where people want to implement a standard and how to use ISA to guide it. 13:00:38
@healthit_policy Check out links now in ISA to the Interop Proving Ground – projects using #ISAchat standards. Info here: https://t.co/Co1l1hau3B 13:02:54
@healthit_policy Thx for feedback, agree on need to translate from ISA to people seeing standards implemented in real life 13:01:08
Commenting on ISA Artifacts
@healthit_policy We want to make the #ISA more accessible, available, and update-able to be more current compared to 1x/yr publication 12:34:22
@cperezmha #interoperability lowers cost and shows better outcomes changing the culture of healthcare to be tech savvy is key 12:35:10
@healthit_policy One new feature we want to add to web ISA is citation ability to help document what’s happ’n with standards 12:37:12
@shimcode A “discussion forum” mechanism where individual aspects can be discussed & rated would be good. 12:39:53
@healthit_policy Good feedback. We’re looking at that kind of approach as an option. ISA will hopefully prompt debate 12:40:50
@shimcode Having to scroll through all those PDF’s and then open them 1 by 1 only to have to scroll some more is VERY inefficient. 12:41:25
@shimcode Well, I wouldn’t look/think too long about it. Adding that capability is ‘cheap’ & can make it way easier on all. 12:43:48
Question: What Can Be Learned About Interoperability from the Private Sector?
@shimcode Maybe @ONC_HealthIT can get input from Apple’s latest #healthIT purchase/Gliimpse? What do they know of interoperability? 12:19:13
@healthit_policy > interest from big tech cos and more mainstream awareness is good + more innovation Apple iOS has CCDA sprt 12:22:59
Testing & Tools
@drewivan I haven’t had time to count, but does anyone know approximately how many different standards are included in the document? 12:47:29
@healthit_policy Don’t know stat off had, but we do identify and provide links for test tools as available. 12:56:31
@drewivan And what percentage of them have test tools available? 12:54:38
@shimcode According to the 2017 ISA stds just released, a tiny fraction of them have test tools. See here: https://t.co/Jbw7flDuTg 12:58:02
@shimcode I take back “tiny faction” comment on test tools. I count 92 don’t have test tools, 46 do. No assessment of tool quality though. 13:08:31
@healthit_policy Testing def an area for pub-private improvement, would love to see # increase, with freely available too 12:59:10
@techguy A topic near and dear to @interopguy’s heart! 12:59:54
@resultant Perhaps we could replace a couple days of HIMSS one year with #interoperability testing? #OutsideBox 13:02:30
 
Walk-on Topic: Promotion of ISA (Thank you @cperezmha)
What can HIE clinics do to help other non-users get on board? Is there a certain resource we should point them to in order to implement?
Account Tweet Time
@davisjamie77 Liking the idea of an interactive resource library. How will you promote it to grow use? 12:35:57
@healthit_policy A tweetchat of course! ;) Also web ISA now linking to projects in the Interoperability Proving Ground 12:39:04
@davisjamie77 Lol! Of course! Just seeing if RECs, HIEs, other #HIT programs might help promote. 12:40:44
@healthit_policy Exactly… opportunities to use existing relationships and comm channels ONC has to spread the word 12:41:28
@stephenkonya Question: How can we better align public vs private #healthcare delivery systems through #interoperability standards? 12:42:23
Miscellaneous Feedback from Participants
Account Tweet Time
@ahier Restful APIs & using JSON and other modern technologies 12:54:03
@waynekubick Wayne Kubick joining from #HL7 anxious to hear how #FHIR and #CCDA can help further advance #interoperability. 12:11:30
@resultant We all do! The great fail of #MU was that we spent $38B and did not get #interoperability 12:14:21
@waynekubick SMART on #FHIR can help patients access and gain insights from their own health data — and share it with care providers. 12:17:44
@resultant I think throwing money at it is the only solution… IMHO providers are not going to move to do it on their own… 12:20:44
@shimcode @Search_E_O your automatic RT’s of the #ISAChat tweets are just clouding up the stream. Why? smh 12:08:30
@ahier Do you see #blockchain making it into future ISA 12:28:02
@healthit_policy Phew… toughy. lots of potential directions for it. Going to segue my response into T2 12:28:58
@hitpol #blockchain for healthcare! ➡ @ONC_HealthIT blockchain challenge. Info here: https://t.co/vG60qRAqqa 12:31:33
That’s All Folks!
@healthit_policy Thank you everyone for joining our #ISAchat! Don’t forget to leave comments.

 
About Steve Sisko
Steve Sisko has over 20 years of experience in the healthcare industry and is a consultant focused on healthcare data, technology and services – mainly for health plans, payers and risk-bearing providers. Steve is known as @ShimCode on Twitter and runs a blog at www.shimcode.com. You can learn more about Steve at his LinkedIn page and he can be contacted at shimcode@gmail.com.