
Measuring the Vital Signs of Health Care Progress at the Connected Health Conference (Part 3 of 3)

Posted on November 17, 2017 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous segment of this article covered one of the crucial themes in health care today: simplifying technology’s interactions with individuals over health care. This segment finishes my coverage of this year’s Connected Health Conference with two more themes: improved data sharing and blockchains.

Keynote at Connected Health Conference

Improved data sharing
The third trend I’m pursuing is interoperability. If data collection is the oxygen that fuels connected health, data sharing is the trachea that brings it where it’s needed. Without interoperability, clinicians cannot aid patients in their homes, analysts cannot derive insights that inform treatments, and transitions to assisted living facilities or other environments will lead to poor care.

But the health care field is notoriously bad at data sharing. The usual explanation is that doctors want to make it hard for competitors to win away their patients. If that’s true, fee-for-value reimbursements will make them even more possessive. After all, under fee-for-value, clinicians are held accountable for patient outcomes over a long period of time. They won’t want to lose control of the patient. I first heard of this danger at a 2012 conference (described in the section titled “Low-hanging fruit signals a new path for cost savings”).

So the trade press routinely and ponderously reports that once again, years have gone by without much progress in data sharing. The US government recognizes that support for interoperability is unsatisfactory, and has recently changed the ONC certification program to focus on it.

Carla Kriwet, CEO of Connected Care and Health Informatics at Philips, was asked in her keynote Fireside Chat to rate the interoperability of health data on a scale from 0 to 10, and chose a measly 3. She declared that “we don’t believe in closed systems at all” and told me in an interview that Philips is committed to creating integrated solutions that work with any and all products. Although Philips devices are legendary in many domains, Kriwet wants customers to pay for outcomes, not devices.

For instance, Philips recently acquired the Wellcentive platform that allows better care in hospitals by adopting population health approaches that look at whole patient populations to find what works. The platform works with a wide range of input sources and is meant to understand patient populations, navigate care and activate patients. Philips also creates dashboards with output driven by artificial intelligence–the Philips IntelliVue Guardian solution with Early Warning Scoring (EWS)–that leverages predictive analytics to present critical information about patient deterioration to nurses and physicians. This lets them intervene quickly before an adverse event occurs, without the need for logging in repeatedly. (This is an example of another trend I cover in this article, the search for simpler interfaces.)
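Early warning scores of this kind are generally computed by binning each vital sign into a risk band and summing the bands. The sketch below illustrates that general technique with invented thresholds; it is not Philips' EWS model.

```python
# Simplified early-warning score: each vital sign is binned into a risk
# band (0 = normal, higher = more deranged) and the bands are summed.
# The thresholds below are illustrative only, not Philips' EWS model.

def band(value, ranges):
    """Count how many progressively wider 'acceptable' ranges the value
    falls outside of (0 = normal, up to 3 = severely deranged)."""
    return sum(1 for low, high in ranges if not low <= value <= high)

def early_warning_score(heart_rate, resp_rate, systolic_bp, temp_c):
    score = 0
    score += band(heart_rate, [(51, 100), (41, 110), (31, 130)])
    score += band(resp_rate, [(12, 20), (9, 24), (8, 30)])
    score += band(systolic_bp, [(101, 160), (91, 180), (81, 200)])
    score += band(temp_c, [(36.1, 38.0), (35.1, 38.9), (34.1, 39.5)])
    return score

# A deteriorating patient crosses the alert threshold well before a crisis:
score = early_warning_score(heart_rate=128, resp_rate=26,
                            systolic_bp=88, temp_c=38.6)
if score >= 5:
    print("ALERT: escalate for possible deterioration")
```

The point of a dashboard built on such a score is exactly what Kriwet described: the nurse sees the alert, not the raw numbers, and never has to log in and hunt for them.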

Kriwet also told me that Philips has incorporated the principles of agile programming throughout the company. Sprints of a few weeks develop their products, and “the boundary comes down” between R&D and the sales team.

I also met with Jon Michaeli, EVP of Strategic Partnerships at Medisafe, a company that I covered two years ago. Medisafe is one of a slew of companies that encourage medication adherence. Always intensely focused on taking in data and engaging patients in a personalized way, Medisafe has upped the sophistication of its solution, partly by integrating with other technologies. One recent example is its Safety Net, provided by the artificial intelligence platform Neura. For instance, if you normally cart your cell phone around with you, but it lies quiet from 10:00 PM until 6:00 AM, Safety Net may determine that the reason you missed your bedtime dose at 11:00 PM was that you had already fallen asleep. If Safety Net sees recurring patterns of behavior, it will adjust reminder times automatically.
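The inference described above can be sketched simply: treat a stretch of phone inactivity as probable sleep, and shift the recurring reminder earlier once the pattern repeats. Everything below (the function names, the three-miss threshold, the 30-minute shift) is my own invention to illustrate the idea, not Neura's or Medisafe's actual logic.

```python
from datetime import datetime, time, timedelta

# Hypothetical sketch of the Safety Net idea: if the phone was inactive at
# the scheduled dose time, attribute the miss to sleep rather than
# forgetfulness, and nudge the recurring reminder earlier.

def phone_was_inactive(activity_log, at):
    """activity_log: list of (start, end) datetimes of observed phone use."""
    return not any(start <= at <= end for start, end in activity_log)

def adjust_reminder(reminder, missed_times, activity_log):
    """Shift a recurring reminder 30 minutes earlier when recent misses
    coincide with phone inactivity (the user was probably asleep)."""
    asleep_misses = [t for t in missed_times
                     if phone_was_inactive(activity_log, t)]
    if len(asleep_misses) >= 3:  # a recurring pattern, not a one-off
        shifted = datetime.combine(datetime.today(), reminder) - timedelta(minutes=30)
        return shifted.time()
    return reminder

# Three nights in a row the phone went quiet before the 11:00 PM dose:
misses = [datetime(2017, 11, d, 23, 0) for d in (1, 2, 3)]
activity = [(datetime(2017, 11, d, 8, 0), datetime(2017, 11, d, 22, 0))
            for d in (1, 2, 3)]
print(adjust_reminder(time(23, 0), misses, activity))  # 22:30:00
```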

Medisafe also gives users the option of recording medication adherence through sensors rather than responding to reminders. The user’s phone communicates over Bluetooth with a pill bottle cap (“iCap”) that replaces the standard medicine cap and lets the service know when you have opened the bottle. The iCap fits the vast majority of medicine bottles dispensed by U.S. pharmacies and costs only $20 ($40 for a pack of two), so you can buy several and use them for as long as you’re taking your medicine.

On another level, Mivatek provides some of the low-level scaffolding to connected health by furnishing data from devices to systems developed by the company’s clients. Suppose, for instance, that a company is developing a system that responds to patients who fall. Mivatek can help them take input from a button on the patient’s phone, from a camera, from a fall detector, or anything else to which Mivatek can connect. The user can add a device to his system simply by taking a picture of the bar code with his phone.

Jorge Perdomo, Senior Vice President Corporate Strategy & Development at Mivatek, told me that these devices work with virtually all of the available protocols on the market that have been developed to promote interoperability. In supporting WiFi, Mivatek loads an agent into its system to provide an additional level of security. This prevents device hacking and creates an easy-to-install experience with no setup requirements.

Blockchains
Most famous as the key technological innovation supporting Bitcoin, blockchains have broad application as data stores that record transactions securely. They can be used in health care for granting permissions to data and other contractual matters. The enticement offered by this technology is that no central institution controls or stores the blockchain. Participants can distribute the responsibility for storage and avoid ceding control to one institution.

Blockchains do, however, suffer from inherent scaling problems: they grow linearly as people add transactions, the additions must be done synchronously, and every participant must store the whole chain. But for a limited set of participants and relatively rare updates (for instance, recording just the granting of permissions to data, not each chunk of data exchanged), the technology holds great promise.
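To make the trade-off concrete, a permission ledger of this sort is at heart a hash-linked list: each block commits to the hash of the previous one, so tampering with any entry invalidates every later link, but every participant must keep and verify the whole chain. A minimal sketch, illustrative only and not any product's implementation:

```python
import hashlib
import json

# Minimal hash-chained ledger recording permission grants. Rare events
# (grants) suit this structure better than every chunk of data exchanged,
# because the whole chain must be stored and re-verified.

def block_hash(block):
    # Canonical JSON serialization so the hash is reproducible.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class PermissionLedger:
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "grant": None}]  # genesis block

    def grant(self, patient, grantee, scope):
        self.chain.append({"prev": block_hash(self.chain[-1]),
                           "grant": {"patient": patient,
                                     "grantee": grantee,
                                     "scope": scope}})

    def verify(self):
        """Recompute every link; tampering with any block breaks the chain."""
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = PermissionLedger()
ledger.grant("patient-123", "research-lab-A", "glucose readings")
ledger.grant("patient-123", "research-lab-B", "step counts")
assert ledger.verify()
ledger.chain[1]["grant"]["grantee"] = "someone-else"  # rewrite history...
assert not ledger.verify()                            # ...and the chain breaks
```

Note that the linear growth the paragraph above describes is visible here: `verify()` touches every block ever added, which is tolerable for occasional permission grants and ruinous for high-volume data exchange.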

Although I see a limited role for blockchains, the conference gave considerable bandwidth to the concept. In a keynote that was devoted to blockchains, Dr. Samir Damani described how one of his companies, MintHealth, planned to use them to give individuals control over health data that is currently held by clinicians or researchers–and withheld from the individuals themselves.

I have previously covered the importance of patient health records, and the open source project spotlighted by that article, HIE of One, now intends to use a blockchain in a manner similar to MintHealth. In both projects, the patient owns his own data. MintHealth adds the innovation of offering rewards, delivered through the blockchain, for patients who share their data with researchers. The reward system is quite intriguing, because it would create for the first time a real market for highly valuable patient data, and thus lead to more research use along with fair compensation for the patients. MintHealth’s reward system also fits the connected health vision of promoting healthy behavior on a daily basis, to reduce chronic illness and health care costs.

Conclusion
Although progress toward connected health comes in fits and starts, the Connected Health Conference is still a bright spot in health care each year. For the first time this year, Partners’ Center for Connected Health partnered with another organization, the Personal Connected Health Alliance, and the combination seems to be a positive one. Certain changes were noticeable: for instance, all the breakout sessions were panels, and the keynotes were punctuated by annoying ads. An interesting focus this year was wellness in aging, the topic of the final panel. One surprising difference was the absence of the patient advocates from the Society for Participatory Medicine whom I’m used to meeting each year at this conference, perhaps because they held their own conference the day before.

The Center for Connected Health’s Joseph Kvedar still ran the program team, and the themes were familiar from previous years. This conference has become my touchstone for understanding health IT, and it will continue to be the place to go to track the progress of health care reform from a technological standpoint.

Measuring the Vital Signs of Health Care Progress at the Connected Health Conference (Part 2 of 3)

Posted on November 15, 2017 | Written By


The first segment of this article introduced the themes of the Connected Health Conference and discussed the importance of validating new technologies through trials or studies, as is done for traditional medical advances. This segment continues my investigation into another major theme in health care: advanced interfaces.

Speaker from Validic at Connected Health Conference

Advanced interfaces
The obligatory picture of health care we’re accustomed to seeing, whenever we view hospital propaganda or marketing from health care companies, shows a patient in an awkward gown seated on an uncomfortable examination table. A doctor faces him or her full on–not a computer screen in sight–exuding concern, wisdom, friendliness, and professionalism.

More and more, however, health sites are replacing this canonical photograph with one of a mobile phone screen speckled with indicators of our vital signs, or a thumbnail shot of our caregivers. The promise being conveyed is no longer care from a trusted clinician in the office, but instant access to all our information through a medium familiar to almost everyone everywhere–the personal mobile device.

But even touchscreen access to the world of the cloud is beginning to seem fusty. Typing in everything you eat with your thumbs, or even answering daily surveys about your mental state, gets old fast. As Dr. Yechiel Engelhard of TEVA said in his keynote, patients don’t want to put a lot of time into managing their illnesses, nor do doctors want to change their workflows. So I’m fascinated with connected health solutions that take the friction out of data collection and transmission.

One clear trend is the move to voice–or rather, I should say back to voice, because it is the original form of human communication for precise data. The popularity of Amazon Echo, along with Siri and similar interfaces, shows that this technology will hit a fever pitch soon. One research firm found that voice-triggered devices more than doubled in popularity between 2015 and 2016, and that more than half of Americans would like such a device in the home.

I recently covered a health care challenge using Amazon Alexa that demonstrates how the technology can power connected health solutions. Most of the finalists in the challenge were doing the things that the Connected Health Conference talks about incessantly: easy and frequent interactions with patients, analytics to uncover health problems, integration with health care providers, personalization, and so on.

Orbita is another company capitalizing on voice interfaces to deliver a range of connected health solutions, from simple medication reminders to complete care management applications for diabetes. I talked to CEO Bill Rogers, who explained that they provide a platform for integrating with AI engines provided by other services to carry out communication with individuals through whatever technology they have available. Thus, Orbita can talk through Echo, send SMS messages, interact with a fitness device or smart scale, or even deliver a reminder over a plain telephone interface.
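A platform like the one Rogers describes can be pictured as a thin dispatch layer over interchangeable delivery channels. The sketch below is my own illustration of that pattern; all class and function names are invented, not Orbita's API.

```python
# Illustrative channel-dispatch pattern: one reminder, many interchangeable
# delivery back ends (voice assistant, SMS, plain telephone, ...).

class Channel:
    def deliver(self, patient, message):
        raise NotImplementedError

class SmsChannel(Channel):
    def deliver(self, patient, message):
        return f"SMS to {patient}: {message}"

class VoiceChannel(Channel):
    def deliver(self, patient, message):
        return f"Spoken to {patient}: {message}"

def send_reminder(patient, message, channels, preferred):
    """Deliver over the preferred channel, falling back to whatever
    the patient actually has available."""
    order = [preferred] + [name for name in channels if name != preferred]
    for name in order:
        channel = channels.get(name)
        if channel is not None:
            return channel.deliver(patient, message)
    raise RuntimeError("no delivery channel available")

channels = {"sms": SmsChannel(), "voice": VoiceChannel()}
print(send_reminder("Ann", "Time for your evening dose", channels, "voice"))
```

The design point is that the care logic (what to say, and when) stays identical whether the message ends up on an Echo, a phone, or a smart scale; only the back end changes.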

One client of Orbita uses its platform to run a voice bot that talks to patients during their discharge process. The bot provides post-discharge care instructions and answers patients’ questions about things like pain management and surgery wound care. The results show that patients are more willing to ask questions of the bot than of a discharge nurse, perhaps because they’re not afraid of wasting someone’s time. Rogers also said services are improving their affective interfaces, which respond to the emotional tone of the patient.

Another trick to avoid complex interfaces is to gather as much data as possible from the patient’s behavior (with her consent, of course) to eliminate totally the need for her to manually enter data, or even press a button. Devices are getting closer to this kind of context-awareness. Following are some of the advances I enjoyed seeing at the Connected Health Conference.

  • PulseOn puts more health data collection into a wrist device than I’ve ever seen. Beyond the usual fitness applications, the company claims to detect atrial fibrillation and sleep apnea by shining a light on the user’s skin and measuring changes in reflections caused by variations in blood flow.
  • A finger-sized device called Gocap, from Common Sensing, measures insulin use and reports it over wireless connections to clinical care-takers. The device is placed over the needle end of an insulin pen, determines how much was injected by measuring the amount of fluid dispensed after a dose, and transmits care activity to clinicians through a companion app on the user’s smartphone. Thus, without having to enter any information by hand, people with diabetes can keep the clinicians up to date on their treatment.
  • One of the cleverest devices I saw was a comprehensive examination tool from Tyto Care. A small kit can carry the elements of a home health care exam, all focused on a cute little sphere that fits easily in the palm. Jeff Cutler, Chief Revenue Officer, showed me a simple check on the heart, ear, and throat that anyone can perform. You can do it with a doctor on the other end of a video connection, or save the data and send it to a doctor for later evaluation.

    Tyto Care has a home version that is currently being used and distributed by partners such as health systems, providers, payers and employers, but will ultimately be available for sale to consumers for $299. They also offer a professional and remote clinic version that’s tailor-made for a school or assisted living facility.

A new Digital Therapeutics Alliance was announced just before the conference, hoping to promote more effective medical devices and allow solutions to scale up through such things as improving standards and regulations. Among other things, the alliance will encourage clinical trials, which I have already highlighted as critical.

Big advances were also announced by Validic, which I covered last year. Formerly a connectivity solution that unraveled the varying quasi-standard or non-standard protocols of different devices in order to take their data into electronic health records, Validic has created a new streaming API that allows much faster data transfers, at a much higher volume. On top of this platform they have built a notification service called Inform, which takes them from a networking solution to a part of the clinicians’ workflow.

Considerable new infrastructure is required to provide such services. For instance, like many medication adherence services, Validic can recognize when time has gone by without a patient reporting that he’s taken his pill. This level of monitoring requires storing large amounts of longitudinal data–and in fact, Validic is storing all transactions carried out over its platform. The value of such a large data set for discovering future health care solutions through analytics can make data scientists salivate.
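Recognizing a lapsed dose is, at bottom, a comparison of a schedule against reported events within a grace window. A minimal sketch of the idea (my own illustration, not Validic's implementation):

```python
from datetime import datetime, timedelta

# Sketch of adherence-gap detection over longitudinal data: flag every
# scheduled dose with no reported event inside a grace window.

def missed_doses(schedule, reports, grace=timedelta(hours=2)):
    """Return scheduled times with no report within +/- grace."""
    return [due for due in schedule
            if not any(abs(report - due) <= grace for report in reports)]

schedule = [datetime(2017, 11, day, 9, 0) for day in (1, 2, 3)]
reports = [datetime(2017, 11, 1, 9, 20),   # Nov 1: taken, a bit late
           datetime(2017, 11, 3, 10, 45)]  # Nov 3: taken within the window
print(missed_doses(schedule, reports))     # only the Nov 2 dose is flagged
```

Even this toy version shows why the storage burden grows: every dose ever scheduled and every event ever reported must be retained to spot the gaps.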

The next segment of this article wraps up coverage of the conference with two more themes.

Measuring the Vital Signs of Health Care Progress at the Connected Health Conference (Part 1 of 3)

Posted on November 13, 2017 | Written By


Attendees at each Connected Health Conference know by now the architecture of health reform promoted there. The term “connected health” has been associated with a sophisticated amalgam of detailed wellness plans, modern sensors, continuous data collection in the field, patient control over data, frequent alerts and reminders, and analytics to create a learning health care system. The mix remains the same each year, so I go each time to seek out progress toward the collective goal. This year, I’ve been researching what’s happening in these areas:

  • Validation through clinical trials
  • Advanced interfaces to make user interaction easier
  • Improved data sharing (interoperability)
  • Blockchains

Panel at Connected Health Conference

There were a few other trends of interest, which I’ll mention briefly here. Virtual reality (VR) and augmented reality (AR) turned up at some exhibitor booths and were the topic of a panel. Some of these technologies run on generic digital devices–such as the obsession-inducing Pokémon GO game–while others require special goggles such as the Oculus Rift (the first VR technology to show promise for widespread adoption, and now acquired by Facebook) or Microsoft’s HoloLens. VR shuts out the user’s surroundings and presents her with a 360-degree fantasy world, whereas AR superimposes information or images on the surroundings. Both VR and AR are useful for teaching, such as showing a 3D organ in front of a medical student on a HoloLens and rotating it or splitting it apart to show details.

I haven’t yet mentioned the popular buzzword “telehealth,” because it’s subsumed under the larger goal of connected health. I do use the term “artificial intelligence,” certainly a phrase that has gotten thrown around too much, and whose meaning is the subject of much dissension. Everybody wants to claim the use of artificial intelligence, just as a few years ago everybody talked about “the cloud.” At the conference, a panel of three experts took up the topic and gave three different definitions of the term. Rather than try to identify the exact algorithms used by each product in this article and parse out whether they constitute “real” artificial intelligence, I go ahead and use the term as my interviewees use it.

Exhibition hall at Connected Health Conference

Let’s look now at my main research topics.

Validation through clinical trials
Health apps and consumer devices can be marketed like vitamin pills, on vague impressions that they’re virtuous and that doing something is better than doing nothing. But if you want to hook into the movement for wellness–connected health–you need to prove your value to the whole ecosystem of clinicians and caretakers. The consumer market just doesn’t work for serious health care solutions. Expecting an individual to pay for a service or product would limit you to those who can afford it out-of-pocket, and who are concerned enough about wellness to drag out their wallets.

So a successful business model involves broaching the gates of Mordor and persuading insurers or clinicians to recommend your solution. And these institutions won’t budge until you have trials or studies showing that you actually make a difference–and that you won’t hurt anybody.

A few savvy app and device developers build in such studies early in their existence. For instance, last year I covered a typical connected health solution called Twine Health, detailing their successful diabetes and hypertension trials. Twine Health combines the key elements that one finds all over the Connected Health Conference: a care plan, patient tracking, data analysis, and regular check-ins. Their business model is to work with employer-owned health plans, and to expand to clinicians as they gradually migrate to fee-for-value reimbursement.

I sense that awareness is growing among app and device developers that the way to open doors in health care is to test their solutions rigorously and objectively. But I haven’t found many who do so yet.

The next segment of this article continues my exploration of the key themes identified at the start.

Study: “Information Blocking” By Vendors And Providers Persists

Posted on April 6, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

A newly-released study suggests that both EHR vendors and providers may still be interfering with the free exchange of patient healthcare data. The researchers concluded that despite the hearty disapproval of both Congress and healthcare providers, the two still consider “information blocking” to be in their financial interest.

To conduct the study, which appears in this month’s issue of The Milbank Quarterly, researchers conducted a national survey between October 2015 and January 2016. Researchers reached out to leaders driving HIE efforts among provider organizations. The study focused on how often information blocking took place, what forms it took and how effective various policy strategies might be at stopping the practice.

It certainly seems that the practice continues to be a major issue of concern to HIE leaders. Eighty-three percent of respondents said they were very familiar with information blocking, while 12 percent reported having some familiarity with the practice and 5 percent said they had minimal familiarity. The respondents offered a good cross-industry view, having worked with an average of 18 EHR vendors and 31 hospitals or health systems.

Forms of Blocking:

If the research is accurate, information blocking is a widespread and persistent problem.

When questioned about specific forms of information blocking by EHR vendors, 29 percent of respondents said that vendors often or routinely roll out products with limited interoperability capabilities. Meanwhile, 47 percent said that vendors routinely or often charge high fees for sharing data across HIEs, and 42 percent said that the vendors routinely or often make third-party access to standardized data harder than it needs to be. (For some reason, the study didn’t mention what types of information blocking providers have instituted.)

Frequency of blocking:

It’s hardly surprising that most of the respondents were familiar with information blocking issues, given how often the issue comes up.

In fact, a full 50 percent said that EHR vendors routinely engaged in information blocking, 33 percent said that the vendors blocked information occasionally, and only 17 percent said that EHR vendors rarely did so.

Interestingly, the HIE managers said that providers also engaged in information blocking, though less often than vendors. Twenty-five percent reported that providers routinely engage in information blocking, and 34 percent said that providers did so occasionally. Meanwhile, 41 percent said information blocking by providers was rare.

Motivations for blocking:

Why do HIE participants block the flow of health data? It seems that at present they get something important out of it, and unless somebody stops them it makes sense to continue.

When it came to EHR vendors, the respondents felt that their motivations included a desire to maximize short-term revenue, with 41 percent reporting that this was a routine motivation and 28 percent that it was an occasional motivation. They also felt EHR vendors blocked information to improve the chances that providers would choose their platform over competing products, with 44 percent of respondents saying this was routine and 11 percent that it was occasional.

Meanwhile, they believed that for hospitals and health systems, the most common motivation was to improve revenue by strengthening their competitive advantage, with 47 percent seeing this as routine and 30 percent as occasional. Also, respondents said providers wanted to accommodate priorities other than data exchange, with 29 percent seeing this as routine and 31 percent as occasional.

Solutions:

So what can be done about vendor and provider information blocking? There are a number of ways policymakers can get involved, but few have done so as of yet.

When given a choice of policy-based strategies, 67 percent said that making the practice illegal would be very effective. Respondents also rated three other strategies as very or moderately effective: prohibiting gag clauses and encouraging public reporting and comparisons of vendors and their products (93 percent); requiring stronger demonstrations of product interoperability (92 percent); and national policies defining standards for core aspects of information exchange.

Meanwhile, when it came to reducing information blocking by providers, respondents recommended that CMS roll out stronger incentives for care coordination and risk-based contracts (97 percent) and public reporting or other efforts shining a spotlight on provider business practices (93 percent).

Exchange Value: A Review of Our Bodies, Our Data by Adam Tanner (Part 3 of 3)

Posted on January 27, 2017 | Written By


The previous part of this article raised the question of whether data brokering in health care is responsible for raising or lowering costs. My argument that it increases costs looks at three common targets for marketing:

  • Patients, who are targeted by clinicians for treatments they may not need or have thought of

  • Doctors, who are directed by pharma companies toward expensive drugs that might not pay off in effectiveness

  • Payers, who pay more for diagnoses and procedures because analytics help doctors maximize charges

Tanner flags the pharma industry for selling drugs that perform no better than cheaper alternatives (Chapter 13, page 146), and even drugs that are barely effective at all despite having undergone clinical trials. Anyway, Tanner cites Hong Kong and Europe as places far more protective of personal data than the United States (Chapter 14, page 152), and they don’t suffer higher health care costs–quite the contrary.

Strangely, there is no real evidence so far that data sales have produced either harm to patients or treatment breakthroughs (Conclusion, page 163). But the supermarket analogy does open up the possibility that patients could be induced to share anonymized data voluntarily by being reimbursed for it (Chapter 14, page 157). I have heard this idea aired many times, and it fits with the larger movement called Vendor Relationship Management. The problem with such ideas is the close horizon limiting our vision in a fast-moving technological world. People can probably understand and agree to share data for particular research projects, with or without financial reimbursement. But many researchers keep data for decades and recombine it with other data sets for unanticipated projects. If patients are to sign open-ended, long-term agreements, how can they judge the potential benefits and potential risks of releasing their data?

Data for sale, but not for treatment

In Chapter 11, Tanner takes up the perennial question of patient activists: why can drug companies get detailed reports on patient conditions and medications, but my specialist has to repeat a test on me because she can’t get my records from the doctor who referred me to her? Tanner mercifully shields us here from the technical arguments behind this question–sparing us, for instance, a detailed discussion of vagaries in HL7 specifications or workflow issues in the use of Health Information Exchanges–but strongly suggests that the problem lies with the motivations of health care providers, not with technical interoperability.

And this makes sense. Doctors do not have to engage in explicit “blocking” (a slippery term) to keep data away from fellow practitioners. For a long time they were used to just saying “no” to requests for data, even after that was made illegal by HIPAA. But their obstruction is facilitated by vendors equally uninterested in data exchange. Here Tanner discards his usual pugilistic journalism and gives Judy Faulkner an easy time of it (perhaps because she was a rare CEO polite enough to talk to him, and also because she expressed an ethical aversion to sharing patient data). He doesn’t air such facts as the incompatibilities between different Epic installations, Epic’s tendency to exchange records only with other Epic installations, and the difficulties it creates for companies that want to interconnect.

Tanner does not address a revolution in data storage that many patient advocates have called for, which would at one stroke address both the Chapter 11 problem of patient access to data and the book’s larger critique of data selling: storing the data at a site controlled by the patient. If the patient determined who got access to data, she would simply open it to each new specialist or team she encounters. She could also grant access to researchers and even, if she chooses, to marketers.

What we can learn from Chapter 9 (although Tanner does not tell us this) is that health care organizations are poorly prepared to protect data. In this woeful weakness they are just like TJX (owner of the T.J. Maxx stores), major financial institutions, and the Democratic National Committee. All of these leading institutions have suffered breaches enabled by weak computer security. Patients and doctors may feel reluctant to put data online in the current environment of vulnerability, but there is nothing special about the health care field that makes it more vulnerable than other institutions. Here again, storing the data with the individual patient may break it into smaller components and therefore make it harder for attackers to find.

Patient health records present new challenges, but the technology is in place and the industry can develop consent mechanisms to smooth out the processes for data exchange. Furthermore, some data will still remain with the labs and pharmacies that have to collect it for financial reasons, and the Supreme Court has given them the right to market that data.

So we are left with ambiguities throughout the area of health data collection. There are few clear paths forward and many trade-offs to make. In this I agree ultimately with Tanner. He said that his book was meant to open a discussion. Among many of us, the discussion has already started, and Tanner provides valuable input.

Are We Waiting For An Interoperability Miracle?

Posted on December 12, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

Today, in reading over some industry news, my eyes settled on an advertising headline that gave me pause: “Is Middleware The Next Interoperability Miracle?”  Now, I have to admit a couple of things: 1) that vendors have to pitch the latest white paper with all the fervor they can command, and 2) that it never hurts to provoke conversation with a strong assertion. But seeing a professional advertisement include the word “miracle” — a hyperbolic term which you might use to sell dishwashers — still took me aback a bit.

And then I began to think about what I had seen. I wondered whether it will really take a miracle to achieve health data interoperability sometime in our lifetime. I asked myself whether health IT insiders like you, dear readers, are actually that discouraged. And I wondered if any vendor truly believes that they can produce such a miracle, if indeed one is needed.

First, let’s ask ourselves about whether we need a Hail Mary pass or even a miracle to salvage industry hopes for data interoperability. I’m afraid that in my view, the answer is quite possibly yes. In saying this, I’m assuming that interoperability must arrive soon to meet our current needs, at least within the next several years.

Unfortunately, nothing I’ve seen suggests that we can realistically achieve robust interoperability within, say, the next 5 to 10 years, whatever appearances may suggest to the contrary. I know some readers may disagree with me, but as I see it the combination of technical and behavioral obstacles to interoperability is just too profound to be addressed in a timely manner.

Okay, then, on to whether the health IT rank and file are so burned out on interoperability efforts that they just want the problem taken off their hands. If they did, I would certainly sympathize, as the forces in play here are beyond the control of any individual IT staffer, consultant, hospital or health system. The forces holding back interoperability are interwoven with technical, financial, policy and operational issues which can’t be addressed without a high level of cooperation between competing entities — and perhaps not even then.

So, back to where we started. Headline aside, does the vendor in question or any other truly believe that they can engineer a solution to such an intractable problem, conquer world interoperability issues and grow richer than Scrooge McDuck? Probably not. Interoperability is a set of behaviors as much as a technology, and I doubt even the cockiest startup thinks it can capture that many hearts and minds.

Ultimately, though, whoever wrote that headline is probably keying into something real. While the people doing the hard work of attempting health data sharing aren’t exactly desperate, I think there’s a growing sense that we’re running out of time to get this thing done. Obviously, other than artificial ones imposed by laws and regulations, we aren’t facing any actual deadline, but things can’t go on like this forever.

In fact, I’d argue that if we don’t create a useful interoperability model soon, a window of opportunity for doing so will be lost for quite some time. After all, we can’t keep spending on this infrastructure if it’s never going to offer a payback.

The cold reality is that eventually, the data sharing system we have — such as it is — will fall apart of its own weight, as organizations simply stop paying for their part of it. So while we might not need a miracle as such, being granted one wouldn’t hurt. If this effort fails us, who knows when we’ll have the time and money to try again.

Sansoro Hopes Its Health Record API Will Unite Them All

Posted on June 20, 2016 | Written By


After some seven years of watching the US government push interoperability among health records, and hearing how far we are from achieving it, I assumed that fundamental divergences among electronic health records at different sites posed problems of staggering complexity. I pricked up my ears, therefore, when John Orosco, CTO of Sansoro Health, said that they could get EHRs to expose real-time web services in a few hours, or at most a couple days.

What does Sansoro do? Its goal, like that of the FHIR standard, is to give health care providers and third-party developers a single go-to API where they can run their apps on any supported EHR. Done right, such a service cuts down development costs and saves developers from having to distribute a different version of their app for different customers. Note that the SMART project tries to achieve a similar goal by providing an API layer on top of EHRs for producing user interfaces, whereas Sansoro, like FHIR, offers an API at a lower level, on particular data items.
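The appeal of a single go-to API is easiest to see in FHIR’s own terms. Here is a minimal sketch (the sample bundle and its field values are invented for illustration) of how an app could read medications without caring which vendor’s EHR produced the data; the Bundle and MedicationRequest shapes follow the published FHIR specification:

```python
# A FHIR search returns a "Bundle" resource wrapping individual entries.
# This helper pulls the display text out of each MedicationRequest,
# regardless of which vendor's server produced the bundle.
def medication_names(bundle: dict) -> list:
    names = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "MedicationRequest":
            continue
        concept = resource.get("medicationCodeableConcept", {})
        names.append(concept.get("text", "unknown"))
    return names

# A tiny, invented bundle in the shape a FHIR server would return
sample = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {"resourceType": "MedicationRequest",
                      "medicationCodeableConcept": {"text": "Lisinopril 10 MG"}}},
    ],
}
print(medication_names(sample))  # ['Lisinopril 10 MG']
```

That is the promise of a common API layer: the code above never needs to know whose EHR it is talking to.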

Sansoro was formed in the summer of 2014. Researching EHRs, its founders recognized that even though the vendors differed in many superficial ways (including the purportedly standard CCDs they create), all EHRs dealt at bottom with the same fields. Diagnoses, lab orders, allergies, medications, etc. are the same throughout the industry, so the same familiar items turn up beneath the varying semantics.
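That observation is the heart of any EHR-agnostic API: a translation table per vendor, mapping each vendor’s labels onto one common model. A toy sketch of the idea (the vendor names and field labels here are invented, not Sansoro’s actual schema):

```python
# Hypothetical per-vendor field names for the same underlying concepts.
# Real EHR schemas differ, but the normalization idea is the same:
# translate each vendor's labels into one common model.
VENDOR_FIELD_MAPS = {
    "vendor_a": {"dx_code": "diagnosis", "rx_list": "medications"},
    "vendor_b": {"DiagnosisCd": "diagnosis", "MedOrders": "medications"},
}

def normalize(record: dict, vendor: str) -> dict:
    """Rename a vendor-specific record's fields to the common model."""
    mapping = VENDOR_FIELD_MAPS[vendor]
    return {mapping.get(key, key): value for key, value in record.items()}

print(normalize({"dx_code": "E11.9", "rx_list": ["metformin"]}, "vendor_a"))
# {'diagnosis': 'E11.9', 'medications': ['metformin']}
```

Once every vendor’s records pass through such a layer, an app written against the common model runs unchanged on any of them.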

FHIR was just starting at that time, and is still maturing. Therefore, while planning to support FHIR as it becomes ready, Sansoro designed its own data model and API to meet the industry’s needs right now. It is gradually adding the FHIR interfaces it considers mature to its Emissary application.

Sansoro aimed first at the acute care market, and is expanding to support ambulatory EHR platforms. At the beginning, based on market share, Sansoro chose to focus on the Cerner and Epic EHRs, both of which offer limited web services modules to their customers. Then, listening to customer needs, Sansoro added MEDITECH and Allscripts; it will continue to follow customer priorities.

Although Orosco acknowledged that EHR vendors are already moving toward interoperability, their services are currently limited and focus on their own platforms. For various reasons, they may implement the FHIR specification differently. (Health IT experts hope that the Argonaut Project will ensure semantic interoperability for at least the most common FHIR items.) Sansoro, in contrast, can expose any field in the EHR through its APIs, thus serving the health care community’s immediate needs in an EHR-agnostic manner. Emissary may keep the field from going the way of the CCD, where each vendor can implement a different API and still claim to be compliant.

This kind of fragmented interface is a constant risk in markets that proprietary companies are rapidly entering and competing in. There is also a risk, therefore, that many competitors will enter the API market as Sansoro has done, reproducing at a higher level the minor and annoying differences between EHR vendors.

But Orosco reminded me that Google, Facebook, and Microsoft all have competing APIs for messaging, identity management, and other services. The benefits of competition, even when people have to use different interfaces, drive a field forward, and the same can happen in healthcare. Two paths face us: allow rapid entry of multiple vendors and learn from experience, or spend a long time developing a robust standard in an open manner for all to use. Luckily, given Sansoro and FHIR, we have both options.

Mobile PHRs On The Way — Slowly

Posted on October 24, 2013 | Written By


On-demand mobile PHRs are likely to emerge over time, but not until the healthcare industry does something to mend its interoperability problems, according to a new report from research firm Frost & Sullivan.

As the paper notes, mobile application development is moving at a brisk clip, driven by consumer and governmental demands for better quality care, lower healthcare costs and improved access to information.

The problem is, it’s hard to create mobile products — especially a mobile PHR — when the various sectors of the healthcare industry don’t share data effectively. According to Frost & Sullivan, it will be necessary to connect providers, hospitals, physician specialty groups, imaging centers, laboratories, payers and government entities, each of which has operated within its own informational silos and deployed its own unique infrastructure.

The healthcare industry will also need to resolve still-undecided questions as to who owns patient information, Frost & Sullivan suggests. As things stand, “the patient does not own his or her health information, as this data is stored within the IT protocols of the EHR system, proprietary to providers, hospitals and health systems,” said Frost & Sullivan Connected Health Senior Industry Analyst Patrick Riley in a press statement.

While patient ownership of medical data sounds like a problem worth addressing, the industry hasn’t shown the will to address it. To date, efforts to address the issue of who owns digital files have been met with a “tepid” response, the release notes.

However, it’s clear that outside vendors can solve the problem if they see a need. For example, consider the recent deal in which Allscripts agreed to supply clinical data to health plans.  Allscripts plans to funnel data from participating users of its ambulatory EMR to vendor Inovalon, which aggregates claims, lab, pharmacy, durable medical equipment, functional status and patient demographics for payers. Providers are getting patient-level analyses of the data in return for their participation.

Deals like this one suggest that rather than wait for interoperability, bringing together the data for a robust mobile PHR may fall to a third party. Which party, what it would cost to work with them, and how the data collection would work are just a few of the big problems that would have to be solved, but it may be that or nothing for the foreseeable future.

Great Response to Blumenthal Interview

Posted on February 1, 2011 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

The other day I came across an interview with David Blumenthal. I didn’t find anything all that meaningful in the interview itself. However, in the comments, someone provided some really interesting commentary on what Blumenthal said in the interview.

Dr. Blumenthal says we need operability before we move to interoperability. Yet if you don’t design your systems from the start to interoperate, you’ll inevitably wind up with operable systems that do not interoperate – at all. Having accomplished this, we’ll then have to develop and impose an after-the-fact standard to which all systems must comply. This will mean redesign, retrofit, and plastering all kinds of middleware layers between disparate systems. It may even result in retraining tomorrow all those providers you hope will learn new ways of working today.

Dr. Blumenthal also says that new and better technology is coming out every day. Yet the current incentive and certification programs heavily favor the older technology which he himself says frightens many providers away from this migration. Many of the older vendors have huge installed bases and old technology. They no doubt influence advisory boards to lean towards what is rather than what might be, all assurances to the contrary.

The cost of fixing practically anything is much higher than doing it correctly the first time. I realize you can’t design perfection, and anything we build will need adaptation and improvement. But we’re following a path that ensures that we will have to do much more fixing than we would if we’d just stop and think a bit more.

The inevitability of this evolution is not a justification for doing it carelessly.

Talk about bringing up some valid issues. The second one really hits me: the incentive money favors older technology. I’m afraid this is very much the case, and that 5 years from now the major topic we’re covering on EMR and HIPAA will be switching EMRs.