
Epic and other EHR vendors caught in dilemmas by APIs (Part 2 of 2)

Posted on March 16, 2017 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The first section of this article reported some news about Epic’s Orchard, a new attempt to provide an “app store” for health care. In this section we look over the role of APIs as seen by EHR vendors such as Epic.

The Roles of EHRs

Dr. Travis Good, with whom I spoke for this article, pointed out that EHRs glom together two distinct functions: a canonical, trusted store for patient data and an interface that becomes a key part of the clinician workflow. They are being challenged in both these areas, for different reasons.

As a data store, EHRs satisfied user needs for many years. The records organized the data for billing, treatment, and compliance with regulations. If there were problems with the data, they stemmed not from the EHRs but from how they were used. We should not blame the EHR if the doctor upcoded clinical information in order to charge more, or if coding was too primitive to represent the complexity of patient illness. But clinicians and regulators are now demanding functions that EHRs are fumbling at fulfilling:

  • A growing list of regulatory requirements, which intelligent software could calculate on its own from data already in the record, but which most EHRs require the physician to fill out manually

  • Patient-generated data, which may be entered by the patient manually or taken from devices

  • Data in streamlined formats for large-scale data analysis, for which institutions are licensing new forms of databases

Therefore, while the EHR still stores critical data, it is not the sole source of truth and is having to leave its borders porous in order to work with other data sources.

The EHR’s second function, as an interface that becomes part of the clinicians’ hourly workflow, has never been fulfilled well. EHRs are the most hated software among their users. And that’s why users are calling on them to provide APIs that permit third-party developers to compete at the interface level.

So if I were to write a section titled “The Future of Current EHRs” it could conceivably be followed by a blank page. But EHRs do move forward, albeit slowly. They must learn to be open systems.

With this perspective, Orchard looks like doubling down on an obsolete strategy. The limitations and terms of service give the impression that Epic wants to remain a one-stop shopping service for customers. But if Epic adopted the SMART approach, with more tolerance for failure and options for customers, it would start to reap the benefits promised by FHIR and foster health care innovation.

Epic and Other EHR Vendors Caught in Dilemmas by APIs (Part 1 of 2)

Posted on March 15, 2017 | Written By Andy Oram

The HITECH Act of 2009 (part of the American Recovery and Reinvestment Act) gave an unprecedented boost to an obscure corner of the IT industry that produced electronic health records. For the next eight years, EHR vendors had the opportunity to bring health care into the 21st century and implement common-sense reforms in data sharing and analytics. They largely squandered this opportunity, amassing hundreds of millions of dollars while watching health care costs ascend into the stratosphere, and preening themselves over modest improvements in their poorly functioning systems.

This was not solely a failure of EHR vendors, of course. Hospitals and clinicians also needed to adopt agile methods of collaborating and using data to reduce costs, and failed to do so. They’re sweating profusely now, as shown in protests by the American Medical Association and major health care providers over legislative changes that will drastically reduce their revenue through cuts to insurance coverage and Medicaid. EHR vendors will feel the pain of a thousand cuts as well.

I recently talked to Dr. Travis Good, CEO of Datica, which provides data integration and storage to health care providers. We discussed the state of EHR interoperability, the roles of third-party software vendors, and in particular the new “app store” offered by Epic under the name Orchard. Although Datica supports integration with a dozen EHRs, 70% of its business involves Epic. So we’ll start with the new Orchard initiative.

The Epic App Store

Epic, like most vendors, has offered an API over the past few years that gives programmers at hospitals access to patient data in the EHR. This API now complies with the promising new standard for health data, FHIR, and uses the resources developed by the Argonaut Project. So far, this is all salutary and positive. Dr. Good points out, however, that EHR vendors such as Epic offer the API mostly to extract data. They are reluctant to allow data to be inserted programmatically, claiming it could allow errors into the database. The only change one can make, usually, is to add an annotation.

This seriously hampers the ability of hospitals or third-party vendors to add new value to the clinical experience. Analytics benefit from a read-only data store, but to reach in and improve the doctor’s workflow, an application must be able to write new data into the database.
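
To make the read/write asymmetry concrete, here is a minimal sketch in Python against a generic FHIR REST endpoint (the base URL, token, and patient ID are hypothetical, not Epic's actual service). The read call is the kind of Argonaut-style query vendor APIs support today; the write call is the kind of request an extract-only API typically refuses or restricts to annotations.

```python
import requests

BASE = "https://fhir.example-ehr.org/R4"          # hypothetical FHIR endpoint
HEADERS = {
    "Authorization": "Bearer <access-token>",      # placeholder OAuth2 token
    "Accept": "application/fhir+json",
}

# Read: pull a patient's vital-sign observations (widely supported).
resp = requests.get(
    f"{BASE}/Observation",
    params={"patient": "12345", "category": "vital-signs"},
    headers=HEADERS,
)
observations = resp.json().get("entry", [])

# Write: post a new observation back into the chart. On a read-mostly
# vendor API this is the call that tends to be rejected.
new_obs = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                         "display": "Heart rate"}]},
    "subject": {"reference": "Patient/12345"},
    "valueQuantity": {"value": 72, "unit": "beats/minute"},
}
write = requests.post(
    f"{BASE}/Observation",
    json=new_obs,
    headers={**HEADERS, "Content-Type": "application/fhir+json"},
)
print(write.status_code)   # extract-only deployments often return 403 or 405
```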

More risk springs from controls that Epic is putting on the apps uploaded to Orchard. Like the Apple App Store that inspired Orchard, Epic’s app store vets every app and allows in only the apps that it finds useful. For a while, the terms of service allowed Epic access to the data structures of the app. What this would mean in practice is hard to guess, but it suggests a prurient interest on the part of Epic in what its competitors are doing. We can’t tell where Epic’s thinking is headed, though, because the public link to the terms of service was recently removed, leaving a 404 message.

Good explained that Epic potentially could track all the transactions between the apps and their users, and in particular will know which ones are popular. This raises fears among third-party developers that Epic will adopt their best ideas and crowd them out of the market by adding the features to its own core system, as Microsoft notoriously did during the 1980s when it dominated the consumer software market.

Epic’s obsession with control can be contrasted with the SMART project, an open platform for health data developed by researchers at Harvard Medical School. They too offer an app gallery (not a store), but their goal is to open the repository to as wide a collection of contributors as possible. This maximizes the chances for innovation. As described at one of their conferences, control over quality and fitness for use would be exerted by the administrator of each hospital or other institution using the gallery. This administrator would choose which apps to make available for clinical staff to download.

Of course, SMART apps also work seamlessly cross-platform, which distinguishes them from the apps provided by individual vendors. Eventually–ideally–FHIR support will allow the apps in Orchard and from other vendors to work on all EHRs that support FHIR. But the standard is not firm enough to allow this–there are too many possible variations. People who have followed the course of HITECH implementation know the history of interoperability, and how years of interoperability showcases at HIMSS have been mocked by the real incompatibilities between EHRs out in the field.

To understand how EHRs are making use of APIs, we should look more broadly at their role in health care. That will be the topic of the next section of this article.

Exchange Value: A Review of Our Bodies, Our Data by Adam Tanner (Part 3 of 3)

Posted on January 27, 2017 | Written By Andy Oram

The previous part of this article raised the question of whether data brokering in health care is responsible for raising or lowering costs. My argument that it increases costs looks at three common targets for marketing:

  • Patients, who are targeted by clinicians for treatments they may not need or would not otherwise have thought of

  • Doctors, who are directed by pharma companies toward expensive drugs that might not pay off in effectiveness

  • Payers, who pay more for diagnoses and procedures because analytics help doctors maximize charges

Tanner flags the pharma industry for selling drugs that perform no better than cheaper alternatives (Chapter 13, page 146), and even drugs that are barely effective at all despite having undergone clinical trials. In any case, Tanner cites Hong Kong and Europe as places far more protective of personal data than the United States (Chapter 14, page 152), and they don’t suffer higher health care costs–quite the contrary.

Strangely, there is no real evidence so far that data sales have produced either harm to patients or treatment breakthroughs (Conclusion, page 163). But the supermarket analogy does open up the possibility that patients could be induced to share anonymized data voluntarily by being reimbursed for it (Chapter 14, page 157). I have heard this idea aired many times, and it fits with the larger movement called Vendor Relationship Management. The problem with such ideas is the close horizon limiting our vision in a fast-moving technological world. People can probably understand and agree to share data for particular research projects, with or without financial reimbursement. But many researchers keep data for decades and recombine it with other data sets for unanticipated projects. If patients are to sign open-ended, long-term agreements, how can they judge the potential benefits and potential risks of releasing their data?

Data for sale, but not for treatment

In Chapter 11, Tanner takes up the perennial question of patient activists: why can drug companies get detailed reports on patient conditions and medications, but my specialist has to repeat a test on me because she can’t get my records from the doctor who referred me to her? Tanner mercifully shields us here from the technical arguments behind this question–sparing us, for instance, a detailed discussion of vagaries in HL7 specifications or workflow issues in the use of Health Information Exchanges–but strongly suggests that the problem lies with the motivations of health care providers, not with technical interoperability.

And this makes sense. Doctors do not have to engage in explicit “blocking” (a slippery term) to keep data away from fellow practitioners. For a long time they were used to just saying “no” to requests for data, even after that was made illegal by HIPAA. But their obstruction is facilitated by vendors equally uninterested in data exchange. Here Tanner discards his usual pugilistic journalism and gives Judy Faulkner an easy time of it (perhaps because she was a rare CEO polite enough to talk to him, and also because she expressed an ethical aversion to sharing patient data). He doesn’t air such facts as the incompatibilities between different Epic installations, Epic’s tendency to exchange records only with other Epic installations, and the difficulties it creates for companies that want to interconnect.

Tanner does not address a revolution in data storage that many patient advocates have called for, which would at one stroke address both the Chapter 11 problem of patient access to data and the book’s larger critique of data selling: storing the data at a site controlled by the patient. If the patient determined who got access to data, she would simply open it to each new specialist or team she encounters. She could also grant access to researchers and even, if she chooses, to marketers.

What we can learn from Chapter 9 (although Tanner does not tell us this) is that health care organizations are poorly prepared to protect data. In this woeful weakness they are just like TJX (owner of the T.J. Maxx stores), major financial institutions, and the Democratic National Committee. All of these leading institutions have suffered breaches enabled by weak computer security. Patients and doctors may feel reluctant to put data online in the current environment of vulnerability, but there is nothing special about the health care field that makes it more vulnerable than other institutions. Here again, storing the data with the individual patient may break it into smaller components and therefore make it harder for attackers to find.

Patient health records present new challenges, but the technology is in place and the industry can develop consent mechanisms to smooth out the processes for data exchange. Furthermore, some data will still remain with the labs and pharmacies that have to collect it for financial reasons, and the Supreme Court has given them the right to market that data.

So we are left with ambiguities throughout the area of health data collection. There are few clear paths forward and many trade-offs to make. In this I agree ultimately with Tanner. He said that his book was meant to open a discussion. Among many of us, the discussion has already started, and Tanner provides valuable input.

Exchange Value: A Review of Our Bodies, Our Data by Adam Tanner (Part 2 of 3)

Posted on January 26, 2017 | Written By Andy Oram

The previous part of this article summarized the evolution of data brokering in patient information and how it was justified ethically and legally, partly because most data is de-identified. Now we’ll take a look at just what that means.

The identified patient

Although doctors can be individually and precisely identified when they prescribe medicines, patient data is supposedly de-identified so that none of us can be stigmatized when trying to buy insurance, rent an apartment, or apply for a job. The effectiveness of anonymization or de-identification is one of the most hotly debated topics in health IT, and in the computer field more generally.

I have found a disturbing split between experts on this subject. Computer science experts don’t just criticize de-identification, but speak of it as something of a joke, assuming that it can easily be overcome by those with a will to do so. But those who know de-identification best (such as the authors of a book I edited, Anonymizing Health Data) point out that intelligently and carefully de-identified databases have been resistant to cracking, and that the highly publicized successes in re-identification have used databases that were de-identified unprofessionally and poorly. That said, many entities (including the South Korean institutions whose practices are described in Chapter 10, page 110 of Tanner’s book) don’t call on the relatively rare experts in de-identification to do things right, and therefore fall into the category of unprofessional and poor de-identification.

Tanner accurately pinpoints specific vulnerabilities in patient data, such as the inclusion of genetic information (Chapter 9, page 96). A couple of companies promise de-identified genetic data (Chapter 12, page 130, and Conclusion, page 162), which all the experts agree is impossible due to the wide availability of identified genomes out in the field for comparison (Conclusion, page 162).

Tanner has come down on the side of easy re-identification, having done research in many unconventional areas lacking professional de-identification. However, he occasionally misses a nuance, as when describing the re-identification of people in the Personal Genome Project (Chapter 8, page 92). The PGP is a uniquely idealistic initiative. People who join this project relinquish interest in anonymity (Chapter 9, page 96), declaring their willingness to risk identification in pursuit of the greater good of finding new cures.

In the US, no legal requirement for anonymization interferes with selling personal data collected on social media sites, from retailers, from fitness devices, or from genetic testing labs. For most brokers, no ethical barriers to selling data exist either, although Apple HealthKit bars it (Chapter 14, page 155). So more and more data about our health is circulating widely.

With all these data sets floating around–some supposedly anonymized, some tightly tied to your identity–is anonymization dead? Every anonymized data set already contains a few individuals who can theoretically be re-identified; determining this number is part of the technical process of de-identification. Will more and more of us fall into this category as time goes on, victims of advanced data mining and the “mosaic effect” (combining records from different data sets)? This is a distinct possibility for the future, but in the present, there are no examples of re-identifying data that was anonymized properly–the last word, properly, being all-important here. (The authors of Anonymizing Health Data talk of defensible anonymization, meaning you can show you used research-vetted processes.) Even Latanya Sweeney, whom Tanner tries to portray in Chapter 9 as a relentless attacker who strips away the protections of supposedly de-identified data, believes that data can be shared safely and anonymously.
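
As a rough illustration of that measurement step (a toy sketch using invented records, not any vendor's actual method), practitioners group records by their quasi-identifiers and look at how small the groups are; a record that is unique on those attributes is the kind that could theoretically be re-identified by someone who already knows them.

```python
from collections import Counter

# Toy records: (ZIP prefix, birth year, sex) are quasi-identifiers that can
# single a person out even after names and IDs are removed.
records = [
    ("021*", 1978, "F"), ("021*", 1978, "F"), ("021*", 1978, "F"),
    ("100*", 1990, "M"), ("100*", 1990, "M"),
    ("606*", 1945, "F"),   # unique combination: highest re-identification risk
]

group_sizes = Counter(records)                    # equivalence classes
k = min(group_sizes.values())                     # dataset-wide k-anonymity
unique_rows = sum(1 for n in group_sizes.values() if n == 1)

print(f"smallest group size (k): {k}")
print(f"records unique on their quasi-identifiers: {unique_rows}")
# A careful release would generalize or suppress the unique rows
# (e.g., widen the ZIP prefix or bucket birth years) before sharing.
```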

To address people’s fretting over anonymization, I invoke the analogy of encryption. We know that our secret keys can be broken, given enough computing power. Over the decades, as Moore’s Law and the growth of large computing clusters have increased computing power, the recommended size of keys has also grown. But someday, someone will assemble the power (or find a new algorithm) that cracks our keys. We know this, yet we haven’t stopped using encryption. Why give up the benefits of sharing anonymized data, then? What hurts us is the illegal data breaches that happen on average more than once a day, not the hypothetical re-identification of patients.

To me, the more pressing question is what the data is being used for. No technology can be assessed outside of its economic and social context.

Almighty capitalism

One lesson I take from the creation of a patient data market, though Tanner doesn’t discuss it, is that the market exists as a side effect of high costs and large inefficiencies in health care generally. In countries that put more controls on doctors’ leeway to order drugs, tests, and other treatments, there is less wiggle room for the marketing of unnecessary or ineffective products.

Tanner does touch on the tendency of the data broker market toward monopoly or oligopoly. Once a company such as IMS Health builds up an enormous historical record, competing is hard. Although Tanner does not explore the effect of size on costs, it is reasonable to expect that low competition fosters padding in the prices of data.

Thus, I believe the inflated health care market leaves lots of room for marketing, and generally props up the companies selling data. The use of data for marketing may actually hinder its use for research, because marketers are willing to pay so much more than research facilities (Conclusion, pages 163-164).

Not everybody sells the data they collect. In Chapter 13, Tanner documents a complicated spectrum for anonymized data, ranging from unpublicized sales to requiring patient consent to forgoing all data sales (for instance, footnote 6 to Chapter 13 lists claims by Salesforce.com and Surescripts not to sell patient information). Tenuous as trust in reputation may seem, it does offer some protection to patients. Companies that want to be reputable make sure not to re-identify individual patients (Chapter 7, page 72, Chapter 9, pages 88-90, and Chapter 9, page 99). But data is so valuable that even companies reluctant to enter that market struggle with that decision.

The medical field has also pushed data collectors to make data into a market for all comers. The popular online EHR, Practice Fusion, began with a stable business model offering its service for a monthly fee (Chapter 13, page 140). But it couldn’t persuade doctors to use the service until it moved to an advertising and data-sharing model, giving away the service supposedly for free. The American Medical Association, characteristically, has also found a way to extract profit from sale of patient data, and therefore has colluded in marketing to doctors (Chapter 5, page 41, and Chapter 6, page 54).

Thus, a Medivo executive makes a good argument (Chapter 13, page 147) that the medical field benefits from research without paying for the dissemination of data that makes research possible. Until doctors pony up for this effort, another source of funds has to support the collection and research use of data. And if you believe that valuable research insights come from this data (Chapter 14, page 154, and Conclusion, page 166), you are likely to develop some appreciation for the market they have created. Another obvious option is government support for the collection and provision of data for research, as is done in Britain and some Nordic countries, and to a lesser extent in the US (Chapter 14, pages 158-159).

But another common claim, aired in this book by a Cerner executive (Chapter 13, page 143), is that giving health data to marketers reduces costs across the system, similarly to how supermarkets grant discounts to shoppers willing to have their purchases tracked. I am not convinced that costs are reduced in either case. In the case of supermarkets, their discounts may persuade shoppers to spend more money on expensive items than they would have otherwise. In health care, the data goes to very questionable practices. These become the topic of the last part of this article.

Exchange Value: A Review of Our Bodies, Our Data by Adam Tanner (Part 1 of 3)

Posted on January 25, 2017 | Written By Andy Oram

A lot of people are feeling that major institutions of our time have been compromised, hijacked, or perverted in some way: journalism, social media, even politics. Readers of Adam Tanner’s new book, Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records, might well add health care data to that list.

Companies collecting our data–when they are not ruthlessly trying to keep their practices secret–hammer us with claims that this data will improve care and lower costs. Anecdotal evidence suggests it does. But the way this data is used now, it serves the business agendas of drug companies and health care providers who want to sell us treatments we don’t need. When you add up the waste of unnecessary tests and treatments along with the money spent on marketing, as well as the data collection that facilitates that marketing, I’d bet it dwarfs any savings we currently get from data collection.

How we got to our current data collection practices

Tanner provides a bit of history of data brokering in health care, along with some intriguing personalities who pushed the industry forward. At first, there was no economic incentive to collect data–even though visionary clinicians realized it could help find new diagnoses and treatments. Tanner says that the beginnings of data collection came with the miracle drugs developed after World War II. Now that pharmaceutical companies had a compelling story to tell, ground-breaking companies such as IMS Health (still a major player in the industry) started to help them target physicians who had both the means of using their drugs–that is, patients with the target disease–and an openness to persuasion.

Lots of data collection initiatives started with good intentions, some of which paid off. Tanner mentions, as one example, a computer program in the early 1970s that collected pharmacy data in the pursuit of two laudable goals (Chapter 2, page 13): preventing patients from getting multiple prescriptions for the same drug, and preventing adverse interactions between drugs. But the collection of pharmacy data soon found its way to the current dominant use: a way to help drug companies market high-profit medicines to physicians.

The dual role of data collection–improving care but taking advantage of patients, doctors, and payers–persists over the decades. For instance, Tanner mentions a project by IMS Health (which he treats pretty harshly in Chapter 5) collecting personal data from AIDS patients in 1997 (Chapter 7, page 70). Tanner doesn’t follow through to say what IMS did with the AIDS data, but I am guessing that AIDS patients don’t offer juicy marketing opportunities, and that this initiative was aimed at improving the use and effectiveness of treatments for this very needy population. And Chapter 7 ends with a list of true contributions to patient health and safety created by collecting patient data.

Chapter 6 covers the important legal battles fought by several New England states (including the scrappy little outpost known for its worship of independent thinking, New Hampshire) to prevent pharmacies from selling data on what doctors are prescribing. These attempts were quashed by the well-known 2011 Supreme Court ruling on Vermont’s law. All questions of privacy and fairness were submerged by considering the sale of data to be a matter of free speech. As we have seen during several decisions related to campaign financing, the current Supreme Court has a particularly expansive notion of what the First Amendment covers. I just wonder what they will say when someone who breaks into the records of an insurer or hospital and steals several million patient records pleads free speech to override the Computer Fraud and Abuse Act.

Tanner has become intrigued, and even enamored, by the organization Patient Privacy Rights and its founder, Deborah Peel. I am closely associated with this organization and with Peel as well, working on some of their privacy summits and bringing other people into their circle. Because Tanner airs some criticisms of Peel, I’d like to proffer my own observation that she has made exaggerated and unfair criticisms of health IT in the past, but has moderated her views a great deal. Working with experts in health IT sympathetic to patient privacy, she has established Patient Privacy Rights during the 2010s as a responsible and respected factor in the health care field. So I counter Tanner’s repeated quotes regarding Peel as “crazy” (Chapter 8, page 83) by hailing her as a reputable and crucial force in modern health IT.

Coincidentally, Tanner refers (Chapter 8, page 79) to a debate that I moderated between IMS representative Kim Gray and Michelle De Mooy (available in a YouTube video). The discussion started off quite tame but turned up valuable insights during the question-and-answer period (starting at 38:33 in the video) about data sharing and the role of de-identification.

While the Supreme Court ruling stripped doctors of control over data about their practices–a bit of poetic irony, perhaps, if you consider their storage of patient data over the decades as an unjust taking–the question of patient rights was treated as irrelevant. The lawyer for the data miners said, “The patients have nothing to do with this” (Chapter 6, page 57) and apparently went unchallenged. How can patients’ interest in their own data be of no concern? For that question we need to look at data anonymization, also known as de-identification. This will begin the next section of our article.

An Intelligent Interface for Patient Diagnosis by HealthTap

Posted on January 9, 2017 | Written By Andy Oram

HealthTap, an organization that’s hard to categorize, really should appear in more studies of modern health care. Analysts are agog over the size of the Veterans Administration’s clientele, and over a couple of other major institutions such as Kaiser Permanente–but who is looking at the 104,000 physicians and the hundreds of millions of patients from 174 countries in HealthTap’s database?

HealthTap allows patients to connect with doctors online, and additionally hosts an enormous repository of doctors’ answers to health questions. In addition to its sheer size and its unique combination of services, HealthTap is ahead of most other health care institutions in its use of data.

I talked with founder and CEO Ron Gutman about a new service, Dr. AI, that triages the patient and guides her toward a treatment plan: online resources for small problems, doctors for major problems, and even a recommendation to head off to the emergency room when that is warranted. The service builds on the patient/doctor interactions HealthTap has offered over its six years of operation, but is fully automated.

Somewhat reminiscent of IBM’s Watson, Dr. AI evaluates the patient’s symptoms and searches a database for possible diagnoses. But the Dr. AI service differs from Watson in several key aspects:

  • Whereas Watson searches a huge collection of clinical research journals, HealthTap searches its own repository of doctor/patient interactions and advice given by its participating doctors. Thus, Dr. AI is more in line with modern “big data” analytics of the kind performed by PatientsLikeMe.

  • More importantly, HealthTap potentially knows more about the patient than Watson does, because the patient can build up a history with HealthTap.

  • And most important, Dr. AI is interactive. Instead of doing a one-time search, it employs artificial intelligence techniques to generate questions. For instance, it may ask, “Did you take an airplane flight recently?” Each question arises from the totality of what HealthTap knows about the patient and the patterns found in HealthTap’s data.

The following video shows Dr. AI in action:

A well-stocked larder of artificial intelligence techniques feeds Dr. AI’s interactive triage service: machine learning, natural language processing (because the doctor advice is stored in plain text), Bayesian learning, and pattern recognition. These allow a dialog tailored to each patient that is, to my knowledge, unique in the health care field.
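
HealthTap has not published Dr. AI’s internals, so the following is only a sketch of the interactive pattern described above: keep a probability distribution over candidate conditions, update it with Bayes’ rule after each answer, and choose the next question based on how well a symptom separates the remaining candidates. All conditions, symptoms, and likelihoods below are invented for illustration.

```python
# Toy Bayesian triage loop; the probability tables are invented.
PRIORS = {"common cold": 0.55, "influenza": 0.30, "dengue": 0.15}
LIKELIHOOD = {                       # P(symptom present | condition)
    "fever":         {"common cold": 0.30, "influenza": 0.90, "dengue": 0.95},
    "muscle aches":  {"common cold": 0.20, "influenza": 0.80, "dengue": 0.90},
    "recent travel": {"common cold": 0.10, "influenza": 0.10, "dengue": 0.70},
}

def update(posterior, symptom, present):
    """Bayes update after the patient answers one yes/no question."""
    scaled = {}
    for condition, prob in posterior.items():
        like = LIKELIHOOD[symptom][condition]
        scaled[condition] = prob * (like if present else 1 - like)
    total = sum(scaled.values())
    return {c: v / total for c, v in scaled.items()}

def next_question(asked):
    """Pick the unasked symptom whose likelihoods differ most across conditions."""
    remaining = [s for s in LIKELIHOOD if s not in asked]
    def spread(s):
        return max(LIKELIHOOD[s].values()) - min(LIKELIHOOD[s].values())
    return max(remaining, key=spread) if remaining else None

posterior = dict(PRIORS)
for symptom, answer in [("fever", True), ("recent travel", True)]:
    posterior = update(posterior, symptom, answer)
    print(f"after '{symptom}': {posterior}")
print("ask next about:", next_question({"fever", "recent travel"}))
```

A production system would presumably weight the question choice by the current posterior (an expected-information-gain criterion) rather than the fixed spread heuristic used here.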

HealthTap continues to grow as a platform for remote diagnosis and treatment. In a world with too few clinicians, it may become standard for people outside the traditional health care system.

Newly Released Open Source Libraries for Health Analytics from Health Catalyst

Posted on December 19, 2016 | Written By Andy Oram

I celebrate and try to report on each addition to the pool of open source resources for health care. Some, of course, are more significant than others, and I suspect the new healthcare.ai libraries released by the Health Catalyst organization will prove to be one of the significant offerings. One can do a search for health care software on sites such as GitHub and turn up thousands of hits (of which many are probably under open and free licenses), but for a company with the reputation and accomplishments of Health Catalyst to open up the tools it has been using internally gives healthcare.ai great legitimacy from the start.

According to Health Catalyst’s Director of Data Science Levi Thatcher, the main author of the project, these tools are tried and tested. Many of them are based on popular free software libraries in the general machine learning space: he mentions in particular the Python scikit-learn library and the R language’s caret and data.table libraries. The contribution of Health Catalyst is to build on these general tools to produce libraries tailored for the needs of health care facilities, with their unique populations, workflows, and billing needs. The company has used the libraries to deploy models related to operational, financial, and clinical questions. Eventually, Thatcher says, most of Health Catalyst’s applications will use predictive analytics based on healthcare.ai, and now other programmers can too.

Currently, Health Catalyst is providing libraries for R and Python. Moving them from internal projects to open source was not particularly difficult, according to Thatcher: the team mainly had to improve the documentation and broaden the range of usable data connections (ODBC and more). The packages can be installed in the manner common to free software projects in these languages. The documentation includes guidelines for submitting changes, so that an ecosystem of developers can build up around the software. When I asked about RESTful APIs, Thatcher answered, “We do plan on using RESTful APIs in our work—mainly as a way of integrating these tools with ETL processes.”
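
I have not reproduced the healthcare.ai API here, since I only know the general tools it builds on. The sketch below shows the kind of workflow such a library wraps, written directly against scikit-learn (one of the underlying libraries Thatcher mentions), with invented columns and a synthetic readmission label.

```python
# Illustrative only: a readmission-risk model of the sort a health-care-
# tailored library might wrap. Features, labels, and thresholds are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(18, 95, n),   # age
    rng.integers(0, 15, n),    # prior admissions
    rng.integers(1, 30, n),    # length of stay (days)
])
# Synthetic label loosely tied to the features, just so the demo runs.
y = ((X[:, 1] > 5) | (X[:, 2] > 20)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```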

I asked Thatcher one more general question: why did Health Catalyst open the tools? What benefit do they derive as a company by giving away their creative work? Thatcher answers, “We want to elevate the industry and educate it about what’s possible, because a rising tide will lift all boats. With more data publicly available each year, I’m excited to see what new and open clinical or socio-economic datasets are used to optimize decisions related to health.”

An Interview with Open Source Health IT Project: LibreHealth

Posted on December 7, 2016 | Written By Andy Oram

LibreHealth is the largest health IT project to emerge recently, particularly in the area of free and open source software. In this video, Dr. Judy Gichoya of the LibreHealth project explains what clinicians in Africa are dealing with and what their IT needs are.

Both developed and developing countries need better health IT systems to improve patient care. In the developed countries, electronic records and other health IT systems sprout complexities that reflect the health care systems in which they function. But these IT systems are far removed from the real-life needs of doctors caring for patients, and have transformed physicians in the US into the country’s largest data entry workforce.

In developing countries, scarcity is the norm, and gains cannot be achieved without innovative approaches to delivering health care. The LibreHealth team hopes to learn from both the failures of proprietary IT systems and the opportunities missed by various open source systems such as OpenMRS and OpenEMR. Dr. Gichoya describes the future of open source health IT systems and the health projects united under LibreHealth. The project seeks to provide transparency and be a patient advocate when developing and deploying health systems.

Learn more in my podcast interview with Dr. Gichoya below:

You can find more information and connect with the community on the LibreHealth forums.

Check out all the Healthcare Scene interviews on the Healthcare Scene YouTube channel.

Ignoring the Obvious: Major Health IT Organizations Put Aside Patients

Posted on November 18, 2016 | Written By Andy Oram

Frustrated stories from patients as well as health care providers repeatedly underline the importance of making a seismic shift in the storage and control of patient data. The current system leads to inaccessible records, patients who reach nursing homes or other treatment centers without information crucial to their care, excess radiation from repeated tests, massive data breaches that compromise thousands of patients at a time, and–most notably for quality–patients excluded from planning their own care.

A simple solution became available over the past 25 years with the widespread adoption of the Web, and has been rendered even easier by modern Software as a Service (SaaS): storing the entire record over the patient’s lifetime with the patient. This was unfeasible in the age of paper records, but is currently efficient, secure, and easy to manage. The only reason we didn’t switch to personal records years ago is the greed and bad faith of the health care institutions: keeping hold of the data allows them to exploit it in order to market treatments to patients that they don’t need, while hampering the ability of other institutions to recruit and treat patients.

So I wonder how the American Health Information Management Association (AHIMA) can’t feel ridiculous, if not a bit seamy, by releasing a 3000-word report on the patient data crisis this past October without even a hint at the solution. On the contrary: using words designed to protect the privileges of the health care provider, they call this crisis a “patient matching” problem. The very terminology sets in stone the current practice of scattering health records among providers, with the assumption that selective records will be recombined for particular treatment purposes–if those records can be found.

A reading of their report reveals that the crisis outpaces the tepid remedies suggested by conventional institutions. In a survey, institutions admitted that up to eight percent of their patients have duplicate records in the institution’s own systems (six percent of the survey respondents reported this high figure). Institutions also report spending large efforts on mitigating the problems of duplicate records: 47 percent do so during patient registration, and 72 percent run such efforts on a weekly basis. AHIMA didn’t even ask about the problems caused by lack of access to records from other providers.

To pretend they are addressing the problem without actually offering the solution, AHIMA issues some rather bizarre recommendations. Along with extending the same processes currently in use, they suggest using biometrics such as fingerprints or retinal scans. This has a worrisome impact on patient privacy–it puts out more and more information that is indelibly linked to persons and that can be used to track those persons. What are the implications of such recommendations in the current environment, which features not only targeted system intrusions by international criminal organizations, but the unaccountable transfer of data by those authorized to collect it? We should strenuously oppose the collection of unnecessary personal information. But it makes sense for a professional organization to seek a solution that leads to the installation of more equipment, requires more specialized staff, tightens their control over individuals, and raises health care costs.

There’s nothing wrong with certain modest suggestions in the AHIMA report. Standardizing the registration process and following the basic information practices they recommend (compliance with regulations, etc.) should be in place at any professional institution. But none of that will bring together the records doctors and other health care professionals need to deliver care.

Years ago, Microsoft HealthVault and Google Health tried to bring patient control into the mainstream. Neither caught on, because the time was not right. A major barrier to adoption was resistance by health care providers, who (together with the vendors of their electronic health records) prevented patients from downloading provider data. The Department of Veterans Affairs Blue Button won fans in both the veterans’ community and a few other institutions (for instance, Kaiser Permanente supported it), but turned out to be an imperfect standard and was never integrated into a true patient-centered health system.

But cracks in the current system are appearing as health care providers are shoved toward fee-for-value systems. Technologies are also coalescing around personal records. Notably, the open source HIE of One project, described in another article, employs standard security and authentication protocols to give patients control over what data gets sent out and who receives it.

Patient control, not patient “matching,” is the future of health care. The patient will ensure that her doctors and any legitimate researchers get access to data. Certainly, there are serious issues left, such as data management for patients who have trouble with the technical side of the storage systems, and informed consent protocols that give researchers maximum opportunities for deriving beneficial insights from patient data. But the current system isn’t working for doctors or researchers any better than it is for patients. A strong personal health record system will advance us in all areas of health care.

Vocera Aims For More Intelligent Hospital Interventions

Posted on November 14, 2016 | Written By Andy Oram

Everyday scenes that Vocera Communications would like to eliminate from hospitals:

  • A nurse responds to an urgent change in the patient’s condition. While the nurse is caring for the patient, monitors continue to go off with alerts about the situation, distracting her and increasing the stress for both herself and the patient.

  • A monitor beeps in response to a dangerous change in a patient’s condition. A nurse pages the physician in charge. The physician calls back to the nurse’s station, but the nurse is off on another task. They play telephone tag while patient needs go unmet around the floor.

  • A nurse is engaged in a delicate operation when her mobile device goes off, distracting her at a crucial moment. Neither the patient she is currently working with nor the one whose condition triggered the alert gets the attention he needs.

  • A nurse describes a change in a patient’s condition to a physician, who promises to order a new medication. The nurse then checks the medical record every few minutes in the hope of seeing when the order went through. (This is similar to a common computing problem called “polling”, where a software or hardware component wakes up regularly just to see whether data has come in for it to handle; a minimal sketch of the pattern follows this list.)
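
For readers unfamiliar with the term, here is the polling pattern in miniature (check_for_order is a stand-in for querying the medical record, not a real EHR call). The point is that the loop spends attention whether or not anything has changed, which is exactly what an event-driven notification removes.

```python
import time

def check_for_order(patient_id):
    """Stand-in for querying the EHR; returns the new order or None."""
    return None   # pretend nothing has arrived yet

def poll_for_order(patient_id, interval_sec=300, max_checks=3):
    """Polling: wake up every few minutes just to see whether the order exists."""
    for _ in range(max_checks):
        order = check_for_order(patient_id)
        if order is not None:
            return order
        time.sleep(interval_sec)   # the nurse (or a thread) waits and re-checks
    return None

def on_order_placed(order):
    """Event-driven alternative: the ordering system calls this when the order posts."""
    print("notify the nurse immediately:", order)
```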

Wasteful, nerve-racking situations such as these have caught the attention of Vocera over the past several years as it has rolled out communications devices and services for hospital staff, and have just been driven forward by its purchase of the software firm Extension Healthcare.

Vocera Communications’ and Extension Healthcare’s solutions blend to take pressure off clinicians in hospitals and improve their responses to patient needs. According to Brent Lang, President and CEO of Vocera Communications, the two companies partnered on 40 customers before the acquisition. They take data from multiple sources–such as patient monitors and electronic health records–to make intelligent decisions about “when to send alarms, whom to send them to, and what information to include” so the responding nurse or doctor has the information needed to make a quick and effective intervention.

Hospitals are gradually adopting technological solutions that other parts of society got used to long ago. People are gradually moving away from setting their lights and thermostats by hand to Internet-of-Things systems that can adjust the lights and thermostats according to who is in the house. The combination of Vocera and Extension Healthcare should be able to do the same for patient care.

One simple example concerns the first scenario with which I started this article. Vocera can integrate with the hospital’s location monitoring (through devices worn by health personnel), which the system can consult to see whether the nurse is already in the same room as the patient for whom the alert was generated. The system can then stop forwarding alarms about that patient to the nurse.

The nurse can also inform the system when she is busy, and alerts from other patients can be sent to a back-up nurse.
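
As a rough sketch of the routing logic just described (the class and field names are mine, not Vocera’s): suppress an alarm when the assigned nurse is already in the patient’s room, and escalate to a back-up when she has marked herself busy.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Nurse:
    name: str
    current_room: str        # fed by the hospital's location-tracking badges
    busy: bool = False

def route_alert(alert_room: str, primary: Nurse, backup: Nurse) -> Optional[Nurse]:
    """Decide who, if anyone, should receive the alarm for a given room."""
    if primary.current_room == alert_room:
        return None          # already at the bedside; don't pile on alarms
    if primary.busy:
        return backup        # primary flagged herself as occupied
    return primary

primary = Nurse("A. Ngo", current_room="302")
backup = Nurse("B. Ruiz", current_room="310")
print(route_alert("302", primary, backup))   # None: nurse is with the patient
print(route_alert("305", primary, backup))   # primary nurse gets the alert
```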

Extension Healthcare can deliver messages to a range of devices, but the Vocera badge and smartphone app work particularly well with it because they can deliver contextual information instead of just an alert. Hospitals can define protocols stating that when certain types of devices deliver certain types of alerts, they should be accompanied by particular types of data (such as relevant vital signs). Extension Healthcare can gather and deliver the data, which the Vocera badge or smartphone app can then display.

Lang hopes the integrated systems can help the professionals prioritize their interventions. Nurses are interrupt-driven, and it’s hard for them to keep the most important tasks in mind–a situation that leads to burn-out. The solutions Vocera is putting together may significantly change workflows and improve care.