Software Marks Advances at the Connected Health Conference (Part 2 of 2)

Posted on October 31, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The first part of this article focused on FDA precertification of apps and the state of interoperability. This part covers other interesting topics at the Connected Health conference.

Presentation at Connected Health Conference

Patient engagement

A wonderful view of the value of collecting patient data was provided by Steve Van, a patient champion who has used intensive examination of vital signs and behavioral data to improve his diabetic condition. He said that the doctor understands the data and the patient knows how he feels, but without laying the data out, they tend to talk past each other. Explicit data on vital signs and behavior moves them from monologue to dialogue. George Savage, MD, co-founder and CMO of Proteus, described the value of data as “closing the loop”–in other words, providing immediate and accurate information back to the patient about the effects of his behavior.

I also gained an interesting perspective from Gregory Makoul, founder and CEO of PatientWisdom, a company that collects a different kind of data from patients over mobile devices. The goal of PatientWisdom is to focus questions and make sure they have an impact: the questionnaire asks patients to share “stories” about themselves, their health, and their care (e.g., goals and feelings) before a doctor visit. A one-screen summary is then provided to clinical staff via the EHR. The key to high adoption is that they don’t “drill” the patient over things such as medications taken, allergies, etc. They focus instead on distilling open-ended responses about what matters to patients as people, which patients like and providers also value.

Sam Margolis, VP of client strategy and growth at Cantina, saw several aspects of the user experience (UX) as the main hurdle for health IT companies. This focus was reasonable, given that Cantina combines strengths in design and development. Margolis said that companies find it hard to make their interfaces simple and to integrate into the environments where their products operate. He pointed out that health care involves complex environments with many considerations. He also said they should be thinking holistically and design a service, not just a product–a theme I have seen across modern business in general, where companies are striving to engage customers over long periods of time, not just sell isolated objects.

Phil Marshall, MD, co-founder and chief product officer of Conversa Health, described how they offer a chatbot to patients discharged from one partnering hospital, in pursuit of the universal goal of US hospitals to avoid penalties from Medicare for readmissions. The app asks the patient for information about her condition and applies the same standards the hospital uses when its staff evaluates discharged patients. Marshall said that the standards make the chatbot highly accurate, and it is tuned regularly. It is also popular: 80 percent of the patients offered the app use it, and 97 percent of these say it is helpful. The chat is tailored to each patient. In addition to relieving the staff of a routine task, the hospital found that the app reduces variation in outcomes among physicians, because the chatbot will ask for information they might forget.
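Marshall did not describe Conversa's actual rule set, but the mechanics are easy to picture: the chatbot collects structured answers and scores them against the same discharge criteria the staff would apply, escalating anything worrisome to a human. Here is a minimal, hypothetical sketch in Python; the questions, thresholds, and escalation rule are invented for illustration only.

```python
# Hypothetical sketch of a rule-based post-discharge check-in.
# The questions, thresholds, and escalation logic are invented for
# illustration; they are not Conversa Health's actual criteria.

DISCHARGE_QUESTIONS = [
    # (field, prompt, flag_if)
    ("pain_level", "On a scale of 0-10, how bad is your pain today?", lambda v: v >= 7),
    ("temperature_f", "What is your temperature in Fahrenheit?", lambda v: v >= 100.4),
    ("wound_redness", "Is the area around your incision red or swollen? (1=yes, 0=no)", lambda v: v == 1),
]

def evaluate_checkin(answers: dict) -> dict:
    """Score a patient's answers against the (hypothetical) discharge criteria."""
    flags = [field for field, _, flag_if in DISCHARGE_QUESTIONS
             if flag_if(answers.get(field, 0))]
    return {
        "flags": flags,
        # Any flagged answer escalates to a nurse, mirroring the idea that
        # the chatbot applies the same standards the staff would.
        "escalate_to_nurse": bool(flags),
    }

if __name__ == "__main__":
    print(evaluate_checkin({"pain_level": 8, "temperature_f": 98.6, "wound_redness": 0}))
    # -> {'flags': ['pain_level'], 'escalate_to_nurse': True}
```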

Jay V. Patel, Clinical Transformation Officer at Seniorlink, described a care management program that balances technology and the human touch to help caregivers of people with dementia. Called VOICE (Vital Outcomes Inspired by Caregiver Engagement) Dementia Care, the program connects a coach to family caregivers and their care teams through Vela, Seniorlink’s collaboration platform. The VOICE DC program reduced ER visits by 51 percent and hospitalizations by 18 percent in the six-month pilot. It was also good for caregivers, reducing their stress and increasing their confidence.

Despite the name, VOICE DC is text-based (with video content) rather than voice-based. An example of the advances in voice interfaces was provided at this conference by Boston Children’s Hospital. Elizabeth Kidder, manager of their digital health accelerator, reported using voice interfaces to let patients ask common questions, such as when to get vaccinations and whether an illness is bad enough to keep children home from school and day care. Another non-voice app they use is a game that identifies early whether a child is at risk of dyslexia. Starting treatment before children are old enough to learn to read in school can greatly increase success.

Nathan Treloar, president of Orbita, reported that at a recent conference on voice interfaces, participants in a hackathon found nine use cases for them in health.

Pattie Maes of the MIT Media Lab–one of the most celebrated research institutions in digital innovation–envisions using devices to strengthen the very skills that our devices are now blamed for weakening, such as how to concentrate. Of course, she warned, there is a danger that users will become dependent on the device while using it for such skills.

Working at the top of one’s license

I heard that appealing phrase from Christine Goscila, a family nurse practitioner at Massachusetts General Hospital Revere. She was describing how an app makes it easier for nurses to collect data from remote patients and spend more time on patient care. This shift from routine tasks to high-level interactions is a major part of the promise of connected health.

I heard a similar goal from Gregory Pelton, MD, CMO of ICmed, one of the many companies providing an integrated messaging platform for patients, clinicians, and family caregivers. Pelton talks of handling problems at the lowest possible level. In particular, the doctor is relieved of entering data because other team members can do it. Furthermore, messages can prepare the patient for a visit, rendering him more informed and better able to make decisions.

Clinical trials get smarter

While most health IT and connected health practitioners focus on the doctor/patient interaction and health in the community, the biggest contribution connected health makes to cost-cutting may come from its use by pharmaceutical companies. As we watch the astounding rise in drug costs–caused by a range of factors I will cover in a later article, but only partly by deliberate overcharging–we could benefit from anything that makes research and clinical trials more efficient.

MITRE, a non-profit that began in the defense industry but recently has created a lot of open source tools and standards for health care, presented their Synthea platform, offering synthetic data for researchers. The idea behind synthetic data is that, when you handle a large data set, you don’t need to know that a particular patient has congestive heart failure, is in his sixties, and weighs 225 pounds. Even if the data is deidentified, giving information about each patient raises risks of reidentification. All you need to know is a collection of facts about diagnoses, age, weights, etc. that match a typical real patient population. If generated using rigorous statistical algorithms, fake data in large quantities can be perfectly usable for research purposes. Synthea includes data on health care costs as well as patients, and is used for FHIR connectathons, education, the free SMART Health IT Sandbox, and many other purposes.
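Synthea itself is a full simulation engine, but the principle behind synthetic data is simple: sample fake patients from distributions that resemble a real population, so aggregate statistics hold even though no record corresponds to a real person. The sketch below illustrates only that idea; it is not Synthea's algorithm, and the distributions and prevalences are assumed values chosen to make the example run.

```python
# Illustration of the idea behind synthetic data: sample fake patients
# whose aggregate statistics resemble a real population. This is NOT
# Synthea's algorithm; the distributions and prevalences are assumptions
# chosen only for the example.
import random

CONDITION_PREVALENCE = {          # hypothetical prevalence by condition
    "congestive heart failure": 0.02,
    "type 2 diabetes": 0.10,
    "hypertension": 0.30,
}

def synthetic_patient(rng: random.Random) -> dict:
    age = max(0, int(rng.gauss(mu=47, sigma=18)))        # assumed age distribution
    weight_lb = round(rng.gauss(mu=175, sigma=35), 1)    # assumed weight distribution
    conditions = [c for c, p in CONDITION_PREVALENCE.items() if rng.random() < p]
    return {"age": age, "weight_lb": weight_lb, "conditions": conditions}

if __name__ == "__main__":
    rng = random.Random(42)                               # fixed seed for reproducibility
    cohort = [synthetic_patient(rng) for _ in range(10_000)]
    chf = sum("congestive heart failure" in p["conditions"] for p in cohort)
    print(f"CHF prevalence in synthetic cohort: {chf / len(cohort):.1%}")
```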

Telemedicine

Payers are gradually adapting their reimbursements to telemedicine. The simplest change is just to pay for a video call as they would pay for an office visit, but this does not exploit the potential for connected health to create long-range, continuous interactions between doctor, patient, and other staff. Meanwhile, many current telemedicine services work outside the insurance system, simply charging patients for visits. This up-front payment obviously limits the ability of these services to reach most of the population.

The uncertainties, as well as the potential, of this evolving market are illustrated by the business model chosen by American Telephysicians, which goes so far as to recruit patients internationally, such as from Pakistan and Dubai, to create a telemedicine market for U.S. specialists. They will be starting services in some American communities soon, though. Taking advantage of the ubiquity of mobile devices, they extend virtual visits with online patient records and a marketplace for pharmaceuticals, labs, and radiology. Waqas Ahmed, MD, founder and CEO, says: “ATP is addressing global health care problems that include inaccessibility of primary, specialty, and high-quality healthcare services, lack of price transparency, substandard patient education, escalating costs and affordability, a lack of healthcare integration, and fragmentation along the continuum of care.”

The network is the treatment center

We were honored with a keynote from FCC chair Ajit Pai, who achieved notoriety recently in the contentious “net neutrality” debate and was highlighted in WIRED for his position. Pai is not the most famous FCC chair, however; that honor goes to Newton Minow, who as chair from 1961 to 1963 called television a “vast wasteland.” More recently, Michael Powell (who became chair in 2001, before the confounding term “net neutrality” was invented) garnered a lot of attention for changing Internet regulations. Newton Minow, by the way, is still on the scene. I heard him talk recently at a different conference, and Pai mentioned talking to Minow about Internet access.

Pai has made expansion of Internet access his key issue (it was mentioned in the WIRED article) and talked about the medical benefits of bringing fast, continuous access to rural areas. His talk fit well with the focus many companies at the Connected Health conference placed on telemedicine. But Pai did not tout competition or innovation as a solution to reaching rural areas. Instead, he seemed happy with the current oligopoly that characterizes Internet access in most areas, and promoted an increase in funding to get the incumbents to do more of what they’re now doing (slowly).

The next day, Nancy Green of Verizon offered a related suggestion: that 5G wireless will make batteries in devices last longer. This is not intuitive, but I think it can be justified by the decrease in the time it will take for devices to communicate with the cloud, decreasing in turn the drain on the batteries.

Devices that were just cool

One device I liked at Connected Health was the Eko stethoscope, which sends EKG data to a computer for display. Patients will soon be able to use Eko devices to view their own EKGs, along with interpretations that help non-specialists make sense of the results. Of course, the results are also sent to the patients’ doctors.

Another device is a smart pillbox by CUEMED that doubles as a voice-interactive health assistant, HEXIS. Many companies make smart pill boxes that keep track of whether you open them, and flash or speak up to remind you when it’s time to take the pills. (Non-compliance with prescription medications is rampant.) HEXIS is a more advanced innovation that incorporates Alexa-like voice interactivity and can connect to other medical devices and wearables such as the Apple Watch and blood pressure monitors. The device uses the resulting data and vital signs to motivate the user, and provides suggestions to help the user feel better. Another nice feature is that if you’re going out, you can remove one day’s meds and take them with you, while the device continues to do its job of reminding and tracking.

I couldn’t get to every valuable session at the Connected Health conference, or cover every speaker I heard. However, the conference seems to be achieving its goals of bringing together innovators and of prodding the health care industry toward the effective use of technology.

Software Marks Advances at the Connected Health Conference (Part 1 of 2)

Posted on October 29, 2018 | Written By Andy Oram

The precepts of connected health were laid out years ago, and merely get updated with nuances and technological advances at each year’s Connected Health conference. The ideal of connected health combines the insights of analytics with the real-life concerns of patients; monitoring people in everyday settings through devices that communicate back to clinicians and other caregivers; and using automation to free doctors to spend more time on human contact. Pilots and deployments are being carried out successfully in scattered places, while elsewhere connected health languishes, waiting for the slow adoption of value-based payments.

Because I have written at length about the Connected Health conference in 2015, 2016, and 2017, I will focus this article on recent trends I ran into at this year’s conference. Key themes include precertification at the FDA, the state of interoperability (which is poor), and patient engagement.

Exhibition floor at Connected Health conference

Precertification: the status of streamlining approval for medical software

One of the ongoing challenges in the progress of patient involvement and connected health is the approval of software for diagnosis and treatment. Traditionally, the FDA regulated software and hardware together in all devices used in medicine, requiring rigorous demonstrations of safety and efficacy in a manner similar to drugs. This was reasonable until recently, because anything that the doctor gives to the patient needs to be carefully checked. Otherwise, insurers can waste a lot of money on treatments that don’t work, and patients can even be harmed.

But more and more software is offered on generic computers or mobile devices, not specialized medical equipment. And the techniques used to develop the software inherit the “move fast and break things” mentality notoriously popular in Silicon Valley. (The phrase was supposedly a Facebook company motto.) Software can be updated several times a day. Although A/B testing (an interesting parallel to randomized controlled trials) might be employed to see what is popular with users, quality control is done in completely different ways. Modern software tends to rely for safety and quality on unit tests (which make sure individual features work as expected), regression tests (which look for things that no longer work the way they should), continuous integration (which forces testing to run each time a change is submitted to the central repository), and a battery of other techniques that bear such names as static testing, dynamic testing, and fuzz testing. Security testing is yet another source of reliability, using techniques such as penetration testing that may be automated or manual. (Medical devices, which are notoriously insecure, might benefit from an updated development model.)
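To make the contrast with pre-market review concrete, here is roughly what one of those unit tests looks like, using Python's pytest framework. The dosing function and its rule are invented for illustration; the point is that checks like these re-run on every code change (regression testing), typically triggered automatically by a continuous-integration server.

```python
# A minimal illustration of the unit tests mentioned above, using pytest.
# The function under test is hypothetical; real clinical logic would be
# covered by many more tests like these.
import pytest

def weight_based_dose_mg(weight_kg: float, mg_per_kg: float = 5.0, max_mg: float = 400.0) -> float:
    """Hypothetical dosing rule: mg per kg of body weight, capped at a maximum."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return min(weight_kg * mg_per_kg, max_mg)

def test_typical_adult_dose():
    assert weight_based_dose_mg(70) == 350.0

def test_dose_is_capped():
    assert weight_based_dose_mg(120) == 400.0

def test_rejects_invalid_weight():
    with pytest.raises(ValueError):
        weight_based_dose_mg(0)
```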

The FDA has realized that reliable software can be developed within the Silicon Valley model, so long as rigor and integrity are respected. Thus, it has started a Pre-Cert Pilot Program that works with nine brave vendors to find guidelines the FDA can apply in the future to other software developers.

Representatives of four vendors reported at the Connected Health conference that the pilot is going quite well, with none of the contentious and adversarial atmosphere that characterizes the interactions between the FDA and most device manufacturers. Every step of the software process is available for discussion and checking, and the inquiries go quite deep. All participants are acutely aware of the risk–cited by critics of the program–that it will end up giving vendors too much leeway and leaving the public open to risks. The participants are committed to closing loopholes and making sure everyone can trust the resulting guidelines.

The critical importance of open source software became clear in the report of the single open source vendor participating in the pilot: Tidepool. Because it is open source, according to CEO Howard Look, Tidepool was willing to show its code as well as its software development practices to independent experts using multiple assessment methods, including a “peer appraisal” by fellow precert participants Verily and Pear Therapeutics. One other test appraisal (CMMI, using external auditors) was done by both Tidepool and Johnson & Johnson; no other participants did a test appraisal. Thus, if the FDA comes out with new guidelines that stimulate a tremendous development of new software for medical use, we can thank open source.

Making devices first-class players in health care

Several exhibitors at the conference were consulting firms who provide specific services to start-ups and other vendors trying to bring products to market. I asked a couple of these consultants what they saw as the major problems their clients face. Marcus Fontaine, president of Impresiv Health, said their biggest problem is the availability of data, particularly because of a lack of interoperable data exchange. I wanted to exclaim, “Still?”

Joseph Kvedar, MD, who chairs the Connected Health conference, spoke of a new mobile app developed by his organization, Partners Connected Health, to bring device data into their EHR. This greatly improves the collection of data and guarantees accuracy, because patients no longer have to manually enter vital signs or other information. In addition to serving Partners in improving patient care, the data can be used for research and public health. In developing this app, Partners depended heavily for interoperable data exchange on work by Validic, the most prominent company in the device interoperability space, and one that I have profiled and whose evolution I have followed.

Ideally, each device could communicate directly with the EHR. Why would Partners Connected Health invest heavily in creating a special app as an intermediary? Kvedar cited several reasons. First, each device currently offers its own app as a user interface, and users with multiple devices get confused and annoyed by the proliferation of apps. Second, many devices are not designed to communicate cleanly with EHRs. Finally, the way networks are set up, communicating would require a separate cellular connection and SIM card for each device, raising costs.
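None of the speakers specified a wire format, but FHIR's Observation resource is the natural vehicle for this kind of vital-sign data once an intermediary app has collected it. Here is a minimal sketch of forwarding a single device reading to a FHIR server; the endpoint, patient ID, and access token are hypothetical, while the resource structure uses the standard FHIR Observation fields for a vital sign.

```python
# A minimal sketch of forwarding one device reading to an EHR as a FHIR
# Observation. The endpoint URL, patient ID, and access token are
# hypothetical; the resource structure follows the standard FHIR
# Observation fields for a vital sign (LOINC 8867-4 = heart rate).
import requests

FHIR_BASE = "https://ehr.example.org/fhir"   # hypothetical FHIR endpoint
ACCESS_TOKEN = "example-token"               # obtained out of band

def post_heart_rate(patient_id: str, beats_per_minute: int, when: str) -> str:
    observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "8867-4",
                             "display": "Heart rate"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": when,
        "valueQuantity": {"value": beats_per_minute, "unit": "beats/minute"},
    }
    resp = requests.post(f"{FHIR_BASE}/Observation",
                         json=observation,
                         headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()
    return resp.headers.get("Location", "")  # where the server stored the new Observation

# Example: post_heart_rate("12345", 72, "2018-10-18T09:30:00Z")
```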

A similar effort is pursued by Indie Health, trying to solve the problem of data access by making it easy to create Bluetooth connections between devices and mobile phones using a variety of Bluetooth, IEEE, Continua, and other standards.

The CEO of Validic, Drew Schiller, spoke on another panel about maximizing the value of patient-generated data. He pointed out that Validic, as an intermediary for a huge number of devices and health care providers, possesses a correspondingly huge data set on how patients are using the devices, and in particular when they stop using the devices. I assume that Validic does not preserve the data generated by the devices, such as blood pressure or steps taken–at least, Schiller did not say they have that data, and it would be intrusive to collect it. However, the metadata they do collect can be very useful in designing interactions with patients. He also talked about the value of what he dubs “invisible health care,” where behavior change and other constructive uses of data can flow easily from the data.

Barry Reinhold, president and CTO of Lamprey Networks, was manning the Continua booth when I came by. Continua defines standards for devices used in the home, in nursing facilities, and in other places outside the hospital. This effort should be open source, supported by fees from all affected stakeholders (hospitals, device manufacturers, etc.). But open source is spurned by the health care field, so Continua does the work as a private company. Reinhold told me that device manufacturers rarely contract with Continua, which I treat as a sign that device manufacturers value data silos as a business model. Instead, Continua contracts come from the institutions that desperately need access to the data, such as nursing facilities. Continua does the best it can to exploit existing standards, including the “continuing data” profile from FHIR.

Other speakers at the conference, including Andrew Hayek, CEO of OptumHealth, confirmed Reinhold’s observation that interoperability still lags among devices and EHRs. And Schiller of Validic admitted that in order to get data from some devices into a health system, the patient has to take a photo of the device’s screen. Validic not only developed an app to process the photo, but patented it–a somewhat odd indication that they consider it a major contribution to health care.

Tasha van Es and Claire Huber of Redox, a company focused on healthcare interoperability and data integration, said that they are eager to work with FHIR, and that it’s a major part of their platform, but they think it has to develop more before being ready for widespread use. This made me worry about recent calls by health IT specialists for the ONC, CMS, and FDA to make FHIR a requirement.

It was a pleasure to reconnect at the conference with goinvo, which creates health care software on a contract basis but releases much of it as open source under a free license.

A non-profit named Xcertia also works on standards in health care. Backed by the American Medical Association, the American Heart Association, DHX Group, and HIMSS, they focus on security, privacy, and usability. Although they don’t take on certification themselves, they design their written standards so that other organizations can offer certification, and a law under consideration in California would mandate the use of their standards. The guidelines have just been released for public comment.

The second section of this article covers patient engagement and other topics of interest that turned up at the conference.

Open Source Software and the Path to EHR Heaven (Part 2 of 2)

Posted on September 20, 2018 | Written By Andy Oram

The previous segment of this article explained the challenges faced by health care organizations and suggested two ways they could be solved through free and open source software. We’ll finish the exploration in this segment of the article.

Situational awareness would reduce alert fatigue and catch errors

Difficult EHR interfaces are probably the second most frustrating aspect of being a doctor today: the first prize goes to the EHR’s inability to understand and adapt to the clinician’s workflow and environment. This is why the workplace resounds with beeps and belches from EHRs all day, causing alert fatigue and drowning out truly serious notifications. Stupid EHRs have an even subtler and often overlooked effect: when regulators or administrators require data for quality or public health purposes, the EHR is often “upgraded” with an extra field that the doctor has to fill in manually, instead of doing what computers do best and automatically replicating data that is already in the record. When doctors complain about the time they waste in the EHR, they often blame the regulators or the interface instead of placing their finger on the true culprit, which is the lack of awareness in the EHR.

Open source can ease these problems in several ways. First, the customizability outlined in the first section of this article allows savvy users to adapt it to their situations. Second, the interoperability from the previous section makes it easier to feed in information from other parts of the hospital or patient environment, and to hook in analytics that make sense of that information.

Enhancements from outside sources could be plugged in

The modularity of open source makes it easier to offer open platforms. This could lead to marketplaces for EHR enhancements, a long-time goal of the open SMART standard. Certainly, there would have to be controls for the sake of safety: an administrator, for instance, could limit downloads to carefully vetted software packages.

At best, storage and interface in an EHR would be decoupled in separate modules. Experts at storage could optimize it to improve access time and develop new options, such as new types of filtering. At the same time, developers could suggest new interfaces so that users can have any type of dashboard, alerting system, data entry forms, or other access they want.
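As a rough illustration of that decoupling, consider a storage layer defined as an abstract interface, with the dashboard written against the interface rather than against any particular implementation. The classes and record format below are invented for illustration; the point is only that either side can be swapped out without touching the other.

```python
# A sketch of the storage/interface decoupling described above: storage is
# an abstract interface, so a different back end or a different dashboard
# can be plugged in independently. The classes and record format are
# invented for illustration.
from abc import ABC, abstractmethod
from typing import Iterable

class RecordStore(ABC):
    """Storage module: any implementation (SQL, document store, cloud) would do."""
    @abstractmethod
    def vitals_for(self, patient_id: str) -> Iterable[dict]: ...

class InMemoryStore(RecordStore):
    def __init__(self, data):
        self._data = data                      # {patient_id: [vital-sign dicts]}
    def vitals_for(self, patient_id: str) -> Iterable[dict]:
        return self._data.get(patient_id, [])

class TextDashboard:
    """Interface module: renders whatever the storage module returns."""
    def __init__(self, store: RecordStore):
        self.store = store
    def show(self, patient_id: str) -> str:
        lines = [f"{v['name']}: {v['value']} {v['unit']}"
                 for v in self.store.vitals_for(patient_id)]
        return "\n".join(lines) or "no data"

if __name__ == "__main__":
    store = InMemoryStore({"p1": [{"name": "BP systolic", "value": 128, "unit": "mmHg"}]})
    print(TextDashboard(store).show("p1"))     # a web or voice UI could wrap the same store
```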

Bugs could be fixed expeditiously

Customers of proprietary software remain at the mercy of the vendors. I worked in one computer company that depended on a very subtle feature from our supplier that turned out not to work as advertised. Our niche market, real-time computing, needed that feature to achieve the performance we promised customers, but it turned out that no other company needed it. The supplier admitted the feature was broken but told us point-blank that they had no plans to fix it. Our product failed in the marketplace, for that reason along with others.

Other software users suffer because proprietary vendors shift their market focus or for other reasons–even going out of business.

Free and open source software never ossifies, so long as users want it. Anyone can hire a developer to fix a bug. Furthermore, the company fixing it usually feeds the fix back into the core project because they want it to be propagated to future versions of the software. Thus, the fixes are tested, hardened, and offered to all users.

What free and open source tools are available?

Numerous free and open source EHRs have been developed, and some are in widespread use. The most famous is VistA, the software created at the Department of Veterans Affairs and also used by the Indian Health Service and other government agencies; it has a community chaperone and has been adopted by the country of Jordan. VistA was considered by the Department of Defense as well, but ultimately rejected because the department didn’t want to invest in adding some missing features.

Another free software EHR, OpenMRS, supports health care in Kenya, Haiti, and elsewhere. OpenEMR is also deployed internationally.

What free and open source software has accomplished in these settings is just a hint of what it can do for health care across the board. The problem holding back open source is simple neglect: as VistA’s experience with the DoD showed, institutions are unwilling to support open source, even though they will pay 10 or 100 times as much for substandard proprietary software. Open Health Tools, covered in the article I just linked to, is one of several organizations that shriveled up and disappeared for lack of support. Some organizations gladly hop on for a free ride, using the software without contributing either funds or code. Others just ignore open source software, even though that means their own death: three hospitals have recently declared bankruptcy after installing proprietary EHRs. Although the article focuses on the up-front costs of installing the EHRs, I believe the real fatal blow was the inability of the EHRs to support efficient, streamlined health care services.

We need open source EHRs not just to reduce health care costs, but to transform health. But first, we need a vision of EHR heaven. I hope this article has taken us at least into the clouds.

Open Source Software and the Path to EHR Heaven (Part 1 of 2)

Posted on September 19, 2018 | Written By Andy Oram

Do you feel your electronic health record (EHR) is heaven or hell? The vast majority of clinicians–and many patients, too, who interact with the EHR through a web portal–see it as the latter. In this article, I’ll describe an EHR heaven and how free and open source software can contribute to it. But first an old joke (which I have adapted slightly).

A salesman for an EHR vendor dies and goes before the Pearly Gates. Saint Peter asks him, “Would you like to go to heaven or hell?”

Surprised, the salesman says, “I didn’t know I had a choice.”

Saint Peter suggests, “How about this. We’ll show you heaven and hell, and then you can decide.”

“Sounds fair,” says the EHR salesman.

First they take him to heaven. People wearing white robes are strumming harps and singing hymns, and it goes on for a long time, till they take him away.

Next they take him to hell. And it’s really cool! People are clinking wine glasses together and chatting about amusing topics around the pool.

When the EHR salesman gets back to the Pearly Gates, he says to Saint Peter, “You know, this sounds really strange, but I choose hell.”

Immediately comes a clap of thunder. The salesman is in a fiery pit being prodded with pitchforks by dreadful demons.

“Wait!” he cries out. “This is not the hell I saw!”

One of the demons answers, “They must have shown you the demo.”

Most hospitals and clinicians are currently in EHR hell–one they have freely chosen, and one paid for partly by government Meaningful Use reimbursements. So we all know what EHR hell looks like. What would EHR heaven be? And how does free and open source software enable it? The following sections of this article list the traits I think clinicians would like to see.

Interfaces could be easily replaced and customized

The greatest achievement of the open source movement, in my opinion, has been to strike an ideal balance between “let a hundred flowers bloom” experimentation and choosing the best option to advance the field. A healthy open source project encourages branching, which lets any individual or team with the required expertise change a product to their heart’s content. Users can then try out different versions, and a central committee vets the changes to decide which version is most robust.

Furthermore, modularization on various levels (programming modules, hooks, compile-time options, configuration tools) allows multiple versions to co-exist, each user choosing the options right for their environment. Open source software tends to be modular for several reasons, notably because it is developed by many different individuals and teams who want control over their small parts of the system.

With easy customization, a hospital or clinic can mandate that certain items be highlighted and that safe workflow rules be followed when entering or retrieving data. But the institution can also offer leeway for individual clinicians and patients to arrange a dashboard, color scheme, or other aspect of the environment to their liking.

Many of the enablers for this kind of agile, user-friendly programming are technical. Modularity is built into programming languages, while branching is standard in version control systems. So why can’t proprietary vendors do what open source communities routinely do? A few actually do, but most are constrained in ways that prevent such flexibility, especially in electronic health records:

  • Most vendors are dragging out the lifetime of nearly 40-year-old technology, with brittle languages and tools that put insurmountable barriers in the way of agile work styles. They are also stuck with monolithic systems instead of modular ones.
  • The vendors’ business model depends on this monolithic control. To unbundle components, allow mix-and-match installations, and allow third parties to plug in new features would challenge the prices they charge.
  • The vendors are fundamentally unprepared for empowered users. They may vet features with clinically trained consultants and do market research, but handing power over the system to users is not in their DNA.

Data could be exchanged in a standard format without complex transformations

Data sharing is the lifeblood of modern computing; you can’t get much done on a single computer anymore. Data sharing lies behind new technologies ranging from the Internet of Things to real-time ad generation (the reason you’ll see a link to an article about “Fourteen celebrities who passed out drunk in public” when you’re trying to read a serious article about health IT). But it’s so rare in health care–where it’s uniquely known as “interoperability”–that every year, reformers call it the most critical goal for health IT, and the Office of the National Coordinator has repeatedly narrowed its Meaningful Use and related criteria to emphasize interoperability.

Open source software can share data with other systems as a matter of course. Data formats are simple, often text-based, and defined in the code in easy-to-find ways. Open source programmers, freed from the pressures on proprietary developers to reinvent wheels and set themselves apart from competitors, like to copy existing data formats. As a stark example of open source’s advantages, consider the most recent version of the Open Document Format, used by LibreOffice and other office suites. It defines an entire office suite in 104 pages. How big is the standards document for the Microsoft OOXML format, offering roughly equivalent functionality? Currently, 6,755 pages–and many observers say even that is incomplete. In short, open source is consistently the right choice for data exchange.

What would the adoption of open source do to improve health care, given that it would solve the interoperability problem? Records could be stored in the cloud–hopefully under patient control–and released to any facility treating the patient. Research would blossom, and researchers could share data as allowed by patients. Analytical services could be plugged in to produce new insights about disease and treatment from the records of millions of people. Perhaps interoperability could also contribute to solving the notorious patient matching problem–but that’s a complicated issue that I have discussed elsewhere, touching on privacy issues and user control outside the scope of this article.

The next segment of this article will list three more benefits of free and open source software, along with an assessment of its current and future prospects.

Schlag and Froth: Argonauts Navigate Between Heavy-weight and Light-weight Standardization (Part 2 of 2)

Posted on August 26, 2016 | Written By Andy Oram

The previous section of this article laid out the context for the HL7 FHIR standard and the Argonaut project; now we can look at the current status.

The fruits of Argonaut are to be implementation guides that the project will encourage all EHR vendors to work from. These guides, covering a common clinical data set that has been defined by the ONC (and hopefully will not change soon), are designed to help vendors achieve certification so they can sell their products with the assurance that doctors using them will meet ONC regulations, which require a consumer-facing API. The ONC will also find certification easier if most vendors claim adherence to a single unambiguous standard.

The Argonaut implementation guides, according to Tripathi, will be complete in late September. Because FHIR itself is not expected to be finalized until September 2017, the Argonaut project will continue to refine and test the guides. One guide already completed by the project covers security authorization using OpenID and OAuth. FHIR left the question of security up to those standards, because they are well-established and already exist in thousands of implementations around the Web.
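For readers who haven't met OAuth, the flow that approach relies on (as profiled by SMART on FHIR) starts with the app redirecting the user's browser to the EHR's authorization server, then exchanging the returned code for an access token. The sketch below builds that first request with standard OAuth 2.0 parameters and SMART-style scopes; the endpoints, client ID, and redirect URI are hypothetical examples, not any vendor's actual configuration.

```python
# A sketch of the first step of an OAuth 2.0 authorization-code flow, the
# pattern the Argonaut security guide builds on. The client ID, redirect
# URI, and endpoints are hypothetical.
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://ehr.example.org/oauth/authorize"  # hypothetical
FHIR_SERVER = "https://ehr.example.org/fhir"                    # hypothetical

def authorization_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Build the URL the app sends the user's browser to for consent."""
    params = {
        "response_type": "code",             # standard authorization-code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "patient/*.read launch/patient openid",  # SMART-style scopes
        "state": state,                      # protects against CSRF
        "aud": FHIR_SERVER,                  # which FHIR API the token should cover
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

# The EHR redirects back with ?code=...&state=..., and the app exchanges the
# code for an access token at the token endpoint before calling the FHIR API.
print(authorization_url("my-app", "https://app.example.org/callback", "xyz123"))
```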

Achieving rough consensus

Tripathi portrays the Argonaut process as radically different from HL7 norms. HL7 has established its leading role in health standards by following the rules of the American National Standards Institute (ANSI) in the US, and similar bodies set up in other countries where HL7 operates. These come from the pre-Internet era and emphasize ponderous, procedure-laden formalities. Meetings must be held, drafts circulated, comments explicitly reconciled, ballots taken. Historically this has ensured that large industries play fair and hear through all objections, but the process is slow and frustrates smaller actors who may have good ideas but lack the resources to participate.

In contrast, FHIR brings together engineers and other interested persons in loose forums that self-organize around issues of interest. The process still tries to consider every observation and objection, and therefore, as we have seen, has taken a long time. But decision-making takes place at Internet speed and there is no jockeying for advantage in the marketplace. Only when a milestone is reached does the formal HL7 process kick in.

The Argonaut project works similarly. Tripathi reports that the vendors have gotten along very well. Epic and Cerner, the behemoths of the EHR field, are among the most engaged. Company managers don’t interfere with engineers’ opinions. And new vendors with limited resources are very active.

Those with a background in computers can recognize, in these modes of collaboration, the model set up by the Internet Engineering Task Force (IETF) decades ago. Like HL7, the IETF essentially pre-dated the Internet as we know it, which it helped to design. (The birth of the Internet is usually ascribed to 1969, and the IETF started in 1986, at an early stage of the Internet. FTP was the canonical method of exchanging its plain-text documents with ASCII art, and standards were distributed as Requests for Comments, or RFCs.) The famous criterion cited by the IETF for approving standards is "rough consensus and running code." FHIR and the Argonauts produce no running code, but they seem to operate through rough consensus, and the Argonauts could add a third criterion: "Get the most important 90% done and don't let the rest hold you up."

Tripathi reports that EHR vendors are now collaborating in this same non-rivalrous manner in other areas, including the Precision Medicine initiative, the Health Services Platform Consortium (HSPC), and the SMART on FHIR initiative.

What Next?

The dream of interoperability has long included the dream of a marketplace for apps, so that we’re not stuck with the universally hated EHR interfaces that clinicians struggle with daily, or awkwardly designed web sites for consumers. Tripathi notes that SMART offers an app gallery with applications that ought to work on any EHR that conforms to the open SMART platform. Cerner and athenahealth also have app stores protected by a formal approval process. (Health apps present more risk than the typical apps in the Apple App Store or Google Play, so they call for more careful, professional vetting.) Tripathi is certain that other vendors will follow the lead of these projects, and that cross-vendor stores like SMART’s App Gallery will emerge in a few years, along with something like a Good Housekeeping seal for apps.

The Argonaut guides will have to evolve. It’s already clear that EHR vendors are doing things that aren’t covered by the Argonaut FHIR guide, so there will be a few incompatible endpoints in their APIs. Consequently, the Argonaut project has a big decision to make: how to provide continuity? The project was deliberately pitched to vendors as a one-time, lightweight initiative. It is not a legal entity, and it does not have a long-term plan for stewardship of the outcomes.

The conversation over continuity is ongoing. One obvious option is to turn over everything to HL7 and let the guides fall under its traditional process. A new organization could also be set up. HL7 itself has set up the FHIR Foundation under a looser charter than HL7, probably (in my opinion) because HL7 realizes it is not nimble and responsive enough for the FHIR community.

Industries reach a standard in many different ways. In health care, even though the field is narrow, standards present tough challenges because of legacy issues, concerns over safety, and the complexity of human disease. It seems in this case that a blend of standardization processes has nudged forward a difficult process. Over the upcoming year, we should know how well it worked.

Schlag and Froth: Argonauts Navigate Between Heavy-weight and Light-weight Standardization (Part 1 of 2)

Posted on August 25, 2016 | Written By Andy Oram

You generally have to dwell in deep Nerdville to get up much excitement about technical standards. But one standard has been eagerly followed by thousands since it first reached the public eye in 2012: Fast Healthcare Interoperability Resources (FHIR). To health care reformers, FHIR embodies all the values and technical approaches they have found missing in health care for years. And the development process for FHIR is as unusual in health care as the role the standard is hoped to play.

Reform From an Unusual Corner

FHIR started not as an industry initiative but as a pet project of Australian Grahame Grieve and a few developers gathered around him. From this unusual genesis it got taken up by HL7 and an initial draft was released in March 2012. Everybody in health care reform rallied around FHIR, recognizing it as a viable solution to the long-stated need for application programming interfaces (APIs). The magic of APIs, in turn, is their potential to make data exchange easy and create a platform for innovative health care applications that need access to patient data.

So, as a solution to the interoperability problems for which EHR vendors had been dunned by users and the US government, FHIR won immediate accolades. But these vendors knew they couldn’t trust normal software adoption processes to use FHIR interoperably–those processes had already failed on earlier standards.

HL7 version 2 had duly undergone a long approval process and had been implemented as an output document format by numerous EHR vendors, who would show off their work annually at an Interoperability Showcase in a central hall of the HIMSS conference. Yet all that time, out in the field, innumerable problems were reported. These failures are not just technical glitches, but contribute to serious setbacks in health care reform. For instance, complaints from Accountable Care Organizations are perennial.

Congress’s recent MACRA bill, follow-up HHS regulations, and pronouncements from government leaders make it clear that hospitals and their suppliers won’t be off the hook till they solve this problem of data exchange, which was licked decades ago by most other industries. It was by dire necessity, therefore, that an impressive array of well-known EHR vendors announced the maverick Argonaut project in December 2014. (I don’t suppose its name bears any relation to the release a few months before of a highly-publicized report from a short-lived committee called JASON.)

Argonaut includes major EHR vendors, health care providers such as Partners Healthcare, Mayo, Intermountain, and Beth Israel Deaconess, and other interested parties such as Surescripts, The Advisory Board, and Accenture. Government agencies, especially the ONC, and app developers have come on board as testers.

One of the leading Argonauts is Micky Tripathi, CEO of the Massachusetts eHealth Collaborative. Tripathi has been involved in health care reform and technical problems such as data exchange since long before these achieved notable public attention with the 2009 HITECH act. I had a chance to talk to him this week about the Argonauts’ progress.

Reaching a Milestone

FHIR is large and far-reaching but deliberately open-ended. Many details are expected to vary from country to country and industry to industry, and thus are left up to extensions that various players will design later. It is precisely in the extensions that the risk lurks of reproducing the Tower of Babel that exists in other health care standards.

The reason the industry has good hopes for success this time is the unusual way in which the Argonaut project was limited in both time and scope. It was not supposed to cover the entire health field, as standards such as the International Classification of Diseases (ICD) try to do. It would instead harmonize the 90% of cases seen most often in the US. For instance, instead of specifying a standard of 10,000 codes, it might pick out the 500 that the doctor is most likely to see. Instead of covering all the ways to take a patient’s blood pressure (sitting, standing, etc.), it recommends a single way. And it sticks closely to clinical needs, although it may well be extended for other uses such as pharma or Precision Medicine.

Finally, instead of staying around forever to keep chopping off more tasks to solve, the Argonaut project would go away when it was done. In fact, it was supposed to be completed one year ago. But FHIR has taken longer than expected to coalesce, and in the meantime, the Argonaut project has been recognized as a fertile organization by the vendors. So they have extended it to deal with some extra tasks, such as an implementation guide for provider directories, and testing sprints.

That’s some history; the next section of this article will talk about the fruits of the Argonaut project and their plans for the future.