
Big Data and the Social Good: The Value for Healthcare Organizations

Posted on May 22, 2017 | Written By

The following is a guest blog post by Mike Serrano from NETSCOUT.

It’s a well-known fact that Facebook, Google, and our phone companies collect a lot of information about each of us. This has been the case for a long time, and more often than not it’s done to improve the user experience of the services we rely on. If data is shared outside the organization, it’s anonymized so that no one individual’s usage can be identified. But it’s understandable that this practice has still sparked a passionate and longstanding debate about privacy and ‘big brother’-style snooping.

What is often forgotten, however, or more likely drowned out by the inevitably growing chorus of privacy concerns, is the opportunity within the big data community for this valuable information to be used for social good. The potential is already there. The question, though, is how different organizations, and particularly the healthcare sector, can take advantage of anonymized user data to benefit society and improve the human condition.

When it comes to healthcare, data from mobile networks holds the biggest opportunity for the patient experience to be dramatically improved. To truly understand how real-time traffic and big data, in the form of historical network usage and traffic patterns, can be used for social good, let’s look at a few possible scenarios – two of which can be accomplished without needing to disclose individual user information at all.

Public health – Getting ahead of an outbreak

What a decade ago would have seemed impossible is very much a reality today. The pervasiveness of the smartphone and how people are using it have fundamentally changed our ability to leverage real-time communications data for the benefit of our society. For many people, smartphones have replaced computers as the primary device for searching for information. This has value in itself, as when people use a smartphone it’s possible to place them in the context of their community and travel patterns.

Zika is a recent example: a virus spread by mosquitoes that produces flu-like symptoms and can have grave consequences for a developing fetus, causing microcephaly. To control the mosquito population, local vector control agencies place field traps to capture mosquitoes and periodically test the mosquitoes they collect. This approach has value, but it’s slow and reactive.

What we have learned from flu epidemics is that there’s typically an increase in Google searches for “flu symptoms” that emerges just before or at the same time as an outbreak of influenza. Since Zika is a mosquito-borne pathogen, outbreaks will occur outside the normal flu season, but the initial symptoms are very similar to those of the common flu.

By monitoring mobile searches for any of a number of unique search terms, it is possible to quickly identify real-time locations where outbreaks may be occurring; thus allowing for a more targeted response by both vector control and public health agencies. In addition, it’s then possible to identify the extent to which migration through the area has occurred, and to where that population has traveled.
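As a rough illustration of the idea, here is a minimal sketch in Python, using entirely hypothetical search counts: aggregated, anonymized tallies of symptom-related searches per region are compared against a historical baseline, and any region whose current volume sits far above its norm is flagged for follow-up.

```python
from statistics import mean, stdev

# Hypothetical hourly counts of symptom-related searches per region.
# In practice these would come from anonymized, aggregated network data.
historical = {
    "region_a": [12, 9, 14, 11, 10, 13, 12, 11],  # recent baseline hours
    "region_b": [30, 28, 33, 29, 31, 27, 32, 30],
}
current = {"region_a": 41, "region_b": 33}  # counts for the latest hour

def flag_spikes(historical, current, threshold=3.0):
    """Flag regions whose current search volume exceeds the historical
    mean by more than `threshold` standard deviations."""
    flagged = []
    for region, baseline in historical.items():
        mu, sigma = mean(baseline), stdev(baseline)
        z = (current[region] - mu) / sigma if sigma else 0.0
        if z > threshold:
            flagged.append((region, round(z, 1)))
    return flagged

print(flag_spikes(historical, current))  # [('region_a', 18.4)]
```

A production system would need richer baselines that account for seasonality and population size, but the principle is the same: the anomaly, not the individual searcher, is the signal.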

When merged with environmental data such as wind patterns, temperature, and precipitation, public health agencies can be extremely targeted about where to deploy resources and the nature of those resources to deploy. Such a targeted and immediate response is only available through the use of real-time network traffic data.

Public safety and medical deployments – Disaster response

Recent earthquakes have emphasized the potential death and destruction that natural disasters can create. When buildings collapse, first responders rush in to look for survivors, putting themselves in harm’s way, as a series of aftershocks could cause additional damage to already weakened structures. But it’s a calculated risk. The search for life must happen quickly, which often means first responders are operating with no knowledge of the potential number of casualties within a building.

To ensure the appropriate allocation of response teams, public safety agencies working in tandem with healthcare organizations could leverage mobile network data. When a mobile phone is turned on, it automatically registers to the mobile network. At this point, the operator knows the number of devices in a certain area based on the placement of the cell tower and the parameters of that tower.

By comparing the last known number of registrants against historical network usage, the operator could guide public safety and relief agencies by understanding the number of known mobile phones in an impacted area. If needed, the operator could also assist in the identification of precisely who may still be in a damaged structure, should that level of detail be required.
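To illustrate what that comparison might look like, here is a minimal sketch with hypothetical data and field names: last-known registration counts per tower are set against the historical norm for the same hour, giving responders a rough estimate of how many devices, and therefore roughly how many people, were in the impacted area.

```python
# Hypothetical per-tower registration counts (real figures would come
# from the mobile operator's registration records).
last_known = {"tower_17": 420, "tower_18": 95}        # at the moment of the event
historical_norm = {"tower_17": 310, "tower_18": 100}  # typical count for that hour

def estimate_presence(last_known, historical_norm):
    """Report per-tower device counts and how they compare with the
    historical baseline, to help prioritize search areas."""
    report = {}
    for tower, count in last_known.items():
        norm = historical_norm.get(tower, count)
        ratio = count / norm if norm else float("inf")
        report[tower] = {"devices": count, "vs_normal": round(ratio, 2)}
    return report

print(estimate_presence(last_known, historical_norm))
# {'tower_17': {'devices': 420, 'vs_normal': 1.35},
#  'tower_18': {'devices': 95, 'vs_normal': 0.95}}
```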

Pandemic control – Removing the guesswork

All major health organizations understand the next major pandemic is only a plane ride away from arriving on their doorstep. For example, when an international flight lands from a country that’s had a recent outbreak of flu or disease, there could potentially be hundreds of infected passengers on board. Those passengers will exit the plane, grab their luggage, and quickly head into the community – travelling far from the airport and growing the transmission radius significantly.

In a situation such as this, the challenge of containing or managing an outbreak is intrinsically tied to knowing where those passengers end up. How far have they travelled, how did they diffuse into the existing population, and how many circles of control need to be established in order to mitigate the risk?

Big data can address this issue. By working with mobile network operators, the local healthcare community can quickly react, taking advantage of big data to deploy public health resources more effectively than they could otherwise. Operators already have access to this information, including where subscribers join the network and their current location, and this data is tremendously valuable when placed in the hands of healthcare professionals looking to stem a viral outbreak. The airline involved could also assist by providing the phone numbers of passengers once the risk was identified.

The future of big data analysis for healthcare

Understanding human movement and social activity, powered by big data pulled from mobile networks, will have a fundamental role to play in more efficient healthcare response in the future. National, state, and local public health officials should all look to implement initiatives based on the use of big data for social good.

When you compare the use of big data against the current approach – where patient zero arrives at a hospital and the local healthcare body has to try to identify who else is at risk based on the patient’s travel patterns and the limited information they can provide – the benefits of this new approach are obvious.

As the conversation around the use of big data for healthcare purposes evolves, there will inevitably be new questions over individual privacy. While the examples outlined above do take advantage of subscriber behavior and individual insights – be that search terms or location information – the purpose is to understand populations or communities, not to identify any one subscriber. With this in mind, it is easy to mask subscriber identifiers while preserving the information about the population. Ultimately, the goal is to provide a more efficient utilization and allocation of society’s resources as we work to improve the human condition, not to undermine any one person’s right to privacy.
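One plausible way to do that masking, sketched below with hypothetical names and a toy key, is to replace each subscriber identifier with a keyed one-way hash before the data ever leaves the operator. The pseudonym is stable, so population counts and movement patterns survive, but the identity behind it cannot be recovered without the operator-held key.

```python
import hashlib
import hmac

# Hypothetical operator-held secret; never shared with downstream analysts.
SECRET_KEY = b"operator-held-secret"

def pseudonymize(subscriber_id: str) -> str:
    """Map a subscriber identifier to a stable, irreversible pseudonym."""
    return hmac.new(SECRET_KEY, subscriber_id.encode(), hashlib.sha256).hexdigest()[:16]

# Toy location records: (subscriber, cell). After masking, movement
# between cells is still visible, but identities are not.
records = [("555-0101", "cell_17"), ("555-0102", "cell_17"), ("555-0101", "cell_18")]
masked = [(pseudonymize(sid), cell) for sid, cell in records]
print(masked)
```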

About Mike Serrano
Mike has over 20 years of experience in the communications industry. He is currently responsible for Service Provider Marketing at NETSCOUT. He began his career at PacBell (now part of AT&T), where he designed service plans for the business market and was responsible for demand analysis and modeling. His career continued with Lucent Technologies, where he brought to market the first mobile data service technology. At Alloptic, he was responsible for marketing the industry’s first EPON access solution and bringing to market the first RFOG solution. At O3B Networks, Mike headed up marketing, bringing to market the first MEO-based constellation of satellites for delivering internet service to the Other 3 Billion on the planet. Mike’s work continued at Cisco, where he helped to define MediaNet (Videoscape) and the network technology transformation for cable operators. Mike holds a B.S. in Information Resource Management from San Jose State University and an MBA from Santa Clara University.

A Consulting Firm Attempts a Transition to Open Source Health Software (Part 2 of 2)

Posted on September 7, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous section of this article covered the history of HLN’s open source offerings. How can the company benefit from this forward-thinking practice to build a sustainable business?

The obvious place to turn for funding is the Centers for Disease Control and Prevention, which lies behind many of the contracts signed by public health agencies. One way or another, a public health agency has to step up and pay for development. This practice is called custom-developed code in the open source policy memorandum of the federal Office of Management and Budget (p. 14 of the PDF).

The free rider problem is acute in health care. In particular, the problems faced by a now-defunct organization, Open Health Tools, were covered in another article of mine. I examined why the potential users of the software felt little inclination to pay for its development.

The best hope for sustaining HLN as an open source vendor is the customization model: when an agency needs a new feature or a customized clinical decision support rule, it contracts with HLN to develop it. Naturally, the agency could contract with anyone it wants to upgrade open source software, but HLN would be the first place to look because they are familiar with the software they built originally.

Other popular models include offering support as a paid service, and building proprietary tools on top of the basic open source version (“open core”). The temptation to skim off the cream of the product and profit by it is so compelling that one of the most vocal stalwarts of the open source process, MariaDB (based on the popular MySQL database), recently broke radically from its tradition and announced a proprietary license for its primary distinguishing extension.

Support has never scaled as a business model; it’s very labor-intensive. Furthermore, it might have made sense to offer support decades ago when each piece of software posed unique integration problems. But if you create good, modern interfaces–as Arzt claims to do–you use standards that are familiar and require little guidance.

The “open core” model has also proven historically to be a weak business model. Those that use it may stay afloat, but they don’t grow the way popular open source projects such as Linux or Python do. The usual explanation for this is that users don’t find the open part of the software useful enough on its own, and don’t want to contribute to it because they feel they are just helping a company build its proprietary business.

Wonks to the Rescue
It may be that Arzt–and others who want to emulate his model in health care–have to foster a policy change in governments. This is certainly starting to happen, as seen in a series of policy announcements by the US government regarding open source software. But this is a long road, and the direction could easily be reversed or allowed to falter. We have already seen false starts with open source software in various Latin American governments–the decade of the 2000s saw many flowery promises along these lines, but hardly any follow-through.

I don’t like to be cynical, but hope may lie in the crushing failures of proprietary vendors to produce usable and accurate software for health care settings. The EHR Incentive Programs under Meaningful Use poured about 28 billion dollars into moving clinicians onto electronic records, almost all of it spent on proprietary products (of course, there were also administration costs for things such as Regional Extension Centers), with little to show in quality improvements or data exchange. The government’s open source initiatives, CONNECT and Direct, got lost in the muddle of non-functional proprietary EHRs.

So the health care industry will have to try something radically new, and the institutions willing to innovate have their fingers on the pulse of cutting-edge trends. This includes open source software. HLN may be able to ride a coming wave.

A Consulting Firm Attempts a Transition to Open Source Health Software (Part 1 of 2)

Posted on September 6, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. (See the full bio above.)

Open source is increasingly understood to be the future of software, because communities working together on shared needs can produce code that is at least as good as proprietary products, while representing user interests more effectively and interoperating without friction. But running an open source project is a complex task, and keeping a business going on it is absolutely perilous. In his 2001 book The Cathedral & the Bazaar, Eric S. Raymond listed half a dozen ways for businesses to profit on open source software, but today only one or two are visible in the field (and they differ from his list).

An Enduring Commitment
Noam H. Arzt, president and founder of HLN Consulting, is trying to make the leap. After getting his PhD from the University of Pennsylvania and working there in various computer-related positions for 20 years, he got the health care bug–like many readers of this article–and decided to devote his career to software for public health. He first encountered the field while working on a public health project among the famous “hot spotters” of depressed Camden, New Jersey, and was inspired by the accomplishments of people in a bad area with minimal resources. Many of his company’s later projects come from the Department of Health and Mental Hygiene in New York City.

Founded in 1997, HLN Consulting has released code under an open source license for some time. It makes sense, because its clients have no reason to compete with anybody, because IT plays a crucial role in public health, and because the needs of different public health agencies overlap a great deal. Furthermore, they’re all strapped for funds. So Arzt tells me that the agency leadership is usually enthusiastic about making the software open source. It just may take a few months to persuade the agency’s lawyers, who are clueless about open source licenses, to put one in the contract.

A few agencies outside of HLN’s clients have picked up the software, though–particularly as the developers adopt modern software practices such as more modular systems and a service-oriented architecture using open, published APIs–but none have yet contributed anything back.

HLN did, however, rack up a recent win with the Immunization Calculation Engine (ICE), software that calculates and alerts clinicians about the vaccinations patients need. The software is normally used by immunization registries that serve states or large municipalities. But eClinicalWorks has also incorporated ICE into its EHR. And the Veterans Health Administration (VHA) chose ICE this year to integrate with its renowned VistA health record. HLN has invested a fair amount of its own time into preparing ICE for integration. Arzt estimates that since HLN developed ICE for a client, the company has invested at least five person-years in upgrading the software, and has received no money directly for doing so. HLN hopes to generate revenue from assisting organizations in configuring and using ICE and its clinical decision support rules, and a new support contract with VHA is the first big step.

Can You Get There From Here?
Arzt is trying now to build on the success of ICE and make a transition from a consulting firm to an open source software firm. A consulting firm typically creates products for a single customer, and has to “fight and claw for every contract,” in Arzt’s words. Maintaining a steady stream of work in such firms is always challenging. In contrast, successful open source software is widely used, and the work put into the software by each contributor is repaid by all the contributions made by others. There is no doubt that HLN is developing products with broad applicability. It all makes economic sense–except that somebody actually has to foot the bill. We’ll look at possibilities in the next section of this article.

Are State Health Agencies Ready for Meaningful Use Stage 2?

Posted on September 23, 2013 | Written By

James Ritchie is a freelance writer with a focus on health care. His experience includes eight years as a staff writer with the Cincinnati Business Courier, part of the American City Business Journals network. Twitter @HCwriterJames.

As part of its public health objectives, Meaningful Use 2 requires doctors and hospitals to report sizable amounts of information.

The idea is that when significant patterns are forming — an outbreak of a certain disease, for example, or a peculiar cluster of symptoms — they’ll be apparent right away.

But someone has to be in position to receive the data.

The responsibility falls to state and local public health departments. Agencies around the country should, theoretically, be preparing for the immunization records, laboratory results and other information they’ll soon be getting.

Just how many will be ready, though, remains to be seen. Many cash-strapped departments lack the IT infrastructure for what’s being asked of them — and the money allocated by the government hasn’t amounted to much, according to a 2012 American Journal of Public Health article by Drs. Leslie Lenert and David Sundwall.

In fact, the authors wrote, the federal effort “has created unfunded mandates that worsen financial strains” on health departments.

There’s a caveat, though: The mandates aren’t really mandates.

“Nothing compels them to do it” except the desire to do the right thing, said Frieda du Toit, owner of Lakeside, Calif.-based Advanced Business Software. “Some directors are interested, some are not. The lack of money is the main thing.”

In our recent interview, du Toit, whose company specializes in information management solutions for health departments, added: “One customer asked me: ‘Am I going to be punished in any way, form or fashion if I don’t support the efforts of my hospitals and care providers?’”

Her firm’s Web-based Public Health Information Management System serves cities and counties throughout the United States, including in California, Texas and Connecticut.

The federal government’s goal is for public health agencies to be involved in four administrative tasks to support MU2, according to the Stage 2 Meaningful Use Public Health Reporting Task Force. The task force is a collaboration between the U.S. Centers for Disease Control and Prevention, nonprofit public health associations and public health practitioners.

The first step is to take place before the start of MU2 — that’s Oct. 1, 2013, for hospitals and Jan. 1, 2014, for individual providers.

The tasks:

  • Declaration of readiness. Public health agencies tell the Centers for Medicare & Medicaid Services what public health initiatives they can support.
  • Registration of intent. Hospitals and providers notify public health agencies in writing what objectives they seek to meet.
  • On-boarding. Medical providers work with health departments to achieve ongoing Meaningful Use data submission.
  • Acknowledgement. Public health agencies inform providers that reportable data has been received.

For doctors and other eligible professionals, MU2 calls for ongoing submission of electronic data for immunizations. Hospitals are to submit not only immunizations but also reportable laboratory results and syndromic surveillance data.

Health care providers whose local public health departments lack the resources to support MU2 are exempt from the reporting requirements.

In Meaningful Use Stage 3, which health IT journalist Neil Versel wrote is likely to begin in 2017, “electronic health records systems with new capabilities, such as the ability to work with public health alerting systems and on-screen ‘buttons’ for submitting case reports to public health, are envisioned,” according to Lenert and Sundwall.

The authors noted: “Public health departments will be required not just to upgrade their systems once, but also to keep up with evolving changes in the clinical care system” prompted by the regulations.

They proposed cloud computing as a better way. Shared systems and remote hosting, Lenert and Sundwall suggested, could get the work done efficiently and affordably, albeit at a cost to individual jurisdictions’ autonomy.

As EMR adoption grows, it would be a shame not to take advantage of the opportunities for public health. The entire health IT effort being pushed by the federal government is, after all, geared toward improving the health of populations.

Without money for the job, though, public health agencies’ ability to support Meaningful Use will likely always be limited. It looks like a good time to think about committing significant funds, embracing cloud-based solutions or both.

De-identified Healthcare Data – Is It Really Unidentifiable?

Posted on September 30, 2011 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

There’s always been some really interesting discussion about EHR vendors selling the data from their EHR software. It turns out that many EHR vendors and other healthcare entities are selling de-identified healthcare data now, but I haven’t heard much public outcry about them doing it. Is it because the public just doesn’t realize it’s happening, or because the public is OK with de-identified data being sold? I’ve heard many argue that they’re happy to have their de-identified data sold if it improves public health or if it gives them a better service at a cheaper cost.

However, a study coming out of Canada has some interesting results when it comes to uniquely identifying people from de-identified data. The only data they used was date of birth, gender, and full postal code data. “When the full date of birth is used together with the full postal code, then approximately 97% of the population are unique with only one year of data.”
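For readers curious how a figure like that is computed, the sketch below (Python with pandas, toy data) measures the underlying quantity: the share of records whose combination of birth date, gender, and postal code appears exactly once in the dataset, which is what makes a record uniquely identifiable.

```python
import pandas as pd

# Toy de-identified dataset; real studies use full population registries.
df = pd.DataFrame({
    "dob":    ["1980-03-02", "1980-03-02", "1975-11-30", "1990-07-14"],
    "gender": ["F", "F", "M", "F"],
    "postal": ["K1A 0B1", "K2P 1L4", "K1A 0B1", "M5V 2T6"],
})

# Count how often each (dob, gender, postal) combination occurs, then
# compute the share of records that are the only member of their group.
combo_counts = df.groupby(["dob", "gender", "postal"]).size()
unique_share = (combo_counts == 1).sum() / len(df)
print(f"{unique_share:.0%} of records are uniquely identifiable")  # 100% here
```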

One thing that concerns me a little about this study is that postal code is a pretty unique identifier. Take out postal code and you’ll find very different results. Why? Because a lot of people share the same birthday and gender. However, the article does offer a reasonable suggestion based on the results of the study:

“Most people tend to think twice before reporting their year of birth [to protect their privacy] but this report forces us all to think about the combination or the totality of data we share,” said Dr. El Emam. “It calls out the urgency for more precise and quantitative approaches to measure the different ways in which individuals can be re-identified in databases – and for the general population to think about all of the pieces of personal information which in combination can erode their anonymity.”

To me, this is the key point. It’s not about creating fear and uncertainty that has no foundation, but about considering more fully the effect on patient privacy of multiple pieces of personal information combined in de-identified patient data.