
Exchange Value: A Review of Our Bodies, Our Data by Adam Tanner (Part 1 of 3)

Posted on January 25, 2017 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

A lot of people are feeling that major institutions of our time have been compromised, hijacked, or perverted in some way: journalism, social media, even politics. Readers of Adam Tanner’s new book, Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records, might well add health care data to that list.

Companies collecting our data–when they are not ruthlessly trying to keep their practices secret–hammer us with claims that this data will improve care and lower costs. Anecdotal evidence suggests it does. But the way this data is used now, it serves the business agendas of drug companies and health care providers who want to sell us treatments we don’t need. When you add up the waste of unnecessary tests and treatments along with the money spent on marketing, as well as the data collection that facilitates that marketing, I’d bet it dwarfs any savings we currently get from data collection.

How we got to our current data collection practices

Tanner provides a bit of history of data brokering in health care, along with some intriguing personalities who pushed the industry forward. At first, there was no economic incentive to collect data–even though visionary clinicians realized it could help find new diagnoses and treatments. Tanner says that the beginnings of data collection came with the miracle drugs developed after World War II. Now that pharmaceutical companies had a compelling story to tell, ground-breaking companies such as IMS Health (still a major player in the industry) started to help them target physicians who had both the means of using their drugs–that is, patients with the target disease–and an openness to persuasion.

Lots of data collection initiatives started with good intentions, some of which paid off. Tanner mentions, as one example, a computer program in the early 1970s that collected pharmacy data in the pursuit of two laudable goals (Chapter 2, page 13): preventing patients from getting multiple prescriptions for the same drug, and preventing adverse interactions between drugs. But the collection of pharmacy data soon found its way to the current dominant use: a way to help drug companies market high-profit medicines to physicians.

The dual role of data collection–improving care but taking advantage of patients, doctors, and payers–persists over the decades. For instance, Tanner mentions a project by IMS Health (which he treats pretty harshly in Chapter 5) collecting personal data from AIDS patients in 1997 (Chapter 7, page 70). Tanner doesn’t follow through to say what IMS did with the AIDS data, but I am guessing that AIDS patients don’t offer juicy marketing opportunities, and that this initiative was aimed at improving the use and effectiveness of treatments for this very needy population. And Chapter 7 ends with a list of true contributions to patient health and safety created by collecting patient data.

Chapter 6 covers the important legal battles fought by several New England states (including the scrappy little outpost known for its worship of independent thinking, New Hampshire) to prevent pharmacies from selling data on what doctors are prescribing. These attempts were quashed by the well-known 2011 Supreme Court ruling on Vermont’s law. All questions of privacy and fairness were submerged by considering the sale of data to be a matter of free speech. As we have seen during several decisions related to campaign financing, the current Supreme Court has a particularly expansive notion of what the First Amendment covers. I just wonder what they will say when someone who breaks into the records of an insurer or hospital and steals several million patient records pleads free speech to override the Computer Fraud and Abuse Act.

Tanner has become intrigued by, and even enamored of, the organization Patient Privacy Rights and its founder, Deborah Peel. I am closely associated with this organization and with Peel as well, having worked on some of their privacy summits and brought other people into their circle. Because Tanner airs some criticisms of Peel, I’d like to offer my own observation: she has made exaggerated and unfair criticisms of health IT in the past, but has moderated her views a great deal. Working with experts in health IT who are sympathetic to patient privacy, she has established Patient Privacy Rights during the 2010s as a responsible and respected voice in the health care field. So I counter the quotes Tanner repeats calling Peel “crazy” (Chapter 8, page 83) by hailing her as a reputable and crucial force in modern health IT.

Coincidentally, Tanner refers (Chapter 8, page 79) to a debate that I moderated between IMS representative Kim Gray and Michelle De Mooy (available in a YouTube video). The discussion started off quite tame but turned up valuable insights during the question-and-answer period (starting at 38:33 in the video) about data sharing and the role of de-identification.

While the Supreme Court ruling stripped doctors of control over data about their practices–a bit of poetic irony, perhaps, if you consider their storage of patient data over the decades as an unjust taking–the question of patient rights was treated as irrelevant. The lawyer for the data miners said, “The patients have nothing to do with this” (Chapter 6, page 57) and apparently went unchallenged. How can patients’ interest in their own data be of no concern? For that question we need to look at data anonymization, also known as de-identification. This will begin the next section of our article.

We Share Health Data with Marketing Companies, Why Not with Healthcare Providers? Answer: $$

Posted on November 20, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

For those who don’t realize it, your health data is being shared all over the place. Yes, we like to think that our health care data is being stored and protected and that laws like HIPAA keep it safe, but there are plenty of ways to legally share health care data today. In fact, many EHR vendors sell your health care data for a pretty penny.

Of course, many would argue that it’s shared in a way that complies with all the laws and that your health record isn’t individually identified; they’re only sharing your health data in a de-identified manner. Others would argue that health data can’t truly be de-identified and that there are ways to re-identify it. I’ll leave those arguments for another post. We’ll also leave for a future post the question of whether all this sharing of health data (usually with marketing, pharma and insurance companies) is safe.

What’s undeniable is that health data for pretty much all of us is being bought and sold all over health care. If you don’t believe it’s so, take a minute to look at the work of Deborah Peel from Patient Privacy Rights and learn about her project theDataMap. She’ll be happy to inform you of all the ways data is currently being bought and sold. It’s a really big business.

Here’s where the irony comes in. We have no trouble sharing health data with marketing companies, payers and pharma companies that are willing to pay for access to that data (yes, even EHR vendors share it, and let’s be clear: not all EHR vendors share data with these outside companies, but many do). Yet when we ask EHR vendors to share health data with other EHR vendors or with an HIE, they balk at the idea as if it’s impossible. They follow that up with a bunch of lame excuses about HIPAA privacy or the complexity of health care data.

Let’s call a spade a spade. We could pretty easily be interoperable in health care if we wanted to be. We know that’s true because when the money is there from these third party companies, EHR vendors manage to share data with them. The problem has been that the money has never been there to motivate EHR vendors to make interoperability with other EHR vendors possible. In fact, you could easily argue that the money was telling EHR vendors not to be interoperable.
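To make that point concrete, the mechanics of moving a record between systems aren’t exotic. Here’s a minimal, purely illustrative sketch of pulling a patient record over a standards-based API like FHIR; the server URL and patient ID are made up, and a real exchange would also need authorization and consent handling.

```python
# Illustrative only: fetching a Patient resource over a FHIR REST API.
# The endpoint URL and patient ID are hypothetical; a real exchange also
# needs authorization (e.g., OAuth2/SMART on FHIR) and consent handling.
import requests

FHIR_BASE = "https://ehr.example.com/fhir"  # hypothetical EHR endpoint
PATIENT_ID = "12345"                        # hypothetical patient ID

def fetch_patient(base_url: str, patient_id: str) -> dict:
    """Retrieve a Patient resource as JSON from a FHIR server."""
    response = requests.get(
        f"{base_url}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    patient = fetch_patient(FHIR_BASE, PATIENT_ID)
    print(patient.get("id"), patient.get("birthDate"))
```

The hard parts of interoperability aren’t the HTTP calls; they’re the business incentives, data governance and consent questions around them, which is exactly the point.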

However, times are changing. Certainly the government pressure to be interoperable is out there, but that doesn’t really motivate the industry unless there are financial teeth behind it. Luckily, those financial teeth are starting to appear in the form of value-based reimbursement and the move away from fee for service. That trend and others are pushing healthcare providers to want interoperable health records as an important part of their business. That’s a far cry from the days when interoperability was seen as bad for their business.

I heard about this shift firsthand recently when I was talking with Micky Tripathi, President & CEO of the Massachusetts eHealth Collaborative. Micky told me that his organization had recently run a few RFPs for healthcare organizations searching for an EHR. He recounted that interoperability of health records was not only included in the RFPs but was one of the deciding factors in the healthcare organizations’ EHR selections. That would never have been said even 3-5 years ago.

No doubt interoperability of health records has a long way to go, but there are signs that times are changing. The economics are starting to make sense for organizations to embrace interoperability. That’s a great thing since we know they can do it once the right economic motivations are present.

Government Surveillance and Privacy of Personal Data

Posted on April 6, 2015 | Written By John Lynn


Dr. Deborah Peel from Patient Privacy Rights always keeps me updated on some of the latest news coverage around privacy and government surveillance. Obviously, it’s a big challenge in healthcare and she’s the leading advocate for patient privacy.

Today she sent me a link to this John Oliver interview with Snowden. The video is pretty NSFW, with quite a bit of vulgarity in it (it’s John Oliver on HBO, so you’ve been warned). However, much like Stephen Colbert and Jon Stewart, Oliver talks about some really important topics in a funny way. Plus, the part where he’s waiting to see whether Snowden is going to actually show up for the interview is hilarious.

The humor aside, about 10 minutes in John Oliver makes this incredibly insightful observation:

There are no easy answers here. We all naturally want perfect privacy and perfect safety, but those two things cannot coexist.

Either you have to lose one of them or you have to accept some reasonable restrictions on both of them.

This is the challenge of privacy and security. There are risks to having data available electronically and flowing between healthcare providers. However, there are benefits as well.

I’ve found the right approach is to focus keenly on the benefits you want to achieve in using technology in your organization. Then, once the technology is focused on those benefits, work through all of the risks you face. And once you have that list of risks, work to mitigate them as much as possible.

As my hacker friend said, “You’ll never be 100% secure. Someone can always get in if they’re motivated enough. However, you can make it hard enough for them to breach that they’ll go somewhere else.”

De-Identification of Data in Healthcare

Posted on January 14, 2015 | Written By John Lynn


Today I had a chance to sit down with Khaled El Emam, PhD, CEO and Founder of Privacy Analytics, to talk about healthcare data and the de-identification of that healthcare data. Data is at the center of the future of healthcare IT and so I was interested to hear Khaled’s perspectives on how to manage the privacy and security of that data when you’re working with massive healthcare data sets.

Khaled and I started off the conversation talking about whether healthcare data could indeed be de-identified or not. My favorite Patient Privacy Rights advocate, Deborah C. Peel, MD, has often made the case for why supposedly de-identified healthcare data is not really private or secure since it can be re-identified. So, I posed that question to Khaled and he suggested that Dr. Peel is only telling part of the story when she references stories where healthcare data has been re-identified.

Khaled makes the argument that in all of the cases where healthcare data has been re-identified, it was because those organizations did a poor job of de-identifying the data. He acknowledges that many healthcare organizations don’t do a good job of de-identifying healthcare data, so it is a major problem that Dr. Peel should be highlighting. However, just because one organization does a poor job of de-identifying data doesn’t mean that proper de-identification of healthcare data should be thrown out.

This kind of reminds me of when people ask me if EHR software is secure. My answer is always that EHR software can be more secure than paper charts. However, it depends on how well the EHR vendor and the healthcare organization’s staff have done at implementing security procedures. When it’s done right, an EHR is very secure. When it’s done wrong, an EHR can be very insecure. Khaled is making a similar argument when it comes to de-identified health data.

Khaled did acknowledge that the risks are never going to be zero. However, if you de-identify healthcare data using proper techniques, the risks are small enough that they are similar to the risks we take every day with our healthcare data. I think this is an important point since the reality is that organizations are going to access and use healthcare data. That is not going to stop. I really don’t think there’s any debate on this. Therefore, our focus should be on minimizing the risks associated with this healthcare data sharing. Plus, we should hold organizations accountable for the healthcare data sharing they’re doing.
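For those wondering what “proper techniques” can look like, here’s a rough, purely illustrative sketch in the spirit of HIPAA’s Safe Harbor rules (strip direct identifiers, generalize quasi-identifiers like ZIP codes, dates and extreme ages). To be clear, this is not Privacy Analytics’ methodology, the field names are hypothetical, and real de-identification also requires measuring re-identification risk across the whole dataset.

```python
# Illustrative only: a toy de-identification pass in the spirit of HIPAA
# Safe Harbor. This is NOT Privacy Analytics' methodology; the field names
# are hypothetical, and a real pipeline also measures re-identification risk.

DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "address", "mrn"}

def deidentify(record: dict) -> dict:
    """Strip direct identifiers and generalize common quasi-identifiers."""
    # 1. Drop direct identifiers outright.
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

    # 2. Generalize ZIP code to its first three digits (Safe Harbor style).
    if "zip" in out:
        out["zip"] = str(out["zip"])[:3] + "**"

    # 3. Reduce dates of service to the year only.
    if "visit_date" in out:
        out["visit_date"] = str(out["visit_date"])[:4]

    # 4. Top-code ages over 89, which Safe Harbor treats as identifying.
    if isinstance(out.get("age"), int) and out["age"] > 89:
        out["age"] = "90+"

    return out

example = {
    "name": "Jane Doe", "ssn": "123-45-6789", "zip": "78701",
    "visit_date": "2015-01-14", "age": 93, "diagnosis": "J45.909",
}
print(deidentify(example))
# {'zip': '787**', 'visit_date': '2015', 'age': '90+', 'diagnosis': 'J45.909'}
```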

Khaled also suggested that one of the challenges the healthcare industry faces with de-identifying healthcare data is that there’s a shortage of skilled professionals who know how to do it properly. I’d suggest that many who are faced with de-identifying data have the right intent, but likely lack the skills needed to ensure that the de-identification is done properly. This isn’t a problem that will be solved easily, but it should ease as data security and privacy become more important.

What do you think of de-identification in healthcare? Is the way it’s being done a problem today? I see no end to the use of data in healthcare, and so we really need to make sure we’re de-identifying healthcare data properly.

IMS IPO and Health Data Privacy

Posted on January 7, 2014 | Written By

The following is a guest post by Dr. Deborah Peel, Founder of Patient Privacy Rights. There is no bigger advocate of patient privacy in the world than Dr. Peel. I’ll be interested to hear people’s comments and reactions to Dr. Peel’s guest post below. I look forward to an engaging conversation on the subject.

Clearly the way to understand the massive hidden flows of health data is through SEC filings.

For years, people working in the healthcare and HIT industries and in government have claimed PPR was "fear-mongering", even while they ignored or denied the evidence I presented in hundreds of talks about dozens of companies that sell health data (see the slides on our website).

But IMS SEC filings are formal, legal documents and IMS states that it buys “proprietary data sourced from over 100,000 data suppliers covering over 780,000 data feeds globally”. It buys and aggregates sensitive “prescription” records, “electronic medical records”, “claims data”, and more to create “comprehensive”, “longitudinal” health records on “400 million” patients.

* All purchases and subsequent sales of personal health records are hidden from patients. Patients are not asked for informed consent or given meaningful notice.
* IMS Health Holdings sells health data to “5,000 clients”, including the US Government.

These statements show the GREAT need for a comprehensive health data map, and show that such a map will potentially include a billion places where Americans’ sensitive health data flows.

In what universe is our health data “private and secure”?

Is Your EMR Compromising Patient Privacy?

Posted on November 20, 2013 | Written By

James Ritchie is a freelance writer with a focus on health care. His experience includes eight years as a staff writer with the Cincinnati Business Courier, part of the American City Business Journals network. Twitter @HCwriterJames.

Two prominent physicians this week pointed out a basic but, in the era of information as a commodity, sometimes overlooked truth about EMRs: They increase the number of people with access to your medical data thousands of times over.

Dr. Mary Jane Minkin said in a Wall Street Journal video panel on EMR and privacy that she dropped out of the Yale Medical Group and Medicare because she didn’t want her patients’ information to be part of an EMR.

She gave an example of why: Minkin, a gynecologist, once treated a patient for decreased libido. When the patient later visited a dermatologist in the Yale system, that sensitive bit of history appeared on a summary printout.

“She was outraged,” she told Journal reporter Melinda Beck. “She felt horrible that this dermatologist would know about her problem. She called us enraged for 10 or 15 minutes.”

Dr. Deborah Peel, an Austin psychiatrist and founder of the nonprofit group Patient Privacy Rights, said she’s concerned about the number of employees, vendors and others who can see patient records. Peel is a well-known privacy advocate but has been accused by some health IT leaders of scaremongering.

“What patients should be worried about is that they don’t have any control over the information,” she said. “It’s very different from the paper age where you knew where your records were. They were finite records and one person could look at them at a time.”

She added: “The kind of change in the number of people who can see and use your records is almost uncountable.”

Peel said the lack of privacy causes people to delay or avoid treatment for conditions such as cancer, depression and sexually transmitted infections.

But Dr. James Salwitz, a medical oncologist in New Jersey, said on the panel that the benefits of EMR, including greater coordination of care and reduced likelihood of medical errors, outweigh any risks.

The privacy debate doesn’t have clear answers. Paper records are, of course, not immune to being lost, stolen or mishandled.

In the case of Minkin’s patient, protests aside, it’s reasonable for each physician involved in her care to have access to the complete record. While she might not think certain parts of her history are relevant to particular doctors, spotting non-obvious connections is an astute clinician’s job. At any rate, even without an EMR, the same information might just as easily have landed with the dermatologist via fax.

That said, privacy advocates have legitimate concerns. Since it’s doubtful that healthcare will go back to paper, the best approach is to improve EMR technology and the procedures that go with it.

Plenty of work is underway.

For example, at the University of Texas at Arlington, researchers are leading a National Science Foundation project to keep healthcare data secure while ensuring that the anonymous records can be used for secondary analysis. They hope to produce groundbreaking algorithms and tools for identifying privacy leaks.

“It’s a fine line we’re walking,” Heng Huang, an associate professor in UT Arlington’s Computer Science & Engineering Department, said in a press release this month. “We’re trying to preserve and protect sensitive data, but at the same time we’re trying to allow pertinent information to be read.”
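The press release doesn’t spell out the team’s algorithms, but one standard way researchers flag potential privacy leaks in supposedly anonymous data is k-anonymity: counting how many records share the same combination of quasi-identifiers. Here’s a minimal, illustrative sketch of that idea; it is not the UT Arlington project’s actual approach, and the column names are hypothetical.

```python
# Illustrative only: measuring k-anonymity over quasi-identifiers, one
# common way to flag potential privacy leaks in "anonymous" data. This is
# not the UT Arlington project's algorithm; the column names are made up.
from collections import Counter

QUASI_IDENTIFIERS = ("zip3", "birth_year", "sex")  # hypothetical columns

def smallest_group_size(records):
    """Return k: the size of the rarest quasi-identifier combination.
    A small k (1 or 2) means some individuals are nearly unique in the
    data and therefore at higher risk of re-identification."""
    groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in records)
    return min(groups.values())

data = [
    {"zip3": "787", "birth_year": 1960, "sex": "F", "dx": "E11.9"},
    {"zip3": "787", "birth_year": 1960, "sex": "F", "dx": "I10"},
    {"zip3": "021", "birth_year": 1988, "sex": "M", "dx": "J45.909"},
]
print(smallest_group_size(data))  # 1 -> the third record is unique
```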

When it comes to balancing technology with patient privacy, healthcare professionals will be walking a fine line for some time to come.