
Thoughts on Privacy in Health Care in the Wake of Facebook Scrutiny

Posted on April 13, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

A lot of health IT experts are taking a fresh look at the field’s (abysmal) record in protecting patient data, following the shocking Cambridge Analytica revelations that cast a new and disturbing light on privacy practices in the computer field. Both Facebook and the many companies that would love to emulate its financial success are searching for general lessons that go beyond the oddities of the Cambridge Analytica mess. (Among other things, the mess involved a loose Facebook sharing policy that was tightened up a couple of years ago, and a purported “academic researcher” who apparently violated Facebook’s terms of service.)

I will devote this article to four lessons from the Facebook scandal that apply especially to health care data–or more correctly, four ways in which Cambridge Analytica reinforces principles that privacy advocates have known for years. Everybody recognizes that the problems modern data sharing practices pose to public life are hard, even intractable, and I will have to content myself with helping to define the issues rather than presenting solutions. The lessons are:

  • There is no such thing as health data.

  • Consent is a meaningless concept.

  • The risks of disclosure go beyond individuals to affect the whole population.

  • Discrimination doesn’t have to be explicit or conscious.

The article will now lay out each concept, how the Facebook events reinforce it, and what it means for health care.

There is no such thing as health data

To be more precise, I should say that there is no hard-and-fast distinction between health data, financial data, voting data, consumer data, or any other category you choose to define. Health care providers are enjoined by HIPAA and other laws to fiercely protect information about diagnoses, medications, and other aspects of their patients’ lives. But a Facebook posting or a receipt from the supermarket can disclose that a person has a certain condition. The compute-intensive analytics that data brokers, marketers, and insurers apply with ever-growing sophistication are aimed at revealing these things. If the greatest impact on your life is that a pop-up ad for some product appears on your browser, count yourself lucky. You don’t know what else someone is doing with the information.
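To make this concrete, here is a deliberately tiny sketch of the kind of inference the paragraph describes. Everything in it is an assumption for illustration: the data is synthetic, the feature names are invented, and scikit-learn is assumed as a dependency. The point is that a classifier never sees any “health data,” yet flags a medical condition from ordinary purchase records.

```python
# Illustrative sketch only: synthetic purchase data, invented feature names,
# and scikit-learn assumed as a dependency. Nothing here is real data.
from sklearn.linear_model import LogisticRegression

# Each row: [glucose_test_strips, sugar_free_items, pharmacy_visits_per_month]
purchases = [
    [0, 1, 0], [0, 0, 1], [1, 8, 3], [2, 6, 4],
    [0, 2, 0], [1, 7, 2], [0, 0, 0], [2, 9, 3],
]
# 1 = shopper known (in this toy dataset) to have diabetes, 0 = not
has_condition = [0, 0, 1, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(purchases, has_condition)

# A new shopper's receipt: no diagnosis, no medical record, just groceries.
# The model still produces a probability that the shopper has the condition.
print(model.predict_proba([[1, 5, 2]])[0][1])
```

Real brokers operate at vastly larger scale and with far richer features, but the principle is the same: what matters is not the category a datum was filed under, but what can be inferred from it.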

I feel a bit of sympathy for Facebook’s management, because few people anticipated that routine postings could identify ripe targets for fake news and inflammatory political messaging (except for the brilliant operatives who did that messaging). On the other hand, neither Facebook nor the US government acted fast enough to shut down the behavior and tell the public about it, once it was discovered.

HIPAA itself is notoriously limited. If someone can escape being classified as a health care provider or a provider’s business associate, they can collect data with abandon and do whatever they like (except in places such as the European Union, where laws require them to use the data only for the purpose they cited when collecting it). App developers consciously strive to define their products in such a way that they sidestep the dreaded HIPAA coverage. (I won’t even go into the weaknesses of HIPAA and subsequent laws, which fail to take modern data analysis into account.)

Consent is a meaningless concept

Even the European Union’s new regulation (the much-publicized General Data Protection Regulation, or GDPR) allows data collection to proceed after user consent. Of course, data must be collected for many purposes, such as payment and shipping at retail web sites. And the GDPR–following a long-established principle of consumer rights–requires further consent if the site collecting the data wants to use it beyond its original purpose. But it’s hard to imagine what uses data will be put to, especially a couple of years in the future.

Privacy advocates have known, since the beginning of the ubiquitous “terms of service,” that few people read them before they press the Accept button. And this ignorance is rational. Even if you read the tiresome and legalistic terms of service (I always do), you are unlikely to understand their implications. So the problem lies deeper than tedious verbiage: even the most sophisticated user cannot predict what’s going to happen to the data she consented to share.

The health care field has advanced farther than most by installing legal and regulatory barriers to sharing. We could do even better by storing all health data in a Personal Health Record (PHR) for each individual instead of at the various doctors, pharmacies, and other institutions where it can be used for dubious purposes. But all use requires consent, and consent is always on shaky grounds. There is also a risk (although I think it is exaggerated) that patients can be re-identified from de-identified data. Either way, both data sharing and the uses of data must be more strictly regulated.
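For readers unfamiliar with how re-identification works, here is a minimal sketch of the classic linkage attack. Everything below is synthetic, and the “voter roll” is a stand-in for any public dataset that carries names alongside the same quasi-identifiers (ZIP code, birth date, sex); pandas is assumed as a dependency.

```python
# A minimal sketch of a linkage attack. All rows are synthetic; the "voter
# roll" stands in for any public dataset that carries names alongside the
# same quasi-identifiers.
import pandas as pd

deidentified = pd.DataFrame({
    "zip":        ["02138", "02139", "02140"],
    "birth_date": ["1945-07-31", "1962-01-12", "1958-03-04"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["hypertension", "depression", "diabetes"],
})

voter_roll = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones"],
    "zip":        ["02138", "02139"],
    "birth_date": ["1945-07-31", "1962-01-12"],
    "sex":        ["F", "M"],
})

# One join re-attaches names to diagnoses wherever the combination of
# quasi-identifiers is unique in both tables.
reidentified = deidentified.merge(voter_roll, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Linkage attacks of exactly this shape are what re-identification researchers have demonstrated against real datasets, which is why the quasi-identifiers matter as much as the explicit identifiers that de-identification removes.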

The risks of disclosure go beyond individuals to affect the whole population

The illusion that an individual can offer informed consent is matched by an even more dangerous illusion that the harm caused by a breach is limited to the individual affected, or even to his family. In fact, data collected legally and pervasively is used daily to make decisions about demographic groups, as I explained back in 1998. Democracy itself took a bullet when Russian political agents used data to influence the British EU referendum and the US presidential election.

Thus, privacy is not the concern of individuals making supposedly rational decisions about how much to protect their own data. It is a social issue, requiring a coordinated regulatory response.

Discrimination doesn’t have to be explicit or conscious

We have seen that data can be used to draw virtual red lines around entire groups of people. Data analytics, unless strictly monitored, reproduce society’s prejudices in software. This has a particular meaning in health care.

Discrimination against many demographic groups (African-Americans, immigrants, LGBTQ people) has been repeatedly documented. Very few doctors would consciously aver that they wish harm on people in these groups, or even that they dismiss their concerns. Yet it happens over and over. The same unconscious or systemic discrimination will affect analytics and the application of its findings in health care.
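To see how prejudice gets reproduced in software without anyone coding it in, consider this toy sketch (synthetic data, scikit-learn assumed). The model never sees race or any protected attribute; it sees a ZIP-code group that acts as a proxy, plus historical referral decisions that were themselves biased.

```python
# Toy illustration: the model never sees race or any protected attribute,
# only a ZIP-code group that acts as a proxy. Data is synthetic, and the
# labels encode biased historical referral decisions, not true medical need.
from sklearn.tree import DecisionTreeClassifier

# Each row: [zip_code_group, symptom_severity]; label = was referred to
# specialist care. Group 1 was historically under-referred.
X = [[0, 3], [0, 4], [0, 5], [1, 3], [1, 4], [1, 5]]
y = [1, 1, 1, 0, 0, 1]

model = DecisionTreeClassifier().fit(X, y)

# Identical severity, different neighborhood: the model reproduces the
# historical under-referral it was trained on.
print(model.predict([[0, 4]]), model.predict([[1, 4]]))  # [1] [0]
```

No one told the model to discriminate; it simply learned the historical pattern, which is exactly why analytics need the strict monitoring the paragraph calls for.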

A final dilemma

Much has been made of Facebook’s policy of collecting data about “friends of friends,” which draws a wide circle around the person giving consent and infringes on the privacy of people who never consented. Facebook did end the practice that allowed Global Science Research to collect data on an estimated 87 million people. But the dilemma behind the “friends of friends” policy is that it inextricably embodies the premise behind social media.

Lots of people like to condemn today’s web sites (not just social media, but news sites and many others–even health sites) for collecting data for marketing purposes. But as I understand it, the “friends of friends” phenomenon lies deeper. Finding connections and building weak networks out of extended relationships is the underpinning of social networking. It’s not just how networks such as Facebook can display to you the names of people they think you should connect with. It underlies everything about bringing you in contact with information about people you care about, or might care about. Take away “friends of friends” and you take away social networking, which has been the most powerful force for connecting people around mutual interests the world has ever developed.

The health care field is currently struggling with a similar demonic trade-off. We desperately hope to cut costs and tame chronic illness through data collection. The more data we scoop up and the more zealously we subject it to analysis, the more we can draw useful conclusions that create better care. But bad actors can use the same techniques to deny insurance, withhold needed care, or exploit trusting patients and sell them bogus treatments. The ethics of data analysis and data sharing in health care require an open, and open-eyed, debate before we go further.

E-Patient Update: Reducing Your Patients’ Security Anxiety

Posted on March 31, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Even if you’re not a computer-savvy person, these days you can hardly miss the fact that healthcare data is a desirable target for cyber-criminals. After all, over the past few years, healthcare data breaches have been in the news almost every day, with some affecting millions of consumers.

As a result, many patients have become at least a bit afraid of interacting with health data online. Some are afraid that data stored on their doctor or hospital’s server will be compromised, some are afraid to manage their data on their own, and others don’t even know what they’re worried about – but they’re scared to get involved with health data online.

As an e-patient who’s lived online in one form or another since the 80s (anyone remember GEnie or CompuServe?), I’ve probably grown a bit too blasé about security risks. While I guard my online banking password as carefully as anyone else, I don’t tend to worry too much about abstract threats posed by someone who might someday, somehow find my healthcare data among millions of other files.

But I realize that most patients – and providers – take these issues very seriously, and with good reason. Even if HIPAA weren’t the law of the land, providers couldn’t afford to have patients feel like their privacy wasn’t being respected. After all, patients can’t get the highest-quality treatment available if they aren’t comfortable being candid about their health behaviors.

What’s more, no provider wants to have their non-clinical data hacked either. Protecting Social Security numbers, credit card details and other financial data is a critical responsibility, and failing at it could cost patients more than their privacy.

Still, if we manage to intimidate the people we’re trying to help, that can’t be good either. Surely we can protect health data without alienating too many patients.

Striking a balance

I believe it’s important to strike a balance between being serious about security and making it difficult or frightening for patients to engage with their data. While I’m not a security expert, here are some thoughts on how to strike that balance, from the standpoint of a computer-friendly patient.

  • Don’t overdo things: Following strong security practices is a good idea, but if they’re upsetting or cumbersome they may defeat your larger purposes. I’m reminded of the policy of one of my parents’ providers, who would only provide a new password for their Epic portal if my folks came to the office in person. Wouldn’t a snail mail letter serve, at least if they used registered mail?
  • Use common-sense procedures: By all means, see to it that your patients access their data securely, but work that into your standard registration process and workflow. By the time a patient leaves your office they should have access to everything they need for portal access (one way to provision that access is sketched after this list).
  • Guide patients through changes: In some cases, providers will want to change their security approach, which may mean that patients have to choose a new ID and password or otherwise change their routine. If that’s necessary, send them an email or text message letting them know that these changes are expected. Otherwise they might be worried that the changes represent a threat.
  • Remember patient fears: While practice administrators and IT staff may understand security basics, and why such protections are necessary, patients may not. Bear in mind that if you take a grim tone when discussing security issues, they may be afraid to visit your portal. Keep security explanations professional but pleasant.
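
As a concrete (and entirely hypothetical) illustration of building portal access into the registration workflow, here is a sketch of issuing a time-limited enrollment token at check-out, using only Python’s standard library. This is not any vendor’s actual portal API; the function names, the 72-hour window, and the token format are all assumptions for the example.

```python
# Hypothetical sketch of issuing a time-limited portal enrollment token at
# registration, so the patient leaves the office ready to enroll. Not any
# vendor's real API; names and the 72-hour window are assumptions.
import hashlib
import hmac
import secrets
import time

SERVER_SECRET = secrets.token_bytes(32)  # kept server-side in practice
TOKEN_TTL_SECONDS = 72 * 3600            # assumed 72-hour validity window

def issue_enrollment_token(patient_id: str) -> str:
    expires = int(time.time()) + TOKEN_TTL_SECONDS
    payload = f"{patient_id}:{expires}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_enrollment_token(token: str) -> bool:
    patient_id, expires, sig = token.rsplit(":", 2)
    payload = f"{patient_id}:{expires}"
    expected = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()

token = issue_enrollment_token("patient-12345")
print(verify_enrollment_token(token))  # True within the validity window
```

A real system would also record the token server-side so it can be invalidated after first use, and would pair it with the “expect a change” notification described in the list above.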

Remember your goals

Speaking as a consumer of patient health data, I have to say that many of the health data sites I’ve accessed are a bit tricky to use. (OK, to be honest, many seem to be designed by a committee of 40-something engineers who never saw a gimmicky interface they didn’t like.)

And that isn’t all. Unfortunately, even a highly usable patient data portal or app can become far more difficult to use if necessary security protections are added to the mix. And of course, sometimes that may be how things have to be.

I guess I’m just encouraging providers who read this to remember their long-term goals. Don’t forget that even security measures should be evaluated as part of a patient’s experience, and at least see that they do as little as possible to undercut that experience.

After all, if a girl-geek and e-patient like myself finds the security management aspect of accessing my data to be a bummer, I can only imagine other consumers will just walk away from the keyboard. With any luck, we can find ways to be security-conscious without imposing major barriers to patient engagement.

E-Patient Update: Is Technology Getting Ahead Of Medical Privacy?

Posted on December 9, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

I don’t know about y’all, but I love, love, love interacting with Google’s AI on my smartphone. It’s beyond convenient – it seems to simply read my mind and dish out exactly the content I need.

That could have unwelcome implications, however, when you bear in mind that Google might be recording your question. Specifically, for a few years now, Google’s AI has apparently been recording users’ conversations whenever it is triggered. While Google makes no secret of the matter, and apparently provides directions on how to erase these recordings, it doesn’t affirmatively ask for your consent either — at least not in any terribly conspicuous way — though it might have buried the request in a block of legal language.

Now, everybody has a different tolerance for risk, and mine is fairly high. So unless an entity does something to suggest to me that it’s a cybercrook, I’m not likely to lose any sleep over the information it has harvested from my conversations. In my way of looking at the world, the odds that gathering such information will harm me are low, while the odds that it will help me are much greater. But I know that others feel very differently than I do.

For these reasons, I think it’s time to stop and take a look at whether we should regulate potential medical conversations with intermediaries like Google, whether or not they have a direct stake in the healthcare world. As this example illustrates, just because they’re neither providers, payers, nor business associates doesn’t mean they don’t manage highly sensitive healthcare information.

In thinking this over, my first reaction is to throw my hands in the air and give up. After all, how can we possibly track or regulate the flow of medical information that falls outside the bounds of HIPAA or state privacy laws? How do we decide what behavior constitutes an egregious leak of medical information, and what could be seen as a mild mistake, given that the rules around provider and business associate behavior may not apply? This is certainly a challenging problem.

But the more I consider these issues, the more I am convinced that we could at least develop some guidelines for the handling of medical information by non-medical third parties, covering what types of consumer disclosures are required when collecting data that might include healthcare information, what steps the intermediary must take to protect the data, and how users can opt out of data collection.
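If such guidelines ever existed, the disclosures could even be made machine-readable. The sketch below is purely hypothetical: the field names and structure are invented for illustration and do not come from any existing standard.

```python
# Hypothetical sketch of a machine-readable disclosure record for a
# non-medical intermediary. Field names are invented for illustration,
# not drawn from any existing standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataHandlingDisclosure:
    collector: str         # who is collecting (e.g., a voice assistant vendor)
    data_categories: list  # what may be captured, incl. possible health content
    retention_days: int    # how long recordings/transcripts are kept
    opt_out_url: str       # where the user can halt collection or delete data
    disclosed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

disclosure = DataHandlingDisclosure(
    collector="example-voice-assistant",
    data_categories=["voice recordings", "query text (may include health questions)"],
    retention_days=540,
    opt_out_url="https://example.com/privacy/opt-out",
)
print(disclosure)
```

A record along these lines would let a browser, an auditor, or a regulator check mechanically whether an intermediary has stated a retention period and an opt-out path before any health question is ever asked.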

Given how complex these issues are, it’s unlikely we would succeed at regulating them effectively the first time, or even the fourth or fifth. And realistically, I doubt we can successfully apply the same standards to non-medical entities fielding health questions as we can to providers or business associates. That being said, I think we should pay more attention to such issues. They are likely to become more important, not less, as time goes by.