
What Would a Patient-Centered Security Program Look Like? (Part 2 of 2)

Posted on August 30, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous part of this article laid down a basic premise that the purpose of security is to protect people, not computer systems or data. Let’s continue our exploration of internal threats.

Security Starts at Home

Before we talk about firewalls and anomaly detection for breaches, let’s ask why hospitals, pharmacies, insurers, and others are free to spread health care data on their own, selling it (supposedly de-identified) to all manner of third parties without patient consent or any benefit to the patient.

This is a policy issue that calls for involvement by a wide range of actors throughout society, of course. Policy-makers have apparently already decided that it is socially beneficial–or at least the most feasible course economically–for clinicians to share data with partners helping them with treatment, operations, or payment. There are even rules now requiring those partners to protect the data. Policy-makers have further decided that de-identified data sharing is beneficial to help researchers and even companies using it to sell more treatments. What no one admits is that de-identification lies on a slope–it is not an all-or-nothing guarantee of privacy. The more widely patient data is shared, the more risk there is that someone will break the protections, and that someone’s motivation will change from relatively benign goals such as marketing to something hostile to the patient.
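
To see why the slope matters, consider how few quasi-identifiers it takes to single someone out of a “de-identified” dataset. Here is a minimal Python sketch of a k-anonymity check; the records and field names are invented for illustration. A k of 1 means at least one patient is uniquely identifiable by ZIP code, birth year, and gender alone:

```python
from collections import Counter

# Hypothetical "de-identified" records: direct identifiers removed,
# but quasi-identifiers (ZIP, birth year, gender) remain.
records = [
    {"zip": "02139", "birth_year": 1948, "gender": "F", "diagnosis": "J45"},
    {"zip": "02139", "birth_year": 1948, "gender": "F", "diagnosis": "E11"},
    {"zip": "02139", "birth_year": 1951, "gender": "M", "diagnosis": "I10"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

def k_anonymity(rows):
    """Return the size of the smallest group of rows sharing the same
    quasi-identifier values. k=1 means someone is uniquely identifiable."""
    groups = Counter(tuple(row[q] for q in QUASI_IDENTIFIERS) for row in rows)
    return min(groups.values())

print(f"k-anonymity: {k_anonymity(records)}")
# k=1 here: the 1951 male is unique in this small sample
```

The wider the data is shared, the more outside datasets an adversary can join against those quasi-identifiers, which is why de-identification degrades rather than holds firm.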

Were HIMSS to take a patient-centered approach to privacy, it would also ask how credentials are handed out in health care institutions, and who has the right to view patient data. How do we minimize the chance of a Peeping Tom looking at a neighbor’s record? And what about segmentation of data, so that each clinician can see only what she needs for treatment? Segmentation has been justly criticized as impractical, but observers have been asking for it for years and there’s even an HL7 guide to segmentation. Even so, it hasn’t proceeded past the pilot stage.
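
The HL7 guide is far more elaborate, but the core idea behind segmentation fits in a few lines: label each section of the record with a sensitivity level, and show a viewer only what their role permits. The sketch below is purely illustrative; the labels, roles, and policy are invented, and in a real system the policy would be patient-directed rather than hard-coded:

```python
# Hypothetical sensitivity labels on record sections, loosely inspired by
# the HL7 segmentation idea: each clinician sees only what her role needs.
RECORD = {
    "medications":   {"label": "normal",     "data": "lisinopril 10mg"},
    "mental_health": {"label": "restricted", "data": "..."},
    "substance_use": {"label": "restricted", "data": "..."},
}

# Which labels each role may see (invented policy for illustration).
ROLE_POLICY = {
    "cardiologist": {"normal"},
    "psychiatrist": {"normal", "restricted"},
}

def visible_sections(record, role):
    """Filter the record down to the sections this role is allowed to view."""
    allowed = ROLE_POLICY.get(role, set())
    return {name: section["data"]
            for name, section in record.items()
            if section["label"] in allowed}

print(visible_sections(RECORD, "cardiologist"))  # medications only
```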

Nor does it make sense to talk about security unless we talk about patients’ right to get all their data. Accuracy is related to security, and that means allowing patients to make corrections. I don’t know which would be worse: perfectly secure records that are plain wrong in important places, or incorrect assertions being traded around the Internet.

Patients and the Cloud

HIMSS did not ask respondents whether they stored records at their own facilities or in third-party services. For a while, trust in the cloud seemed to enjoy rapid growth–from 9% in 2012 to 40% in 2013. Another HIMSS survey found that 44% of respondents used the cloud to host clinical applications and data–but that was back in 2014, so the percentage has probably increased since then. (Every survey measures different things, of course.)

But before we investigate clinicians’ use of third parties, we must consider taking patient data out of clinicians’ hands entirely and giving it back to patients. Patients will need security training of their own, under those conditions, and will probably use the cloud to avoid catastrophic data loss. The big advantage they have over clinicians, when it comes to avoiding breaches, is that their data will be less concentrated, making it harder for intruders to grab a million records at one blow. Plenty of companies offer personal health records with some impressive features for sharing and analytics. An open source solution called HEART, described in another article, is in the works.

There’s good reason to believe that data is safer in the cloud than on local, network-connected systems. For instance, many of the complex technologies mentioned by HIMSS (network monitoring, single sign-on, intrusion detection, and so on) are major configuration tasks that a cloud provider can offer its clients at the click of a button. More fundamentally, hospital IT staffs are burdened with a large set of tasks, among which security ranks near the bottom because it doesn’t generate revenue. In contrast, IT staff at cloud providers spend gobs of time keeping up to date on security. They may need extra training to understand the particular regulatory requirements of health care, but the basic ways of accessing data are the same in health care as in any other industry. Respondents to the HIMSS survey acknowledged that cloud systems had low vulnerability (p. 6).

There won’t be any more questions about encryption once patients have their data. When physicians want to see it, they will have to do so over an encrypted path. Even Edward Snowden unreservedly boasted, “Encryption works.”
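
As a minimal illustration of what “patients hold their data, encrypted” could look like, here is a sketch using the Python cryptography package’s Fernet recipe (symmetric, authenticated encryption). Key management, the genuinely hard part, is waved away here; assume the patient holds the key and shares it with a physician over a secure channel:

```python
from cryptography.fernet import Fernet

# The key is held by the patient, not by the cloud host storing the record.
key = Fernet.generate_key()
box = Fernet(key)

# What the cloud stores is ciphertext; without the key it is opaque.
ciphertext = box.encrypt(b"allergy: penicillin")

# A physician granted the key can decrypt over the encrypted path.
plaintext = box.decrypt(ciphertext)
assert plaintext == b"allergy: penicillin"
```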

Security is a way of behaving, not a set of technologies. That fundamental attitude was not addressed by the HIMSS survey, and might not be available through any survey. HIMSS treated security as a routine corporate function, not as a patient right. We might ask the health care field different questions if we returned to the basic goal of all this security, which is the dignity and safety of the patient.

We all know the health record system is broken, and the dismal state of security is one symptom of that failure. Before we invest large sums to prop up a bad record system, let’s re-evaluate security on the basis of a realistic and respectful understanding of the patients’ rights.

What Would a Patient-Centered Security Program Look Like? (Part 1 of 2)

Posted on August 29, 2016 | Written By Andy Oram


HIMSS has just released its 2016 Cybersecurity Survey. I’m not writing this article just to say that the industry-wide situation is pretty bad. In fact, it would be worth hiring a truck with a megaphone to tour the city if the situation were good. What I want to do instead is take a critical look at the priorities as defined by HIMSS, and call for a different industry focus.

We should start off by dispelling notions that there’s anything especially bad about security in the health care industry. Breaches there get a lot of attention because they’re relatively new and because the personal sensitivity of the data strikes home with us. But the financial industry, which we all thought understood security, is no better: more than 500 million financial records were stolen during just the 12-month period ending in October 2014. Retailers are frequently breached. And what about the Office of Personnel Management, one of the government institutions most responsible for safeguarding personal data?

The HIMSS report certainly appears comprehensive to a traditional security professional. It asks about important things–encryption, multi-factor authentication, intrusion detection, audits–and warns the industry of breaches caused by skimping on such things. But before we spend several billion dollars patching the existing system, let’s step back and ask what our priorities are.

People Come Before Technologies

One hint that HIMSS’s assumptions are skewed comes in the section of the survey that asked respondents what motivated them to pursue greater security. The top motivation, cited by 76% of respondents, was a phishing attack (p. 6). In other words, what they noticed out in the field was not some technical breach but a social engineering attack on their staff. The text is hard to interpret, but it appears that the respondents had actually experienced these attacks. If so, it’s a reminder that your own staff is your first line of defense. It doesn’t matter how strong your encryption is if you give away your password.

It’s a long-held tenet of the security field that the most common source of breaches is internal: employees who were malicious themselves, or who mistakenly let intruders in through phishing attacks or other exploits. That’s why (you might notice) I don’t use the term “cybersecurity” in this article, even though it’s part of the title of the HIMSS report.

The security field has standardized ways of training staff to avoid scams. Explain to them the most common vectors of attack. Check that they’re creating strong passwords, an escalating war as computing power grows (and the value of frequent password changes has been challenged). Better yet, use two-factor authentication, which may spare you the infuriating burden of passwords altogether. Run mock phishing campaigns to test your users. Set up regular audits of access to sensitive data–a practice that HIMSS found among only 60% of respondents (p. 3). And give someone the job of actually checking the audit logs, as sketched below.
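
To make that last point concrete, here is a minimal Python sketch of the kind of review an audit-log checker might automate. The log format, names, and threshold are all invented for illustration:

```python
# Hypothetical audit log entries: one (user, patient_id) pair per record access.
audit_log = [
    ("dr_smith", "P001"), ("dr_smith", "P001"), ("dr_smith", "P002"),
    ("nurse_j",  "P003"),
    ("clerk_x",  "P001"), ("clerk_x", "P004"), ("clerk_x", "P005"),
    ("clerk_x",  "P006"), ("clerk_x", "P007"),
]

THRESHOLD = 4  # invented cutoff: flag anyone touching more distinct patients

def flag_unusual_access(log, threshold=THRESHOLD):
    """Count distinct patients viewed per user and flag outliers for
    human review. A crude stand-in for 'actually check the audit logs'."""
    patients_by_user = {}
    for user, patient in log:
        patients_by_user.setdefault(user, set()).add(patient)
    return {u: len(p) for u, p in patients_by_user.items() if len(p) > threshold}

print(flag_unusual_access(audit_log))  # {'clerk_x': 5}
```

The point is not the particular heuristic but that someone owns the job of running it and following up on what it flags.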

Why didn’t HIMSS ask about most of these practices? It began the project with a technology focus instead of a human focus. We’ll take the reverse approach in the second part of this article.