To Improve Health Data Security, Get Your Staff On Board

Posted on February 2, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

As most readers know, last year was a pretty lousy one for healthcare data security. For one thing, there was the spectacular attack on health insurer Anthem Inc., which exposed personal information on nearly 80 million people. But that was just the headline event. During 2015, the HHS Office for Civil Rights logged more than 100 breaches affecting 500 or more individuals, including four of the five largest breaches in its database.

But will this year be better? Sadly, as things currently stand, I think the best guess is “no.” When you combine the increased awareness among hackers of health data’s value with the modest amounts many healthcare organizations spend on security, it seems like the problem will actually get worse.

Of course, HIT leaders aren't just sitting on their hands. According to a HIMSS estimate, hospitals and medical practices will spend about $1 billion on cybersecurity this year. And a recent HIMSS survey of healthcare executives found that information security had become a top business priority for 90% of respondents.

But it will take more than a round of new technical investments to truly shore up healthcare security. I’d argue that until the culture around healthcare security changes — and executives outside of the IT department take these threats seriously — it’ll be tough for the industry to make any real security progress.

In my opinion, the changes should include the following:

  • Boost security education:  While your staff may have had the best HIPAA training possible, that doesn't mean they're prepared for the growing threat cyber-strikes pose. They need to know that these days, the data they're protecting might as well be money itself, and that they are the bankers who must keep an eye on the vault. Health leaders must make them understand the threat on a visceral level.
  • Make it easy to report security threats: While readers of this publication may be highly IT-savvy, most workers aren't. If you haven't done so already, create a hotline for reporting security concerns (anonymously if callers wish), staffed by someone who will listen patiently to non-techies struggling to explain their misgivings. If you wait for people who are intimidated by Windows to call the scary IT department, you'll miss many legitimate security questions, especially if the staffer isn't confident that anything is wrong.
  • Reward non-IT staffers for showing security awareness: Not only should organizations encourage staffers to report possible security issues — even if it’s a matter of something “just not feeling right” — they should acknowledge it when staffers make a good catch, perhaps with a gift card or maybe just a certificate. It’s pretty straightforward: reward behavior and you’ll get more of it.
  • Use security reports to refine staff training: Certainly, the HIT department may benefit from alerts passed on by the rest of the staff. But the feedback this process produces can be put to broader use.  Once a quarter or so, if not more often, analyze the security issues staffers are bringing to light. Then, have brown bag lunches or other types of training meetings in which you educate staffers on issues that have turned up regularly in their reports. This benefits everyone involved.

Of course, I’m not suggesting that security awareness among non-techies is sufficient to prevent data breaches. But I do believe that healthcare organizations could prevent many a breach by taking advantage of their staff’s instincts and observational skills.

Wearable Health Trackers Could Pose Security Risks

Posted on February 1, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Last October, security researchers made waves when they unveiled what they described as a 10-second hack of a Fitbit wearable health tracker. At the Hack.Lu 2015 conference, Fortinet security researcher Axelle Apvrille laid out a method for hacking the wearable through its Bluetooth radio. Apparently, Apvrille was able to infect the Fitbit Flex from as much as 15 feet away, manipulate data on the tracker, and use the Flex to distribute her code to a computer.

Fitbit, for its part, denied that its devices can serve as vehicles for infecting users with malware. And Apvrille herself admitted publicly that the demonstration was more theoretical than practical. In a tweet following the conference, she noted that she had not demonstrated a way to execute malicious code on the victim's host.

But the incident does bring attention to a very serious issue. While consumers are picking up health trackers at a breathless pace, relatively little attention has been paid to whether the data on these devices is secure. Perhaps even more importantly, too few experts are seeking ways to prevent these devices from being turned into a jumping-off point for malware. After all, like any other lightly-guarded Internet of Things device, a wearable tracker could ultimately allow an attacker to access enterprise healthcare networks, and possibly even sensitive PHI or financial data.

It’s not as though we aren’t aware that connected healthcare devices are rich hunting grounds. For example, security groups are beginning to focus on securing networked medical devices such as blood gas analyzers and wireless infusion pumps, as it’s becoming clear that they might be accessible to data thieves or other malicious intruders. But perhaps because wearable trackers are effectively “healthcare lite,” used almost exclusively by consumers, the threat they could pose to healthcare organizations over time hasn’t generated a lot of heat.

But health tracker security strategies deserve a closer look. Here are some sample suggestions on how to secure health and fitness devices from Milan Patel, IoT Security Program Director at IBM:

  • Device design: Health tracker manufacturers should establish a secure hardware and software development process, including source code analysis to pinpoint code vulnerabilities and security testing to find runtime vulnerabilities. Use trusted manufacturers who secure components, and a trusted supply chain. Also, deliver secure firmware/software updates and audit them.
  • Device deployment: Be sure to use strong encryption to protect the privacy and integrity of data on the device, during transmission from the device to the cloud, and in the cloud (a minimal sketch of this idea follows the list). To further control device data, give consumers the ability to set up user and usage privileges for their data, and an option to anonymize the data. Secure all communication channels to protect against data change, corruption or observation.
  • Manage security: Include trackers in the set of technology being monitored, and set alerts for intrusion. Audit logging is desirable for the devices, as well as for the network connections and the cloud. The tracker should ideally be engineered with a fail-safe mode that shuts it down safely to a non-operational state, to protect against attacks.
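
To make the encryption point in the deployment bullet a bit more concrete, here is a minimal sketch of what authenticated encryption of a tracker reading might look like before the data leaves the device. It is not drawn from Patel's materials; it assumes Python and the widely used cryptography package, and the device ID, field names and key handling are purely illustrative.

```python
# Hypothetical sketch: encrypt a health tracker reading on the device so that
# both its privacy and its integrity are protected in transit and at rest.
# Fernet (from the "cryptography" package) bundles AES encryption with an HMAC,
# so tampered or corrupted data is rejected on decryption.
import json
from cryptography.fernet import Fernet, InvalidToken

# In a real device this key would be provisioned securely during manufacturing
# or pairing, never generated and held like this; done here only to keep the
# sketch self-contained and runnable.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

reading = {"device_id": "tracker-0001", "steps": 8432, "heart_rate": 71}

# On the device: encrypt before storing locally or transmitting to the cloud.
token = cipher.encrypt(json.dumps(reading).encode("utf-8"))

# In the cloud: decrypt and verify; altered ciphertext raises InvalidToken.
try:
    verified = json.loads(cipher.decrypt(token).decode("utf-8"))
    print("accepted reading:", verified)
except InvalidToken:
    print("rejected: data was modified in transit or the key is wrong")
```

None of this replaces securing the Bluetooth link or the cloud service itself, but it shows how little code it takes to satisfy the "privacy and integrity" requirement for the data a tracker produces.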

This may sound like a great deal of effort to expend on these relatively unsophisticated devices. And at present, it just may be overkill. But it’s worth preparing for a world in which health trackers are increasingly capable and connected, and increasingly attractive to the attackers who want your data.

NIST Goes After Infusion Pump Security Vulnerabilities

Posted on January 28, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

As useful as networked medical devices are, it’s become increasingly apparent that they pose major security risks.  Not only could intruders manipulate networked devices in ways that could harm patients, they could use them as a gateway to sensitive patient health information and financial data.

To make a start at taming this issue, the National Institute of Standards and Technology has kicked off a project focused on boosting the security of wireless infusion pumps (Side Note: I wonder if this is in response to Blackberry’s live hack of an infusion pump). In an effort to be sure researchers understand the hospital environment and how the pumps are deployed, NIST’s National Cybersecurity Center of Excellence (NCCoE) plans to work with vendors in this space. The NCCoE will also collaborate on the effort with the Technological Leadership Institute at the University of Minnesota.

NCCoE researchers will examine the full lifecycle of wireless infusion pumps in hospitals, including purchase, onboarding of the asset, training for use, configuration, use, maintenance, decontamination and decommissioning of the pumps. This makes a great deal of sense. After all, points of network connection are becoming so decentralized that every touchpoint is suspect.

The team will also look at what types of infrastructure interconnect with the pumps, including the pump server, alarm manager, electronic medication administration record system, point of care medication, pharmacy system, CPOE system, drug library, wireless networks and even the hospital’s biomedical engineering department. (It’s sobering to consider the length of this list, but necessary. After all, more or less any of them could conceivably be vulnerable if a pump is compromised.)

Wisely, the researchers also plan to look at the way a wide range of people engage with the pumps, including patients, healthcare professionals, pharmacists, pump vendor engineers, biomedical engineers, IT network risk managers, IT security engineers, IT network engineers, central supply workers and patient visitors — as well as hackers. This data should provide useful workflow information that can be used even beyond cybersecurity fixes.

While the NCCoE and University of Minnesota teams may expand the list of security challenges as they go forward, they’re starting with looking at access codes, wireless access point/wireless network configuration, alarms, asset management and monitoring, authentication and credentialing, maintenance and updates, pump variability, use and emergency use.

Over time, NIST and the U of M will work with vendors to create a lab environment where collaborators can identify, evaluate and test security tools and controls for the pumps. Ultimately, the project’s goal is to create a multi-part practice guide which will help providers evaluate how secure their own wireless infusion pumps are. The guide should be available late this year.

In the meantime, if you want to take a broader look at how secure your facility's networked medical devices are, you might want to review the FDA's guidance on the subject, "Cybersecurity for Networked Medical Devices Containing Off-the-Shelf Software." The guidance doc, which was issued last summer, is aimed at device vendors, but the agency also offers a companion document on the topic for healthcare organizations.

If this topic interests you, you may also want to watch this video interview talking about medical device security with Tony Giandomenico, a security expert at Fortinet.

Security Concerns Threaten Mobile Health App Deployment

Posted on January 26, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Healthcare organizations won’t get much out of deploying mobile apps if consumers won’t use them. And if consumers are afraid that their personal data will be stolen, they’ve got a reason not to use your apps. So the fact that both consumers and HIT execs are having what I’d deem a crisis of confidence over mHealth app security isn’t a good sign for the current crop of mobile health initiatives.

According to a new study by security vendor Arxan, which polled 815 consumers and 268 IT decision-makers, more than half of consumer respondents who use mobile health apps expect their health apps to be hacked in the next six months.

These concerns could have serious implications for healthcare organizations, as 76% of health app users surveyed said they would change providers if they became aware that the provider's apps weren't secure. And perhaps even more significantly, 80% of consumer health app users told Arxan that they'd switch to another provider if they found out that the alternate provider's apps were better secured. In other words, consumer perceptions of a provider's health app security aren't just abstract fears — they're actually starting to affect patients' health decision-making.

Perhaps you're telling yourself that your own apps aren't terribly exposed. But don't be so sure. When Arxan tested a batch of 71 popular mobile health apps for security vulnerabilities, 86% were shown to have at least two of the OWASP Mobile Top 10 Risks. The researchers found that vulnerable apps could be tampered with and reverse-engineered, as well as compromised to expose sensitive health information. Relatively simple hacks could also force critical health apps to malfunction, Arxan researchers concluded.

The following data also concerned me. Of the apps tested, 19 had been approved by the FDA and 15 by the UK National Health Service. And at least where the FDA is concerned, my assumption would be that FDA-approved apps were more secure than non-approved ones. But Arxan's research team found that both the FDA- and National Health Service-blessed apps were among the most vulnerable of all the apps studied.

In truth, I’m not incredibly surprised that health IT leaders have some work to do in securing mobile health apps. After all, mobile health app security is evolving, as the form and function of mHealth apps evolve. In particular, as I’ve noted elsewhere, mobile health apps are becoming more tightly integrated with enterprise infrastructure, which takes the need for thoughtful security precautions to a new level.

But guidelines for mobile health security are emerging. For example, in the summer of last year, the National Institute of Standards and Technology released a draft of its mobile health cybersecurity guidance, "Securing Electronic Health Records on Mobile Devices" — complete with detailed architecture. I'd also wager that more mHealth security standards will emerge this year.

In the meantime, it's worth remembering that patients are paying close attention to health app security, and that they're unlikely to give your organization a pass if it gets hacked. While security has always been a high-stakes issue, the stakes have gotten even higher.

Biometric Use Set To Grow In Healthcare

Posted on January 15, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

I don't know about you, but until recently I thought of biometrics as almost a toy technology, something you'd imagine a fictional spy like James Bond circumventing (through pure manliness) when entering the archenemy's hideout. Or perhaps retinal or fingerprint scans would protect Batman's lair.

But today, in 2016, biometric apps are far from fodder for mythic spies. The price of fingerprint scan-based technology has fallen to nearly zero, with Apple, for example, offering fingerprint-based security as a standard part of its iOS operating system. Another free biometric security option comes courtesy of Intel's True Key app, which allows you to access encrypted app data by scanning and recognizing your facial features. And these are just trivial examples. Biometrics technologies, in short, have become powerful, usable and relatively affordable — making them a realistic option for some healthcare security problems.

If none of this suggests to you that the healthcare industry needs to adopt biometrics, you may have a beef with Raymond Aller, MD, director of informatics at the University of Southern California. In an interview with Healthcare IT News, Dr. Aller argues that our current system of text-based patient identification is actually dangerous, and puts patients at risk of improper treatments and even death. He sees biometric technologies as a badly needed, precise means of patient identification.

What's more, biometrics can be linked up with patients' EMR data, making sure the right history is attached to the right person. One health system, Novant Health, uses technology that registers a patient's fingerprints, veins and face at enrollment. Another vendor is developing software that will notify the patient's health insurer every time that patient arrives and leaves, a step intended to ensure providers can't submit fraudulent bills for care not delivered.

As intriguing as these possibilities are, there are certainly some issues holding back the use of biometric approaches in healthcare. Some implementations are exposed; Apple's Touch ID, for example, has proven vulnerable to spoofing. Not only that, storing and managing biometric templates securely is more challenging than it seems, researchers note. What's more, hackers are beginning to target consumer-focused fingerprint sensors, and are likely to seek access to other forms of biometric data.

Fortunately, biometric security solutions like template protection and biocryptography are becoming more mature. As biometric technology grows more sophisticated, patients will be able to use bio-data to safely access their medical records and also pay their bills. For example, MasterCard is exploring biometric authentication for online payments, using biometric data as a password replacement. MasterCard Identity Check allows users to authenticate transactions via video selfie or via fingerprint scanning.

As readers might guess from skimming the surface of biometric security, it comes with its own unique security challenges. It could be years before biometric authentication is used widely in healthcare organizations. But biometric technology use is picking up speed, and this year may see some interesting developments. Stay tuned.

Mobile Health Security Issues To Ponder In 2016

Posted on January 11, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

In some ways, mobile health security safeguards haven’t changed much for quite some time. Making sure that tablets and phones are protected against becoming easy network intrusion points is a given. Also seeing to it that such devices use strong passwords and encrypted data exchange whenever possible is a must.

But increasingly, as mobile apps become more tightly knit with enterprise infrastructure, there are more security issues to consider. After all, we're increasingly talking about mission-critical apps which rely on ongoing access to sensitive enterprise networks. Now more than ever, enterprises must come up with strategies which control how data flows into the enterprise network. In other words, we're not just talking about locking down the end points, but also seeing to it that powerful edge devices are treated like the vulnerable, hackable gateways they are.

To date, however, there’s still not a lot of well-accepted guidance out there spelling out what steps health organizations should take to ramp up their mobile security. For example, NIST has issued its “Securing Electronic Health Records On Mobile Devices” guideline, but it’s only a few months old and remains in draft form to date.

The truth is, the healthcare industry isn’t as aware of, or prepared for, the need for mobile healthcare data security as it should be. While healthcare organizations are gradually deploying, testing and rolling out new mobile platforms, securing them isn’t being given enough attention. What’s more, clinicians aren’t being given enough training to protect their mobile devices from hacks, which leaves some extremely valuable data open to the world.

Nonetheless, there are a few core approaches which can be torqued up to help protect mobile health data this year:

  • Encryption: Encrypting data in transit wasn't invented yesterday, but it's still worth checking that your organization is actually doing it. Gregory Cave notes that data should be encrypted when communicated between the (mobile) application and the server. And he recommends that Web traffic be transmitted through a secure connection using only strong security protocols such as Secure Sockets Layer or Transport Layer Security (a minimal transport sketch follows this list). This should also include encrypting data at rest.
  • Application hardening:  Before your organization rolls out mobile applications, it's best to see to it that security defects are detected and addressed before deployment. Application hardening tools — which protect code from hackers — can help protect mobile deployments, an especially important step for software placed on machines and in locations your organization doesn't control. These tools employ techniques such as obfuscation, which hides code structure and flow within an application, making it hard for intruders to reverse engineer or tamper with the source code.
  • Training staff: Regardless of how sophisticated your security systems are, they're not going to do much good if your staff leaves the proverbial barn door open. As one security expert points out, healthcare organizations need to make staffers responsible for understanding what activities lead to breaches, or hackers will still find a toehold. "It's like installing the most sophisticated security system in the world for your house, but not teaching the family how to use it," said Grant Elliott, founder and CEO of risk management and compliance firm Ostendio.
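
To illustrate the transport advice in the encryption bullet, here is a minimal sketch assuming Python's standard library. The endpoint URL and payload fields are hypothetical, and this is a generic illustration of the recommendation rather than anything Cave supplies; the point is simply to enforce certificate verification and a TLS 1.2 floor so PHI never travels over plain HTTP or an outdated protocol.

```python
# Hypothetical sketch: send a reading from a mobile/companion app to its backend
# only over TLS, with certificate verification and a modern protocol floor.
import json
import ssl
import urllib.request

context = ssl.create_default_context()            # verifies the server's certificate
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSLv3 / TLS 1.0 / TLS 1.1

payload = json.dumps({"patient_id": "demo-123", "glucose_mg_dl": 104}).encode("utf-8")
request = urllib.request.Request(
    "https://api.example-health-app.test/v1/readings",  # made-up endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# A plain http:// URL, a bad certificate, or a downgraded protocol fails loudly
# here instead of silently exposing the data in transit.
with urllib.request.urlopen(request, context=context, timeout=10) as response:
    print(response.status)
```

Encrypting the same data at rest on the device could reuse the kind of authenticated-encryption approach sketched in the wearables post above.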

In addition to these efforts, I'd argue that staffers need to truly grasp what happens when security goes awry. Knowing that mistakes will upset some IT guy they've never met is one thing; understanding that a breach could cost millions and expose the whole organization to disrepute is a bit more memorable. Don't just teach the security protocols, teach the costs of violating them. A little drama — such as the story of the little old lady who lost her home due to PHI theft — speaks far more powerfully than facts and figures, don't you agree?

Tiny Budgets Undercut Healthcare’s Cyber Security Efforts

Posted on January 4, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

2015 was a lousy year for healthcare data security — so bad a year that IBM has dubbed it "The Year of The Healthcare Security Breach." In a recent report, Big Blue noted that nearly 100 million records were compromised during the first 10 months of the year.

Part of the reason for the growth in healthcare data breaches seems to be due to the growing value of Protected Health Information. PHI is worth 10x as much as credit card information these days, according to some estimates. It’s hardly surprising that cyber criminals are eager to rob PHI databases.

But another reason for the hacks may be — to my way of looking at things — an indefensible refusal to spend enough on cybersecurity. While the average healthcare organization spends about 3% of its IT budget on cybersecurity, it should really allocate 10%, according to HIMSS cybersecurity expert Lisa Gallagher.

If a healthcare organization has an anemic security budget, it may find it difficult to attract a senior healthcare security pro to join its team. Such professionals are costly to recruit, and command salaries in the $200K to $225K range. And unless you're a high-profile institution, the competition for such seasoned pros can be fierce. In fact, even high-profile institutions struggle to recruit security professionals.

Still, that doesn’t let healthcare organizations off the hook. In fact, the need to tighten healthcare data security is likely to grow more urgent over time, not less. Not only are data thieves after existing PHI stores, and prepared to exploit traditional network vulnerabilities, current trends are giving them new ways to crash the gates.

After all, mobile devices are increasingly being granted access to critical data assets, including PHI. Securing the mix of corporate and personal devices that might access the data, as well as any apps an organization rolls out, is not a job for the inexperienced or the unsophisticated. It takes a well-rounded infosec pro to address not only mobile vulnerabilities, but vulnerabilities in the systems that dish data to these devices.

Not only that, hospitals need to take care to secure their networks as devices such as insulin pumps and heart rate monitors become new gateways data thieves can use to attack their networks. In fact, virtually any node on the emerging Internet of Things can easily serve as a point of compromise.

No one is suggesting that healthcare organizations don’t care about security. But as many wiser heads than mine have pointed out, too many seem to base their security budget on the hope-and-pray model — as in hoping and praying that their luck will hold.

But as a professional observer and a patient, I find such an attitude to be extremely reckless. Personally, I would be quite inclined to drop any provider that allowed my information to be compromised, regardless of excuses. And spending far less on security than is appropriate leaves the barn door wide open.

I don’t know about you, readers, but I say “Not with my horses!”

Are These Types of Breaches Really Necessary?

Posted on December 28, 2015 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Over the past couple of days, I took the time to look over Verizon’s 2015 Protected Health Information Data Breach Report.  (You can get it here, though you’ll have to register.)

While it contained many interesting data points and observations — including that 90% of the industries researchers studied had seen a personal health information breach this year — the stat that stood out for me was the following. Apparently, almost half (45.5%) of PHI breaches were due to the loss or theft of assets. Meanwhile, privilege misuse and miscellaneous errors came in a distant second and third, at just over 20% of breaches each.

In case you're the type who likes all the boxes checked, the rest of the PHI breach-causing list, dubbed the "Nefarious Nine," includes "everything else" at 6.7%, point of sale (3.8%), web applications (1.9%), crimeware (1.4%), cyber-espionage (0.3%), payment card skimmers (0.1%) and denial of service at a big fat zero percent.

According to the report’s authors, lost and stolen assets have been among the most common vectors for PHI exposure for several years. This is particularly troubling given that one of the common categories of breach — theft of a laptop — involves data which was not encrypted.

If stolen or lost assets continue to be a problem year after year, why haven’t companies done more to address this problem?

In the case of firms outside of the healthcare business, it's less of a surprise, as there are fewer regulations mandating that they protect PHI. While they may have, say, employee workers' compensation data on a laptop, that isn't the core of what they do, so their security strategy probably doesn't focus on safeguarding such data.

But when it comes to healthcare organizations — especially providers — the lack of data encryption is far more puzzling.

As the report’s authors point out, it’s true that encrypting data can be risky in some situations; after all, no one wants to be fumbling with passwords, codes or biometrics if a patient’s health is at risk.

That being said, my best guess is that if a patient is in serious trouble, clinicians will be attending to them within a hospital. And in that setting, they're likely to use a connected hospital computer, not a pesky, easily-stealable laptop, tablet or phone. And even if life-saving data is stored on a portable device, why not encrypt at least some of it?

If HIPAA fears and good old common sense aren’t good enough reasons to encrypt that portable PHI, what about the cost of breaches?  According to one estimate, data breaches cost the healthcare industry $6 billion per year, and breaches cost the average healthcare organization $3.5 million per year.

Then there’s the hard-to-measure cost to a healthcare organization’s brand. Patients are becoming increasingly aware that their data might be vulnerable, and a publicly-announced breach might give them a good reason to seek care elsewhere.

Bottom line, it would be nice to see our industry take a disciplined approach to securing easily-stolen portable PHI. After years of being reminded that this is a serious issue, it's about time to institute a crackdown.

Could the Drive to Value-Based Healthcare Undermine Security?

Posted on November 27, 2015 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

As we all know, the healthcare industry's move toward value-based healthcare is forcing providers to make some big changes. In fact, a recent report by peer60 found that 64% of responding hospitals cited the coming shift to value-based reimbursement as their top challenge. Meanwhile, only 30% could say the same of improving information security, according to peer60, which recently surveyed 320 hospital leaders.

Now, the difference in concern over the two issues can be chalked up, at least in part, to the design of the survey. Obviously, there’s a good chance that a survey of CIOs would generate different results. But as the report’s authors noted, the survey might also have exposed a troublesome gap in priorities between health IT and the rest of the hospital C-suite.

It’s hardly surprising hospital leaders are focused on the life-and-death effects of a major change in payment policy. Ultimately, if a hospital can’t stay in business, protecting data won’t be an issue anymore. But if a hospital keeps its doors open, protecting patient data must be given a great deal of attention.

If there is a substantial gap between CIOs and their colleagues on security, my guess is that the reasons include the following:

  • Assuming CIOs can handle things:  Lamentable though it may be, less-savvy healthcare leaders may think of security as a tech-heavy problem that doesn’t concern them on a day-to-day level.
  • Managing by emergency:  Though they might not admit it publicly, reactive health executives may see security problems as only worth addressing when something needs fixing.
  • Fear of knowing what needs to be done:  Any intelligent, educated health exec knows that they can’t afford to let security be compromised, but they don’t want to face up to the time, money and energy it takes to do infosec right.
  • Overconfidence in existing security measures:  After approving the investment of tens or even hundreds of millions on health IT, non-tech health leaders may find it hard to believe that perfect security isn’t “built in” and complete.

I guess the upshot of all of this is that even sophisticated healthcare executives may have dysfunctional beliefs about health data security. And it’s not surprising that health leaders with limited technical backgrounds may prefer to attack problems they do understand.

Ultimately, this suggests to me that CIOs and other HIT leaders still have a lot of 'splaining to do. To do their best with security challenges, health IT execs need the support of the entire leadership team, and that will mean educating their peers on some painful realities of the trade.

After all, if security is to be an organization-wide process — not just a few patches and HIPAA training sessions — it has to be ingrained in everything employees do. And that may mean some vigorous exchanges of views on how security fosters value.

Is HIPAA Misuse Blocking Patient Use Of Their Data?

Posted on August 18, 2015 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Recently, a story in the New York Times told some troubling stories about how HIPAA misunderstandings have crept into both professional and personal settings. These included:

  • A woman getting scolded at a hospital in Boston for “very improper” speech after discussing her husband’s medical situation with a dear friend.
  • Refusal by a Pennsylvania hospital to take a daughter’s information on her mother’s medical history, citing HIPAA, despite the fact that the daughter wasn’t *requesting* any data. The woman’s mother was infirm and couldn’t share medical history — such as her drug allergy — on her own.
  • The announcement, by a minister in California, that he could no longer read the names of sick congregants due to HIPAA.

All of this is bad enough, particularly the case of the Pennsylvania hospital refusing to take information that could have protected a helpless elderly patient, but the effects of this ignorance create even greater ripples, I'd argue.

Let’s face it: our efforts to convince patients to engage with their own medical data haven’t been terribly successful as of yet. According to a study released late last year by Xerox, 64% of patients were not using patient portals, and 31% said that their doctor had never discussed portals with them.

Some of the reasons patients aren't taking advantage of the medical data available to them include ignorance and fear, I'd argue. Technophobia and a history of just "trusting the doctor" play a role as well. What's more, poring through lab results and imaging studies might seem overwhelming to patients who have never done it before.

But that's not all that's holding people back. In my opinion, the climate of fear around medical data that HIPAA misunderstandings have created is playing a major part too.

While I understand why patients have to sign acknowledgements of privacy practices and be taught what HIPAA is intended to do, this doesn’t exactly foster a climate in which patients feel like they own their data. While doctor’s offices and hospitals may not have done this deliberately, the way they administer HIPAA compliance can make medical data seem portentous, scary and dangerous, more like a bomb set to go off than a tool patients can use to manage their care.

I guess what I’m suggesting is that if providers want to see patients engaged and managing their care, they should make sure patients feel comfortable asking for access to and using that data. While some may never feel at ease digging into their test results or correcting their medical history, I believe that there’s a sizable group of patients who would respond well to a reminder that there’s power in doing so.

The truth is that while most providers now give patients the option of logging on to a portal, they typically don’t make it easy. And heaven knows even the best-trained physician office staff rarely take the time to urge patients to log on and learn.

But if providers make the effort to balance stern HIPAA paperwork with encouraging words, patients are more likely to get inspired. Sometimes, all it takes is a little nudge to get people on board with new behavior. And there’s no excuse for letting foolish misinterpretations of HIPAA prevent that from happening.