
Locking Down Clinician Wi-Fi Use

Posted on November 1, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Now that Wi-Fi-based Internet connections are available in most public spaces where clinicians might spend time, they have many additional opportunities to address emerging care issues on the road, whether they’re with their family at a mall or grabbing a burger at McDonald’s.

However, notes one author, there are many situations in which clinicians who share private patient data via Wi-Fi may be violating HIPAA rules without being aware of the risks they are taking. Not only can a doctor or nurse end up exposing private health information to the public, they can open a window into their EMR, which can compromise countless additional patients’ privacy. Like traditional texting, standard Wi-Fi offers hackers an unencrypted data stream, and that puts a clinician’s connected mobile device at risk unless they take other precautions, such as using a VPN.

According to Paul Cerrato, who writes on cybersecurity for iMedicalApps, public Wi-Fi networks are open by design. If a physician can connect to the network, a hostile actor can too, and may in turn be able to reach the physician’s device, open and view its files, and even download information to a device of their own.

It’s not surprising that physicians are tempted to use open public networks to do clinical work. After all, it’s convenient to dash off an email message regarding, say, a patient medication issue while having a quick lunch at a coffee shop. Doing so is easy and feels natural, but if the email is unsecured, that physician risks exposing the practice to a large HIPAA-related fine, as well as having its network invaded by intruders. Not only that, any HIPAA problem that arises can blacken the reputation of a practice or hospital.

What’s more, if clinicians use an unsecured public wireless network, their devices could also pick up a malware infection, which could harm both the clinician and anyone who communicates with the infected device.

Ideally, it’s probably best that physicians never use public Wi-Fi networks, given their security vulnerabilities. But if using Wi-Fi makes sense, one solution proposed by Cerrato is for physicians to access their organization’s EMR via a Citrix app, which creates a secure tunnel for information sharing.

As Cerrato points out, however, smaller practices with scant IT resources may not be able to afford deploying a secure Citrix solution. In that case, HHS recommends that such practices use a VPN to encrypt sensitive information being sent or received across the Wi-Fi network.
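For small practices taking the VPN route, even a simple guard inside a homegrown messaging or charting tool can reinforce that policy. Below is a minimal Python sketch, assuming the third-party psutil library and conventional tunnel interface names (tun, wg, utun and the like); it’s a heuristic illustration of the idea, not a substitute for a properly configured VPN client.

```python
# Heuristic sketch: refuse to transmit PHI unless a VPN-style tunnel
# interface appears to be up. Interface names vary by platform and VPN
# client, so the prefixes below are assumptions.
import psutil

VPN_PREFIXES = ("tun", "tap", "wg", "utun", "ppp")  # assumed tunnel interface names

def vpn_appears_active() -> bool:
    """Heuristic check: is any tunnel-style network interface currently up?"""
    stats = psutil.net_if_stats()  # {interface name: snicstats(isup=..., ...)}
    return any(name.startswith(VPN_PREFIXES) and s.isup for name, s in stats.items())

def send_patient_message(message: str) -> None:
    if not vpn_appears_active():
        raise RuntimeError("No VPN tunnel detected; refusing to send PHI on this network.")
    # transmit_securely(message) would go here; that helper is hypothetical
    print("VPN detected; message would go out through the encrypted tunnel.")
```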

But establishing a VPN isn’t the whole story. In addition, clinicians will want to have the data on their mobile devices encrypted, to make sure it’s not readable if their device does get hacked. This is particularly important given that some data on their mobile devices comes from mobile apps whose security may not have been vetted adequately.
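To make that concrete, here’s a toy Python sketch of at-rest encryption using the cryptography package’s Fernet recipe. The cached record and key handling are purely illustrative; on a real mobile device you would rely on the platform’s built-in storage encryption and hardware-backed keystore rather than rolling your own.

```python
# Toy illustration of encrypting locally cached patient data at rest,
# using the third-party "cryptography" package. Key management is the
# hard part and is deliberately simplified here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, keep this in a secure keystore
cipher = Fernet(key)

record = b"Patient: Jane Doe; note: example medication question"  # hypothetical cached data
token = cipher.encrypt(record)   # the ciphertext is what gets written to local storage

# Only code holding the key can recover the plaintext later
assert cipher.decrypt(token) == record
```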

Ideally, managing security for clinician devices will be integrated with a larger mobile device management strategy that also addresses BYOD, identity and access management issues. But for smaller organizations (notably small medical groups with no full-time IT manager on staff), simply making sure that clinicians’ exchange of patient information over Wi-Fi networks is secured is a good start.

Securing IoT Devices Calls For New Ways Of Doing Business

Posted on June 8, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

While new Internet-connected devices can expose healthcare organizations to security threats in much the same way as a desktop PC or laptop, they aren’t always procured, monitored or maintained the same way. This can lead to potentially major ePHI breaches, as one renowned health system recently found out.

According to a piece in SearchHealthIT, executives at Intermountain Healthcare recently went through something of a panic when a connected audiology device went missing. According to Intermountain CISO Karl West, the device had come into the hospital via a different channel than most of the system’s other devices. For that reason, West told the site, his team couldn’t verify what operating system the audiology device ran, how it had come into the hospital or what its lifecycle management status was.

Not only did Intermountain lack key configuration and operating system data on the device, they didn’t know how to prevent the exposure of the patient information it had stored on board. And because that data persisted over time, the audiology device held information on multiple patients — in fact, every patient who had used it. When the device was eventually located, it was discovered to hold two and a half years’ worth of stored patient data.

After this incident, West realized that Intermountain needed to improve on how it managed Internet of Things devices. Specifically, the team decided that simply taking inventory of all devices and applications was far from sufficient to protect the security of IoT medical devices.

To prevent such problems from occurring again, West and his team created a data dictionary, designed to let them know where data originates, how it moves and where it resides. The group is also documenting what each IoT device’s transmission capabilities are, West told SearchHealthIT.
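Intermountain hasn’t published its schema, but to illustrate the idea, a single entry in such a data dictionary might capture something like the following. The field names in this Python sketch are assumptions drawn from the concerns West describes (origin, movement, residence and transmission capability), not the health system’s actual design.

```python
# Simplified sketch of one entry in an IoT device data dictionary.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConnectedDeviceRecord:
    device_id: str
    device_type: str                  # e.g. "audiology analyzer"
    operating_system: str             # "unknown" is exactly the gap that caused the scare
    procurement_channel: str          # how the device entered the organization
    data_origin: str                  # where its data is first created
    data_resides: List[str]           # every place the data persists
    transmits_via: List[str] = field(default_factory=list)  # e.g. ["Wi-Fi", "Bluetooth"]
    stores_phi_locally: bool = False
    retention_period_days: int = 0    # how long PHI persists on the device

inventory = [
    ConnectedDeviceRecord(
        device_id="AUD-0042",                       # hypothetical identifier
        device_type="audiology analyzer",
        operating_system="unknown",
        procurement_channel="clinical department purchase",
        data_origin="on-device testing sessions",
        data_resides=["device local storage"],
        transmits_via=["Wi-Fi"],
        stores_phi_locally=True,
        retention_period_days=900,                  # roughly the 2.5 years of data found
    )
]
```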

A huge vulnerability

Unfortunately, Intermountain isn’t the first and won’t be the last health system to face problems in managing IoT device security. Such devices can be a huge vulnerability, as they are seldom documented and maintained in the same way that traditional network devices are. In fact, this lack of oversight is almost a given when you consider where they come from.

Sure, some connected devices arrive via traditional medical device channels — connected infusion pumps, for example — but a growing number of network-connected devices are coming through consumer channels. Though the problem is well understood these days, healthcare organizations continue to grapple with security issues created by staff-owned smartphones and tablets.

The next wave of smart, connected devices may pose even bigger problems. While the operating systems running mobile devices are well understood, and can be maintained and secured using enterprise-level processes, new connected devices are throwing the entire healthcare industry a curveball. After all, the smart watch a patient brings into your facility doesn’t turn up on your procurement schedule, may use nonstandard software, and its operating system and applications may not be patched. And that’s just one example.

Redesigning processes

While there’s no single solution to this rapidly-growing problem, one thing seems to be clear. As the Intermountain example demonstrates, healthcare organizations must redefine their processes for tracking and securing devices in the face of the IoT security threat.

First and foremost, medical device teams and the IT department must come together to create a comprehensive connected device strategy. Both teams need to know what devices are using the network, how and why. And whatever policy is set for managing IoT devices has to embrace everyone. This is no time for a turf war — it’s time to hunker down and manage this serious threat.

Efforts like Intermountain’s may not work for every organization, but the key is to take a step forward. As the number of IoT network nodes grows toward a nearly infinite level, healthcare organizations will have to rethink their entire philosophy on how and why networked devices should interact. Otherwise, a catastrophic breach is nearly guaranteed.

Patient Portal Security Is A Tricky Issue

Posted on April 25, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Much of the discussion around securing health data on computers revolves around enterprise networks, particularly internal devices. But it doesn’t hurt to look elsewhere in assessing your overall vulnerabilities. And unfortunately, that includes gaps that can be exposed by patients, whose security practices you can’t control.

One vulnerability that gets too little attention is the potential for a cyberattack that comes in through the provider’s patient portal, according to security consultant Keith Fricke of tw-Security in Overland Park, Kan. Fricke, who spoke with Information Management, noted that cyber criminals can access portal data relatively easily.

For example, they can insert malicious code into frequently visited websites, which the patient may inadvertently download. Then, if your patient’s device or computer isn’t secure, you may have big problems. When the patient accesses a hospital or clinic’s patient portal, the attacker can conceivably get access to the health data available there.

Not only does such an attack give the criminal access to the portal, it may also offer them access to many other patients’ computers, and the opportunity to send malware to those computers. So one patient’s security breach can become a source of infection for countless other patients.

When patients access the portal via mobile device, it raises another set of security issues, as the threat to such devices is growing over time. In a recent survey by Ponemon Institute and CounterTack, 80% of respondents reported that their mobile endpoints had been the target of malware over the past year. And there’s little doubt that attacks via mobile device will grow more sophisticated over time.

Given how predictable such vulnerabilities are, you’d think that it would be fairly easy to lock the portals down. But the truth is, patient portals have to strike a particularly delicate balance between usability and security. While you can demand almost anything from employees, you don’t want to frustrate patients, who may become discouraged if too much is expected from them when they log in. And if they aren’t going to use it, why build a patient portal at all?

For example, requiring a patient to change their password or login data frequently may simply be too taxing for users to handle. Other barriers include demanding that a patient use only one specific browser to access the portal, or requiring them to use a string of digits rather than an alphanumeric username they can remember. And insisting that a patient use a long, computer-generated password can be a hassle that patients won’t tolerate.
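One way to ease that tension, in line with NIST’s more recent password guidance, is to favor long, memorable passphrases and a screen against common passwords over forced composition rules and frequent resets. Here’s a minimal Python sketch; the length threshold and the tiny common-password list are illustrative only.

```python
# Hedged sketch of a more patient-friendly portal password check:
# favor length plus a common-password screen over arbitrary complexity rules.
COMMON_PASSWORDS = {"password", "12345678", "qwerty123", "letmein1"}  # tiny stand-in list

def acceptable_portal_password(password: str, minimum_length: int = 12) -> bool:
    """Accept long, memorable passphrases; reject only the obviously weak."""
    if len(password) < minimum_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

# A memorable passphrase passes; a short "complex" string does not.
assert acceptable_portal_password("purple otter reads charts")
assert not acceptable_portal_password("Xk7!q")
```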

At this point, it would be great if I could say “here’s the perfect solution to this problem.” But the truth is, as you already know, that there’s no one solution that will work for every provider and every IT department. That being said, in looking at this issue, I do get the sense that providers and IT execs spend too little time on user-testing their portals. There’s lots of room for improvement there.

It seems to me that to strike the right balance between portal security and usability, it makes more sense to bring user feedback into the equation as early in the game as possible. That way, at least, you’ll be making informed choices when you establish your security protocols. Otherwise, you may end up with a white elephant, and nobody wants to see that happen.

Security Concerns Threaten Mobile Health App Deployment

Posted on January 26, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Healthcare organizations won’t get much out of deploying mobile apps if consumers won’t use them. And if consumers are afraid that their personal data will be stolen, they’ve got a reason not to use your apps. So the fact that both consumers and HIT execs are having what I’d deem a crisis of confidence over mHealth app security isn’t a good sign for the current crop of mobile health initiatives.

According to a new study by security vendor Arxan, which polled 815 consumers and 268 IT decision-makers, more than half of consumer respondents who use mobile health apps expect their health apps to be hacked in the next six months.

These concerns could have serious implications for healthcare organizations, as 76% of health app users surveyed said they would change providers if they became aware that the provider’s apps weren’t secure. And perhaps even more significantly, 80% of consumer health app users told Arxan that they’d switch to other providers if they found out that the apps an alternate provider offered were better secured. In other words, consumer perceptions of a provider’s health app security aren’t just abstract fears — they’re actually starting to affect patients’ healthcare decision making.

Perhaps you’re telling yourself that your own apps aren’t terribly exposed. But don’t be so sure. When Arxan tested a batch of 71 popular mobile health apps for security vulnerabilities, 86% were shown to have a minimum of two OWASP Mobile Top 10 Risks. The researchers found that vulnerable apps could be tampered with and reverse-engineered, as well as compromised to expose sensitive health information. Easily executed hacks could also force critical health apps to malfunction, Arxan researchers concluded.
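Not every defense has to be exotic, either. One basic anti-tampering control is verifying a distribution artifact against a known-good digest before it ever reaches clinicians’ devices; the Python sketch below uses a hypothetical placeholder digest, and commercial runtime app-protection tools go considerably further.

```python
# Basic integrity check: compare a downloaded app package against a
# published SHA-256 digest. The digest below is a placeholder.
import hashlib

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

KNOWN_GOOD_DIGEST = "0" * 64  # placeholder; a real value would come from the vendor

def package_is_untampered(path: str) -> bool:
    """True only if the package matches its published digest."""
    return sha256_of_file(path) == KNOWN_GOOD_DIGEST
```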

The following data also concerned me. Of the apps tested, 19 had been approved by the FDA and 15 by the UK National Health Service. And at least where the FDA is concerned, my assumption would be that FDA-tested apps were more secure than non-approved ones. But Arxan’s research team found that both FDA and National Health Service-blessed apps were among the most vulnerable of all the apps studied.

In truth, I’m not incredibly surprised that health IT leaders have some work to do in securing mobile health apps. After all, mobile health app security is evolving, as the form and function of mHealth apps evolve. In particular, as I’ve noted elsewhere, mobile health apps are becoming more tightly integrated with enterprise infrastructure, which takes the need for thoughtful security precautions to a new level.

But guidelines for mobile health security are emerging. For example, in the summer of last year, the National Institute of Standards and Technology released a draft of its mobile health cybersecurity guidance, “Securing Electronic Records on Mobile Devices” — complete with detailed architecture. I’d also wager that more mHealth security standards will emerge this year.

In the meantime, it’s worth remembering that patients are paying close attention to health app security, and that they’re unlikely to give your organization a pass if they’re hacked. While security has always been a high-stakes issue, the stakes have gotten even higher.

Biometric Use Set To Grow In Healthcare

Posted on January 15, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

I don’t know about you, but until recently I thought of biometrics as almost a toy technology, something you’d imagine a fictional spy like James Bond circumventing (through pure manliness) when entering the archenemy’s hideout. Or perhaps retinal or fingerprint scans would protect Batman’s lair.

But today, in 2016, biometric apps are far from fodder for mythic spies. The price of fingerprint scan-based technology has fallen to nearly zero, with vendors like Apple offering fingerprint-based security as a standard part of the iPhone’s iOS operating system. Another free biometric security option comes courtesy of Intel’s True Key app, which allows you to access encrypted app data by scanning and recognizing your facial features. And these are just trivial examples. Biometric technologies, in short, have become powerful, usable and relatively affordable — making them a realistic option for some healthcare security problems.

If none of this suggests to you that the healthcare industry needs to adopt biometrics, you may have a beef with Raymond Aller, MD, director of informatics at the University of Southern California. In an interview with Healthcare IT News, Dr. Aller argues that our current system of text-based patient identification is actually dangerous, and puts patients at risk of improper treatments and even death. He sees biometric technologies as a badly needed, precise means of patient identification.

What’s more, biometrics can be linked with patients’ EMR data, making sure the right history is attached to the right person. One health system, Novant Health, uses technology that registers a patient’s fingerprints, veins and face at enrollment. Another vendor is developing software that will notify the patient’s health insurer every time that patient arrives and leaves, steps intended to make sure providers can’t submit fraudulent bills for care that was never delivered.

As intriguing as these possibilities are, there are certainly some issues holding back the use of biometric approaches in healthcare. Some widely deployed options, such as Apple’s Touch ID, have proven vulnerable to spoofing. Not only that, storing and managing biometric templates securely is more challenging than it seems, researchers note. What’s more, hackers are beginning to target consumer-focused fingerprint sensors, and are likely to seek access to other forms of biometric data.

Fortunately, biometric security solutions like template protection and biocryptography are becoming more mature. As biometric technology grows more sophisticated, patients will be able to use bio-data to safely access their medical records and also pay their bills. For example, MasterCard is exploring biometric authentication for online payments, using biometric data as a password replacement. MasterCard Identity Check allows users to authenticate transactions via video selfie or via fingerprint scanning.

As readers might guess from skimming the surface of biometric security, it comes with its own unique security challenges. It could be years before biometric authentication is used widely in healthcare organizations. But biometric technology use is picking up speed, and this year may see some interesting developments. Stay tuned.

An Important Look at HIPAA Policies For BYOD

Posted on May 11, 2015 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Today I stumbled across an article which I thought readers of this blog would find noteworthy. In the article, Art Gross, president and CEO at HIPAA Secure Now!, made an important point about BYOD policies. He notes that while much of today’s corporate computing is done on mobile devices such as smartphones, laptops and tablets — most of which access their enterprise’s e-mail, network and data — HIPAA offers no advice as to how to bring those devices into compliance.

Given that most of the spectacular HIPAA breaches in recent years have arisen from the theft of laptops, and are likely to proceed to theft of tablet and smartphone data, it seems strange that HHS has done nothing to update the rule to address the increasing use of mobile devices since it was drafted in 2003. As Gross rightly asks, “If the HIPAA Security Rule doesn’t mention mobile devices, laptops, smartphones, email or texting how do organizations know what is required to protect these devices?”

Well, Gross’ peers have given the issue some thought, and here are some suggestions from law firm DLA Piper on how to dissect the issues involved. BYOD challenges under HIPAA, notes author Peter McLaughlin, include:

* Control: To maintain protection of PHI, providers need to control many layers of computing technology, including network configuration, operating systems, device security and transmissions outside the firewall. McLaughlin notes that Android OS-based devices pose a particular challenge, as the system is often modified to meet hardware needs. And in both iOS and Android environments, IT administrators must also manage users’ tendency to connect to their preferred cloud services and download their own apps. Otherwise, a large volume of protected health data can end up outside the firewall.

* Compliance: Healthcare organizations and their business associates must take care to meet HIPAA mandates regardless of the technology they use. But securing even basic information, much less regulated data, is far more difficult on employee-owned devices than on company-owned devices governed by restrictive rules. (A minimal example of the kind of access check involved is sketched after this list.)

* Privacy: When enterprises let employees use their own device to do company business, it’s highly likely that the employee will feel entitled to use the device as they see fit. However, in reality, McLaughlin suggests, employees don’t really have full, private control of their devices, in part because the company policy usually requires a remote wipe of all data when the device gets lost. Also, employees might find that their device’s data becomes discoverable if the data involved is relevant to litigation.
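To make the compliance point a bit more concrete, here is the kind of enrollment gate an MDM tool might apply to a personally owned device before it touches email or PHI, sketched in Python. The specific checks are illustrative assumptions, not a statement of what HIPAA itself requires.

```python
# Hedged sketch of a BYOD enrollment gate: baseline safeguards that an MDM
# policy might require before a personal device can access PHI.
from dataclasses import dataclass

@dataclass
class ByodDevice:
    owner: str
    os_version: str
    storage_encrypted: bool
    passcode_set: bool
    remote_wipe_enrolled: bool
    jailbroken_or_rooted: bool

def may_access_phi(device: ByodDevice) -> bool:
    """Allow access only if baseline safeguards are in place."""
    return (
        device.storage_encrypted
        and device.passcode_set
        and device.remote_wipe_enrolled
        and not device.jailbroken_or_rooted
    )

# Example: a rooted phone is blocked even if otherwise compliant.
print(may_access_phi(ByodDevice("Dr. Example", "13.0", True, True, True, True)))  # False
```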

So, readers, tell us how you’re walking the tightrope between giving employees who BYOD some autonomy, and protecting private, HIPAA-protected information.  Are you comfortable with the policies you have in place?

Full Disclosure: HIPAA Secure Now! is an advertiser on this website.

Wearables And Mobile Apps Pose New Data Security Risks

Posted on December 30, 2014 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

In the early days of mobile health apps and wearable medical devices, providers weren’t sure they could cope with yet another data stream. But as the uptake of these apps and devices has grown over the last two years, at a rate surpassing virtually everyone’s expectations, providers and payers both have had to plan for a day when wearable and smartphone app data become part of the standard dataflow. The potentially billion-dollar question is whether they can figure out when, where and how they need to secure such data.

To do that, providers are going to have to face up to new security risks that they haven’t faced before, as well as doing a good job of educating patients on when such data is HIPAA-protected and when it isn’t. While I am most assuredly not an attorney, wiser legal heads than mine have reported that once wearable/app data is used by providers, it’s protected by HIPAA safeguards, but in other situations — such as when it’s gathered by employers or payers — it may not be protected.

For an example of the gray areas that bedevil mobile health data security, consider the case of upstart health insurance provider Oscar Health, which recently offered free Misfit Flash bands to its members. The company’s leaders have promised members who use the bands that if their collected activity numbers look good, they’ll get roughly $240 off their annual premium. And they’ve promised that the data won’t be used for diagnostics or any other medical purpose. This promise may be worthless, however, if they are still legally free to resell the data to, say, pharmaceutical companies.

Logical and physical security

Meanwhile, even if providers, payers and employers are very cautious about violating patients’ privacy, their careful policies will be worth little if they don’t take a look at managing the logical and physical security risks inherent in passing around so much data across multiple Wi-Fi, 4G and corporate networks.

While it’s not yet clear what the real vulnerabilities are in shipping such data from place to place, it’s clear that new security holes will pop up as smartphone and wearable health devices ramp up to sharing data on a massive scale. In an industry that is still struggling with BYOD security and with corralling the data facilities already work with on a daily basis, protecting and appropriately segregating connected health data is going to pose an even bigger challenge.

After all, every time you begin to rely on a new network model involving new data handoff patterns — in this case, medical device or wearable data streaming to smartphones across Wi-Fi networks, smartphones forwarding data to providers via 4G LTE cellular protocols, and providers processing the data on corporate networks — there are bound to be a host of security issues we haven’t found yet.

Cybersecurity problems could lead to mHealth setbacks

Worst of all, hospitals’ and medical practices’ cyber security protocols are quite weak (as researcher after researcher has pointed out of late). Particularly given how valuable medical identity data has become, healthcare organizations need to work harder to protect their cyber assets and see to it that they’ve at least caught the obvious holes.

But to date, if our experiences with medical device security are any indication, not only are hospitals and practices vulnerable to standard cyber hacks on network assets, they’re also finding it difficult to protect the core medical devices needed to diagnose and treat patients, such as MRI machines, infusion pumps and even, in theory, personal gear like pacemakers and insulin pumps. It doesn’t inspire much confidence that the Conficker worm, which attacked medical devices across the world several years ago, is still alive and kicking, and in fact accounted for 31% of the year’s top security threats.

If malevolent outsiders mount attacks on the flow of connected health data, and succeed at stealing it, not only is it a brand-new headache for healthcare IT administrators, it could create a crisis of confidence among mHealth stakeholders. In other words, while patients, providers, payers, employers and even pharmaceutical companies seem comfortable with the idea of tapping digital health data, major hacks into that data could slow the progress of such solutions considerably. Let’s hope those who focus on health IT security take the threat to wearables and smartphone health app data seriously going into 2015.

HIPAA Privacy Infographic

Posted on November 4, 2014 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Caradigm, a population health company, recently sent me this HIPAA Privacy infographic. As a sucker for infographics, I had to share. While related to HIPAA, the BYOD data at the top of the infographic certainly paints an important picture for healthcare IT administrators. What data stands out to you?

Privacy Breaches

Data Sources:
http://www.arubanetworks.com/pdf/solutions/HIMSSSurvey_2012.pdf
http://www.pcworld.com/article/250642/85_of_hospitals_embrace_byod_survey_shows.html
http://apps.himss.org/content/files/FINALThirdAnnualMobileTechnologySurvey.pdf
“Fourth Annual Benchmark Study on Patient Privacy and Data Security.” Ponemon Institute. 12 March 2014.
http://www.redspin.com/docs/Redspin-2013-Breach-Report-Protected-Health-Information-PHI.pdf
http://www.fiercehealthit.com/story/ocr-levies-2-million-hipaa-fines-stolen-laptops/2014-04-23
http://www.fiercehealthit.com/story/boston-teaching-hospital-fined-15m-ephi-data-breach/2012-09-18
http://blogs.wsj.com/cio/2014/05/09/patient-data-leak-leads-to-largest-health-privacy-law-settlement/
http://www.nytimes.com/2011/09/09/us/09breach.html?pagewanted=all&_r=0