
Locking Down Clinician Wi-Fi Use

Posted on November 1, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Now that Wi-Fi-based Internet connections are available in most public spaces where clinicians might spend time, they have many additional opportunities to address emerging care issues on the road, be it with their family at a mall or grabbing a burger at McDonald’s.

However, notes one author, there are many situations in which clinicians who share private patient data via Wi-Fi may be violating HIPAA rules, though they may not be aware of the risks they are taking. Not only can a doctor or nurse end up exposing private health information to the public, they can open a window into their EMR, compromising the privacy of countless additional patients. Like traditional texting, standard Wi-Fi offers hackers an unencrypted data stream, and that puts a clinician’s connected mobile device at risk unless they take other precautions, such as using a VPN.

According to Paul Cerrato, who writes on cybersecurity for iMedicalApps, Wi-Fi networks are open by design. If a physician can connect to the network, hostile actors can connect too, and in turn reach the physician’s device, which would allow them to open and view files, and even download information to their own devices.

It’s not surprising that physicians are tempted to use open public networks to do clinical work. After all, it’s convenient for them to dash off an email message regarding, say, a patient medication issue while having a quick lunch at a coffee shop. Doing so is easy and feels natural, but if the email is unsecured, that physician risks exposing the practice to a large HIPAA-related fine, as well as having its network invaded by intruders. Not only that, any HIPAA problem that arises can blacken the reputation of a practice or hospital.
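
To make the risk concrete: transport encryption is exactly what an unsecured email lacks. Below is a minimal Python sketch of sending a message only over a TLS-upgraded SMTP connection; the host name, port and credential handling are hypothetical placeholders, not a description of any particular practice’s mail setup.

```python
import smtplib
import ssl
from email.message import EmailMessage

SMTP_HOST = "mail.example-practice.com"  # hypothetical server, for illustration
SMTP_PORT = 587                          # submission port; STARTTLS upgrades it

def send_secure_email(sender: str, recipient: str, subject: str, body: str) -> None:
    """Refuse to send unless the connection can be upgraded to TLS."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
    msg.set_content(body)

    context = ssl.create_default_context()  # verifies the server certificate
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls(context=context)     # raises SMTPException if TLS fails
        server.login(sender, "password-from-a-secrets-store")  # placeholder
        server.send_message(msg)
```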

What’s more, if clinicians use an unsecured public wireless network, their device could also acquire a malware infection which could cause harm to both the clinician and those who communicate with their device.

Ideally, it’s probably best that physicians never use public Wi-Fi networks, given their security vulnerabilities. But if using Wi-Fi makes sense, one solution proposed by Cerrato is for physicians to access their organization’s EMR via a Citrix app, which creates a secure tunnel for information sharing.

As Cerrato points out, however, smaller practices with scant IT resources may not be able to afford deploying a secure Citrix solution. In that case, HHS recommends that such practices use a VPN to encrypt sensitive information being sent or received across the Wi-Fi network.

But establishing a VPN isn’t the whole story. In addition, clinicians will want to have the data on their mobile devices encrypted, to make sure it’s not readable if their device does get hacked. This is particularly important given that some data on their mobile devices comes from mobile apps whose security may not have been vetted adequately.
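
To show what encryption at rest looks like in miniature, here’s a sketch using the Fernet recipe (authenticated symmetric encryption) from Python’s “cryptography” package, which I’m assuming is available. On a real phone the key would live in a hardware-backed keystore such as the iOS Keychain or Android Keystore, not in application code.

```python
from cryptography.fernet import Fernet

def encrypt_record(plaintext: bytes, key: bytes) -> bytes:
    """Authenticated encryption: tampering is detected at decrypt time."""
    return Fernet(key).encrypt(plaintext)

def decrypt_record(token: bytes, key: bytes) -> bytes:
    return Fernet(key).decrypt(token)

key = Fernet.generate_key()  # in practice, held in a hardware-backed keystore
token = encrypt_record(b"Pt. note: increase metformin to 1000 mg BID", key)
assert decrypt_record(token, key).startswith(b"Pt. note")
```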

Ideally, managing security for clinician devices will be integrated with a larger mobile device management strategy that also addresses BYOD, identity and access management issues. But for smaller organizations (notably small medical groups with no full-time IT manager on staff) beginning by making sure that the exchange of patient information by clinicians on Wi-Fi networks is secured is a good start.

KPMG: Most Business Associates Not Ready For Security Standards

Posted on October 17, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A new study by consulting firm KPMG has concluded that two-thirds of business associates aren’t completely ready to step up to industry demands for protecting patient health information. Specifically, the majority of business associates don’t seem to be ready to meet HITRUST standards for securing protected health information. Plus, it’s worth noting that HITRUST certification doesn’t mean your organization is HIPAA compliant or protected from a breach. It’s just a first step, and many aren’t even taking it.

HITRUST has established a Common Security Framework which is used by healthcare organizations (as well as others that create, access, store or exchange sensitive and/or regulated data). The CSF includes a set of controls designed to harmonize the requirements of multiple regulations and standards.

According to KPMG’s Emily Frolick, third-party risk and assurance leader for KPMG’s healthcare practice, a growing number of healthcare organizations are asking their business associates to obtain a HITRUST CSF Certification or pass an SOC 2 + HITRUST CSF examination to demonstrate that they are making a good-faith effort to protect patient information. The CSF assessment is an internal control-based approach allowing organizations such as business associates to assess and demonstrate the measures they are taking to protect healthcare data.

To see if vendors targeting the healthcare industry seemed capable of meeting these standards, KPMG surveyed 600 professionals in this category to determine their organization’s security status. The survey found that half of those responding weren’t ready for HITRUST examination or certification, while 17.4% were planning for the CSF assessment.

When asked how they were progressing toward meeting HITRUST CSF requirements, just 7% said they were completely ready. Meanwhile, 8% said their organization was well along in its implementation process, and 17.4% said they were in the early stages of CSF implementation.

One of the biggest barriers to CSF readiness seems to be having adequate staff in place, ranking ahead of cultural, technological and financial concerns, KPMG found. When asked whether they had the staff in place to meet the standard, 53% said they did, but 47% said they did not have “the right staff with the right level of skills to execute against the HITRUST CSF.” That being said, 27% said all four factors were at issue. (Interestingly, 23% said “none of the above” posed barriers to CSF readiness.)

Readers won’t be surprised to learn that KPMG has reason to encourage vendors to seek the HITRUST cert and examination – specifically, that it works as a HITRUST Qualified CSF Assessor for healthcare organizations. Also, KPMG works with very large organizations which need to establish high levels of structure in how they evaluate their health data security measures. Hopefully this means they go well beyond what HITRUST requires.

Nonetheless, even if you work with a relatively small healthcare organization that doesn’t have the resources to engage in obtaining formal healthcare security certifications, this discussion serves as a good reminder. Particularly given that many breaches take place due to slips by business associates, it doesn’t hurt to take a close look at their security practices now and then. Even asking them some commonsense questions about how they and their contractors handle data is a good idea. After all, even if business associates cause a breach to your data, you still have to explain the breach to your patients.

A Look At Vendor IoT Security And Vulnerability Issues

Posted on October 5, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Much of the time, when we discuss the Internet of Things, we’re looking at issues from an end-user perspective.  We talk about the potential of IoT options like mobile medical applications and wearable devices, and ponder how to connect smart devices with nodes like these to offer next-generation care. Though we’re only just beginning to explore such networking models, the possibilities seem nearly infinite.

That being said, most of the responsibility for enabling and securing these devices still lies with the manufacturers, as healthcare networks typically don’t integrate fully with IoT devices as of yet.

So I was intrigued to find a recent article in Dark Reading which lays out some security considerations manufacturers of IoT devices should keep in mind. Not only do the suggestions give you an idea of how vendors should be thinking about vulnerabilities, they also offer some useful insights for healthcare organizations.

Security researcher Lysa Myers offers IoT device-makers several recommendations to consider, including the following:

  • Notify users of any changes to device features. In fact, it may make sense to remind them repeatedly of significant changes, or they may simply ignore them out of habit.
  • Put a protocol in place for handling vulnerability reports, and display your vulnerability disclosure policy prominently on your website. Ideally, Myers notes, makers of IoT medical devices should send vulnerability reports to the FDA.
  • When determining how to handle a vulnerability issue, let the most qualified person decide what should happen. In the case of automated medical diagnosis, for example, the right person would probably be a doctor.
  • Make it quick and easy to update IoT device software when you find an error. Also, make it simple for customers to spot fraudulent updates.
  • Create an audit log for all devices, even those that might seem too mundane to interest criminals, as even the least important of devices can help criminals launch a DDoS attack or spam campaign. (A minimal sketch of such a log appears after this list.)
  • See to it that users can tell whether changes to an IoT device’s software were made by the authorized user or a designated representative rather than by a cybercriminal or other unauthorized person.
  • Given that many IoT devices require cloud-based services to operate, it’s important to see that end users aren’t dropped abruptly with no cloud alternative. Manufacturers should give users time to transition their service if discontinuing a device, going out of business or otherwise ending support for their own cloud-based option.
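
On the audit-log recommendation, tamper evidence can be achieved with surprisingly little machinery. The plain-Python sketch below (the device and user names are made up) chains each entry to the hash of the previous one, so deleting or editing an entry breaks every digest that follows it.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AuditEntry:
    device_id: str
    actor: str
    event: str
    timestamp: float
    prev_hash: str  # chains this entry to the one before it

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

log = []

def append_event(device_id: str, actor: str, event: str) -> None:
    prev = log[-1].digest() if log else "genesis"
    log.append(AuditEntry(device_id, actor, event, time.time(), prev))

append_event("infusion-pump-0042", "nurse.jdoe", "firmware_update_applied")
append_event("infusion-pump-0042", "unknown", "config_read")
# Editing or deleting an earlier entry changes its digest and breaks
# the prev_hash chain of every entry after it.
```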

If we take a high-level look at these recommendations, there are a few common themes to be considered:

Awareness:  Particularly in the case of IoT devices, it’s critical to raise awareness among both technical staffers and users of changes, both in features and security configurations.

Protection:  It’s becoming more important every day to protect IoT devices from attacks, and see to it that they are configured properly to avoid security and continuity failures. Also, see to it that these devices are protected from outages caused by vendor issues.

Monitoring:  Health IT leaders should find ways to integrate IoT devices into their monitoring routine, tracking their behavior, the state of security updates to their software and any suspicious user activity.

As the article suggests, IoT device-makers probably need to play a large role in helping healthcare organizations secure these devices. But clearly, healthcare organizations need to do their part if they hope to maintain these devices successfully as health IT models change.

Security and Privacy Are Pushing Archiving of Legacy EHR Systems

Posted on September 21, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

In a recent McAfee Labs Threats Report, they said that “On average, a company detects 17 data loss incidents per day.” That stat is almost too hard to comprehend. No doubt it makes HIPAA compliance officers’ heads spin.

What’s even more disturbing from a healthcare perspective is that the report identifies hospitals as easy targets for ransomware and notes that the attacks are relatively unsophisticated. Plus, one of the biggest healthcare security vulnerabilities is legacy systems. This is no surprise to me, since I know so many healthcare organizations that set aside, forget about, or de-prioritize security when it comes to legacy systems. Legacy system security is the ticking time bomb of HIPAA compliance for most healthcare organizations.

In a recent EHR archiving infographic and archival whitepaper, Galen Healthcare Solutions highlighted that “50% of health systems are projected to be on second-generation technology by 2020.” From a technology perspective, we’re all saying that it’s about time we shift to next generation technology in healthcare. However, from a security and privacy perspective, this move is really scary. This means that 50% of health systems are going to have to secure legacy healthcare technology. If you take into account smaller IT systems, 100% of health systems have to manage (and secure) legacy technology.

Unlike in other industries, where you can simply decommission legacy systems, in healthcare Federal and State laws require retention of health data for lengthy periods of time. Galen Healthcare Solutions’ infographic offered this great chart to illustrate legacy healthcare system retention requirements across the country:
[Infographic: healthcare legacy system retention requirements]

Every healthcare CIO better have a solid strategy for how they’re going to deal with legacy EHR and other health IT systems. This includes ensuring easy access to legacy data along with ensuring that the legacy system is secure.

While many health systems used to leave their legacy systems running off in a corner of their data center or on a random desk in the hospital, I’m seeing more and more healthcare organizations consolidating their EHR and health IT systems into some sort of healthcare data archive. Galen Healthcare Solutions has put together a really impressive whitepaper that dives into all the details associated with healthcare data archives.

There are a lot of advantages to healthcare data archives. An archive retains the data to meet record retention laws, provides easy access to the data for end users, and simplifies the security process, since you then only have to secure one health data archive instead of multiple legacy systems. While some think that EHR data archiving is expensive, it turns out that the ROI is much better than you’d expect when you factor in the maintenance costs of legacy systems, together with the security risks of these outdated systems and the other compliance and access issues that come with them.

I have no doubt that as EHR vendors and health IT systems continue consolidating, we’re going to have an explosion of legacy EHR systems that need to be managed and dealt with by every healthcare organization. Those organizations that treat this lightly will likely pay the price when their legacy systems are breached and their organization is stuck in the news for all the wrong reasons.

Galen Healthcare Solutions is a sponsor of the Tackling EHR & EMR Transition Series of blog posts on Hospital EMR and EHR.

An Alternate Way Of Authenticating Patients

Posted on July 5, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Lately, I’ve been experimenting with a security app I downloaded to my Android phone. The app, True Key by Intel Security, allows you to log in by presenting your face for a scan or using your fingerprint. Once inside the app, you can access your preferred apps with a single click, as it stores your user name and passwords securely. Next, I simplified things further by downloading the app to my laptop and tablet, which synchs up whatever access info I enter across all devices.

From what I can see, Intel is positioning this as a direct-to-consumer play. The True Key documentation describes the app as a tool non-techies can use to access sites easily, store passwords securely and visit their favorite sites across all of their devices without re-entering authentication data. But I’m intrigued by the app’s potential for enterprise healthcare security access control.

Right now, there are serious flaws in the way application access is managed. As things stand, authentication information is usually stored in the same network infrastructure as the applications themselves, at least on a high-level basis. So the process goes like this, more or less: Untrusted device uses untrusted app to access a secure system. The secure system requests credentials from the device user, verifies them against an ID/PW database and if they are correct, logs them in.

Of course, there are alternatives to this approach, ranging from biometric-only access to instantly-generated, always-unique passwords, but few organizations have the resources to maintain super-advanced access protocols. So in reality, most enterprises have to firewall up their security and authentication databases and pray that those resources don’t get hacked. Theoretically, institutions might be able to create another hacking speed bump by storing authentication information in the cloud, but that obviously raises a host of additional security questions.

So here’s an idea. What if health IT organizations demanded that users install biometrically-locked apps like True Key on their devices? Then, enterprise HIT software could authenticate users at the device level – surely a possibility given that devices have unique IDs – and let users maintain password security at their end. That way, if an enterprise system was hacked, the attacker could gain access to device information, but wouldn’t have immediate access to a massive ID and PW database that gave them access to all system resources.
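
Here’s a rough sketch of how that device-level handshake could work, using public-key signatures from Python’s “cryptography” package. To be clear, this reflects nothing about True Key’s actual internals; the point is simply that the server stores only public keys, so a stolen server database contains no credential an attacker can replay.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On the device: a key pair generated once, unlocked locally by biometrics.
device_key = Ed25519PrivateKey.generate()

# On the server: register only the public key against a device ID.
registry = {"device-1234": device_key.public_key()}

def login(device_id: str) -> bool:
    challenge = os.urandom(32)              # server-issued, single-use nonce
    signature = device_key.sign(challenge)  # device signs after biometric unlock
    try:
        registry[device_id].verify(signature, challenge)
        return True
    except (KeyError, InvalidSignature):
        return False

assert login("device-1234")
```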

What I’m getting at here is that I believe healthcare organizations should maintain relationships with patients (as represented by their unique devices) rather than with their IDs and passwords. While no form of identity verification is perfect, it seems a lot more likely that it’s really me logging in if I have to use my facial features or fingerprint as an entry point. After all, virtually any ID/PW pair chosen by a user can be guessed or hacked, but if you authenticate against my face or fingerprint and a registered device, the odds are high that you’re getting me.

So now it’s your turn, readers. What flaws do you see in this approach? Have you run into other apps that might serve this purpose better than True Key? Should HIT vendors create these apps? Have at it.

Joint Commission Now Allows Texting Of Orders

Posted on May 17, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

For a long time, it was common for clinicians to share private patient information with each other via standard text messages, despite the fact that the information was in the clear and could theoretically be intercepted and read (which, along with other factors, makes SMS texting a HIPAA violation in most cases). To my knowledge, there have been no major cases based on theft of clinically-oriented texts, but it certainly could’ve happened.

Over the past few years, however, a number of vendors have sprung up to provide HIPAA-compliant text messaging.  And apparently, these vendors have evolved approaches which satisfy the stringent demands of The Joint Commission. The hospital accreditation group had previously prohibited hospitals from sanctioning the texting of orders for patient care, treatment or services, but has now given it the go-ahead under certain circumstances.

This represents an about-face from 2011, when the group had deemed the texting of orders “not acceptable.” At the time, the Joint Commission said, the technology available didn’t provide the safety and security necessary to adequately support the use of texted orders. But now that several HIPAA-compliant text-messaging apps are available, the game has changed, according to the accrediting body.

Prescribers may now text such orders to hospitals and other healthcare settings if they meet the Commission’s Medication Management Standard MM.04.01.01. In addition, the app prescribers use to text the orders must provide for a secure sign-on process, encrypted messaging, delivery and read receipts, date and time stamp, customized message retention time frames and a specified contact list for individuals authorized to receive and record orders.
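
Those requirements map naturally onto a message schema. The Python sketch below shows one hypothetical way to represent the fields the Commission calls for; the names, the retention default and the contact list are invented for illustration and drawn from no vendor’s product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

AUTHORIZED_ORDER_RECIPIENTS = {"rn.asmith", "pharm.bjones"}  # specified contact list

@dataclass
class TextedOrder:
    sender: str
    recipient: str
    ciphertext: bytes                          # encrypted message body
    sent_at: datetime                          # date and time stamp
    delivered_at: Optional[datetime] = None    # delivery receipt
    read_at: Optional[datetime] = None         # read receipt
    retention: timedelta = timedelta(days=30)  # customized retention time frame

    def validate(self) -> None:
        if self.recipient not in AUTHORIZED_ORDER_RECIPIENTS:
            raise ValueError("recipient not authorized to receive and record orders")
```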

I see this as a welcome development. After all, it’s better to guide and control key aspects of a process than to let it continue on beneath the surface. Also, the reality is that healthcare entities need to keep adapting to and building upon the way providers actually communicate. Failing to do so can only add layers to a system already fraught with inefficiencies.

That being said, treating provider-to-provider texts as official communications generates some technical issues that haven’t been addressed yet, so far as I know.

Most particularly, if clinicians are going to be texting orders — as well as sharing PHI via text — with the full knowledge and consent of hospitals and other healthcare organizations, it’s time to look at what it takes to manage that information more efficiently. When used this way, texts go from informal communication to extensions of the medical record, and organizations should address that reality.

At the very least, healthcare players need to develop policies for saving and managing texts, and more importantly, for mining the data found within these texts. And that brings up many questions. For example, should texts be stored as a searchable file? Should they be appended to the medical records of the patients referenced, and if so, how should that be accomplished technically? How should texted information be integrated into a healthcare organization’s data mining efforts?

I don’t have the answers to all of these questions, but I’d argue that if texts are now vehicles for day-to-day clinical communication, we need to establish some best practices for text management. It just makes sense.

The Downside of Interoperability

Posted on May 2, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

It’s hard to argue that achieving health data interoperability is not important — but it comes with risks. And I’ve seen little discussion of the fact that interoperability may actually increase the chance that a major attack could hit a wide swath of healthcare providers. It might be extreme to suggest that we put off such efforts until we step up the industry’s security status, but the problem shouldn’t be ignored either.

Sure, data interoperability is a critical goal for healthcare providers of all stripes. While there’s room to argue about how it should be accomplished, particularly over whether providers or patients should drive health data management, there’s no question it needs to get done. There’s little doubt that most efforts to coordinate care will fall flat if providers are operating with incomplete information.

And what’s more, with the demand for interoperability baked into MACRA, we pretty much have no choice but to make it happen anyway. To my knowledge, HHS has proposed neither carrot nor stick to convince providers to come on board – nor has it defined “widespread” interoperability — but the agency has to achieve something by 2018, and that means change will come.

That being said, I’m struck by how little industry concern there seems to be about the extent to which interoperability can multiply the possibility of a breach occurring. Unfortunately, security is only as good as the weakest link in the chain, and data sharing greatly lengthens the chain. Of course, the risk varies a great deal depending on who or what the data-sharing intermediary is, but the fact remains that a connected network is a connected network.

The problem only gets worse if interoperability is achieved by integrating applications. I’m no software engineer, but I’m pretty sure that the more integrated providers’ infrastructure is, the more vulnerabilities they share. To be fair, hospitals theoretically vet their partners, but that defeats the purpose of universal data sharing, doesn’t it?

And even if every provider in the universal data sharing network practices good security hygiene, they can still get attacked. So it’s not just a matter of requiring participants to comply with some network security standard or meet some certification criteria. Given the massive incentives attackers have to steal health data (and lock it up with ransomware), nobody can hold out forever.

The bottom line is that I believe we should discuss the matter of security in a fully-connected health data sharing network more often.

Yes, we almost certainly need to press ahead and simply find a way to contain the risks. We simply can’t afford our fragmented healthcare system, and data interoperability offers perhaps the best possible chance of pulling it back together.

But before we plunge into the fray, it only makes sense to stop and consider all of the risks involved and how they should be addressed. After all, universal interconnection exposes a virtually infinite number of potential points of failure to cybercrooks. Let’s put some solutions on the table before it’s too late.

Medical Device Security At A Crossroads

Posted on April 28, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

As anyone reading this knows, connected medical devices are vulnerable to attacks from outside malware. Security researchers have been warning healthcare IT leaders for years that network-connected medical devices had poor security in place, ranging from image repository backups with no passwords to CT scanners with easily-changed configuration files, but far too many problems haven’t been addressed.

So why haven’t providers addressed the security problems? It may be because neither medical device manufacturers nor hospitals are set up to address these issues. “The reality is both sides — providers and manufacturers — do not understand how much the other side does not know,” said John Gomez, CEO of cybersecurity firm Sensato. “When I talk with manufacturers, they understand the need to do something, but they have never had to deal with cyber security before. It’s not a part of their DNA. And on the hospital side, they’re realizing that they’ve never had to lock these things down. In fact, medical devices have not even been part of the IT group in hospitals.”

Gomez, who spoke with Healthcare IT News, runs one of two companies backing a new initiative dedicated to securing medical devices and health organizations. (The other coordinating company is healthcare security firm Divurgent.)

Together, the two have launched the Medical Device Cybersecurity Task Force, which brings together a grab bag of industry players including hospitals, hospital technologists, medical device manufacturers, cyber security researchers and IT leaders. “We continually get asked by clients about the best practices for securing medical devices,” Gomez told Healthcare IT News. “There is little guidance and a lot of misinformation.”

The task force includes 15 health systems and hospitals, including Children’s Hospital of Atlanta, Lehigh Valley Health Network, Beebe Healthcare and Intermountain, along with tech vendors Renovo Solutions, VMware Inc. and AirWatch.

I mention this initiative not because I think it’s huge news, but rather as a reminder that the time to act on medical device vulnerabilities is more than nigh. There’s a reason why the Federal Trade Commission and the HHS Office of Inspector General, along with the IEEE, have launched their own initiatives to help medical device manufacturers boost cybersecurity. I believe we’re at a crossroads; on one side lies renewed faith in medical devices, and on the other nothing less than patient privacy violations, harm and even death.

It’s good to hear that the Task Force plans to create a set of best practices for both healthcare providers and medical device makers which will help get their cybersecurity practices up to snuff. Another interesting effort they have underway is the creation of an app which will help healthcare providers evaluate medical devices, while feeding a database that members can access to study the market.

But reading about their efforts also hammered home to me how much ground we have to cover in securing medical devices. Well-intentioned, even relatively effective, grassroots efforts are good, but they’re only a drop in the bucket. What we need is nothing less than a continuous knowledge feed between medical device makers, hospitals, clinics and clinicians.

And why not start by taking the obvious step of integrating the medical device and IT departments to some degree? That seems like a no-brainer. But unfortunately, the rest of the work to be done will take a lot of thought.

The Need for Speed (In Breach Protection)

Posted on April 26, 2016 | Written By

The following is a guest blog post by Robert Lord, Co-founder and CEO of Protenus.
The speed at which a hospital can detect a privacy breach could mean the difference between a brief, no-penalty notification and a multi-million dollar lawsuit.  This month it was reported that health information from 2,000 patients was exposed when a Texas hospital took four months to identify a data breach caused by an independent healthcare provider.  A health system in New York similarly took two months to determine that 2,500 patient records may have been exposed as a result of a phishing scam and potential breach reported two months prior.

The rise in reported breaches this year, from phishing scams to stolen patient information, only underscores the risk of lag times between breach detection and resolution. Why are lags of months and even years so common? And what can hospitals do to better prepare against threats that may reach the EHR layer?

Traditional compliance and breach detection tools are not nearly as effective as they need to be. The most widely used methods of detection involve either infrequent random audits or extensive manual searches through records following a patient complaint. For example, if a patient suspects that his medical record has been inappropriately accessed, a compliance officer must first review EMR data from the various systems involved.  Armed with a highlighter (or a large Excel spreadsheet), the officer must then analyze thousands of rows of access data, and cross-reference this information with the officer’s implicit knowledge about the types of people who have permission to view that patient’s records. Finding an inconsistency – a person who accessed the records without permission – can take dozens of hours of menial work per case.  Another issue with investigating breaches based on complaints is that there is often no evidence that the breach actually occurred. Nonetheless, the hospital is legally required to investigate all claims in a timely manner, and such investigations are costly and time-consuming.
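
Much of that cross-referencing is mechanical enough to automate. As a toy sketch: given a CSV access log and a care-team roster (the column names and rosters here are hypothetical), flag every access to a patient’s chart by someone outside that patient’s care team.

```python
import csv

# Hypothetical care-team roster, keyed by medical record number.
care_team = {"MRN-001": {"dr.lee", "rn.patel", "pharm.kim"}}

def flag_suspicious(access_log_path: str, mrn: str) -> list:
    """Return log rows where the accessing user is not on the care team."""
    flagged = []
    with open(access_log_path, newline="") as f:
        for row in csv.DictReader(f):  # columns: user_id, mrn, action, timestamp
            if row["mrn"] == mrn and row["user_id"] not in care_team[mrn]:
                flagged.append(row)
    return flagged
```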

According to a study by the Ponemon Institute, it takes an average of 87 days from the time a breach occurs to the time the officer becomes aware of the problem, and, given the arduous task at hand, it then takes another 105 days for the officer to resolve the issue. In total, it takes approximately 6 months from the time a breach occurs to the time the issue is resolved. Additionally, if a data breach occurs but a patient does not notice, it could take months – or even years – for someone to discover the problem. And of course, the longer it takes the hospital to identify a problem, the higher the cost of identifying how the breach occurred and remediating the situation.

In 2013, Rouge Valley Centenary Hospital in Scarborough, Canada, revealed that the contact information of approximately 8,300 new mothers had been inappropriately accessed by two employees. Since 2009, the two employees had been selling the contact information of new mothers to a private company specializing in Registered Education Savings Plans (RESPs). Some of the patients later reported that days after coming home from the hospital with their newborn child, they started receiving calls from sales representatives at the private RESP company. Marketing representatives were extremely aggressive, and seemed to know the exact date of when their child had been born.

The most terrifying aspect of this story is how the hospital was able to find out about the data breach: remorse and human error! One employee voluntarily turned himself in, while the other accidentally left patient records on a printer. Had these two events not happened, the scam could have continued for much longer than the four years it did before it was finally discovered.

Rouge Valley Hospital is currently facing a $412 million lawsuit over this breach of privacy. Arguably even more damaging is that it has lost the trust of the patients who relied on the hospital for care and for the confidentiality of their medical treatments.

As exemplified by the ramifications of the Rouge Valley Hospital breach and the new breaches discovered almost weekly in hospitals around the world, the current tools used to detect privacy breaches in electronic health records are not sufficient. A system needs to have the ability to detect when employees are accessing information outside their clinical and administrative responsibilities. Had the Scarborough hospital known about the inappropriately viewed records the first time they had been accessed, they could have investigated earlier and protected the privacy of thousands of new mothers.

Every person who seeks a hospital’s care has the right to privacy and the protection of their medical information. However, due to the sheer volume of patient records accessed each day, it is impossible for compliance officers to efficiently detect breaches without new and practical tools. Current rule-based analytical systems often overburden the officers with alerts, and are only a minor improvement over manual detection methods.

We are in the midst of a paradigm shift, with hospitals taking a more proactive and layered approach to health data security. New technology that uses machine learning and big data science to review each access to medical records will replace traditional compliance technology and streamline threat detection and resolution cycles from months to a matter of minutes, making identifying a privacy breach or violation as simple and fast as the action that may have caused it in the first place. Understanding how to select and implement these next-generation tools will be a new and important challenge for the compliance officers of the future, but one they can no longer afford to delay.
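
The post doesn’t detail Protenus’s actual model, but the general shape of machine-learning access review is easy to sketch with scikit-learn (assumed installed): train an anomaly detector on features of routine accesses, then score new ones. The features and numbers below are invented for illustration.

```python
from sklearn.ensemble import IsolationForest

# One row per chart access: [hour of day, user's accesses in last 24h,
# patient on user's unit (1) or not (0)] -- all routine daytime activity.
train = [
    [9, 14, 1], [10, 20, 1], [14, 18, 1], [11, 16, 1], [15, 22, 1],
    [8, 12, 1], [13, 17, 1], [16, 19, 1], [9, 15, 1], [10, 21, 1],
]
model = IsolationForest(contamination=0.1, random_state=0).fit(train)

# A 3 a.m. access, at unusually high volume, to a patient off the user's unit.
verdict = model.predict([[3, 180, 0]])[0]  # -1 flags an outlier
print("anomalous" if verdict == -1 else "normal")
```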

Protenus is a health data security platform that protects patient data in electronic medical records for some of the nation’s top-ranked hospitals. Using data science and machine learning, Protenus technology uniquely understands the clinical behavior and context of each user that is accessing patient data to determine the appropriateness of each action, elevating only true threats to patient privacy and health data security.

Patient Portal Security Is A Tricky Issue

Posted on April 25, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Much of the discussion around securing health data on computers revolves around enterprise networks, particularly internal devices. But it doesn’t hurt to look elsewhere in assessing your overall vulnerabilities. And unfortunately, that includes gaps that can be exposed by patients, whose security practices you can’t control.

One vulnerability that gets too little attention is the potential for a cyber attack accessing the provider’s patient portal, according to security consultant Keith Fricke of tw-Security in Overland Park, Kan. Fricke, who spoke with Information Management, noted that cyber criminals can access portal data relatively easily.

For example, they can insert malicious code into frequently visited websites, which the patient may inadvertently download. Then, if your patient’s device or computer isn’t secure, you may have big problems. When the patient accesses a hospital or clinic’s patient portal, the attacker can conceivably get access to the health data available there.

Not only does such an attack give the criminal access to the portal, it may also offer them access to many other patients’ computers, and the opportunity to send malware to those machines. So one patient’s security breach can become a vector of infection for countless patients.

When patients access the portal via mobile device, it raises another set of security issues, as the threat to such devices is growing over time. In a recent survey by Ponemon Institute and CounterTack, 80% of respondents reported that their mobile endpoints had been the target of malware in the past year. And there’s little doubt that attacks via mobile devices will grow more sophisticated over time.

Given how predictable such vulnerabilities are, you’d think that it would be fairly easy to lock the portals down. But the truth is, patient portals have to strike a particularly delicate balance between usability and security. While you can demand almost anything from employees, you don’t want to frustrate patients, who may become discouraged if too much is expected from them when they log in. And if they aren’t going to use it, why build a patient portal at all?

For example, requiring patients to change their password or login data frequently may simply be too taxing for users to handle. Other barriers include demanding that patients use only one specific browser to access the portal, or requiring them to use digits rather than an alphanumeric name they can remember. And insisting that patients use long, computer-generated passwords can be a hassle they won’t tolerate.

At this point, it would be great if I could say “here’s the perfect solution to this problem.” But the truth is, as you already know, that there’s no one solution that will work for every provider and every IT department. That being said, in looking at this issue, I do get the sense that providers and IT execs spend too little time on user-testing their portals. There’s lots of room for improvement there.

It seems to me that to strike the right balance between portal security and usability, it makes more sense to bring user feedback into the equation as early in the game as possible. That way, at least, you’ll be making informed choices when you establish your security protocols. Otherwise, you may end up with a white elephant, and nobody wants to see that happen.