
Joint Commission Now Allows Texting Of Orders

Posted on May 17, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

For a long time, it was common for clinicians to share private patient information with each other via standard text messages, despite the fact that the information traveled in the clear and could theoretically be intercepted and read (which, along with other factors, makes SMS texting a HIPAA violation in most cases). To my knowledge, there have been no major cases based on theft of clinically-oriented texts, but it certainly could have happened.

Over the past few years, however, a number of vendors have sprung up to provide HIPAA-compliant text messaging.  And apparently, these vendors have evolved approaches which satisfy the stringent demands of The Joint Commission. The hospital accreditation group had previously prohibited hospitals from sanctioning the texting of orders for patient care, treatment or services, but has now given it the go-ahead under certain circumstances.

This represents an about-face from 2011, when the group had deemed the texting of orders "not acceptable." At the time, the Joint Commission said, the technology then available didn't provide the safety and security necessary to adequately support the use of texted orders. But now that several HIPAA-compliant text-messaging apps are available, the game has changed, according to the accrediting body.

Prescribers may now text such orders to hospitals and other healthcare settings if they meet the Commission's Medication Management Standard MM.04.01.01. In addition, the app prescribers use to text the orders must provide for a secure sign-on process, encrypted messaging, delivery and read receipts, date and time stamp, customized message retention time frames and a specified contact list for individuals authorized to receive and record orders.
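
To make those requirements concrete, here is a rough sketch of the metadata a secure texting platform might keep for each texted order. It is illustrative only; the field names, retention period and contact list are hypothetical rather than drawn from the Joint Commission standard or any particular vendor's product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Contact list of individuals authorized to receive and record orders (example values).
AUTHORIZED_RECIPIENTS = {"rn.jones", "rn.smith"}

@dataclass
class TextedOrder:
    sender: str                                      # prescriber, authenticated via secure sign-on
    recipient: str                                   # must appear on the authorized contact list
    body_ciphertext: bytes                           # order text stored and transmitted encrypted
    sent_at: datetime                                # date and time stamp
    delivered_at: Optional[datetime] = None          # delivery receipt
    read_at: Optional[datetime] = None               # read receipt
    retention: timedelta = timedelta(days=7 * 365)   # customized retention time frame

    def recipient_is_authorized(self) -> bool:
        return self.recipient in AUTHORIZED_RECIPIENTS
```

The point is simply that each required capability maps naturally onto data the platform has to capture and retain.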

I see this as a welcome development. After all, it's better to guide and control key aspects of a process than to let it continue beneath the surface. Also, the reality is that healthcare entities need to keep adapting to and building upon the way providers actually communicate. Failing to do so can only add layers to a system already fraught with inefficiencies.

That being said, treating provider-to-provider texts as official communications generates some technical issues that, so far as I know, haven't been addressed yet.

Most particularly, if clinicians are going to text orders, and share PHI via text, with the full knowledge and consent of hospitals and other healthcare organizations, it's time to look at what it takes to manage that information more efficiently. When used this way, texts go from informal communication to extensions of the medical record, and organizations should address that reality.

At the very least, healthcare players need to develop policies for saving and managing texts, and more importantly, for mining the data found within these texts. And that brings up many questions. For example, should texts be stored as a searchable file? Should they be appended to the medical records of the patients referenced, and if so, how should that be accomplished technically? How should texted information be integrated into a healthcare organization’s data mining efforts?
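
As one possible starting point on the "searchable file" question, here is a minimal sketch of a text archive built on SQLite's FTS5 full-text index (present in most, though not all, Python builds of SQLite). The table layout, patient identifier and sample message are invented for illustration; a production archive would also need access controls, encryption at rest and a path into the EHR.

```python
import sqlite3

# Archive clinical text messages in a searchable store, linked to a patient identifier.
conn = sqlite3.connect("text_archive.db")
conn.execute("""
    CREATE VIRTUAL TABLE IF NOT EXISTS messages
    USING fts5(patient_id, sender, sent_at, body)
""")
conn.execute(
    "INSERT INTO messages VALUES (?, ?, ?, ?)",
    ("MRN-0001", "dr.example", "2016-05-17T09:30:00", "Start amoxicillin 500 mg PO TID"),
)
conn.commit()

# Later, a records team could search the archive before appending messages to the chart.
for patient_id, body in conn.execute(
    "SELECT patient_id, body FROM messages WHERE messages MATCH 'amoxicillin'"
):
    print(patient_id, body)
```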

I don’t have the answers to all of these questions, but I’d argue that if texts are now vehicles for day-to-day clinical communication, we need to establish some best practices for text management. It just makes sense.

OCR Cracking Down On Business Associate Security

Posted on May 13, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

For most patients, a data breach is a data breach. While it may make a big difference to a healthcare organization whether the source of a security vulnerability was outside its direct control, most consumers aren't as picky. Once you have to disclose to them that their data has been hacked, they aren't likely to be more forgiving just because one of your business associates was the source of the leak.

Just as importantly, federal regulators seem to be growing increasingly frustrated that healthcare organizations aren't doing a good job of managing business associate security. It's little wonder, given that about 20% of the 1,542 healthcare data breaches affecting 500 or more individuals reported since 2009 have involved business associates. (This is probably a conservative estimate, as reports to OCR by covered entities don't always mention the involvement of a business associate.)

To this point, the HHS Office for Civil Rights recently issued a cyber-alert stressing the urgency of addressing these issues. The alert, which OCR issued earlier this month, noted that a "large percentage" of covered entities assume they will not be notified of security breaches or cyberattacks experienced by their business associates. That, folks, is pretty weak sauce.

Healthcare organizations also believe that it’s difficult to manage security incidents involving business associates, and impossible to determine whether data safeguards and security policies and procedures at the business associates are adequate. Instead, it seems, many covered entities operate on the “keeping our fingers crossed” system, providing little or no business associate security oversight.

However, that is more than unwise, given that a number of major breaches have taken place because of oversights by business associates. For example, in 2011 information on 4.9 million individuals was exposed when unencrypted backup computer tapes were stolen from the car of a Science Applications International Corp. employee, who was transporting the tapes on behalf of the military health program TRICARE.

The solution to this problem is straightforward, if complex to implement, the alert suggests. "Covered entities and business associates should consider how they will confront a breach at their business associates or subcontractors," and make detailed plans as to how they'll address and report on security incidents among these groups, OCR suggests.

Of course, in theory business associates are required under HIPAA regs to put their own policies and procedures in place to prevent, detect, contain and correct security violations. But that will be no consolation if your data is exposed because you weren't holding their feet to the fire.

Besides, OCR isn't just sending out vaguely threatening emails. In March, OCR began Phase 2 of its HIPAA privacy and security audits of covered entities and business associates. These audits will "review the policies and procedures adopted and employed by covered entities and their business associates to meet selected standards and implementation specifications of the Privacy, Security, and Breach Notification Rules," OCR said at the time.

Time To Leverage EHR Data Analytics

Posted on May 5, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

For many healthcare organizations, implementing an EHR has been one of the largest IT projects they’ve ever undertaken. And during that implementation, most have decided to focus on meeting Meaningful Use requirements, while keeping their projects on time and on budget.

But it’s not good to stay in emergency mode forever. So at least for providers that have finished the bulk of their initial implementation, it may be time to pay attention to issues that were left behind in the rush to complete the EHR rollout.

According to a recent report by PricewaterhouseCoopers’ Advanced Risk & Compliance Analytics practice, it’s time for healthcare organizations to focus on a new set of EHR data analytics approaches. PwC argues that there is significant opportunity to boost the value of EHR implementations by using advanced analytics for pre-live testing and post-live monitoring. Steps it suggests include the following:

  • Go beyond sample testing: While typical EHR implementation testing strategies examine only samples of the underlying system build and records, that may not be enough, as build efforts may remain incomplete. Also, end-user workflow-specific testing may be occurring simultaneously. Consider using newer data mining and visualization analytics tools to conduct more thorough tests and spot trends.
  • Conduct real-time surveillance: Use data analytics programs to review upstream and downstream EHR workflows to find gaps, inefficiencies and other issues. This allows providers to design analytic programs using existing technology architecture.
  • Find RCM inefficiencies: Rather than relying on static EHR revenue cycle reports, which make it hard to identify root causes of trends and concerns, conduct interactive assessments of RCM issues. By creating dashboards with drill-down capabilities, providers can increase collections by scoring patient invoices, prioritizing the invoices with the highest scores and calculating the bottom-line impact of missed payments (see the sketch after this list).
  • Build a continuously-monitored compliance program: Use a risk-based approach to data sampling and drill-down testing. Analytics tools can allow providers to review multiple data sources under one dashboard and identify high-risk patterns in critical areas such as billing.
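
As a rough illustration of the invoice-scoring idea in the list above, the following sketch ranks outstanding invoices by collection risk and totals the dollars at stake. The scoring weights, field names and threshold are made up for the example; a real RCM analytics program would derive them from the organization's own claims and payment history.

```python
# Rank outstanding invoices by collection risk and estimate the dollars at stake.
invoices = [
    {"id": "INV-1001", "balance": 1250.00, "days_outstanding": 95, "denied_before": True},
    {"id": "INV-1002", "balance": 310.00, "days_outstanding": 20, "denied_before": False},
    {"id": "INV-1003", "balance": 4800.00, "days_outstanding": 61, "denied_before": False},
]

def risk_score(inv):
    """Higher score = higher risk that the balance goes uncollected."""
    score = inv["days_outstanding"] / 30.0           # aging
    score += 2.0 if inv["denied_before"] else 0.0    # prior denial is a strong signal
    score += inv["balance"] / 1000.0                 # weight by dollars at stake
    return round(score, 2)

ranked = sorted(invoices, key=risk_score, reverse=True)
at_risk_dollars = sum(inv["balance"] for inv in ranked if risk_score(inv) > 3)

for inv in ranked:
    print(inv["id"], risk_score(inv))
print("Potential bottom-line impact of missed payments: $%.2f" % at_risk_dollars)
```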

It's worth noting, at this point, that while these goals seem worthy, only a small percentage of providers have the resources to create and manage such programs. Sure, vendors will probably tell you that they can pop a solution in place that will get all the work done, but that's seldom the case in reality. Not only that, a surprising number of providers are still unhappy with their existing EHR, and are now looking at replacing those systems despite the cost. So we're hardly at the "stop and take a breath" stage in most cases.

That being said, it's certainly time for providers to get out of whatever defensive crouch they've been in and get proactive. For example, it would be great to leverage EHRs as tools for revenue cycle enhancement, rather than the absolute revenue drain they've been in the past. PwC's suggestions offer a useful look at where to go from here. That is, if providers' efforts don't get hijacked by MACRA.

The Downside of Interoperability

Posted on May 2, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

It's hard to argue that achieving health data interoperability is not important, but it comes with risks. And I've seen little discussion of the fact that interoperability may actually increase the chance that a major attack could hit a wide swath of healthcare providers. It might be extreme to suggest that we put off such efforts until the industry's security posture improves, but the problem shouldn't be ignored either.

Sure, data interoperability is a critical goal for healthcare providers of all stripes. While there’s room to argue about how it should be accomplished, particularly over whether providers or patients should drive health data management, there’s no question it needs to get done. There’s little doubt that most efforts to coordinate care will fall flat if providers are operating with incomplete information.

And what's more, with the demand for interoperability baked into MACRA, we pretty much have no choice but to make it happen anyway. To my knowledge, HHS has proposed neither carrot nor stick to convince providers to come on board, nor has it defined "widespread" interoperability, but the agency has to achieve something by 2018, and that means change will come.

That being said, I'm struck by how little industry concern there seems to be about the extent to which interoperability can multiply the possibility of a breach occurring. Unfortunately, security is only as good as the weakest link in the chain, and data sharing increases the length of the chain exponentially. Of course, the risk varies a great deal depending on who or what the data-sharing intermediary is, but the fact remains that a connected network is a connected network.

The problem only gets worse if interoperability is achieved by integrating applications. I’m no software engineer, but I’m pretty sure that the more integrated providers’ infrastructure is, the more vulnerabilities they share. To be fair, hospitals theoretically vet their partners, but that defeats the purpose of universal data sharing, doesn’t it?

And even if every provider in the universal data sharing network practices good security hygiene, they can still get attacked. So it's not simply a matter of requiring participants to comply with some network security standard or meet some certification criteria. Given the massive incentives attackers have to steal health data (and lock it up with ransomware), nobody can hold out forever.

The bottom line is that I believe we should discuss the matter of security in a fully-connected health data sharing network more often.

Yes, we almost certainly need to press ahead and simply find a way to contain the risks. We simply can’t afford our fragmented healthcare system, and data interoperability offers perhaps the best possible chance of pulling it back together.

But before we plunge into the fray, it only makes sense to stop and consider all of the risks involved and how they should be addressed. After all, universal interconnection exposes a virtually infinite number of potential points of failure to cybercrooks. Let’s put some solutions on the table before it’s too late.

Medical Device Security At A Crossroads

Posted on April 28, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

As anyone reading this knows, connected medical devices are vulnerable to attacks from outside malware. Security researchers have been warning healthcare IT leaders for years that network-connected medical devices had poor security in place, ranging from image repository backups with no passwords to CT scanners with easily-changed configuration files, but far too many problems haven’t been addressed.

So why haven't providers addressed the security problems? It may be because neither medical device manufacturers nor hospitals are set up to address these issues. "The reality is both sides — providers and manufacturers — do not understand how much the other side does not know," said John Gomez, CEO of cybersecurity firm Sensato. "When I talk with manufacturers, they understand the need to do something, but they have never had to deal with cyber security before. It's not a part of their DNA. And on the hospital side, they're realizing that they've never had to lock these things down. In fact, medical devices have not even been part of the IT group at hospitals."

Gomez, who spoke with Healthcare IT News, runs one of two companies backing a new initiative dedicated to securing medical devices and health organizations. (The other coordinating company is healthcare security firm Divurgent.)

Together, the two have launched the Medical Device Cybersecurity Task Force, which brings together a grab bag of industry players including hospitals, hospital technologists, medical device manufacturers, cyber security researchers and IT leaders. "We continually get asked by clients about the best practices for securing medical devices," Gomez told Healthcare IT News. "There is little guidance and a lot of misinformation."

The task force includes 15 health systems and hospitals, including Children’s Hospital of Atlanta, Lehigh Valley Health Network, Beebe Healthcare and Intermountain, along with tech vendors Renovo Solutions, VMware Inc. and AirWatch.

I mention this initiative not because I think it's huge news, but rather as a reminder that the time to act on medical device vulnerabilities is more than nigh. There's a reason why the Federal Trade Commission, the HHS Office of Inspector General and the IEEE have all launched their own initiatives to help medical device manufacturers boost cybersecurity. I believe we're at a crossroads; on one side lies renewed faith in medical devices, and on the other nothing less than patient privacy violations, harm and even death.

It's good to hear that the Task Force plans to create a set of best practices for both healthcare providers and medical device makers which will help get their cybersecurity practices up to snuff. Another interesting effort they have underway is the creation of an app which will help healthcare providers evaluate medical devices, while feeding a database that members can access to study the market.

But reading about their efforts also hammered home to me how much ground we have to cover in securing medical devices. Well-intentioned, even relatively effective, grassroots efforts are good, but they’re only a drop in the bucket. What we need is nothing less than a continuous knowledge feed between medical device makers, hospitals, clinics and clinicians.

And why not start by taking the obvious step of integrating the medical device and IT departments to some degree? That seems like a no-brainer. But unfortunately, the rest of the work to be done will take a lot of thought.

Patient Portal Security Is A Tricky Issue

Posted on April 25, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Much of the discussion around securing health data on computers revolves around enterprise networks, particularly internal devices. But it doesn’t hurt to look elsewhere in assessing your overall vulnerabilities. And unfortunately, that includes gaps that can be exposed by patients, whose security practices you can’t control.

One vulnerability that gets too little attention is the potential for a cyber attacker to access the provider's patient portal, according to security consultant Keith Fricke of tw-Security in Overland Park, Kan. Fricke, who spoke with Information Management, noted that cyber criminals can access portal data relatively easily.

For example, they can insert malicious code into frequently visited websites, which the patient may inadvertently download. Then, if your patient’s device or computer isn’t secure, you may have big problems. When the patient accesses a hospital or clinic’s patient portal, the attacker can conceivably get access to the health data available there.

Not only does such an attack give the criminal access to the portal, it may also offer them access to many other patients' computers, and the opportunity to send malware to those computers. So one patient's security breach can become a vector of infection for countless patients.

When patients access the portal via mobile device, it raises another set of security issues, as the threat to such devices is growing over time. In a recent survey by Ponemon Institute and CounterTack, 80% of respondents reported that their mobile endpoints had been the target of malware in the past year. And there's little doubt that attacks via mobile device will grow more sophisticated over time.

Given how predictable such vulnerabilities are, you’d think that it would be fairly easy to lock the portals down. But the truth is, patient portals have to strike a particularly delicate balance between usability and security. While you can demand almost anything from employees, you don’t want to frustrate patients, who may become discouraged if too much is expected from them when they log in. And if they aren’t going to use it, why build a patient portal at all?

For example, requiring a patient to change their password or login data frequently may simply be too taxing for users to handle. Other barriers include demanding that a patient use only one specific browser to access the portal, or requiring them to use digits rather than an alphanumeric username they can remember. And insisting that a patient use a long, computer-generated password can be a hassle that patients won't tolerate.

At this point, it would be great if I could say “here’s the perfect solution to this problem.” But the truth is, as you already know, that there’s no one solution that will work for every provider and every IT department. That being said, in looking at this issue, I do get the sense that providers and IT execs spend too little time on user-testing their portals. There’s lots of room for improvement there.

It seems to me that to strike the right balance between portal security and usability, it makes more sense to bring user feedback into the equation as early in the game as possible. That way, at least, you’ll be making informed choices when you establish your security protocols. Otherwise, you may end up with a white elephant, and nobody wants to see that happen.

Are Ransomware Attacks A HIPAA Issue, Or Just Our Fault?

Posted on April 18, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

With ransomware attacks hitting hospitals in growing numbers, it’s growing more urgent for healthcare organizations to have a routine and effective response to such attacks. While over the short term, providers are focused mostly on survival, eventually they’ll have to consider big-picture implications — and one of the biggest is whether a ransomware intrusion can be called a “breach” under federal law.

As readers know, providers must report any sizable breach to the HHS Office for Civil Rights. So far, though, it seems that the feds haven’t issued any guidance as to how they see this issue. However, people in the know have been talking about this, and here’s what they have to say.

David Holtzman, a former OCR official who now serves as vice president of compliance strategies at security firm CynergisTek, told Health Data Management that as long as the data was never compromised, a provider may be in the clear. If an organization can show OCR proof that no data was accessed, it may be able to avoid having the incident classed as a breach.

And some legal experts agree. Attorney David Harlow, who focuses on healthcare issues, told Forbes: “We need to remember that HIPAA is narrowly drawn and data breaches defined as the unauthorized ‘access, acquisition, use or disclosure’ of PHI. [And] in many cases, ransomware “wraps” PHI rather than breaches it.”

But as I see it, ransomware attacks should give health IT security pros pause even if they don't have to report a breach to the federal government. After all, as Holtzman notes, the HIPAA security rule requires that providers put appropriate safeguards in place to ensure the confidentiality, integrity and availability of ePHI. And fairly or not, any form of malware intrusion that succeeds raises questions about providers' security policies and approaches.

What’s more, ransomware attacks may point to underlying weaknesses in the organization’s overall systems architecture. “Why is the operating system allowing this application to access this data?” asked one reader in comments on a related EMR and HIPAA post. “There should be no possible way for a database that is only read/write for specified applications to be modified by a foreign encryption application,” the reader noted. “The database should refuse the instruction, the OS should deny access, and the security system should lock the encryption application out.”
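
The reader's argument boils down to least privilege at the operating-system level. As a minimal sketch of that principle, using a hypothetical file path and service account and standing in for (not replacing) real database access controls, a script like this could flag a data file that anything other than the designated application account is allowed to modify:

```python
import os
import stat

def check_least_privilege(db_path, service_uid):
    """Return warnings if users other than the service account could modify the file."""
    st = os.stat(db_path)
    problems = []
    if st.st_uid != service_uid:
        problems.append("file is not owned by the designated service account")
    if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
        problems.append("group or world write permission is enabled")
    return problems

# Hypothetical example: only the EHR service account (uid 1001 here) should write this file.
for issue in check_least_privilege("/var/lib/ehr/patients.db", service_uid=1001):
    print("WARNING:", issue)
```

A check like this obviously won't stop ransomware running under a privileged account, but it illustrates the kind of baseline the commenter is describing.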

To be fair, not all intrusions are someone’s “fault.” Ransomware creators are innovating rapidly, and are arguably equipped to find new vectors of infection more quickly than security experts can track them. In fact, easy-to-deploy ransomware as a service is emerging, making it comparatively simple for less-skilled criminals to use. And they have a substantial incentive to do so. According to one report, one particularly sophisticated ransomware strain has brought $325 million in profits to groups deploying it.

Besides, downloading actual data is so five years ago. If you’re attacking a provider, extorting payment through ransomware is much easier than attempting to resell stolen healthcare data. Why go to all that trouble when you can get your cash up front?

Still, the reality is that healthcare organizations must be particularly careful when it comes to protecting patient privacy, both for ethical and regulatory reasons. Perhaps ransomware will be the jolt that pushes lagging players to step up and invest in security, as it creates a unique form of havoc that could easily put patient care at risk. I certainly hope so.

Breach Affecting 2.2M Patients Highlights New Health Data Threats

Posted on April 4, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A Fort Myers, FL-based cancer care organization is paying a massive price for a health data breach that exposed personal information on 2.2 million patients late last year. This incident is also shedding light on the growing vulnerability of non-hospital healthcare data, as you’ll see below.

Recently, 21st Century Oncology was forced to warn patients that an "unauthorized third party" had broken into one of its databases. Officials said that they had no evidence that medical records were accessed, but conceded that the breached information may have included patient names, Social Security numbers, insurance information, and diagnosis and treatment data.

Notably, the cancer care chain, which operates 145 centers in 17 states, didn't learn about the breach until the FBI informed the company that it had happened.

Since that time, 21st Century has faced a broad range of legal consequences. Three lawsuits related to the breach have been filed against the company, all alleging that the breach exposed patients to a heightened possibility of harm. Patient indignation seems to have been stoked, in part, because they did not learn about the breach until five months after it happened, allegedly at the request of investigating FBI officials.

“While more than 2.2 million 21st Century Oncology victims have sought out and/or pay for medical care from the company, thieves have been hard at work, stealing and using their hard-to-change Social Security numbers and highly sensitive medical information,” said plaintiff Rona Polovoy in her lawsuit.

Polovoy's suit also contends that the company should have been better prepared for such breaches, given that it suffered a similar security lapse between October 2011 and August 2012, when an employee used patient names, Social Security numbers and dates of birth to file fraudulent tax refund claims. She claims that the current lapse demonstrates that the company did little to clean up its cybersecurity act.

Another plaintiff, John Dickman, says that the breach has filled his life with needless anxiety. In his legal filings he says that he “now must engage in stringent monitoring of, among other things, his financial accounts, tax filings, and health insurance claims.”

All of this may be grimly entertaining if you aren’t the one whose data was exposed, but there’s more to this case than meets the eye. According to a cybersecurity specialist quoted in Infosecurity Magazine, the 21st Century network intrusion highlights how exposed healthcare organizations outside the hospital world are to data breaches.

I can’t help but agree with TrapX Security executive vice president Carl Wright, who told the magazine that skilled nursing facilities, dialysis centers, imaging centers, diagnostic labs, surgical centers and cancer treatment facilities like 21st are all in network intruders’ crosshairs. Not only that, he notes that large extended healthcare networks such as accountable care organizations are vulnerable.

And that's a really scary thought. While he doesn't say so specifically, it's logical to assume that the more unrelated partners you weld together across disparate networks, the more security-related points of failure you multiply. Isn't it lovely how security threats emerge to meet every advance in healthcare?

This Time, It’s Personal: Virus Hits My Local Hospital

Posted on March 30, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

In about two weeks, I am scheduled to have a cardiac ablation to address a long-standing arrhythmia. I was feeling pretty good about this — after all, the procedure is safe at my age and is known to have a very high success rate — until I scanned my Twitter feed yesterday.

It was then that I found out that what was probably a ransomware virus had forced a medical data shutdown at Washington, D.C.-based MedStar Health. And while the community hospital where my procedure will be done is not part of the MedStar network, the cardiac electrophysiologist who will perform the ablation is affiliated with the chain.

During my pre-procedure visit with the doctor, a very pleasant guy who made me feel very safe, we devolved to talking shop about EMR issues after the clinical discussion was over. At the time he shared that his practice ran on GE Centricity which, he understandably complained, was not interoperable with the Epic system at one community chain, MedStar’s enterprise system or even the imaging platforms he uses. Under those circumstances, it’s hard to imagine that my data was affected by this breach. But as you can imagine, I still wonder what’s up.

While there’s been no official public statement saying this virus was part of a ransomware attack, some form of virus has definitely wreaked havoc at MedStar, according to a report by the Washington Post. (As a side note, it’s worth pointing out that if this is a ransomware attack, health system officials have done an admirable job of keeping the amount demanded for data return out of the press. However, some users have commented about ransomware on their individual computers.)

As the news report notes, MedStar has soldiered on in the face of the attack, keeping all of its clinical facilities open. However, a hospital spokesperson told the newspaper that the chain has decided to take down all system interfaces to prevent the spread of the virus. And as has happened with other hospital ransomware incursions, staffers have had to revert to using paper-based records.

And here's where it might affect me personally. Even though my procedure is being done at a non-MedStar hospital, it's possible that virus-driven delays in appointments and surgeries will affect my doctor, which could of course affect me.

Meanwhile, imagine how the employees at MedStar facilities feel: “Even the lowest-level staff can’t communicate with anyone. You can’t schedule patients, you can’t access records, you can’t do anything,” an anonymous staffer told the Post. Even if such a breach had little impact on patients, it’s obviously bad for employee morale. And that can’t be good for me either.

Again, it’s possible I’m in the clear, but the fact that the FUD surrounding this episode affects even a trained observer like myself plays right into the virus makers’ hands. Now, so far I haven’t dignified the attack by calling the doctor’s office to ask how it will affect me, but if I keep reading about problems with MedStar systems I’ll have to follow up soon.

Worse, when I’m being anesthetized for the procedure next month, I know I’ll be wondering when the next virus will hit.

Cyber Breach Insurance May Be Useless If You’re Negligent

Posted on March 28, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Ideally, your healthcare organization will never see a major data breach. But realistically, given how valuable healthcare data is these days — and the extent to which many healthcare firms neglect data security — it’s safer to assume that you will have to cope with a breach at some point.

In fact, it might be wise to assume that some form of costly breach is inevitable. After all, as one infographic points out, 55 healthcare organizations reported network attacks resulting in data breaches last year, compromising the health record information of 111,809,322 individuals. (If you haven't done the math in your head, that's a staggering 35% of the US population.)

The capper: if things don’t get better, the US healthcare industry stands to lose $305 billion in cumulative lifetime patient revenue due to cyberattacks likely to take place over the next five years.

So, by all means, protect yourself by any means available. However, as a recent legal battle suggests, simply buying cyber security insurance isn’t a one-step solution. In fact, your policy may not be worth much if you don’t do your due diligence when it comes to network and Internet security.

The lawsuit, Columbia Casualty Company v. Cottage Health System, shows what happens when a healthcare organization (allegedly) relies on its cyber insurance policy to protect it against breach costs rather than working hard to prevent such slips.

Back in December 2013, the three-hospital Cottage Health System notified 32,755 of its patients that their PHI had been compromised. The breach occurred when the health system and one of its vendors, InSync, stored unencrypted medical records on an Internet-accessible system.

It later came out that the breach was probably caused by careless FTP settings on both systems' servers, which permitted anonymous user access, essentially opening up access to patient health records to anyone who could use Google. (Wow. If true, that's really embarrassing. I doubt a sharp 13-year-old script kiddie would make that mistake.)
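
For what it's worth, this particular misconfiguration is easy to test for. Below is a minimal sketch using Python's standard ftplib that checks whether a server still accepts anonymous FTP logins; the hostname is a placeholder, and a check like this should only be run against systems you are authorized to audit.

```python
from ftplib import FTP, error_perm

def allows_anonymous_ftp(host, timeout=10):
    """Return True if the FTP server accepts an anonymous login."""
    try:
        ftp = FTP(host, timeout=timeout)
        ftp.login()          # no credentials supplied = anonymous login attempt
        ftp.quit()
        return True
    except error_perm:       # server refused the anonymous login
        return False

# Placeholder hostname; substitute a server you are authorized to test.
if allows_anonymous_ftp("ftp.example.org"):
    print("Anonymous FTP is enabled; files on this host are effectively public.")
```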

Anyway, a group of presumably ticked off patients filed a class action suit against Cottage asking for $4.125 million. At first, cyber breach insurer Columbia Casualty paid out the $4.125 million and settled the case. Now, however, the insurer is suing Cottage, asking the health system to pay it back for the money it paid out to the class action members. It argues that Cottage was negligent due to:

  • a failure to continuously implement the procedures and risk controls identified in the application, including, but not limited to, its failure to replace factory default settings and its failure to ensure that its information security systems were securely configured; and
  • a failure to regularly check and maintain security patches on its systems, its failure to regularly re-assess its information security exposure and enhance risk controls, its failure to have a system in place to detect unauthorized access or attempts to access sensitive information stored on its servers and its failure to control and track all changes to its network to ensure it remains secure.

Not only that, Columbia Casualty asserts, Cottage lied about following a minimum set of security practices known as a “Risk Control Self Assessment” required as part of the cyber insurance application.

Now, if the cyber insurer’s allegations are true, Cottage’s behavior may have been particularly egregious. And no one has proven anything yet, as the case is still in the early stages, but this dispute should still stand as a warning to all healthcare organizations. If you neglect security, then try to get an insurance company to cover your behind when breaches occur, you might be out of luck.