
No Duh, FTP Servers Pose PHI Security Risk

Posted on April 12, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

The File Transfer Protocol is so old – it was published in April 1971 – that it once ran on NCP, the predecessor of TCP/IP. And surprise, surprise, it’s not terribly secure, and was never designed to be so either.

Security researchers have pointed out that FTP servers are susceptible to a range of problems, including brute force attacks, FTP bounce attacks, packet capture, port stealing, spoofing attacks and username enumeration.

Also, like many IP specifications designed before standard encryption approaches such as SSL were available, FTP doesn't encrypt traffic: all transmissions travel in clear text, leaving usernames, passwords, commands and data readable by anyone sniffing the network.

So why am I bothering to remind you of all of this? I'm doing so because according to the FBI, cybercriminals have begun targeting FTP servers and, in doing so, accessing personal health information. The agency reports that these criminals are attacking anonymous FTP servers associated with medical and dental facilities. Worse, many of these organizations don't even know they have these servers running.

Getting into these servers is a breeze, the report notes. With anonymous FTP servers, attackers can authenticate to the FTP server using meaningless credentials like “anonymous” or “ftp,” or use a generic password or email address to log in. Once they gain access to PHI, and personally identifiable information (PII), they’re using it to “intimidate, harass, and blackmail business owners,” the FBI report says.
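To see just how low the bar is, here's a minimal Python sketch of the kind of check the FBI is describing. The host name in the usage comment is purely hypothetical; any server that accepts the username "anonymous" or "ftp" with a throwaway password is wide open:

```python
from ftplib import FTP, error_perm

# The conventional usernames that request anonymous FTP access.
ANONYMOUS_USERS = ("anonymous", "ftp")

def allows_anonymous_login(host, timeout=5):
    """Return True if `host` grants FTP access with throwaway credentials.

    By convention, anonymous servers accept any email-like string as
    the 'password' without validating it.
    """
    try:
        with FTP(host, timeout=timeout) as ftp:
            ftp.login(user="anonymous", passwd="guest@example.com")
            return True
    except (error_perm, OSError):
        return False

# Hypothetical usage -- the host name is illustrative only:
# if allows_anonymous_login("ftp.dental-clinic.example"):
#     print("Anonymous FTP is enabled; make sure no PHI lives here.")
```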

As readers may know, once these cybercriminals get to an anonymous FTP server, they can not only attack it, but also gain write access to the server and upload malicious apps.

Given these concerns, the FBI is recommending that medical and dental entities ask their IT staff to check their networks for anonymous FTP servers. And if they find any, the organization should at least be sure that PHI or PII aren’t stored on those servers.
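For IT staff wondering where to start, a rough sketch of that network check might look like the following. It only tests whether anything on a subnet is listening on FTP's port 21 (each hit would then need an anonymous-login test), and the address range shown in the docstring example is illustrative:

```python
import ipaddress
import socket

def hosts_with_open_ftp(cidr, timeout=0.5):
    """Yield addresses in `cidr` that accept TCP connections on port 21.

    Example (hypothetical internal subnet): hosts_with_open_ftp("10.0.12.0/24")
    """
    for addr in ipaddress.ip_network(cidr).hosts():
        try:
            with socket.create_connection((str(addr), 21), timeout=timeout):
                yield str(addr)
        except OSError:
            continue
```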

The obvious question here is why healthcare organizations would host an anonymous FTP server in the first place, given its known vulnerabilities and the wide variety of available alternatives. If nothing else, why not use Secure FTP, which adds encryption for passwords and data transmission while retaining the same interface as basic FTP? Or what about using the HTTP or HTTPS protocol to share files with the world? After all, your existing infrastructure probably includes firewalls, intrusion detection/protection solutions and other technologies already tuned to work with web servers.
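For what it's worth, the "Secure FTP" upgrade is nearly a drop-in change at the code level. Here's a hedged sketch using Python's standard-library FTPS support (FTP over TLS); the host, credentials and path would all come from your own environment:

```python
from ftplib import FTP_TLS

def fetch_over_ftps(host, user, password, path, timeout=10):
    """Download `path` over FTPS; credentials and data both travel under TLS."""
    chunks = []
    with FTP_TLS(host, timeout=timeout) as ftps:
        ftps.login(user=user, passwd=password)   # sent only after TLS is up
        ftps.prot_p()                            # encrypt the data channel too
        ftps.retrbinary(f"RETR {path}", chunks.append)
    return b"".join(chunks)
```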

Of course, healthcare organizations face a myriad of emerging data security threats. For example, the FDA is so worried about the possibility of medical device attacks that it issued agency guidance on the subject. The agency is asking both device manufacturers and healthcare facilities to protect medical devices from cybersecurity threats. It’s also asking hospitals and healthcare facilities to see that they have adequate network defenses in place.

But when it comes to hosting anonymous FTP servers on your network, I’ve got to say “really?” This has to be a thing that the FBI tracks and warns providers to avoid? One would think that most health IT pros, if not all, would know better than to expose their networks this way. But I suppose there will always be laggards who make life harder for the rest of us!

Will Data Aggregation For Precision Medicine Compromise Patient Privacy?

Posted on April 10, 2017 | Written By Anne Zieger

Like anyone else who follows medical research, I’m fascinated by the progress of precision medicine initiatives. I often find myself explaining to relatives that in the (perhaps far distant) future, their doctor may be able to offer treatments customized specifically for them. The prospect is awe-inspiring even for me, someone who’s been researching and writing about health data for decades.

Even so, there are problems with bringing so much personal information together into a giant database, suggests Jennifer Kulynych in an article for OUPblog, which is published by Oxford University Press. In particular, assembling a massive trove of individual medical histories and genomes may have serious privacy implications, she says.

In arguing her point, she makes a sobering observation that rings true for me:

“A growing number of experts, particularly re-identification scientists, believe it simply isn’t possible to de-identify the genomic data and medical information needed for precision medicine. To be useful, such information can’t be modified or stripped of identifiers to the point where there’s no real risk that the data could be linked back to a patient.”

As she points out, norms in the research community make it even more likely that patients could be individually identified. For example, while a doctor might need your permission to test your blood for care, in some states it’s quite legal for a researcher to take possession of blood not needed for that care, she says. Those researchers can then sequence your genome and place that data in a research database, and the patient may never have consented to this, or even know that it happened.

And there are other, perhaps even more troubling ways in which existing laws fail to protect the privacy of patients in researchers' data stores. For example, current research and medical regs let review boards waive patient consent, or even allow researchers to call DNA sequences "de-identified" data. The latter rests on the assumption that there's no re-identification risk, an assumption that is increasingly hard to defend, she writes.

On top of all of this, the technology already exists to leverage this information for personal identification. For example, genome sequences can potentially be re-identified through comparison to a database of identified genomes. Law enforcement organizations have already used such data to predict key aspects of an individual’s face (such as eye color and race) from genomic data.
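The comparison attack she describes can be illustrated with a toy example. This is not a real re-identification pipeline, just a sketch of the idea: match an "anonymous" genotype profile against a database of identified profiles and report the closest agreement (all names and SNP data below are invented):

```python
def reidentify(anonymous_snps, identified_db, threshold=0.9):
    """Toy linkage attack on 'de-identified' genomic data.

    anonymous_snps: dict mapping SNP id -> genotype, e.g. {"rs123": "AG"}
    identified_db: dict mapping person name -> SNP dict
    Returns the best-matching name if the fraction of shared SNPs that
    agree reaches `threshold`, else None.
    """
    best_name, best_score = None, 0.0
    for name, snps in identified_db.items():
        shared = anonymous_snps.keys() & snps.keys()
        if not shared:
            continue
        score = sum(anonymous_snps[s] == snps[s] for s in shared) / len(shared)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

With even a handful of markers, a unique profile links back to a named individual, which is the core of Kulynych's worry.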

Then there’s the issue of what happens with EMR data storage. As the author notes, healthcare organizations are increasingly adding genomic data to their stores, and sharing it widely with individuals on their network. While such practices are largely confined to academic research institutions today, this type of data use is growing, and could also expose patients to involuntary identification.

Not everyone is as concerned as Kulynych about these issues. For example, a group of researchers recently concluded that a single patient anonymization algorithm could offer a "standard" level of privacy protection to patients, even when the organizations involved are sharing clinical data. They argue that larger clinical datasets that use this approach could protect patient privacy without generalizing or suppressing data in a manner that would undermine its usefulness.
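To make the trade-off concrete, here's a toy illustration (not the researchers' actual algorithm) of the kind of generalization such anonymization schemes rely on: coarsen quasi-identifiers like age and ZIP code until every combination is shared by at least k patients:

```python
from collections import Counter

def generalize(record):
    """Generalize quasi-identifiers: exact age -> 10-year band,
    5-digit ZIP -> 3-digit prefix."""
    age, zip_code = record
    decade = age // 10 * 10
    return (f"{decade}-{decade + 9}", zip_code[:3] + "**")

def is_k_anonymous(records, k):
    """True if every generalized quasi-identifier combination
    appears at least k times, so no record stands out alone."""
    counts = Counter(generalize(r) for r in records)
    return all(n >= k for n in counts.values())
```

The tension Kulynych points to is visible even here: the coarser the bands, the better the privacy, but the less useful the data becomes for precision medicine.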

But if nothing else, it's hard to argue with Kulynych's central concern: that too few rules have been updated to reflect the realities of big genomic and medical data stores. Clearly, state and federal rules need to address the emerging problems associated with big data and privacy. Otherwise, by the time a major privacy breach occurs, neither patients nor researchers will have any recourse.

Study Offers Snapshot Of Provider App Preferences

Posted on March 20, 2017 | Written By Anne Zieger

A recent study conducted by HIT industry researchers and an ONC-backed health tech project offers an interesting window into how healthcare organizations see freestanding health apps. The research, by KLAS and the SMART Health IT Project, suggests that providers are developing an increasingly clear picture of what apps they'd like to see and how they'd use them.

Readers of this blog won’t be surprised to hear that it’s still early in the game for healthcare app use. In fact, the study notes, about half of healthcare organizations don’t formally use apps at the point of care. Also, most existing apps offer basic EMR data access, rather than advanced use cases.

The apps offering EMR data access are typically provided by vendors, and only allow users to view such data (as opposed to documenting care), according to the study report. But providers want to roll out apps which allow inputting of clinical data, as this function would streamline clinicians’ ability to make an initial patient assessment, the report notes.

But there are other important app categories which have gained an audience, including diagnostic apps used to support patient assessment, medical reference apps and patient engagement apps.  Other popular app types include clinical decision support tools, documentation tools and secure messaging apps, according to researchers.

It's worth noting, though, that there seems to be a gap between what providers are willing to use and what they are willing to buy or develop on their own. For example, the report notes that nearly all respondents would be willing to buy or build a patient engagement app, as well as clinical decision support tools and documentation apps. The patient engagement apps researchers had in mind would manage chronic conditions like diabetes or heart disease, both very important population health challenges.

Hospital leaders, meanwhile, expressed interest in using sophisticated patient portal apps which go beyond simply allowing patients to view their data. “What I would like a patient app to do for us is to keep patients informed all throughout their two- to four-hours ED stay,” one CMO told researchers. “For instance, the app could inform them that their CBC has come back okay and that their physician is waiting on the read. That way patients would stay updated.”

When it came to selecting apps, respondents placed a top priority on usability, followed by the app's cost, clinical impact, capacity for integration, functionality, app credibility, peer recommendations and security. (This is interesting, given that many providers seem to give usability short shrift when evaluating other health IT platforms, most notably EMRs.)

To determine whether an app will work, respondents placed the most faith in conducting a pilot or other trial. Other popular approaches included vendor demos and peer recommendations. Few favored vendor websites or videos as a means of learning about apps, and even fewer favored working with app endorsement organizations or discovering apps at conferences.

But providers still have a few persistent worries about third-party apps, including privacy and security, app credibility, the level of ongoing maintenance needed, the extent of integration and data aggregation required to support apps, and issues regarding data ownership. Given that those privacy and security concerns are probably justified, it seems likely that they'll be a significant drag on app adoption going forward.

Could Patents Freeze Blockchain’s Progress?

Posted on March 6, 2017 | Written By Anne Zieger

Everywhere you look, somebody’s talking about blockchain technology and its amazing future. In fact, the healthcare industry is engaging in perhaps the most aggressive blockchain deployments of any industry, according to Deloitte.

Originally, blockchain was an open-source platform, freely available to anyone who wanted to use it. But that could soon change, if a new item from Reuters is any indication.

According to the news service, Aussie computer scientist Craig Wright – who claims to be the pseudonymous “Satoshi Nakamoto” responsible for the technology – is working with Canadian online gambling entrepreneur Calvin Ayre to patent aspects of bitcoin/blockchain tech.

To date, Wright, who's being funded by the wealthy Canadian, has filed more than 70 patent applications in Britain in cooperation with associates. This may not sound like a big deal, but it is, considering that only 63 blockchain-related patents were filed globally last year, according to Reuters.

Not only that, Wright plans to file many more, Reuters research has concluded. The patent applications include approaches specific to healthcare, including storage of medical documents. Ultimately, Wright and his partners plan to file as many as 400 patent applications, the news service reports.

Ayre is investing in blockchain largely because he sees it as a good fit with the gambling business. And that serves Wright’s interests, which have included online gambling for decades. In fact, Reuters notes that the bitcoin code base contains unimplemented functions related to poker. So it makes sense that he wants to lock it down and own it.

That being said, it seems unlikely that well-funded corporate interests – including healthcare organizations – are going to just sit back and ignore these developments. After all, companies spent more than $1.5 billion on blockchain technology during 2016, and they’re likely to scale up further this year. In other words, they’re not going to let go of blockchain technology without a fight.

Also, it’s worth noting that none of the 70 existing patent applications have been granted to date, and according to Reuters it’s not clear if they’ll even be enforceable if they are.

Finally, Ayre's history in the U.S. raises questions as to whether he's completely above-board. In the past, most of his online gambling revenue came from the U.S., a highly lucrative business which made him extremely rich. But offering such a platform was and is illegal in many states, and as a result one of them (Maryland) indicted his online gambling network Bodog, Ayre himself and four other people. The case is still pending.

All told, it doesn’t seem likely that health IT organizations need to drop their blockchain plans anytime soon. But it’s worth bearing in mind that blockchain development may be more complicated – and more expensive – in the future.

Costs Of Compromised Credentials Rising

Posted on March 3, 2017 | Written By Anne Zieger

Healthcare organizations face unique network access challenges. While some industries only need to control access by professional employees and partners, healthcare organizations are increasingly opening up data to consumers, and the number of consumer access points is multiplying. While other industries face similar problems – banking seems particularly relevant – I don't know of any other industry that depends on such a sophisticated data exchange with consumers to achieve critical results.

Given the industry’s security issues, I found the following article to be quite interesting. While it doesn’t address healthcare concerns directly, I think it’s relevant nonetheless.

The article, written by InfoArmor CTO Christian Lees, contends that next-generation credentials are “edging toward a precarious place.” He argues that because IT workers are under great pressure to produce, they’re rushing the credentialing process. And that has led to a lack of attention to detail, he says:

“Employees, contractors and even vendors are rapidly credentialed with little attention given to security rules such as limiting access per job roles, enforcing secure passwords, and immediately revoking credentials after an employee moves on…[and as a result], criminals get to choose from a smorgasbord of credentialed identities with which to phish employees and even top executives.”

Meanwhile, if auto-generated passwords are short and ineffective, or so long that users must write them down to remember them, credentials tend to get compromised quickly. What’s more, password sharing and security shortcuts used for sign-in (such as storing a password in a browser) pose further risk, he notes.
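The "short and ineffective" problem, at least, is avoidable: modern languages ship cryptographically secure generators. A minimal Python sketch (the symbol set is an illustrative choice, not a recommendation from the article):

```python
import secrets
import string

# Letters, digits, and a small set of symbols most systems accept.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length=16):
    """Draw each character from a cryptographically secure RNG,
    avoiding the predictable output of ordinary random generators."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```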

Though he doesn’t state this in exactly these words, the problem is obviously multiplied when you’re a healthcare provider. After all, if you’re managing not only thousands of employee and partner credentials, but potentially, millions of consumer credentials for use in accessing portal data, you’re fighting a battle on many fronts.

And unfortunately, the cost of losing control of these credentials is very high. In fact, according to a Verizon study, 63% of confirmed data breaches last year involved weak, default or stolen passwords.

To tackle this problem, Lees suggests, organizations should create a work process which handles different types of credentials in different ways.

If you're providing access to public-facing information, which doesn't include transaction, identifying or sensitive information, using a standard password may be good enough. The passwords should still be encrypted and protected, but they can remain easy to use, he says.
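In practice, "encrypted and protected" for stored passwords usually means salted and hashed with a deliberately slow function. A minimal sketch using Python's standard-library scrypt (parameters are common defaults, not anything Lees prescribes):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Salted scrypt hash; store (salt, digest), never the password itself."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute with the stored salt and compare in constant time."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)
```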

Meanwhile, if you need to offer users access to highly sensitive information, your IT organization should implement a separate process which assigns stronger, more complex passwords as well as security layers like biometrics, cryptographic keys or out-of-band confirmation codes, Lees recommends.
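One of those layers, the one-time confirmation code, is actually standardized: the codes produced by authenticator apps follow RFC 4226 (HOTP) and RFC 6238 (TOTP), and fit in a few lines of standard-library Python. This is a sketch of the algorithm, not a production implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                       # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: the counter is the current 30s window."""
    return hotp(secret, int(time.time()) // period, digits)
```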

Another way to improve your credentialing strategy is to associate known behaviors with those credentials. “If you know that Bill comes to the office on Tuesdays and Thursdays but works remotely the rest of the week and that he routinely accesses certain types of files, it becomes much harder for a criminal to use Bill’s compromised credentials undetected,” he writes.
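A rule-based version of that "known behaviors" check is almost trivial to sketch; Bill's schedule below is the article's own hypothetical, and a real system would learn these patterns rather than hard-code them:

```python
from datetime import datetime

def is_suspicious(login_time, usual_days, usual_hours):
    """Flag a login that falls outside a user's known pattern.

    usual_days: set of weekday numbers (0 = Monday)
    usual_hours: range of typical on-site login hours, e.g. range(8, 18)
    """
    return (login_time.weekday() not in usual_days
            or login_time.hour not in usual_hours)

# Bill works on-site Tuesdays (1) and Thursdays (3), 8am-6pm.
BILL_DAYS, BILL_HOURS = {1, 3}, range(8, 18)
```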

Of course, readers of this blog will have their own strategies in place for protecting credentials, but Lees' suggestions are worth considering as well. When you're dealing with valuable health data, it never hurts to go that extra mile. If you don't, you might get a visit from the HIPAA police (proverbial, not actual).

Consumers Fear Theft Of Personal Health Information

Posted on February 15, 2017 | Written By Anne Zieger

Probably fueled by constant news about breaches – duh! – consumers continue to worry that their personal health information isn’t safe, according to a new survey.

As the press release for the 2017 Xerox eHealth Survey notes, last year more than one data breach was reported each day. So it's little wonder that the survey – which was conducted online by Harris Poll in January 2017 among more than 3,000 U.S. adults – found that 44% of Americans are worried about having their PHI stolen.

According to the survey, 76% of respondents believe that it’s more secure to share PHI between providers through a secure electronic channel than to fax paper documents. This belief is certainly a plus for providers. After all, they’re already committed to sharing information as effectively as possible, and it doesn’t hurt to have consumers behind them.

Another positive finding from the study is that Americans also believe better information sharing across providers can help improve patient care. Xerox/Harris found that 87% of respondents believe that wait times to get test results and diagnoses would drop if providers securely shared and accessed patient information from varied providers. Not only that, 87% of consumers also said that they felt that quality of service would improve if information sharing and coordination among different providers was more common.

Looked at one way, these stats offer providers an opportunity. If you’re already spending tens or hundreds of millions of dollars on interoperability, it doesn’t hurt to let consumers know that you’re doing it. For example, hospitals and medical practices can put signs in their lobby spelling out what they’re doing by way of sharing data and coordinating care, have their doctors discuss what information they’re sharing and hand out sheets telling consumers how they can leverage interoperable data. (Some organizations have already taken some of these steps, but I’d argue that virtually any of them could do more.)

On the other hand, if nearly half of consumers are afraid that their PHI is insecure, providers have to do more to reassure them. Though few would understand how your security program works, letting them know how seriously you take the matter is a step forward. Also, it's good to educate them on what they can do to keep their health information secure, as people tend to be less fearful when they focus on what they can control.

That being said, the truth is that healthcare data security is a mixed bag. According to a study conducted last year by HIMSS, while most organizations conduct IT security risk assessments, many IT execs have only occasional interactions with top-level leaders. Also, many are still planning out their medical device security strategy. Worse, provider security spending is often minimal. HIMSS notes that few organizations spend more than 6% of their IT budgets on data security, and 72% have five or fewer employees allocated to security.

Ultimately, it’s great to see that consumers are getting behind the idea of health data interoperability, and see how it will benefit them. But until health organizations do more to protect PHI, they’re at risk of losing that support overnight.

Hybrid Entities Ripe For HIPAA Enforcement Actions

Posted on February 8, 2017 | Written By Anne Zieger

As some readers will know, HIPAA rules allow large organizations to separate out parts of the organization which engage in HIPAA-covered functions from those that do not. When they follow this model, known as a "hybrid entity" under HIPAA, organizations must take care to identify the "components" of their organization which engage in functions covered by HIPAA, notes attorney Matthew Fisher in a recent article.

If they don’t, they may get into big trouble, as signs suggest that the Office for Civil Rights will be taking a closer look at these arrangements going forward, according to attorneys.  In fact, the OCR recently hit the University of Massachusetts Amherst with a $650,000 fine after a store of unsecured electronic protected health information was breached. This action, the first addressing the hybrid entity standard under HIPAA, asserted that UMass had let this data get breached because it hadn’t treated one of its departments as a healthcare component.

UMass’s troubles began in June 2013, when a workstation at the UMass Center for Language, Speech and Hearing was hit with a malware attack. The malware breach led to the disclosure of patient names, addresses, Social Security numbers, dates of birth, health insurance information and diagnoses and procedure codes for about 1,670 individuals. The attack succeeded because UMass didn’t have a firewall in place.

After investigating the matter, OCR found that UMass had failed to name the Center as a healthcare component which needed to meet HIPAA standards, and as a result had never put policies and procedures in place there to enforce HIPAA compliance. What’s more, OCR concluded that – violating HIPAA on yet another level – UMass didn’t conduct an accurate and thorough risk analysis until September 2015, well after the original breach.

In the end, things didn’t go well for the university. Not only did OCR impose a fine, it also demanded that UMass take corrective action.

According to law firm Baker Donelson, this is a clear sign that the OCR is going to begin coming down on hybrid entities that don’t protect their PHI appropriately or erect walls between healthcare components and non-components. “Hybrid designation requires precise documentation and routine updating and review,” the firm writes. “It also requires implementation of appropriate administrative, technical and physical safeguards to prevent non-healthcare components from gaining PHI access.”

And the process of singling out healthcare components for special treatment should never end. The firm advises its clients to review the status of components whenever new ones are added – a walk-in or community clinic, for example – or when new enterprise-wide systems are implemented.

My instinct is that problems like UMass's, in which hybrid institutions struggle to separate components logically and physically, are only likely to get worse as healthcare organizations consolidate into ACOs.

I assume that under these loosely consolidated business models, individual entities will still have to mind their own security. But at the same time, if they hope to share data and coordinate care effectively, extensive network interconnections will be necessary, and mapping who can and can’t look at PHI is already tricky. I don’t know what such partners will do to keep data not only within their network, but out of the hands of non-components, but I’m sure it’ll be no picnic.

Health IT Leaders Struggle With Mobile Device Management, Security

Posted on January 30, 2017 | Written By Anne Zieger

A new survey on healthcare mobility has concluded that IT leaders aren’t thrilled with their security arrangements, and that a significant minority don’t trust their mobile device management solution either. The study, sponsored by Apple device management vendor Jamf, reached out to 550 healthcare IT leaders in the US, UK, France, Germany and Australia working in organizations of all sizes.

Researchers found that 83% of organizations offer smartphones or tablets to their providers, and that 32% of survey respondents hope to offer mobile devices to consumers getting outpatient care over the next two years. That being said, they also had significant concerns about their ability to manage these devices, including questions about security (83%), data privacy (77%) and inappropriate employee use (49%).

The survey also dug up some tensions between their goals and their capacity to support those goals. Forty percent of respondents said staff access to confidential medical records while on the move was their key reason for their mobile device strategy. On the other hand, while 84% said that their organization was HIPAA-compliant, almost half of respondents said that they didn’t feel confident in their ability to adapt quickly to changing regulations.

To address these concerns about mobile deployments, many providers are leveraging mobile device management (MDM) platforms. Of those organizations that either have or plan to put an MDM solution in place, 80% cited time savings and 79% cited enhanced employee productivity as the main benefits they hoped to realize.

Those who had rolled out an MDM solution said the benefits have included easier access to patient data (63%), faster patient turnaround (51%) and enhanced medical record security (48%). At the same time, 27% of respondents whose organizations had an MDM strategy in place said they didn’t feel especially confident about the capabilities of their solution.

In any event, it’s likely that MDM can’t solve some of the toughest mobile deployment problems faced by healthcare organizations anyway.

Health organizations that hope to leverage independently-developed apps will need to vet them carefully, as roughly one-quarter of these developers didn’t have privacy policies in place as of late last year. And the job of selecting the right apps is a gargantuan one. With the volume of health apps hitting almost 260,000 across the Google and Apple app marketplaces, it’s hard to imagine how any provider could keep up.

So yes, the more capabilities MDM systems can offer, the better. But choosing the right apps with the right pedigree strikes me as posing an even bigger challenge.

Locking Down Clinician Wi-Fi Use

Posted on November 1, 2016 | Written By Anne Zieger

Now that Wi-Fi-based Internet connections are available in most public spaces where clinicians might spend time, they have many additional opportunities to address emerging care issues on the road, be it with their family at the mall or grabbing a burger at McDonald's.

However, notes one author, there are many situations in which clinicians who share private patient data via Wi-Fi may be violating HIPAA rules, though they may not be aware of the risks they are taking. Not only can a doctor or nurse end up exposing private health information to the public, they can open a window into their EMR, which can violate countless additional patients' privacy. Like traditional texting, standard Wi-Fi offers hackers an unencrypted data stream, which puts a clinician's connected mobile device at risk unless they take other precautions, such as using a VPN.

According to Paul Cerrato, who writes on cybersecurity for iMedicalApps, Wi-Fi networks are open by design. If a physician can connect to the network, hostile actors can too, and in turn to the physician's device, allowing them to open and view files and even download information to their own machines.

It's not surprising that physicians are tempted to use open public networks to do clinical work. After all, it's convenient for them to dash off an email message regarding, say, a patient medication issue while having a quick lunch at a coffee shop. Doing so is easy and feels natural, but if the email is unsecured, that physician risks exposing the practice to a large HIPAA-related fine, as well as to network intrusion. Not only that, any HIPAA problem that arises can blacken the reputation of a practice or hospital.

What's more, if clinicians use an unsecured public wireless network, their device could also acquire a malware infection which could cause harm to both the clinician and those who communicate with their device.

Ideally, it's probably best that physicians never use public Wi-Fi networks, given their security vulnerabilities. But if using Wi-Fi makes sense, one solution proposed by Cerrato is for physicians to access their organization's EMR via a Citrix app which creates a secure tunnel for information sharing.

As Cerrato points out, however, smaller practices with scant IT resources may not be able to afford deploying a secure Citrix solution. In that case, HHS recommends that such practices use a VPN to encrypt sensitive information being sent or received across the Wi-Fi network.

But establishing a VPN isn’t the whole story. In addition, clinicians will want to have the data on their mobile devices encrypted, to make sure it’s not readable if their device does get hacked. This is particularly important given that some data on their mobile devices comes from mobile apps whose security may not have been vetted adequately.
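In practice, at-rest encryption on phones and tablets is handled by the platform (or by a vetted AES-based library), not by hand-rolled code. Purely to illustrate the property that matters, that stored ciphertext is unreadable without the key, here is a toy one-time-pad sketch in Python; the record text and function names are invented for the example:

```python
import secrets

def encrypt_otp(plaintext: bytes) -> tuple:
    """Toy one-time pad: XOR each byte with a random key of equal length.
    Illustrative only -- real devices use platform AES encryption."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt_otp(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# A made-up record standing in for PHI stored on a device.
record = b"Patient: J. Doe, Rx: amoxicillin 500mg"
key, sealed = encrypt_otp(record)

# Without the key, the stored bytes are unreadable noise;
# with it, the record round-trips exactly.
assert sealed != record
assert decrypt_otp(key, sealed) == record
```

The takeaway is the same one that applies to a lost or hacked device: whoever holds only the ciphertext learns nothing, which is why device-level encryption matters even after transport is secured.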

Ideally, managing security for clinician devices will be integrated with a larger mobile device management strategy that also addresses BYOD, identity and access management issues. But for smaller organizations (notably small medical groups with no full-time IT manager on staff), simply making sure that clinicians’ exchange of patient information over Wi-Fi networks is secured is a good start.

KPMG: Most Business Associates Not Ready For Security Standards

Posted on October 17, 2016 I Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A new study by consulting firm KPMG has concluded that two-thirds of business associates aren’t completely ready to step up to industry demands for protecting patient health information. Specifically, the majority of business associates don’t seem to be ready to meet HITRUST standards for securing protected health information. Plus, it’s worth noting that HITRUST certification doesn’t mean your organization is HIPAA compliant or protected from a breach. It’s just a first step, and many aren’t taking it.

HITRUST has established a Common Security Framework which is used by healthcare organizations (as well as others that create, access, store or exchange sensitive and/or regulated data). The CSF includes a set of controls designed to harmonize the requirements of multiple regulations and standards.

According to KPMG’s Emily Frolick, third-party risk and assurance leader for KPMG’s healthcare practice, a growing number of healthcare organizations are asking their business associates to obtain a HITRUST CSF Certification or pass an SOC 2 + HITRUST CSF examination to demonstrate that they are making a good-faith effort to protect patient information. The CSF assessment is an internal control-based approach allowing organizations such as business associates to assess and demonstrate the measures they have taken to protect healthcare data.

To see if vendors targeting the healthcare industry seemed capable of meeting these standards, KPMG surveyed 600 professionals in this category to determine their organization’s security status. The survey found that half of those responding weren’t ready for HITRUST examination or certification, while 17.4% were planning for the CSF assessment.

When asked how they were progressing toward meeting HITRUST CSF requirements, just 7% said they were completely ready. Meanwhile, 8% said their organization was well along in its implementation process, and 17.4% said they were in the early stages of CSF implementation.

One of the biggest barriers to CSF readiness seems to be having adequate staff in place, ranking ahead of cultural, technological and financial concerns, KPMG found. When asked whether they had the staff in place to meet the standard, 53% said they did, but 47% said they did not have “the right staff [with] the right level [of] skills to execute against the HITRUST CSF.” That being said, 27% said all four factors were at issue. (Interestingly, 23% said “none of the above” posed barriers to CSF readiness.)

Readers won’t be surprised to learn that KPMG has reason to encourage vendors to seek the HITRUST cert and examination – specifically, that it works as a HITRUST Qualified CSF Assessor for healthcare organizations. Also, KPMG works with very large organizations which need to establish high levels of structure in how they evaluate their health data security measures. Hopefully this means they go well beyond what HITRUST requires.

Nonetheless, even if you work with a relatively small healthcare organization that doesn’t have the resources to pursue formal healthcare security certifications, this discussion serves as a good reminder. Particularly given that many breaches take place due to slips by business associates, it doesn’t hurt to take a close look at their security practices now and then. Even asking them some commonsense questions about how they and their contractors handle data is a good idea. After all, even if a business associate causes a breach of your data, you still have to explain the breach to your patients.