
What Would a Patient-Centered Security Program Look Like? (Part 2 of 2)

Posted on August 30, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous part of this article laid down a basic premise that the purpose of security is to protect people, not computer systems or data. Let’s continue our exploration of internal threats.

Security Starts at Home

Before we talk about firewalls and anomaly detection for breaches, let’s ask why hospitals, pharmacies, insurers, and others are free to spread the data from health care records by selling it (supposedly de-identified) to all manner of third parties, without patient consent or any benefit to the patient.

This is a policy issue that calls for involvement by a wide range of actors throughout society, of course. Policy-makers have apparently already decided that it is socially beneficial–or at least the most feasible course economically–for clinicians to share data with partners helping them with treatment, operations, or payment. There are even rules now requiring those partners to protect the data. Policy-makers have further decided that de-identified data sharing is beneficial to help researchers and even companies using it to sell more treatments. What no one admits is that de-identification lies on a slope–it is not an all-or-nothing guarantee of privacy. The more widely patient data is shared, the more risk there is that someone will break the protections, and that someone’s motivation will change from relatively benign goals such as marketing to something hostile to the patient.

Were HIMSS to take a patient-centered approach to privacy, it would also ask how credentials are handed out in health care institutions, and who has the right to view patient data. How do we minimize the chance of a Peeping Tom looking at a neighbor’s record? And what about segmentation of data, so that each clinician can see only what she needs for treatment? Segmentation has been justly criticized as impractical, but observers have been asking for it for years and there’s even an HL7 guide to segmentation. Even so, it hasn’t proceeded past the pilot stage.

Nor does it make sense to talk about security unless we talk about the rights of patients to get all their data. Accuracy is related to security, and this means allowing patients to make corrections. I don’t know which would be worse: perfectly secure records that are plain wrong in important places, or incorrect assertions being traded around the Internet.

Patients and the Cloud

HIMSS did not ask respondents whether they stored records at their own facilities or in third-party services. For a while, trust in the cloud seemed to enjoy rapid growth–from 9% in 2012 to 40% in 2013. Another HIMSS survey found that 44% of respondents used the cloud to host clinical applications and data–but that was back in 2014, so the percentage has probably increased since then. (Every survey measures different things, of course.)

But before we investigate clinicians’ use of third parties, we must consider taking patient data out of clinicians’ hands entirely and giving it back to patients. Patients will need security training of their own, under those conditions, and will probably use the cloud to avoid catastrophic data loss. The big advantage they have over clinicians, when it comes to avoiding breaches, is that their data will be less concentrated, making it harder for intruders to grab a million records at one blow. Plenty of companies offer personal health records with some impressive features for sharing and analytics. An open source solution called HEART, described in another article, is in the works.

There’s good reason to believe that data is safer in the cloud than on local, network-connected systems. For instance, many of the complex technologies mentioned by HIMSS (network monitoring, single sign-on, intrusion detection, and so on) are major configuration tasks for a local IT shop, but a cloud provider can offer them to its clients at the click of a button. More fundamentally, hospital IT staffs are burdened with a large set of tasks, of which security is one of the lowest-priority because it doesn’t generate revenue. In contrast, IT staff at cloud providers spend gobs of time keeping up to date on security. They may need extra training to understand the particular regulatory requirements of health care, but the basic ways of accessing data are the same in health care as in any other industry. Respondents to the HIMSS survey acknowledged that cloud systems had low vulnerability (p. 6).

There won’t be any more questions about encryption once patients have their data. When physicians want to see it, they will have to do so over an encrypted path. Even Edward Snowden unreservedly boasted, “Encryption works.”

Security is a way of behaving, not a set of technologies. That fundamental attitude was not addressed by the HIMSS survey, and might not be available through any survey. HIMSS treated security as a routine corporate function, not as a patient right. We might ask the health care field different questions if we returned to the basic goal of all this security, which is the dignity and safety of the patient.

We all know the health record system is broken, and the dismal state of security is one symptom of that failure. Before we invest large sums to prop up a bad record system, let’s re-evaluate security on the basis of a realistic and respectful understanding of the patients’ rights.

What Would a Patient-Centered Security Program Look Like? (Part 1 of 2)

Posted on August 29, 2016 | Written By

Andy Oram

HIMSS has just released its 2016 Cybersecurity Survey. I’m not writing this article just to say that the industry-wide situation is pretty bad. In fact, it would be worth hiring a truck with a megaphone to tour the city if the situation were good. What I want to do instead is take a critical look at the priorities as defined by HIMSS, and call for a different industry focus.

We should start off by dispelling notions that there’s anything especially bad about security in the health care industry. Breaches there get a lot of attention because they’re relatively new and because the personal sensitivity of the data strikes home with us. But the financial industry, which we all thought understood security, is no better–more than 500 million financial records were stolen during just a 12-month period ending in October 2014. Retailers are frequently breached. And what about one of the government institutions most responsible for maintaining personal data, the Office of Personnel Management?

The HIMSS report certainly appears comprehensive to a traditional security professional. It asks about important things–encryption, multi-factor authentication, intrusion detection, audits–and warns the industry of breaches caused by skimping on such things. But before we spend several billion dollars patching the existing system, let’s step back and ask what our priorities are.

People Come Before Technologies

One hint that HIMSS’s assumptions are skewed comes in the section of the survey that asked its respondents what motivated them to pursue greater security. The top motivation, at 76 percent, was a phishing attack (p. 6). In other words, what they noticed out in the field was not some technical breach but a social engineering attack on their staff. It was hard to interpret the text, but it appeared that the respondents had actually experienced these attacks. If so, it’s a reminder that your own staff is your first line of defense. It doesn’t matter how strong your encryption is if you give away your password.

It’s a long-held tenet of the security field that the most common source of breaches is internal: employees who were malicious themselves, or who mistakenly let intruders in through phishing attacks or other exploits. That’s why (you might notice) I don’t use the term “cybersecurity” in this article, even though it’s part of the title of the HIMSS report.

The security field has standardized ways of training staff to avoid scams. Explain to them the most common vectors of attack. Check that they’re creating strong passwords, where increased computing power is creating an escalating war (and the value of frequent password changes has been challenged). Better yet, use two-factor authentication, which may help you avoid the infuriating burden of passwords. Run mock phishing scams to test your users. Set up regular audits of access to sensitive data–a practice that HIMSS found among only 60% of respondents (p. 3). And give someone the job of actually checking the audit logs.
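To make those last two items concrete, here is a minimal sketch of what an automated pass over an access audit log might look like. The log format, user naming scheme, and role-to-action table are all invented for illustration; any real EHR would have its own schema.

```python
# Illustrative sketch only: a periodic audit of EHR access logs that flags
# actions exceeding a user's role. All names and formats are hypothetical.
from collections import defaultdict

# Each entry: (user_id, patient_id, action)
ACCESS_LOG = [
    ("dr_ray",   "pt_001", "view"),
    ("dr_ray",   "pt_002", "view"),
    ("clerk_05", "pt_001", "view"),
    ("clerk_05", "pt_003", "export"),  # clerks should not export records
]

# Hypothetical mapping of role prefixes to permitted actions.
PERMITTED = {"dr": {"view", "edit"}, "clerk": {"view"}}

def audit(log):
    """Return log entries whose action exceeds the user's role."""
    findings = defaultdict(list)
    for user, patient, action in log:
        role = user.split("_")[0]
        if action not in PERMITTED.get(role, set()):
            findings[user].append((patient, action))
    return dict(findings)

print(audit(ACCESS_LOG))  # clerk_05's export of pt_003 is flagged
```

Trivial as it is, a script like this only matters if someone is assigned to run it and read its output–which is exactly the human, not technical, gap the survey missed.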

Why didn’t HIMSS ask about most of these practices? It began the project with a technology focus instead of a human focus. We’ll take the reverse approach in the second part of this article.

OCR Cracking Down On Business Associate Security

Posted on May 13, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

For most patients, a data breach is a data breach. While it may make a big difference to a healthcare organization whether the source of a security vulnerability was outside its direct control, most consumers aren’t as picky. Once you have to disclose to them that their data has been hacked, they aren’t likely to be more forgiving just because one of your business associates was the source of the leak.

Just as importantly, federal regulators seem to be growing increasingly frustrated that healthcare organizations aren’t doing a good job of managing business associate security. It’s little wonder, given that about 20% of the 1,542 healthcare data breaches affecting 500 or more individuals reported since 2009 involve business associates. (This is probably a conservative estimate, as reports to OCR by covered entities don’t always mention the involvement of a business associate.)

To this point, the HHS Office for Civil Rights has recently issued a cyber-alert stressing the urgency of addressing these issues. The alert, which was issued by OCR earlier this month, noted that a “large percentage” of covered entities assume they will not be notified of security breaches or cyberattacks experienced by their business associates. That, folks, is pretty weak sauce.

Healthcare organizations also believe that it’s difficult to manage security incidents involving business associates, and nearly impossible to determine whether data safeguards and security policies and procedures at those business associates are adequate. Instead, it seems, many covered entities operate on the “keeping our fingers crossed” system, providing little or no business associate security oversight.

However, that is more than unwise, given that a number of major breaches have taken place because of oversights by business associates. For example, in 2011 information on 4.9 million individuals was exposed when unencrypted backup computer tapes were stolen from the car of a Science Applications International Corp. employee, who was transporting the tapes on behalf of the military health program TRICARE.

The solution to this problem is straightforward, if complex to implement, the alert suggests. “Covered entities and business associates should consider how they will confront a breach at their business associates or subcontractors,” and make detailed plans as to how they’ll address and report on security incidents among these groups, OCR suggests.

Of course, in theory business associates are required to put their own policies and procedures in place to prevent, detect, contain and correct security violations under HIPAA regs. But that will be no consolation if your data is exposed because you weren’t holding their feet to the fire.

Besides, OCR isn’t just sending out vaguely threatening emails. In March, OCR began Phase 2 of its HIPAA privacy and security audits of covered entities and business associates. These audits will “review the policies and procedures adopted and employed by covered entities and their business associates to meet selected standards and implementation specifications of the Privacy, Security, and Breach Notification Rules,” OCR said at the time.

Medical Device Security At A Crossroads

Posted on April 28, 2016 | Written By

Anne Zieger

As anyone reading this knows, connected medical devices are vulnerable to attacks from outside malware. Security researchers have been warning healthcare IT leaders for years that network-connected medical devices had poor security in place, ranging from image repository backups with no passwords to CT scanners with easily-changed configuration files, but far too many problems haven’t been addressed.

So why haven’t providers addressed the security problems? It may be because neither medical device manufacturers nor hospitals are set up to address these issues. “The reality is both sides — providers and manufacturers — do not understand how much the other side does not know,” said John Gomez, CEO of cybersecurity firm Sensato. “When I talk with manufacturers, they understand the need to do something, but they have never had to deal with cyber security before. It’s not a part of their DNA. And on the hospital side, they’re realizing that they’ve never had to lock these things down. In fact, medical devices have not even been part of the IT group at hospitals.”

Gomez, who spoke with Healthcare IT News, runs one of two companies backing a new initiative dedicated to securing medical devices and health organizations. (The other coordinating company is healthcare security firm Divurgent.)

Together, the two have launched the Medical Device Cybersecurity Task Force, which brings together a grab bag of industry players including hospitals, hospital technologists, medical device manufacturers, cyber security researchers and IT leaders. “We continually get asked by clients about the best practices for securing medical devices,” Gomez told Healthcare IT News. “There is little guidance and a lot of misinformation.”

The task force includes 15 health systems and hospitals, including Children’s Hospital of Atlanta, Lehigh Valley Health Network, Beebe Healthcare and Intermountain, along with tech vendors Renovo Solutions, VMware Inc. and AirWatch.

I mention this initiative not because I think it’s huge news, but rather, as a reminder that the time to act on medical device vulnerabilities is more than nigh. There’s a reason why the Federal Trade Commission, and the HHS Office of Inspector General, along with the IEEE, have launched their own initiatives to help medical device manufacturers boost cybersecurity. I believe we’re at a crossroads; on one side lies renewed faith in medical devices, and on the other nothing less than patient privacy violations, harm and even death.

It’s good to hear that the Task Force plans to create a set of best practices for both healthcare providers and medical device makers which will help get their cybersecurity practices up to snuff. Another interesting effort they have underway is the creation of an app which will help healthcare providers evaluate medical devices, while feeding a database that members can access to study the market.

But reading about their efforts also hammered home to me how much ground we have to cover in securing medical devices. Well-intentioned, even relatively effective, grassroots efforts are good, but they’re only a drop in the bucket. What we need is nothing less than a continuous knowledge feed between medical device makers, hospitals, clinics and clinicians.

And why not start by taking the obvious step of integrating the medical device and IT departments to some degree? That seems like a no-brainer. But unfortunately, the rest of the work to be done will take a lot of thought.

The Need for Speed (In Breach Protection)

Posted on April 26, 2016 | Written By

The following is a guest blog post by Robert Lord, Co-founder and CEO of Protenus.
The speed at which a hospital can detect a privacy breach could mean the difference between a brief, no-penalty notification and a multi-million dollar lawsuit.  This month it was reported that health information from 2,000 patients was exposed when a Texas hospital took four months to identify a data breach caused by an independent healthcare provider.  A health system in New York similarly took two months to determine that 2,500 patient records may have been exposed as a result of a phishing scam and potential breach reported two months prior.

The rise in reported breaches this year, from phishing scams to stolen patient information, only underscores the risk of lag times between breach detection and resolution. Why are lags of months and even years so common? And what can hospitals do to better prepare against threats that may reach the EHR layer?

Traditional compliance and breach detection tools are not nearly as effective as they need to be. The most widely used methods of detection involve either infrequent random audits or extensive manual searches through records following a patient complaint. For example, if a patient suspects that his medical record has been inappropriately accessed, a compliance officer must first review EMR data from the various systems involved. Armed with a highlighter (or a large Excel spreadsheet), the officer must then analyze thousands of rows of access data, and cross-reference this information with the officer’s implicit knowledge about the types of people who have permission to view that patient’s records. Finding an inconsistency – a person who accessed the records without permission – can take dozens of hours of menial work per case. Another issue with investigating breaches based on complaints is that there is often no evidence that the breach actually occurred. Nonetheless, the hospital is legally required to investigate all claims in a timely manner, and such investigations are costly and time-consuming.
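The highlighter exercise described above is, at its core, a join between an access log and a care-team roster, which makes the hours of manual work all the more striking. Here is an illustrative fragment showing that core check; the field names and the roster are invented, and a real investigation would have to pull this data from several systems first:

```python
# Hypothetical sketch of the cross-check a compliance officer does by hand:
# which accesses to one patient's record came from outside the care team?

def unexpected_accesses(access_rows, care_team):
    """Return rows whose user is not on the patient's documented care team."""
    return [row for row in access_rows if row["user"] not in care_team]

# Invented access-log rows for a single patient record.
access_rows = [
    {"user": "nurse_kim", "time": "2016-03-01T09:12", "record": "pt_417"},
    {"user": "dr_lee",    "time": "2016-03-01T10:03", "record": "pt_417"},
    {"user": "billing_9", "time": "2016-03-02T22:45", "record": "pt_417"},
]
care_team = {"nurse_kim", "dr_lee"}  # e.g., from scheduling/assignment data

for row in unexpected_accesses(access_rows, care_team):
    print(row["user"], row["time"])  # billing_9 2016-03-02T22:45
```

The hard part is not this loop; it is assembling a trustworthy care-team roster for each patient, which is exactly the "implicit knowledge" the officer currently supplies from memory.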

According to a study by the Ponemon Institute, it takes an average of 87 days from the time a breach occurs to the time the officer becomes aware of the problem, and, given the arduous task at hand, it then takes another 105 days for the officer to resolve the issue. In total, it takes approximately 6 months from the time a breach occurs to the time the issue is resolved. Additionally, if a data breach occurs but a patient does not notice, it could take months – or even years – for someone to discover the problem. And of course, the longer it takes the hospital to identify a problem, the higher the cost of identifying how the breach occurred and remediating the situation.

In 2013, Rouge Valley Centenary Hospital in Scarborough, Canada, revealed that the contact information of approximately 8,300 new mothers had been inappropriately accessed by two employees. Since 2009, the two employees had been selling the contact information of new mothers to a private company specializing in Registered Education Savings Plans (RESPs). Some of the patients later reported that days after coming home from the hospital with their newborn child, they started receiving calls from sales representatives at the private RESP company. Marketing representatives were extremely aggressive, and seemed to know the exact date of when their child had been born.

The most terrifying aspect of this story is how the hospital was able to find out about the data breach: remorse and human error! One employee voluntarily turned himself in, while the other accidentally left patient records on a printer. Had these two events not happened, the scam could have continued for much longer than the four years it did before it was finally discovered.

Rouge Valley Hospital is currently facing a $412 million lawsuit over this breach of privacy. Arguably even more damaging is that it has lost the trust of the patients who relied on the hospital for care and for the confidentiality of their medical treatments.

As exemplified by the ramifications of the Rouge Valley Hospital breach and the new breaches discovered almost weekly in hospitals around the world, the current tools used to detect privacy breaches in electronic health records are not sufficient. A system needs to have the ability to detect when employees are accessing information outside their clinical and administrative responsibilities. Had the Scarborough hospital known about the inappropriately viewed records the first time they had been accessed, they could have investigated earlier and protected the privacy of thousands of new mothers.

Every person who seeks a hospital’s care has the right to privacy and the protection of their medical information. However, due to the sheer volume of patient records accessed each day, it is impossible for compliance officers to efficiently detect breaches without new and practical tools. Current rule-based analytical systems often overburden the officers with alerts, and are only a minor improvement over manual detection methods.

We are in the midst of a paradigm shift, with hospitals taking a more proactive and layered approach to health data security. New technology that uses machine learning and big data science to review each access to medical records will replace traditional compliance technology and streamline threat detection and resolution cycles from months to a matter of minutes, making identifying a privacy breach or violation as simple and fast as the action that may have caused it in the first place. Understanding how to select and implement these next-generation tools will be a new and important challenge for the compliance officers of the future, but one that they can no longer afford to delay.
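As a toy illustration of the idea – not Protenus’s actual technology, and far simpler than anything production-grade – even a per-user statistical baseline can surface the kind of bulk access the Rouge Valley employees performed. All data here is fabricated:

```python
# Toy stand-in for ML-based access monitoring: flag users whose daily
# record-access count is far above their own historical baseline.
from statistics import mean, stdev

def flag_outliers(history, today, threshold=3.0):
    """Flag users whose count today exceeds their mean by > threshold std devs."""
    flagged = []
    for user, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma and (today.get(user, 0) - mu) / sigma > threshold:
            flagged.append(user)
    return flagged

# Fabricated daily access counts over the past week.
history = {
    "dr_ray":   [22, 25, 19, 24, 21],
    "clerk_05": [30, 28, 33, 31, 29],
}
today = {"dr_ray": 23, "clerk_05": 400}  # clerk_05 suddenly pulls 400 records

print(flag_outliers(history, today))  # ['clerk_05']
```

A real system would model clinical context – role, schedule, care relationships – rather than raw counts, but the shift from after-the-fact audits to continuous scoring of every access is the point.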

Protenus is a health data security platform that protects patient data in electronic medical records for some of the nation’s top-ranked hospitals. Using data science and machine learning, Protenus technology uniquely understands the clinical behavior and context of each user that is accessing patient data to determine the appropriateness of each action, elevating only true threats to patient privacy and health data security.

Are Ransomware Attacks A HIPAA Issue, Or Just Our Fault?

Posted on April 18, 2016 | Written By

Anne Zieger

With ransomware attacks hitting hospitals in growing numbers, it’s growing more urgent for healthcare organizations to have a routine and effective response to such attacks. While over the short term, providers are focused mostly on survival, eventually they’ll have to consider big-picture implications — and one of the biggest is whether a ransomware intrusion can be called a “breach” under federal law.

As readers know, providers must report any sizable breach to the HHS Office for Civil Rights. So far, though, it seems that the feds haven’t issued any guidance as to how they see this issue. However, people in the know have been talking about this, and here’s what they have to say.

David Holtzman, a former OCR official who now serves as vice president of compliance strategies at security firm CynergisTek, told Health Data Management that as long as the data was never compromised, a provider may be in the clear. If an organization can show OCR proof that no data was accessed, it may be able to avoid having the incident classed as a breach.

And some legal experts agree. Attorney David Harlow, who focuses on healthcare issues, told Forbes: “We need to remember that HIPAA is narrowly drawn and data breaches defined as the unauthorized ‘access, acquisition, use or disclosure’ of PHI. [And] in many cases, ransomware “wraps” PHI rather than breaches it.”

But as I see it, ransomware attacks should give health IT security pros pause even if they don’t have to report a breach to the federal government. After all, as Holtzman notes, the HIPAA security rule requires that providers put appropriate safeguards in place to ensure the confidentiality, integrity, and availability of ePHI. And fairly or not, any form of malware intrusion that succeeds raises questions about providers’ security policies and approaches.

What’s more, ransomware attacks may point to underlying weaknesses in the organization’s overall systems architecture. “Why is the operating system allowing this application to access this data?” asked one reader in comments on a related EMR and HIPAA post. “There should be no possible way for a database that is only read/write for specified applications to be modified by a foreign encryption application,” the reader noted. “The database should refuse the instruction, the OS should deny access, and the security system should lock the encryption application out.”
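The commenter’s architecture can be sketched in a few lines: the data layer itself enforces an allow-list of applications permitted to write, so a foreign encryption process is refused before it ever touches a record. The application names and store here are hypothetical, purely to show the shape of the idea:

```python
# Hedged sketch of an application allow-list at the data layer: only
# approved applications may modify records. All names are invented.

WRITE_ALLOWLIST = {"ehr_app", "lab_interface"}  # approved applications

class GuardedStore:
    def __init__(self):
        self._records = {}

    def write(self, app_id, key, value):
        """Refuse writes from any application not on the allow-list."""
        if app_id not in WRITE_ALLOWLIST:
            raise PermissionError(f"{app_id} may not modify records")
        self._records[key] = value

store = GuardedStore()
store.write("ehr_app", "pt_001", "encrypted-note")      # permitted
try:
    store.write("cryptolocker", "pt_001", "ransom-blob")  # refused
except PermissionError as e:
    print("blocked:", e)
```

In practice this enforcement would live in the OS or database permission model rather than application code, but the principle the reader described – deny by default, allow named writers – is the same.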

To be fair, not all intrusions are someone’s “fault.” Ransomware creators are innovating rapidly, and are arguably equipped to find new vectors of infection more quickly than security experts can track them. In fact, easy-to-deploy ransomware as a service is emerging, making it comparatively simple for less-skilled criminals to use. And they have a substantial incentive to do so. According to one report, one particularly sophisticated ransomware strain has brought $325 million in profits to groups deploying it.

Besides, downloading actual data is so five years ago. If you’re attacking a provider, extorting payment through ransomware is much easier than attempting to resell stolen healthcare data. Why go to all that trouble when you can get your cash up front?

Still, the reality is that healthcare organizations must be particularly careful when it comes to protecting patient privacy, both for ethical and regulatory reasons. Perhaps ransomware will be the jolt that pushes lagging players to step up and invest in security, as it creates a unique form of havoc that could easily put patient care at risk. I certainly hope so.

Securing Mobile Devices in Healthcare

Posted on February 8, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

This post is sponsored by Samsung Business. All thoughts and opinions are my own.

When you look at healthcare security on the whole, I think everyone would agree that healthcare has a lot of work to do. Just taking into account the top 5 health data breaches in 2015, approximately 30-35% of people in the US have had their health data breached. I’m afraid that in 2016 these numbers are likely going to get worse. Let me explain why I think this is the case.

First, meaningful use required healthcare organizations to do a HIPAA risk assessment. While many organizations didn’t really do a high quality HIPAA risk assessment, it still motivated a number of organizations to do something about privacy and security. Even if it wasn’t the step forward many would like, it was still a step forward.

Now that meaningful use is being replaced, what other incentive are doctors going to have to take a serious look at privacy and security? If 1/3 of patients having their records breached in 2015 isn’t motivating enough, what’s going to change in 2016?

Second, hackers are realizing the value of health data and the ease with which they can breach health data systems. Plus, with so many organizations going online with their EHR software and other healthcare IT software, these are all new targets for hackers to attack.

Third, while every doctor in healthcare has a mobile device, not that many of them access their EHR on that device, since many EHR vendors haven’t supported mobile devices very well. Over the next few years we’ll see EHR vendors finally produce high quality, native mobile apps that access EHR software. Once they do, not only will doctors be accessing patient data on their mobile devices, but so will nurses, lab staff, HIM, etc. While all of this mobility is great, it creates a whole new set of vulnerabilities that can be exploited if not secured properly.

I’m not sure what we can do to make organizations care about privacy and security, although once a breach happens they start to care. We’re also not going to be able to stem the tide of hackers interested in stealing health data. However, we can do something about securing the plethora of mobile devices in healthcare. In fact, it’s a travesty when we don’t, since mobile device security has become so much easier.

I remember in the early days of smartphones, there weren’t very many great enterprise tools to secure your smartphones. These days there are a ton of great options and many of them come natively from the vendor who provides you the phone. Many are even integrated into the phone’s hardware as well as software. A good example of this is the mobile security platform, Samsung KNOX™. Take a look at some of its features:

  • Separate Work and Personal Data (Great for BYOD)
  • Multi-layered Hardware and Software Security
  • Easy Mobile Device Management Integration
  • Enterprise Grade Security and Encryption

It wasn’t that long ago that we had to kludge together multiple solutions to achieve all of these things. Now they come in one nice, easy-to-implement package. The excuses for not securing mobile devices in healthcare should disappear. If a breach occurs in your organization because a mobile device wasn’t secured, I assure you those excuses will feel pretty hollow.

For more content like this, follow Samsung on Insights, Twitter, LinkedIn, YouTube and SlideShare.

Mobile Health Security Issues To Ponder In 2016

Posted on January 11, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

In some ways, mobile health security safeguards haven’t changed much for quite some time. Making sure that tablets and phones are protected against becoming easy network intrusion points is a given. Also seeing to it that such devices use strong passwords and encrypted data exchange whenever possible is a must.

But increasingly, as mobile apps become more tightly knit with enterprise infrastructure, there are more security issues to consider. After all, we’re increasingly talking about mission-critical apps which rely on ongoing access to sensitive enterprise networks. Now more than ever, enterprises must come up with strategies which control how data flows into the enterprise network. In other words, we’re not just talking about locking down the endpoints, but also seeing to it that powerful edge devices are treated like the vulnerable, hackable gateways they are.

To date, however, there’s still not a lot of well-accepted guidance out there spelling out what steps health organizations should take to ramp up their mobile security. For example, NIST has issued its “Securing Electronic Health Records On Mobile Devices” guideline, but it’s only a few months old and remains in draft form to date.

The truth is, the healthcare industry isn’t as aware of, or prepared for, the need for mobile healthcare data security as it should be. While healthcare organizations are gradually deploying, testing and rolling out new mobile platforms, securing them isn’t being given enough attention. What’s more, clinicians aren’t being given enough training to protect their mobile devices from hacks, which leaves some extremely valuable data open to the world.

Nonetheless, there are a few core approaches which can be torqued up to help protect mobile health data this year:

  • Encryption: Encrypting data in transit wasn’t invented yesterday, but it’s still worth checking to make sure your organization is doing so. Gregory Cave notes that data should be encrypted when communicated between the (mobile) application and the server, and he recommends that Web traffic be transmitted over a secure connection using only strong security protocols such as Transport Layer Security (the successor to the now-deprecated Secure Sockets Layer). This should also include encrypting data at rest.
  • Application hardening: Before your organization rolls out mobile applications, it’s best to see to it that security defects are detected and addressed before deployment. Application hardening tools — which protect code from hackers — can help protect mobile deployments, an especially important step for software placed on machines and in locations your organization doesn’t control. They employ techniques such as obfuscation, which hides code structure and flow within an application, making it hard for intruders to reverse engineer or tamper with the code.
  • Training staff: Regardless of how sophisticated your security systems are, they’re not going to do much good if your staff leaves the proverbial barn door open. As one security expert points out, healthcare organizations need to make staffers responsible for understanding what activities lead to breaches, or hackers will still find a toehold. “It’s like installing the most sophisticated security system in the world for your house, but not teaching the family how to use it,” said Grant Elliott, founder and CEO of risk management and compliance firm Ostendio.
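To make the encryption-in-transit point above concrete, here is a minimal sketch using Python’s standard `ssl` module: a client-side TLS context that verifies certificates and refuses legacy protocol versions. The version floor is an illustrative choice, not a prescription from the article.

```python
import ssl

# Build a client-side TLS context with secure defaults:
# certificate verification and hostname checking are enabled.
context = ssl.create_default_context()

# Refuse legacy protocol versions; require TLS 1.2 or newer.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# These are already the defaults from create_default_context(),
# shown explicitly because they are what make the connection safe.
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED
```

A context like this would then be passed to the HTTP layer (for example, `urllib.request.urlopen(url, context=context)`) so every call from the mobile app’s backend travels over a verified, modern TLS connection.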

In addition to these efforts, I’d argue that staffers need to truly understand what happens when security goes awry. Knowing that mistakes will upset some IT guy they’ve never met is one thing; understanding that a breach could cost millions and expose the whole organization to disrepute is a bit more memorable. Don’t just teach the security protocols, teach the costs of violating them. A little drama — such as the little old lady who lost her home due to PHI theft — speaks far more powerfully than facts and figures, don’t you agree?

Medical Device and Healthcare IT Security

Posted on December 21, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

In case you haven’t noticed, we’ve been starting to do a whole series of Healthcare Scene interviews on a new video platform called Blab. We also archive those videos to the Healthcare Scene YouTube channel. It’s been exciting to talk with so many smart people. I’m hoping in 2016 to average 1 interview a week with the top leaders in healthcare IT. Yes, 52 interviews in a year. It’s ambitious, but exciting.

My most recent interview was with Tony Giandomenico, a security expert at Fortinet, on healthcare IT security and medical device security. We had a wide-ranging conversation covering the various breaches in healthcare, why people want healthcare data, the value of that data, and some practical recommendations for organizations that want to do better at privacy and security. Check out the full interview below:

After every interview we do, we hold a Q&A after-party where we open up the floor to questions from the live audience. We even let those watching live hop on camera to ask questions and talk with our experts. This can be unpredictable, but it can also be a lot of fun. In this after-party we were lucky enough to have Tony’s colleague Aamir join us and extend the conversation. We also talked about the impact of a national patient identifier from a security and privacy perspective. Finally, we had a patient advocate join us and remind us all of the patient perspective when it comes to the loss of trust that happens when a healthcare organization doesn’t take privacy and security seriously. Enjoy the video below:

The Evolution of Encryption Infographic – Where’s Your Healthcare Organization?

Posted on October 23, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

It’s taken a while for health care to finally get on board with encryption, but it has basically become the standard for the industry. That includes encrypting devices like laptops and servers, but also encrypting health care data that’s being sent across the internet. I’ve sometimes called encryption the “get out of jail free” card when your laptop or other device is stolen. If the device is encrypted, the loss likely isn’t a reportable breach under HIPAA’s breach notification rule; if it’s not encrypted, you’re likely headed to the HHS wall of shame. Of course, there’s a lot more to HIPAA compliance than just encryption, but it’s a good start.
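To illustrate what at-rest encryption buys you in the stolen-laptop scenario, here is a minimal sketch using the Fernet recipe from the widely used Python `cryptography` package (an authenticated, AES-based scheme). The patient record is a made-up placeholder, and in practice the key would live in a key management service, never next to the data.

```python
from cryptography.fernet import Fernet

# Generate a key once and store it separately from the data
# (e.g., in a key management service) -- never alongside the ciphertext.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a record before it is written to the laptop's disk.
record = b"Patient: Jane Doe, MRN 000000"
token = cipher.encrypt(record)

# A thief who steals the disk sees only the opaque token;
# only the key holder can recover the plaintext.
plaintext = cipher.decrypt(token)
```

This is the property the “get out of jail free” card rests on: without the key, the ciphertext on the stolen device is unreadable, which is why properly encrypted data generally falls outside breach notification requirements.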

While health care has come a long way with encryption, we could still improve. This great Evolution of Encryption infographic from DataMotion illustrates how far encryption has come, but also how health care needs to continue to evolve its approach. Looking at the infographic, most of healthcare is in the 1990s-2000s, with a few still using 1991 technology. I don’t know many that have ubiquitous encryption (2015), but that’s where we’re headed.
The Evolution of Healthcare Encryption

What’s your organization’s approach to encryption? Where do you fall in this evolution? Where do your vendors fall on the scale?