
Security and Privacy Are Pushing Archiving of Legacy EHR Systems

Posted on September 21, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

In a recent McAfee Labs Threats Report, they said that “On average, a company detects 17 data loss incidents per day.” That stat is almost too hard to comprehend. No doubt it makes HIPAA compliance officers’ heads spin.

What’s even more disturbing from a healthcare perspective is that the report identifies hospitals as the easy targets for ransomware and that the attacks are relatively unsophisticated. Plus, one of the biggest healthcare security vulnerabilities is legacy systems. This is no surprise to me since I know so many healthcare organizations that set aside, forget about, or de-prioritize security when it comes to legacy systems. Legacy system security is the ticking time bomb of HIPAA compliance for most healthcare organizations.

In a recent EHR archiving infographic and archival whitepaper, Galen Healthcare Solutions highlighted that “50% of health systems are projected to be on second-generation technology by 2020.” From a technology perspective, we’re all saying that it’s about time we shift to next generation technology in healthcare. However, from a security and privacy perspective, this move is really scary. This means that 50% of health systems are going to have to secure legacy healthcare technology. If you take into account smaller IT systems, 100% of health systems have to manage (and secure) legacy technology.

Unlike other industries where you can decommission legacy systems, the same is not true in healthcare where Federal and State laws require retention of health data for lengthy periods of time. Galen Healthcare Solutions’ infographic offered this great chart to illustrate the legacy healthcare system retention requirements across the country:
[Chart: healthcare legacy system retention requirements across the country]

Every healthcare CIO better have a solid strategy for how they’re going to deal with legacy EHR and other health IT systems. This includes ensuring easy access to legacy data along with ensuring that the legacy system is secure.

While many health systems used to leave their legacy systems running off in the corner of their data center or on a random desk in their hospital, I’m seeing more and more healthcare organizations consolidate their EHR and health IT systems into some sort of healthcare data archive. Galen Healthcare Solutions has put together a really impressive whitepaper that dives into all the details associated with healthcare data archives.

There are a lot of advantages to healthcare data archives. It retains the data to meet record retention laws, provides easy access to the data by end users, and simplifies the security process since you then only have to secure one health data archive instead of multiple legacy systems. While some think that EHR data archiving is expensive, it turns out that the ROI is much better than you’d expect when you factor in the maintenance costs associated with legacy systems together with the security risks associated with these outdated systems and other compliance and access issues that come with legacy systems.

I have no doubt that as EHR vendors and health IT systems continue consolidating, we’re going to have an explosion of legacy EHR systems that need to be managed and dealt with by every healthcare organization. Those organizations that treat this lightly will likely pay the price when their legacy systems are breached and their organization is stuck in the news for all the wrong reasons.

Galen Healthcare Solutions is a sponsor of the Tackling EHR & EMR Transition Series of blog posts on Hospital EMR and EHR.

Will a Duo of AI and Machine Learning Catch Data Thieves Lurking in Hospital EHR Corridors?

Posted on September 19, 2016 | Written By

The following is a guest blog post by Santosh Varughese, President of Cognetyx, an organization devoted to using artificial intelligence and machine learning innovation to bring an end to the theft of patient medical data.
As Halloween approaches, the usual spate of horror movies will intrigue audiences across the US, replete with slashers named Jason or Freddie running amuck in the corridors of all too easily accessible hospitals. They grab a hospital gown and the zombies fit right in.  While this is just a movie you can turn off, the real horror of patient data theft can follow you.

(I know how terrible this type of crime can be. I myself have been the victim of a data theft by hackers who stole my deceased father’s medical files, running up more than $300,000 in false charges. I am still disputing ongoing bills that have been accruing for the last 15 years.)

Unfortunately, this horror movie scenario is similar to how data thefts often occur at medical facilities. In 2015, the healthcare industry was one of the top three hardest hit by serious data breaches and major attacks, along with government and manufacturing. Medical records are packed with a wealth of exploitable information such as credit card data, email addresses, Social Security numbers, employment information and medical histories, much of which remains valid for years, if not decades, and fetches a high price on the black market.

Who Are The Hackers?
It is commonly believed that attacks come from outside intruders looking to steal valuable patient data, and about 45 percent of hacks are indeed external. However, the “phantom” hackers are often your colleagues, employees and business associates who are unwittingly careless with passwords or lured by phishing schemes that open the door for data thieves. Not only is data stolen, but the accompanying privacy violations are insidious.

The problem is not only high-tech, but also low-tech, requiring that providers across the continuum simply become smarter about data protection and privacy issues. Medical facilities are finding they must teach doctors and nurses not to click on suspicious links.

For healthcare consultants, here is a great opportunity not only to help end this industry-wide problem, but to build up your client base by implementing new technologies that help medical facilities bring an end to data theft. With EHRs more vulnerable than ever before, CIOs and CISOs are looking for new solutions. These range from thwarting accidental and purposeful hackers through physical security procedures to securing network hardware and storage media through measures like maintaining a visitor log, installing security cameras, limiting physical access to server rooms, and restricting the ability to remove devices from secure areas.

Of course, enterprise solutions that cover the entire hospital system using new innovations are the best way to cast a digital safety net over all IT operations, leaving administrators and patients with a sense of security and safety.

Growing Nightmare
Medical data theft is a growing national nightmare.  IDC’s Health Insights group predicts that 1 in 3 healthcare recipients will be the victim of a medical data breach in 2016.  Other surveys found that in the last two years, 89% of healthcare organizations reported at least one data breach, with 79% reporting two or more breaches. The most commonly compromised data are medical records, followed by billing and insurance records. The average cost of a healthcare data breach is about $2.2 million.

At health insurer Anthem, Inc., foreign hackers stole up to 80 million records using social engineering to dig their way into the company’s network using the credentials of five tech workers. The hackers stole names, Social Security numbers and other sensitive information, but were thwarted when an Anthem computer system administrator discovered outsiders were using his own security credentials to log into the company system and to hack databases.

Investigators believe the hackers somehow compromised the tech worker’s security through a phishing scheme that tricked the employee into unknowingly revealing a password or downloading malicious software. Using this login information, they were able to access the company’s database and steal files.

Healthcare Hacks Spread Hospital Mayhem in Diabolical Ways
Not only is current patient data security an issue, but thieves can also drain the electronic economic blood from hospitals’ jugular vein: their IT systems. Hospitals increasingly rely on cloud delivery of big enterprise data from start-ups like iCare that can predict epidemics, cure disease, and avoid preventable deaths. They also add Personal Health Record apps to the system from fitness products like Fitbit and Jawbone.

Banner Health, operating 29 hospitals in Arizona, had to notify millions of individuals that their data was exposed. The breach began when hackers gained access to payment card processing systems at some of its food and beverage outlets. That apparently also opened the door to the attackers accessing a variety of healthcare-related information.

Because Banner Health says its breach began with an attack on payment systems, it differs from other recent hacker breaches. While payment system attacks have plagued the retail sector, they are almost unheard of among healthcare entities.

What makes this breach even more concerning is the question of how hackers accessed healthcare systems after breaching payment systems at food and beverage outlets, when these networks should be completely separated from one another. Healthcare system networks are very complex and become more complicated as other business functions are added to the infrastructure, even functions that don’t necessarily have anything to do with systems handling protected health information.

Who hasn’t heard of “ransomware”? One of the first widely reported attacks hit Hollywood Presbyterian Medical Center, which had its EHR and clinical information systems shut down for more than a week. The systems were restored after the hospital paid $17,000 in Bitcoin.

Will Data Thieves Also Rob Us of Advances in Healthcare Technology?
Is the data theft at MedStar Health, a major healthcare system in the DC region, a foreboding sign that an industry racing to digitize and interoperate EHRs is facing a new kind of security threat that it is ill-equipped to handle? Hospitals are focused on keeping patient data from falling into the wrong hands, but attacks at MedStar and other hospitals highlight an even more frightening downside of security breaches—as hospitals strive for IT interoperability. Is this goal now a concern?

As hospitals increasingly depend on EHRs and other IT systems to coordinate care, communicate critical health data and avoid medication errors, they could also be risking patients’ well-being when hackers strike. While chasing the latest medical innovations, healthcare facilities are rapidly learning that caring for patients also means protecting their medical records and technology systems against theft and privacy violations.

“We continue the struggle to integrate EHR systems,” says anesthesiologist Dr. Donald M. Voltz, Medical Director of the Main Operating Room at Aultman Hospital in Canton, OH, and an advocate and expert on EHR interoperability. “We can’t allow patient data theft and privacy violations to become an insurmountable problem and curtail the critical technology initiative of resolving health system interoperability. Billions have been pumped into this initiative and it can’t be risked.”

Taking Healthcare Security Seriously
Healthcare is an easy target. Its security systems tend to be less mature than those of other industries, such as finance and tech. Its doctors and nurses depend on data to perform time-sensitive and life-saving work.

Where a financial-services firm might spend a third of its budget on information technology, hospitals spend only about 2% to 3%. Healthcare providers are averaging less than 6% of their information technology budget expenditures on security, according to a recent HIMSS survey. In contrast, the federal government spends 16% of its IT budget on security, while financial and banking institutions spend 12% to 15%.

Meanwhile, the number of healthcare attacks over the last five years has increased 125%, as the industry has become an easy target. Personal health information is 50 times more valuable on the black market than financial information. Stolen patient health records can fetch as much as $363 per record.

“If you’re a hacker… would you go to Fidelity or an underfunded hospital?” says John Halamka, the chief information officer of Beth Israel Deaconess Medical Center in Boston. “You’re going to go where the money is and the safe is the easiest to open.”

Many healthcare executives believe that the healthcare industry is at greater risk of breaches than other industries. Despite these concerns, many organizations have either decreased their cyber security budgets or kept them the same. While the healthcare industry has traditionally spent a small fraction of its budget on cyber defense, it has also not shored up its technical systems against hackers.

Disrupting the Healthcare Security Industry with Behavior Analysis   
Common defenses for keeping patient data safe have included firewalls and keeping the organization’s operating systems, software, anti-virus packages and other protective solutions up to date. This task of constantly updating and patching security gaps is ongoing and will invariably be less than 100% effective at any given time. And with only about 10% of healthcare organizations having avoided a data breach, sophisticated hackers are clearly penetrating these perimeter defenses and winning the healthcare data security war. So it’s time for a disruption.

Many organizations employ network surveillance tactics to prevent the misuse of login credentials. These involve the use of behavior analysis, a technique that the financial industry uses to detect credit card fraud. By adding some leading innovation, behavior analysis can offer C-suite healthcare executives a cutting-edge, game-changing innovation.

The technology relies on the proven power of cloud technology to combine artificial intelligence with machine learning algorithms to create and deploy “digital fingerprints” using ambient cognitive cyber surveillance to cast a net over EHRs and other hospital data sanctuaries. It exposes user behavior deviations while accessing EHRs and other applications with PHI that humans would miss and can not only augment current defenses against outside hackers and malicious insiders, but also flag problem employees who continually violate cyber security policy.

“Hospitals have been hit hard by data theft,” said Doug Brown, CEO, Black Book Research. “It is time for them to consider new IT security initiatives. Harnessing machine learning artificial intelligence is a smart way to sort through large amounts of data. When you unleash that technology collaboration, combined with existing cloud resources, the security parameters you build for detecting user pattern anomalies will be difficult to defeat.”

While the technology is advanced, the concept is simple. A pattern of user behavior is established, and any actions that deviate from that behavior, such as logging in from a new location or accessing a part of the system the user normally doesn’t access, are flagged. Depending on the deviation, the user may be required to provide further authentication to continue or may be forbidden from proceeding until a system administrator can investigate the issue.
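The concept can be sketched in a few lines of code. This is a deliberately simple illustration of the baseline-and-deviation idea, not any vendor’s implementation; the user names, locations, and system areas are all hypothetical, and a production system would learn statistical patterns rather than exact matches:

```python
# Minimal sketch of behavior-based anomaly flagging (illustrative only).
# A per-user baseline of observed (location, system area) pairs is kept;
# any access event outside that baseline triggers step-up authentication.
from collections import defaultdict

class BehaviorMonitor:
    def __init__(self):
        # user -> set of (location, area) pairs seen during normal use
        self.baseline = defaultdict(set)

    def learn(self, user, location, area):
        """Record an event as part of the user's normal pattern."""
        self.baseline[user].add((location, area))

    def check(self, user, location, area):
        """Return 'allow' for known behavior, 'step-up-auth' for deviations."""
        if (location, area) in self.baseline[user]:
            return "allow"
        return "step-up-auth"

monitor = BehaviorMonitor()
monitor.learn("nurse01", "ward-3-workstation", "medication-orders")

print(monitor.check("nurse01", "ward-3-workstation", "medication-orders"))  # allow
print(monitor.check("nurse01", "remote-vpn", "billing-records"))            # step-up-auth
```

Real deployments replace the exact-match baseline with learned probability models, which is where the machine learning comes in, but the decision flow is the same: compare the event to the profile, then allow, challenge, or block.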

The cost of this technology will be positively impacted by the continuing decline in the cost of storage and processing power from cloud computing giants such as Amazon Web Services, Microsoft and Alphabet.

The healthcare data security war can be won, but it will require action and commitment from the industry. In addition to allocating adequate human and monetary resources to information security and training employees on best practices, the industry would do well to implement network surveillance that includes behavior analysis. It is the single best technological defense against the misuse of medical facility systems and the most powerful weapon the healthcare industry has in its war against cyber criminals.

Mobile Health App Makers Still Shaky On Privacy Policies

Posted on September 16, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A new study has concluded that while mobile health app developers are developing better privacy practices, these developers vary widely in how they share those policies with consumers. The research, part of a program launched in 2011 by the Future of Privacy Forum, concludes that while mHealth app makers have improved their practices, too many are still not as clear as they could be with users as to how they handle private health information.

This year’s FPF Mobile App Study notes that mHealth players are working to make privacy policies available to users before purchase or download, by posting links on the app listing page. It probably has helped that the two major mobile health app distribution sites require apps that collect personal info to have a privacy policy in place, but consumer and government pressure has played a role as well, the report said. According to FPF researchers, mHealth app makers are beginning to explain how personal data is collected, used and shared, a step privacy advocates see as the bare minimum standard.

Researchers found that this year, 76% of top overall apps on the iOS App Store and Google Play had a privacy policy, up from 68% noted in the previous iteration of the study. In contrast, only 61% of health and fitness apps surveyed this year included a link to their privacy policies in their app store listing, 10% less than among top apps cutting across all categories.  “Given that some health and fitness apps can access sensitive, physiological data collected by sensors on a mobile phone, wearable, or other device, their below-average performance is both unexpected and troubling,” the report noted.

This disquieting lack of thorough privacy protections extended even to apps collecting some of the most intimate data, the FPF report pointed out. In particular, a subset of mHealth developers aren’t doing anything much to make their policies accessible.

For example, researchers found that while 80% of apps helping women track periods and fertility across Google Play and the iOS App Store had privacy policies, just 63% of those apps had posted links to the policies. In another niche, sleep tracking apps, only 66% even had a privacy policy in place, and just 54% linked to the policy on their store page. (FPF terms this level of performance “dismal,” and it’s hard to disagree.)

Underlying this analysis is the unfortunate truth that there’s still no gold standard for mHealth privacy policies. This may be due more to the complexity of the still-maturing mobile health ecosystem than resistance to creating robust policies, certainly. But either way, this issue won’t go away on its own, so mHealth app developers will need to give their privacy strategy more thought.

What Would a Patient-Centered Security Program Look Like? (Part 2 of 2)

Posted on August 30, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous part of this article laid down a basic premise that the purpose of security is to protect people, not computer systems or data. Let’s continue our exploration of internal threats.

Security Starts at Home

Before we talk about firewalls and anomaly detection for breaches, let’s ask why hospitals, pharmacies, insurers, and others can spread the data from health care records on their own by selling this data (supposedly de-identified) to all manner of third parties, without patient consent or any benefit to the patient.

This is a policy issue that calls for involvement by a wide range of actors throughout society, of course. Policy-makers have apparently already decided that it is socially beneficial–or at least the most feasible course economically–for clinicians to share data with partners helping them with treatment, operations, or payment. There are even rules now requiring those partners to protect the data. Policy-makers have further decided that de-identified data sharing is beneficial to help researchers and even companies using it to sell more treatments. What no one admits is that de-identification lies on a slope–it is not an all-or-nothing guarantee of privacy. The more widely patient data is shared, the more risk there is that someone will break the protections, and that someone’s motivation will change from relatively benign goals such as marketing to something hostile to the patient.

Were HIMSS to take a patient-centered approach to privacy, it would also ask how credentials are handed out in health care institutions, and who has the right to view patient data. How do we minimize the chance of a Peeping Tom looking at a neighbor’s record? And what about segmentation of data, so that each clinician can see only what she needs for treatment? Segmentation has been justly criticized as impractical, but observers have been asking for it for years and there’s even an HL7 guide to segmentation. Even so, it hasn’t proceeded past the pilot stage.
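To picture what segmentation means in practice, here is a minimal sketch of field-level access by role. The roles, record sections, and field names are invented for illustration; real segmentation (as in the HL7 guide mentioned above) works on coded clinical categories and consent directives, not a hard-coded dictionary:

```python
# Illustrative sketch of record segmentation: each role sees only the
# record sections it needs. Roles and section names are hypothetical.
ROLE_SECTIONS = {
    "anesthesiologist": {"allergies", "medications", "vitals"},
    "billing_clerk": {"insurance", "procedure_codes"},
}

def segmented_view(record, role):
    """Return only the sections of a patient record visible to a role."""
    allowed = ROLE_SECTIONS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "allergies": ["penicillin"],
    "medications": ["metformin"],
    "vitals": {"bp": "120/80"},
    "insurance": "Acme Health",
    "procedure_codes": ["99213"],
    "psych_notes": "...",  # visible to neither role in this sketch
}

print(segmented_view(record, "billing_clerk"))
# {'insurance': 'Acme Health', 'procedure_codes': ['99213']}
```

The hard part, and the reason segmentation is criticized as impractical, is not this filtering step but deciding and maintaining who legitimately needs what, across thousands of users and overlapping care relationships.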

Nor does it make sense to talk about security unless we talk about the rights of patients to get all their data. Accuracy is related to security, and this means allowing patients to make corrections. I don’t know what I think would be worse: perfectly secure records that are plain wrong in important places, or incorrect assertions being traded around the Internet.

Patients and the Cloud

HIMSS did not ask respondents whether they stored records at their own facilities or in third-party services. For a while, trust in the cloud seemed to enjoy rapid growth–from 9% in 2012 to 40% in 2013. Another HIMSS survey found that 44% of respondents used the cloud to host clinical applications and data–but that was back in 2014, so the percentage has probably increased since then. (Every survey measures different things, of course.)

But before we investigate clinicians’ use of third parties, we must consider taking patient data out of clinicians’ hands entirely and giving it back to patients. Patients will need security training of their own, under those conditions, and will probably use the cloud to avoid catastrophic data loss. The big advantage they have over clinicians, when it comes to avoiding breaches, is that their data will be less concentrated, making it harder for intruders to grab a million records at one blow. Plenty of companies offer personal health records with some impressive features for sharing and analytics. An open source solution called HEART, described in another article, is in the works.

There’s good reason to believe that data is safer in the cloud than on local, network-connected systems. For instance, many of the complex technologies mentioned by HIMSS (network monitoring, single sign on, intrusion detection, and so on) are major configuration tasks that a cloud provider can give to its clients with a click of a button. More fundamentally, hospital IT staffs are burdened with a large set of tasks, of which security is one of the lowest-priority because it doesn’t generate revenue. In contrast, IT staff at the cloud environment spend gobs of time keeping up to date on security. They may need extra training to understand the particular regulatory requirements of health care, but the basic ways of accessing data are the same in health care as any other industry. Respondents to the HIMSS survey acknowledged that cloud systems had low vulnerability (p. 6).

There won’t be any more questions about encryption once patients have their data. When physicians want to see it, they will have to do so over an encrypted path. Even Edward Snowden unreservedly boasted, “Encryption works.”

Security is a way of behaving, not a set of technologies. That fundamental attitude was not addressed by the HIMSS survey, and might not be available through any survey. HIMSS treated security as a routine corporate function, not as a patient right. We might ask the health care field different questions if we returned to the basic goal of all this security, which is the dignity and safety of the patient.

We all know the health record system is broken, and the dismal state of security is one symptom of that failure. Before we invest large sums to prop up a bad record system, let’s re-evaluate security on the basis of a realistic and respectful understanding of the patients’ rights.

2.7 Million Reasons Cloud Vendors and Data Centers ARE HIPAA Business Associates

Posted on July 25, 2016 | Written By

The following is a guest blog post by Mike Semel, President of Semel Consulting.
Some cloud service providers and data centers have been in denial that they are HIPAA Business Associates. They refuse to sign Business Associate Agreements and comply with HIPAA.

Their excuses:

“We don’t have access to the data so we aren’t a HIPAA Business Associate.”

“The data is encrypted so we aren’t a HIPAA Business Associate.”

Cloud and hosted phone vendors claim “We are a conduit where the data just passes through us temporarily so we aren’t a HIPAA Business Associate.”

“We tell people not to store PHI in our cloud so we aren’t a HIPAA Business Associate.”

Wrong. Wrong. Wrong. And Wrong.

2.7 million reasons Wrong.
Oregon Health & Science University (OHSU) just paid $2.7 million to settle a series of HIPAA data breaches “including the storage of the electronic protected health information (ePHI) of over 3,000 individuals on a cloud-based server without a business associate agreement.”

Another recent penalty cost a medical practice $750,000 for sharing PHI with a vendor without having a Business Associate Agreement in place.

The 2013 changes to HIPAA published in the Federal Register (with our emphasis) state that:

“…we have modified the definition of “business associate” to generally provide that a business associate includes a person who “creates, receives, maintains, or transmits” protected health information on behalf of a covered entity.

…an entity that maintains protected health information on behalf of a covered entity is a business associate and not a conduit, even if the entity does not actually view the protected health information.  We recognize that in both situations, the entity providing the service to the covered entity has the opportunity to access the protected health information.  However, the difference between the two situations is the transient versus persistent nature of that opportunity.  For example, a data storage company that has access to protected health information (whether digital or hard copy) qualifies as a business associate, even if the entity does not view the information or only does so on a random or infrequent basis.” 

A cloud service doesn’t need access to PHI to be a Business Associate – it just needs to manage or store it. They must secure PHI and sign Business Associate Agreements.

The free, consumer-grade versions of DropBox and Google Drive are not HIPAA compliant. But, the fee-based cloud services, that utilize higher levels of security and for which the vendor will sign a Business Associate Agreement, are OK to use. DropBox Business and Google Apps cost more but provide both security and HIPAA compliance. Make sure you select the right service for PHI.
Encryption
Encryption is a great way to protect health information, because the data is secure and the HIPAA Breach Notification Rule says that encrypted data that is lost or stolen is not a reportable breach.

However, encrypting data is not an exemption to being a Business Associate. Besides, many cloud vendors that deny they have access to encrypted data really do.

I know because I was the Chief Operating Officer for a cloud backup company. We told everyone that the client data was encrypted and we could not access it. The problem was that when someone had trouble recovering their data, the first thing our support team asked for were the encryption keys so we could help them. For medical clients that gave us access to unencrypted PHI.

I also know of situations where data was supposed to be encrypted but, because of human error, made it to the cloud unencrypted.
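Taken together, the safe-harbor logic above can be captured in one rule of thumb. This is an illustrative sketch of the reasoning, not legal guidance, and the function and parameter names are made up:

```python
# Sketch of the breach-notification reasoning described above:
# lost or stolen data generally triggers notification only when the PHI
# was unencrypted, or when the encryption key was compromised too
# (as when a vendor holds the keys, or human error skipped encryption).
def breach_is_reportable(data_was_encrypted: bool, key_also_lost: bool = False) -> bool:
    """Encrypted data that is lost or stolen is not a reportable breach,
    unless the key protecting it was exposed along with the data."""
    return (not data_was_encrypted) or key_also_lost

print(breach_is_reportable(data_was_encrypted=False))                      # True
print(breach_is_reportable(data_was_encrypted=True))                       # False
print(breach_is_reportable(data_was_encrypted=True, key_also_lost=True))   # True
```

The `key_also_lost` branch is exactly why the “we can’t access it” claim matters: a vendor whose support team routinely asks for encryption keys can’t rely on the encryption safe harbor.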

Simply remembering that Business Associates are covered in the HIPAA Privacy Rule while encryption is discussed in the Breach Notification Rule is an easy way to understand that encryption doesn’t cancel out a vendor’s status as a Business Associate.
Data Centers
A “business associate” also is a subcontractor that creates, receives, maintains, or transmits protected health information on behalf of another business associate.

Taken together, a cloud vendor that stores PHI, and the data centers that house servers and storage devices, are all HIPAA Business Associates. If you have your own servers containing PHI in a rack at a data center, that makes the data center a HIPAA Business Associate. If you use a cloud service for offsite backups, or file sharing, they and their data centers are Business Associates.

Most data centers offer ‘Network Operations Center (NOC) services,’ an on-site IT department that can go to a server rack to perform services, so you don’t have to travel (sometimes across the country) to fix a problem.  A data center manager was denying they had access to the servers locked in racks and cages, while we watched his NOC services technician open a locked rack to restart a client server.

Our client, who had its servers containing thousands of patient records housed in that data center, used the on-site NOC services when their servers needed maintenance or just to be manually restarted.
Cloud-Based and Hosted Phone Services
In the old days, a voice message left on a phone system was not tied to computers. Faxes were paper-in and paper-out between two fax machines.

HIPAA defines a conduit as a business that simply passes PHI and ePHI through its systems, like the post office, FedEx, UPS, phone companies, and Internet Service Providers that simply transport data and never store it. Paper-based faxing was exempt from HIPAA.

One way the world has changed is that Voice Over Internet Protocol (VOIP) systems, that are local or cloud-based, convert voice messages containing PHI into data files, which can then be stored for access through a portal, phone, or mobile device, or are attached to an e-mail.

Another change is that faxing PHI is now the creation of an image file, which is then transmitted through a fax number to a computer system that stores it for access through a portal, or attaches it to an e-mail.

Going back to the Federal Register statement that it is the persistence of storage that is the qualifier to be a Business Associate, the fact that the data files containing PHI are stored at the phone service means that the vendor is a Business Associate. It doesn’t matter that the PHI started out as voice messages or faxes.

RingCentral is one hosted phone vendor that now offers a HIPAA-compliant phone solution. It encrypts voice and fax files during transit and when stored, and RingCentral will sign a Business Associate Agreement.

Don’t Store PHI With Us
Telling clients not to store PHI, or stating that they are not allowed to do so in the fine print of an agreement or on a website, is just a wink-wink-nod-nod way of a cloud service or data center denying they are a Business Associate even though they know they are maintaining PHI.

Even if they refuse to work with medical clients, there are so many other types of organizations that are HIPAA Business Associates – malpractice defense law firms, accounting firms, billing companies, collections companies, insurance agents – they may as well give it up and just comply with HIPAA.

If they don’t, it can cost their clients dearly in an audit or a breach investigation.

Don’t let that be you!

About Mike Semel
Mike Semel is the President of Semel Consulting, which specializes in healthcare and financial regulatory compliance, and business continuity planning.

Mike is a Certified Security Compliance Specialist, has multiple HIPAA certifications, and has authored HIPAA courseware. He has been an MSP, and the CIO for a hospital and a K-12 school district. Mike helped develop the CompTIA Security Trustmark and coaches companies preparing for the certification.

Semel Consulting conducts HIPAA workshops for MSPs and has a referrals program for partners. Visit www.semelconsulting.com for more info.

An Alternate Way Of Authenticating Patients

Posted on July 5, 2016 I Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Lately, I’ve been experimenting with a security app I downloaded to my Android phone. The app, True Key by Intel Security, allows you to log in by presenting your face for a scan or using your fingerprint. Once inside the app, you can access your preferred apps with a single click, as it stores your user name and passwords securely. Next, I simplified things further by downloading the app to my laptop and tablet, which synchs up whatever access info I enter across all devices.

From what I can see, Intel is positioning this as a direct-to-consumer play. The True Key documentation describes the app as a tool non-techies can use to access sites easily, store passwords securely and visit their favorite sites across all of their devices without re-entering authentication data. But I’m intrigued by the app’s potential for enterprise healthcare security access control.

Right now, there are serious flaws in the way application access is managed. As things stand, authentication information is usually stored in the same network infrastructure as the applications themselves, at least on a high-level basis. So the process goes like this, more or less: Untrusted device uses untrusted app to access a secure system. The secure system requests credentials from the device user, verifies them against an ID/PW database and if they are correct, logs them in.

Of course, there are alternatives to this approach, ranging from biometric-only access to instantly generated, always-unique passwords, but few organizations have the resources to maintain super-advanced access protocols. So in reality, most enterprises have to firewall up their security and authentication databases and pray that those resources don’t get hacked. Theoretically, institutions might be able to create another hacking speed bump by storing authentication information in the cloud, but that obviously raises a host of additional security questions.
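To make the stakes concrete, here is a minimal sketch (names illustrative) of the conventional flow described above: credentials verified against a stored ID/password database. Even with salted, stretched hashes, a stolen copy of that table hands an attacker everything needed for offline cracking, which is the single point of failure being criticized here.

```python
import hashlib
import hmac
import os

# Minimal sketch of the conventional server-side flow: a password is
# stretched with PBKDF2 and the (salt, digest) pair sits in the central
# database. If that database is exfiltrated, every account is exposed to
# offline guessing -- the weakness the article points at.

def make_record(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest  # this pair is what a breach exposes

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)
```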

So here’s an idea. What if health IT organizations demanded that users install biometrically-locked apps like True Key on their devices? Then, enterprise HIT software could authenticate users at the device level – surely a possibility given that devices have unique IDs – and let users maintain password security at their end. That way, if an enterprise system was hacked, the attacker could gain access to device information, but wouldn’t have immediate access to a massive ID and PW database that gave them access to all system resources.
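A rough sketch of that alternative, with all names hypothetical: the server registers a per-device key and verifies a signed challenge, while the biometric check stays on the device. Real deployments would use asymmetric keys along the lines of FIDO2/WebAuthn; symmetric HMAC stands in here only to keep the example self-contained.

```python
import hashlib
import hmac
import secrets

# Hedged sketch of device-level authentication: the server holds only a
# per-device key, and the user's biometric never leaves the device. A
# breached server yields device keys for revocation, not a reusable
# ID/password table covering every system resource.

registered_devices = {}  # device_id -> device key (server side)

def register(device_id: str) -> bytes:
    key = secrets.token_bytes(32)
    registered_devices[device_id] = key
    return key  # provisioned into the device's secure storage

def issue_challenge() -> bytes:
    return secrets.token_bytes(16)

def device_sign(device_key: bytes, challenge: bytes) -> bytes:
    # Runs on the device, only after the local biometric unlock succeeds.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(device_id: str, challenge: bytes, response: bytes) -> bool:
    key = registered_devices.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```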

What I’m getting at, here, is that I believe healthcare organizations should maintain relationships with patients (as represented by their unique devices) rather than with an ID and password. While no form of identity verification is perfect, it seems a lot more likely that it’s really me logging in if I had to use my facial features or fingerprint as an entry point. After all, virtually any ID/PW pair chosen by a user can be guessed or hacked, but if you authenticate against my face or fingerprint on a registered device, the odds are high that you’re getting me.

So now it’s your turn, readers. What flaws do you see in this approach? Have you run into other apps that might serve this purpose better than True Key? Should HIT vendors create these apps? Have at it.

Are You Ready for Stage 2 HIPAA Audits?

Posted on June 27, 2016 I Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Many organizations probably didn’t even realize that OCR (HHS’ department in charge of HIPAA) had put in place HIPAA audits since the pilot program only audited 115 covered entities. That’s likely to change for a lot more healthcare organizations (including business associates) as Stage 2 HIPAA Audits are rolled out. Is your organization ready for a HIPAA Audit?

After spending about two months scouring the Phase 2 HIPAA audit protocol, HIPAA One put together a great comparison of the relatively simple Phase 1 HIPAA audits and the much more detailed Phase 2 audits:

What it was – Phase 1 of the OCR’s Privacy, Security and Breach Notification Audit Program:
  1. HITECH added Breach Notification to HIPAA and endorsed the OCR‘s Audit Program.
  2. Contained 169 total protocols.
  3. Pilot program included 115 covered entities.
What it is now – the HIPAA Audit Program-Phase 2:
  1. OCR is implementing Phase 2 to include both CEs and business associates (every covered entity and business associate is eligible for an audit).
  2. Provides an opportunity for the OCR to identify best practices, risks and issues before they result in bigger problems (e.g. resulting in a breach) through the expanded random audit program.
  3. 180 Enhanced protocols (groups of instructions) which contain the following updates:
    1. Privacy – 708 updates (individual lines of instructions)
      1. Most notable changes are more policies and procedures surrounding the HIPAA Privacy Officer as well as some changes for Health Plans and Business Associates.
    2. Security – 880 updates (individual lines of instructions)
      1. Most notable changes are that Health Plans must have assurances from their plan sponsors and all companies now have to get proof of HIPAA compliance from their business associates, vendors and subcontractors.

That’s a lot of changes that are going to impact a lot of organizations. How many organizations have spent the time seeing which of these changes are going to impact their organization? I’m sure the answer to that is not many since “ignorance is bliss” is the mantra of many healthcare organizations when it comes to HIPAA compliance.

Particularly interesting is that HIPAA One points out that many of the checklists, books, commercial compliance software, and even ONC’s own SRA tool are likely outdated for these new changes to the HIPAA audit protocol. They’re probably right, so make sure whatever tool you’re using to do a HIPAA SRA takes into account the new HIPAA audit protocol.

Just so we’re clear, there actually hasn’t been a change to the HIPAA Omnibus update in 2013. However, the HIPAA audit protocol clarifies how the HIPAA law will be interpreted during an audit. That means that many of the gray areas in the law have been clarified through the audit protocol.

In HIPAA One’s blog post, they outline some important next steps for healthcare organizations. I won’t replicate it here, but go and check it out if you’re a HIPAA compliance officer for your organization or forward it to your HIPAA compliance officer if you’re not. The first suggestion is a really key one since you want to make sure you’re getting your HIPAA audit emails from OCR.

It’s taken HHS and OCR a while to roll out the full HIPAA audit program. However, it’s fully functioning now and I expect 2016 will be a real wake-up call for many organizations that aren’t prepared for a HIPAA audit. Plus, many others will be woken up when their friends fail their HIPAA audit.

Is your organization ready for a HIPAA audit?

Full Disclosure: HIPAA One is an advertiser on Healthcare Scene.

NFL Players’ Medical Records Stolen

Posted on June 21, 2016 I Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I’d been meaning to write about this story for a while now, but finally got around to it. In case you missed it, Thousands of NFL players’ medical records were stolen. Here’s a piece of the DeadSpin summary of the incident:

In late April, the NFL informed its players, a Skins athletic trainer’s car was broken into. The thief took a backpack, and inside that backpack was a cache of electronic and paper medical records for thousands of players, including NFL Combine attendees from the last 13 years. That would encompass the vast majority of NFL players.

The Redskins later issued this statement:

The Washington Redskins can confirm that a theft occurred mid-morning on April 15 in downtown Indianapolis, where a thief broke through the window of an athletic trainer’s locked car. No social security numbers, Protected Health Information (PHI) under HIPAA, or financial information were stolen or are at risk of exposure.

The laptop was password-protected but unencrypted, but we have no reason to believe the laptop password was compromised. The NFL’s electronic medical records system was not impacted.

It’s interesting that the Redskins said the stolen data didn’t include any PHI that would be covered by HIPAA rules and regulations. I wondered how HIPAA would apply to an NFL team, so I reached out to David Harlow, Health Blawg writer, who offered these insights into whether NFL records are required to comply with HIPAA:

These records fall in a gray zone between employment records and health records. Clearly the NFL understands what’s at stake if, as reported, they’ve proactively reached out to the HIPAA police. At least one federal court is on record in a similar case saying, essentially, C’mon, you know you’re a covered entity; get with the program.

Michael Magrath, current Chairman, HIMSS Identity Management Task Force, and Director of Healthcare Business, VASCO Data Security offered this insight into the breach:

This is a clear example that healthcare breaches are not isolated to healthcare organizations. They apply to employers, including the National Football League. Teams secure and protect their playbooks and need to apply that philosophy to securing their players’ medical information.

Laptop thefts are commonplace and among the most common entries (310 incidents) on the HHS Office for Civil Rights portal listing Breaches Affecting 500 or More Individuals. Encryption is one of the basic requirements to secure a laptop, yet organizations continue to gamble without it, and innocent victims can face a lifetime of identity theft and medical identity theft.

Assuming the laptop was Windows-based, security could have been enhanced by replacing the static Windows password with two-factor authentication in the form of a one-time password. Without the authenticator that generates the one-time password, gaining entry to the laptop would be extremely difficult. Had encryption been combined with strong authentication, the players’ and prospects’ protected health information would not have been at risk. All of this risk is taken on because organizations and their members want to avoid a few moments of inconvenience.
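For readers unfamiliar with the mechanism Magrath describes, the standard time-based one-time password algorithm (TOTP, RFC 6238) is small enough to sketch. The authenticator and the protected system share a secret and independently derive a short code from the current 30-second window, so a static password alone is never enough:

```python
import base64
import hashlib
import hmac
import struct
import time

# Sketch of RFC 6238 TOTP: HMAC the current 30-second counter with a
# shared secret, dynamically truncate the digest, and keep 6 digits.

def totp(secret_b32: str, for_time=None, step: int = 30) -> str:
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"
```

Against the RFC 6238 test secret (the ASCII string “12345678901234567890”, base32-encoded), the code at T=59 seconds is 287082, matching the published test vectors.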

This story brings up some important points. First, healthcare is far from the only industry that has issues with breaches and things like stolen or lost laptops. Second, healthcare isn’t the only industry that sees the importance of encrypting mobile devices; despite that importance, though, many organizations still aren’t doing so. Third, HIPAA is an interesting law since it only covers PHI held by covered entities. HIPAA Omnibus expanded that to business associates. However, there are still a bunch of grey areas where it’s unclear whether HIPAA applies. Plus, there are a lot of white areas where your health information is stored and HIPAA doesn’t apply at all.

Long story short, be smart and encrypt your health data no matter where it’s stored. Be careful where you share your health data. Anyone could be breached and HIPAA will only protect you so much (covered entity or not).

Don’t Blame HIPAA: It Didn’t Require Orlando Regional Medical Center To Call the President

Posted on June 13, 2016 I Written By

The following is a guest blog post by Mike Semel, President of Semel Consulting. As a Healthcare Scene community, our hearts go out to all the victims of this tragedy.

Orlando Mayor Buddy Dyer said the influx of patients to the hospitals created problems due to confidentiality regulations, which he worked to have waived for victims’ families.

“The CEO of the hospital came to me and said they had an issue related to the families who came to the emergency room. Because of HIPAA regulations, they could not give them any information,” Dyer said. “So I reached out to the White House to see if we could get the HIPAA regulations waived. The White House went through the appropriate channels to waive those so the hospital could communicate with the families who were there.”    Source: WBTV.com

I applaud the Orlando Regional Medical Center for its efforts to help the shooting victims. That said, as the region’s trauma center, it could have done a lot better by not letting HIPAA get in the way of communicating with the patients’ families and friends.

In the wake of the horrific nightclub shooting, the hospital made things worse for the victims’ families and friends. And it wasn’t necessary, because built into HIPAA is a hospital’s ability to share information without calling the President of the United States. There are also exemptions for communicating with law enforcement.

The Orlando hospital made this situation worse for the families when its Mass Casualty Incident (MCI) plan should have anticipated the situation. A trauma center should have been better prepared than to ask the mayor for help.

As usual, HIPAA got the blame for someone’s lack of understanding about HIPAA. Based on my experience, many executives think they are too busy, or think themselves too important, to learn about HIPAA’s fundamental civil rights for patients. Civil Rights? HIPAA is enforced by the US Department of Health & Human Services’ Office for Civil Rights.

HIPAA compliance and data security are both executive level responsibilities, although many executives think it is something that should get tasked out to a subordinate. Having to call the White House because the hospital didn’t understand that HIPAA already gave it the right to talk to the families is shameful. It added unnecessary delays and more stress to the distraught families.

Doctors are often just as guilty as hospital executives of not taking HIPAA training and then giving HIPAA a bad rap. (I can imagine the medical practice managers and compliance officers silently nodding their heads.)

“HIPAA interferes with patient care” is something I hear often from doctors. When I ask how, I am told by the doctors that they can’t communicate with specialists, call for a consult, or talk to their patients’ families. These are ALL WRONG.

I ask those doctors two questions that are usually met with a silent stare:

  1. When was the last time you received HIPAA training?
  2. If you did get trained, did it take more than 5 minutes or was it just to get the requirement out of the way?

HIPAA allows doctors to share patient information with other doctors, hospitals, pharmacies, and Business Associates as long as it is for a patient’s Treatment, Payment, or healthcare Operations (TPO). This is communicated to patients through a Notice of Privacy Practices.

HIPAA allows doctors to use their judgment to determine what to say to friends and families of patients who are incapacitated or incompetent. The Orlando hospital could have communicated with family members and friends.

From Frequently Asked Questions at the HHS website:

Does the HIPAA Privacy Rule permit a hospital to inform callers or visitors of a patient’s location and general condition in the emergency room, even if the patient’s information would not normally be included in the main hospital directory of admitted patients?

Answer: Yes.

If a patient’s family member, friend, or other person involved in the patient’s care or payment for care calls a health care provider to ask about the patient’s condition, does HIPAA require the health care provider to obtain proof of who the person is before speaking with them?

Answer: No.  If the caller states that he or she is a family member or friend of the patient, or is involved in the patient’s care or payment for care, then HIPAA doesn’t require proof of identity in this case.  However, a health care provider may establish his or her own rules for verifying who is on the phone.  In addition, when someone other than a friend or family member is involved, the health care provider must be reasonably sure that the patient asked the person to be involved in his or her care or payment for care.

Can the fact that a patient has been “treated and released,” or that a patient has died, be released as part of the facility directory?

Answer: Yes.

Does the HIPAA Privacy Rule permit a doctor to discuss a patient’s health status, treatment, or payment arrangements with the patient’s family and friends?

Answer: Yes. The HIPAA Privacy Rule at 45 CFR 164.510(b) specifically permits covered entities to share information that is directly relevant to the involvement of a spouse, family members, friends, or other persons identified by a patient, in the patient’s care or payment for health care. If the patient is present, or is otherwise available prior to the disclosure, and has the capacity to make health care decisions, the covered entity may discuss this information with the family and these other persons if the patient agrees or, when given the opportunity, does not object. The covered entity may also share relevant information with the family and these other persons if it can reasonably infer, based on professional judgment, that the patient does not object. Under these circumstances, for example:

  • A doctor may give information about a patient’s mobility limitations to a friend driving the patient home from the hospital.
  • A hospital may discuss a patient’s payment options with her adult daughter.
  • A doctor may instruct a patient’s roommate about proper medicine dosage when she comes to pick up her friend from the hospital.
  • A physician may discuss a patient’s treatment with the patient in the presence of a friend when the patient brings the friend to a medical appointment and asks if the friend can come into the treatment room.

Even when the patient is not present or it is impracticable because of emergency circumstances or the patient’s incapacity for the covered entity to ask the patient about discussing her care or payment with a family member or other person, a covered entity may share this information with the person when, in exercising professional judgment, it determines that doing so would be in the best interest of the patient. See 45 CFR 164.510(b).

Thus, for example:

  • A surgeon may, if consistent with such professional judgment, inform a patient’s spouse, who accompanied her husband to the emergency room, that the patient has suffered a heart attack and provide periodic updates on the patient’s progress and prognosis.
  • A doctor may, if consistent with such professional judgment, discuss an incapacitated patient’s condition with a family member over the phone.
  • In addition, the Privacy Rule expressly permits a covered entity to use professional judgment and experience with common practice to make reasonable inferences about the patient’s best interests in allowing another person to act on behalf of the patient to pick up a filled prescription, medical supplies, X-rays, or other similar forms of protected health information. For example, when a person comes to a pharmacy requesting to pick up a prescription on behalf of an individual he identifies by name, a pharmacist, based on professional judgment and experience with common practice, may allow the person to do so.

Other examples of hospital executives’ lack of HIPAA knowledge include:

  • Shasta Regional Medical Center, where the CEO and Chief Medical Officer took a patient’s chart to the local newspaper and shared details of her treatment without her permission.
  • NY Presbyterian Hospital, which allowed the film crew from ABC’s ‘NY Med’ TV show to film dying and incapacitated patients.

To healthcare executives and doctors, many of your imagined challenges caused by HIPAA can be eliminated by learning more about the rules. You need to be prepared for the 3 a.m. phone call. And you don’t have to call the White House for help.

About Mike Semel
Mike Semel, President of Semel Consulting,  is a certified HIPAA expert with over 12 years’ HIPAA experience and 30 years in IT. He has been the CIO for a hospital and a K-12 school district; owned and managed IT companies; ran operations at an online backup provider; and is a recognized HIPAA expert and speaker. He can be reached at mike@semelconsulting.com or 888-997-3635 x 101.

Securing IoT Devices Calls For New Ways Of Doing Business

Posted on June 8, 2016 I Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

While new Internet-connected devices can expose healthcare organizations to security threats in much the same way as a desktop PC or laptop, they aren’t always procured, monitored or maintained the same way. This can lead to potentially major ePHI breaches, as one renowned health system recently found out.

According to a piece in SearchHealthIT, executives at Intermountain Healthcare recently went through something of a panic when a connected audiology device went missing. According to Intermountain CISO Karl West, the device had come into the hospital via a different channel than most of the system’s other devices. For that reason, West told the site, his team couldn’t verify what operating system the audiology device ran, how it had come into the hospital, or what its lifecycle management status was.

Not only did Intermountain lack some key configuration and operating system data on the device, they didn’t know how to prevent the exposure of the patient information it stored. And because the data persisted over time, the audiology device held information on every patient who had used it. When the device was eventually located, it was discovered that it held two and a half years’ worth of stored patient data.

After this incident, West realized that Intermountain needed to improve on how it managed Internet of Things devices. Specifically, the team decided that simply taking inventory of all devices and applications was far from sufficient to protect the security of IoT medical devices.

To prevent such problems from occurring again, West and his team created a data dictionary, designed to let them know where data originates, how it moves and where it resides. The group is also documenting what each IoT device’s transmission capabilities are, West told SearchHealthIT.
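The shape of such a data dictionary can be sketched in a few lines. The field names below are my assumptions for illustration, not Intermountain’s actual schema; the point is that each networked device gets a record of where its data originates, how it moves, and where it rests, so gaps stand out:

```python
from dataclasses import dataclass, field

# Illustrative data-dictionary entry for a networked device. Field names
# are hypothetical; "unknown" values are themselves findings worth acting on.

@dataclass
class DeviceRecord:
    device_id: str
    device_type: str
    operating_system: str            # "unknown" flags a visibility gap
    procurement_channel: str         # how the device entered the organization
    stores_phi: bool
    data_origin: str
    data_destinations: list = field(default_factory=list)
    transmission_capabilities: list = field(default_factory=list)

def flag_unmanaged(inventory):
    # Surface the devices the article warns about: PHI on board, but an
    # unknown OS or an entry path outside normal IT procurement.
    return [d for d in inventory
            if d.stores_phi and (d.operating_system == "unknown"
                                 or d.procurement_channel != "it-procurement")]
```

Run over a full inventory, a query like `flag_unmanaged` would have surfaced the audiology device long before it went missing.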

A huge vulnerability

Unfortunately, Intermountain isn’t the first and won’t be the last health system to face problems in managing IoT device security. Such devices can be a huge vulnerability, as they are seldom documented and maintained in the same way that traditional network devices are. In fact, this lack of oversight is almost a given when you consider where they come from.

Sure, some connected devices arrive via traditional medical device channels — such as, for example, connected infusion pumps — but a growing number of network-connected devices are coming through consumer channels. For example, though the problem is well understood these days, healthcare organizations continue to grapple with security issues created by staff-owned smart phones and tablets.

The next wave of smart, connected devices may pose even bigger problems. While the operating systems running mobile devices are well understood, and can be maintained and secured using enterprise-level processes, new connected devices are throwing the entire healthcare industry a curveball. After all, the smart watch a patient brings into your facility doesn’t turn up on your procurement schedule, may use nonstandard software, and its operating system and applications may not be patched. And that’s just one example.

Redesigning processes

While there’s no single solution to this rapidly-growing problem, one thing seems to be clear. As the Intermountain example demonstrates, healthcare organizations must redefine their processes for tracking and securing devices in the face of the IoT security threat.

First and foremost, medical device teams and the IT department must come together to create a comprehensive connected device strategy. Both teams need to know what devices are using the network, how and why. And whatever policy is set for managing IoT devices has to embrace everyone. This is no time for a turf war — it’s time to hunker down and manage this serious threat.

Efforts like Intermountain’s may not work for every organization, but the key is to take a step forward. As the number of IoT network nodes grow to a nearly infinite level, healthcare organizations will have to re-think their entire philosophy on how and why networked devices should interact. Otherwise, a catastrophic breach is nearly guaranteed.