
Regulatory Heat: Is Your BAA House in Order?

Posted on August 9, 2018 | Written By

The following is a guest blog post by Greg Waldstreicher, Founder and CEO of PHIflow.

Actions by the Office for Civil Rights (OCR) have clearly demonstrated stricter enforcement of HIPAA rules in recent years, specifically upping the ante on compliance with business associate agreements (BAAs). Much of this activity can be attributed to a grim outlook on security risks: globally, 70% of healthcare organizations have suffered a data breach, and a recent Ponemon Institute report found that the vast majority have experienced multiple security incidents involving protected health information (PHI).

BAAs play an important role in security as the framework by which an organization ensures that any vendor creating, receiving, maintaining or transmitting PHI complies with HIPAA. In recent years, these contracts have come under increased scrutiny amid high-level audits launched by OCR. Mismanagement of BAAs has thus far resulted in penalties ranging from $31,000 for simply not having a BAA in place to upwards of $5.5 million for more serious offenses.

While the stakes are high, healthcare organizations often lack effective oversight strategies for these important patient protection tools. In fact, it’s not uncommon for even the most basic information to elude the executive suite, such as:

  • the number of BAAs that exist across an enterprise
  • where BAAs are located
  • the terms of each BAA

In an industry that has witnessed a significant uptick in security incidents and breaches in recent years, this current state of affairs is less than optimal. In truth, the reach of recent audit activity is still unknown as the healthcare industry awaits full disclosure and recommendations from OCR. One of the latest OCR settlements—$3.5 million levied against Fresenius Medical Care North America—resulted from multiple incidents that occurred in 2012, underscoring the lengthy timeframe associated with finalizing investigations and legal processes.

All told, current trends point to the need for better oversight and management of BAAs. While penalty activity subsided some in recent months as OCR went through internal transitions, industry legal experts expect that investigative momentum will continue to increase in proportion to heightened security risks across the healthcare landscape.

Unfortunately, healthcare organizations face notable roadblocks to getting their BAA house in order. Amid competing priorities, many simply lack the resources for tracking these agreements. Health systems are increasingly multi-faceted, and current trends associated with mergers, acquisitions and consolidations only exacerbate the challenge. The reality is that some large organizations have as many as 10,000 BAAs across the enterprise. Because these agreements are typically spread across multiple departments and facilities and have a multitude of different owners, managing them in a strategic way via manual processes is nearly impossible.

In tandem with the internal resource challenge, the language contained in BAAs has become significantly more complicated due to not only a fluid and evolving regulatory environment, but also the vital role they play in an overall security strategy. While a simple, cookie-cutter approach to these agreements was fitting a decade ago, BAAs are now intensely negotiated between covered entities and business associates and between business associates and sub-business associates, often involving HIPAA attorneys and resulting in requirements that go beyond HIPAA and HITECH. Subsequently, the terms of each BAA across an organization may vary, making efficient and effective management extremely difficult.

The good news is that there is a relatively simple solution—automated management of BAAs. The right technological framework can lay the foundation for timely access to all contracts across an enterprise, improving compliance and ensuring readiness for audits or breach response. Once consolidated, artificial intelligence can then be applied to BAAs to draw actionable insights in near real-time, informing key personnel of the key terms across all agreements.

The healthcare industry at large has drawn heavily on the promise of automation and data analytics in recent years to power more efficient and effective processes. Management of BAAs is no different and is an area ripe for improvement. Today’s healthcare executives need to consider the high stakes associated with ineffective management of BAAs and take action to shore up strategies amid greater security risks and a challenging regulatory environment.

About Greg Waldstreicher
Greg Waldstreicher is the founder and CEO of PHIflow, and the cofounder and former CEO of DoseSpot, where he worked at the forefront of the electronic prescribing (e-Prescribing) market for nine years. Under Greg’s leadership, DoseSpot licensed its SaaS e-Prescribing solutions to 175 healthcare software companies across the medical, dental, hospice and digital health markets. Greg received a B.S. from the University of Maryland College Park in Accounting and an M.S. from Northeastern University in Technological Entrepreneurship.

Are You Investing Enough in IT Security?

Posted on July 20, 2018 | Written By

Mike Semel is a noted thought leader, speaker, blogger, and best-selling author of HOW TO AVOID HIPAA HEADACHES. He is the President and Chief Security Officer of Semel Consulting, focused on HIPAA and other compliance requirements; cyber security; and Business Continuity planning. Mike is a Certified Business Continuity Professional through the Disaster Recovery Institute, a Certified HIPAA Professional, Certified Security Compliance Specialist, and Certified Health IT Specialist. He has owned or managed technology companies for over 30 years; served as Chief Information Officer (CIO) for a hospital and a K-12 school district; and managed operations at an online backup company.

Would you put a $10 fence around a $100 horse?

Does it make sense to put a $100 fence around a $10 horse?

For the right security, you need to know what your horse is worth.

The same concepts apply to protecting your data. What is your data worth?

Ask Cottage Health, which had two data breaches totaling 55,000 records, settled a $4.1 million lawsuit with patients, and then paid a $2 million California penalty. It was also sued by its insurer, which wanted the $4.1 million settlement money back after it discovered Cottage Health had not consistently implemented the security controls it claimed on its insurance application. The $6.1 million in settlement and penalty does not include costs for legal fees, credit monitoring, notifying patients, public relations, or recovering the business lost from patients who moved to another provider.

One of our clients was audited for HIPAA compliance by the venture capital firm that wanted to invest in their company. Another client had us do a compliance assessment on a healthcare company they wanted to purchase. In both cases, HIPAA compliance was worth millions of dollars.

We asked a client how much the financial impact would be on their business if they lost the sensitive personal data they collected about business partners, and had to notify everyone. The owner said they would be out of business, costing millions of dollars.

Breaches result in lawsuits, with settlements in the millions. If you are a licensed or certified professional, you can lose your license or certification if you are breached.

Federal HIPAA penalties in 2014–2015 were $14 million. In 2016–2017 they tripled to $42 million. In 2018, they have already reached $7.9 million.

Data is worth more than gold.

Instead of words and images in a computer, think of your data as a pile of gold bars that is worth protecting.

When we work with our clients, we help them identify the types of data they have, where it is located, and how it is protected. We recently worked with a client that came to us for help protecting their patient information. They were shocked when we showed them that they had bigger risks from the data they stored about workforce members and job applicants they did not hire than from the data about the people they served.

  • What data do you have that is regulated, that you must protect to comply with laws and other regulations?
  • What fines and lawsuit judgments might you face if your data is breached?
  • Beyond HIPAA that protects patient information, do you know your state data breach laws that apply to employee data?
  • Do you know the regulations that protect credit card data?
  • Do you have enough of the right type of insurance to protect your finances if you are breached?

Everyone has unregulated data that is sensitive or proprietary and that could hurt your business if it is lost, stolen, or accessed by a competitor or someone who wants to hurt you. Salaries, trade secrets, employment records, pricing models, merger and acquisition plans, and lawsuit files have all been stolen.

As part of our assessments, we search the Dark Web (the criminal side of the Internet) to see if our clients have employee passwords for sale by hackers. Over 90% have had at least one employee’s credentials stolen and offered for sale.

Most of our clients start out not knowing the value of their risks. They hadn’t approved IT security purchases, because the costs were high, and they didn’t know if security was worth the investment.

So, how much should you invest in protecting your data?

The recently released 2018 Cost of a Data Breach report shows, through research of actual breaches, that in 2017 the average cost to a breached organization for a single lost healthcare record was $408. Across all industries the cost was $233 per record. Only a third of the cost was for the direct response to the breach – notifying patients, hiring lawyers and IT security experts, and paying for credit monitoring. Two-thirds of the $408 per record was the financial effect on the healthcare organization of losing patients after violating their trust.

Here is a calculation you can use to estimate the value of protecting your patient data.

Number of Patient Records x $408 (cost per record of a breach) = $________________ in risk.

Example: 25,000 records x $408 = $10.2 million. (If this number startles you, imagine if your costs were only 25% of the total, which is still over $2.5 million.)
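The calculation above is simple enough to script. Here is a minimal Python sketch of the same formula; the per-record figures come from the report cited above, while the function name and record counts are illustrative, not from any real organization:

```python
# Per-record breach costs from the 2018 Cost of a Data Breach report.
PER_RECORD_COST_HEALTHCARE = 408      # average cost per lost healthcare record
PER_RECORD_COST_ALL_INDUSTRIES = 233  # average across all industries

def breach_risk(num_records: int, per_record_cost: int = PER_RECORD_COST_HEALTHCARE) -> int:
    """Rough dollar exposure if every record were compromised."""
    return num_records * per_record_cost

risk = breach_risk(25_000)
print(f"Full exposure: ${risk:,}")      # Full exposure: $10,200,000
print(f"Even at 25%:  ${risk // 4:,}")  # Even at 25%:  $2,550,000
```

Run it with your own record count; even the conservative all-industries rate usually lands in the millions for a mid-sized provider.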

Other ways to put a dollar value on your risk

  • How much would a breach affect the market value of your business?
  • How much investment capital do you need for expansion?
  • Personally, what will your retirement look like if you had to pay $1 million, $2 million, or more, to cover the costs of a breach?
  • What would your life be like if you went out of business?

Know the value of your cyber security risk. Do the math.

Ask your IT department, or an outsourced independent IT security consultant, to assess your risks, and recommend what you need to be fully protected. Our assessments calculate your risks based on dollars, and provide ‘under the skin’ data about the current status of your security. Don’t settle for guesses.

Base your security investment on the value of your risks, not just the general idea that your data needs to be protected.

And, if you own a $100 horse, upgrade your $10 fence.

The Need for Speed (In Breach Protection)

Posted on April 26, 2016 | Written By

The following is a guest blog post by Robert Lord, Co-founder and CEO of Protenus.
The speed at which a hospital can detect a privacy breach could mean the difference between a brief, no-penalty notification and a multi-million dollar lawsuit.  This month it was reported that health information from 2,000 patients was exposed when a Texas hospital took four months to identify a data breach caused by an independent healthcare provider.  A health system in New York similarly took two months to determine that 2,500 patient records may have been exposed as a result of a phishing scam and potential breach reported two months prior.

The rise in reported breaches this year, from phishing scams to stolen patient information, only underscores the risk of lag times between breach detection and resolution. Why are lags of months and even years so common? And what can hospitals do to better prepare against threats that may reach the EHR layer?

Traditional compliance and breach detection tools are not nearly as effective as they need to be. The most widely used methods of detection involve either infrequent random audits or extensive manual searches through records following a patient complaint. For example, if a patient suspects that his medical record has been inappropriately accessed, a compliance officer must first review EMR data from the various systems involved.  Armed with a highlighter (or a large excel spreadsheet), the officer must then analyze thousands of rows of access data, and cross-reference this information with the officer’s implicit knowledge about the types of people who have permission to view that patient’s records. Finding an inconsistency – a person who accessed the records without permission – can take dozens of hours of menial work per case.  Another issue with investigating breaches based on complaints is that there is often no evidence that the breach actually occurred. Nonetheless, the hospital is legally required to investigate all claims in a timely manner, and such investigations are costly and time-consuming.
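To make the cross-referencing step concrete: what the officer does by hand with a highlighter amounts to filtering an access log against the set of users with a known relationship to the patient. A minimal Python sketch follows; all user names, patient IDs, and the log format are hypothetical, not drawn from any real EMR:

```python
from datetime import datetime

# Hypothetical EMR access-log entries: (user_id, patient_id, timestamp)
access_log = [
    ("dr_adams",  "pt_1001", datetime(2016, 3, 1, 9, 15)),
    ("nurse_lee", "pt_1001", datetime(2016, 3, 1, 9, 40)),
    ("clerk_ray", "pt_1001", datetime(2016, 3, 2, 23, 5)),  # not on the care team
]

# Users with a legitimate treatment or billing relationship to each patient
authorized = {"pt_1001": {"dr_adams", "nurse_lee"}}

def flag_suspicious(log, authorized):
    """Return accesses by users with no known relationship to the patient."""
    return [
        (user, patient, ts)
        for user, patient, ts in log
        if user not in authorized.get(patient, set())
    ]

for user, patient, ts in flag_suspicious(access_log, authorized):
    print(f"Review: {user} accessed {patient} at {ts}")
```

The hard part in practice is not this filter but building and maintaining the `authorized` mapping across dozens of systems, which is exactly why manual review takes dozens of hours per case.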

According to a study by the Ponemon Institute, it takes an average of 87 days from the time a breach occurs to the time the officer becomes aware of the problem, and, given the arduous task at hand, it then takes another 105 days for the officer to resolve the issue. In total, it takes approximately 6 months from the time a breach occurs to the time the issue is resolved. Additionally, if a data breach occurs but a patient does not notice, it could take months – or even years – for someone to discover the problem. And of course, the longer it takes the hospital to identify a problem, the higher the cost of identifying how the breach occurred and remediating the situation.

In 2013, Rouge Valley Centenary Hospital in Scarborough, Canada, revealed that the contact information of approximately 8,300 new mothers had been inappropriately accessed by two employees. Since 2009, the two employees had been selling the contact information of new mothers to a private company specializing in Registered Education Savings Plans (RESPs). Some of the patients later reported that days after coming home from the hospital with their newborn child, they started receiving calls from sales representatives at the private RESP company. Marketing representatives were extremely aggressive, and seemed to know the exact date of when their child had been born.

The most terrifying aspect of this story is how the hospital was able to find out about the data breach: remorse and human error! One employee voluntarily turned himself in, while the other accidentally left patient records on a printer. Had these two events not happened, the scam could have continued for much longer than the four years it did before it was finally discovered.

Rouge Valley Hospital is currently facing a $412 million lawsuit over this breach of privacy. Arguably even more damaging is that the hospital has lost the trust of patients who relied on it for care and for confidentiality of their medical treatments.

As exemplified by the ramifications of the Rouge Valley Hospital breach and the new breaches discovered almost weekly in hospitals around the world, the current tools used to detect privacy breaches in electronic health records are not sufficient. A system needs to have the ability to detect when employees are accessing information outside their clinical and administrative responsibilities. Had the Scarborough hospital known about the inappropriately viewed records the first time they had been accessed, they could have investigated earlier and protected the privacy of thousands of new mothers.

Every person who seeks a hospital’s care has the right to privacy and the protection of their medical information. However, due to the sheer volume of patient records accessed each day, it is impossible for compliance officers to efficiently detect breaches without new and practical tools. Current rule-based analytical systems often overburden the officers with alerts, and are only a minor improvement over manual detection methods.

We are in the midst of a paradigm shift, with hospitals taking a more proactive and layered approach to health data security. New technology that uses machine learning and big data science to review each access to medical records will replace traditional compliance technology and streamline threat detection and resolution cycles from months to a matter of minutes, making identifying a privacy breach or violation as simple and fast as the action that may have caused it in the first place. Understanding how to select and implement these next-generation tools will be a new and important challenge for the compliance officers of the future, but one that they can no longer afford to delay.

Protenus is a health data security platform that protects patient data in electronic medical records for some of the nation’s top-ranked hospitals. Using data science and machine learning, Protenus technology uniquely understands the clinical behavior and context of each user that is accessing patient data to determine the appropriateness of each action, elevating only true threats to patient privacy and health data security.

Breach Affecting 2.2M Patients Highlights New Health Data Threats

Posted on April 4, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A Fort Myers, FL-based cancer care organization is paying a massive price for a health data breach that exposed personal information on 2.2 million patients late last year. This incident is also shedding light on the growing vulnerability of non-hospital healthcare data, as you’ll see below.

Recently, 21st Century Oncology was forced to warn patients that an “unauthorized third party” had broken into one of its databases. Officials said that they had no evidence that medical records were accessed, but conceded that breached information may have included patient names, Social Security numbers, insurance information, and diagnosis and treatment data.

Notably, the cancer care chain — which operates 145 centers in 17 states — didn’t learn about the breach until the FBI informed the company that it had happened.

Since that time, 21st Century has faced a broad range of legal consequences. Three lawsuits related to the breach have been filed against the company, all alleging that the breach exposed plaintiffs to a greater possibility of harm. Patient indignation seems to have been stoked, in part, because patients did not learn about the breach until five months after it happened, allegedly at the request of investigating FBI officials.

“While more than 2.2 million 21st Century Oncology victims have sought out and/or pay for medical care from the company, thieves have been hard at work, stealing and using their hard-to-change Social Security numbers and highly sensitive medical information,” said plaintiff Rona Polovoy in her lawsuit.

Polovoy’s suit also contends that the company should have been better prepared for such breaches, given that it suffered a similar security lapse between October 2011 and August 2012, when an employee used patient names, Social Security numbers, and dates of birth to file fraudulent tax refund claims. She claims that the current lapse demonstrates that the company did little to clean up its cybersecurity act.

Another plaintiff, John Dickman, says that the breach has filled his life with needless anxiety. In his legal filings he says that he “now must engage in stringent monitoring of, among other things, his financial accounts, tax filings, and health insurance claims.”

All of this may be grimly entertaining if you aren’t the one whose data was exposed, but there’s more to this case than meets the eye. According to a cybersecurity specialist quoted in Infosecurity Magazine, the 21st Century network intrusion highlights how exposed healthcare organizations outside the hospital world are to data breaches.

I can’t help but agree with TrapX Security executive vice president Carl Wright, who told the magazine that skilled nursing facilities, dialysis centers, imaging centers, diagnostic labs, surgical centers and cancer treatment facilities like 21st are all in network intruders’ crosshairs. Not only that, he notes that large extended healthcare networks such as accountable care organizations are vulnerable.

And that’s a really scary thought. While he doesn’t say so specifically, it’s logical to assume that the more unrelated partners you weld together across disparate networks, the more security-related points of failure you multiply. Isn’t it lovely how security threats emerge to meet every advance in healthcare?

Cyber Breach Insurance May Be Useless If You’re Negligent

Posted on March 28, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Ideally, your healthcare organization will never see a major data breach. But realistically, given how valuable healthcare data is these days — and the extent to which many healthcare firms neglect data security — it’s safer to assume that you will have to cope with a breach at some point.

In fact, it might be wise to assume that some form of costly breach is inevitable. After all, as one infographic points out, 55 healthcare organizations reported network attacks resulting in data breaches last year, which resulted in 111,809,322 individuals’ health record information being compromised. (If you haven’t done the math in your head, that’s a staggering 35% of the US population.)

The capper: if things don’t get better, the US healthcare industry stands to lose $305 billion in cumulative lifetime patient revenue due to cyberattacks likely to take place over the next five years.

So, by all means, protect yourself by any means available. However, as a recent legal battle suggests, simply buying cyber security insurance isn’t a one-step solution. In fact, your policy may not be worth much if you don’t do your due diligence when it comes to network and Internet security.

The lawsuit, Columbia Casualty Company v. Cottage Health System, shows what happens when a healthcare organization (allegedly) relies on its cyber insurance policy to protect it against breach costs rather than working hard to prevent such slips.

Back in December 2013, the three-hospital Cottage Health System notified 32,755 of its patients that their PHI had been compromised. The breach occurred when the health system and one of its vendors, InSync, stored unencrypted medical records on an Internet-accessible system.

It later came out that the breach was probably caused by careless FTP settings on both systems’ servers, which permitted anonymous user access, essentially opening up patient health records to anyone who could use Google. (Wow. If true, that’s really embarrassing. I doubt a sharp 13-year-old script kiddie would make that mistake.)

Anyway, a group of presumably ticked off patients filed a class action suit against Cottage asking for $4.125 million. At first, cyber breach insurer Columbia Casualty paid out the $4.125 million and settled the case. Now, however, the insurer is suing Cottage, asking the health system to pay it back for the money it paid out to the class action members. It argues that Cottage was negligent due to:

  • a failure to continuously implement the procedures and risk controls identified in the application, including, but not limited to, its failure to replace factory default settings and its failure to ensure that its information security systems were securely configured; and
  • a failure to regularly check and maintain security patches on its systems, its failure to regularly re-assess its information security exposure and enhance risk controls, its failure to have a system in place to detect unauthorized access or attempts to access sensitive information stored on its servers and its failure to control and track all changes to its network to ensure it remains secure.

Not only that, Columbia Casualty asserts, Cottage lied about following a minimum set of security practices known as a “Risk Control Self Assessment” required as part of the cyber insurance application.

Now, if the cyber insurer’s allegations are true, Cottage’s behavior may have been particularly egregious. And no one has proven anything yet, as the case is still in the early stages, but this dispute should still stand as a warning to all healthcare organizations. If you neglect security, then try to get an insurance company to cover your behind when breaches occur, you might be out of luck.

Owensboro Health Muhlenberg Community Hospital Breach

Posted on November 17, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

In this week in HIPAA Breach rubber-necking, we have the FBI discovering suspicious network activity from third parties at Owensboro Health Muhlenberg Community Hospital, a 135 bed acute care hospital in Kentucky. Here’s a description of the incident:

On September 16, 2015, the Federal Bureau of Investigation (FBI) notified the hospital of suspicious network activity involving third parties. Upon learning this information, the hospital took immediate action, including initiating an internal investigation and engaging a leading digital forensics and security firm to investigate this matter. Based upon this review, the hospital confirmed that a limited number of computers were infected with a keystroke logger designed to capture and transmit data as it was entered onto the affected computers. The infection may have started as early as January 2012.

I’m quite interested in how they came up with the January 2012 date. Was that the date that the infected computers were installed? Are they just being cautious and assuming that the computers could have had the keylogger since the beginning and they’re handling the breach that way?

Of course, Muhlenberg Community Hospital is sending breach notifications to all patients in their records database, as well as employees, contractors, and providers that were credentialed at the hospital since 2012. They don’t give a number for how many records or people this constitutes, but it has to be a massive number.

Here’s a look at what information they think could have been accessed by the keylogger:

The affected computers were used to enter patient financial data and health information, information about persons responsible for a patient’s bill and employee/contractor data, including potentially name, address, telephone number(s), birthdate, Social Security number, driver’s license/state identification number, medical and health plan information (such as health insurance number, medical record number, diagnoses and treatment information, and payment information), financial account number, payment card information (such as primary account number and expiration date) and employment-related information. Additionally, some credentialing-related information for providers may be impacted. The hospital also believes that the malware could have captured username and password information for accounts or websites that were accessed by employees, contractors or providers using the affected terminals. The hospital has no indication that the data has been used inappropriately.

They’re offering the usual identity protection services to all those affected. However, I was quite interested in their expanded list of steps people can take to guard against possible identity theft and fraud:

  • Enroll in Identity Protection Services
  • Explanation of Benefits Review
  • Check Credit Reports
  • Review Payment Card Statements
  • Change Your Passwords
  • Consult the Identity Theft Protection Guide

It’s clear that the number of breaches is accelerating. However, this case is particularly interesting because the hospital could have been breached for the past three years and is only now finding out. I expect we’ll see a lot more of this activity in the future.

Phase 2 HIPAA Audits Kick Off With Random Surveys

Posted on June 9, 2015 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Ideally, the only reason you would know about the following is due to scribes such as myself — but for the record, the HHS Office for Civil Rights has sent out a bunch of pre-audit screening surveys to covered entities. Once it gets responses, it will do a Phase 2 audit not only of covered entities but also business associates, so things should get heated.

While these take the form of Meaningful Use audits, covering incentives paid from January 1, 2011 through June 30, 2014, it’s really more about checking how well you protect ePHI.

This effort is a drive to be sure that providers and BAs are complying with the HIPAA privacy, security and breach notification requirements. Apparently OCR found, during Phase 1 pilot audits in 2011 and 2012, that there was “pervasive non-compliance” with regs designed to safeguard protected health information, the National Law Review reports.

However, these audits aren’t targeting the “bad guys.” Selection for the audits is random, according to HHS Office of the Inspector General.

So if you get one of the dreaded pre-screening letters, how should you respond? According to a thoughtful blog post by Maryanne Lambert for CureMD, auditors will be focused on the following areas:

  • Risk Assessment audits and reports
  • EHR security plan
  • Organizational chart
  • Network diagram
  • EHR web sites and patient portals
  • Policies and procedures
  • System inventory
  • Tools to perform vulnerability scans
  • Central log and event reports
  • EHR system users list
  • Contractors supporting the EHR and network perimeter devices.

According to Lambert, the feds will want to talk to the person primarily responsible for each of these areas, a process which could quickly devolve into a disaster if those people aren’t prepared. She recommends that if you’re selected for an audit, you run through a mock audit ahead of time to make sure these staff members can answer questions about how well policies and processes are followed.

Not that anyone would take the presence of HHS on their premises lightly, but it’s worth bearing in mind that a stumble in one corner of your operation could have widespread consequences. Lambert notes that in addition to defending your security precautions, you have to make sure that all parts of your organization are in line:

Be mindful while planning for this audit as deficiencies identified for one physician in a physician group or one hospital within a multi-hospital system, may apply to the other physicians and hospitals using the same EHR system and/or implementing meaningful use in the same way.  Thus, the incentive payments at risk in this audit may be greater than the payments to the particular provider being audited.

But as she points out, there is one possible benefit to being audited. If you prepare well, it might save you not only trouble with HHS but possibly lawsuits for breaches of information. Hey, everything has some kind of silver lining, right?

An Important Look at HIPAA Policies For BYOD

Posted on May 11, 2015 I Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Today I stumbled across an article which I thought readers of this blog would find noteworthy. In the article, Art Gross, president and CEO at HIPAA Secure Now!, made an important point about BYOD policies. He notes that while much of today’s corporate computing is done on mobile devices such as smartphones, laptops and tablets — most of which access their enterprise’s e-mail, network and data — HIPAA offers no advice as to how to bring those devices into compliance.

Given that most of the spectacular HIPAA breaches in recent years have arisen from the theft of laptops, and will likely proceed to theft of tablet and smartphone data, it seems strange that HHS has done nothing to update the rule to address the increasing use of mobile devices since it was drafted in 2003. As Gross rightly asks, “If the HIPAA Security Rule doesn’t mention mobile devices, laptops, smartphones, email or texting how do organizations know what is required to protect these devices?”

Well, Gross’ peers have given the issue some thought, and here are some suggestions from law firm DLA Piper on how to dissect the issues involved. BYOD challenges under HIPAA, notes author Peter McLaughlin, include:

*  Control:  To maintain protection of PHI, providers need to control many layers of computing technology, including network configuration, operating systems, device security and transmissions outside the firewall. McLaughlin notes that Android OS-based devices pose a particular challenge, as the system is often modified to meet hardware needs. And in both iOS and Android environments, IT administrators must also manage users’ tendency to connect to their preferred cloud services and download their own apps. Otherwise, a large volume of protected health data can end up outside the firewall.

*  Compliance:  Healthcare organizations and their business associates must take care to meet HIPAA mandates regardless of the technology they use. But securing even basic information on employee-owned devices, much less regulated data, is far harder than on company-issued devices governed by restrictive corporate rules.

*  Privacy:  When enterprises let employees use their own devices to do company business, it’s highly likely that employees will feel entitled to use those devices as they see fit. However, in reality, McLaughlin suggests, employees don’t really have full, private control of their devices, in part because company policy usually requires a remote wipe of all data when a device gets lost. Also, employees might find that their device’s data becomes discoverable if it is relevant to litigation.
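
To make the control problem concrete, here is a minimal, purely hypothetical sketch of the kind of baseline check an IT team might run against device settings reported by a mobile device management (MDM) tool. The field names and requirements are assumptions for illustration, not drawn from the HIPAA Security Rule or any specific MDM product.

```python
# Hypothetical BYOD baseline: settings a device must report before it
# may touch PHI. Names and values are illustrative only.
REQUIRED_POLICY = {
    "passcode_required": True,
    "encryption_enabled": True,
    "remote_wipe_enabled": True,            # mitigates lost/stolen-device breaches
    "personal_cloud_backup_of_phi": False,  # keep PHI inside the firewall
}

def device_violations(device):
    """Return the settings where a device's reported state differs
    from the required baseline."""
    return [
        setting
        for setting, required in REQUIRED_POLICY.items()
        if device.get(setting) != required
    ]
```

A device that fails any check would be quarantined from e-mail and network access until it is brought into line, which is exactly the kind of layered control McLaughlin describes.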

So, readers, tell us how you’re walking the tightrope between giving employees who BYOD some autonomy, and protecting private, HIPAA-protected information.  Are you comfortable with the policies you have in place?

Full Disclosure: HIPAA Secure Now! is an advertiser on this website.

HIPAA Slip Leads To PHI Being Posted on Facebook

Posted on July 1, 2014 I Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

HHS has begun investigating a HIPAA breach at the University of Cincinnati Medical Center, which ended with a patient’s STD status being posted on Facebook.

The disaster — for both the hospital and the patient — happened when a financial services employee shared detailed medical information with the father of the patient’s then-unborn baby. The father took the information, which included an STD diagnosis, and posted it publicly on Facebook, ridiculing the patient in the process.

The hospital fired the employee in question once it learned about the incident (and a related lawsuit), but there’s some question as to whether it reported the breach to HHS. The hospital says that it informed HHS about the breach in a timely manner, and has proof that it did so, but according to HealthcareITNews, the HHS Office for Civil Rights hadn’t heard about the breach when questioned by a reporter last week.

While the public posting of data and the personal attacks on the patient weren’t done by the (ex) employee, that may or may not factor into how HHS sees the case. Given HHS’ increasingly low tolerance for breaches of any kind, I’d be surprised if the hospital didn’t end up facing a million-dollar OCR fine in addition to whatever liabilities it incurs from the privacy lawsuit.

HHS may be losing its patience because the pace of HIPAA violations doesn’t seem to be slowing.  Sometimes, breaches are taking place due to a lack of the most basic security protocols. (See this piece on last year’s wackiest HIPAA violations for a taste of what I’m talking about.)

Ultimately, some breaches will occur because a criminal outsmarted the hospital or medical practice. But sadly, far more seem to take place because providers have failed to give their staff an adequate education on why security measures matter. Experts note that staffers need to know not just what to do, but why they should do it, if you want them to act appropriately in unexpected situations.

While we’ll never know for sure, the financial staffer who gave the vengeful father his girlfriend’s PHI may not have known the man was up to no good. But the truth is, he should have.

HIPAA Fines and Penalties in a HIPAA Omnibus World

Posted on July 25, 2013 I Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Lately I’ve been seeing a number of really lazy approaches to making sure a company is HIPAA compliant. I think there’s a Pandora’s box just waiting to be opened, with many companies about to get slammed with HIPAA compliance issues. Certainly there are plenty of HIPAA compliance issues at healthcare provider organizations, but the larger compliance problem will likely come from all of the business associates that are now going to be held responsible for any HIPAA violations that occur with their systems.

For those not keeping up with the changes to HIPAA as part of the HITECH Act and HIPAA Omnibus, here are a couple of the biggest changes. First, HITECH provided some real teeth when it comes to penalties for HIPAA violations. Second, HIPAA Omnibus puts business associates in a position of responsibility when it comes to any HIPAA violations. Yes, this means that business associates that experience HIPAA violations can now be fined directly, just as covered entities could be before.

To put it simply, hundreds of organizations who didn’t have to worry too much about HIPAA will now be held responsible.

This is likely going to be a recipe for disaster for those organizations that aren’t covering their bases when it comes to HIPAA compliance. Consider two of the most recent fines: Idaho State University was fined $400k for HIPAA violations, and WellPoint drew a $1.7 million penalty. In the first case, the university left a firewall disabled for a year; in the second, WellPoint failed to secure an online application database containing sensitive data.

Of course, none of the above examples take into account the possible civil cases that could be brought against these organizations, or the damage a HIPAA violation does to an organization’s brand. The penalties for a HIPAA violation range from $100 to $50,000 per violation, depending on the violation category. I’ll be interested to see how HHS distinguishes “Reasonable Cause” from “Willful Neglect – Corrected.”
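
For readers who want to gauge their exposure, the per-violation tiers can be sketched as a quick calculation. The figures below are my reading of the tier ranges commonly cited from the 2013 Omnibus rule, including the $1.5 million annual cap per identical provision; the function is illustrative only, not legal guidance.

```python
# Per-violation penalty ranges (min, max) as commonly cited from the
# 2013 HIPAA Omnibus rule; an annual cap applies per identical provision.
TIERS = {
    "did_not_know":                (100,    50_000),
    "reasonable_cause":            (1_000,  50_000),
    "willful_neglect_corrected":   (10_000, 50_000),
    "willful_neglect_uncorrected": (50_000, 50_000),
}
ANNUAL_CAP = 1_500_000

def exposure(tier, violation_count):
    """Rough (min, max) annual exposure for repeated violations of a
    single provision, capped at the annual limit."""
    low, high = TIERS[tier]
    return (min(low * violation_count, ANNUAL_CAP),
            min(high * violation_count, ANNUAL_CAP))
```

Even at the lowest tier, a breach touching thousands of records runs into the cap quickly, which is why the category determination matters so much.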

I’ve seen far too many organizations not taking the HIPAA requirements seriously. This is going to come back to bite many of them. Plus, healthcare organizations had better make sure they have proper business associate agreements in place with these companies to insulate themselves against a business associate’s neglect. I don’t see HHS going out searching for companies that aren’t compliant. However, if it gets a report of issues, it will have to investigate, and it won’t likely be happy with what it finds.

The message to all is to make sure your HIPAA house is in order. Unfortunately, I don’t think many will really listen until the first shoe drops.