
Don’t Worry About HIPAA – When Your License Is at Risk!

Posted on October 24, 2016 | Written By

The following is a guest blog post by Mike Semel, President and Chief Compliance Officer at Semel Consulting.
Not long ago I was at an ambulance service for a HIPAA project when one of their paramedics asked what the odds were that his employer would get a HIPAA fine if he talked about one of his patients. I replied that the odds of a HIPAA penalty were very slim compared to the odds of him losing his state-issued paramedic license, which would cost him his job and his career. He could also be sued. He had never thought of these risks.

Doctors, dentists, lawyers, accountants, psychologists, nurses, EMT’s, paramedics, social workers, mental health counselors, and pharmacists are just some of the professions that must abide by confidentiality requirements to keep their licenses.

Licensing and ethical requirements mandated patient and client confidentiality long before HIPAA and other confidentiality laws went into effect. HIPAA became effective in 2003, 26 years after I became a New York State certified Emergency Medical Technician (EMT). Way back in 1977, the very first EMT class I took covered my responsibility to keep patient information confidential, or I would risk losing my certification.

While licensed professionals may not talk about an individual patient or client, weak cybersecurity controls could cause a breach of ALL of their patient and client information – instantly.
Most certified and licensed professionals will agree that they are careful not to talk about patients and clients, but how well do they secure their data? Are their laptops encrypted? Are security patches and updates current? Do they have a business-class firewall protecting their network? Do they have IT security professionals managing their technology?
Lawyers have been sanctioned for breaching confidentiality. Therapists have lost their licenses. In one well-publicized case, a psychologist lost his license after a prostitute stole his laptop. In rare cases a confidentiality breach will result in a jail sentence, along with the loss of a license.

Cyber Security Ethics Requirements
Lawyers are bound by ethical rules that apply to confidentiality and competence. The competence requirements typically restrict lawyers from taking cases in unfamiliar areas of the law. However, the American Bar Association has published model guidance stating that attorneys who lack competence in cyber security must hire professionals to help them secure their data.

The State Bar of North Dakota adopted technology amendments to its ethics rules in early 2016. The State Bar of Wisconsin has published a guide entitled Cybersecurity and SCR Rules of Professional Conduct. In 2014, The New York State Bar Association adopted Social Media Ethics Guidelines. Lawyers violating these ethical requirements can be sanctioned or disbarred.

A State Bar of Arizona ethics opinion said “an attorney must either have the competence to evaluate the nature of the potential threat to the client’s electronic files and to evaluate and deploy appropriate computer hardware and software to accomplish that end, or if the attorney lacks or cannot reasonably obtain that competence, to retain an expert consultant who does have such competence.”

Some licensed professionals argue that their ethical and industry requirements mean they don’t have to comply with other requirements. Ethical obligations do not trump federal and state laws. Lawyers defending health care providers in malpractice cases are HIPAA Business Associates. Doctors who have to comply with HIPAA must also adhere to state data breach laws. Psychiatric counselors, substance abuse therapists, pharmacists, and HIV treatment providers have to comply with multiple federal and state confidentiality laws in addition to their license requirements.

There are some exemptions from confidentiality laws and license requirements when it comes to reporting child abuse, notifying law enforcement when a patient becomes a threat, and in some court proceedings.

While the odds of a federal penalty for a confidentiality breach are pretty slim, it is much more likely that someone will complain to your licensing board and kill your career. Don’t take the chance after all you have gone through to earn your license.

About Mike Semel
Mike Semel is the President and Chief Compliance Officer for Semel Consulting. He has owned IT businesses for over 30 years, has served as the Chief Information Officer for a hospital and a K-12 school district, and as the Chief Operating Officer for a cloud backup company. Mike is recognized as a HIPAA thought leader throughout the healthcare and IT industries, and has spoken at conferences including NASA’s Occupational Health conference, the New York State Cybersecurity conference, and many IT conferences. He has written HIPAA certification classes and consults with healthcare organizations, cloud services, Managed Service Providers, and other business associates to help build strong cybersecurity and compliance programs. Mike can be reached at 888-997-3635 x 101 or mike@semelconsulting.com.

States Strengthen Data Breach Laws & Regulations

Posted on October 18, 2016 | Written By

The following is a guest blog post by Mike Semel, President and Chief Compliance Officer at Semel Consulting.

If your cyber security and compliance program is focused on just one regulation, like HIPAA or the banking laws, many of the steps you are taking are probably wrong.

Since 2015, a number of states have amended their data breach laws, which can affect ALL BUSINESSES that store information about their residents, including businesses located out of state. The changes address issues identified in breach investigations, as well as public displeasure with the increasing number of data breaches that can result in identity theft.

Forty-seven states, plus DC, Puerto Rico, Guam, and the US Virgin Islands, protect personally identifiable information, which includes a person’s name combined with a driver’s license number, Social Security number, or access information for bank and credit card accounts.

Many organizations mistakenly focus only on the data in their main business application, like an Electronic Health Record system or another database they use for patients or clients. They ignore the fact that e-mails, reports, letters, spreadsheets, scanned images, and other loose documents contain data that is also protected by laws and regulations. These documents can be anywhere – on servers, local PCs, portable laptops, tablets, mobile phones, thumb drives, CDs and DVDs, or somewhere up in the Cloud.

Some businesses also mistakenly believe that moving data to the cloud means that they do not have to have a secure office network. This is a fallacy because your cloud can be accessed by hackers if they can compromise the local devices you use to get to the cloud. In most cases there is local data even though the main business applications are in the cloud. Local computers should have business-class operating systems, with encryption, endpoint protection software, current security patches and updates, and strong physical security. Local networks need business-class firewalls with active intrusion prevention.
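To make those basics auditable, here is a minimal sketch, in Python, of the kind of checklist review an IT security professional might run against a device inventory. The device names and fields are hypothetical; a real audit would pull live data from an RMM or MDM tool rather than a hand-built list.

    # Hypothetical inventory -- in practice, pull this from your RMM/MDM tool.
    inventory = [
        {"device": "front-desk-pc",   "encrypted": True,  "patched": True,  "edr": True},
        {"device": "dr-smith-laptop", "encrypted": False, "patched": True,  "edr": True},
        {"device": "billing-pc",      "encrypted": True,  "patched": False, "edr": False},
    ]

    # Minimum controls named in the paragraph above.
    REQUIRED = ("encrypted", "patched", "edr")

    for row in inventory:
        gaps = [control for control in REQUIRED if not row[control]]
        if gaps:
            print(f"{row['device']}: remediate {', '.join(gaps)}")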

States are strengthening their breach laws to make up for weaknesses in HIPAA and other federal regulations. When a state law and a federal law both apply, whichever requirement is better for the consumer is the one that organizations storing data on that state’s residents (including out-of-state companies) must follow.

Some states have added to the types of information protected by their data breach reporting laws. Many states give their residents the right to sue organizations for not providing adequate cyber security protection. Many states have also instituted faster reporting requirements than the federal laws, meaning that an incident management plan based only on federal requirements may cause you to miss a shorter state reporting deadline.
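As a concrete illustration of that trap, here is a minimal sketch of the deadline arithmetic. The day counts below are hypothetical placeholders, not actual statutory deadlines; the point is that an incident response plan must be driven by the shortest deadline that applies, not the federal one.

    # Hypothetical reporting deadlines, in days -- placeholders for
    # illustration, not the actual statutory timeframes.
    FEDERAL_DEADLINE_DAYS = 60
    STATE_DEADLINE_DAYS = {"CT": 90, "FL": 30, "OH": 45}

    def effective_deadline(states_affected):
        """Return the shortest reporting deadline across all applicable laws."""
        deadlines = [FEDERAL_DEADLINE_DAYS]
        deadlines += [STATE_DEADLINE_DAYS[s] for s in states_affected
                      if s in STATE_DEADLINE_DAYS]
        return min(deadlines)

    # A plan built around the 60-day federal clock would miss this by a month.
    print(effective_deadline(["CT", "FL"]))  # -> 30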

In 2014, California began requiring mandatory free identity theft prevention services even when harm cannot be proven. This year Connecticut adopted a similar standard. Tennessee eliminated the encryption safe harbor, meaning that the loss of encrypted data must be reported. Nebraska eliminated the encryption safe harbor if the encryption keys might have been compromised. Illinois is adding medical records to its list of protected information.

Massachusetts requires every business to implement a comprehensive data protection program including a written plan. Texas requires that all businesses that have medical information (not just health care providers and health plans) implement a staff training program.

REGULATIONS

Laws passed by legislatures are not the only rules that can affect businesses; agency regulations can, too.

The New York State Department of Financial Services has proposed that “any Person operating under or required to operate under a license, registration, charter, certificate, permit, accreditation or similar authorization under the banking law, the insurance law or the financial services law” comply with new cyber security regulations. This includes banks, insurance companies, investment houses, charities, and even covers organizations like car dealers and mortgage companies who handle consumer financial information.

The new rule will require:

  • A risk analysis
  • An annual penetration test and quarterly vulnerability assessments
  • Implementation of a cyber event detection system
  • Appointment of a Chief Information Security Officer (and retention of compliance responsibility if the function is outsourced)
  • System logging and event management
  • A comprehensive security program including policies, procedures, and evidence of compliance

Any organization connected to the Texas Department of Health & Human Services must agree to its Data Use Agreement, which requires that a suspected breach of some of its information be reported within ONE HOUR of discovery.

MEDICAL RECORDS

People often assume that their medical records are protected by HIPAA wherever they are, and are surprised to find out this is not the case. HIPAA only covers organizations that bill electronically for health care services, validate coverage, or act as health plans (which also includes companies that self-fund their health plans).

  • Doctors that only accept cash do not have to comply with HIPAA.
  • Companies like fitness centers and massage therapists collect your medical information but are not covered by HIPAA because they do not bill health plans.
  • Health information in employment records is exempt from HIPAA, like letters from doctors excusing an employee after an injury or illness.
  • Workers Compensation records are exempt from HIPAA.

Some states protect medical information held by any entity that stores it. In these states, every business must protect the medical information it stores, and must report a breach if the information is lost, stolen, or accessed by an unauthorized person. The states include:

  • Arkansas
  • California
  • Connecticut
  • Florida
  • Illinois (beginning January 1, 2017)
  • Massachusetts
  • Missouri
  • Montana
  • Nevada
  • New Hampshire
  • North Dakota
  • Oregon
  • Puerto Rico
  • Rhode Island
  • Texas
  • Virginia
  • Wyoming

Most organizations are not aware that they are governed by so many laws and regulations. They don’t realize that information about their employees and other workforce members is covered. Charities don’t realize the risks they face in protecting donor information, or the impact a breach can have on donations once it becomes public.

We have worked with many healthcare and financial organizations, as well as charities and general businesses, to build cyber security programs that comply with federal and state laws, industry regulations, contractual obligations, and insurance policy requirements. We have been certified in our compliance with the federal NIST Cyber Security Framework (CSF) and have helped others adopt this security framework, which is rapidly gaining acceptance.


HIPAA Cloud Bursts: New Guidance Proves Cloud Services Are Business Associates

Posted on October 10, 2016 | Written By

The following is a guest blog post by Mike Semel, President and Chief Compliance Officer at Semel Consulting.
It’s over. New guidance from the federal Office for Civil Rights (OCR) confirms that cloud services that store patient information must comply with HIPAA.

Many cloud services and data centers have denied their obligations by claiming they are not HIPAA Business Associates because:

  1. They have no access to their customers’ electronic Protected Health Information (ePHI),
  2. Their customers’ ePHI is encrypted and they don’t have the encryption key,
  3. They never look at their customers’ ePHI,
  4. Their customers manage the access to their own ePHI in the cloud,
  5. Their terms and conditions prohibit the storage of ePHI, and
  6. They only store ePHI ‘temporarily’ and therefore must be exempt as a ‘conduit.’

Each of these excuses has been debunked in HIPAA Cloud Guidance released on October 7, 2016, by the Office for Civil Rights.

The new guidance clearly explains that any cloud vendor that stores ePHI must:

  1. Sign a HIPAA Business Associate Agreement,
  2. Conduct a HIPAA Security Risk Analysis,
  3. Comply with the HIPAA Privacy Rule,
  4. Implement HIPAA Security Rule safeguards to protect the ePHI and ensure its confidentiality, integrity, and availability, and
  5. Comply with the HIPAA Breach Reporting Rule by reporting any breaches of ePHI to its customers, and be directly liable for breaches it has caused.

The OCR provides examples of cloud services where clients manage access to their stored data. It discusses how a client can manage its users’ access to the stored data, while the cloud service manages the security of the technical infrastructure. Each needs to have a risk analysis that relates to its share of the responsibilities.
OCR also recently published guidance that cloud services cannot block or terminate a client’s access to ePHI, for example, if they are in a dispute with their customer or the customer hasn’t paid its bill.

As we have been saying for years, the 2013 HIPAA Omnibus Final Rule expanded the definition of HIPAA Business Associates to include anyone outside a HIPAA Covered Entity’s workforce that “creates, receives, maintains, or transmits PHI” on behalf of the Covered Entity. It defines subcontractors as anyone outside of a Business Associate’s workforce that “creates, receives, maintains, or transmits PHI on behalf of another Business Associate.”

‘Maintains’ means storing ePHI, and does not distinguish whether the ePHI is encrypted, whether the Business Associate looks at the ePHI, or even whether its staff has physical access to the devices housing the ePHI (like servers stored in locked cabinets in a data center).
A small medical clinic was fined $100,000 for using a free cloud mail service to communicate ePHI, and for using a free online calendar to schedule patient visits. Recently the OCR issued a $2.7 million penalty against Oregon Health & Science University (OHSU) partly for storing ePHI with a cloud service in the absence of a Business Associate Agreement.

“OHSU should have addressed the lack of a Business Associate Agreement before allowing a vendor to store ePHI,” said OCR Director Jocelyn Samuels.  “This settlement underscores the importance of leadership engagement and why it is so critical for the C-suite to take HIPAA compliance seriously.”

So what does this mean to you?

If you are a Covered Entity or a Business Associate…

  • A common myth is that all ePHI lives in a structured system like an Electronic Health Record system. This is wrong, because ePHI includes any information that identifies a patient, nursing home resident, or health plan member (there are many more identifiers than just a name) and relates to treatment, diagnosis, or payment for health care.

    ePHI can take many forms. It does not have to be in a formal system like an Electronic Health Record (EHR) system; it can be contained in an e-mail, document, spreadsheet, scanned or faxed image, medical image, photograph, or even a voice file, like a patient leaving a message in your computerized phone system requesting a prescription refill. During our risk analyses we find ePHI everywhere – on servers, local devices, portable media, mobile devices, and on cloud services. Our clients are usually shocked when we show them where their ePHI is hiding. (A simple illustration of this kind of discovery scan appears after this list.)

  • Never store ePHI in any cloud service without first knowing that the service is compliant with HIPAA and will sign a HIPAA Business Associate Agreement.

    This automatically disqualifies:

    • The free texting that came with your cellular phone service;
    • Free e-mail services like Gmail, Yahoo!, Hotmail, etc.;
    • Free e-mail from your Internet service provider like Cox, Comcast, Time Warner, Charter, CenturyLink, Verizon, Frontier, etc.;
    • Free file sharing services like Dropbox, Box.com, Google Drive, etc.; and
    • Consumer-grade online backup services.


  • Another common myth is that if data is stored in the cloud, you don’t have to secure your local devices. This is wrong, because if someone can compromise a local device they can gain access to your data in the cloud. Be sure the mobile devices and local devices you use to access the cloud are properly protected, including those on your office network and at users’ homes. This means that all mobile devices (phones and tablets), PCs, and laptops should be secured to prevent unauthorized access. All devices should be constantly updated with security patches, and anti-virus/anti-malware software should be installed and current. If ePHI is stored on a local network, it must be a domain with logging turned on, and logs retained for six years.
  • Use an e-mail service that complies with HIPAA. Microsoft Office 365 and similar business-class services advertise that they provide secure communications and will sign a HIPAA Business Associate Agreement.
  • You may be using a vendor to remotely filter your e-mail before it arrives in your e-mail system. These services often retain a copy of each message so it can be accessed in the event your mail server goes down. Make sure your spam filtering service secures your messages and will sign a HIPAA Business Associate Agreement.


  • Never e-mail or text ePHI, even encrypted, to a caregiver or business associate at one of the free e-mail services.
  • Never use the free texting that came with your cell service to communicate with patients and other caregivers.
  • If you have sent text messages, e-mails, or stored documents containing ePHI using an unapproved service, delete those messages now, and talk with your compliance officer.
  • Review your HIPAA compliance program to ensure it really meets all of HIPAA’s requirements under the Privacy, Security, and Data Breach Reporting rules. There are 176 auditable HIPAA items. You may also need to comply with other federal and state laws, plus contractual and insurance requirements.
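As promised above, here is a simple illustration of an ePHI discovery scan. It is only a sketch: the regex patterns are crude, the file types are limited, and real risk-analysis tooling uses many more identifiers and also parses Office documents, PDFs, and images.

    import os, re

    # Crude patterns for illustration only; expect false positives.
    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def scan_tree(root):
        """Walk a file share and report files that may contain identifiers."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if not name.lower().endswith((".txt", ".csv", ".log")):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    text = open(path, errors="ignore").read()
                except OSError:
                    continue
                hits = [label for label, pat in PATTERNS.items() if pat.search(text)]
                if hits:
                    print(f"{path}: possible {', '.join(hits)}")

    scan_tree(".")  # point this at a file share to see where loose documents hide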

If you are a cloud service, data center, or IT Managed Service Provider …

  • If you have been denying that you are a HIPAA Business Associate, read the new guidance document and re-evaluate your decisions.
  • If you do sign HIPAA Business Associate Agreements, you need to review your internal HIPAA compliance program to ensure that it meets all of the additional requirements in the HIPAA Privacy, Security, and Data Breach Reporting rules.
  • Also become familiar with state regulations that protect personally identifiable information, including driver’s license numbers, Social Security numbers, and credit card and banking information. Know which states include protection of medical information, which will require breach reporting to the state attorney general in addition to the federal government. Know which states have more stringent reporting timeframes than HIPAA. You may have to deal with a large number of states with varying laws, depending on the data you house for customers.


  • Make sure your Service Level Agreements and Terms & Conditions are not in conflict with the new guidance about blocking access to ePHI. Compare your policies for non-payment with the new guidance prohibiting locking out access to ePHI.
  • Make sure your Service Level Agreements and Terms & Conditions include how you will handle a breach caused by your clients when they are using your service. Everyone should know what will happen, and who pays, if you get dragged into a client’s data breach investigation.
  • Make sure all of your subcontractors, and their subcontractors, comply with HIPAA. This includes the data centers you use to house and/or manage your infrastructure, programmers, help desk services, and backup vendors.
  • Learn about HIPAA. We see many cloud vendors that promote their HIPAA compliance but can seldom answer even the most basic questions about the compliance requirements. Some believe they are compliant because they sign Business Associate Agreements. That is just the first step in a complex process to properly secure data and comply with the multiple regulations that affect you. We have helped many cloud services build compliance programs that protected them against significant financial risks.
  • If you have administrative access to your clients’ networks that contain ePHI, you are a Business Associate. Even if your clients have not signed, or have refused to sign, Business Associate Agreements, you are still a Business Associate and must follow all of the HIPAA rules.
  • If you are reselling hosting services, co-location services, cloud storage, file sharing, online backup, Office 365/hosted Exchange, e-mail encryption, or spam filtering, you need to make sure your vendors are all compliant with HIPAA and that they will sign a Business Associate Agreement with you.
  • Look at all the services your regulated clients need. Include in your project and managed service proposals clear links between your clients’ needs and your services. For example, when installing replacement equipment, describe in detail the steps you will take to properly wipe and dispose of devices being replaced that have stored any ePHI. Link your managed services to your client’s needs and include reports that directly tie to your clients’ HIPAA requirements.


What Would a Patient-Centered Security Program Look Like? (Part 2 of 2)

Posted on August 30, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous part of this article laid down a basic premise that the purpose of security is to protect people, not computer systems or data. Let’s continue our exploration of internal threats.

Security Starts at Home

Before we talk about firewalls and anomaly detection for breaches, let’s ask why hospitals, pharmacies, insurers, and others can spread the data from health care records on their own by selling this data (supposedly de-identified) to all manner of third parties, without patient consent or any benefit to the patient.

This is a policy issue that calls for involvement by a wide range of actors throughout society, of course. Policy-makers have apparently already decided that it is socially beneficial–or at least the most feasible course economically–for clinicians to share data with partners helping them with treatment, operations, or payment. There are even rules now requiring those partners to protect the data. Policy-makers have further decided that de-identified data sharing is beneficial to help researchers and even companies using it to sell more treatments. What no one admits is that de-identification lies on a slope–it is not an all-or-nothing guarantee of privacy. The more widely patient data is shared, the more risk there is that someone will break the protections, and that someone’s motivation will change from relatively benign goals such as marketing to something hostile to the patient.

Were HIMSS to take a patient-centered approach to privacy, it would also ask how credentials are handed out in health care institutions, and who has the right to view patient data. How do we minimize the chance of a Peeping Tom looking at a neighbor’s record? And what about segmentation of data, so that each clinician can see only what she needs for treatment? Segmentation has been justly criticized as impractical, but observers have been asking for it for years and there’s even an HL7 guide to segmentation. Even so, it hasn’t proceeded past the pilot stage.

Nor does it make sense to talk about security unless we talk about the rights of patients to get all their data. Accuracy is related to security, and this means allowing patients to make corrections. I don’t know which would be worse: perfectly secure records that are plain wrong in important places, or incorrect assertions being traded around the Internet.

Patients and the Cloud

HIMSS did not ask respondents whether they stored records at their own facilities or in third-party services. For a while, trust in the cloud seemed to enjoy rapid growth–from 9% in 2012 to 40% in 2013. Another HIMSS survey found that 44% of respondents used the cloud to host clinical applications and data–but that was back in 2014, so the percentage has probably increased since then. (Every survey measures different things, of course.)

But before we investigate clinicians’ use of third parties, we must consider taking patient data out of clinicians’ hands entirely and giving it back to patients. Patients will need security training of their own, under those conditions, and will probably use the cloud to avoid catastrophic data loss. The big advantage they have over clinicians, when it comes to avoiding breaches, is that their data will be less concentrated, making it harder for intruders to grab a million records at one blow. Plenty of companies offer personal health records with some impressive features for sharing and analytics. An open source solution called HEART, described in another article, is in the works.

There’s good reason to believe that data is safer in the cloud than on local, network-connected systems. For instance, many of the complex technologies mentioned by HIMSS (network monitoring, single sign on, intrusion detection, and so on) are major configuration tasks that a cloud provider can give to its clients with a click of a button. More fundamentally, hospital IT staffs are burdened with a large set of tasks, of which security is one of the lowest-priority because it doesn’t generate revenue. In contrast, IT staff at cloud providers spend gobs of time keeping up to date on security. They may need extra training to understand the particular regulatory requirements of health care, but the basic ways of accessing data are the same in health care as in any other industry. Respondents to the HIMSS survey acknowledged that cloud systems had low vulnerability (p. 6).

There won’t be any more questions about encryption once patients have their data. When physicians want to see it, they will have to do so over an encrypted path. Even Edward Snowden unreservedly boasted, “Encryption works.”

Security is a way of behaving, not a set of technologies. That fundamental attitude was not addressed by the HIMSS survey, and might not be available through any survey. HIMSS treated security as a routine corporate function, not as a patient right. We might ask the health care field different questions if we returned to the basic goal of all this security, which is the dignity and safety of the patient.

We all know the health record system is broken, and the dismal state of security is one symptom of that failure. Before we invest large sums to prop up a bad record system, let’s re-evaluate security on the basis of a realistic and respectful understanding of the patients’ rights.

What Would a Patient-Centered Security Program Look Like? (Part 1 of 2)

Posted on August 29, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider.

HIMSS has just released its 2016 Cybersecurity Survey. I’m not writing this article just to say that the industry-wide situation is pretty bad. In fact, it would be worth hiring a truck with a megaphone to tour the city if the situation were good. What I want to do instead is take a critical look at the priorities as defined by HIMSS, and call for a different industry focus.

We should start off by dispelling notions that there’s anything especially bad about security in the health care industry. Breaches there get a lot of attention because they’re relatively new and because the personal sensitivity of the data strikes home with us. But the financial industry, which we all thought understood security, is no better–more than 500 million financial records were stolen during just a 12-month period ending in October 2014. Retailers are frequently breached. And what about one of the government institutions most tasked with maintaining personal data, the Office of Personnel Management?

The HIMSS report certainly appears comprehensive to a traditional security professional. They ask about important things–encryption, multi-factor authentication, intrusion detection, audits–and warn the industry of breaches caused by skimping on such things. But before we spend several billion dollars patching the existing system, let’s step back and ask what our priorities are.

People Come Before Technologies

One hint that HIMSS’s assumptions are skewed comes in the section of the survey that asked its respondents what motivated them to pursue greater security. The top motivation, at 76 percent, was a phishing attack (p. 6). In other words, what they noticed out in the field was not some technical breach but a social engineering attack on their staff. It was hard to interpret the text, but it appeared that the respondents had actually experienced these attacks. If so, it’s a reminder that your own staff is your first line of defense. It doesn’t matter how strong your encryption is if you give away your password.

It’s a long-held tenet of the security field that the most common source of breaches is internal: employees who were malicious themselves, or who mistakenly let intruders in through phishing attacks or other exploits. That’s why (you might notice) I don’t use the term “cybersecurity” in this article, even though it’s part of the title of the HIMSS report.

The security field has standardized ways of training staff to avoid scams. Explain to them the most common vectors of attack. Check that they’re creating strong passwords, an area where increased computing power is creating an escalating war (and where the value of frequent password changes has been challenged). Better yet, use two-factor authentication, which may help you avoid the infuriating burden of passwords. Run mock phishing scams to test your users. Set up regular audits of access to sensitive data–a practice that HIMSS found among only 60% of respondents (p. 3). And give someone the job of actually checking the audit logs.
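To make the two-factor suggestion concrete, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, which most authenticator apps implement, using only the Python standard library. The shared secret shown is a made-up example.

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, interval=30, digits=6):
        """Compute the current RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // interval)
        digest = hmac.new(key, counter, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # example secret; server and phone share it

Because the server and the phone compute the same code independently from the shared secret and the clock, a phished password alone is no longer enough to log in.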

Why didn’t HIMSS ask about most of these practices? It began the project with a technology focus instead of a human focus. We’ll take the reverse approach in the second part of this article.

OCR Cracking Down On Business Associate Security

Posted on May 13, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

For most patients, a data breach is a data breach. While it may make a big difference to a healthcare organization whether the source of a security vulnerability was outside its direct control, most consumers aren’t as picky. Once you have to disclose to them that their data has been hacked, they aren’t likely to be more forgiving just because one of your business associates was the source of the leak.

Just as importantly, federal regulators seem to be growing increasingly frustrated that healthcare organizations aren’t doing a good job of managing business associate security. It’s little wonder, given that about 20% of the 1,542 healthcare data breaches affecting 500 or more individuals reported since 2009 have involved business associates. (This is probably a conservative estimate, as reports to OCR by covered entities don’t always mention the involvement of a business associate.)

To this point, the HHS Office for Civil Rights recently issued a cyber-alert stressing the urgency of addressing these issues. The alert, issued by OCR earlier this month, noted that a “large percentage” of covered entities assume they will not be notified of security breaches or cyberattacks experienced by their business associates. That, folks, is pretty weak sauce.

Healthcare organizations also believe that it’s difficult to manage security incidents involving business associates, and impossible to determine whether data safeguards and security policies and procedures at the business associates are adequate. Instead, it seems, many covered entities operate on the “keeping our fingers crossed” system, providing little or no business associate security oversight.

However, that is more than unwise, given that a number of major breaches have taken place because of oversights by business associates. For example, in 2011 information on 4.9 million individuals was exposed when unencrypted backup computer tapes were stolen from the car of a Science Applications International Corp. employee who was transporting the tapes on behalf of the military health program TRICARE.

The solution to this problem is straightforward, if complex to implement, the alert suggests. “Covered entities and business associates should consider how they will confront a breach at their business associates or subcontractors,” and make detailed plans as to how they’ll address and report on security incidents among these groups, OCR suggests.

Of course, in theory business associates are required to put their own policies and procedures in place to prevent, detect, contain, and correct security violations under the HIPAA regs. But that will be no consolation if your data is exposed because you weren’t holding their feet to the fire.

Besides, OCR isn’t just sending out vaguely threatening emails. In March, OCR began Phase 2 of its HIPAA privacy and security audits of covered entities and business associates. These audits will “review the policies and procedures adopted and employed by covered entities and their business associates to meet selected standards and implementation specifications of the Privacy, Security, and Breach Notification Rules,” OCR said at the time.

Medical Device Security At A Crossroads

Posted on April 28, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years.

As anyone reading this knows, connected medical devices are vulnerable to attacks from outside malware. Security researchers have been warning healthcare IT leaders for years that network-connected medical devices had poor security in place, ranging from image repository backups with no passwords to CT scanners with easily-changed configuration files, but far too many problems haven’t been addressed.

So why haven’t providers addressed the security problems? It may be because neither medical device manufacturers nor hospitals are set up to address these issues. “The reality is both sides — providers and manufacturers — do not understand how much the other side does not know,” said John Gomez, CEO of cybersecurity firm Sensato. “When I talk with manufacturers, they understand the need to do something, but they have never had to deal with cyber security before. It’s not a part of their DNA. And on the hospital side, they’re realizing that they’ve never had to lock these things down. In fact, medical devices have not even been part of the IT group in hospitals.”

Gomez, who spoke with Healthcare IT News, runs one of two companies backing a new initiative dedicated to securing medical devices and health organizations. (The other coordinating company is healthcare security firm Divurgent.)

Together, the two have launched the Medical Device Cybersecurity Task Force, which brings together a grab bag of industry players including hospitals, hospital technologists, medical device manufacturers, cyber security researchers and IT leaders. “We continually get asked by clients about the best practices for securing medical devices,” Gomez told Healthcare IT News. “There is little guidance and a lot of misinformation.”

The task force includes 15 health systems and hospitals, including Children’s Hospital of Atlanta, Lehigh Valley Health Network, Beebe Healthcare and Intermountain, along with tech vendors Renovo Solutions, VMware Inc. and AirWatch.

I mention this initiative not because I think it’s huge news, but rather as a reminder that the time to act on medical device vulnerabilities is more than nigh. There’s a reason why the Federal Trade Commission and the HHS Office of Inspector General, along with the IEEE, have launched their own initiatives to help medical device manufacturers boost cybersecurity. I believe we’re at a crossroads; on one side lies renewed faith in medical devices, and on the other nothing less than patient privacy violations, harm, and even death.

It’s good to hear that the Task Force plans to create a set of best practices for both healthcare providers and medical device makers, which will help get their cybersecurity practices up to snuff. Another interesting effort they have underway is the creation of an app that will help healthcare providers evaluate medical devices, while feeding a database that members can access to study the market.

But reading about their efforts also hammered home to me how much ground we have to cover in securing medical devices. Well-intentioned, even relatively effective, grassroots efforts are good, but they’re only a drop in the bucket. What we need is nothing less than a continuous knowledge feed between medical device makers, hospitals, clinics and clinicians.

And why not start by taking the obvious step of integrating the medical device and IT departments to some degree? That seems like a no-brainer. But unfortunately, the rest of the work to be done will take a lot of thought.

The Need for Speed (In Breach Protection)

Posted on April 26, 2016 | Written By

The following is a guest blog post by Robert Lord, Co-founder and CEO of Protenus.
The speed at which a hospital can detect a privacy breach could mean the difference between a brief, no-penalty notification and a multi-million dollar lawsuit.  This month it was reported that health information from 2,000 patients was exposed when a Texas hospital took four months to identify a data breach caused by an independent healthcare provider.  A health system in New York similarly took two months to determine that 2,500 patient records may have been exposed as a result of a phishing scam and potential breach reported two months prior.

The rise in reported breaches this year, from phishing scams to stolen patient information, only underscores the risk of lag times between breach detection and resolution. Why are lags of months and even years so common? And what can hospitals do to better prepare against threats that may reach the EHR layer?

Traditional compliance and breach detection tools are not nearly as effective as they need to be. The most widely used methods of detection involve either infrequent random audits or extensive manual searches through records following a patient complaint. For example, if a patient suspects that his medical record has been inappropriately accessed, a compliance officer must first review EMR data from the various systems involved. Armed with a highlighter (or a large Excel spreadsheet), the officer must then analyze thousands of rows of access data, and cross-reference this information with the officer’s implicit knowledge about the types of people who have permission to view that patient’s records. Finding an inconsistency – a person who accessed the records without permission – can take dozens of hours of menial work per case. Another issue with investigating breaches based on complaints is that there is often no evidence that the breach actually occurred. Nonetheless, the hospital is legally required to investigate all claims in a timely manner, and such investigations are costly and time-consuming.
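A minimal sketch of that cross-referencing step, using hypothetical data, shows how mechanical the underlying check is and why it begs for automation:

    # Hypothetical care-team assignments and access log entries.
    care_team = {
        "patient-001": {"dr_jones", "nurse_lee"},
        "patient-002": {"dr_patel"},
    }

    access_log = [
        ("2016-04-01 09:14", "dr_jones", "patient-001"),
        ("2016-04-01 11:02", "clerk_doe", "patient-001"),  # not on the care team
        ("2016-04-02 16:45", "dr_patel", "patient-002"),
    ]

    # Flag any access by someone not on the patient's care team.
    for timestamp, user, patient in access_log:
        if user not in care_team.get(patient, set()):
            print(f"REVIEW: {user} accessed {patient} at {timestamp}")

In practice the hard part is everything this sketch assumes away: assembling accurate care-team data across systems, and handling the legitimate exceptions (consults, float staff, billing) that flood naive rule-based systems with alerts.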

According to a study by the Ponemon Institute, it takes an average of 87 days from the time a breach occurs to the time the officer becomes aware of the problem, and, given the arduous task at hand, it then takes another 105 days for the officer to resolve the issue. In total, it takes approximately 6 months from the time a breach occurs to the time the issue is resolved. Additionally, if a data breach occurs but a patient does not notice, it could take months – or even years – for someone to discover the problem. And of course, the longer it takes the hospital to identify a problem, the higher the cost of identifying how the breach occurred and remediating the situation.

In 2013, Rouge Valley Centenary Hospital in Scarborough, Canada, revealed that the contact information of approximately 8,300 new mothers had been inappropriately accessed by two employees. Since 2009, the two employees had been selling the contact information of new mothers to a private company specializing in Registered Education Savings Plans (RESPs). Some of the patients later reported that days after coming home from the hospital with their newborn child, they started receiving calls from sales representatives at the private RESP company. The marketing representatives were extremely aggressive, and seemed to know the exact date their child had been born.

The most terrifying aspect of this story is how the hospital was able to find out about the data breach: remorse and human error! One employee voluntarily turned himself in, while the other accidentally left patient records on a printer. Had these two events not happened, the scam could have continued for much longer than the four years it did before it was finally discovered.

Rouge Valley Hospital is currently facing a $412 million lawsuit over this breach of privacy. Arguably even more damaging is that they have lost the trust of patients who relied on the hospital for care and for the confidentiality of their medical treatments.

As exemplified by the ramifications of the Rouge Valley Hospital breach and the new breaches discovered almost weekly in hospitals around the world, the current tools used to detect privacy breaches in electronic health records are not sufficient. A system needs to have the ability to detect when employees are accessing information outside their clinical and administrative responsibilities. Had the Scarborough hospital known about the inappropriately viewed records the first time they had been accessed, they could have investigated earlier and protected the privacy of thousands of new mothers.

Every person who seeks a hospital’s care has the right to privacy and the protection of their medical information. However, due to the sheer volume of patient records accessed each day, it is impossible for compliance officers to efficiently detect breaches without new and practical tools. Current rule-based analytical systems often overburden officers with alerts, and are only a minor improvement over manual detection methods.

We are in the midst of a paradigm shift, with hospitals taking a more proactive and layered approach to health data security. New technology that uses machine learning and big data science to review each access to medical records will replace traditional compliance technology and streamline threat detection and resolution cycles from months to a matter of minutes, making identifying a privacy breach or violation as simple and fast as the action that may have caused it in the first place. Understanding how to select and implement these next-generation tools will be a new and important challenge for the compliance officers of the future, but one they can no longer afford to delay.
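As a toy illustration, and emphatically not Protenus’s actual method, even a simple per-user statistical baseline can surface the kind of spike that a manual audit would catch only by luck:

    import statistics

    # Hypothetical counts of records accessed per day over the past week.
    history = {
        "nurse_lee": [22, 25, 19, 24, 21, 23, 26],
        "clerk_doe": [5, 4, 6, 5, 4, 5, 48],  # sudden spike on the last day
    }

    for user, counts in history.items():
        baseline, today = counts[:-1], counts[-1]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0  # avoid dividing by zero
        if (today - mean) / stdev > 3:
            print(f"ALERT: {user} accessed {today} records (baseline ~{mean:.0f}/day)")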

Protenus is a health data security platform that protects patient data in electronic medical records for some of the nation’s top-ranked hospitals. Using data science and machine learning, Protenus technology uniquely understands the clinical behavior and context of each user that is accessing patient data to determine the appropriateness of each action, elevating only true threats to patient privacy and health data security.

Are Ransomware Attacks A HIPAA Issue, Or Just Our Fault?

Posted on April 18, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years.

With ransomware attacks hitting hospitals in growing numbers, it’s growing more urgent for healthcare organizations to have a routine and effective response to such attacks. While over the short term, providers are focused mostly on survival, eventually they’ll have to consider big-picture implications — and one of the biggest is whether a ransomware intrusion can be called a “breach” under federal law.

As readers know, providers must report any sizable breach to the HHS Office for Civil Rights. So far, though, it seems that the feds haven’t issued any guidance as to how they see this issue. However, people in the know have been talking about this, and here’s what they have to say.

David Holtzman, a former OCR official who now serves as vice president of compliance strategies at security firm CynergisTek, told Health Data Management that as long as the data was never compromised, a provider may be in the clear. If an organization can show OCR proof that no data was accessed, it may be able to avoid having the incident classed as a breach.

And some legal experts agree. Attorney David Harlow, who focuses on healthcare issues, told Forbes: “We need to remember that HIPAA is narrowly drawn, and data breaches are defined as the unauthorized ‘access, acquisition, use or disclosure’ of PHI. [And] in many cases, ransomware ‘wraps’ PHI rather than breaches it.”

But as I see it, ransomware attacks should give health IT security pros pause even if they don’t have to report a breach to the federal government. After all, as Holtzman notes, the HIPAA security rule requires that providers put appropriate safeguards in place to ensure the confidentiality, integrity, and availability of ePHI. And fairly or not, any form of malware intrusion that succeeds raises questions about providers’ security policies and approaches.

What’s more, ransomware attacks may point to underlying weaknesses in the organization’s overall systems architecture. “Why is the operating system allowing this application to access this data?” asked one reader in comments on a related EMR and HIPAA post. “There should be no possible way for a database that is only read/write for specified applications to be modified by a foreign encryption application,” the reader noted. “The database should refuse the instruction, the OS should deny access, and the security system should lock the encryption application out.”

To be fair, not all intrusions are someone’s “fault.” Ransomware creators are innovating rapidly, and are arguably equipped to find new vectors of infection more quickly than security experts can track them. In fact, easy-to-deploy ransomware as a service is emerging, making it comparatively simple for less-skilled criminals to use. And they have a substantial incentive to do so. According to one report, one particularly sophisticated ransomware strain has brought $325 million in profits to groups deploying it.

Besides, downloading actual data is so five years ago. If you’re attacking a provider, extorting payment through ransomware is much easier than attempting to resell stolen healthcare data. Why go to all that trouble when you can get your cash up front?

Still, the reality is that healthcare organizations must be particularly careful when it comes to protecting patient privacy, both for ethical and regulatory reasons. Perhaps ransomware will be the jolt that pushes lagging players to step up and invest in security, as it creates a unique form of havoc that could easily put patient care at risk. I certainly hope so.

Securing Mobile Devices in Healthcare

Posted on February 8, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

This post is sponsored by Samsung Business. All thoughts and opinions are my own.

When you look at healthcare security on the whole, I think everyone would agree that healthcare has a lot of work to do. Just taking into account the top 5 health data breaches in 2015, approximately 30-35% of people in the US have had their health data breached. I’m afraid that in 2016 these numbers are likely going to get worse. Let me explain why I think this is the case.

First, meaningful use required healthcare organizations to do a HIPAA risk assessment. While many organizations didn’t really do a high quality HIPAA risk assessment, it still motivated a number of organizations to do something about privacy and security. Even if it wasn’t the step forward many would like, it was still a step forward.

Now that meaningful use is being replaced, what other incentive are doctors going to have to take a serious look at privacy and security? If 1/3 of patients having their records breached in 2015 isn’t motivating enough, what’s going to change in 2016?

Second, hackers are realizing the value of health data and the ease with which they can breach health data systems. Plus, with so many organizations going online with their EHR software and other healthcare IT software, these are all new targets for hackers to attack.

Third, while every doctor in healthcare has had a mobile device, not many of them have accessed their EHR on it, since many EHR vendors didn’t support mobile devices very well. Over the next few years we’ll see EHR vendors finally produce high quality, native mobile apps that access EHR software. Once they do, not only will doctors be accessing patient data on their mobile devices, but so will nurses, lab staff, HIM, etc. While all of this mobility is great, it creates a whole new set of vulnerabilities that can be exploited if not secured properly.

I’m not sure what we can do to make organizations care about privacy and security, although once a breach happens, they start to care. We’re also not going to be able to stem the tide of hackers interested in stealing health data. However, we can do something about securing the plethora of mobile devices in healthcare. In fact, it’s a travesty when we don’t, since mobile device security has become so much easier.

I remember in the early days of smartphones, there weren’t very many great enterprise tools to secure your smartphones. These days there are a ton of great options and many of them come natively from the vendor who provides you the phone. Many are even integrated into the phone’s hardware as well as software. A good example of this is the mobile security platform, Samsung KNOX™. Take a look at some of its features:

  • Separate Work and Personal Data (Great for BYOD)
  • Multi-layered Hardware and Software Security
  • Easy Mobile Device Management Integration
  • Enterprise Grade Security and Encryption

It wasn’t that long ago that we had to kludge together multiple solutions to achieve all of these things. Now they come in one nice, easy-to-implement package. The excuses for not securing mobile devices in healthcare should disappear. If a breach occurs in your organization because a mobile device wasn’t secured, I assure you that those excuses will feel pretty hollow.

For more content like this, follow Samsung on Insights, Twitter, LinkedIn, YouTube and SlideShare.