
The Need for Speed (In Breach Protection)

Posted on April 26, 2016 | Written By

The following is a guest blog post by Robert Lord, Co-founder and CEO of Protenus.
The speed at which a hospital can detect a privacy breach could mean the difference between a brief, no-penalty notification and a multi-million dollar lawsuit.  This month it was reported that health information from 2,000 patients was exposed when a Texas hospital took four months to identify a data breach caused by an independent healthcare provider.  A health system in New York similarly took two months to determine that 2,500 patient records may have been exposed as a result of a phishing scam and potential breach reported two months prior.

The rise in reported breaches this year, from phishing scams to stolen patient information, only underscores the risk of lag times between breach detection and resolution. Why are lags of months and even years so common? And what can hospitals do to better prepare against threats that may reach the EHR layer?

Traditional compliance and breach detection tools are not nearly as effective as they need to be. The most widely used methods of detection involve either infrequent random audits or extensive manual searches through records following a patient complaint. For example, if a patient suspects that his medical record has been inappropriately accessed, a compliance officer must first review EMR data from the various systems involved.  Armed with a highlighter (or a large Excel spreadsheet), the officer must then analyze thousands of rows of access data and cross-reference this information with the officer’s implicit knowledge of the types of people who have permission to view that patient’s records. Finding an inconsistency – a person who accessed the records without permission – can take dozens of hours of menial work per case.  Another issue with investigating breaches based on complaints is that there is often no evidence that the breach actually occurred. Nonetheless, the hospital is legally required to investigate all claims in a timely manner, and such investigations are costly and time-consuming.
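
To make that manual process concrete, here is a minimal sketch of the cross-reference a compliance officer performs by hand. The file name, column names and care-team roster are hypothetical; a real investigation would pull access logs from each EMR system involved.

```python
# Minimal sketch of the cross-reference a compliance officer does by hand:
# flag every access to one patient's chart by a user who is not on the
# documented care team. File and column names are hypothetical.
import csv

def find_suspect_accesses(access_log_csv, authorized_users):
    """Return log rows whose user is not authorized to view this patient's chart."""
    suspects = []
    with open(access_log_csv, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: timestamp, username, action
            if row["username"] not in authorized_users:
                suspects.append(row)
    return suspects

if __name__ == "__main__":
    # Built from the officer's knowledge of who may legitimately view the record
    care_team = {"dr_smith", "rn_jones", "registration_clerk_01"}
    for row in find_suspect_accesses("patient_1234_access_log.csv", care_team):
        print(row["timestamp"], row["username"], row["action"])
```

Even automated this way, the hard part is still encoding who is authorized, which is exactly the implicit knowledge the officer otherwise carries in his or her head.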

According to a study by the Ponemon Institute, it takes an average of 87 days from the time a breach occurs to the time the officer becomes aware of the problem, and, given the arduous task at hand, it then takes another 105 days for the officer to resolve the issue. In total, it takes approximately 6 months from the time a breach occurs to the time the issue is resolved. Additionally, if a data breach occurs but a patient does not notice, it could take months – or even years – for someone to discover the problem. And of course, the longer it takes the hospital to identify a problem, the higher the cost of identifying how the breach occurred and remediating the situation.

In 2013, Rouge Valley Centenary Hospital in Scarborough, Canada, revealed that the contact information of approximately 8,300 new mothers had been inappropriately accessed by two employees. Since 2009, the two employees had been selling the contact information of new mothers to a private company specializing in Registered Education Savings Plans (RESPs). Some of the patients later reported that days after coming home from the hospital with their newborn child, they started receiving calls from sales representatives at the private RESP company. The representatives were extremely aggressive and seemed to know the exact date their child had been born.

The most terrifying aspect of this story is how the hospital was able to find out about the data breach: remorse and human error! One employee voluntarily turned himself in, while the other accidentally left patient records on a printer. Had these two events not happened, the scam could have continued for much longer than the four years it did before it was finally discovered.

Rouge Valley Hospital is currently facing a $412 million lawsuit over this breach of privacy. Arguably even more damaging, the hospital has lost the trust of patients who relied on it for care and for the confidentiality of their medical treatment.

As exemplified by the ramifications of the Rouge Valley Hospital breach and the new breaches discovered almost weekly in hospitals around the world, the current tools used to detect privacy breaches in electronic health records are not sufficient. A system needs to have the ability to detect when employees are accessing information outside their clinical and administrative responsibilities. Had the Scarborough hospital known about the inappropriately viewed records the first time they had been accessed, they could have investigated earlier and protected the privacy of thousands of new mothers.

Every person who seeks a hospital’s care has the right to privacy and the protection of their medical information. However, due to the sheer volume of patient records accessed each day, it is impossible for compliance officers to efficiently detect breaches without new and practical tools. Current rule-based analytical systems often overburden officers with alerts and are only a minor improvement over manual detection methods.

We are in the midst of a paradigm shift, with hospitals taking a more proactive and layered approach to health data security. New technology that uses machine learning and big data science to review each access to medical records will replace traditional compliance technology and streamline threat detection and resolution cycles from months to a matter of minutes, making identifying a privacy breach or violation as simple and fast as the action that may have caused it in the first place.  Understanding how to select and implement these next-generation tools will be a new and important challenge for the compliance officers of the future, but one that they can no longer afford to delay.
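
As a rough illustration of the kind of approach described above (not Protenus’s actual method), an unsupervised anomaly detector can score each access against a baseline of normal behavior. The features and numbers below are hypothetical.

```python
# Illustrative only: flag anomalous EHR accesses with an unsupervised model.
# Features per access (all hypothetical): accesses during the user's shift,
# whether the user is on the patient's care team (1/0), and hour of day.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_accesses = np.column_stack([
    rng.normal(40, 5, 1000),   # typical per-shift access volume
    np.ones(1000),             # accesses almost always come from the care team
    rng.normal(13, 3, 1000),   # mostly daytime hours
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_accesses)

# A 2 a.m. access by someone with no care relationship and a huge access volume
suspicious = np.array([[400, 0, 2]])
print(model.predict(suspicious))   # -1 = flagged as anomalous, 1 = looks normal
```

The point of the shift described above is that a model like this reviews every access as it happens, rather than waiting for a patient complaint or a random audit.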

Protenus is a health data security platform that protects patient data in electronic medical records for some of the nation’s top-ranked hospitals. Using data science and machine learning, Protenus technology uniquely understands the clinical behavior and context of each user that is accessing patient data to determine the appropriateness of each action, elevating only true threats to patient privacy and health data security.

Breach Affecting 2.2M Patients Highlights New Health Data Threats

Posted on April 4, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A Fort Myers, FL-based cancer care organization is paying a massive price for a health data breach that exposed personal information on 2.2 million patients late last year. This incident is also shedding light on the growing vulnerability of non-hospital healthcare data, as you’ll see below.

Recently, 21st Century Oncology was forced to warn patients that an “unauthorized third party” had broken into one of its databases. Officials said that they had no evidence that medical records were accessed, but conceded that breached information may have included patient names, Social Security numbers, insurance information, and diagnosis and treatment data.

Notably, the cancer care chain — which operates 145 centers in 17 states — didn’t learn about the breach until the FBI informed the company that it had happened.

Since that time, 21st Century has faced a broad range of legal consequences. Three lawsuits related to the breach have been filed against the company, all alleging that the breach exposed the plaintiffs to a heightened risk of harm.  Patient indignation seems to have been stoked, in part, because patients did not learn about the breach until five months after it happened, allegedly at the request of investigating FBI officials.

“While more than 2.2 million 21st Century Oncology victims have sought out and/or pay for medical care from the company, thieves have been hard at work, stealing and using their hard-to-change Social Security numbers and highly sensitive medical information,” said plaintiff Rona Polovoy in her lawsuit.

Polovoy’s suit also contends that the company should have been better prepared for such breaches, given that it suffered a similar security lapse between October 2011 and August 2012, when an employee used patient names, Social Security numbers and dates of birth to file fraudulent tax refund claims. She claims that the current lapse demonstrates that the company did little to clean up its cybersecurity act.

Another plaintiff, John Dickman, says that the breach has filled his life with needless anxiety. In his legal filings he says that he “now must engage in stringent monitoring of, among other things, his financial accounts, tax filings, and health insurance claims.”

All of this may be grimly entertaining if you aren’t the one whose data was exposed, but there’s more to this case than meets the eye. According to a cybersecurity specialist quoted in Infosecurity Magazine, the 21st Century network intrusion highlights how exposed healthcare organizations outside the hospital world are to data breaches.

I can’t help but agree with TrapX Security executive vice president Carl Wright, who told the magazine that skilled nursing facilities, dialysis centers, imaging centers, diagnostic labs, surgical centers and cancer treatment facilities like 21st Century are all in network intruders’ crosshairs. Not only that, he notes that large extended healthcare networks such as accountable care organizations are vulnerable.

And that’s a really scary thought. While he doesn’t say so specifically, it’s logical to assume that the more unrelated partners you weld together across disparate networks, the more security-related points of failure you create. Isn’t it lovely how security threats emerge to meet every advance in healthcare?

Cyber Breach Insurance May Be Useless If You’re Negligent

Posted on March 28, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Ideally, your healthcare organization will never see a major data breach. But realistically, given how valuable healthcare data is these days — and the extent to which many healthcare firms neglect data security — it’s safer to assume that you will have to cope with a breach at some point.

In fact, it might be wise to assume that some form of costly breach is inevitable. After all, as one infographic points out, 55 healthcare organizations reported network attacks resulting in data breaches last year, which resulted in 111,809,322 individuals’ health record information being compromised. (If you haven’t done the math in your head, that’s a staggering 35% of the US population.)

The capper: if things don’t get better, the US healthcare industry stands to lose $305 billion in cumulative lifetime patient revenue due to cyberattacks likely to take place over the next five years.

So, by all means, protect yourself by any means available. However, as a recent legal battle suggests, simply buying cyber security insurance isn’t a one-step solution. In fact, your policy may not be worth much if you don’t do your due diligence when it comes to network and Internet security.

The lawsuit, Columbia Casualty Company v. Cottage Health System, shows what happens when a healthcare organization (allegedly) relies on its cyber insurance policy to protect it against breach costs rather than working hard to prevent such slips.

Back in December 2013, the three-hospital Cottage Health System notified 32,755 of its patients that their PHI had been compromised. The breach occurred when the health system and one of its vendors, InSync, stored unencrypted medical records on an Internet-accessible system.

It later came out that the breach was probably caused by careless FTP settings on both systems’ servers, which permitted anonymous user access, essentially opening up patient health records to anyone who could use Google. (Wow. If true, that’s really embarrassing. I doubt a sharp 13-year-old script kiddie would make that mistake.)
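
For readers curious what “anonymous user access” looks like in practice, the sketch below probes whether an FTP server accepts a login with no credentials, which is the misconfiguration described above. The hostname is a placeholder, and this should only be run against servers you are authorized to test.

```python
# Sketch: does an FTP server accept anonymous logins and expose a file listing?
# Hostname is a placeholder; only test systems you are authorized to assess.
from ftplib import FTP, error_perm

def allows_anonymous_ftp(host, timeout=10):
    try:
        with FTP(host, timeout=timeout) as ftp:
            ftp.login()                   # no arguments = anonymous login attempt
            return True, ftp.nlst()[:5]   # a successful listing means the share is world-readable
    except (error_perm, OSError):
        return False, []

if __name__ == "__main__":
    exposed, sample = allows_anonymous_ftp("ftp.example.org")
    print("anonymous access allowed:", exposed, sample)
```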

Anyway, a group of presumably ticked off patients filed a class action suit against Cottage asking for $4.125 million. At first, cyber breach insurer Columbia Casualty paid out the $4.125 million and settled the case. Now, however, the insurer is suing Cottage, asking the health system to pay it back for the money it paid out to the class action members. It argues that Cottage was negligent due to:

  • a failure to continuously implement the procedures and risk controls identified in the application, including, but not limited to, its failure to replace factory default settings and its failure to ensure that its information security systems were securely configured; and
  • a failure to regularly check and maintain security patches on its systems, its failure to regularly re-assess its information security exposure and enhance risk controls, its failure to have a system in place to detect unauthorized access or attempts to access sensitive information stored on its servers and its failure to control and track all changes to its network to ensure it remains secure.

Not only that, Columbia Casualty asserts, Cottage lied about following a minimum set of security practices known as a “Risk Control Self Assessment” required as part of the cyber insurance application.

Now, if the cyber insurer’s allegations are true, Cottage’s behavior may have been particularly egregious. And no one has proven anything yet, as the case is still in the early stages, but this dispute should still stand as a warning to all healthcare organizations. If you neglect security, then try to get an insurance company to cover your behind when breaches occur, you might be out of luck.

Owensboro Health Muhlenberg Community Hospital Breach

Posted on November 17, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

In this week’s HIPAA breach rubber-necking, we have the FBI discovering suspicious network activity from third parties at Owensboro Health Muhlenberg Community Hospital, a 135-bed acute care hospital in Kentucky. Here’s a description of the incident:

On September 16, 2015, the Federal Bureau of Investigation (FBI) notified the hospital of suspicious network activity involving third parties. Upon learning this information, the hospital took immediate action, including initiating an internal investigation and engaging a leading digital forensics and security firm to investigate this matter. Based upon this review, the hospital confirmed that a limited number of computers were infected with a keystroke logger designed to capture and transmit data as it was entered onto the affected computers. The infection may have started as early as January 2012.

I’m quite interested in how they came up with the January 2012 date. Was that the date that the infected computers were installed? Are they just being cautious and assuming that the computers could have had the keylogger since the beginning and they’re handling the breach that way?

Of course, Muhlenberg Community Hospital is sending breach notifications to all patients in its records database, to employees, and to contractors and providers that were credentialed at the hospital since 2012. They don’t give a number of how many records or people this constitutes, but it has to be a massive number.

Here’s a look at what information they think could have been accessed by the keylogger:

The affected computers were used to enter patient financial data and health information, information about persons responsible for a patient’s bill and employee/contractor data, including potentially name, address, telephone number(s), birthdate, Social Security number, driver’s license/state identification number, medical and health plan information (such as health insurance number, medical record number, diagnoses and treatment information, and payment information), financial account number, payment card information (such as primary account number and expiration date) and employment-related information. Additionally, some credentialing-related information for providers may be impacted. The hospital also believes that the malware could have captured username and password information for accounts or websites that were accessed by employees, contractors or providers using the affected terminals. The hospital has no indication that the data has been used inappropriately.

They’re offering the usual identity protection services to all those affected. However, I was quite interested in their expanded list of steps people can take to guard against possible identity theft and fraud:

  • Enroll in Identity Protection Services
  • Explanation of Benefits Review
  • Check Credit Reports
  • Review Payment Card Statements
  • Change Your Passwords
  • Consult the Identity Theft Protection Guide

It’s clear that the number of breaches is accelerating. However, this case is particularly interesting because the hospital could have been breached for more than three years and is just now finding out. I expect we’ll see a lot more of this activity in the future.

Phase 2 HIPAA Audits Kick Off With Random Surveys

Posted on June 9, 2015 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Ideally, the only reason you would know about the following is due to scribes such as myself — but for the record, the HHS Office for Civil Rights has sent out a bunch of pre-audit screening surveys to covered entities. Once it gets responses, it will do a Phase 2 audit not only of covered entities but also business associates, so things should get heated.

While these take the form of Meaningful Use audits, covering incentives paid from January 1, 2011 through June 30, 2014, it’s really more about checking how well you protect ePHI.

This effort is a drive to be sure that providers and BAs are complying with the HIPAA privacy, security and breach notification requirements. Apparently OCR found, during Phase 1 pilot audits in 2011 and 2012, that there was “pervasive non-compliance” with regs designed to safeguard protected health information, the National Law Review reports.

However, these audits aren’t targeting the “bad guys.” Selection for the audits is random, according to the HHS Office of Inspector General.

So if you get one of the dreaded pre-screening letters, how should you respond? According to a thoughtful blog post by Maryanne Lambert for CureMD, auditors will be focused on the following areas:

  • Risk Assessment audits and reports
  • EHR security plan
  • Organizational chart
  • Network diagram
  • EHR web sites and patient portals
  • Policies and procedures
  • System inventory
  • Tools to perform vulnerability scans
  • Central log and event reports
  • EHR system users list
  • Contractors supporting the EHR and network perimeter devices.

According to Lambert, the feds will want to talk to the person primarily responsible for each of these areas, a process which could quickly devolve into a disaster if those people aren’t prepared. She recommends that if you’re selected for an audit, you run through a mock audit ahead of time to make sure these staff members can answer questions about how well policies and processes are followed.

Not that anyone would take the presence of HHS on their premises lightly, but it’s worth bearing in mind that a stumble in one corner of your operation could have widespread consequences. Lambert notes that in addition to defending your security precautions, you have to make sure that all parts of your organization are in line:

Be mindful while planning for this audit as deficiencies identified for one physician in a physician group or one hospital within a multi-hospital system, may apply to the other physicians and hospitals using the same EHR system and/or implementing meaningful use in the same way.  Thus, the incentive payments at risk in this audit may be greater than the payments to the particular provider being audited.

But as she points out, there is one possible benefit to being audited. If you prepare well, it might save you not only trouble with HHS but possibly lawsuits for breaches of information. Hey, everything has some kind of silver lining, right?

An Important Look at HIPAA Policies For BYOD

Posted on May 11, 2015 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Today I stumbled across an article which I thought readers of this blog would find noteworthy. In the article, Art Gross, president and CEO at HIPAA Secure Now!, made an important point about BYOD policies. He notes that while much of today’s corporate computing is done on mobile devices such as smartphones, laptops and tablets — most of which access their enterprise’s e-mail, network and data — HIPAA offers no advice as to how to bring those devices into compliance.

Given that most of the spectacular HIPAA breaches in recent years have arisen from the theft of laptops, and will likely proceed to theft of tablet and smartphone data, it seems strange that HHS has done nothing since the rule was drafted in 2003 to update it to address the increasing use of mobile devices.  As Gross rightly asks, “If the HIPAA Security Rule doesn’t mention mobile devices, laptops, smartphones, email or texting how do organizations know what is required to protect these devices?”

Well, Gross’ peers have given the issue some thought, and here are some suggestions from law firm DLA Piper on how to dissect the issues involved. BYOD challenges under HIPAA, notes author Peter McLaughlin, include:

Control:  To maintain protection of PHI, providers need to control many layers of computing technology, including network configuration, operating systems, device security and transmissions outside the firewall. McLaughlin notes that Android OS-based devices pose a particular challenge, as the system is often modified to meet hardware needs. And in both iOS and Android environments, IT administrators must also manage users’ tendency to connect to their preferred cloud services and download their own apps. Otherwise, a large volume of protected health data can end up outside the firewall.

Compliance:  Healthcare organizations and their business associates must take care to meet HIPAA mandates regardless of the technology they use. But securing even basic information, much less regulated data, can be far more difficult on employee-owned devices than when the company creates restrictive rules for its own devices.

Privacy:  When enterprises let employees use their own device to do company business, it’s highly likely that the employee will feel entitled to use the device as they see fit. However, in reality, McLaughlin suggests, employees don’t really have full, private control of their devices, in part because the company policy usually requires a remote wipe of all data when the device gets lost. Also, employees might find that their device’s data becomes discoverable if the data involved is relevant to litigation.

So, readers, tell us how you’re walking the tightrope between giving employees who BYOD some autonomy, and protecting private, HIPAA-protected information.  Are you comfortable with the policies you have in place?

Full Disclosure: HIPAA Secure Now! is an advertiser on this website.

HIPAA Slip Leads To PHI Being Posted on Facebook

Posted on July 1, 2014 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

HHS has begun investigating a HIPAA breach at the University of Cincinnati Medical Center which ended with a patient’s STD status being posted on Facebook.

The disaster — for both the hospital and the patient — happened when a financial services employee shared detailed medical information with the father of the patient’s then-unborn baby.  The father took the information, which included an STD diagnosis, and posted it publicly on Facebook, ridiculing the patient in the process.

The hospital fired the employee in question once it learned about the incident (and a related lawsuit), but there’s some question as to whether it reported the breach to HHS. The hospital says that it informed HHS about the breach in a timely manner, and has proof that it did so, but according to HealthcareITNews, the HHS Office for Civil Rights hadn’t heard about the breach when questioned by a reporter last week.

While the public posting of data and personal attacks on the patient weren’t done by the (ex) employee, that may or may not play a factor in how HHS sees the case. Given HHS’ increasingly low tolerance for breaches of any kind, I’d be surprised if the hospital didn’t end up facing a million-dollar OCR fine in addition to whatever liabilities it incurs from the privacy lawsuit.

HHS may be losing its patience because the pace of HIPAA violations doesn’t seem to be slowing.  Sometimes, breaches are taking place due to a lack of the most basic security protocols. (See this piece on last year’s wackiest HIPAA violations for a taste of what I’m talking about.)

Ultimately, some breaches will occur because a criminal outsmarted the hospital or medical practice. But sadly, far more seem to take place because providers have failed to give their staff an adequate education on why security measures matter. Experts note that staffers need to know not just what to do, but why they should do it, if you want them to act appropriately in unexpected situations.

While we’ll never know for sure, the financial staffer who gave the vengeful father his girlfriend’s PHI may not have known the father was up to no good. But the truth is, he should have.

HIPAA Fines and Penalties in a HIPAA Omnibus World

Posted on July 25, 2013 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Lately I’ve been seeing a number of really lazy approaches to making sure a company is HIPAA compliant. I think there’s a Pandora’s box just waiting to open, where many companies are going to get slammed with HIPAA compliance issues. Certainly there are plenty of HIPAA compliance issues at healthcare provider organizations, but the larger compliance issue is likely going to come from all of these business associates that are now going to be held responsible for any HIPAA violations that occur with their systems.

For those not keeping up with the changes to HIPAA as part of the HITECH Act and HIPAA Omnibus, here are a couple of the biggest changes. First, HITECH provided some real teeth when it comes to penalties for HIPAA violations. Second, HIPAA Omnibus puts business associates in a position of responsibility when it comes to any HIPAA violations. Yes, this means that healthcare companies acting as business associates could be fined for HIPAA violations just as covered entities have been.

To put it simply, hundreds of organizations who didn’t have to worry too much about HIPAA will now be held responsible.

This is likely going to be a recipe for disaster for those organizations who aren’t covering their bases when it comes to HIPAA compliance. Consider two of the most recent fines: Idaho State University was fined $400k for HIPAA violations, and WellPoint was hit with a $1.7 million penalty for its HIPAA violations. In the first case, a firewall was left disabled for a year; in the second, WellPoint failed to secure an online application database containing sensitive data.

Of course, none of the above examples take into account the possible civil cases that can be brought against these organizations or the brand impact of a HIPAA violation. The penalties for a HIPAA violation range from $100 to $50,000 per violation, depending on the HIPAA violation category. I’ll be interested to see how HHS defines “Reasonable Cause” versus “Willful Neglect – Corrected.”

I’ve seen far too many organizations not taking the HIPAA requirements seriously. This is going to come back to bite many organizations. Plus, healthcare organizations had better make sure they have proper business associate agreements with these companies in order to insulate themselves against the neglect of a business associate. I don’t see HHS starting to search for companies that aren’t compliant. However, if they get a report of issues, they’ll have to investigate, and they won’t likely be happy with what they find.

The message to all is to make sure your HIPAA house is in order. Unfortunately, I don’t think many will really listen until the first shoe drops.

EHR and Malpractice Lawsuits

Posted on January 23, 2013 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Long time reader Carl recently pointed me to this excellent AHIMA article on EHR and Malpractice Lawsuits. Its first section sums up the current state of EHRs and lawsuits quite well:

Medical records are a vital part of any healthcare lawsuit because they document what happened during treatment. Paper medical records are relatively simple aspects of litigation. HIM staff pull the requested chart, track down additional information as necessary, and sometimes provide a deposition on the record’s accuracy.

The process is far more complex with an EHR. The record of a patient’s care that a clinician views on screen may not exist in that form anywhere else. When the information is taken out of the system and submitted into legal proceedings, the court has a very different view—one that often confuses the proceedings and, in the worst instances, raises suspicions about the record’s validity.

The challenges stem from the design of the systems, which were built for care—not court. If the provider struggles in providing documentation, a trial involving malpractice can easily shift its focus from an examination of care to a fault-finding mission with the recordkeeping system. At other times, the provider’s inability to put forward the information in a comprehensible format may raise suspicions that it is missing, withholding, or obscuring information.

I’d probably modify the sentence that says that EHRs were “built for care—not court” to say that EHRs were “built for billing—not court”, but the idea is still the same. The big issue for EHRs in lawsuits is that there’s no really good precedent for how an EHR will be treated in court. We’re so early in the process of legal cases that use EHR documentation that we just don’t know how the courts are going to deal with it.

Plus, when you consider that there are 300+ EHR companies out there, I’m not sure that a legal case involving one EHR software will be applied the same way to other EHR software. Each EHR displays data differently. Each EHR audits users differently. Each EHR stores data differently. So I expect that each EHR will be looked at in a different way.

The AHIMA article linked above is a good read for those interested in this topic and points out a lot of other issues that could face an HIM staff dealing with a case involving documentation in an EHR. One of the overriding messages is that HIM staff and healthcare organizations are going to need an expert on their EHR involved in the process. In fact, I can see many HIM departments getting trained up on their EHR in order to fulfill this need.

What I also see coming is a new group of EHR expert witnesses. Again, I think that these expert witnesses will have to have specific knowledge of a particular EHR to be really effective. I’m sure they’ll come from the ranks of EHR consultants, former EHR employees, and some EHR users. Considering the millions of dollars on the line in these malpractice cases, these EHR expert witnesses stand to make a lot of money.

I don’t want to make it all sound doom and gloom. I expect that there will be many cases involving EHR where a doctor or institution is covered better by an EHR than they were in the paper world. This will be even more true as EHR vendors continue to shore up their EHR audit logs and processes. There’s new legal risks with EHR, but there are also old risks that are removed by using an EHR. We just need to make sure we’re ready for the new risks.

Doctors Increasingly Texting, But HIPAA Protection Lacking

Posted on November 2, 2012 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A new study of physicians working at pediatric hospitals has concluded what we might have assumed anyway — that they prefer the use of SMS texting via mobile phone to pagers. What’s worrisome, however, is that little if any of this communication seems to be going on in a HIPAA-secure manner.

The study, by the University of Kansas School of Medicine at Wichita, asked 106 doctors at pediatric hospitals what avenues they prefer for “brief communication” while at work. Of this group, 27 percent chose texting as their favorite method, 23 percent preferred hospital-issued pagers and 21 percent face-to-face conversation, according to a report in mHealthWatch.

What’s interesting is that text-friendly or not, 57 percent of doctors said they sent or got work-related text messages.  And 12 percent of pediatricians reported sending more than 10 messages per shift.

With all that texting going on,  you’d figure hospitals would have a policy in place to ensure HIPAA requirements were met. But in reality, few doctors said that their hospital had such a policy in place.

That’s particularly concerning considering that 41 percent of respondents said they received work-related text messages on a personal phone, and only 18 percent on a hospital-assigned phone. I think it’s fair to say that this arrangement is rife with opportunities for HIPAA no-nos.

It’s not that the health IT vendor world isn’t aware that this is a problem; I know my colleague John has covered technology for secure texting between medical professionals, and he’s also an advisor to secure text messaging company docBeat. However, not much is going to happen until hospitals get worried enough to identify this as a serious issue and realize that secure text messaging can be just as easy as regular texting, along with additional benefits.

In the meantime, doctors will continue texting away — some getting 50-100 messages a day, according to one researcher — in an uncertain environment.  Seems to me this is a recipe for HIPAA disaster.