
Knotty Problems Surround Substance Abuse Data Sharing via EMRs

Posted on May 27, 2015 | Written By

Katherine Rourke is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

As I see it, rules giving mental health and substance abuse data extra protection are critical. Maybe someday, there will be little enough stigma around these illnesses that special privacy precautions aren’t necessary, but that day is far in the future.

That’s why a new bill filed by Reps. Tim Murphy (R-Pa.) and Paul Tonko (D-N.Y.), aimed at simplifying the sharing of substance abuse data between EMRs, deserves a close look by those of us who track EMR data privacy. Tonko and Murphy propose to loosen federal rules on such data sharing so that a single filled-out consent form from a patient would allow data sharing throughout a hospital or health system.

As things currently stand, federal law bars federally-assisted substance abuse programs, in the majority of cases, from sharing personally-identifiable patient information with other entities unless they have a disclosure consent. What’s more, each entity that receives the data must itself obtain a new consent from the patient before the data gets shared again.

At a recent hearing on the 21st Century Cures Act, Rep. Tonko argued that the federal requirements, which became law before EMRs were in wide use, were making it more difficult for individuals fighting a substance abuse problem to get the coordinated care they needed. While the rules might have been effective privacy protections at one point, today the need for patients to repeatedly approve data sharing merely interferes with providers’ ability to offer value-based care, he suggested. (It’s hard to argue that hitting such walls does ACOs any good.)

Clearly, Tonko’s goals can be met in some form.  In fact, other areas of the clinical world are making great progress in sharing mental health data while avoiding data privacy entanglements. For example, a couple of months ago the National Institute of Mental Health announced that its NIMH Limited Datasets project, including data from 23 large NIMH-supported clinical trials, just sent out its 300th dataset.

Rather than offering broad access to stringently de-identified data, the project shares datasets that contain private human study participant information, but only with qualified researchers. Those researchers must win approval for a Data Use Certification agreement, which specifies how the data may be used, including what data confidentiality and security measures must be taken.

Of course, practicing clinicians don’t have time to get special approval to see the data for every patient they treat, so this NIMH model doesn’t resolve the issues hospitals and providers face in providing coordinated substance abuse care on the fly.

But until a more flexible system is put in place, perhaps some middle ground exists in which clinicians outside the originating institution can be granted temporary, role-based “passes” offering limited access to patient-identifiable substance abuse data. That is something EMRs should be well equipped to support. And if they’re not, this would be a great time to ask why!
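As a rough illustration, a temporary pass of this kind could be as simple as a role check plus an expiry time. The sketch below is purely hypothetical — the role names, 24-hour lifetime, and class design are my own assumptions, not anything from the bill or any EMR vendor:

```python
from datetime import datetime, timedelta

# Illustrative roles that might be allowed to hold a temporary pass.
ALLOWED_ROLES = {"treating_physician", "care_coordinator"}

class AccessPass:
    """A time-limited, role-scoped grant to view one patient's records."""

    def __init__(self, clinician_id, role, patient_id, granted_at,
                 ttl=timedelta(hours=24)):
        self.clinician_id = clinician_id
        self.role = role
        self.patient_id = patient_id
        self.expires_at = granted_at + ttl

    def permits(self, patient_id, now):
        # Access requires an approved role, the matching patient,
        # and an unexpired pass.
        return (self.role in ALLOWED_ROLES
                and self.patient_id == patient_id
                and now < self.expires_at)

t0 = datetime(2015, 5, 27, 9, 0)
p = AccessPass("dr_smith", "treating_physician", "patient_42", granted_at=t0)
assert p.permits("patient_42", now=t0 + timedelta(hours=1))
assert not p.permits("patient_42", now=t0 + timedelta(days=2))   # expired
assert not p.permits("patient_99", now=t0 + timedelta(hours=1))  # wrong patient
```

The point of the sketch is that expiry and scoping are cheap to enforce in software, which is why it seems fair to expect EMRs to support something like this.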

Emerging Health Apps Pose Major Security Risk

Posted on May 18, 2015 | Written By Katherine Rourke

As new technologies like fitness bands, telemedicine and smartphone apps have become more important to healthcare, the issue of how to protect the privacy of the data they generate has become more important, too.

After all, all of these devices use the public Internet to transmit data, at least at some point along the way. Typically, telemedicine involves a direct connection to a remote server over an unsecured Internet connection (although vendors generally apply some form of encryption to the data sent over that connection). If used clinically, monitoring technologies such as fitness bands hop from the band across wireless spectrum to a smartphone, which in turn uses the public Internet to communicate data to clinicians. And the public Internet is just one of a myriad of pathways by which hackers could get access to this health data.
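To make the encryption point concrete, here is a minimal Python sketch of what a transmitting app can do on its end: build a TLS context that insists on certificate validation and a modern protocol version before any health data is serialized and sent. The function names and the JSON payload shape are illustrative assumptions, not any vendor’s actual API:

```python
import json
import ssl

def make_client_context() -> ssl.SSLContext:
    """Build a TLS context that verifies the server's certificate."""
    ctx = ssl.create_default_context()
    # Require certificate validation and hostname checks, so readings
    # are never sent over an unauthenticated channel.
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    # Refuse legacy protocol versions with known weaknesses.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def serialize_reading(patient_id: str, heart_rate: int) -> bytes:
    """Encode one wearable reading; the bytes only ever travel inside TLS."""
    return json.dumps({"patient": patient_id, "hr": heart_rate}).encode()

ctx = make_client_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

None of this is exotic — the standard library does the heavy lifting — which is part of why the lack of discussion about unprotected transmission paths is surprising.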

My hunch is that this exposure of data to potential thieves hasn’t generated a lot of discussion because the technology isn’t mature. And what’s more, few doctors actually work with wearables data or offer telemedicine services as a routine part of their practice.

But it won’t be long before these emerging channels for tracking and caring for patients become a standard part of medical practice.  For example, the use of wearable fitness bands is exploding, and middleware like Apple’s HealthKit is increasingly making it possible to collect and mine the data that they produce. (And the fact that Apple is working with Epic on HealthKit has lured a hefty percentage of the nation’s leading hospitals to give it a try.)

Telemedicine is growing at a monster pace as well.  One study from last year by Deloitte concluded that the market for virtual consults in 2014 would hit 70 million, and that the market for overall telemedical visits could climb to 300 million over time.

Given that the data generated by these technologies is medical, private and presumably protected by HIPAA, where’s the hue and cry over protecting this form of patient data?

After all, though a patient’s HIV or mental health status won’t be revealed by a health band’s activity data, telemedicine consults certainly can betray those conditions. And while a telemedicine consult won’t provide data on a patient’s current cardiovascular health, wearables can, and that data might be of interest to payers or even life insurers.

I admit that when the data being broadcast isn’t clear text summaries of a patient’s condition, possibly with their personal identity, credit card and health plan information, it doesn’t seem as likely that patients’ well-being can be compromised by medical data theft.

But all you have to do is look at human nature to see the flaw in this logic. I’d argue that if medical information can be intercepted and stolen, someone can find a way to make money at it. It’d be a good idea to prepare for this eventuality before a patient’s privacy is betrayed.

An Important Look at HIPAA Policies For BYOD

Posted on May 11, 2015 | Written By Katherine Rourke

Today I stumbled across an article which I thought readers of this blog would find noteworthy. In the article, Art Gross, president and CEO at HIPAA Secure Now!, made an important point about BYOD policies. He notes that while much of today’s corporate computing is done on mobile devices such as smartphones, laptops and tablets — most of which access their enterprise’s e-mail, network and data — HIPAA offers no advice as to how to bring those devices into compliance.

Given that most of the spectacular HIPAA breaches in recent years have arisen from the theft of laptops, and that thieves are likely to proceed to tablet and smartphone data, it seems strange that HHS has done nothing to update the rule to address the increasing use of mobile devices since it was drafted in 2003. As Gross rightly asks, “If the HIPAA Security Rule doesn’t mention mobile devices, laptops, smartphones, email or texting how do organizations know what is required to protect these devices?”

Well, Gross’ peers have given the issue some thought, and here are some suggestions from law firm DLA Piper on how to dissect the issues involved. BYOD challenges under HIPAA, notes author Peter McLaughlin, include:

* Control: To maintain protection of PHI, providers need to control many layers of computing technology, including network configuration, operating systems, device security and transmissions outside the firewall. McLaughlin notes that Android-based devices pose a particular challenge, as the OS is often modified to meet hardware needs. And in both iOS and Android environments, IT administrators must also manage users’ tendency to connect to their preferred cloud services and download their own apps. Otherwise, a large volume of protected health data can end up outside the firewall.

* Compliance: Healthcare organizations and their business associates must take care to meet HIPAA mandates regardless of the technology they use. But securing even basic information, much less regulated data, is far more difficult on employee-owned devices than it is when the company sets restrictive rules for its own devices.

* Privacy: When enterprises let employees use their own devices for company business, it’s highly likely that employees will feel entitled to use the devices as they see fit. In reality, though, McLaughlin suggests, employees don’t have full, private control of their devices, in part because company policy usually requires a remote wipe of all data if the device is lost. Employees might also find that their device’s data becomes discoverable if it is relevant to litigation.

So, readers, tell us how you’re walking the tightrope between giving employees who BYOD some autonomy, and protecting private, HIPAA-protected information.  Are you comfortable with the policies you have in place?

Full Disclosure: HIPAA Secure Now! is an advertiser on this website.

Wearables And Mobile Apps Pose New Data Security Risks

Posted on December 30, 2014 | Written By Katherine Rourke

In the early days of mobile health apps and wearable medical devices, providers weren’t sure they could cope with yet another data stream. But as the uptake of these apps and devices has grown over the last two years, at a rate surpassing virtually everyone’s expectations, providers and payers both have had to plan for a day when wearable and smartphone app data become part of the standard dataflow. The potentially billion-dollar question is whether they can figure out when, where and how they need to secure such data.

To do that, providers are going to have to face up to security risks they haven’t confronted before, as well as do a good job of educating patients on when such data is HIPAA-protected and when it isn’t. While I am most assuredly not an attorney, wiser legal heads than mine have reported that once wearable/app data is used by providers, it’s protected by HIPAA safeguards, but in other situations — such as when it’s gathered by employers or payers — it may not be.

For an example of the gray areas that bedevil mobile health data security, consider the case of upstart health insurance provider Oscar Health, which recently offered free Misfit Flash bands to its members. The company’s leaders have promised members who use the bands that if their collected activity numbers look good, they’ll take roughly $240 off their annual premium. And they’ve promised that the data won’t be used for diagnostics or any other medical purpose. This promise may be worthless, however, if they are still legally free to resell the data to, say, pharmaceutical companies.

Logical and physical security

Meanwhile, even if providers, payers and employers are very cautious about violating patients’ privacy, their careful policies will be worth little if they don’t take a look at managing the logical and physical security risks inherent in passing around so much data across multiple Wi-Fi, 4G and corporate networks.

While it’s not yet clear what the real vulnerabilities are in shipping such data from place to place, it’s clear that new security holes will pop up as smartphone and wearable health devices ramp up to sharing data on a massive scale. In an industry still struggling with BYOD security and with corralling the data facilities already work with daily, protecting and appropriately segregating connected health data will pose an even bigger challenge.

After all, every time you begin to rely on a new network model involving new data handoff patterns (in this case, medical device or wearable data streaming to smartphones across Wi-Fi networks, smartphones forwarding the data to providers via 4G LTE cellular protocols, and providers processing the data on corporate networks), there are bound to be a host of security issues we haven’t found yet.

Cybersecurity problems could lead to mHealth setbacks

Worst of all, hospitals’ and medical practices’ cyber security protocols are quite weak (as researcher after researcher has pointed out of late). Particularly given how valuable medical identity data has become, healthcare organizations need to work harder to protect their cyber assets and see to it that they’ve at least caught the obvious holes.

But to date, if our experiences with medical device security are any indication, not only are hospitals and practices vulnerable to standard cyber attacks on network assets, they’re also finding it difficult to protect the core medical devices needed to diagnose and treat patients, such as MRI machines, infusion pumps and even, in theory, personal gear like pacemakers and insulin pumps. It doesn’t inspire much confidence that the Conficker worm, which attacked medical devices across the world several years ago, is still alive and kicking, and in fact accounted for 31% of the year’s top security threats.

If malevolent outsiders mount attacks on the flow of connected health data, and succeed at stealing it, not only is it a brand-new headache for healthcare IT administrators, it could create a crisis of confidence among mHealth stakeholders. In other words, while patients, providers, payers, employers and even pharmaceutical companies seem comfortable with the idea of tapping digital health data, major hacks into that data could slow the progress of such solutions considerably. Let’s hope those who focus on health IT security take the threat to wearables and smartphone health app data seriously going into 2015.

Amazing Live Visualization of Internet Attacks

Posted on October 22, 2014 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I recently heard Elliot Lewis, Dell’s Chief Security Architect, comment that “The average new viruses per day is about 5-10k appearing new each day.” To be honest, I wasn’t quite sure how to process that volume of viruses. It felt pretty unbelievable to me, even though I figured he was right.

Today, I came across this amazing internet attack map by Norse which illustrates a small portion of the attacks that are happening on the internet in real time. I captured a screenshot of the map below, but you really need to check out the live map to get a feel for how many internet attacks are happening. It’s astounding to watch.

Norse - Internet Attack Map

For those tech nerds out there, here’s the technical description of what’s happening on the map:

Every second, Norse collects and analyzes live threat intelligence from darknets in hundreds of locations in over 40 countries. The attacks shown are based on a small subset of live flows against the Norse honeypot infrastructure, representing actual worldwide cyber attacks by bad actors. At a glance, one can see which countries are aggressors or targets at the moment, using which type of attacks (services-ports).

It’s worth noting that these are attacks that are happening, not breaches. Just because something is getting attacked doesn’t mean the attack was successful; the large majority of attacks aren’t. However, the volume of attacks is so large (and that map shows only a small portion of them) that only a small number need to succeed to wreak a lot of havoc.

If this type of visualization doesn’t make you stop and worry just a little bit, then you’re not human. There’s a lot of crazy stuff going on out there. It’s actually quite amazing that with all the crazy stuff that’s happening, the internet works as well as it does.

Hopefully this visualization will wake up a few healthcare organizations to be just a little more serious about their IT security.

CMS’ HIPAA Risk Analysis Myths and Truths

Posted on October 21, 2014 | Written By John Lynn

I’ve been writing about the need to do a HIPAA Risk Assessment since it was included as part of meaningful use. Many organizations have been really confused by this requirement, and no doubt it will be an issue for many organizations that get a meaningful use audit. It’s a little ironic, since this really isn’t anything that wasn’t already part of the HIPAA security rule. Then again, that illustrates how well we’re doing at complying with the HIPAA security rule.

It seems that CMS has taken note of this confusion around the HIPAA risk assessment as well. Today, they sent out some more guidance, tools and resources to hopefully help organizations better understand the Security Risk Analysis requirement. Here’s a portion of that email that provides some important clarification:

A security risk analysis needs to be conducted or reviewed during each program year for Stage 1 and Stage 2. These steps may be completed outside OR during the EHR reporting period timeframe, but must take place no earlier than the start of the reporting year and no later than the end of the reporting year.

For example, an eligible professional who is reporting for a 90-day EHR reporting period in 2014 may complete the appropriate security risk analysis requirements outside of this 90-day period as long as it is completed between January 1st and December 31st in 2014. For more information, read this FAQ.

Please note:
* Conducting a security risk analysis is required when certified EHR technology is adopted in the first reporting year.
* In subsequent reporting years, or when changes to the practice or electronic systems occur, a review must be conducted.
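The timing rule in CMS’s guidance boils down to a simple date-window check: the analysis can fall outside the 90-day reporting period, but must land inside the calendar reporting year. A small Python sketch (the function name is mine, not CMS’s):

```python
from datetime import date

def risk_analysis_in_window(analysis_date: date, reporting_year: int) -> bool:
    """Sketch of CMS's timing rule: the security risk analysis must fall
    within the reporting year, even if it falls outside the 90-day EHR
    reporting period itself."""
    start = date(reporting_year, 1, 1)
    end = date(reporting_year, 12, 31)
    return start <= analysis_date <= end

# A 90-day reporting period in 2014 still allows an analysis in November 2014:
assert risk_analysis_in_window(date(2014, 11, 5), 2014)
# But an analysis done in January 2015 would not satisfy program year 2014:
assert not risk_analysis_in_window(date(2015, 1, 10), 2014)
```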

CMS also created this Security Risk Analysis Tipsheet that has a lot of good information including these myths and facts which address many of the issues I’ve seen and heard:
CMS HIPAA Security Risk Analysis Myths and Facts

Finally, it’s worth reminding people that the HIPAA Security Risk Analysis is not just for your tech systems. Check out this overview of security areas and example measures to secure them to see what I mean:
CMS HIPAA Security Risk Analysis Overview

Have you done your HIPAA Risk Assessment for your organization?

Is Your EMR Compromising Patient Privacy?

Posted on November 20, 2013 | Written By

James Ritchie is a freelance writer with a focus on health care. His experience includes eight years as a staff writer with the Cincinnati Business Courier, part of the American City Business Journals network. Twitter @HCwriterJames.

Two prominent physicians this week pointed out a basic but, in the era of information as a commodity, sometimes overlooked truth about EMRs: They increase the number of people with access to your medical data thousands of times over.

Dr. Mary Jane Minkin said in a Wall Street Journal video panel on EMR and privacy that she dropped out of the Yale Medical Group and Medicare because she didn’t want her patients’ information to be part of an EMR.

She gave an example of why: Minkin, a gynecologist, once treated a patient for decreased libido. When the patient later visited a dermatologist in the Yale system, that sensitive bit of history appeared on a summary printout.

“She was outraged,” she told Journal reporter Melinda Beck. “She felt horrible that this dermatologist would know about her problem. She called us enraged for 10 or 15 minutes.”

Dr. Deborah Peel, an Austin psychiatrist and founder of the nonprofit group Patient Privacy Rights, said she’s concerned about the number of employees, vendors and others who can see patient records. Peel is a well-known privacy advocate but has been accused by some health IT leaders of scaremongering.

“What patients should be worried about is that they don’t have any control over the information,” she said. “It’s very different from the paper age where you knew where your records were. They were finite records and one person could look at them at a time.”

She added: “The kind of change in the number of people who can see and use your records is almost uncountable.”

Peel said the lack of privacy causes people to delay or avoid treatment for conditions such as cancer, depression and sexually transmitted infections.

But Dr. James Salwitz, a medical oncologist in New Jersey, said on the panel that the benefits of EMR, including greater coordination of care and reduced likelihood of medical errors, outweigh any risks.

The privacy debate doesn’t have clear answers. Paper records are, of course, not immune to being lost, stolen or mishandled.

In the case of Minkin’s patient, protests aside, it’s reasonable for each physician involved in her care to have access to the complete record. While she might not think certain parts of her history are relevant to particular doctors, spotting non-obvious connections is an astute clinician’s job. At any rate, even without an EMR, the same information might just as easily have landed with the dermatologist via fax.

That said, privacy advocates have legitimate concerns. Since it’s doubtful that healthcare will go back to paper, the best approach is to improve EMR technology and the procedures that go with it.

Plenty of work is underway.

For example, at the University of Texas at Arlington, researchers are leading a National Science Foundation project to keep healthcare data secure while ensuring that the anonymous records can be used for secondary analysis. They hope to produce groundbreaking algorithms and tools for identifying privacy leaks.

“It’s a fine line we’re walking,” Heng Huang, an associate professor at UT Arlington’s Computer Science & Engineering Department, said in a press release this month. “We’re trying to preserve and protect sensitive data, but at the same time we’re trying to allow pertinent information to be read.”

When it comes to balancing technology with patient privacy, healthcare professionals will be walking a fine line for some time to come.

Atlanta Hospital Sues Exec Over Allegedly Stolen Health Data

Posted on November 1, 2013 | Written By Katherine Rourke

In most cases of hospital data theft, you usually learn that a laptop was stolen or a PC hacked. But in this case, a hospital is claiming that one of its executives stole a wide array of data from the facility, according to the Atlanta Business Chronicle.

In a complaint filed last week in Atlanta federal court, Children’s Healthcare of Atlanta asserts that corporate audit advisor Sharon McCray stole a boatload of proprietary information. The list of compromised data includes PHI of children, DEA numbers, health provider license numbers for over 500 healthcare providers, financial information and more, the newspaper reports.

According to the Children’s complaint, McCray announced her resignation on October 16th, then on the 18th, began e-mailing the information to herself using a personal account. On the 21st, Children’s cut off her access to her corporate e-mail account, and the next day she was fired.

Not surprisingly, Children’s has demanded that McCray return the information, but as of the date of the filing, McCray had neither returned nor destroyed the data, nor permitted Children’s to inspect her personal computer, the hospital says. Children’s is asking a federal judge to force McCray to give back the information.

According to IT security firm Redspin, nearly 60 percent of the PHI breaches reported to HHS under notification rules involved a business associate, and 67 percent were the result of theft or loss. By those measures, theft by an executive within the facility — if that is indeed what happened — is an unusual occurrence.

But given the high commercial value of PHI and medical practitioner data, I wouldn’t be surprised if hospital execs were tempted into theft. Hospitals are just going to have to monitor execs as closely as they do front-line employees.

Healthcare Cloud Spending To Ramp Up Over Next Few Years

Posted on October 4, 2013 | Written By Katherine Rourke

For years, healthcare IT executives have wrestled with the idea of deploying cloud services, concerned that the cloud would not offer enough security for their data. However, a new study suggests that this reluctance is beginning to fade.

A new study by market research firm MarketsandMarkets has concluded that the healthcare industry will invest $5.4 billion in cloud computing by 2017. This year should see a particularly big change, with healthcare cloud adoption moving from 4 percent to 20.5 percent of the industry, according to an article in the Cloud Times.

The current US cloud market for healthcare is dominated by SaaS vendors such as CareCloud, Carestream Health and Merge Healthcare, according to MarketsandMarkets. These vendors are tapping into an overall cloud computing market which should grow at a combined annual growth rate of 20.5 percent between 2012 and 2017, the researchers say.

As the report notes, there are good reasons why healthcare IT leaders are taking a closer look at cloud computing. For example, the cloud offers easy access to high-performance computing and high-volume storage, access which would be very costly to duplicate with on-premise computing.

On the other hand, the MarketsandMarkets researchers admit, healthcare still has particularly stringent data security requirements, and a need for strict confidentiality, access control and long-term data storage. Cloud vendors will need to offer services and products which meet these unique needs, and just as importantly, change and adapt as regulatory requirements shift. And they’ll have to have an impeccable reputation.

That last item — the cloud vendor’s reputation — will play a major role in the coming shift to cloud-based deployments. If giants like AT&T, IBM and Verizon stay in the healthcare cloud business, which seems likely to me, then healthcare institutions will be able to admit that they’re engaged in cloud deployments without suffering a public black eye over potential security problems.

On the other hand, if the giants were to get cold feet, cloud adoption would probably slow substantially, and remain at the trickle it has been for several years. While vendors like Merge and Carestream may be doing well, I’d argue that the presence of the 2,000-pound gorilla vendors ultimately dictates whether a market thrives.

A Primer On HIPAA Compliance For BYOD

Posted on June 13, 2013 | Written By Katherine Rourke

Here’s a statistic that caught me off guard: according to IDC Healthcare Insights, clinicians on average use 6.4 mobile devices in a day. That stat, courtesy of HIT Consultant, underscores the need for a smart and thorough security policy for clinicians who use their own devices at work.

Increasingly, healthcare organizations are crafting security policies for BYOD, but they vary greatly in how much such devices are allowed to access the hospital network, which hospital applications they can access and which devices can access the Internet, HIT Consultant notes.

However, according to Andrew Shearer, CTO at Care Thread, there are some dos and don’ts that should be common to all BYOD programs. Here are some thoughts from Shearer, below.

DO:

Make sure your vendor and its sub-vendors are compliant with the new HIPAA Omnibus requirements

Be aware that under the new rules, HIPAA requirements now extend to business associates of entities that receive protected health information, such as contractors and subcontractors. Also new: not only are vendors to healthcare organizations required to have business associate agreements, they must also hold BAAs with their sub-vendors.

Use two levels of security when users log in to enterprise applications

Shearer recommends using Active Directory for the first level, allowing providers to use their hospital login credentials. The second level, he suggests, is a separate PIN for quick access to mobile apps in use, one that should disconnect when the session goes idle.
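The idle-disconnect part of that second level is straightforward to sketch. The Python below is purely illustrative of the pattern Shearer describes — the class name, the 60-second timeout, and the API are my own assumptions, not Care Thread’s implementation:

```python
class PinSession:
    """Sketch of a PIN-gated app session that relocks itself after an
    idle timeout, so a pocketed phone doesn't stay open to PHI."""

    def __init__(self, pin: str, idle_timeout_s: float = 60.0):
        self._pin = pin
        self._timeout = idle_timeout_s
        self._last_activity = None  # None means the session is locked

    def unlock(self, pin_attempt: str, now: float) -> bool:
        """Accept the PIN and start (or restart) the session clock."""
        if pin_attempt == self._pin:
            self._last_activity = now
            return True
        return False

    def is_unlocked(self, now: float) -> bool:
        """Report whether the session is live; relock if idle too long."""
        if self._last_activity is None:
            return False
        if now - self._last_activity > self._timeout:
            self._last_activity = None  # idle too long: relock
            return False
        return True

session = PinSession("4821", idle_timeout_s=60.0)
assert not session.is_unlocked(now=0.0)    # starts locked
assert session.unlock("4821", now=0.0)
assert session.is_unlocked(now=30.0)       # still active within the timeout
assert not session.is_unlocked(now=200.0)  # idle past the timeout: relocked
```

Passing `now` in explicitly (rather than reading the clock inside the class) keeps the timeout logic easy to test; a real app would feed it the device clock.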

Have the ability to remotely wipe a device if it is missing

This isn’t required by HIPAA, but it’s still an essential part of a strong mobile/BYOD security management program. Be prepared to do anything from deleting data in selected folders to turning the device into a brick (removing all programming or returning it to factory settings).

DON’T:

Allow PHI to be written to the mobile device

While it’s very common for clinicians to use mobile messaging apps to share patient information, such sharing is generally not HIPAA-compliant, Shearer notes. In his view, the ideal healthcare communication app should allow access to messages and PHI only when the user is logged in.

Permit integration with insecure file-sharing / hosting services

Cloud-based hosting and file-sharing services like Evernote and Dropbox are very popular, but they’re not HIPAA compliant. To be HIPAA compliant, organizations must use multiple security protocols, including physical security, technical security in PHI storage and user authentication.

Ignore security updates

Make sure you do periodic audits of mobile devices to make sure any that transmit work-related information meet regulatory standards. Also, make sure apps on mobile devices are up to date, as older versions may not meet current security threats.