As new technologies like fitness bands, telemedicine and smartphone apps have become more important to healthcare, the question of how to protect the privacy of the data they generate has become more pressing, too.
After all, all of these devices send data over the public Internet at some point in the transmission. Telemedicine typically involves a direct connection to a remote server over an unsecured Internet connection (though some vendors do offer encryption of the data sent over that connection). When used clinically, monitoring technologies such as fitness bands hop wirelessly from the band to a smartphone, which in turn uses the public Internet to relay data to clinicians. And the public Internet is just the entry point to a myriad of ways hackers could get access to this health data.
My hunch is that this exposure of data to potential thieves hasn’t generated a lot of discussion because the technology isn’t mature. And what’s more, few doctors actually work with wearables data or offer telemedicine services as a routine part of their practice.
But it won’t be long before these emerging channels for tracking and caring for patients become a standard part of medical practice. For example, the use of wearable fitness bands is exploding, and middleware like Apple’s HealthKit is increasingly making it possible to collect and mine the data that they produce. (And the fact that Apple is working with Epic on HealthKit has lured a hefty percentage of the nation’s leading hospitals to give it a try.)
Telemedicine is growing at a monster pace as well. One study from last year by Deloitte concluded that the market for virtual consults in 2014 would hit 70 million, and that the market for overall telemedical visits could climb to 300 million over time.
Given that the data generated by these technologies is medical, private and presumably protected by HIPAA, where’s the hue and cry over protecting this form of patient data?
After all, though a patient’s HIV or mental health status won’t be revealed by a health band’s activity data, telemedicine consults certainly can betray those concerns. And while a telemedicine consult won’t provide data on a patient’s current cardiovascular health, wearables can, and that data might be of interest to payers or even life insurers.
I admit that when the data being broadcast isn’t a clear-text summary of a patient’s condition, possibly complete with their personal identity, credit card and health plan information, it doesn’t seem as likely that patients’ well-being can be compromised by medical data theft.
But all you have to do is look at human nature to see the flaw in this logic. I’d argue that if medical information can be intercepted and stolen, someone can find a way to make money at it. It’d be a good idea to prepare for this eventuality before a patient’s privacy is betrayed.
Whether or not a device uses the public Internet to communicate with services, all device-to-service communication should use a transport with confidentiality guarantees, e.g., HTTPS (i.e., HTTP over TLS).
If a telemedicine service does not use a secured transport, don’t buy it; if you already have, stop using it. There is no reason not to secure all device-to-service communication.
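To make the point concrete, here is a minimal sketch (in Python, using only the standard library; the function name is mine, not from any vendor’s API) of what "a transport with confidentiality guarantees" means on the client side: a TLS context that verifies the server’s certificate and hostname and refuses legacy protocol versions.

```python
import ssl

def make_strict_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context that enforces certificate and
    hostname verification and refuses legacy protocol versions."""
    ctx = ssl.create_default_context()            # loads the system CA store
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.1 and older
    ctx.check_hostname = True                     # the default; stated explicitly
    ctx.verify_mode = ssl.CERT_REQUIRED           # fail closed on a bad certificate
    return ctx
```

Passed to `http.client.HTTPSConnection` or `urllib.request`, a context like this makes a connection to a server with an invalid or self-signed certificate fail outright rather than silently sending health data in a form an eavesdropper can read.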
Consumer health apps and devices show a lot of promise in promoting better healthcare outcomes. But if security is an afterthought to functionality, the inevitable will happen and confidence in the entire sector will be shaken.
Health app / device vendors need to consider many risks. Is there a secure connection between a vendor’s device/app and their backend cloud, and from their cloud to clinical systems? Has identity verification been performed before data is transmitted and received? At what Level of Assurance (LoA)? Is data at rest secured on the device/app and in the cloud? Are passwords one-way hashed and irreversible? Have vendor devices, apps and cloud systems been security hardened and subject to regular third-party penetration tests and audits? Is a shared public cloud used for backend services, or a dedicated, independently auditable data center?
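On the password question specifically, "one-way hashed and irreversible" in practice means a salted, deliberately slow key-derivation function rather than a plain digest. A minimal sketch using only the Python standard library (the function names and iteration count are illustrative, not any vendor’s implementation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 200_000):
    """Derive a one-way, salted hash of a password with PBKDF2-HMAC-SHA256.

    A random per-user salt means identical passwords produce different
    hashes; the iteration count makes brute-force guessing expensive.
    """
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    iterations: int = 200_000) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, expected)
```

The stored record is the (salt, digest) pair; the password itself is never written down and cannot be recovered from the hash.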
When sending PHI from one clinical system to another, many Direct Messaging providers (aka HISPs) have to go through audits covering these areas and many more. Consumer health device and app vendors would do well to find a good partner or mentor.
I have a hard time believing that any production healthcare app that transmits video or PHI is not using, at a minimum, off-the-shelf SSL/TLS to secure the data in transport. I say this because enabling it is simple and requires little incremental development effort. Securing the data at rest and ensuring the data is not vulnerable at various points within the server infrastructure is more difficult, and that is where we are seeing the majority of data compromise as an industry, whether through technical or social hacking. So I don’t really see the client device or wearable as the source of concern or the historical target of hacking efforts.