
Is HIPAA Misuse Blocking Patient Use Of Their Data?

Posted on August 18, 2015 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Recently, a story in the New York Times recounted some troubling examples of how HIPAA misunderstandings have crept into both professional and personal settings. These included:

  • A woman getting scolded at a hospital in Boston for “very improper” speech after discussing her husband’s medical situation with a dear friend.
  • Refusal by a Pennsylvania hospital to take a daughter’s information on her mother’s medical history, citing HIPAA, despite the fact that the daughter wasn’t *requesting* any data. The woman’s mother was infirm and couldn’t share medical history — such as her drug allergy — on her own.
  • The announcement, by a minister in California, that he could no longer read the names of sick congregants due to HIPAA.

All of this is bad enough, particularly the case of the Pennsylvania hospital refusing to take information that could have protected a helpless elderly patient, but the effects of this ignorance create even greater ripples, I’d argue.

Let’s face it: our efforts to convince patients to engage with their own medical data haven’t been terribly successful as of yet. According to a study released late last year by Xerox, 64% of patients were not using patient portals, and 31% said that their doctor had never discussed portals with them.

Some of the reasons patients aren’t taking advantage of the medical data available to them include ignorance and fear, I’d argue. Technophobia and a history of just “trusting the doctor” play a role as well. What’s more, poring over lab results and imaging studies might seem overwhelming to patients who have never done it before.

But that’s not all that’s holding people back. In my opinion, the climate of fear around medical data that HIPAA misunderstandings have created is playing a major part too.

While I understand why patients have to sign acknowledgements of privacy practices and be taught what HIPAA is intended to do, this doesn’t exactly foster a climate in which patients feel like they own their data. While doctors’ offices and hospitals may not have done this deliberately, the way they administer HIPAA compliance can make medical data seem portentous, scary and dangerous, more like a bomb set to go off than a tool patients can use to manage their care.

I guess what I’m suggesting is that if providers want to see patients engaged and managing their care, they should make sure patients feel comfortable asking for access to and using that data. While some may never feel at ease digging into their test results or correcting their medical history, I believe that there’s a sizable group of patients who would respond well to a reminder that there’s power in doing so.

The truth is that while most providers now give patients the option of logging on to a portal, they typically don’t make it easy. And heaven knows even the best-trained physician office staff rarely take the time to urge patients to log on and learn.

But if providers make the effort to balance stern HIPAA paperwork with encouraging words, patients are more likely to get inspired. Sometimes, all it takes is a little nudge to get people on board with new behavior. And there’s no excuse for letting foolish misinterpretations of HIPAA prevent that from happening.

Knotty Problems Surround Substance Abuse Data Sharing via EMRs

Posted on May 27, 2015 | Written By Anne Zieger

As I see it, rules giving mental health and substance abuse data extra protection are critical. Maybe someday, there will be little enough stigma around these illnesses that special privacy precautions aren’t necessary, but that day is far in the future.

That’s why a new bill filed by Reps. Tim Murphy (R-Pa.) and Paul Tonko (D-N.Y.), aimed at simplifying the sharing of substance abuse data between EMRs, deserves a close look by those of us who track EMR data privacy. Tonko and Murphy propose to loosen federal rules on such data sharing so that a single filled-out consent form from a patient would allow data sharing throughout a hospital or health system.

As things currently stand, federal law requires that in the majority of cases, federally-assisted substance abuse programs are barred from sharing personally-identifiable patient information with other entities if the programs don’t have a disclosure consent. What’s more, each other entity must itself obtain another consent from a patient before the data gets shared again.

At a recent hearing on the 21st Century Cures Act, Rep. Tonko argued that the federal requirements, which became law before EMRs were in wide use, were making it more difficult for individuals fighting a substance abuse problem to get the coordinated care that they needed. While they might have been effective privacy protections at one point, today the need for patients to repeatedly approve data sharing merely interferes with providers’ ability to offer value-based care, he suggested. (It’s hard to argue that hitting such walls does ACOs any good.)

Clearly, Tonko’s goals can be met in some form.  In fact, other areas of the clinical world are making great progress in sharing mental health data while avoiding data privacy entanglements. For example, a couple of months ago the National Institute of Mental Health announced that its NIMH Limited Datasets project, including data from 23 large NIMH-supported clinical trials, just sent out its 300th dataset.

Rather than offering broad access to de-identified data, the datasets contain private human study participant information but are shared only with qualified researchers. Those researchers must win approval of a Data Use Certification agreement, which specifies how the data may be used, including what data confidentiality and security measures must be taken.

Of course, practicing clinicians don’t have time to get special approval to see the data for every patient they treat, so this NIMH model doesn’t resolve the issues hospitals and providers face in providing coordinated substance abuse care on the fly.

But until a more flexible system is put in place, perhaps some middle ground exists in which clinicians outside of the originating institution can be granted temporary, role-based “passes” offering limited access to patient-identifiable substance abuse data. That is something EMRs should be well equipped to support. And if they’re not, this would be a great time to ask why!
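To make the idea concrete, here’s a minimal sketch of what such a temporary, role-based pass might look like. All names, roles and expiry values here are invented for illustration; a real EMR would tie this to its own authentication and consent infrastructure.

```python
import time
import uuid

# Roles permitted to receive a temporary pass (illustrative only).
ALLOWED_ROLES = {"attending_physician", "care_coordinator"}

class AccessPass:
    """A short-lived, role-based grant to one patient's protected records."""

    def __init__(self, clinician_id, role, patient_id, ttl_seconds=3600):
        if role not in ALLOWED_ROLES:
            raise PermissionError(f"role {role!r} may not receive a pass")
        self.token = uuid.uuid4().hex          # opaque handle for the EMR
        self.clinician_id = clinician_id
        self.role = role
        self.patient_id = patient_id
        self.expires_at = time.time() + ttl_seconds

    def permits(self, patient_id):
        # Valid only for its own patient, and only until it expires.
        return patient_id == self.patient_id and time.time() < self.expires_at

# A one-hour pass for an outside attending physician:
p = AccessPass("dr_smith", "attending_physician", patient_id="pt-001")
print(p.permits("pt-001"))  # True while the pass is fresh
print(p.permits("pt-002"))  # False: the pass is scoped to a single patient
```

Because the pass expires on its own, nothing needs to be revoked manually when the episode of care ends, which is what makes this pattern attractive for coordinated care on the fly.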

Emerging Health Apps Pose Major Security Risk

Posted on May 18, 2015 | Written By Anne Zieger

As new technologies like fitness bands, telemedicine and smartphone apps have become more important to healthcare, the issue of how to protect the privacy of the data they generate has become more important, too.

After all, all of these devices use the public Internet to broadcast data, at least at some point in the transmission. Typically, telemedicine involves a direct connection to a remote server over an unsecured Internet connection (although providers generally apply some form of encryption to the data being sent over it). If they’re being used clinically, monitoring technologies such as fitness bands hop from the band over wireless spectrum to a smartphone, which also uses the public Internet to communicate data to clinicians. And the public Internet is just the entry point to a myriad of ways hackers could get access to this health data.
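As a small illustration of the baseline defense involved, here’s a sketch (using Python’s standard ssl module) of how a telemedicine client might insist on an encrypted, authenticated channel before sending anything across the public Internet. The server hostname is, of course, a made-up example.

```python
import ssl

def make_telemedicine_context():
    # create_default_context() turns on certificate validation
    # (verify_mode == CERT_REQUIRED) and hostname checking, so the
    # client both encrypts traffic and verifies it is talking to the
    # intended server, not an impostor on the same network.
    return ssl.create_default_context()

ctx = make_telemedicine_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
# ctx.wrap_socket(tcp_sock, server_hostname="telehealth.example.org")
# would then wrap the connection before any patient data is sent.
```

The point is simply that encryption in transit is table stakes; it protects the hop across the public Internet but does nothing about the many other places the data lands.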

My hunch is that this exposure of data to potential thieves hasn’t generated a lot of discussion because the technology isn’t mature. And what’s more, few doctors actually work with wearables data or offer telemedicine services as a routine part of their practice.

But it won’t be long before these emerging channels for tracking and caring for patients become a standard part of medical practice.  For example, the use of wearable fitness bands is exploding, and middleware like Apple’s HealthKit is increasingly making it possible to collect and mine the data that they produce. (And the fact that Apple is working with Epic on HealthKit has lured a hefty percentage of the nation’s leading hospitals to give it a try.)

Telemedicine is growing at a monster pace as well. One study from last year by Deloitte concluded that the market for virtual consults in 2014 would hit 70 million, and that the market for overall telemedicine visits could climb to 300 million over time.

Given that the data generated by these technologies is medical, private and presumably protected by HIPAA, where’s the hue and cry over protecting this form of patient data?

After all, though a patient’s HIV or mental health status won’t be revealed by a health band’s activity data, telemedicine consults certainly can betray those conditions. And while a telemedicine consult won’t provide data on a patient’s current cardiovascular health, wearables can, and that data might be of interest to payers or even life insurers.

I admit that when the data being broadcast isn’t clear text summaries of a patient’s condition, possibly with their personal identity, credit card and health plan information, it doesn’t seem as likely that patients’ well-being can be compromised by medical data theft.

But all you have to do is look at human nature to see the flaw in this logic. I’d argue that if medical information can be intercepted and stolen, someone will find a way to make money from it. It’d be a good idea to prepare for this eventuality before a patient’s privacy is betrayed.

An Important Look at HIPAA Policies For BYOD

Posted on May 11, 2015 | Written By Anne Zieger

Today I stumbled across an article which I thought readers of this blog would find noteworthy. In the article, Art Gross, president and CEO at HIPAA Secure Now!, made an important point about BYOD policies. He notes that while much of today’s corporate computing is done on mobile devices such as smartphones, laptops and tablets — most of which access their enterprise’s e-mail, network and data — HIPAA offers no advice as to how to bring those devices into compliance.

Given that most of the spectacular HIPAA breaches in recent years have arisen from the theft of laptops, and are likely to proceed to theft of tablet and smartphone data, it seems strange that HHS has done nothing to update the rule to address the increasing use of mobile devices since it was drafted in 2003. As Gross rightly asks, “If the HIPAA Security Rule doesn’t mention mobile devices, laptops, smartphones, email or texting how do organizations know what is required to protect these devices?”

Well, Gross’ peers have given the issue some thought, and here are some suggestions from law firm DLA Piper on how to dissect the issues involved. BYOD challenges under HIPAA, notes author Peter McLaughlin, include:

  • Control: To maintain protection of PHI, providers need to control many layers of computing technology, including network configuration, operating systems, device security and transmissions outside the firewall. McLaughlin notes that Android OS-based devices pose a particular challenge, as the system is often modified to meet hardware needs. And in both iOS and Android environments, IT administrators must also manage users’ tendency to connect to their preferred cloud services and download their own apps. Otherwise, a large volume of protected health data can end up outside the firewall.

  • Compliance: Healthcare organizations and their business associates must take care to meet HIPAA mandates regardless of the technology they use. But securing even basic information, much less regulated data, on employee-owned devices can be far more difficult than on company-owned devices governed by restrictive rules.

  • Privacy: When enterprises let employees use their own devices to do company business, it’s highly likely that employees will feel entitled to use those devices as they see fit. In reality, McLaughlin suggests, employees don’t have full, private control of their devices, in part because company policy usually requires a remote wipe of all data when a device is lost. Employees might also find that their device’s data becomes discoverable if it is relevant to litigation.

So, readers, tell us how you’re walking the tightrope between giving employees who BYOD some autonomy, and protecting private, HIPAA-protected information.  Are you comfortable with the policies you have in place?

Full Disclosure: HIPAA Secure Now! is an advertiser on this website.

Wearables And Mobile Apps Pose New Data Security Risks

Posted on December 30, 2014 | Written By Anne Zieger

In the early days of mobile health apps and wearable medical devices, providers weren’t sure they could cope with yet another data stream. But as the uptake of these apps and devices has grown over the last two years, at a rate surpassing virtually everyone’s expectations, providers and payers both have had to plan for a day when wearable and smartphone app data become part of the standard dataflow. The potentially billion-dollar question is whether they can figure out when, where and how they need to secure such data.

To do that, providers are going to have to face up to new security risks that they haven’t faced before, as well as doing a good job of educating patients on when such data is HIPAA-protected and when it isn’t. While I am most assuredly not an attorney, wiser legal heads than mine have reported that once wearable/app data is used by providers, it’s protected by HIPAA safeguards, but in other situations — such as when it’s gathered by employers or payers — it may not be protected.

For an example of the gray areas that bedevil mobile health data security, consider the case of upstart health insurance provider Oscar Health, which recently offered free Misfit Flash bands to its members. The company’s leaders have promised members who use the bands that if their collected activity numbers look good, they’ll take roughly $240 off their annual premium. And they’ve promised that the data won’t be used for diagnostics or any other medical purpose. This promise may be worthless, however, if they are still legally free to resell this data to, say, pharmaceutical companies.

Logical and physical security

Meanwhile, even if providers, payers and employers are very cautious about violating patients’ privacy, their careful policies will be worth little if they don’t take a look at managing the logical and physical security risks inherent in passing around so much data across multiple Wi-Fi, 4G and corporate networks.

While it’s not yet clear what the real vulnerabilities are in shipping such data from place to place, it’s clear that new security holes will pop up as smartphone and wearable health devices ramp up to sharing data on a massive scale. In an industry still struggling with BYOD security (that is, with corralling data facilities already work with daily), protecting and appropriately segregating connected health data will pose an even bigger challenge.

After all, any time you begin to rely on a new network model with new data handoff patterns (in this case, medical devices and wearables streaming data to smartphones over Wi-Fi, smartphones forwarding that data to providers via 4G LTE cellular protocols, and providers processing the data on corporate networks), there are bound to be security issues we haven’t found yet.

Cybersecurity problems could lead to mHealth setbacks

Worst of all, hospitals’ and medical practices’ cyber security protocols are quite weak (as researcher after researcher has pointed out of late). Particularly given how valuable medical identity data has become, healthcare organizations need to work harder to protect their cyber assets and see to it that they’ve at least caught the obvious holes.

But to date, if our experiences with medical device security are any indication, not only are hospitals and practices vulnerable to standard cyber hacks on network assets, they’re also finding it difficult to protect the core medical devices needed to diagnose and treat patients, such as MRI machines, infusion pumps and even, in theory, personal gear like pacemakers and insulin pumps. It doesn’t inspire much confidence that the Conficker worm, which attacked medical devices across the world several years ago, is still alive and kicking, and in fact accounted for 31% of the year’s top security threats.

If malevolent outsiders mount attacks on the flow of connected health data, and succeed at stealing it, not only is it a brand-new headache for healthcare IT administrators, it could create a crisis of confidence among mHealth stakeholders. In other words, while patients, providers, payers, employers and even pharmaceutical companies seem comfortable with the idea of tapping digital health data, major hacks into that data could slow the progress of such solutions considerably. Let’s hope those who focus on health IT security take the threat to wearables and smartphone health app data seriously going into 2015.

Confusing HIPAA Compliance With Security

Posted on October 2, 2014 | Written By Anne Zieger

Most people who read this publication know that while HIPAA compliance is necessary, it’s not sufficient to protect your data. Too many healthcare leaders, especially in hospitals, seem satisfied with the song and dance their cloud vendor gave them, or with a business associate that promises on a stack of Bibles that it’s in compliance.

I was reminded of this just the other day when Reuters came out with some shocking statistics. One particularly discomforting stat it reported was the fact that medical data is now worth 10 times more than your credit card number on the black market (even if John has argued otherwise). Why? Well, among other things, because medical identity theft isn’t tracked well by providers and payers, which means that a stolen identity can last for months or years before it’s closed down.

Healthcare is not only lagging behind other industries in its hardware and software infrastructure, but also in the extent to which its executives care how exposed they are to a breach. Security experts note that senior executives in hospitals see security as a tactical, not a strategic, problem, and they don’t spend much time or money on it.

But this could be a deadly mistake. As Jeff Horne, vice president at cybersecurity firm Accuvant, noted to Reuters, “healthcare providers and hospitals are just some of the easiest networks to break into. When I’ve looked at hospitals, and when I’ve talked to other people inside of a breach, they are using very old legacy systems – Windows systems that are 10+ years old that have not seen a patch.”

As if that wasn’t enough, it’s been increasingly demonstrated that medical devices — from infusion pumps to MRIs — are also frighteningly vulnerable to cyber attacks. The vulnerabilities might not be found for months, and when they are, the hapless provider has to wait for the vendor to do the patching to stay in FDA compliance.

So far, even the biggest HIPAA breaches — notably the 4.5 million patient records stolen from hospital giant Community Health Systems — don’t seem to have generated much change. But the sad truth is that unless hospitals get their act together, focus senior executive attention on the issue, and spend enough money to fix the many vulnerabilities that exist, we’re likely to be at the forefront of a very ugly time indeed.

Chinese Hackers Reportedly Access 4.5 Million Medical Records

Posted on August 18, 2014 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

The headline of a tech startup blog I read pretty regularly caught my attention today: “Another day, another Chinese hack: 4.5M medical records reportedly accessed at national hospital operator”. The title seems to say it all. It’s almost as if the journalist sees the breach as standard fare these days. Just to be clear, I don’t think he thinks breaches are standard in healthcare; I think he thinks breaches are standard in all IT. As he says at the end of the article:

Community Health Systems joins a long list of large companies suffering from major cybersecurity breaches. Among them, Target, Sony, Global Payment Systems, eBay, Visa, Adobe, Yahoo, AOL, Zappos, Marriott/Hilton, 7-Eleven, NASDAQ, and others.

Yes, healthcare is not alone in their attempt to battle the powers of evil (and some not so evil, but possibly dangerous) forces that are hacking into systems large and small. We can certainly expect this trend to continue and likely get worse as more and more data is stored electronically.

For those interested in the specific story, Community Health Systems, a national hospital provider based in Nashville, reported the HIPAA breach in its latest SEC filings. Pando Daily reported that “Chinese Hackers” used “highly sophisticated malware” to breach Community Health Systems between April and June. What doesn’t make sense to me is this part of the Pando Daily article:

The outside investigators described the breach as dealing with “non-medical patient identification data,” adding that no financial data was stolen. The data, which includes patient names, addresses, birth dates, telephone numbers, and Social Security numbers, was, however, protected under the Health Insurance Portability and Accountability Act (HIPPA).

I’m not sure what they define as financial data, but social security numbers feel like financial data to me. Maybe they meant hospital financial data, but that’s an odd comment since a stack of social security numbers is likely a lot more valuable than some hospital financial data. The patient data they describe could be an issue for HIPAA though.

As is usually the case in major breaches like this, I can’t imagine a Chinese hacker is that interested in “patient data.” In fact, from the list, I’d define the data listed as financial data. I’ve read lots of stories that pin the value of a medical record on the black market at $50 per record, with a credit card worth much less. However, I bet that if I were to dig into the black market for data (which I haven’t, since that’s not my thing), I’d find a lot of buyers for credit card data tied to other personal data like birth dates and addresses, and few buyers for actual medical data. As in many parts of life, something is only as valuable as what someone else is willing to pay for it. People are willing to pay for financial data. We know that.

We shouldn’t use this idea as a reason why we don’t have to worry about the security and privacy of healthcare data. We should take every precaution available to create a culture of security and privacy in our institutions and in our healthcare IT implementations. However, I’m just as concerned with the local breach of a much smaller handful of patient data as I am about the 4.5 million medical record breach to someone in China. They both need to be prevented, but the former is not 4.5 million times worse. Well, unless you’re talking about potential HIPAA penalties.

HIPAA Slip Leads To PHI Being Posted on Facebook

Posted on July 1, 2014 | Written By Anne Zieger

HHS has begun investigating a HIPAA breach at the University of Cincinnati Medical Center which ended with a patient’s STD status being posted on Facebook.

The disaster — for both the hospital and the patient — happened when a financial services employee shared detailed medical information with the father of the patient’s then-unborn baby. The father took the information, which included an STD diagnosis, and posted it publicly on Facebook, ridiculing the patient in the process.

The hospital fired the employee in question once it learned about the incident (and a related lawsuit), but there’s some question as to whether it reported the breach to HHS. The hospital says that it informed HHS about the breach in a timely manner, and has proof that it did so, but according to HealthcareITNews, the HHS Office for Civil Rights hadn’t heard about the breach when questioned by a reporter last week.

While the public posting of data and personal attacks on the patient weren’t done by the (ex) employee, that may or may not play a factor in how HHS sees the case. Given HHS’ increasingly low tolerance for breaches of any kind, I’d be surprised if the hospital didn’t end up facing a million-dollar OCR fine in addition to whatever liabilities it incurs from the privacy lawsuit.

HHS may be losing its patience because the pace of HIPAA violations doesn’t seem to be slowing.  Sometimes, breaches are taking place due to a lack of the most basic security protocols. (See this piece on last year’s wackiest HIPAA violations for a taste of what I’m talking about.)

Ultimately, some breaches will occur because a criminal outsmarted the hospital or medical practice. But sadly, far more seem to take place because providers have failed to give their staff an adequate education on why security measures matter. Experts note that staffers need to know not just what to do, but why they should do it, if you want them to act appropriately in unexpected situations.

While we’ll never know for sure, the financial staffer who gave the vengeful father his girlfriend’s PHI may not have known the father was up to no good. But the truth is, he should have.

HIMSS: Insider Threats Still Biggest Health IT Security Worry

Posted on February 27, 2014 | Written By Anne Zieger

You can do whatever you like to lock down your data, but all it takes is one insider who knows how to unlock it to create a serious security breach.

Results from the 2013 HIMSS Security Survey suggest that despite progress towards hardening security and use of analytics, healthcare organizations must still do more to mitigate the risk of insider threats, such as the inappropriate access of data by employees.

The HIMSS survey, which was supported by the Medical Group Management Association and underwritten by Experian Data Breach Resolution, surveyed 283 information technology and security professionals employed in US hospitals and physician practices. What the researchers found was that the greatest perceived threat was that of healthcare workers potentially snooping into EMRs to find information on friends, neighbors, spouses or coworkers.

Given that healthcare IT leaders are particularly concerned about inappropriate use of health data by insiders, you won’t be surprised to hear that there’s been increased use of several technologies related to patient data access, including user access controls and audit logs of each access to patient records.

But you may be surprised to learn that while 51 percent of respondents increased their security spending in the past year, 49 percent of organizations are still spending just 3 percent or less of their overall IT budget on securing patient data.

Other findings from the HIMSS survey include that healthcare organizations are using multiple means of controlling employee access to patient information; 67 percent use at least two mechanisms, such as user-based and role-based controls, for controlling access to the data.
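As a rough illustration of how those two mechanisms fit together, here’s a sketch (all names invented, no particular EMR product implied) of role-based access checks paired with an audit log that records every access attempt, permitted or not, so snooping leaves a trail.

```python
from datetime import datetime, timezone

# Illustrative role-to-permission mapping.
ROLE_PERMISSIONS = {
    "physician": {"read", "write"},
    "billing_clerk": {"read"},
    "researcher": set(),  # no direct access to identified records
}

audit_log = []

def access_record(user, role, patient_id, action):
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    # Every attempt is logged, whether or not it is permitted.
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user, "role": role,
        "patient": patient_id, "action": action,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{role} may not {action} patient records")
    return f"{action} granted on {patient_id}"

access_record("dr_jones", "physician", "pt-042", "read")   # succeeds
try:
    access_record("temp_clerk", "billing_clerk", "pt-042", "write")
except PermissionError:
    pass  # denied, but the attempt is still captured in audit_log
```

The audit trail is what makes the snooping threat tractable: even when a login is legitimate, an access pattern that doesn’t match a care relationship can be flagged after the fact.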

Is Your EMR Compromising Patient Privacy?

Posted on November 20, 2013 | Written By

James Ritchie is a freelance writer with a focus on health care. His experience includes eight years as a staff writer with the Cincinnati Business Courier, part of the American City Business Journals network. Twitter @HCwriterJames.

Two prominent physicians this week pointed out a basic but, in the era of information as a commodity, sometimes overlooked truth about EMRs: They increase the number of people with access to your medical data thousands of times over.

Dr. Mary Jane Minkin said in a Wall Street Journal video panel on EMR and privacy that she dropped out of the Yale Medical Group and Medicare because she didn’t want her patients’ information to be part of an EMR.

She gave an example of why: Minkin, a gynecologist, once treated a patient for decreased libido. When the patient later visited a dermatologist in the Yale system, that sensitive bit of history appeared on a summary printout.

“She was outraged,” she told Journal reporter Melinda Beck. “She felt horrible that this dermatologist would know about her problem. She called us enraged for 10 or 15 minutes.”

Dr. Deborah Peel, an Austin psychiatrist and founder of the nonprofit group Patient Privacy Rights, said she’s concerned about the number of employees, vendors and others who can see patient records. Peel is a well-known privacy advocate but has been accused by some health IT leaders of scaremongering.

“What patients should be worried about is that they don’t have any control over the information,” she said. “It’s very different from the paper age where you knew where your records were. They were finite records and one person could look at them at a time.”

She added: “The kind of change in the number of people who can see and use your records is almost uncountable.”

Peel said the lack of privacy causes people to delay or avoid treatment for conditions such as cancer, depression and sexually transmitted infections.

But Dr. James Salwitz, a medical oncologist in New Jersey, said on the panel that the benefits of EMR, including greater coordination of care and reduced likelihood of medical errors, outweigh any risks.

The privacy debate doesn’t have clear answers. Paper records are, of course, not immune to being lost, stolen or mishandled.

In the case of Minkin’s patient, protests aside, it’s reasonable for each physician involved in her care to have access to the complete record. While she might not think certain parts of her history are relevant to particular doctors, spotting non-obvious connections is an astute clinician’s job. At any rate, even without an EMR, the same information might just as easily have landed with the dermatologist via fax.

That said, privacy advocates have legitimate concerns. Since it’s doubtful that healthcare will go back to paper, the best approach is to improve EMR technology and the procedures that go with it.

Plenty of work is underway.

For example, at the University of Texas at Arlington, researchers are leading a National Science Foundation project to keep healthcare data secure while ensuring that the anonymous records can be used for secondary analysis. They hope to produce groundbreaking algorithms and tools for identifying privacy leaks.

“It’s a fine line we’re walking,” Heng Huang, an associate professor at UT Arlington’s Computer Science & Engineering Department, said in a press release this month. “We’re trying to preserve and protect sensitive data, but at the same time we’re trying to allow pertinent information to be read.”
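As a toy illustration of the tension Huang describes (and not the project’s actual algorithms), a classic test is k-anonymity: every combination of quasi-identifiers in a released dataset must be shared by at least k records, or a unique record can be re-identified. The field names and sample data below are invented.

```python
from collections import Counter

def is_k_anonymous(records, k):
    # Count how many records share each quasi-identifier combination
    # (here, 3-digit zip prefix plus age bracket).
    counts = Counter((r["zip3"], r["age_bracket"]) for r in records)
    return all(n >= k for n in counts.values())

records = [
    {"zip3": "452", "age_bracket": "30-39", "dx": "flu"},
    {"zip3": "452", "age_bracket": "30-39", "dx": "asthma"},
    {"zip3": "787", "age_bracket": "60-69", "dx": "copd"},  # unique combination
]
print(is_k_anonymous(records, 2))  # False: the last record stands alone
```

A dataset that fails the check would need its quasi-identifiers generalized further (say, wider age brackets) before release, which is exactly the trade-off between protection and research utility the UT Arlington team is trying to optimize.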

When it comes to balancing technology with patient privacy, healthcare professionals will be walking a fine line for some time to come.