Embarrassment, Career Suicide, or Jail

Posted on July 26, 2018 | Written By

Mike Semel is a noted thought leader, speaker, blogger, and best-selling author of HOW TO AVOID HIPAA HEADACHES . He is the President and Chief Security Officer of Semel Consulting, focused on HIPAA and other compliance requirements; cyber security; and Business Continuity planning. Mike is a Certified Business Continuity Professional through the Disaster Recovery Institute, a Certified HIPAA Professional, Certified Security Compliance Specialist, and Certified Health IT Specialist. He has owned or managed technology companies for over 30 years; served as Chief Information Officer (CIO) for a hospital and a K-12 school district; and managed operations at an online backup company.

What You Can Learn from the Russian Army, the US Navy, and a Suspended Nurse

The General Counsel at one of our clients is a former district attorney who prosecuted identity theft cases. When I told him we work with people who think identity theft is a victimless crime, he got very angry and rattled off a list of cases he had tried in which the victims suffered lasting damage. Cybercrimes and compliance violations are not victimless.

Identity theft victims have suffered threats of violence and arrest, financial ruin, business interruptions, damaged careers, and emotional and physical stress. Some have considered suicide.

Most data breaches are malicious, but some people who committed bad acts did not know they were breaking laws. They thought their actions were just ‘mischief’, or mistakenly thought what they were doing was OK, and found out the hard way that they had committed crimes. Their careers were destroyed and some faced criminal charges. Some blamed their training, which may have been incomplete, but ignorance of the law is no excuse.

SPEAR-PHISHING by the RUSSIAN ARMY

Twelve members of the GRU, the Russian military intelligence service, were indicted by the United States for meddling in our elections using spear-phishing techniques that were remarkably effective. Those who were targeted suffered public shame and career damage.

Phishing is when hackers send out broadly targeted e-mails, seemingly from banks, fax services, and businesses, trying to sucker many people into clicking on a link and sharing their personal data, or silently installing malicious software on their computers.

Spear-phishing is when a personally targeted message is sent just to you, seemingly from a colleague or vendor – using names you recognize – asking you to send sensitive information or to click on a link that will install malicious software. These messages can be very tough to spot, because the hackers make you think the message is personal and from someone you know. One popular method is to send the message from an e-mail address that is one or two letters different from a real address. Your eyes play tricks and you miss the slight difference in the address.
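
For the technically inclined, here is how small those differences really are. This is a minimal sketch, using only the Python standard library, of the similarity scoring a mail filter might apply to sender domains; the trusted-domain list and the 0.9 threshold are illustrative assumptions, not settings from any particular product:

    import difflib

    TRUSTED_DOMAINS = {"example-hospital.org", "example-vendor.com"}  # hypothetical

    def lookalike_score(sender_domain):
        """Highest similarity between the sender's domain and any trusted domain."""
        return max(
            difflib.SequenceMatcher(None, sender_domain.lower(), d).ratio()
            for d in TRUSTED_DOMAINS
        )

    def is_suspicious(sender_domain):
        # Nearly identical to a trusted domain, but not an exact match:
        # the classic one-or-two-letters-off spear-phishing trick.
        return (sender_domain.lower() not in TRUSTED_DOMAINS
                and lookalike_score(sender_domain) > 0.9)

    print(is_suspicious("example-hospitai.org"))  # True: 'l' swapped for 'i'
    print(is_suspicious("example-hospital.org"))  # False: exact match

Real mail gateways layer many more signals (SPF and DKIM checks, display-name matching, sender history) on top of simple similarity scoring, but the one-character swap above is exactly the kind your eyes miss and software catches.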

Spear-phishing resulted in the Russians allegedly getting the logins and passwords of Democratic and Republican party officials, which they used to get access to e-mails and other sensitive data.

Another personally targeted attack resulted in a company’s HR staff sending its W-2 tax details, including all employee Social Security Numbers, at the request of their “CEO” – actually a hacker using an e-mail address very similar to that of the targeted company’s CEO. Employees filed their tax returns, only to find out the hackers had already filed phony returns and collected refunds using their names and Social Security Numbers. Now these employees are on special lists of victims, delaying their future tax refunds; making it more difficult to get loans and maintain their credit ratings; and creating real stress and anxiety.

Spear-phishing has been used successfully by hackers to get CFOs to transfer money to a hacker’s bank account, at the supposed request of their company’s CEO. These scams are often discovered far too late – only after a CFO casually mentions to the CEO that the $500,000 transfer the CEO “requested” went through, and sees the look of panic on the CEO’s face.

What You Should Do

  • Individuals: Beware of every e-mail asking you to provide personal information, click on a link, transfer money, or send sensitive information. Call or meet face-to-face with the person requesting the information, to ensure it is legitimate.
  • Employers: Use a phishing training vendor to train your employees to recognize and report phishing and spear-phishing attempts. Use spam filters to block messages from known hackers. Implement policies to slow down the transfer of sensitive data, by requiring a phone or in-person verification any time someone in your organization receives a request for sensitive data, or a money transfer. While inconvenient, a delay is much better than discovering the request was fraudulent.

STEALING DATA – US NAVY SECRETS, and a SUSPENDED NURSING LICENSE

A former employee of a US Navy contractor was found guilty in federal court of stealing secret information simply by using a company computer to create a personal Dropbox account and transferring thousands of company documents. Jared Dylan Sparks is awaiting sentencing on six convictions, each of which can bring 10 years in federal prison, after he stole trade secrets from his employer while seeking employment at another company.

In another case, the New York State Department of Health suspended a FORMER nurse after she took 3,000 patient records from a previous employer to her new job.

According to healthitsecurity.com, “the list included the patients’ names, addresses, dates of birth, and diagnoses. Martha Smith-Lightfoot asked for the list to ensure continuity of care for the patients. However, she did not receive the permission of URMC or the patients to give the information to her new employer.”

Smith-Lightfoot agreed to a one-year suspension, a one-year stayed suspension, and three years’ probation. She can’t work as a nurse for a year. What do you think her career prospects will be after that, when anyone who verifies her license status will see why she was suspended?

What You Should Do

  • Individuals: Understand the requirements of your license or certification, and the laws that protect data. Licensing requirements for privacy and confidentiality pre-date HIPAA. While your organization may face a HIPAA penalty, you may face a damaged or destroyed career, as well as jail time.
  • Employers: Educate your workforce (EVERYONE, including employees, volunteers, contractors, vendors, etc.) about keeping patient, employment, and sensitive business information secure and confidential. Have everyone sign confidentiality agreements. You must also be willing to enforce your policies evenly. Terminating a long-term employee who breaks your rules may seem harsh, but it is necessary if you want to avoid corporate theft, compliance violations, and the wrongful termination lawsuits that can follow if you fire someone after letting another person get away with the same violation.

We have worked with clients whose current and former workforce members used cloud-sharing services like Dropbox, Google Drive, and Microsoft OneDrive. By the time we discovered these tools were installed on their networks, it was often too late. Data was already out the door, and no one knew what was taken. Implement Data Loss Prevention (DLP) security software that will automatically block critical data from being transferred to e-mail, cloud services, or portable thumb drives. Those who need to move data can be exempted from blocking, but you should protect your organization against everyone else.
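
To make the DLP idea concrete, here is a minimal sketch of the pattern-matching at its core: scanning outbound content for something that looks like a Social Security Number before it leaves. Real DLP products do far more (document fingerprinting, endpoint agents, cloud-app controls); this only illustrates the concept:

    import re

    # e.g. 123-45-6789; real DLP rules also validate number ranges and context
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def should_block(outbound_text):
        """Return True if the outbound content appears to contain an SSN."""
        return bool(SSN_PATTERN.search(outbound_text))

    print(should_block("W-2 for Jane Doe, SSN 123-45-6789"))  # True: block it
    print(should_block("Quarterly newsletter draft"))         # False: let it go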

People get hurt by data theft and violating regulations. Protect yourself, your patients, and your organization.

Are You Investing Enough in IT Security?

Posted on July 20, 2018 | Written By Mike Semel

Would you put a $10 fence around a $100 horse?

Does it make sense to put a $100 fence around a $10 horse?

For the right security, you need to know what your horse is worth.

The same concepts apply to protecting your data. What is your data worth?

Ask Cottage Health, which had two data breaches totaling 55,000 records, settled a $4.1 million lawsuit with patients, and then paid a $2 million California penalty. It was also sued by its insurer, which wanted the $4.1 million settlement money back after discovering that Cottage Health had not consistently implemented the security controls it claimed on its insurance application. The $6.1 million in settlement and penalty does not include the costs of legal fees, credit monitoring, patient notification, public relations, or recovering the business lost when patients moved to other providers.

One of our clients was audited for HIPAA compliance by the venture capital firm that wanted to invest in their company. Another client had us do a compliance assessment on a healthcare company they wanted to purchase. In both cases, HIPAA compliance was worth millions of dollars.

We asked a client what the financial impact on their business would be if they lost the sensitive personal data they collect about business partners and had to notify everyone. The owner said they would be out of business, at a cost of millions of dollars.

Breaches result in lawsuits, with settlements in the millions. If you are a licensed or certified professional, you can lose your license or certification if you are breached.

Federal HIPAA penalties in 2014–2015 were $14 million. In 2016–2017 they tripled to $42 million. In 2018, they have already reached $7.9 million.

Data is worth more than gold.

Instead of words and images in a computer, think of your data as a pile of gold bars that is worth protecting.

When we work with clients, we help them identify the types of data they have, where it is located, and how it is protected. We recently worked with a client that came to us for help protecting their patient information. They were shocked when we showed them that the data they stored about workforce members and job applicants they did not hire posed bigger risks than the data about the people they served.

  • What data do you have that is regulated, that you must protect to comply with laws and other regulations?
  • What fines and lawsuit judgments might you face if your data is breached?
  • Beyond HIPAA, which protects patient information, do you know the state data breach laws that apply to employee data?
  • Do you know the regulations that protect credit card data?
  • Do you have enough of the right type of insurance to protect your finances if you are breached?

Everyone has unregulated data that is sensitive or proprietary and could hurt your business if it is lost, stolen, or accessed by a competitor or someone who wants to hurt you. Salaries, trade secrets, employment records, pricing models, merger and acquisition plans, and lawsuit files have all been stolen.

As part of our assessments, we search the Dark Web (the criminal side of the Internet) to see if our clients have employee passwords for sale by hackers. Over 90% have had at least one employee’s credentials stolen and offered for sale.
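
Dark Web searches like ours are a consulting service, but a related check anyone can run uses the public Have I Been Pwned “Pwned Passwords” range API. A minimal sketch follows; because of the API’s k-anonymity design, only the first five characters of the password’s SHA-1 hash ever leave your machine:

    import hashlib
    import urllib.request

    def times_password_breached(password):
        """Count appearances of a password in known breach corpora,
        sending only the 5-character hash prefix to the API."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        url = "https://api.pwnedpasswords.com/range/" + prefix
        with urllib.request.urlopen(url) as resp:
            # Response lines look like "HASH-SUFFIX:COUNT"
            for line in resp.read().decode("utf-8").splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    print(times_password_breached("Winter2018!"))  # nonzero means retire it

A password that comes back with a nonzero count is already circulating among criminals and should never be used again, anywhere.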

Most of our clients start out not knowing the value of their risks. They hadn’t approved IT security purchases, because the costs were high, and they didn’t know if security was worth the investment.

So, how much should you invest in protecting your data?

The recently released 2018 Cost of a Data Breach report shows, through research on actual breaches, that in 2017 the average cost to a breached organization for a single lost healthcare record was $408. Across all industries the cost was $233 per record. Only a third of that cost was the direct response to the breach – notifying patients, hiring lawyers and IT security experts, and paying for credit monitoring. Two-thirds of the $408 per record was the financial effect on the healthcare organization of losing patients after violating their trust.

Here is a calculation you can use to estimate the value of protecting your patient data.

Number of Patient Records x $408 (cost per record of a breach) = $________________ in risk.

Example: 25,000 records x $408 = $10.2 million. (If this number startles you, imagine if your costs were only 25% of the total, which is still $2.5 million.)
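
The same arithmetic as a trivial function you can adapt; the $408 figure is the report’s 2017 healthcare average:

    COST_PER_HEALTHCARE_RECORD = 408  # 2017 average from the breach-cost report

    def breach_risk_dollars(num_records, cost_per_record=COST_PER_HEALTHCARE_RECORD):
        """Back-of-the-envelope breach exposure in dollars."""
        return num_records * cost_per_record

    total = breach_risk_dollars(25_000)
    print(total)              # 10200000 -- the $10.2 million example above
    print(int(total * 0.25))  # 2550000  -- even at only 25% of the total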

Other ways to put a dollar value on your risk

  • How much would a breach affect the market value of your business?
  • How much investment capital do you need for expansion?
  • Personally, what would your retirement look like if you had to pay $1 million, $2 million, or more to cover the costs of a breach?
  • What would your life be like if you went out of business?

Know the value of your cyber security risk. Do the math.

Ask your IT department, or an outsourced independent IT security consultant, to assess your risks, and recommend what you need to be fully protected. Our assessments calculate your risks based on dollars, and provide ‘under the skin’ data about the current status of your security. Don’t settle for guesses.

Base your security investment on the value of your risks, not just the general idea that your data needs to be protected.

And, if you own a $100 horse, upgrade your $10 fence.

Key Articles in Health IT from 2017 (Part 2 of 2)

Posted on January 4, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The first part of this article set a general context for health IT in 2017 and started through the year with a review of interesting articles and studies. We’ll finish the review here.

A thoughtful article suggests a positive approach toward health care quality. The author stresses the value of organic change, although using data for accountability has value too.

An article extolling digital payments actually said more about the out-of-control complexity of the US reimbursement system. It may or may not be coincidental that the article appeared one day after the CommonWell Health Alliance announced an API whose main purpose seems to be to facilitate payment and other data exchanges related to law and regulation.

A survey by KLAS asked health care providers what they want in connected apps. Most apps currently just display data from a health record.

A controlled study revived the concept of Health Information Exchanges as stand-alone institutions, examining the effects of emergency departments using one HIE in New York State.

In contrast to many leaders in the new Administration, Dr. Donald Rucker received positive comments upon acceding to the position of National Coordinator. More alarm was raised about the appointment of Scott Gottlieb as head of the FDA, but a later assessment gave him high marks for his first few months.

Before Dr. Gottlieb got there, the FDA was already loosening up. The 21st Century Cures Act instructed it to keep its hands off many health-related digital technologies. After kneecapping consumer access to genetic testing and then allowing it back into the ring in 2015, the FDA advanced consumer genetics another step this year with approval for 23andMe tests about risks for seven diseases. A close look at another DNA site’s privacy policy, meanwhile, warns that their use of data exploits loopholes in the laws and could end up hurting consumers. Another critique of the Genetic Information Nondiscrimination Act has been written by Dr. Deborah Peel of Patient Privacy Rights.

Little noticed was a bill authorizing the FDA to be more flexible in its regulation of digital apps. Shortly after, the FDA announced its principles for approving digital apps, stressing good software development practices over clinical trials.

No improvement has been seen in the regard clinicians have for electronic records. Subjective reports condemned the notorious number of clicks required. A study showed they spend as much time on computer work as they do seeing patients. Another study found the ratio to be even worse. Shoving the job onto scribes may introduce inaccuracies.

The time spent might actually pay off if the resulting data could generate new treatments, increase personalized care, and lower costs. But the analytics that are critical to these advances have stumbled in health care institutions, in large part because of the perennial barrier of interoperability. Still, analytics are showing scattered successes across a growing range of uses.

Deloitte published a guide to implementing health care analytics. And finally, a clarion signal that analytics in health care has arrived: WIRED covers it.

A government cybersecurity report warns that health technology will likely soon contribute to the stream of breaches in health care.

Dr. Joseph Kvedar identified fruitful areas for applying digital technology to clinical research.

The Government Accountability Office, terror of many US bureaucracies, came out with a report criticizing the sloppiness of quality measures at the VA.

A report by leaders of the SMART platform listed barriers to interoperability and the use of analytics to change health care.

To improve the poorer outcomes seen in marginalized communities, the NIH is recruiting people from those populations to trust the government with their health data. A policy analyst calls on digital health companies to diversify their staff as well. Google’s parent company, Alphabet, is also getting into the act.

Specific technologies

Digital apps are part of most modern health efforts, of course. A few articles focused on the apps themselves. One study found that digital apps can improve depression symptoms. Another found that an app can help with ADHD.

Lots of intriguing devices are being developed.

Remote monitoring and telehealth have also been in the news.

Natural language processing and voice interfaces are becoming a critical part of spreading health care.

Facial recognition is another potentially useful technology. It can replace passwords or devices to enable quick access to medical records.

Virtual reality and augmented reality seem to have some limited applications to health care. They are useful foremost in education, but also for pain management, physical therapy, and relaxation.

A number of articles hold out the tantalizing promise that interoperability headaches can be cured through blockchain, the newest hot application of cryptography. But one analysis warned that blockchain will be difficult and expensive to adopt.

3D printing can be used to produce models for training purposes as well as surgical tools and implants customized to the patient.

A number of other interesting companies in digital health can be found in a Fortune article.

We’ll end the year with a news item similar to one that began the article: serious good news about the ability of Accountable Care Organizations (ACOs) to save money. I would also like to mention three major articles of my own this year.

I hope this review of the year’s articles and studies in health IT has helped you recall key advances or challenges, and perhaps flagged some valuable topics for you to follow. 2018 will continue to be a year of adjustment to new reimbursement realities touched off by the tax bill, so health IT may once again languish somewhat.

Nuance Takes Page from Healthcare Clients in Petya Outage Aftermath

Posted on November 6, 2017 | Written By

Colin Hung is the co-founder of the #hcldr (healthcare leadership) tweetchat one of the most popular and active healthcare social media communities on Twitter. Colin speaks, tweets and blogs regularly about healthcare, technology, marketing and leadership. He is currently an independent marketing consultant working with leading healthIT companies. Colin is a member of #TheWalkingGallery. His Twitter handle is: @Colin_Hung.

On June 27th the Petya malware (also known as NotPetya or ExPetr) struck Nuance Communications (NASDAQ: NUAN). For days the company’s eScription speech-recognition platform was unavailable, forcing thousands of healthcare clients to find alternatives for their medical transcription. During the crisis and in the weeks that followed, Nuance borrowed a page from its healthcare clients: not offering false hope, and deconstructing the incident to learn from it.

At the recent CHIME Fall Forum in San Antonio, Texas, I had the opportunity to sit down with Brenda Hodge, Chief Marketing Officer – Healthcare, and Ed Rucinski, Senior Vice President of Worldwide Healthcare Sales at Nuance, to talk about the Petya outage and where the company is headed.

“The challenge we faced with Petya brought us all together as a company,” explained Ed. “When our systems went offline, the entire organization rallied together. We had engineers and support staff who slept at the office on couches and cots. We had developers who went with less than 2hrs of sleep for 4 days straight because they wanted to help clients and bring our systems back online as quickly as possible. We became a nameless and rank-less organization working towards a common goal.”

As the outage went from minutes to hours to days, Nuance resisted the temptation to offer false hope to its clients. Instead, the company opted to be truthful and transparent. Nuance sent emails and directly called clients to let them know they had suffered a cyber attack, that the full extent of the damage was not known and that they did not know when their systems would be back online. The company did, however, commit to providing regular updates and being available to answer questions and address concerns.

The following is an abbreviated excerpt from a Nuance communication posted online by one of its clients:

Nuance corporate systems were unfortunately affected by a global cyber attack today. We went into immediate security protocol by shutting down our hosted production systems and platforms. There is no update at this time as to when the accounts will be back online but we will be holding regular calls throughout the day and night to gain insight into the timeline for resolution and I will update you again when I have more info. We are sorry for the inconvenience this outage has caused and we are working diligently to get things back online.

Clinicians are coached never to give patients in crisis or their families false hope. They calmly explain what happened, state the facts and talk about potential next steps. They do not, however, say that “things will be alright”, even though they know that is what everyone desperately wants to hear. Nuance used this same protocol during the Petya outage.

The company also used protocols similar to those used following an adverse event.

Healthcare is complex and despite the best efforts and best intentions of care teams, errors occur. These errors are referred to as adverse events. Adverse events that impact patient safety or that cause actual harm to patients are thoroughly documented, deconstructed and analyzed by clinical leaders as well as risk managers. The lessons gleaned from these unfortunate events are captured and used to improve operations. The goal is to prevent or mitigate the impact of similar events in the future.

After their systems were fully restored, the Nuance team embarked on a thorough review of the incident – from technical procedures to client communication protocols.

“We learned a lot through this incident,” says Hodge. “We got a first-hand education on how sophisticated malware has become. We’ve gone from viruses to malware to ransomware to coordinated nation-state attacks. That’s what Petya really is – a coordinated attack on company infrastructure. Now that we have been through this type of attack, we have put in new processes and technologies to prevent similar attacks in the future. Most importantly we have made investments in improving our response to these types of attacks.”

Nuance has gone one step further. They have committed to sharing their painful lessons learned with other companies and healthcare institutions. “Like it or not, we are all in this together,” continued Hodge. “The Petya attack came on the heels of the WannaCry ransomware attack that impacted many of our healthcare clients – so there was a lot of empathy from our clients. In fact, this whole incident has created a sense of solidarity in the healthcare technology community. Cyber attacks are not going to stop and we need to come together as an industry so that we are as prepared as we can be for the next one.”

“It’s unfortunate that it took an incident like this to show us what we are made of,” says Rucinski. “We had executives making coffee and fetching lunch for the support teams. We had leaders offering to run errands for staff because they knew they were too tired to keep up with those types of things. In the end we found out we truly embody the values and principles that we have hanging on posters around the office.”

NY-Based HIE Captures One Million Patient Consents

Posted on September 28, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

One of the big obstacles to the free exchange of health data is obtaining patient consent to share that data. It’s all well and good if we can bring exchange partners onto a single data sharing format, but if patients don’t consent to that exchange things get ugly. It’s critical that healthcare organizations solve this problem, because without patient consent HIEs are dead in the water.

Given these issues, I was intrigued to read a press release from HEALTHeLINK, an HIE serving Western New York, which announced that it had obtained one million patient consents to share their PHI. HEALTHeLINK connects nearly 4,600 physicians, along with hospitals, health plans and other healthcare providers. It’s part of a larger HIE, the Statewide Health Information Network of New York.

How did HEALTHeLINK obtain the consents? Apparently, there was no magic involved. The HIE made consent forms available at hospitals and doctors’ offices throughout its network, as well as making the forms available for download at whyhealthelink.com. (It may also have helped that they can be downloaded in any of 12 languages.)

I downloaded the consent form myself, and I must say it’s not complicated.

Patients only need to fill out a single page, which gives them the option to a) permit participating providers to access all of their electronic health information via the HIE, b) allow full access to the data except for specific participants, c) permit health data sharing only with specific participants, d) only offer access to their records in an emergency situation, and e) forbid HIE participants to access their health data even in the case of an emergency situation.

About 95% of those who consented chose option a, which seems a bit remarkable to me. Given the current level of data breaches in the news, I would’ve predicted that more patients would opt out to some degree.

Nonetheless, the vast majority of patients gave treating providers the ability to view their lab reports, medication history, diagnostic images and several additional categories of health information.

I wish I could tell you what HEALTHeLINK has done to inspire trust, but I don’t know completely. I suspect, however, that provider buy-in played a significant role here. While none of this is mentioned in the HIE’s press release or even on its website, I’m betting that the HIE team did a good job of firing up physicians. After all, if you’re going to pick someone patients would trust, physicians would be your best choice.

On the other hand, it’s also possible patients are beginning to get the importance of having all of the data available during care. While much of health IT is too abstruse for the layman (or woman), the idea that doctors need to know your medical history is clearly beginning to resonate with your average patient.

A Programmatic Approach to Print Security

Posted on July 17, 2017 | Written By

The following is a guest blog post by Sean Hughes, EVP Managed Document Services at CynergisTek.

Print devices are a necessary tool to support our workflows but at the same time represent an increasing threat to the security of our environment.

Most organizations today have a variety of devices – printers, copiers, scanners, thermal printers, and even fax machines – that make up their “print fleet”. This complex fleet often represents a wide variety of manufacturers, makes, and models of devices critical to supporting the business of healthcare.

Healthcare organizations continue to print a tremendous amount of paper as evidenced by an estimated 11% increase in print despite the introduction of the EHR and other new systems (ERPs, CRMs, etc.). More paper generally means more devices, and more devices means more risk, resulting in increased security and privacy concerns.

Look inside most healthcare organizations today and even those with a Managed Print Services program (MPS) probably have a very disjointed management responsibility of their inventory. Printers are most often the responsibility of IT, copiers run through supply chain with the manufacturer providing support, and fax machines may even be part of Telecommunications. Those organizations that have an MPS provider probably don’t have all devices managed under that program – what about devices in research or off-site locations, or what if you have an academic medical facility or are part of a university?

These devices do have a couple of things in common that are of concern – they are somehow connected to your network and they hold or process PHI.

This fact and the associated risk require an organization to look at how these devices are being managed and whether the responsibilities for security and privacy are being met. Are they part of your overall security program? Does your third party manage that for you? Do you even know where they all are and what risks are in your fleet today? If multiple organizations manage devices, do they follow consistent security practices?

Not being able to answer these questions is a source of concern and probably means that the risk is real. So how do we resolve this?

We need to take a programmatic approach to print and print security to ensure we are addressing the whole. Let’s lay out some steps to accomplish this.

  • Know your environment – the first thing we must do is identify ALL print devices in our organization. This includes printers, scanners, copiers, thermals, and fax machines, whether they are facility owned, third-party managed, networked or local, or sitting in a storage room. (A crude network-discovery sketch follows this list.)
  • Assess your risk – perform a comprehensive security risk assessment of the entire fleet and develop a remediation plan. This is not a one-time event but rather needs to be part of your overall security plan.
  • Assign singular ownership of assets – either through an internal program or a third-party program, the healthcare organization should fold all print-related devices into a single program for accountability and management.
  • Workflow optimization – you probably have millions of dollars of software in your organization that is the source of the output of these devices. Even more was spent securing the environment these applications are housed in, and accessed from, to make sure the data is secure and privacy is maintained. The data in those systems is at its lowest cost, most efficient from a workflow standpoint, and most secure — yet every time we hit print we multiply the cost, decrease operational efficiency, and increase the risk to that data.
  • Decrease risk – while it is great that we identify all the devices, assess and document risk, and develop a mitigation/remediation plan, the goal should be to put controls in place to stem the proliferation of devices and, ultimately, to begin removing unnecessary devices, thereby eliminating the risk associated with them.
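
As noted in the first item above, discovery can start with a crude network probe. Here is a minimal sketch that checks a subnet for devices listening on TCP port 9100, the common raw/JetDirect print port. The subnet is a placeholder, and a real inventory would also use SNMP, print-server queries, and physical walk-throughs, since many devices are local or unplugged:

    import socket

    def has_open_print_port(host, port=9100, timeout=0.3):
        """True if the host accepts a TCP connection on the raw print port."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0

    # Serial scan of one /24 -- slow but dependency-free; parallelize for real use.
    subnet = "10.0.0."  # placeholder; substitute your own ranges
    printers = [subnet + str(i) for i in range(1, 255)
                if has_open_print_port(subnet + str(i))]
    print(printers)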

The concept of reducing the number of printers from a cost perspective is not new to healthcare. However, many organizations have achieved mixed results, even those that have used an MPS partner. That generally happens because they are focused on the wrong things.

The best way to accomplish a cost-effective print program is to understand what drives the need or want for printers, and that is volume. You don’t need a print device if you don’t need to print. I know it sounds like I am talking about the nirvana of the paperless environment, but I am not. This is simply understanding what is printed unnecessarily, and where, and eliminating it – thereby eliminating the underlying need for the associated device, and with it the inherent security risk as well as the privacy concern of the printed page. Refocusing on volume helps us solve many problems simultaneously.

Putting a program in place that provides this visibility, and using that data to make decisions on device reduction, can significantly reduce your current risk. Couple this with making security and privacy part of your acquisition criteria, and you can make intelligent decisions that ensure you add only the devices you need – and that when you do add a device, it meets your security and privacy requirements. More often than not, the first line of defense in IT is better management of the environment.

The Fight For Patient Health Data Access Is Just Beginning

Posted on July 11, 2017 | Written By Anne Zieger

When some of us fight to give patients more access to their health records, we pitch everyone on the benefits it can offer — and act as though everyone feels the same way.  But as most of us know, in their heart of hearts, many healthcare industry groups aren’t exactly thrilled about sharing their clinical data.

I’ve seen this firsthand, far too many times. As I noted in a previous column, some providers all but refuse to provide me with my health data, and others act like they’re doing me a big favor by deigning to share it. Yet others have put daunting processes in place for collecting your records, or make you wait weeks or months for your data. Unfortunately, the truth, however inconvenient it may be, is that they have reasons to act this way.

Sure, in public, hospital execs argue for sharing data with both patients and other institutions. They all know that this can increase patient engagement and boost population health. But in private, they worry that sharing such data will encourage patients to go to other hospitals at will, and possibly arm their competitors in their battle for market share.

Medical groups have their own concerns. Physicians understand that putting data in patient’s hands can lead to better patient self-management, which can tangibly improve outcomes. That’s pretty important in an era when government and commercial payers are demanding measurably improved outcomes.

Still, though they might not admit it, doctors don’t want to deluge patients with a flood of data that could cause them to worry about inconsequential issues, and they fear that data-equipped patients will challenge their judgment. And can we please admit that some simply don’t like ceding power over their domain?

Given all of this, I wasn’t surprised to read that several groups are working to improve patients’ access to their health data. Nor was it news to me that such groups are struggling (though it was interesting to hear what they’re doing to help).

MedCity News spoke to the cofounder of one such group, Share for Cures, which works to encourage patients to share their health data for medical research. The group also hopes to foster other forms of patient health data sharing.

Cofounder Jennifer King told MCN that patients face a technology barrier to accessing such records. For example, she notes, existing digital health tools may offer limited interoperability with other data sets, and patients may not be sure how to use portals. Her group is working to remove these obstacles, but “it’s still not easy,” King told a reporter.

Meanwhile, she notes, almost every hospital has implemented a customized medical record, which can often block data sharing even when hospitals buy EMRs from the same vendor. And if patients have multiple doctors, at least a few will have EMRs that don’t play well with others, so sharing records between them may not be possible, King said.

To address such data sharing issues, King’s nonprofit has created a platform called SHARE, an acronym for System for Health and Research Data Exchange. SHARE lets users collect and aggregate health and wellness data from multiple sources, including physician EMRs, drug stores, mobile health apps and almost half the hospitals in the U.S.

Not only does SHARE make it easy for patients to access their own data, it’s also simple to share that data with medical research teams. This approach offers researchers an important set of benefits, notably the ability to be sure patients have consented to having their data used, King notes. “One of the ways around [HIPAA] is that patients are the true owners,” she said. “With direct patient authorization…it’s not a HIPAA issue because it’s not the doctor sharing it with someone else. It’s the patient sharing it.”

Unfortunately (and this is me talking again) the platform faces the same challenges as any other data sharing initiative.

In this case, the problem is that, like other interoperability solutions, SHARE can only amass data that providers are actually able to share, and that leaves a lot of them out of the picture. In other words, it can’t do much to solve the underlying problem. Another major issue is that if patients are reluctant to use even something as simplified as a portal, they’re not too likely to use SHARE either.

I’m all in favor of pushing for greater patient data access, for personal as well as professional reasons. And I’m glad to hear that there are groups springing up to address the problem, which is obviously pretty substantial. I suspect, though, that this is just the beginning of the fight for patient data access.

Until someone comes up with a solution that makes it easy and comfortable for providers to share data, while defusing their competitive concerns, it’s just going to be more of the same old, same old. I’m not going to hold my breath waiting for that to happen.

Will Data Aggregation For Precision Medicine Compromise Patient Privacy?

Posted on April 10, 2017 | Written By Anne Zieger

Like anyone else who follows medical research, I’m fascinated by the progress of precision medicine initiatives. I often find myself explaining to relatives that in the (perhaps far distant) future, their doctor may be able to offer treatments customized specifically for them. The prospect is awe-inspiring even for me, someone who’s been researching and writing about health data for decades.

Even so, there are problems with bringing so much personal information together into a giant database, suggests Jennifer Kulynych in an article for OUPblog, which is published by Oxford University Press. In particular, assembling a massive trove of individual medical histories and genomes may have serious privacy implications, she says.

In arguing her point, she makes a sobering observation that rings true for me:

“A growing number of experts, particularly re-identification scientists, believe it simply isn’t possible to de-identify the genomic data and medical information needed for precision medicine. To be useful, such information can’t be modified or stripped of identifiers to the point where there’s no real risk that the data could be linked back to a patient.”

As she points out, norms in the research community make it even more likely that patients could be individually identified. For example, while a doctor might need your permission to test your blood for care, in some states it’s quite legal for a researcher to take possession of blood not needed for that care, she says. Those researchers can then sequence your genome and place that data in a research database, and the patient may never have consented to this, or even know that it happened.

And there are other, perhaps even more troubling ways in which existing laws fail to protect the privacy of patients in researchers’ data stores. For example, current research and medical regulations let review boards waive patient consent, or even allow researchers to call DNA sequences “de-identified” data – a label that rests on the assumption that there is no real re-identification risk, an assumption the re-identification scientists quoted above dispute, she writes.

On top of all of this, the technology already exists to leverage this information for personal identification. For example, genome sequences can potentially be re-identified through comparison to a database of identified genomes. Law enforcement organizations have already used genomic data to predict key aspects of an individual’s face, such as eye color and race.

Then there’s the issue of what happens with EMR data storage. As the author notes, healthcare organizations are increasingly adding genomic data to their stores, and sharing it widely with individuals on their network. While such practices are largely confined to academic research institutions today, this type of data use is growing, and could also expose patients to involuntary identification.

Not everyone is as concerned as Kulynych about these issues. For example, a group of researchers recently concluded that a single patient anonymization algorithm could offer a “standard” level of privacy protection to patients, even when the organizations involved are sharing clinical data. They argue that larger clinical datasets using this approach could protect patient privacy without generalizing or suppressing data in a manner that would undermine its usefulness.
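
The article doesn’t name the researchers’ algorithm, but k-anonymity is the classic way to make “every record hides in a crowd” precise: each combination of quasi-identifiers (ZIP code, age band, and so on) must be shared by at least k records. A minimal sketch with made-up rows:

    from collections import Counter

    def is_k_anonymous(records, quasi_identifiers, k):
        """True if every quasi-identifier combination appears in >= k records."""
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return all(count >= k for count in groups.values())

    rows = [  # made-up, already-generalized records
        {"zip": "146**", "age": "60-69", "dx": "diabetes"},
        {"zip": "146**", "age": "60-69", "dx": "asthma"},
        {"zip": "146**", "age": "60-69", "dx": "flu"},
    ]
    print(is_k_anonymous(rows, ["zip", "age"], k=3))  # True: one group of three

Kulynych’s point is precisely that genomic data resists this kind of grouping: a whole genome is so distinctive that generalizing it enough to hide in a crowd destroys its research value.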

But if nothing else, it’s hard to argue with Kulynych’s central concern: that too few rules have been updated to reflect the realities of big genomic and medical data stores. Clearly, state and federal rules need to address the emerging problems associated with big data and privacy. Otherwise, by the time a major privacy breach occurs, neither patients nor researchers will have any recourse.

Consumers Fear Theft Of Personal Health Information

Posted on February 15, 2017 | Written By Anne Zieger

Probably fueled by constant news about breaches – duh! – consumers continue to worry that their personal health information isn’t safe, according to a new survey.

As the press release for the 2017 Xerox eHealth Survey notes, last year more than one data breach was reported each day. So it’s little wonder that the survey – which was conducted online by Harris Poll in January 2017 among more than 3,000 U.S. adults – found that 44% of Americans are worried about having their PHI stolen.

According to the survey, 76% of respondents believe that it’s more secure to share PHI between providers through a secure electronic channel than to fax paper documents. This belief is certainly a plus for providers. After all, they’re already committed to sharing information as effectively as possible, and it doesn’t hurt to have consumers behind them.

Another positive finding from the study is that Americans also believe better information sharing across providers can help improve patient care. Xerox/Harris found that 87% of respondents believe that wait times to get test results and diagnoses would drop if providers securely shared and accessed patient information from varied providers. Not only that, 87% of consumers also said that they felt that quality of service would improve if information sharing and coordination among different providers was more common.

Looked at one way, these stats offer providers an opportunity. If you’re already spending tens or hundreds of millions of dollars on interoperability, it doesn’t hurt to let consumers know that you’re doing it. For example, hospitals and medical practices can put signs in their lobby spelling out what they’re doing by way of sharing data and coordinating care, have their doctors discuss what information they’re sharing and hand out sheets telling consumers how they can leverage interoperable data. (Some organizations have already taken some of these steps, but I’d argue that virtually any of them could do more.)

On the other hand, if nearly half of consumers are afraid that their PHI is insecure, providers have to do more to reassure them. Though few would understand how your security program works, letting them know how seriously you take the matter is a step forward. Also, it’s good to educate them on what they can do to keep their health information secure, as people tend to be less fearful when they focus on what they can control.

That being said, the truth is that healthcare data security is a mixed bag. According to a study conducted last year by HIMSS, while most organizations conduct IT security risk assessments, many IT execs have only occasional interactions with top-level leaders. Also, many are still planning out their medical device security strategy. Worse, provider security spending is often minimal. HIMSS notes that few organizations spend more than 6% of their IT budgets on data security, and 72% have five or fewer employees allocated to security.

Ultimately, it’s great to see that consumers are getting behind the idea of health data interoperability, and see how it will benefit them. But until health organizations do more to protect PHI, they’re at risk of losing that support overnight.

Healthcare Robots! – #HITsm Chat Topic

Posted on January 31, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

We’re excited to share the topic and questions for this week’s #HITsm chat happening Friday, 2/3 at Noon ET (9 AM PT). This week’s chat will be hosted by Mr RIMP (@MrRimp, Robot-In-My-Pocket), mascot of the first ever #HIMSS17 Innovation Makerspace! (Booth 7785) (with assistance from @wareflo) We’ll be discussing the topic “Healthcare Robots!” and so it seems appropriate to have a robot hosting the chat.

In a first, #HIMSS17 has a #makerspace (Booth 7785) in the HIMSS17 Innovation Zone. It has robots! They are rudimentary, but educational and fun. One of those robots is @MrRIMP, short for Robot-In-My-Pocket. Here is a YouTube interview with @MrRIMP. As you can tell, little Mr. R. has a bit of an attitude. He also wrote the questions below and will moderate tweets about them during the #HITsm tweetchat.

From the recent “How medical robots will change healthcare” (@PeterBNichol), there are three main areas of robotic health:

1. Direct patient care robots: surgical robots (used for performing clinical procedures), exoskeletons (bionic extensions of self, like the Ekso suit), and prosthetics (replacing lost limbs). Over 500 people a day lose a limb in America, with 2 million Americans living with limb loss, according to the CDC.

2. Indirect patient care robots: pharmacy robots (streamlining automation, autonomous robots for inventory control reducing labor costs), delivery robots (providing medical goods throughout a hospital autonomously), and disinfection robots (interacting with people with known infectious diseases such as healthcare-associated infections or HAIs).

3. Home healthcare robots: robotic telepresence solutions (addressing the aging population with robotic assistance).

Before the #HITsm tweetchat I hope you’ll watch Robot & Frank, about a household robot and an increasingly infirm retiree (86% on Rotten Tomatoes; available on YouTube, Amazon, iTunes, Vudu, and Google for $2.99). I’ll also note a subcategory of direct care robots: pediatric therapy robots. Consider, for example, New Friends 2016, the Second International Conference on Social Robots in Therapy and Education. I, Mr. RIMP, have a special interest in this area.

Join us as we discuss Healthcare Robots during the February 3rd #HITsm chat. Here are the questions we’ll discuss:

T1: What is your favorite robot movie? Why? How many years in the future would you guess it will take to achieve similar robots? #HITsm

T2: Robots promise to replace a lot of human labor. Cost-wise, humanity-wise, will this be more good than bad, or more bad than good? #HITsm

T3: Have you played with, or observed any “toy” robots. Impressed? Not impressed? Why? #HITsm

T4: IMO, “someday” normal, everyday people will be able to design and program their own robots. What kind of robot would you design for healthcare? #HITsm

T5: Robots and workflow? Connections? Think about healthcare robots working *together* with healthcare workers. What are potential implications? #HITsm

Bonus: Isn’t @MrRIMP (Robot-In-My-Pocket) the cutest, funniest, little, robot you’ve ever seen? Any suggestions for the next version (V.4) of me? #HITsm

Here’s a look at the upcoming #HITsm chat schedule:
2/10 – Maximizing Your HIMSS17 Experience – Whether Attending Physically or Virtually
Hosted by Steve Sisko (@HITConfGuy and @shimcode)

2/17 – Enough talk, let’s #GSD (Get Stuff Done)
Hosted by Burt Rosen (@burtrosen) from @healthsparq

2/24 – HIMSSanity Recovery Chat
With #HIMSS17 happening the week of this chat, we’ll take the week off from a formal chat. However, we encourage people that attended HIMSS or watched HIMSS remotely to share a “Tweetstorm” that tells a #HIMSS17 story, shares insights about a topic, rants on a topic of interest, or shows gratitude. Plus, it will be fun to test out a new form of tweetstorm Twitter chat. We’ll post more details as we get closer.

We look forward to learning from the #HITsm community! As always let us know if you have ideas for how to make #HITsm better.

If you’re searching for the latest #HITsm chat, you can always find the latest #HITsm chat and schedule of chats here.