
California’s Information Privacy for Connected Devices Law is a Good Start, But Doesn’t Apply to Healthcare

Posted on December 13, 2018 | Written By

The following is a guest blog post by Mike Nelson, Vice President of IoT Security, DigiCert.

As the nation’s most populous state, California often serves as an incubator for national legislative and regulatory policy, and it’s great to see them take a leadership position in IoT cybersecurity. The announcement of California’s ‘IoT Cybersecurity Law’ is a move in the right direction. The new law will require manufacturers of connected devices to produce them with “reasonable” security features.

However, this law specifically excludes healthcare IoT devices. It states that a covered entity, provider of healthcare, business associate, healthcare service plan, contractor, employer, or any other person subject to HIPAA or the Confidentiality of Medical Information Act shall not be subject to this title with respect to any activity regulated by those acts.

While HIPAA has made great strides to help protect the privacy of personal health information, it does very little to protect the many connected medical devices that are in use today. California lawmakers missed an opportunity to drive strong IoT security requirements that protect consumers and the data they want kept confidential.

Additionally, this law will not solve the majority of cybersecurity issues being found in IoT devices. For example, the law requires good password practices, including the elimination of hard-coded passwords. While this is a security best practice and is important for user authentication, it doesn’t cover the many back-end connections that also need to be authenticated, such as over-the-air updates. Asking for “reasonable” security features simply isn’t directional enough. It misses an opportunity to drive requirements around essential cybersecurity practices, like encryption of sensitive data, risk assessments, authenticating all connections to a device, and digitally signing code to ensure integrity.

A general rule of cybersecurity and connectivity is that whenever something becomes connected, it will eventually get hacked. The risks inherent in connected devices are real – especially in healthcare, where in many cases people rely on these devices to sustain life. The risks of connectivity are diverse, including intercepting and manipulating sensitive data, or embedding malware that causes a device to malfunction and harm a patient. These risks can impact not only patients but also the device manufacturers.

St. Jude Medical, now Abbott Laboratories, learned this the hard way. A hacking organization publicized a vulnerability in a cardiac device after taking a short position in the company’s stock. Upon release of this vulnerability, the company’s stock dropped significantly, causing financial and reputational damage to St. Jude. Considering all these risks, and the many others I haven’t mentioned, it becomes clear that simply putting in place good password protections isn’t enough. More direction is needed. While it may sound like I’m advocating for stronger regulation, I’m not. I believe industries do much better when they come together and collaboratively develop best practices that are broadly adopted. Regulators can only do so much. Real solutions require in-depth knowledge of healthcare practices and what the market can bear – something only companies and practitioners can tackle effectively. The private sector needs to do more.

We need to begin looking at security more broadly than just hardcoded passwords. As a healthcare industry, we need to practice robust penetration testing and develop risk assessments for all connected medical devices. We need to make the encryption of sensitive data, both at rest and in transit, standard practice. No medical device should accept an unauthenticated message. No code or package lacking a digital signature that verifies its integrity should be executed on a device. Driving requirements around these types of best practices would have a much greater effect on the security of connected devices than the new California law currently does.
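
To make that last point concrete, here is a minimal sketch in Python of what “no unsigned code runs” can look like on a device: a detached Ed25519 signature over a firmware package is checked against the manufacturer’s public key, and anything that fails verification is rejected. The file names, key handling, and update workflow are illustrative assumptions, not a description of any particular vendor’s implementation.

    # Minimal sketch (illustrative only): verify a detached Ed25519 signature
    # on a firmware image before allowing it to be installed.
    from cryptography.hazmat.primitives.serialization import load_pem_public_key
    from cryptography.exceptions import InvalidSignature

    def firmware_is_trusted(firmware_path, signature_path, manufacturer_pubkey_pem):
        """Return True only if the firmware carries a valid signature from the
        manufacturer's signing key (assumed here to be an Ed25519 key in PEM form)."""
        public_key = load_pem_public_key(manufacturer_pubkey_pem)
        with open(firmware_path, "rb") as f:
            firmware = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        try:
            public_key.verify(signature, firmware)  # raises InvalidSignature if tampered
            return True
        except InvalidSignature:
            return False

    # Usage (hypothetical file names): refuse anything that fails verification.
    # if not firmware_is_trusted("update.bin", "update.sig", MANUFACTURER_PUBKEY_PEM):
    #     raise RuntimeError("Unsigned or tampered firmware rejected")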

Though the IoT Cybersecurity Law is primitive in its protections and lacks the detail needed to require security measures strong enough to move the needle, at least California is trying to do something – absent the development of industry standards by collaborative groups. As the first of its kind at the state level, the effort should be applauded, as California is recognizing the need for manufacturers to address cybersecurity in the manufacturing process for connected devices. Time will tell whether manufacturers will take responsibility and the initiative for security themselves, before further regulation requires them to act.

A HIPAA Life Sentence… and SO Many Lessons

Posted on November 15, 2018 | Written By

Mike Semel is a noted thought leader, speaker, blogger, and best-selling author of HOW TO AVOID HIPAA HEADACHES. He is the President and Chief Security Officer of Semel Consulting, focused on HIPAA and other compliance requirements; cyber security; and Business Continuity planning. Mike is a Certified Business Continuity Professional through the Disaster Recovery Institute, a Certified HIPAA Professional, Certified Security Compliance Specialist, and Certified Health IT Specialist. He has owned or managed technology companies for over 30 years; served as Chief Information Officer (CIO) for a hospital and a K-12 school district; and managed operations at an online backup company.

In 2012 Accretive Health Care was banned from doing business in Minnesota for 2 – 6 years for a HIPAA violation.

In 2018 New York State suspended a nurse’s license for a year for a HIPAA violation.

But, a life sentence?

The New Jersey Attorney General announced a $200,000 HIPAA and consumer fraud penalty against an out-of-business Georgia medical transcription company. In 2016, ATA Consulting LLC d/b/a Best Medical Transcription breached the medical records of over 1,650 people treated by three New Jersey healthcare providers by publicly exposing those records to the Internet. And its customer, Virtua Health, paid a $418,000 settlement for violations of both HIPAA and the New Jersey Consumer Fraud Act.

Tushar Mathur, owner of Best Medical Transcription, agreed to a permanent ban on managing or owning a business in New Jersey.

Wow.

A life sentence for a HIPAA violation.

And the medical clinic paying a $418,000 penalty for the actions of its vendor.

By a state, not the federal government.

What can you learn from this?

1. It’s shocking to see how many servers have been misconfigured, or how much protected data has been stored on web servers, exposing patient records to the Internet. These HIPAA penalties were all for exposing patient records through the Internet.

LESSONS –

  • Have your servers installed by a certified professional using a detailed checklist to ensure that no data is exposed to the Internet.
  • Make sure your organization has enough data breach insurance to cover millions of dollars in penalties; that you live up to all the requirements of your policy; and that you consistently implement the security controls you said you have in place on your insurance application.
  • Make sure your outsourced IT provider has enough Errors & Omissions insurance to cover your penalties.

2. Many doctors and business owners tell me that “the federal government will never get them” or that they are “too small to be of interest” to federal regulators.

LESSONS –

  • Regulators go after small businesses, which doesn’t always make headlines. The Federal Trade Commission forced a 20-employee medical lab to go out of business. The business owner fought the FTC and ultimately won in court, but his business was gone.
  • Don’t ignore the risk that your state Attorney General (who probably wants to be governor) will seek headlines by protecting consumers. The HITECH Act (2009) gave state Attorneys General the authority to enforce HIPAA. Violations also can be tied to consumer protection laws, not just HIPAA.
  • Lawyers are representing patients whose information was released without authorization. Patients have successfully sued doctors for HIPAA violations.
  • Doctors shouldn’t laugh off HIPAA or just complain (INCORRECTLY) that it interferes with patient care. A doctor went to jail for a HIPAA violation.

3. HIPAA is only one regulation with which you must comply.

LESSONS –

  • Don’t think that a ‘We Make HIPAA Easy’ web-based solution is enough to protect your assets from all your regulatory challenges.
  • Don’t think that a self-conducted Security Risk Analysis is a substitute for a professionally-designed HIPAA compliance program that will meet all the federal and state requirements you must follow.
  • Don’t think that an IT Security company doing a vulnerability or penetration test is a substitute for a HIPAA Security Risk Analysis or a robust compliance program.
  • Every state now has data breach laws the state Attorneys General love to enforce. These consumer protection laws protect Personally Identifiable Information (PII) held by medical practices. State laws have different requirements than HIPAA. For example, HIPAA requires that patients be notified no later than 60 days after a data breach. California requires just 15 days.
  • Because of the opioid crisis, many types of medical practices are now offering substance abuse treatment, which requires additional confidentiality measures. So do HIV, mental health, and STD treatments. You need to address all the regulations that apply to you.

4. Don’t blindly trust your vendors.

LESSONS –

  • Signing a Business Associate Agreement (BAA) isn’t evidence that your vendor really complies with HIPAA. According to the NJ Attorney General, Best Medical Transcription signed a BAA with Virtua Health but:
  • Failed to conduct an accurate and thorough risk assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of ePHI it held;
  • Failed to implement security measures sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level to comply with the Security Rule;
  • Failed to implement policies and procedures to protect ePHI from improper alteration or destruction;
  • Failed to notify VMG of the breach of unsecured PHI; and
  • Improperly used and/or disclosed ePHI in contravention of its obligations under its Business Associate Agreement with VMG.

Make sure your vendors understand their HIPAA obligations. Even after five years, my experience is that many Business Associates have failed to keep up with the changes required by the 2013 HIPAA Omnibus Final Rule. Many talk about HIPAA in their sales and marketing but do not comply.

Remember that you are responsible for the actions of your vendors.

WHEN YOU ARE LYING AWAKE TONIGHT, ASK YOURSELF:

  • Are you really sure you can survive an investigation by your state attorney general?
  • Are you really sure your Business Associate vendors have conducted a HIPAA risk analysis; have implemented HIPAA security measures; have implemented HIPAA policies and procedures, are really protecting your PHI, and will notify you if there is a breach?
  • Are you willing to bet $418,000 (what Virtua paid) on it?
  • If you are a Business Associate, what do you think it will feel like if you are banned for life from doing business?

Doctors send patients to specialists all the time. Whether you are a medical provider or a vendor, do you have the trained and certified specialists you need to help with all your regulatory challenges? Does your team need expert help to validate what you and your vendors are doing and to address any gaps?

Don’t risk your assets. Don’t risk a life sentence.

 

 

More Than 3 Million Patient Records Breached During Q2 2018

Posted on August 15, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A new study by data security vendor Protenus has concluded that more than 3 million patient records were breached during the second quarter of 2018, in a sharp swing upward from the previous quarter with no obvious explanation.

The Protenus Breach Barometer study, which drew on both reports to HHS and media disclosures, found that there were 143 data breach incidents between April and June 2018, affecting 3,143,642 patient records. The number of affected records has almost tripled from Q1 of this year, when 1.13 million records were breached.

During this quarter, roughly 30% of privacy violations were by healthcare organizations that had previously reported a data breach. The report suggests this may be because those organizations have not identified existing threats or improved security training for their employees. (It could also be because cyberattackers smell blood in the water.)

Protenus concluded that, among hospital teams, each investigator monitors around 4,000 EHR users and is responsible for an average of 2.5 hospitals and 25 cases. The average case took about 11 days to resolve, which sounds reasonable until you consider how much can happen while systems remain exposed.

With investigators stretched so thin, not only external attackers but also internal threats become harder to manage. The research found that, on average, 9.21 out of every 1,000 healthcare employees breached patient privacy during the second quarter of this year. This is up from 5.08 during Q1 of this year, a rise the study attributes to better detection methods rather than an increase in events.

All told, Protenus said, insiders were responsible for 31% of the total number of reported breaches for this period. Among incidents where details were disclosed, 422,180 records were breached, or 13.4% of total breached patient records during Q2 2018. The top cause of data breaches was hacking, which accounted for 36.62% of disclosed incidents. A total of 16.2% of incidents involved loss or theft of data, with another 16.2% due to unknown causes.

In tackling insider events, the study sorted such incidents into two groups, “insider error” or “insider wrongdoing.” Its definition for insider error included incidents which had no malicious intent or could otherwise be qualified as human error, while it described the theft of information, snooping in patient files and other cases where employees knowingly violated the law as insider wrongdoing.

Protenus found 25 publicly-disclosed incidents of insider error between April and June 2018. Details were available for 14 of them, which affected 343,036 patient records.

Meanwhile, the researchers found 18 incidents involving insider wrongdoing, with 13 events for which data was disclosed. The number of patient records breached as a result of insider wrongdoing climbed substantially over the past two quarters, from 4,597 during Q1 to 70,562 during Q2 of 2018.

As in the first quarter, the largest category of insider-related breaches (71.4%) between April and June 2018 was healthcare employees looking at family members’ health records. Other insider wrongdoing incidents included phishing attacks, insider credential sharing, downloading records for sale and identity theft.

Regulatory Heat: Is Your BAA House in Order?

Posted on August 9, 2018 | Written By

The following is a guest blog post by Greg Waldstreicher, Founder and CEO of PHIflow.

Actions by the Office for Civil Rights (OCR) have clearly demonstrated stricter enforcement of HIPAA rules in recent years, specifically upping the ante on compliance with business associate agreements (BAAs). Much of this activity can be attributed to a grim outlook on security risks: globally, 70% of healthcare organizations have suffered a data breach, and a recent Ponemon Institute report found that the vast majority have experienced multiple security incidents involving protected health information (PHI).

BAAs play an important role in security as the framework by which an organization ensures that any vendor creating, receiving, maintaining or transmitting PHI complies with HIPAA. In recent years, these contracts have come under increased scrutiny amid high-level audits launched by OCR. Mismanagement of BAAs has thus far resulted in penalties ranging from $31,000 for simply not having a BAA in place to upwards of $5.5 million for more serious offenses.

While the stakes are high, healthcare organizations often lack effective oversight strategies for these important patient protection tools. In fact, it’s not uncommon for even the most basic information to elude the executive suite, such as:

  • the number of BAAs that exist across an enterprise
  • where BAAs are located
  • the terms of each BAA

In an industry that has witnessed a significant uptick in security incidents and breaches in recent years, this current state of affairs is less than optimal. In truth, the reach of recent audit activity is still unknown as the healthcare industry awaits full disclosure and recommendations from OCR. One of the latest OCR settlements—$3.5 million levied against Fresenius Medical Care North America—resulted from multiple incidents that occurred in 2012, underscoring the lengthy timeframe associated with finalizing investigations and legal processes.

All told, current trends point to the need for better oversight and management of BAAs. While penalty activity subsided some in recent months as OCR went through internal transitions, industry legal experts expect that investigative momentum will continue to increase in proportion to heightened security risks across the healthcare landscape.

Unfortunately, healthcare organizations face notable roadblocks to getting their BAA house in order. Amid competing priorities, many simply lack the resources for tracking these agreements. Health systems are increasingly multi-faceted, and current trends associated with mergers, acquisitions and consolidations only exacerbate the challenge. The reality is that some large organizations have as many as 10,000 BAAs across the enterprise. Because these agreements are typically spread across multiple departments and facilities and have a multitude of different owners, managing them in a strategic way via manual processes is nearly impossible.

In tandem with the internal resource challenge, the language contained in BAAs has become significantly more complicated due to not only a fluid and evolving regulatory environment, but also the vital role they play in an overall security strategy. While a simple, cookie-cutter approach to these agreements was fitting a decade ago, BAAs are now intensely negotiated between covered entities and business associates and between business associates and sub-business associates, often involving HIPAA attorneys and resulting in requirements that go beyond HIPAA and HITECH. Subsequently, the terms of each BAA across an organization may vary, making efficient and effective management extremely difficult.

The good news is that there is a relatively simple solution—automated management of BAAs. The right technological framework can lay the foundation for timely access to all contracts across an enterprise, improving compliance and ensuring readiness for audits or breach response. Once consolidated, artificial intelligence can then be applied to BAAs to draw actionable insights in near real-time, informing key personnel of the key terms across all agreements.

The healthcare industry at large has drawn heavily on the promise of automation and data analytics in recent years to power more efficient and effective processes. Management of BAAs is no different and is an area ripe for improvement. Today’s healthcare executives need to consider the high stakes associated with ineffective management of BAAs and take action to shore up strategies amid greater security risks and a challenging regulatory environment.

About Greg Waldstreicher
Greg Waldstreicher is the founder and CEO of PHIflow, and the cofounder and former CEO of DoseSpot, where he worked at the forefront of the electronic prescribing (e-Prescribing) market for nine years. Under Greg’s leadership, DoseSpot licensed its SaaS e-Prescribing solutions to 175 healthcare software companies across the medical, dental, hospice and digital health markets. Greg received a B.S. from the University of Maryland College Park in Accounting and an M.S. from Northeastern University in Technological Entrepreneurship.

HIPAA Security Infographic

Posted on August 6, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

There are a lot of nuances to HIPAA. Hopefully, you’ve addressed them as part of your security risk analysis and any mitigation work that’s required as part of that analysis. Unfortunately, even an organization that does a solid HIPAA security risk analysis often doesn’t communicate what was done in that analysis to the rest of the organization.

With this in mind, I found this HIPAA security infographic by eFax to be valuable for those who aren’t deep in the nuances of HIPAA but want a quick overview of some common HIPAA issues they should know about.

Embarrassment, Career Suicide, or Jail

Posted on July 26, 2018 | Written By

Mike Semel is a noted thought leader, speaker, blogger, and best-selling author of HOW TO AVOID HIPAA HEADACHES. He is the President and Chief Security Officer of Semel Consulting, focused on HIPAA and other compliance requirements; cyber security; and Business Continuity planning. Mike is a Certified Business Continuity Professional through the Disaster Recovery Institute, a Certified HIPAA Professional, Certified Security Compliance Specialist, and Certified Health IT Specialist. He has owned or managed technology companies for over 30 years; served as Chief Information Officer (CIO) for a hospital and a K-12 school district; and managed operations at an online backup company.

What You Can Learn from the Russian Army, the US Navy, and a Suspended Nurse

The General Counsel at one of our clients is a former district attorney who prosecuted identity theft cases. When I told him we work with people who think identity theft is a victimless crime, he got very angry and rattled off a list of cases he had tried that caused lasting damage to the victims. Cybercrimes and compliance violations are not victimless.

Identity theft victims have suffered threats of violence, financial ruin, threats of arrest, effects of business interruptions, damaged careers, and emotional and physical stress.  Some considered suicide.

Most data breaches are malicious, but some who committed bad acts did not know they were breaking laws. They thought their actions were just ‘mischief’, or mistakenly thought what they were doing was OK, but found out the hard way that they had committed crimes. Their careers were killed and some faced criminal charges. Some blamed their training, which may have been incomplete, but ignorance of the law is no excuse.

SPEAR-PHISHING by the RUSSIAN ARMY

Twelve members of the GRU, the Russian military intelligence service, were indicted by the United States for meddling with our elections, by using spear-phishing techniques that were remarkably effective. Those who were targeted suffered public shame and career damage.

Phishing is when hackers send out broadly-targeted e-mails, seemingly from banks, fax services, and businesses, trying to sucker many people into clicking on a link and sharing their personal data, or into having malicious software silently installed on their computers.

Spear-phishing is when a personally-targeted message is sent just to you, seemingly from a colleague or vendor – using names you recognize – asking you to send sensitive information or to click on a link that will install malicious software. These messages can be very tough to spot, because the hackers make you think that this is a personal message from someone you know. One popular method is to send the message from an e-mail address that is one or two letters different from a real address. Your eyes play tricks and you miss the slight difference in the address.
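
As a rough illustration of how that lookalike trick can be caught automatically, here is a minimal sketch in Python that flags sender domains which are close to, but not exactly, a domain you already trust. The trusted-domain list, the similarity threshold, and the function name are illustrative assumptions; real mail filters do considerably more than this.

    # Minimal sketch (illustrative only): flag sender domains that nearly match
    # a trusted domain, e.g. "examp1ehealth.org" impersonating "examplehealth.org".
    from difflib import SequenceMatcher

    TRUSTED_DOMAINS = {"examplehealth.org", "examplevendor.com"}  # hypothetical list

    def looks_like_spoof(sender_address, threshold=0.85):
        """Return True if the sender's domain is suspiciously similar to a
        trusted domain without being an exact match."""
        domain = sender_address.rsplit("@", 1)[-1].lower()
        if domain in TRUSTED_DOMAINS:
            return False  # exact match: not a lookalike
        for trusted in TRUSTED_DOMAINS:
            if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
                return True  # close but not identical: treat as suspicious
        return False

    # looks_like_spoof("ceo@examp1ehealth.org")  -> True  (hold for manual review)
    # looks_like_spoof("ceo@examplehealth.org")  -> False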

Spear-phishing resulted in the Russians allegedly getting the logins and passwords of Democratic and Republican party officials, which they used to get access to e-mails and other sensitive data.

Another personally targeted attack resulted in a company’s HR staff sending its W-2 tax details, including all employee Social Security Numbers, at the request of their CEO, who actually was a hacker using a very similar e-mail address to the CEO at the targeted company. Employees filed their tax returns, only to find out the hackers had already filed phony tax returns and gotten refunds, using their names and Social Security Numbers. Now these employees are on special lists of victims, delaying their future tax refunds; making it more difficult to get loans and maintain their credit ratings; and creating real stress and anxiety.

Spear-phishing has been used successfully by hackers to get CFOs to transfer money to a hacker’s bank account, at the supposed request of their company’s CEO. These scams are often discovered far too late, only after a CFO casually mentions to the CEO that the $500,000 transfer was made as requested and sees the look of panic on the CEO’s face.

What You Should Do

  • Individuals: Beware of every e-mail asking you to provide personal information, click on a link, transfer money, or send sensitive information. Call or meet face-to-face with the person requesting the information, to ensure it is legitimate.
  • Employers: Use a phishing training vendor to train your employees to recognize and report phishing and spear-phishing attempts. Use spam filters to block messages from known hackers. Implement policies to slow down the transfer of sensitive data, by requiring a phone or in-person verification any time someone in your organization receives a request for sensitive data, or a money transfer. While inconvenient, a delay is much better than discovering the request was fraudulent.

STEALING DATA – US NAVY SECRETS, and a SUSPENDED NURSING LICENSE

A former employee of a US Navy contractor was found guilty in federal court of stealing secret information simply by using a company computer to create a personal Dropbox account and transferring thousands of company documents. Jared Dylan Sparks is awaiting sentencing on six convictions that can each bring 10 years in federal prison, after he stole trade secrets from his then-employer while seeking employment at another company.

In another case, the New York State Department of Health suspended the license of a nurse who took 3,000 patient records from a previous employer to her new job.

According to healthitsecurity.com, “the list included the patients’ names, addresses, dates of birth, and diagnoses. Martha Smith-Lightfoot asked for the list to ensure continuity of care for the patients. However, she did not receive the permission of URMC or the patients to give the information to her new employer.”

Smith-Lightfoot agreed to a one-year suspension, one year stayed suspension, and three years’ probation. She can’t work as a nurse for a year. What do you think her career chances will be, after her suspension, any time someone verifies her license status and sees why she was suspended?

What You Should Do

  • Individuals: Understand the requirements of your license or certification, and the laws that protect data. Licensing requirements for privacy and confidentiality pre-date HIPAA. While your organization may face a HIPAA penalty, you may face a damaged or destroyed career, as well as jail time.
  • Employers: Educate your workforce (EVERYONE, including employees, volunteers, contractors, vendors, etc.) about keeping patient, employment, and sensitive business information secure and confidential. Have everyone sign confidentiality agreements. You must be willing to enforce your policies evenly. Terminating a long-term employee who breaks your rules may seem harsh, but it is necessary if you want to avoid corporate theft, compliance violations, and the wrongful termination lawsuits that can follow if you fire someone after letting another person get away with a policy violation.

We have worked with clients whose current and former workforce members used cloud-sharing services, like Dropbox, Google Drive, and Microsoft OneDrive. By the time we discovered that these tools were installed on their network, it was often too late. Data was already out the door, and no one knew what was taken. Implement Data Loss Prevention (DLP) security software that will automatically block critical data from being transferred to e-mail, cloud services, or portable thumb drives. Those who need to move data can be exempted from blocking, but you should protect your organization against everyone else.

People get hurt by data theft and violating regulations. Protect yourself, your patients, and your organization.

Health IT Leaders Fear Insider Security Threats More Than Cyberattacks

Posted on June 8, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A recently-published survey suggests that while most health IT security leaders feel confident they can handle external attacks, they worry about insider threats.

Cybersecurity vendor Imperva spoke with 102 health IT professionals at the recent HIMSS show to find out what their most pressing security concerns were and how prepared they were to address them.

The survey found that 73% of organizations had a senior information security leader such as a CISO in place. Another 14% were hoping to hire one within the next 12 months. Only 14% said they didn’t have a senior infosec pro in place and weren’t looking to hire.

Given how many organizations have or plan to have a security professional in place, it’s not surprising to read that 93% of respondents were either “very concerned” or “concerned” about a cyberattack affecting their organization. The types of cyberattacks that concerned them most included ransomware (32%), insider threats (25%), compromised applications (19%) and DDoS attacks (13%). (Eleven percent of responses fell into the “other” category.)

Despite their concerns, however, the tech pros felt they were prepared for most of these threats, with 52% saying they were “very confident” or had “above average” confidence they could handle any attack, along with 32% stating that their defenses were “adequate.” Just 9% said that their cybersecurity approach needed work, followed by 6% reporting that their defenses needed to be rebuilt.

Thirty-eight percent of the health IT pros said they’d been hit with a cyberattack during the past year, with another 4% reporting having been attacked more than a year ago.

Given the prevalence of cyberthreats, three-quarters of respondents said they had a cybersecurity incident response plan in place, with another 12% saying they planned to develop one during the next 12 months. Only 14% had no plan and no intention of creating one.

When it came to insider threats, on the other hand, respondents seemed to be warier and less prepared. They were most worried about careless users (51%), compromised users (25%) and malicious users (24%).

Their concerns seem to be compounded by a sense that insider threats can be hard to detect. Catching insiders was difficult for a number of reasons, including having a large number of employees, contractors and business partners with access to their network (24%), more company assets on the network or in the cloud than previously (24%), lack of staff to analyze permissions data on employee access (25%) and a lack of tools to monitor insider activities (27%).

The respondents said the most time-consuming tasks involved in investigating/responding to insider threats included collecting information from diverse security tools (32%), followed by tuning security tools (26%), forensics or incident analysis (24%) and managing too many security alerts (17%).

“Shadow” Devices Expose Networks To New Threats

Posted on June 4, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A new report by security vendor Infoblox suggests that threats posed by “shadow” personal devices connected to healthcare networks are getting worse.

The study, which looks at healthcare organizations in the US, UK, Germany, and UAE, notes that the average organization has thousands of personal devices connected to its enterprise network, including personal laptops, Kindles and mobile phones.

Employees from the US and the UK report using personal devices connected to their enterprise network for multiple activities, including social media use (39%), downloading apps (24%), games (13%) and films (7%), the report says.

It would be bad enough if these pastimes only consumed network resources and time, but the problem goes far beyond that. Use of these shadow devices can open healthcare networks up to nasty attacks. For example, social media is increasingly a vector of malware infection, with bad actors launching attacks that successfully urge users to download unfamiliar files.

Health IT directors responding to the study also said there were a significant number of non-business IoT devices connected to their networks, including fitness trackers (49%), digital assistants like Amazon Alexa (47%), smart TVs (46%), smart kitchen devices such as connected kettles or microwaves (33%) and game consoles such as the Xbox or PlayStation (30%).

In many cases, exploits can take total control of these devices, with serious potential consequences. For example, one exploit can turn a Samsung Smart TV into a live microphone, and other smart TVs could be used to steal data and install unwanted apps.

Of course, IT directors aren’t standing around ignoring these threats; they have developed policies for dealing with them. But the report argues that their security policies for connected devices aren’t as effective as they think. For example, while 88% of the IT leaders surveyed said their security policy was either effective or very effective, in many cases employees didn’t even know it was in effect.

In addition, 85% of healthcare organizations have also increased their cybersecurity spending over the past year, and 12% of organizations have increased it by over 50%. Most HIT leaders appear to be focused on traditional solutions, including antivirus software (60%) and cybersecurity investments (57%). In addition, more than half of US healthcare IT professionals said their company invests in encryption software.

Also, about one-third of healthcare IT professionals said their companies are investing in employee education (35%), email security solutions and threat intelligence (30%). One in five are investing in biometric solutions.

Ultimately, what this report makes clear is that health IT organizations need to reduce the number of unauthorized personal devices connected to their network. Nearly any other strategy just puts a band-aid on a gaping wound.

Alexa Voice Assistant Centerpiece Of Amazon Health Effort

Posted on June 1, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

I don’t know about you, but until recently I had thought of the Amazon Echo as something of a toy. From what I saw, it seemed too cute, too gimmicky and definitely too expensive for my taste. Then I had a chance to try out the Echo my mother kept in her kitchen.

It’s almost embarrassing to say how quickly I was hooked. I didn’t even use many of Alexa’s capabilities. All I had to do was command her to play some music, answer some questions and do a search on the Amazon.com site and I was convinced I needed to have one. Its $99 price suddenly seemed like a bargain.

Of course, being a health IT geek I immediately wondered how the Alexa voice assistant might play a part in applications like telemedicine, but I was spending too much time playing “Name That Song” (I’m an 80s champ) to think things through.

But I had the right instincts. It’s become increasingly clear that Amazon sees Alexa as a key channel for reaching healthcare decision-makers.

According to a story appearing on the CNBC website, Amazon has built a 12-person team within the Alexa voice-assisted division called “health & wellness” whose focus is to make Alexa more useful to healthcare patients and providers. Its first targets include diabetes management, care for mothers and infants and aging, according to people who spoke anonymously with CNBC.

Of course, this effort would involve working through HIPAA rules, but it’s hard to imagine that a company like Amazon couldn’t buy and/or cultivate that expertise.

In the piece, writers Eugene Kim and Christina Farr argue that the mere existence of the health & wellness group is a clear sign that Amazon plans to bring Alexa to healthcare. As long as the Echo can share and upload data in a secure, HIPAA-compliant fashion, the possibilities are almost endless. In addition to sharing data with patients and clinicians, this would make it possible to integrate the data with secure third-party apps.

Of course, a 12-person unit is microscopic in size within a company like Amazon, and from that standpoint, the group might seem like a one-off experiment. On the other hand, its work seems more important when you consider the steps Amazon has already taken in the healthcare space.

The most conspicuous move Amazon has made in healthcare came in early 2018, when it announced a joint initiative with Berkshire Hathaway and J.P. Morgan focused on improving healthcare services. To date, the partnership hasn’t said much about its plans, but it’s hard to deny that something huge could emerge from bringing together players of this size.

In another, less conspicuous move, Amazon took a step toward bringing Alexa into the diabetes care market. In the summer of 2017, working with Merck, Amazon offered a prize to developers building Alexa “skills” which could help people with diabetes manage all aspects of their care. One might argue that this kind of project could be more important than something big and splashy.

It’s worth noting at this point that even a monster like Google still hasn’t made bold moves in healthcare (though it does have extraordinarily ambitious plans). Amazon may not find it easy to compete. Still, it will certainly do some interesting things, and I’m eager to see them play out. In fact, I’m on the edge of my seat – aren’t you?

Why You Shouldn’t Take Calculated Risks with Security

Posted on May 9, 2018 | Written By

The following is a guest blog post by Erin Gilmer (@GilmerHealthLaw).

Calculated risks are often lauded in innovation.  However, with increasing security breaches in the tech industry, it is time to reassess the calculated risks companies take in healthcare.

Time and again, I have advised technology companies and medical practices to invest in security, and yet I am often met with resistance; a culture of calculated risk prevails. To these companies and practices, the risk may make sense in the short term. Resources are often limited, so they believe they needn’t spend the time and money on security. However, the notion that a company or a practice can take this chance is ill advised.

As a recent study conducted by HIMSS (and reviewed by Anne Zieger here) warns, “significant security incidents are projected to continue to grow in number, complexity and impact.” Thus, in taking the calculated risk not to invest in security, companies and practices are creating greater risk in the long run, one that comes with severe consequences.

As we have seen outside of healthcare, even “simple” breaches of usernames and passwords, such as the one that hit Under Armour’s MyFitnessPal app, become important examples of the impact a security breach can have. While healthcare companies typically think of this in terms of HIPAA compliance and oversight by the Office for Civil Rights (OCR), the consequences reach far wider. Beyond the fines or even jail time that the OCR can impose, what these breaches show us is how easy it is for the public to lose trust in an entity. For a technology company, this means losing valuation, which could signal a death knell for a startup. For a practice, this may mean losing patients. For any entity, it will likely result in substantial legal fees.

Why take the risk not to invest in security? A company may think it is saving time and money up front and that the likelihood of a breach or security incident is low. But in the long run, the risk is too great – no company wants to end up with its name splashed across the headlines, spending more money on legal fees, scrambling to notify those whose information has been breached, and rebuilding lost trust. The short-term gain of saving resources is not worth this risk.

The best thing a company or practice can do to get started is to run a detailed risk assessment. This is already required under HIPAA but is not always made a priority. As the HIMSS report also discussed, there is no one standard for risk assessment, and the OCR is often flexible, knowing entities may be different sizes and have different resources. While encryption standards and network security should remain a high priority with constant monitoring, there are a few standard aspects of risk assessment (a simple sketch of how these items might be recorded follows the list below), including:

  • Identifying information (in either physical or electronic format) that may be at risk including where it is and whether the entity created, received, and/or is storing it;
  • Categorizing the risk of each type of information in terms of high, medium, or low risk and the impact a breach would have on this information;
  • Identifying who has access to the information;
  • Developing backup systems in case information is lost, unavailable, or stolen; and
  • Assessing incident response plans.
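
To illustrate how the items above might be captured, here is a minimal sketch of a risk-register entry in Python. The field names and the example entry are illustrative assumptions about how one practice might record its assessment; it is not a substitute for a formal HIPAA risk analysis.

    # Minimal sketch (illustrative only): one way to record the risk-assessment
    # items listed above for each piece of information an organization holds.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class RiskRegisterEntry:
        asset: str                  # the information at risk, physical or electronic
        location: str               # where it lives (server, cloud service, paper files)
        handling: List[str]         # created / received / stored / transmitted
        risk_level: str             # "high", "medium", or "low"
        breach_impact: str          # consequence if this information is exposed
        who_has_access: List[str]   # roles, vendors, or systems with access
        backup_plan: str            # how the data is recovered if lost or stolen
        incident_response: str      # which response plan covers this asset
        notes: str = ""

    # Hypothetical example entry:
    transcription_files = RiskRegisterEntry(
        asset="Outsourced transcription files (ePHI)",
        location="Vendor SFTP server",
        handling=["received", "stored", "transmitted"],
        risk_level="high",
        breach_impact="Exposure of diagnoses and demographics for active patients",
        who_has_access=["transcription vendor", "records staff"],
        backup_plan="Nightly encrypted backup retained for 90 days",
        incident_response="Breach response plan, section 4",
    )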

Additionally, it is important to ensure proper training of all staff members on HIPAA policies and procedures including roles and responsibilities, which should be detailed and kept up to date in the office.

This is merely a start and should not be the end of the security measures companies and practices take to ensure they do not become the next use case. When discussing a recent $3.5 million settlement, OCR Director Roger Severino emphasized that “there is no substitute for an enterprise-wide risk analysis for a covered entity.” Further, he stressed that “Covered entities must take a thorough look at their internal policies and procedures to ensure they are protecting their patients’ health information in accordance with the law.”

Though this may seem rudimentary, healthcare companies and medical practices are still not following simple steps to address security and are taking the calculated risk not to – which will likely be at their own peril.

About Erin Gilmer
Erin Gilmer is a health law and policy attorney and patient advocate. She writes about a range of issues on different forums including technology, disability, social justice, law, and social determinants of health. She can be found on Twitter @GilmerHealthLaw or on her blog at www.healthasahumanright.wordpress.com.