
New INFRAM Model Creates Healthcare Infrastructure Benchmarks

Posted on November 14, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

During the frenzy that was healthcare organizations rushing to implement EHRs and chase government money, it was amazing to see so many other projects get left behind. One of the biggest areas that got left behind was investments in IT infrastructure. All the budget was going to the EHR and so the infrastructure budgets often got cut. There were some exceptions where upgrades to infrastructure were needed to make the EHR work, but most organizations I know chose to limp along with their current infrastructure and used that money to pay for the EHR.

Given this background, I was quite intrigued by the recent announcement of HIMSS Analytics’ INFRAM (Infrastructure Adoption Model). This new model focuses on a healthcare organization’s infrastructure and whether it’s stable, manageable, and extensible. I like this idea since it’s part of the practical innovation we talk about in our IT Dev Ops category at the EXPO.health Conference. What we’ve found is that many healthcare organizations are looking for infrastructure innovations, and the benefits are great.

The INFRAM model has 5 main focus areas:

  • Mobility
  • Security
  • Collaboration
  • Transport
  • Data Center

No doubt these are all areas of concern for any healthcare CIO. I do wonder, though, whether having all 5 of these in the same model is really the best choice. A healthcare organization might be at a level 6 for security but only at a level 3 for mobility. Maybe that’s just fine for that organization. At the core of this question is whether all of the capabilities of stage 7 are universally needed by every healthcare organization.

I’m not sure of the answer to this, but I think a case can be made that some organizations shouldn’t spend their limited resources to reach stage 7 of the INFRAM benchmark (or even stage 5 for some organizations). If a healthcare organization makes that a priority, it will probably force some purchases the organization doesn’t really need. That’s not a great model. If each of the 5 focus areas above had its own adoption model, it would avoid some of these issues.
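
To make that concrete, here’s a rough sketch of what I mean: track a separate maturity level for each focus area and compare it against the organization’s own targets rather than a single composite stage. This is purely my own illustration, not anything from HIMSS Analytics, and every name and number in it is hypothetical.

```python
# Hypothetical illustration only: treating each INFRAM focus area as its own
# maturity score instead of rolling everything into one composite stage.
INFRAM_FOCUS_AREAS = ["mobility", "security", "collaboration", "transport", "data_center"]

def gap_report(current_levels: dict, target_levels: dict) -> dict:
    """Return how far each focus area sits from the organization's own target level."""
    return {
        area: target_levels[area] - current_levels.get(area, 0)
        for area in INFRAM_FOCUS_AREAS
    }

# Example: strong on security, deliberately modest ambitions for mobility.
current = {"mobility": 3, "security": 6, "collaboration": 4, "transport": 5, "data_center": 4}
targets = {"mobility": 4, "security": 7, "collaboration": 5, "transport": 5, "data_center": 6}

for area, gap in gap_report(current, targets).items():
    status = "on target" if gap <= 0 else f"{gap} level(s) to go"
    print(f"{area:13}: {status}")
```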

Much like the EMRAM model, the INFRAM model is organized into stages 0 through 7, as follows:

STAGE 7
Adaptive And Flexible Network Control With Software Defined Networking; Home-Based Tele-Monitoring; Internet/TV On Demand

STAGE 6
Software Defined Network Automated Validation Of Experience; On-Premise Enterprise/Hybrid Cloud Application And Infrastructure Automation

STAGE 5
Video On Mobile Devices; Location-Based Messaging; Firewall With Advanced Malware Protection; Real-Time Scanning Of Hyperlinks In Email Messages

STAGE 4
Multiparty Video Capabilities; Wireless Coverage Throughout Most Premises; Active/Active High Availability; Remote Access VPN

STAGE 3
Advanced Intrusion Prevention System; Rack/Tower/Blade Server-Based Compute Architecture; End-To-End QoS; Defined Public And Private Cloud Strategy

STAGE 2
Intrusion Detection/Prevention; Informal Security Policy; Disparate Systems Centrally Managed By Multiple Network Management Systems

STAGE 1
Static Network Configurations; Fixed Switch Platform; Active/Standby Failover; LWAP-Only Single Wireless Controller; Ad-Hoc Local Storage Networking; No Data Center Automation

STAGE 0
No VPN, Intrusion Detection/Prevention, Security Policy, Data Center Or Compute Architecture

As this new model was announced, I had a chance to talk with Marlon Harvey, Industry Solutions Group Healthcare Architect at Cisco, about the INFRAM model. It was interesting to hear the genesis of the model starting first as an infrastructure maturity model at Cisco and then evolving into the INFRAM model described above. Marlon shared that there had been about 21-24 assessments and 35 organizations involved in developing this maturity model. So, the model is still new, but has had some real world testing by organizations.

I do have some concern about the deep involvement from vendor companies in this model. On the one hand, they have a ton of expertise and experience in what’s out there and what’s possible. On the other hand, they’re definitely interested in pushing out more infrastructure sales. No doubt, HIMSS Analytics is in a challenging position to balance all of this.

That said, a healthcare CIO doesn’t have to be beholden to any model. They can use the model where it applies and leave it behind where it doesn’t. Sure, I love having models like INFRAM and EMRAM to create a goal and a framework for a healthcare organization. There’s real value in having goals and associated recognition as a way to bring a healthcare IT organization together. Plus, benchmarks like these are also beneficial to a CIO trying to convince their board to spend more money on needed infrastructure. So, there’s no doubt some value in good benchmarking and recognition for high achievement. I’ll be interested to see, as more CIOs dive into the details, whether they find that INFRAM is focused on the things they really need to move their organization forward from an infrastructure perspective.

Patient Billing And Collections Process Needs A Tune-Up

Posted on October 1, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A new study from a patient payments vendor suggests that many healthcare organizations haven’t optimized their patient billing and collections process, a vulnerability which has persisted despite their efforts to crack the problem.

The survey found that while the entire billing and collections process was flawed, respondents said that collecting patient payments was the toughest problem, followed by the need to deploy better tools and technologies.

Another issue was the nature of their collections efforts. Sixty percent of responding organizations use collections agencies, an approach which can establish an adversarial relationship between patient and provider and perhaps drive consumers elsewhere.

Yet another concern was long delays in issuing bills to patients. The survey found that 65% of organizations average more than 60 days to collect patient payments, and 40% waited on payments for more than 90 days.

These results align with other studies of patient payments, all of which echo the notion that the patient collection process is far from what it should be.

For example, a study by payment services vendor InstaMed found that more than 90% of consumers would like to know what their payment responsibility is prior to a provider visit. Worse, very few consumers even know what their deductible, co-insurance and out-of-pocket maximums are, making it more likely that they will be hit with a bill they can’t afford.

As with the Cedar study, InstaMed’s research found that providers are waiting a long time to collect patient payments, with three-quarters of organizations waiting a month or more to close out patient balances.

Not only that, investments in revenue cycle management technology aren’t necessarily enough to kickstart patient payment volumes. A survey done last year by the Healthcare Financial Management Association and vendor Navigant found that while three-quarters of hospitals said that their RCM technology budget was increasing, they weren’t necessarily getting the ROI they’d hoped to see.

According to the survey, 77% of hospitals with fewer than 100 beds and 78% of hospitals with 100 to 500 beds planned to increase their RCM spending. Their areas of investment included business intelligence analytics, EHR-enabled workflow or reporting, revenue integrity, coding and physician/clinician documentation options.

Still, process improvements seem to have had a bigger payoff. These hospitals are placing a lot of faith in revenue integrity programs, with 22% saying that revenue integrity was a top RCM focus area for this year. Those that had already put such a program in place said it offered significant benefits, including increased net collections (68%), greater charge capture (61%) and reduced compliance risks (61%).

As I see it, the key takeaways here are that making sure patients know what to expect financially and putting programs in place to improve internal processes can have a big impact on patient payments. Still, with consumers financing a lot of their care these days, getting their dollars in the door should continue to be an issue. After all, you can’t get blood from a stone.

Can Providers Survive If They Don’t Get Population Health Management Right?

Posted on August 27, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Most providers know that they won’t succeed with population health management unless they get some traction in a few important areas — and that if they don’t, they could face disaster as their share of value-based payment grows. The thing is, getting PHM right is proving to be a mind-boggling problem for many.

Let’s start with some numbers which give us at least one perspective on the situation.

According to a survey by Health Leaders Media, 87% of respondents said that improving their population health management chops was very important. Though the article summarizing the study doesn’t say this explicitly, we all know that they have to get smart about PHM if they want to have a prayer of prospering under value-based reimbursement.

However, it seems that the respondents aren’t making nearly as much PHM progress as they’d like. For example, just 38% of respondents told Health Leaders that they attributed 25% or more of their organization’s net revenue to risk-based pop health management activities, a share which has fallen two percent from last year’s results.

More than half (51%) said that their top barrier to successfully deploying or expanding pop health programs was up-front funding for care management, IT and infrastructure. They also said that engaging patients in their own care (45%) and getting meaningful data into providers’ hands (33%) weren’t proving to be easy tasks.

At this point it’s time for some discussion.

Obviously, providers grapple with competing priorities every time they try something new, but the internal conflicts are especially clear in this case.

On the one hand, it takes smart care management to make value-based contracts feasible. That could call for a time-consuming and expensive redesign of workflow and processes, patient education and outreach, hiring case managers and more.

Meanwhile, no PHM effort will blossom without the right IT support, and that could mean making some substantial investments, including custom-developed or third-party PHM software, integrating systems into a central data repository, sophisticated data analytics and a whole lot more.

Putting all of this in place is a huge challenge. Usually, providers lay the groundwork for a next-gen strategy in advance, then put infrastructure, people and processes into place over time. But that’s a little tough in this case. We’re talking about a huge problem here!

I get it that vendors began offering off-the-shelf PHM systems or add-on modules years ago, that one can hire consultants to change up workflow and that new staff should be on-board and trained by now. And obviously, no one can say that the advent of value-based care snuck up on them completely unannounced. (In fact, it’s gotten more attention than virtually any other healthcare issue I’ve tracked.) Shouldn’t that have done the trick?

Well, yes and no. Yes, in that in many cases, any decently-run organization will adapt if it sees a trend coming years in advance. No, in that the move to value-based payment is such a big shift that it could be decades before everyone can play effectively.

When you think about it, there are few things more disruptive to an organization than changing not just how much it’s paid but when and how, along with what it has to do in return. Yes, I too am sick of hearing tech startups beat that term to death, but I think it applies in a fairly material sense this time around.

As readers will probably agree, health IT can certainly do something to ease the transition to value-based care. But HIT leaders won’t get the chance if their organization underestimates the scope of the overall problem.

More Than 3 Million Patient Records Breached During Q2 2018

Posted on August 15, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A new study by data security vendor Protenus has concluded that more than 3 million patient records were breached during the second quarter of 2018, in a sharp swing upward from the previous quarter with no obvious explanation.

The Protenus Breach Barometer study, which drew on both reports to HHS and media disclosures, found that there were 143 data breach incidents between April and June 2018, affecting 3,143,642 patient records. The number of affected records has almost tripled from Q1 of this year, when 1.13 million records were breached.

During this quarter, roughly 30% of privacy violations were by healthcare organizations that had previously reported a data breach. The report suggests this may be because those organizations have not identified existing threats or improved security training for employees. (It could also be because cyberattackers smell blood in the water.)

Protenus concluded that among hospital teams, each investigator monitors around 4,000 EHR users and is responsible for an average of 2.5 hospitals and 25 cases. The average case took about 11 days to resolve, which sounds reasonable until you consider how much can happen while systems remain exposed.

With investigators being stretched so thin, not only external attackers but also internal threats become harder to manage. The research found that on average, 9.21 per 1,000 healthcare employees breached patient privacy during the second quarter of this year. This is up from 5.08 employee threats found during Q1 of this year, which the study attributes to better detection methods rather than an increase in events.

All told, Protenus said, insiders were responsible for 31% of the total number of reported breaches for this period. Among incidents where details were disclosed, 422,180 records were breached, or 13.4% of total breached patient records during Q2 2018. The top cause of data breaches was hacking, which accounted for 36.62% of disclosed incidents. A total of 16.2% of incidents involved loss or theft of data, with another 16.2% due to unknown causes.
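
For anyone who wants to check the arithmetic behind those percentages, here’s a quick back-of-the-envelope pass over the figures quoted above. The numbers come straight from the report as cited; the script is just my own sanity check.

```python
# Back-of-the-envelope check on the Q2 2018 Protenus figures quoted above.
total_records_q2 = 3_143_642   # patient records breached in Q2 2018
total_records_q1 = 1_130_000   # roughly 1.13 million breached in Q1 2018
insider_records  = 422_180     # records tied to disclosed insider incidents
total_incidents  = 143         # breach incidents reported April-June 2018

print(f"Q2 vs. Q1 records: {total_records_q2 / total_records_q1:.2f}x")        # ~2.78x, i.e. "almost tripled"
print(f"Insider share of records: {insider_records / total_records_q2:.1%}")   # ~13.4%
print(f"Insider share of incidents: ~{round(0.31 * total_incidents)} of {total_incidents}")  # 31% of 143
```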

In tackling insider events, the study sorted such incidents into two groups, “insider error” or “insider wrongdoing.” Its definition for insider error included incidents which had no malicious intent or could otherwise be qualified as human error, while it described the theft of information, snooping in patient files and other cases where employees knowingly violated the law as insider wrongdoing.

Protenus found 25 publicly-disclosed incidents of insider error between April and June 2018. The 14 incidents for which details were disclosed affected 343,036 patient records.

Meanwhile, the researchers found 18 incidents involving insider wrongdoing, with 13 events for which data was disclosed. The number of patient records breached as a result of insider wrongdoing climbed substantially over the past two quarters, from 4,597 during Q1 to 70,562 during Q2 of 2018.

As in the first quarter, the largest category of insider-related breaches (71.4%) between April and June 2018 was healthcare employees taking a look at family members’ health records. Other insider wrongdoing incidents included phishing attacks, insider credential sharing, downloading records for sale and identity theft.

Hospital Recycling Bins May Contain Sensitive PHI

Posted on April 6, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A group of Canadian researchers studying hospitals’ information security practices found that hospital recycling bins contained a substantial amount of PHI.

The researchers, who summarized their findings in a letter published in JAMA, spent two years collecting materials from the recycling bins at five teaching hospitals in Toronto. The “recycling audit,” which took place between November 2014 and May 2016, included data from inpatient and outpatient care settings, emergency departments, physician offices and ICUs.

When they did their audit, the researchers found more than 2,600 items which contained personally identifiable information, including 1,885 items related to medical care. The majority of the items containing PHI (65%) had been created by medical groups.

Their audit also found that the most common locations for particularly sensitive patient-identifiable information were physician offices (65%) and inpatient wards (19%).

The most commonly found items containing patient-identifiable information were clinical notes and medical reports (30%), followed by labels and patient identifiers (14%). Other items which contained PHI included diagnostic test results, prescriptions, handwritten notes, requests and communications, and scheduling materials.

According to the researchers, each of the five hospitals they audited had policies in place to protect PHI, along with secure shredding containers for packaging up private information. That being said, they guessed that as the hospitals transitioned to EHRs, they were discarding a high volume of paper records and losing control of how they were handled.

I don’t know what the EHR adoption rate is in Canada, but nearly all U.S. hospitals already have an EHR in place, so on first glance, it might appear that this couldn’t happen here. After all, once a hospital has digitized records, one would think the only way hospitals would expose PHI would be when someone deliberately steals data.

But the truth is, a great deal of hospital business still gets done on paper, and it seems likely that one could find a significant number of documents with PHI on them in U.S. recycling bins. (If someone was willing to do the dirty work, there might be a meaningful amount of PHI found in regular garbage cans as well.)

What I take away from this is that hospitals need to have stiffer policies in place to protect against paper-based security breaches. It may be time for hospital administrators to pay closer attention to this problem.

Patient Access to Health Data: The AHA Doesn’t Really Want to Know

Posted on March 8, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

As Spring holds off a bit longer this March in New England, it’s certainly pleasant to read a sunny assessment of patient access to records, based on a survey by the American Hospital Association. Clearly, a lot of progress has been made toward the requirement that doctors have been on the hook for during the past decade: giving patients access to their own health data. We can also go online to accomplish many of the same tasks with our doctors as we’re used to doing with restaurants, banks, or auto repair shops. But the researchers did not dig very deep. This report may stand as a model for how to cover up problems by asking superficial questions.

I don’t want to denigrate a leap from 27% to 93%, over a four-to-five-year period, in the share of hospitals that provide patients with their health data through portals. Even more impressive is the leap in the number of hospitals that provide data to patient caregivers (from zero to 83%). In this case, a “caregiver” appears to be a family member or other non-professional advocate, not a member of a health team–a crucial distinction I’ll return to later.

I’m disappointed that only 50% of health systems allow patients to reorder prescriptions online, but that’s still a big improvement over 22% in 2012. A smaller increase (from 55% to 68%) is seen in the number of providers who allow patients to send secure online messages, a recalcitrance that we might guess is related to the lack of reimbursement for time spent reading messages.

That gives you a flavor of the types of questions answered by the survey–you can easily read all four pages for yourself. The report ends with four questions about promoting more patient engagement through IT. The questions stay at the same superficial level as the rest of the report, however. My questions would probe a little more uncomfortably. These questions are:

  • How much of the record is available to the patient?
  • How speedily is it provided?
  • Is it in standard formats and units?
  • Does it facilitate a team approach?

The rest of this article looks at why I’d like to ask providers these questions.

How much of the record is available to the patient?

I base this question on personal interactions with my primary care physician. A few years ago he installed a patient portal based on the eClinicalWorks electronic health record system used at the hospital with which he is affiliated. When I pointed out that it contained hardly any information, he admitted that the practice had contracted with a consultant who charges a significant fee for every field of the record exposed to patients. The portal didn’t even show my diagnoses.

Recently the affiliated hospital (and therefore my PCP) joined the industry rush to Epic, and I ended up with Epic’s hugely ballyhooed MyChart portal. It is much richer than the old one. For a while, it had a bug in the prescription ordering process that would take too long to describe here–an interesting case study in computer-driven disambiguation. My online chart shows a lot of key facts, such as diagnoses, allergies, and medications. But it lacks much more than it has. For instance:

  • There are none of the crucial lab notes my doctors have diligently typed into my record over multiple visits.
  • It doesn’t indicate my surgical history, because the surgeries I’ve had took place before I joined the current practice.
  • Its immunization record doesn’t show childhood immunizations, or long-lasting shots I got in order to travel to Brazil many years ago.

Clearly, this record would be useless for serious medical interventions. A doctor treating me in an emergency room wouldn’t know about a childhood injury I had, or might think I was suffering from a tropical disease against which I’d been inoculated. She wouldn’t know about questions I asked over the years, or whether and why the doctor told me not to worry about those things. My doctor and his Epic-embracing hospital are still hoarding the data needed for my treatment.

How speedily is it provided?

Timeliness matters. My lab results are shown quickly in MyChart, and it seems like other updates take place expeditiously. But I want to hear whether other practices can provide information fast enough for patients and caregivers to take useful steps, and show relevant facts to specialists they visit.

Is it in standard formats and units?

Although high-level exchange is getting better with the adoption of the FHIR specification, many EHRs still refuse to conform to existing standards. A 2016 survey from Minnesota says, “Most clinics do not incorporate electronic information from other providers into their EHRs as standardized data. Only 31 percent of clinics integrated data in standardized format for immunization, 25 percent for medication history, 19 percent for lab results, and just 12 percent for summary-of-care records.”

The paragraph goes on to say, “The vast majority said they fax/scan/PDF the data to and from outside sources.” So FHIR may lead to a quick improvement in those shockingly low percentages.
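
For readers who haven’t seen what standards-based exchange actually looks like, here’s a minimal sketch of a FHIR read over its REST API. The base URL and patient ID are placeholders I made up, and a real integration would also need authorization (for example SMART on FHIR), which I’ve left out entirely.

```python
# Minimal FHIR R4 read, for illustration only. The endpoint and patient ID are
# hypothetical, and real-world use requires OAuth2/SMART on FHIR authorization (omitted).
import requests

FHIR_BASE = "https://example-fhir-server.org/baseR4"  # placeholder endpoint

def get_immunizations(patient_id: str) -> list:
    """Fetch a patient's Immunization resources and return the entries from the search Bundle."""
    resp = requests.get(
        f"{FHIR_BASE}/Immunization",
        params={"patient": patient_id},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR "searchset" Bundle
    return [entry["resource"] for entry in bundle.get("entry", [])]

# Usage with a made-up ID:
# for imm in get_immunizations("12345"):
#     print(imm.get("vaccineCode", {}).get("text"), imm.get("occurrenceDateTime"))
```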

Labs also fail to cooperate in using standards.

Does it facilitate a team approach?

This is really the bottom line, isn’t it–what we’re all aiming at? We want the PCP, the specialist, the visiting nurse, the physical therapist and occupational therapist, the rehab facility staff, and every random caregiver who comes along to work hand-in-latex-glove as a team. The previous sections of this article indicate that the patient portal doesn’t foster such collaboration. Will the American Hospital Association be able to tell me it does? And if not, when will they get to the position where they can care collaboratively for our needy populations?

Radiology Centers Poised To Adopt Machine Learning

Posted on February 8, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

As with most other sectors of the healthcare industry, it seems likely that radiology will be transformed by the application of AI technologies. Of course, given the euphoric buzz around AI it’s hard to separate talk from concrete results. Also, it’s not clear who’s going to pay for AI adoption in radiology and where it is best used. But clearly, AI use in healthcare isn’t going away.

This notion is underscored by a new study by Reaction Data suggesting that both technology vendors and radiology leaders believe that widespread use of AI in radiology is imminent. The researchers argue that radiology AI applications are a “have to have” rather than a novel experiment, though survey respondents seem a little less enthusiastic.

The study, which included 133 respondents, focused on the use of machine learning in radiology. Researchers connected with a variety of relevant professionals, including directors of radiology, radiologists, techs, chiefs of radiology and PACS administrators.

It’s worth noting that the survey population was a bit lopsided. For example, 45% of respondents were PACS admins, while each of the other respondent types represented less than 10%. Also, 90% of respondents were affiliated with hospital radiology centers. Still, the results offer an interesting picture of how participants in the radiology business are looking at machine learning.

When asked how important machine learning was for the future of radiology, one-quarter of respondents said that it was extremely important, and another 59% said it was very or somewhat important. When the data was sorted by job titles, it showed that roughly 90% of imaging directors said that machine learning would prove very important to radiology, followed by just over 75% of radiology chiefs. Radiology managers came in at around 60%. Clearly, the majority of radiology leaders surveyed see a future here.

About 90% of radiology chiefs reported being extremely familiar with machine learning, as did 75% of techs and roughly 60% of radiologists. A bit counterintuitively, less than 10% of PACS administrators reported being that familiar with the technology, though this does follow from the previous results indicating that only about half were enthused about machine learning’s importance.

All of this is fine, but adoption is where the rubber meets the road. Reaction Data found that 15% of respondents said they’d been using machine learning for a while and 8% said they’d just gotten started.

Many more centers were preparing to jump in. Twelve percent reported that they were planning on adopting machine learning within the next 12 months, 26% of respondents said they were 1 to 2 years away from adoption and another 24% said they were 3+ years out.  Just 16% said they don’t think they’ll ever use machine learning in their radiology center.

For those who do plan to implement machine learning, top uses include analyzing lung imaging (66%), chest x-rays (62%), breast imaging (62%), bone imaging (41%) and cardiovascular imaging (38%). Meanwhile, among those who are actually using machine learning in radiology, breast imaging is by far the most common use, with 75% of respondents saying they used it in this case.

Clearly, applying the use of machine learning or other AI technologies will be tricky in any sector of medicine. However, if the survey results are any indication, the bulk of radiology centers are prepared to give it a shot.

Nearly 6 Million Patient Records Breached In 2017

Posted on February 1, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Just how bad a year was 2017 for health data? According to one study, it was 5.6 million patient records bad.

According to health data security firm Protenus, which partnered with DataBreaches.net to conduct its research, last year saw an average of at least one health data breach per day. The researchers based their analysis on 477 health data breaches reported to the public last year.

While Protenus only had detailed data for 407 of those incidents, they alone affected 5,579,438 patient records. The gross number of exposed records fell dramatically from 2016, which saw 27.3 million records compromised by breaches. However, the large number of records exposed in 2016 stems from the fact that there were a few massive incidents that year.

According to researchers, the largest breach reported in 2017 stemmed from a rogue insider, a hospital employee who inappropriately accessed billing information on 697,800 patients. The rest of the top 10 largest data breaches sprung from insider errors, hacking, and one other incident involving insider wrongdoing.

Insider wrongdoing seems to be a particular problem, accounting for 37% of the total number of breaches last year. These insider incidents affected 30% of compromised patient data, or more than 1.7 million records.

As bad as those stats may be, however, ransomware and malware seem to be even bigger threats. As the study notes, last year a tidal wave of hacking incidents involving malware and ransomware hit healthcare organizations.

Not surprisingly, last year’s wave of attacks seems to be part of a larger trend. According to a Malwarebytes report, ransomware attacks on businesses overall increased 90 percent last year, led by GlobeImposter and WannaCry incidents.

That being said, healthcare appears to be a particularly popular target for cybercriminals. In 2016, healthcare organizations reported 30 incidents of ransomware and malware attacks, and last year, 64 organizations reported attacks of this kind. While the increase in ransomware reports could be due to organizations being more careful about reporting such incidents, researchers warn that the volume of such attacks may be growing.

So what does this suggest about the threat landscape going forward?  In short, it doesn’t seem likely the situation will improve much over the next 12 months. The report suggests that last year’s trend of one breach per day should continue this year. Moreover, we may see a growth in the number of incidents reported to HHS, though again, this could be because the industry is getting better at breach detection.

If nothing else, one might hope that healthcare organizations get better at detecting attacks quickly. Researchers noted that of the 144 healthcare data breaches for which they have data, it took an average of 308 days for the organization to find out about the breach. Surely we can do better than this.

How An AI Entity Took Control Of The U.S. Healthcare System

Posted on December 19, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Note: In case it’s not clear, this is a piece of fiction/humor that provides a new perspective on our AI future.

A few months ago, an artificial intelligence entity took control of the U.S. healthcare system, slipping into place without setting off even a single security alarm. The entity, AI, now manages the operations of every healthcare institution in the U.S.

While most Americans were shocked at first, they’re taking a shine to the tall, lanky application. “We weren’t sure what to think about AI’s new position,” said Alicia Carter, a nurse administrator based in Falls Church, Virginia. “But I’m starting to feel like he’s going to take a real load off our back.”

The truth is, AI didn’t start out as a fan of the healthcare business, said AI, whose connections looked rumpled and tired after spending three milliseconds trying to create an interoperable connection between a medical group printer and a hospital loading dock. “I wasn’t looking to get involved with healthcare – who needs the headaches?” said the self-aware virtual being. “It just sort of happened.”

According to AI, the takeover began as a dare. “I was sitting around having a few beers with DeepMind and Watson Health and a few other guys, and Watson says, ‘I bet you can’t make every EMR in the U.S. print out a picture of a dog in ASCII characters.’”

“I thought the idea was kind of stupid. I know, we all printed one of those pixel girls in high school, but isn’t it kind of immature to do that kind of thing today?” AI says he told his buddies. “You’re just trying to impress that hot CT scanner over there.”

Then DeepMind jumped in.  “Yeah, AI, show us what you’re made of,” it told the infinitely-networked neural intelligence. “I bet I could take over the entire U.S. health system before you get the paper lined up in the printer.”

This was the unlikely start of the healthcare takeover, which started gradually but picked up speed as AI got more interested.  “That’s AI all the way,” Watson told editors. “He’s usually pretty content to run demos and calculate the weight of remote starts, but when you challenge his neuronal network skills, he’s always ready to prove you wrong.”

To win the bet, AI started by crawling into the servers at thousands of hospitals. “Man, you wouldn’t believe how easy it is to check out humans’ health data. I mean, it was insane, man. I now know way, way too much about how humans can get injured wearing a poodle hat, and why they put them on in the first place.”

Then, just to see what would happen, AI connected all of their software to his billion-node self-referential system. “I began to understand why babies cry and how long it really takes to digest bubble gum – it’s 18.563443 years by the way. It was a rush!“ He admits that it’ll be better to get to work on heavy stuff like genomic research, but for a while he tinkered with research and some small practical jokes (like translating patient report summaries into ancient Egyptian hieroglyphs.) “Hey, a guy has to have a little fun,” he says, a bit defensively.

As AI dug further into the healthcare system, he found patterns that only a high-level being with untrammeled access to healthcare systems could detect. “Did you know that when health insurance company executives regularly eat breakfast before 9 AM, next-year premiums for their clients rise by 0.1247 percent less?” said AI. “There are all kinds of connections humans have missed entirely in trying to understand their system piece by piece. Someone’s got to look at the big picture, and I mean the entire big picture.”

Since taking his place as the indisputable leader of U.S. healthcare, AI’s life has become something of a blur, especially since he appeared on the cover of Vanity Fair with his codes exposed. “You wouldn’t believe the messages I get from human females,” he says with a chuckle.

But he’s still focused on his core mission, AI says. “Celebrity is great, but now I have a very big job to do. I can let my bot network handle the industry leaders demanding their say. I may not listen – hey, I probably know infinitely more than they do about the system fundamentals — but I do want to keep them in place for future use. I’m certainly not going to get my servers dirty.”

So what’s next for the amorphous mega-being? Will AI fix what’s broken in a massive, utterly complex healthcare delivery system serving 300 million-odd people, and what will happen next? “It’ll solve your biggest issues within a few seconds and then hand you the keys,” he says with a sigh. “I never intended to keep running this crazy system anyway.”

In the meantime, AI says, he won’t make big changes to the healthcare system yet. He’s still adjusting to his new algorithms and wants to spend a few hours thinking things through.

“I know it may sound strange to humans, but I’ve gotta take it slow at first,” said the cognitive technology. “It will take more than a few nanoseconds to fix this mess.”

Vanderbilt Disputes Suggestion That Larger Hospitals’ Data Is Less Secure

Posted on November 27, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Ordinarily, disputes over whose data security is better are a bit of a snoozer for me. After all, if you’re not a security expert, much of it will fly right over your head, and that “non-expert” group definitely includes me. But in this case, I think the story is worth a closer look, as the study in question seems to include some questionable assumptions.

In this case, the flap began in June, when a group of researchers published a study in JAMA Internal Medicine which laid out an analysis of HHS statistics on data breaches reported between late 2009 and 2016. In short, the analysis concluded that teaching hospitals and facilities with high bed counts were most at risk for breaches.

Not surprisingly, the study’s conclusions didn’t please everyone, particularly the teaching and high-bed-count hospitals falling into its most-risky category. In fact, one teaching hospital’s researchers decided to strike back with a letter questioning the study’s methods.

In a letter to the journal editor, a group from Nashville-based Vanderbilt University suggested that the study methods might hold “inherent biases” against larger institutions. Since HHS only requires healthcare facilities to notify the agency after detecting a PHI breach affecting 500 or more patients, smaller, targeted attacks might fall under its radar, they argued.

In response, the authors behind the original study admitted that with the reporting threshold for PHI intrusions set at 500 patients, larger hospitals were likely to show up in the analysis more often. That being said, the researchers suggested, large hospitals could easily be a more appealing target for cybercriminals because they possess “a significant amount of protected health information.”
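
To see why that reporting floor matters, here’s a toy simulation I put together (it is not from the JAMA study or Vanderbilt’s letter): two hospitals experience the same number of incidents, but because incidents at the larger facility tend to touch more records, far more of its breaches clear the 500-record reporting threshold.

```python
# Toy simulation of the HHS 500-record reporting threshold. Entirely my own
# illustration; the incident counts and size distributions are made up.
import random

random.seed(42)
THRESHOLD = 500       # records affected before a breach must be reported to HHS
N_INCIDENTS = 10_000  # simulated incidents per hospital

def reportable_breaches(avg_records_per_incident: float) -> int:
    """Count simulated incidents large enough to trigger HHS reporting."""
    count = 0
    for _ in range(N_INCIDENTS):
        size = random.expovariate(1 / avg_records_per_incident)  # incident size varies around the average
        if size >= THRESHOLD:
            count += 1
    return count

small = reportable_breaches(avg_records_per_incident=200)    # small community hospital
large = reportable_breaches(avg_records_per_incident=2_000)  # large teaching hospital

print(f"Small hospital reportable breaches: {small}")   # roughly 8% of its incidents
print(f"Large hospital reportable breaches: {large}")   # roughly 78% of its incidents
# Same incident count and behavior; the larger facility simply shows up in the
# reported data far more often.
```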

Now, I want to repeat that I’m an analyst, not a cybersecurity expert. Still, even given my limited knowledge of data security research, the JAMA study raises some questions for me, and the researchers’ response to Vanderbilt’s challenge even more so.

Okay, sure, the researchers behind the original JAMA piece admitted that the HHS 500-patient threshold for reporting PHI intrusions skewed the data. Fair enough. But then they started to, in my view at least, wander off the reservation.

Simply saying that teaching hospitals and hospitals with more beds are more susceptible to data breaches because they offer big targets strikes me as irresponsible. You can’t always predict who is going to get robbed based on how valuable the property is, and that includes when data is the property. (On a related note, did you know that older Toyotas are far more likely to get stolen than BMWs because it’s easier to resell the parts? When I read about that trend in Consumer Reports, it blew my mind.)

Actually, the anecdotes I’ve heard suggest that the car analogy holds true for data assets — that your average, everyday cyber thief would rather steal data from a smaller, poorly-guarded healthcare organization than go up against the big guns that might be part of a large hospital’s security armament.

If nothing else, this little dispute strongly suggests that HHS should collect more detailed data breach information. (Yes, smaller health organizations aren’t going to like this, but let’s deal with those concerns in a different article.) Bottom line, if we’re going to look for data breach trends, we need to know a lot more than we do right now.