
Radiology Centers Poised To Adopt Machine Learning

Posted on February 8, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

As with most other sectors of the healthcare industry, it seems likely that radiology will be transformed by the application of AI technologies. Of course, given the euphoric buzz around AI it’s hard to separate talk from concrete results. Also, it’s not clear who’s going to pay for AI adoption in radiology and where it is best used. But clearly, AI use in healthcare isn’t going away.

This notion is underscored by a new study by Reaction Data suggesting that both technology vendors and radiology leaders believe that widespread use of AI in radiology is imminent. The researchers argue that radiology AI applications are a “have to have” rather than a novel experiment, though survey respondents seem a little less enthusiastic.

The study, which included 133 respondents, focused on the use of machine learning in radiology. Researchers connected with a variety of relevant professionals, including directors of radiology, radiologists, techs, chiefs of radiology and PACS administrators.

It’s worth noting that the survey population was a bit lopsided. For example, 45% of respondents were PACS admins, while the rest of the respondent types represented less than 10%. Also, 90% of respondents were affiliated with hospital radiology centers. Still, the results offer an interesting picture of how participants in the radiology business are looking at machine learning.

When asked how important machine learning was for the future of radiology, one-quarter of respondents said it was extremely important, and another 59% said it was very or somewhat important. When the data was sorted by job title, roughly 90% of imaging directors said that machine learning would prove very important to radiology, followed by just over 75% of radiology chiefs. Radiology managers came in at around 60%. Clearly, the majority of radiology leaders surveyed see a future here.

About 90% of radiology chiefs reported being extremely familiar with machine learning, as did 75% of techs and roughly 60% of radiologists. A bit counterintuitively, less than 10% of PACS administrators reported being that familiar with the technology, though this does follow from the previous results indicating that only about half were enthused about machine learning’s importance.

All of this is fine, but adoption is where the rubber meets the road. Reaction Data found that 15% of respondents said they’d been using machine learning for a while and 8% said they’d just gotten started.

Many more centers were preparing to jump in. Twelve percent reported that they were planning on adopting machine learning within the next 12 months, 26% of respondents said they were 1 to 2 years away from adoption and another 24% said they were 3+ years out.  Just 16% said they don’t think they’ll ever use machine learning in their radiology center.

For those who do plan to implement machine learning, top uses include analyzing lung imaging (66%), chest x-rays (62%), breast imaging (62%), bone imaging (41%) and cardiovascular imaging (38%). Meanwhile, among those who are actually using machine learning in radiology, breast imaging is by far the most common use, with 75% of respondents saying they used it in this case.

Clearly, applying the use of machine learning or other AI technologies will be tricky in any sector of medicine. However, if the survey results are any indication, the bulk of radiology centers are prepared to give it a shot.

Nearly 6 Million Patient Records Breached In 2017

Posted on February 1, 2018 | Written by Anne Zieger

Just how bad a year was 2017 for health data? According to one study, it was 5.6 million patient records bad.

According to health data security firm Protenus, which partnered with DataBreaches.net to conduct its research, last year saw an average of at least one health data breach per day. The researchers based their analysis on 477 health data breaches reported to the public last year.

While Protenus had details for only 407 of those incidents, those alone affected 5,579,438 patient records. The gross number of exposed records fell dramatically from 2016, which saw 27.3 million records compromised by breaches. However, the large number of records exposed in 2016 stems from the fact that there were a few massive incidents that year.

According to researchers, the largest breach reported in 2017 stemmed from a rogue insider, a hospital employee who inappropriately accessed billing information on 697,800 patients. The rest of the top 10 largest data breaches sprung from insider errors, hacking, and one other incident involving insider wrongdoing.

Insider wrongdoing seems to be a particular problem, accounting for 37% of the total number of breaches last year. These insider incidents affected 30% of compromised patient data, or more than 1.7 million records.

As bad as those stats may be, however, ransomware and malware seem to be even bigger threats. As the study notes, last year a tidal wave of hacking incidents involving malware and ransomware hit healthcare organizations.

Not surprisingly, last year’s wave of attacks seems to be part of a larger trend. According to a Malwarebytes report, ransomware attacks on businesses overall increased 90 percent last year, led by GlobeImposter and WannaCry incidents.

That being said, healthcare appears to be a particularly popular target for cybercriminals. In 2016, healthcare organizations reported 30 incidents of ransomware and malware attacks, and last year, 64 organizations reported attacks of this kind. While the increase in ransomware reports could be due to organizations being more careful about reporting such incidents, researchers warn that the volume of such attacks may be growing.

So what does this suggest about the threat landscape going forward?  In short, it doesn’t seem likely the situation will improve much over the next 12 months. The report suggests that last year’s trend of one breach per day should continue this year. Moreover, we may see a growth in the number of incidents reported to HHS, though again, this could be because the industry is getting better at breach detection.

If nothing else, one might hope that healthcare organizations get better at detecting attacks quickly. Researchers noted that of the 144 healthcare data breaches for which they have data, it took an average of 308 days for the organization to find out about the breach. Surely we can do better than this.

How An AI Entity Took Control Of The U.S. Healthcare System

Posted on December 19, 2017 | Written by Anne Zieger

Note: In case it’s not clear, this is a piece of fiction/humor that provides a new perspective on our AI future.

A few months ago, an artificial intelligence entity took control of the U.S. healthcare system, slipping into place without setting off even a single security alarm. The entity, AI, now manages the operations of every healthcare institution in the U.S.

While most Americans were shocked at first, they’re taking a shine to the tall, lanky application. “We weren’t sure what to think about AI’s new position,” said Alicia Carter, a nurse administrator based in Falls Church, Virginia. “But I’m starting to feel like he’s going to take a real load off our back.”

The truth is, AI didn’t start out as a fan of the healthcare business, said the entity, whose connections looked rumpled and tired after spending three milliseconds trying to create an interoperable connection between a medical group printer and a hospital loading dock. “I wasn’t looking to get involved with healthcare – who needs the headaches?” said the self-aware virtual being. “It just sort of happened.”

According to AI, the takeover began as a dare. “I was sitting around having a few beers with DeepMind and Watson Health and a few other guys, and Watson says, ‘I bet you can’t make every EMR in the U.S. print out a picture of a dog in ASCII characters.’”

“I thought the idea was kind of stupid. I know, we all printed one of those pixel girls in high school, but isn’t it kind of immature to do that kind of thing today?” AI says he told his buddies. “You’re just trying to impress that hot CT scanner over there.”

Then DeepMind jumped in.  “Yeah, AI, show us what you’re made of,” it told the infinitely-networked neural intelligence. “I bet I could take over the entire U.S. health system before you get the paper lined up in the printer.”

This was the unlikely start of the healthcare takeover, which started gradually but picked up speed as AI got more interested.  “That’s AI all the way,” Watson told editors. “He’s usually pretty content to run demos and calculate the weight of remote starts, but when you challenge his neuronal network skills, he’s always ready to prove you wrong.”

To win the bet, AI started by crawling into the servers at thousands of hospitals. “Man, you wouldn’t believe how easy it is to check out humans’ health data. I mean, it was insane, man. I now know way, way too much about how humans can get injured wearing a poodle hat, and why they put them on in the first place.”

Then, just to see what would happen, AI connected all of their software to his billion-node self-referential system. “I began to understand why babies cry and how long it really takes to digest bubble gum – it’s 18.563443 years by the way. It was a rush!” He admits that it’ll be better to get to work on heavy stuff like genomic research, but for a while he tinkered with research and some small practical jokes (like translating patient report summaries into ancient Egyptian hieroglyphs). “Hey, a guy has to have a little fun,” he says, a bit defensively.

As AI dug further into the healthcare system, he found patterns that only a high-level being with untrammeled access to healthcare systems could detect. “Did you know that when health insurance company executives regularly eat breakfast before 9 AM, next-year premiums for their clients rise by 0.1247 percent less?” said AI. “There are all kinds of connections humans have missed entirely in trying to understand their system piece by piece. Someone’s got to look at the big picture, and I mean the entire big picture.”

Since taking his place as the indisputable leader of U.S. healthcare, AI’s life has become something of a blur, especially since he appeared on the cover of Vanity Fair with his codes exposed. “You wouldn’t believe the messages I get from human females,” he says with a chuckle.

But he’s still focused on his core mission, AI says. “Celebrity is great, but now I have a very big job to do. I can let my bot network handle the industry leaders demanding their say. I may not listen (hey, I probably know infinitely more than they do about the system fundamentals) but I do want to keep them in place for future use. I’m certainly not going to get my servers dirty.”

So what’s next for the amorphous mega-being? Will AI fix what’s broken in a massive, utterly complex healthcare delivery system serving 300 million-odd people, and what will happen next? “It’ll solve your biggest issues within a few seconds and then hand you the keys,” he says with a sigh. “I never intended to keep running this crazy system anyway.”

In the meantime, AI says, he won’t make big changes to the healthcare system yet. He’s still adjusting to his new algorithms and wants to spend a few hours thinking things through.

“I know it may sound strange to humans, but I’ve gotta take it slow at first,” said the cognitive technology. “It will take more than a few nanoseconds to fix this mess.”

Vanderbilt Disputes Suggestion That Larger Hospitals’ Data Is Less Secure

Posted on November 27, 2017 | Written by Anne Zieger

Ordinarily, disputes over whose data security is better are a bit of a snoozer for me. After all, if you’re not a security expert, much of it will fly right over your head, and that “non-expert” group definitely includes me. But in this case, I think the story is worth a closer look, as the study in question seems to include some questionable assumptions.

In this case, the flap began in June, when a group of researchers published a study in JAMA Internal Medicine which laid out an analysis of HHS statistics on data breaches reported between late 2009 and 2016. In short, the analysis concluded that teaching hospitals and facilities with high bed counts were most at risk for breaches.

Not surprisingly, the study’s conclusions didn’t please everyone, particularly the teaching and high-bed-count hospitals falling into its most-risky category. In fact, researchers from one teaching hospital decided to strike back with a letter questioning the study’s methods.

In a letter to the journal editor, a group from Nashville-based Vanderbilt University suggested that the study methods might hold “inherent biases” against larger institutions. Since HHS only requires healthcare facilities to notify the agency after detecting a PHI breach affecting 500 or more patients, smaller, targeted attacks might fall under its radar, they argued.

In response, the authors behind the original study acknowledged that with the reporting threshold for PHI breaches set at 500 patients, larger hospitals were likely to show up in the analysis more often. That being said, the researchers suggested, large hospitals could easily be a more appealing target for cybercriminals because they possess “a significant amount of protected health information.”

Now, I want to repeat that I’m an analyst, not a cybersecurity expert. Still, even given my limited knowledge of data security research, the JAMA study raises some questions for me, and the researchers’ response to Vanderbilt’s challenge even more so.

Okay, sure, the researchers behind the original JAMA piece admitted that the HHS 500-patient threshold for reporting PHI intrusions skewed the data. Fair enough. But then they started to, in my view at least, wander off the reservation.

Simply saying that teaching hospitals and hospitals with more beds are more susceptible to data breaches because they offer big targets strikes me as irresponsible. You can’t always predict who is going to get robbed by how valuable the property is, and that includes when data is the property. (On a related note, did you know that older Toyotas are far more likely to get stolen than BMWs because it’s easier to resell the parts? When I read about that trend in Consumer Reports it blew my mind.)

Actually, the anecdotes I’ve heard suggest that the car analogy holds true for data assets — that your average, everyday cyber thief would rather steal data from a smaller, poorly-guarded healthcare organization than go up against the big guns that might be part of large hospitals’ security armament.

If nothing else, this little dispute strongly suggests that HHS should collect more detailed data breach information. (Yes, smaller health organizations aren’t going to like this, but let’s deal with those concerns in a different article.) Bottom line, if we’re going to look for data breach trends, we need to know a lot more than we do right now.

Eliminate These Five Flaws to Improve Asset Utilization in Healthcare

Posted on October 4, 2017 | Written By

The following is a guest blog post by Mohan Giridharadas, Founder and CEO, LeanTaaS.

The passage of the Health Information Technology for Economic and Clinical Health (HITECH) Act accelerated the deployment of electronic health records (EHRs) across healthcare. The overwhelming focus was to capture every patient encounter and place it into an integrated system of records. Equipped with this massive database of patient data, health systems believed they could make exponential improvements to patient experiences and outcomes.

The pace of this migration resulted in some shortcuts being taken — the consequences of which are now apparent to discerning CFOs and senior leaders. Among these shortcuts was the use of resources and capacity as the basis for scheduling patients, a concept relied on by hundreds of schedulers in every health system. While simple to grasp, the approach is mathematically flawed.

Not being able to offer a new patient an appointment for at least 10 days negatively impacts the patient experience. Likewise, exceeding capacity by scheduling too many appointments results in long wait times for patients, which also negatively impacts their experience. The troubling paradox is that the very asset creating long wait times and long lead times for appointments also happens to perform at ~50 percent utilization virtually every day. The impact of a mathematically flawed foundation results in alternating between overutilization (causing long patient wait times and/or long delays in securing an appointment) and under-utilization (a waste of expensive capital and human assets).
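This paradox has a standard queueing-theory explanation: when service times are variable, the expected wait depends on both utilization and variance, so an asset can average ~50 percent utilization and still generate long waits. As a rough illustration (not from the post, and with invented numbers), the Pollaczek-Khinchine mean-wait formula for a single-server queue can be sketched in Python:

```python
# Pollaczek-Khinchine mean wait for an M/G/1 queue:
#   Wq = lambda * E[S^2] / (2 * (1 - rho)),  with rho = lambda * E[S].
# All numbers below are illustrative assumptions, not clinic data.

def mean_wait_minutes(arrivals_per_hour, mean_svc_min, svc_std_min):
    lam = arrivals_per_hour / 60.0               # arrivals per minute
    es = mean_svc_min                            # E[S]
    es2 = svc_std_min ** 2 + mean_svc_min ** 2   # E[S^2] = Var(S) + E[S]^2
    rho = lam * es                               # utilization
    if rho >= 1:
        return float("inf")                      # queue is unstable
    return lam * es2 / (2 * (1 - rho))

# Same 50% utilization, different duration variability:
low_var = mean_wait_minutes(0.5, 60, 10)    # fairly predictable appointments
high_var = mean_wait_minutes(0.5, 60, 60)   # highly variable appointments
print(round(low_var, 1), round(high_var, 1))
```

In this toy model the utilization is identical in both cases, yet raising the variability of appointment durations markedly increases the mean wait, which is exactly the over/under-utilization coexistence the author describes.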

Here are five specific flaws in the mathematical foundation of health system scheduling:

1. A medical appointment is a stochastic — not deterministic — event.

Every health system has some version of this grid — assets across the top, times of the day for each day of the week along the side — on paper, in electronic format or on a whiteboard. The assets could be specific (e.g., the GE MRI machine or virtual MRI #1, #2, etc.). As an appointment gets confirmed, the appropriate range of time on the grid gets filled in to indicate that the slot has been reserved.

Your local racquet club uses this approach to reserve tennis courts for its members. It works beautifully because the length of a court reservation is precisely known (i.e., deterministic) to be exactly one hour in duration. Imagine the chaos if club rules were changed to allow players to hold their reservation even if they arrive late (up to 30 minutes late) and play until they were tired (up to a maximum of two hours). This would make the start and end times for a specific tennis appointment random (i.e., stochastic). Having a reservation would no longer mean you would actually get on the court at your scheduled time. This happens to patients every day across many parts of a health system. The only way to address the fact that a deterministic framework was used to schedule a stochastic event is to “reserve capacity” either in the form of a time buffer (i.e., pretend that each appointment is actually longer than necessary) or as an asset buffer (i.e., hold some assets in reserve).
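The buffer argument above can be made concrete with a minimal Monte Carlo sketch (all numbers are illustrative assumptions, not clinic data): appointments are booked back-to-back on a fixed one-hour grid, but actual durations are random, so delays accumulate through the day even though the grid looks perfectly full.

```python
# Toy Monte Carlo: deterministic one-hour grid, stochastic actual durations.
import random

random.seed(7)

def simulate_day(n_appts=10, slot_min=60, mean_min=60, sd_min=15, trials=2000):
    """Average patient wait when random durations meet a rigid grid."""
    total_wait = 0.0
    for _ in range(trials):
        free_at = 0.0                     # when the asset actually frees up
        for i in range(n_appts):
            scheduled = i * slot_min      # grid start time for patient i
            start = max(scheduled, free_at)
            total_wait += start - scheduled
            dur = max(5.0, random.gauss(mean_min, sd_min))
            free_at = start + dur
    return total_wait / (trials * n_appts)

print(f"avg wait per patient: {simulate_day():.1f} min")
```

Even though the mean duration exactly matches the slot length, overruns propagate while early finishes are wasted, so average waits grow through the day; shrinking them requires exactly the time or asset buffers the author describes.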

2. The asset cannot be scheduled in isolation; a staff member has to complete the treatment.

Every appointment needs a nurse, provider or technician to complete the treatment. These staff members are scheduled independently and have highly variable workloads throughout the day. Having an asset that is available without estimating the probability of the appropriate staff member also being available at that exact time will invariably result in delays. Imagine if the tennis court required the club pro be present for the first 10 and last 10 minutes of every tennis appointment. The grid system wouldn’t work in that case either (unless the club was willing to have one tennis pro on the staff for every tennis court).
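A quick back-of-the-envelope illustration of this point, under the simplifying assumption (mine, not the author's) that asset and staff availability are independent:

```python
# If the asset and the staff member are scheduled separately, the chance that
# BOTH are free at the appointment time is roughly the product of their
# individual availabilities. The 80% figures are illustrative assumptions.
p_asset_free = 0.80   # asset is free at the scheduled time
p_staff_free = 0.80   # the right nurse/tech is also free at that time
p_on_time = p_asset_free * p_staff_free
print(round(p_on_time, 2))  # noticeably below either number on its own
```

Scheduling the asset alone therefore overstates readiness: two resources that each look 80 percent available jointly support an on-time start only about 64 percent of the time in this simplified model.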

3. It requires an estimation of probabilities.

Medical appointments have a degree of randomness — no-shows, cancellations and last-minute add-ons are a fact of life, and some appointments run longer or shorter than expected. Every other scheduling system faced with such uncertainty incorporates the mathematics of probability theory. For example, airlines routinely overbook their flights; the exact number of overbooked seats sold depends on the route, the day and the flight. They usually get it right, and the cancellations and no-shows create enough room for the standby passengers. Occasionally, they get it wrong and more passengers hold tickets than the number of seats on the airplane. This results in the familiar process of finding volunteers willing to take a later flight in exchange for some sort of compensation. Nothing in the EHR or scheduling systems used by hospitals allows for this strategic use of probability theory to improve asset utilization.
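A hedged sketch of the airline-style calculation the author alludes to: given a no-show probability and a fixed number of physical slots, a binomial model can estimate how many bookings can be accepted before the overflow risk exceeds a tolerance. The capacity, show rate, and tolerance below are invented for illustration:

```python
# Overbooking sketch: accept bookings beyond capacity as long as the
# probability of an overflow (more arrivals than slots) stays small.
from math import comb

def overflow_prob(bookings, capacity, p_show):
    """P(more than `capacity` of `bookings` patients actually show up)."""
    return sum(comb(bookings, k) * p_show**k * (1 - p_show)**(bookings - k)
               for k in range(capacity + 1, bookings + 1))

def max_bookings(capacity, p_show, tolerance=0.05):
    """Largest booking count keeping overflow risk within tolerance."""
    n = capacity
    while overflow_prob(n + 1, capacity, p_show) <= tolerance:
        n += 1
    return n

print(max_bookings(capacity=20, p_show=0.85))
```

With 20 slots and an 85 percent show rate, this toy model accepts one extra booking before the 5 percent overflow tolerance is breached; real systems would condition on the route/day-style factors the author mentions, but nothing like even this simple calculation exists in hospital scheduling systems.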

4. Start time and duration are independent variables.

Continuing with the airplane analogy: As a line of planes works its way toward the runway for departure, the controller really doesn’t care about each flight’s duration. Her job is to get each plane safely off the ground with an appropriate gap between successive takeoffs. If one 8-hour flight were to be cancelled, the controller cannot suddenly decide to squeeze in eight 1-hour flights in its place. Yet, EHRs and scheduling systems have conflated start time and appointment duration into a single variable. Managers, department leaders and schedulers have been taught that if they discover a 4-hour opening in the “appointment grid” for any specific asset, they are free to schedule any of the following combinations:

  • One 4-hour appointment
  • Two 2-hour appointments
  • One 2-hour appointment and two 1-hour appointments in any order
  • One 3-hour appointment and one 1-hour appointment in either order
  • Four 1-hour appointments

These are absolutely not equivalent choices. Each has wildly different resource-loading implications for the staff, and each choice has a different probability profile of starting or ending on time. This explains why the perfectly laid out appointment grid at the start of each day almost never materializes as planned.
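The non-equivalence is easy to demonstrate with a small simulation (the durations and variability below are assumptions for illustration): one long appointment and four short ones have the same expected finish time, but very different odds of running badly over, because the variations of independent short appointments partially cancel out.

```python
# Toy comparison: one 4-hour appointment vs four 1-hour appointments in the
# same 4-hour opening, each with sd = 25% of its mean. Illustrative only.
import random

random.seed(1)

def p_overrun(durations_hr, block_hr=4.0, grace_hr=0.5, trials=20000):
    """Probability the appointments run more than `grace_hr` past the block."""
    late = 0
    for _ in range(trials):
        total = sum(max(0.1, random.gauss(d, 0.25 * d)) for d in durations_hr)
        if total > block_hr + grace_hr:
            late += 1
    return late / trials

print(f"one 4-hour appt runs >30 min over:  {p_overrun([4.0]):.2f}")
print(f"four 1-hour appts run >30 min over: {p_overrun([1.0] * 4):.2f}")
```

In this sketch the single long appointment is roughly twice as likely to blow through the block by more than half an hour, even though both choices fill the same grid opening, which is why the morning's tidy appointment grid rarely survives the day.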

5. Setting appointments is more complicated than first-come, first-served.

Schedulers typically make appointments on a first-come, first-served basis. If a patient were scheduling an infusion treatment or MRI far in advance, the patient would likely hear “the calendar is pretty open on that day — what time would you like?” What seems like a patient-friendly gesture is actually mathematically incorrect. The appointment options for each future day should be a carefully orchestrated set of slots of varying durations that will result in the flattest load profile possible. In fact, blindly honoring patient appointment requests just “kicks the can down the road”; the scheduler has merely swapped the inconvenience of appointment time negotiation for excessive patient delays on the day of treatment. Instead, the scheduler should steer the patient to one of the recommended appointment slots based on the duration for that patient’s specific treatment.

In the mid-1980s, Sun Microsystems famously proclaimed that the “network is the computer.” The internet and cloud computing were not yet a thing, so most people could not grasp the concept of computers needing to be interconnected and that the computation would take place in the network and not on the workstation. In healthcare scheduling, “the duration is the resource” — the number of slots of a specific duration must be counted and allocated judiciously at various points throughout the day. Providers should carefully forecast the volume and the duration mix of patients they expect to serve for every asset on every day of the week. With that knowledge the provider will know, for example, that on Mondays, we need 10 1-hour treatments, 15 2-hour treatments and so on. Schedulers could then strategically decide to space appointments throughout the day (or cluster them in the morning or afternoon) by offering up two 1-hour slots at 7:10 a.m., one 1-hour slot at 7:40 a.m., etc. The allocation pattern matches the availability of the staff and the underlying asset to deliver the most level-loaded schedule for each day. In this construct, the duration is the resource being offered up to patients one at a time with the staff and asset availability as mathematical constraints to the equation (along with dozens of other operational constraints).
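The "duration is the resource" idea can be sketched in a few lines of Python. Everything here is a simplified, hypothetical illustration: a day template is pre-built from the forecast duration mix, and the scheduler offers patients the next recommended slot of the required duration rather than any open time.

```python
# Sketch: allocate a day's slots from a forecast duration mix, then offer the
# next recommended slot of the needed duration. All values are illustrative.
from collections import defaultdict

def build_template(day_start_min, forecast):
    """forecast: {duration_min: count}. Naively interleave durations so long
    and short treatments are spread across the day, not clustered."""
    slots = defaultdict(list)
    t = day_start_min
    pending = [(d, forecast[d]) for d in sorted(forecast, reverse=True)]
    while any(c > 0 for _, c in pending):
        for i, (d, c) in enumerate(pending):
            if c > 0:
                slots[d].append(t)   # a slot of duration d starts at t
                t += d
                pending[i] = (d, c - 1)
    return slots

def offer_slot(slots, duration_min):
    """Offer the earliest remaining slot of the requested duration."""
    return slots[duration_min].pop(0) if slots[duration_min] else None

day = build_template(7 * 60 + 10, {60: 3, 120: 2})  # 7:10 a.m. start
print(offer_slot(day, 60))  # earliest 60-minute slot, minutes after midnight
```

A production system would add staff and asset availability as constraints and use far smarter interleaving; this sketch only shows the shift from "pick any open time" to "allocate from a counted pool of duration-specific slots."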

Health systems need to re-evaluate the mathematical foundation used to guide their day-to-day operations — and upon which the quality of the patient experience relies. All the macro forces in healthcare (more patients, older patients, higher incidence of chronic illnesses, lower reimbursements, push toward value-based care, tighter operating and capital budgets) indicate an urgent need to be able to do more with existing assets without upsetting patient flow. A strong mathematical foundation will enable a level of operational excellence to help health systems increase their effective capacity for treating more patients while simultaneously improving the overall flow and reducing the wait time.

About Mohan Giridharadas
Mohan Giridharadas is an accomplished expert in lean methodologies. During his 18-year career at McKinsey & Company (where he was a senior partner/director for six years), he co-created the lean service operations practice and ran the North American lean manufacturing and service operations practices and the Asia-Pacific operations practice. He has helped numerous Fortune 500 companies drive operational efficiency with lean practices. As founder and CEO of LeanTaaS, a Silicon Valley-based innovator of cloud-based solutions to healthcare’s biggest challenges, Mohan works closely with dozens of leading healthcare institutions including Stanford Health Care, UCHealth, NewYork-Presbyterian, Cleveland Clinic, MD Anderson and more. Mohan holds a B.Tech from IIT Bombay, MS in Computer Science from Georgia Institute of Technology and an MBA from Stanford GSB. He is on the faculty of Continuing Education at Stanford University and UC Berkeley Haas School of Business and has been named by Becker’s Hospital Review as one of the top entrepreneurs innovating in healthcare. For more information on LeanTaaS, please visit http://www.leantaas.com and follow the company on Twitter @LeanTaaS, Facebook at https://www.facebook.com/LeanTaaS and LinkedIn at https://www.linkedin.com/company/leantaas.

Will ACOs Face Tough Antitrust Scrutiny?

Posted on August 2, 2017 | Written by Anne Zieger

For some reason, I’ve always been interested in antitrust regulation, not just in the healthcare industry but across the board.

To me, there’s something fascinating about how federal agencies define markets, figure out what constitutes an unfair level of market dominance and decide which deals are out of bounds. For someone who’s not a lawyer, perhaps that’s a strange sort of geeking out to do, but there you have it.

Obviously, given how complex industry relationships are, healthcare relationships are fraught with antitrust issues to ponder. Lately, I’ve begun thinking about how antitrust regulators will look at large ACOs. And I’ve concluded that ACOs will be on the radar of the FTC and U.S. Department of Justice very soon, if they aren’t already.

On their face, ACOs try to dominate markets, so there’s plenty of potential for them to tip the scales too far in their favor for regulators to ignore. Their business model involves both vertical and horizontal integration, either of which could be seen as giving participants too much power.

Please take the following as a guide from an amateur who follows antitrust issues. Again, IANAL, but my understanding is as follows:

  • Vertical integration in healthcare glues together related entities that serve each other directly, such as health plans, hospitals, physician groups and skilled nursing facilities.
  • Horizontal integration connects mutually interested service providers, including competitors such as rival hospitals.

Even without being a legal whiz, it’s easy to understand why either of these ACO models might lead to (what the feds would see as) a machine that squeezes out uninvolved parties. The fact that these providers may share a single EMR could make matters worse, as it strengthens the case that the parties can hoard data which binds patients to their network.

Regardless, it just makes sense that if a health plan builds an ACO network, cherry picking what it sees as the best providers, it’s unlikely that excluded providers will enjoy the same reimbursement health plan partners get. The excluded parties just won’t have as much clout.

Yes, it’s already the case that bigger providers may get either higher reimbursement or higher patient volume from insurers, but ACO business models could intensify the problem.

Meanwhile, if a bunch of competing hospitals or physician practices in a market decide to work together, it seems pretty unlikely that others could enter the market, expand their business or develop new service lines that compete with the ACO. Eventually, many patients would be forced to work with ACO providers, because their health plan would only pay for care from this market-dominant conglomerate.

Of course, these issues are probably being kicked around in legal circles. I’m equally confident that the ACOs, which can afford high-ticket legal advice, have looked at these concerns as well. But to my knowledge these questions aren’t popping up in the trade press, which suggests to me that they’re not a hot topic in non-legal circles.

Please note that I’m not taking a position here on whether antitrust regulation is fair or appropriate here. I’m just pointing out that if you’re part of an ACO, you may be more vulnerable to antitrust suits than you thought. Any entity which has the power to crush competition and set prices is a potential target.

A Hospital CIO Perspective on Precision Medicine

Posted on July 31, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

#Paid content sponsored by Intel.

In this video interview, I talk with David Chou, Vice President, Chief Information and Digital Officer with Kansas City, Missouri-based Children’s Mercy Hospital. In addition to his work at Children’s Mercy, he helps healthcare organizations transform themselves into digital enterprises.

Chou previously served as a healthcare technology advisor with law firm Balch & Bingham and Chief Information Officer with the University of Mississippi Medical Center. He also worked with the Cleveland Clinic to build a flagship hospital in Abu Dhabi, as well as working in for-profit healthcare organizations in California.

Precision Medicine and Genomic Medicine are important topics for every hospital CIO to understand. In my interview with David Chou, he provides the hospital CIO perspective on these topics and offers insights into what a hospital organization should be doing to take part in and be prepared for precision medicine and genomic medicine.

Here are the questions I asked him, if you’d like to skip to a specific topic in the video or check out the full video interview embedded below:

What are you doing in your organization when it comes to precision medicine and genomic medicine?

Hospitals Aren’t Getting Much ROI From RCM Technology

Posted on July 24, 2017 I Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

If your IT investments aren’t paying off, your revenue cycle management process is clunky and consumers are defaulting on their bills, you’re in a pretty rocky situation financially. Unfortunately, that’s just the position hospitals find themselves in lately, according to a new study.

The study, which was conducted by the Healthcare Financial Management Association and Navigant, surveyed 125 hospital and health system chief financial officers and revenue cycle executives.

When they looked at the data, researchers saw that hospitals are being hit with a double whammy. On the one hand, the RCM systems hospitals have in place don’t seem to be cutting it, and on the other, the hospitals are struggling to collect from patients.

Nearly three out of four respondents said that their RCM technology budgets were increasing, with 32% reporting that they were increasing spending by 5% or more. Seventy-seven percent of hospitals with fewer than 100 beds and 78% of hospitals with 100 to 500 beds plan to increase such spending, the survey found.

The hospital leaders expect that technology investments will improve their RCM capabilities, with 79% considering business intelligence analytics, EHR-enabled workflow or reporting, revenue integrity, coding and physician/clinician documentation options.

Unfortunately, the software infrastructure underneath these apps isn’t performing as well as they’d like. Fifty-one percent of respondents said that their organizations had trouble keeping up with EHR upgrades, or weren’t getting the most out of functional, workflow and reporting improvements. Given these obstacles, which limit hospitals’ overall tech capabilities, these execs have little chance of seeing much ROI from RCM investments.

Not only that, CFOs and RCM leaders weren’t sure how much impact existing technology was having on their organizations. In fact, 41% said they didn’t have methods in place to track how effective their technology enhancements have been.

To address RCM issues, hospital leaders are looking beyond technology. Some said they were tightening up their revenue integrity process, which is designed to ensure that coding and charge capture processes work well and pricing for services is reasonable. Such programs are designed to support reliable financial reporting and efficient operations.

Forty-four percent of respondents said their organizations had established revenue integrity programs, and 22% said revenue integrity was a top RCM focus area for the coming year. Meanwhile, execs whose organizations already had revenue integrity programs in place said that the programs offered significant benefits, including increased net collections (68%), greater charge capture (61%) and reduced compliance risks (61%).

Still, even if a hospital has its RCM house in order, that’s far from the only revenue drain it’s likely to face. More than 90% of respondents think the steady increase in consumer responsibility for care will have an impact on their organizations, with rural hospital executives particularly concerned, the study found.

In an effort to turn the tide, hospital financial execs are making it easier for consumers to pay their bills, with 93% of respondents offering an online payment portal and 63% rolling out cost-of-care estimation tools. But few hospitals are conducting sophisticated collections initiatives. Only 14% of respondents said they were using advanced modeling tools for predicting propensity to pay, researchers said.

One Hospital Faces Rebuild After Brutal Cyberattack

Posted on July 20, 2017 I Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Countless businesses were hit hard by the recent Petya ransomware attack, but few as hard as Princeton, West Virginia-based Princeton Community Hospital. After struggling with the aftermath of the Petya attack, the hospital had to rebuild its entire network and reinstall its core systems.

The Petya assault, which hit in late June, pounded large firms across the globe, including Nuance, Merck, advertiser WPP, Danish shipping and transport firm Maersk and legal firm DLA Piper.  The list of Petya victims also includes PCH, a 267-bed facility based in the southern part of the state.

After the attack, IT staffers initially concluded that the hospital had emerged relatively unscathed. Hospital leaders noted that it was continuing to provide all inpatient care and services, as well as all other patient care services such as surgeries, therapeutics, diagnostics, lab and radiology, but was experiencing some delays in processing radiology information for non-emergent patients. Also, for a while the hospital diverted all non-emergency ambulance visits away from its emergency department.

However, within a few days executives found that the hospital’s IT troubles weren’t over. “Our data appears secure, intact, and not hacked into; yet we are unable to access the data from the old devices in the network,” said the hospital in a post on Facebook.

To recover from the Petya attack, PCH decided that it had to install 53 new computers throughout the hospital offering clean access to its Meditech EMR system, install new hard drives on all devices throughout the system and build out an entirely new network.

When you consider how much time its IT staff must’ve logged bringing basic systems online, rebuilding computers and network infrastructure, it seems clear that the hospital took a major financial blow when Petya hit.

Not only that, I have little doubt that PCH faces doubts in the community about its security.  Few patients understand much, if anything, about cyberattacks, but they do want to feel that their hospital has things under control. Having to admit that your network has been compromised isn’t good for business, even if much bigger companies in and outside the healthcare business were brought to their knees by the same attack. It may not be fair, but that’s the way it is.

That being said, PCH seems to have done a good job keeping the community it serves aware of what was going on after the Petya dust settled. It also made the almost certainly painful decision to rebuild key IT assets relatively quickly, which might not have been feasible for a bigger organization.

All told, it seems that PCH survived Petya as successfully as any other business might have, and better than some. Let’s hope the pace of global cyberattacks doesn’t speed up further. While PCH might have rebounded successfully after Petya, there’s only so much any hospital can take.

The Fight For Patient Health Data Access Is Just Beginning

Posted on July 11, 2017 I Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

When some of us fight to give patients more access to their health records, we pitch everyone on the benefits it can offer — and act as though everyone feels the same way.  But as most of us know, in their heart of hearts, many healthcare industry groups aren’t exactly thrilled about sharing their clinical data.

I’ve seen this first hand, far too many times. As I noted in a previous column, some providers all but refuse to provide me with my health data, and others act like they’re doing me a big favor by deigning to share it. Yet others have put daunting processes in place for collecting your records or make you wait weeks or months for your data. Unfortunately, the truth, however inconvenient it may be, is that they have reasons to act this way.

Sure, in public, hospital execs argue for sharing data with both patients and other institutions. They all know that this can increase patient engagement and boost population health. But in private, they worry that sharing such data will encourage patients to go to other hospitals at will, and possibly arm their competitors in their battle for market share.

Medical groups have their own concerns. Physicians understand that putting data in patient’s hands can lead to better patient self-management, which can tangibly improve outcomes. That’s pretty important in an era when government and commercial payers are demanding measurably improved outcomes.

Still, though they might not admit it, doctors don’t want to deluge patients with a flood of data which could cause them to worry about inconsequential issues, or feel that data-equipped patients will challenge their judgment. And can we please admit that some simply don’t like ceding power over their domain?

Given all of this, I wasn’t surprised to read that several groups are working to improve patients’ access to their health data. Nor was it news to me that such groups are struggling (though it was interesting to hear what they’re doing to help).

MedCity News spoke to the cofounder of one such group, Share for Cures, which works to encourage patients to share their health data for medical research. The group also hopes to foster other forms of patient health data sharing.

Cofounder Jennifer King told MCN that patients face a technology barrier to accessing such records. For example, she notes, existing digital health tools may offer limited interoperability with other data sets, and patients may not be sure how to use portals. Her group is working to remove these obstacles, but “it’s still not easy,” King told a reporter.

Meanwhile, she notes, almost every hospital has implemented a customized medical record, which can often block data sharing even if the hospitals buy EMRs from the same vendor. And if patients have multiple doctors, at least a few will have EMRs that don’t play well with others, so sharing records between them may not be possible, King said.

To address such data sharing issues, King’s nonprofit has created a platform called SHARE, an acronym for System for Health and Research Data Exchange. SHARE lets users collect and aggregate health and wellness data from multiple sources, including physician EMRs, drug stores, mobile health apps and almost half the hospitals in the U.S.

Not only does SHARE make it easy for patients to access their own data, it’s also simple to share that data with medical research teams. This approach offers researchers an important set of benefits, notably the ability to be sure patients have consented to having their data used, King notes. “One of the ways around [HIPAA] is that patients are the true owners,” she said. “With direct patient authorization…it’s not a HIPAA issue because it’s not the doctor sharing it with someone else. It’s the patient sharing it.”

Unfortunately (and this is me talking again) the platform faces the same challenges as any other data sharing initiative.

In this case, the problem is that like other interoperability solutions, SHARE can only amass data that providers are actually able to share, and that leaves a lot of them out of the picture. In other words, it can’t do much to solve the underlying problem. Another major issue is that if patients are reluctant to use even something as simplified as a portal, they’re not likely to use SHARE either.

I’m all in favor of pushing for greater patient data access, for personal as well as professional reasons. And I’m glad to hear that there are groups springing up to address the problem, which is obviously pretty substantial. I suspect, though, that this is just the beginning of the fight for patient data access.

Until someone comes up with a solution that makes it easy and comfortable for providers to share data, while defusing their competitive concerns, it’s just going to be more of the same old, same old. I’m not going to hold my breath waiting for that to happen.