
Healthcare Data Standards Tweetstorm from Arien Malec

Posted on May 20, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network, which currently consists of 10 blogs containing over 8,000 articles, over 4,000 of which John has written himself. These EMR and healthcare IT articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading Health IT career job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. He is highly involved in social media, and in addition to his blogs can be found on Twitter (@techguy and @ehrandhit) and LinkedIn.

If you don’t follow Arien Malec on Twitter, you should. He’s got strong opinions and an inside perspective on the real challenges associated with healthcare data interoperability.

As proof, check out the following healthcare standards tweetstorm he posted (reproduced here outside of Twitter for easier reading):

1/ Reminder: #MU & CEHRT include standards for terminology, content, security & transport. Covers eRx, lab, Transitions of Care.

2/ If you think we “don’t have interop” b/c no standards name, wrong.

3/ Standards could be ineffective, may be wrong, may not be implemented in practice, or other elts. missing

4/ But these are *different* problems from “gov’t didn’t name standards” & fixes are different too.

5/ e.g., “providers don’t want 60p CCDA documents” – data should be structured & incorporated.

6/ #actually both (structured data w/terminology & incorporate) are required by MU/certification.

7/ “but they don’t work” — OK, why? & what’s the fix?

8/ “Government should have invested in making the standards better”

9/ #actually did. NLM invested in terminology. @ONC_HealthIT invested in CCDA & LRU projects w/ @HL7, etc.

10/ “government shouldn’t have named standards unless they were known to work” — would have led to 0 named

11/ None of this is to say we don’t have silos, impediments to #interoperability, etc.

12/ but you can’t fix the problem unless you understand it first.

13/ & “gov’t didn’t name standards” isn’t the problem.

14/ So describe the problems, let’s work on fixing them, & abandon magical thinking & 🦄. The End.

Here was my immediate response to the tweetstorm:

I agree with much of what Arien says about there being standards and the government having named those standards. That isn’t the reason the exchange of health information isn’t happening. As he says in his third tweet above, the standards might not be effective, they may be implemented poorly, they might be missing elements, etc etc etc. However, you can’t say there wasn’t a standard or that the government didn’t choose a standard.

Can we just all be honest with ourselves and admit that many people in healthcare don’t want health data to be shared? If they did, we’d have solved this problem.

The good news is that there are some signs this is changing. However, moving someone from not wanting to share data to sharing it openly is hard and usually happens in steps. A company or individual doesn’t change its culture to one of open data sharing overnight.

Can Using Simple Metrics Help Drive Long-Term EHR Adoption? – Breakaway Thinking

Posted on May 18, 2016 | Written By

The following is a guest blog post by Lauren Brown, Adoption Specialist at The Breakaway Group (A Xerox Company). Check out all of the blog posts in the Breakaway Thinking series.
Gaining clinical, financial, and operational value from Electronic Health Record (EHR) applications has become a top priority for most health organizations across the country. Gone are the days of simply focusing on implementation that, in many cases, led to dissatisfaction and low adoption rates by staff. Previously, dissatisfied customers began looking to switch applications in hopes of gaining better results. However, studies show that switching EHRs does not solve the dissatisfaction problem. In fact, only a reported 43% of physicians are glad they made the switch to a new application, and 49% reported lower productivity as a result of the switch.

Recently, there has been a shift towards optimizing these new technologies and focusing on how to get the most out of their chosen application. It is essential for organizations to establish an optimization plan in order to achieve long-term, measurable results. Utilizing a metric-driven optimization approach gives healthcare organizations the opportunity to maximize their EHR investment and uncover opportunities for adjustments that substantially bolster technology integration.

Metric-driven optimization analyzes performance data and uses this information to drive continuous performance improvement throughout the organization. The U.S. Department of Health and Human Services suggests focusing metrics on how the system performs, how it will affect the organization, and how users experience the system. The ultimate goal is to execute well-designed strategies to help organizations identify and reduce workflow inconsistency, maximize application performance, and improve patient care.

So what are the keys to a metric-driven optimization approach?

Incorporate metrics early

Initial training serves well in focusing on application basics. But adoption occurs at a varying pace, so it’s important to continually monitor training and create a plan for late adopters. During training, staff will likely remember only a small portion of the information they are taught; if optimization occurs too late in the process, users do not learn best-practice workflow. This can result in workaround habits that become difficult to change. The use of metrics early in the process will help to monitor EHR adoption and focus on areas of opportunity. Metrics allow you to identify individuals who are struggling with their education and intervene.
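As a concrete (and deliberately simplified) example of using metrics to find late adopters, here’s a sketch that flags users whose proficiency scores fall below an assumed threshold. The per-user proficiency score and the 0.70 cutoff are hypothetical illustrations, not Breakaway’s actual methodology.

```python
# A minimal sketch of flagging struggling users from adoption metrics.
# The scores and cutoff below are made-up examples for illustration.
proficiency_scores = {
    "nurse_a": 0.92, "nurse_b": 0.58, "dr_c": 0.81, "dr_d": 0.45,
}
THRESHOLD = 0.70  # assumed cutoff separating proficient users from strugglers

# Sort strugglers so the lowest scorers get follow-up training first.
strugglers = sorted(
    (user for user, score in proficiency_scores.items() if score < THRESHOLD),
    key=proficiency_scores.get,
)
print("Schedule follow-up training for:", strugglers)  # ['dr_d', 'nurse_b']
```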

Utilize system data found through metrics

Often, healthcare organizations try to mimic processes and workflows from past applications or paper records. This method can get you through the initial implementation, but it is not sustainable for long-term adoption. Before implementation begins, it’s important to analyze and document best practice procedures. In order to get the most out of the system once it’s in place, you’ll want to examine staff performance and analyze key workflows. The insights you gain will help ensure that productivity and stability continue to increase over time.

Capturing the right data allows you to identify inconsistencies and application issues that would have otherwise gone unnoticed. Developing and reporting metrics shows the value of optimization efforts and helps support staff moving forward. Existing workflow issues, if not addressed, will become more visible with technology. Utilize the technology to eliminate redundant, time-consuming processes. Look at your EHR as the leverage you need to create change to promote consistency and transformation across the organization.

Metric-driven optimization is an ongoing process

Optimization is an ongoing effort – not a one-time event. Incorporating metrics into the long-term roadmap as a continuous project will allow you to respond to changes in a timely fashion. Comprehensive metrics on how end users reach proficiency in the EHR application are an important element in ensuring adoption success. The more metrics are shared, the more value an organization can gain from optimization efforts. Changes, including system upgrades and new employees, can continue to challenge optimization gains that were previously made. They often require both functional and cultural changes in processes that impact many different groups across an organization. Data about these changes are key to ensuring that those who will be impacted are aware and able to adapt for the life of the application.

By taking a metric-driven optimization approach, healthcare organizations improve their use of technology and achieve long-term adoption. Instead of simply installing an EHR, the application is leveraged to enhance performance and push organizations to exceed expectations with patient care.

How has the use of metrics improved your organization’s technology adoption?

Xerox is a sponsor of the Breakaway Thinking series of blog posts. The Breakaway Group is a leader in EHR and Health IT training.

Joint Commission Now Allows Texting Of Orders

Posted on May 17, 2016 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

For a long time, it was common for clinicians to share private patient information with each other via standard text messages, despite the fact that the information was sent in the clear and could theoretically be intercepted and read (which, along with other factors, makes SMS texting a HIPAA violation in most cases). To my knowledge, there have been no major cases based on theft of clinically-oriented texts, but it certainly could have happened.

Over the past few years, however, a number of vendors have sprung up to provide HIPAA-compliant text messaging.  And apparently, these vendors have evolved approaches which satisfy the stringent demands of The Joint Commission. The hospital accreditation group had previously prohibited hospitals from sanctioning the texting of orders for patient care, treatment or services, but has now given it the go-ahead under certain circumstances.

This represents an about-face from 2011, when the group had deemed the texting of orders “not acceptable.” At the time, the Joint Commission said, the technology available didn’t provide the safety and security necessary to adequately support the use of texted orders. But now that several HIPAA-compliant text-messaging apps are available, the game has changed, according to the accrediting body.

Prescribers may now text such orders to hospitals and other healthcare settings if they meet the Commission’s Medication Management Standard MM.04.01.01. In addition, the app prescribers use to text the orders must provide for a secure sign-on process, encrypted messaging, delivery and read receipts, date and time stamps, customized message retention time frames and a specified contact list for individuals authorized to receive and record orders.
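Just to make that checklist concrete, here’s a minimal sketch of how a hospital might encode those capability requirements when vetting a messaging app. The profile fields and function names are my own hypothetical stand-ins, not part of any Joint Commission or vendor API.

```python
from dataclasses import dataclass

# Hypothetical capability profile for a secure-messaging app; the field names
# mirror the capabilities listed above, not any real certification schema.
@dataclass
class MessagingAppProfile:
    secure_sign_on: bool
    encrypted_messaging: bool
    delivery_and_read_receipts: bool
    date_time_stamps: bool
    custom_retention_windows: bool
    authorized_recipient_list: bool

def meets_texted_order_requirements(app: MessagingAppProfile) -> bool:
    """True only if every required capability is present."""
    return all(vars(app).values())

app = MessagingAppProfile(True, True, True, True, True, False)
print(meets_texted_order_requirements(app))  # False: no authorized contact list
```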

I see this as a welcome development. After all, it’s better to guide and control key aspects of a process than to let it continue on beneath the surface. Also, the reality is that healthcare entities need to keep adapting to and building upon the way providers actually communicate. Failing to do so can only add layers to a system already fraught with inefficiencies.

That being said, treating provider-to-provider texts as official communications generates some technical issues that haven’t yet been addressed, so far as I know.

Most particularly, if clinicians are going to be texting orders — as well as sharing PHI via text — with the full knowledge and consent of hospitals and other healthcare organizations, it’s time to look at what it takes to manage that information more efficiently. When used this way, texts go from informal communication to extensions of the medical record, and organizations should address that reality.

At the very least, healthcare players need to develop policies for saving and managing texts, and more importantly, for mining the data found within these texts. And that brings up many questions. For example, should texts be stored as a searchable file? Should they be appended to the medical records of the patients referenced, and if so, how should that be accomplished technically? How should texted information be integrated into a healthcare organization’s data mining efforts?

I don’t have the answers to all of these questions, but I’d argue that if texts are now vehicles for day-to-day clinical communication, we need to establish some best practices for text management. It just makes sense.

The Perfect EHR Workflow – Video EHR

Posted on May 12, 2016 | Written By John Lynn

I’ve been floating this idea out there for years (2006 to be exact), but I’d never put it together in one consolidated post that I could point to when talking about the concept. I call it the Video EHR and I think it could be the solution to many of our current EHR woes. I know that many of you will think it’s a bit far fetched and in some ways it is. However, I think we’re culturally and technically almost to the point where the video EHR is a feasible opportunity.

The concept is very simple. Put video cameras in each exam room and have those videos replace your EHR.

Technical Feasibility
Of course there are some massive technical challenges to make this a reality. However, the cost of everything related to this idea has come down significantly. The cost of HD video cameras is negligible. Video storage is extremely cheap and getting cheaper every day. Bandwidth is cheaper and higher quality, with so much potential to grow as more cities get fiber connectivity. And if this were built on the internal network instead of the cloud, bandwidth would be an easily solved issue.

When talking costs, it’s important to note that there would be increased costs over the current documentation solutions. No one is putting in high quality video cameras and audio equipment to record their visits today. Not to mention wiring the exam room so that it all works. So, this would be an added cost.

Otherwise, the technology is all available today. We can easily record, capture and process HD video and even synchronize it across multiple cameras. None of this is technically challenging. Voice recognition and NLP have progressed significantly, so you could process the audio and convert it into the granular data elements needed for billing, clinical decision support, advanced care, population health, etc. These would be compiled into a high-quality presentation layer that providers could use to consume data from past visits.
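As a toy illustration of that audio-to-data step, here’s a minimal sketch that assumes an upstream speech-to-text engine has already produced a transcript of the visit. The patterns and vocabulary are my own simplifications; a real system would use a clinical NLP pipeline rather than a couple of regexes.

```python
import re

# A sample transcript, standing in for speech-to-text output of a visit.
transcript = (
    "Blood pressure is 128 over 84. Continue lisinopril 10 mg daily. "
    "Patient reports no chest pain."
)

# Pull out a vital sign and a medication order as granular data elements.
bp = re.search(r"blood pressure is (\d+) over (\d+)", transcript, re.I)
med = re.search(r"continue (\w+) (\d+) mg (\w+)", transcript, re.I)

elements = {}
if bp:
    elements["bp_systolic"], elements["bp_diastolic"] = map(int, bp.groups())
if med:
    elements["medication"] = {"name": med.group(1),
                              "dose_mg": int(med.group(2)),
                              "frequency": med.group(3)}
print(elements)  # structured elements ready for billing or decision support
```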

Facial recognition technology has also progressed to the point that we could use these videos to help address the patient identification and patient matching problems that plague healthcare today. We’d have to find the right balance between trusting the technology and human verification, but it would be much better and likely more convenient than what we have today.
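Here’s a minimal sketch of the embedding-comparison approach modern face recognition uses for this kind of matching. The vectors below are random stand-ins for what a face-embedding model would actually produce, and the 0.8 threshold is an assumption; anything below it falls back to the human verification mentioned above.

```python
import numpy as np

# Random stand-ins for face embeddings; a real system would compute these
# with a trained face-recognition model at enrollment time.
rng = np.random.default_rng(0)
enrolled = {"patient_001": rng.normal(size=128),
            "patient_002": rng.normal(size=128)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching patient ID, or None to trigger human review."""
    best_id, best_score = max(
        ((pid, cosine(probe, vec)) for pid, vec in gallery.items()),
        key=lambda pair: pair[1],
    )
    return best_id if best_score >= threshold else None

# A noisy re-capture of patient_001, as if from the exam-room camera.
probe = enrolled["patient_001"] + rng.normal(scale=0.1, size=128)
print(identify(probe, enrolled))  # "patient_001"; None would mean ask a human
```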

Imagine the doctor walking into an exam room where the video cameras have already identified the patient, and they identify the doctor as she walks in. The patient’s medical record could then be pulled up automatically on the doctor’s tablet, available before she even greets the patient.

Plus, does the doctor even need a tablet at all? Could they instead use big digital signs on the walls, voice controlled by a Siri- or Alexa-like AI solution? I can already hear it: “Alexa, pull up John Lynn’s cholesterol lab results for the past year.” Next thing you know, a nice chart of my cholesterol appears on the big screen for both doctor and patient to see.
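For a flavor of what sits behind a request like that, here’s a minimal sketch of parsing the spoken query into a structured chart request. The one-regex grammar is a toy assumption; a production assistant would use a trained natural-language-understanding model.

```python
import re

QUERY = "Alexa, pull up John Lynn's cholesterol lab results for the past year."

# Parse the utterance into (patient, analyte, time window) for the chart query.
m = re.search(r"pull up (.+?)'s (\w+) lab results for the past (\w+)", QUERY, re.I)
if m:
    request = {"patient": m.group(1), "analyte": m.group(2), "window": m.group(3)}
    print(request)  # {'patient': 'John Lynn', 'analyte': 'cholesterol', 'window': 'year'}
```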

Feels pretty far fetched, but all of the technology I describe is already here. It just hasn’t been packaged in a way that makes sense for this application.

Pros
Ideal Workflow for Providers – I can think of no better workflow for a doctor or nurse. Assuming the tech works properly (and that’s a big assumption I’ll discuss in the cons), the provider walks into the exam room and engages with the patient. Everything is documented automatically. Since it’s video, I mean literally everything would be documented automatically. Providers would just focus on engaging with the patient, learning about their health challenges, and addressing their issues.

Patient Experience – I’m pretty sure patients wouldn’t know what to do if their doctor or nurse was solely focused on them and wasn’t stuck with their head in a chart or in their screen. It would totally change patients’ relationship with their doctors.

Reduced Liability – Since you would literally have a multi-angle video and audio recording of the visit, you’d have the proof you need to show that you offered specific instructions or warned of certain side effects; any number of medical malpractice issues could be resolved by a quick look at the video from the visit. The truth will set you free, and you’d literally have the truth about what happened during the visit on video.

No Click Visit – This really is part of the “Ideal Workflow” section, but it’s worth pointing out all the things providers do today to document in their EHR. The biggest complaint is the number of clicks a doctor has to make. In the video EHR world, where everything is recorded and processed to document the visit, you wouldn’t have any clicks.

Ergonomics – I’ve been meaning to write a series of posts on the health consequences doctors are experiencing thanks to EHR software. I know many who have reported major back trouble due to time spent hunched over their computer documenting in the EHR. You can imagine the risk of carpal tunnel and other hand and wrist issues that are bound to come up. All of this gets resolved if the doctor literally walks into the exam room and just sees the patient. Depending on how the Video EHR is implemented, the doctor might have to still spend time verifying the documentation or viewing past documentation. However, that could most likely be done on a simple tablet or even using a “Siri”-like voice implementation which is much better ergonomically.

Learning – In mental health this happens all the time. Practicum students are recorded giving therapy, and then a seasoned counselor advises them on how they did. No doubt we could see some of the same learning benefits in a medical practice. Sometimes that would come through peer review, but also through the mere fact of a doctor watching themselves on camera.

Cons
Privacy – The biggest fear with this idea is that most people think this is or could be a major privacy issue. They usually ask the question, “Will patients feel comfortable doing this?” On the privacy front, I agree that video is more personal than granular data elements. So, the video EHR would have to take extreme precautions to ensure the privacy and security of these videos. However, from an impact standpoint, it wouldn’t be that much different than granular health information being breached. Plus, it’s much harder to breach a massive video file being sent across the wire than a few granular text data elements. No doubt, privacy and security would be a challenge, but it’s a challenge today as well. I don’t think video would be that much more significant.

As to the point of whether patients would be comfortable with a video in the exam room, no doubt there would need to be a massive culture shift. Some may never reach the point that they’re comfortable with it. However, think about telemedicine. What are patients doing in telemedicine? They’re essentially having their patient visit on video, streamed across the internet and a lot of society is very comfortable with it. In fact, many (myself included) wish that telemedicine were more widely available. No doubt telemedicine would break down the barriers when it comes to the concept of a video EHR. I do acknowledge that a video EHR takes it to another level and they’re not equal. However, they are related and illustrate that people’s comfort in having their medical visits on video might not be as far fetched as it might seem on the surface.

Turns out that doctors will face the same culture shift challenge as patients and they might even be more reluctant than patients.

Trust – I believe this is currently the biggest challenge with the concept of a video EHR. Can providers trust that the video and audio will be captured? What happens if it fails to capture? What happens if the quality of the video or audio isn’t very good? What if the voice recognition or NLP isn’t accurate and something bad happens? How do we ensure that everything that happens in the visit is captured accurately?

Obviously there are a lot of challenges associated with ensuring the video EHR’s ability to capture and document the visit properly. If it doesn’t, it will lose providers’ and patients’ trust, and it will fail. However, it’s worth remembering that we don’t necessarily need it to be perfect. We just need it to be better than our current imperfect status quo. We also need to design the video EHR to avoid making mistakes and to warn about possibly missing information so that it can be addressed properly. No doubt this would be a monumental challenge.

Requires New Techniques – A video EHR would definitely require modifications in how a provider sees a patient. For example, there may be times when a patient or the doctor needs to be positioned a certain way to ensure the visit gets documented properly. One of the cameras could even be a portable camera used for close-up shots of rashes or other medical issues so that they’re documented properly.

No doubt providers would have to learn new techniques on what they say in the exam room to make sure that things are documented properly. Instead of just thinking something, they’ll have to ensure that they speak clinical orders, findings, diagnosis, etc. We could have a long discussion on the impact for good and bad of this type of transparency.

Double Edged Sword of Liability – While reduced liability is a pro, liability could also be a con for a video EHR. Having the video of a medical visit can set you free, but it can also be damning as well. If you practice improper medicine, you won’t have anywhere to hide. Plus, given our current legal environment, even well intentioned doctors could get caught in challenging situations if the technology doesn’t work quite right or the video is taken out of context.

Reality Check
I realize this is a massive vision with a lot of technical and cultural challenges that would need to be overcome. Although, when I first came up with the idea of a video EHR ~10 years ago, it was even more far fetched. Since then, so many things have come into place that make this idea seem much more reasonable.

That said, I’m realistic that a solution like this would likely start with some sort of half and half solution. The video would be captured, but the provider would need to verify and complete the documentation to ensure its accuracy. We couldn’t just trust the AI engine to capture everything and be 100% accurate.

I’m also interested in watching the evolution of remote scribes. In many ways, a remote scribe is a human doing the work of the video EHR AI engine. It’s an interesting middle ground which could illustrate the possibilities and also be a small way to make patients and providers more comfortable with cameras in the exam room.

I do think our current billing system and things like meaningful use (or now MACRA) are still a challenge for a video EHR. The documentation requirements for these programs are brutal and could make the video EHR workflow lose its luster. Could it be done to accommodate the current documentation requirements? Certainly, but it might take some of the polish off the solution.

There you have it. My concept for a video EHR. What do you think of the idea? I hope you tear it up in the comments.

Discussion on Medical Errors as the 3rd Most Common Cause of Death

Posted on May 9, 2016 | Written By John Lynn

Social media and mainstream media are abuzz with this article in BMJ by Martin A Makary and Michael Daniel entitled “Medical error—the third leading cause of death in the US.” This image summarizes the headlines most people wrote:

Medical Errors as a Leading Cause of Death in the US

While this makes for a great headline, most of the journalists and commentators evaluating the BMJ article did what they usually do and ran the headline without actually digging into the details of the study itself. Lucky for us, David Gorski has published a really great analysis of the article on the Science-Based Medicine blog. I won’t summarize it here, since you should go and read David’s article in full. We’ll be here when you get back.

What everyone acknowledges is that medical errors take the lives of many in the US health system. In fact, it happens in every health system. What’s also clear from this discussion is that there are A LOT of complexities associated with how you define when a death was caused by medical error, what counts as a medical error, etc etc etc. David’s article finishes with this summary on the importance of patient safety and decreasing deaths due to medical errors, which is the point I think we should take from it all:

Over the last three years, I’ve learned for myself from firsthand experience just how difficult it is to improve the quality of patient care. I’ve also learned from firsthand experience that nowhere near all adverse outcomes are due to negligence or error on the part of physicians and nurses. None of this is to say that every effort shouldn’t be made to improve patient safety. Absolutely that should be a top health care policy priority. It’s an effort that will require the rigorous application of science-based medicine on top of expenditures to make changes in the health care system, as well as agreement on exactly how to define and measure medical errors. After all, one death due to medical error is too much, and even if the number is “only” 20,000 that is still too high and needs urgent attention to be brought down. Unfortunately, I also know that, human systems being what they are, the rate will never be reduced to zero. That shouldn’t stop us from trying to make that number as close to zero as we can.

Unfortunately, I believe that false headlines with inflated numbers don’t help us understand the real problem and address it. The inflated numbers from the so-called “study” just confuse the issues. The numbers really don’t pass the “smell test” on a number of levels. Not the least of which, from my perspective, is that we don’t see more medical malpractice lawsuits. In this sue-happy society, if there were 251k deaths due to medical error, we’d have many more medical malpractice lawsuits out there. David explains a bunch more reasons why the numbers don’t make sense and why they’re really hard to calculate, so go and read those if you want a more detailed analysis.

Going back to the earlier quote: even if the number were 20,000, that’s still far too many. We know medical errors cause death, and we should work hard to prevent that from happening. Since I write from a tech perspective, I’m interested in thinking about how technology could impact these medical error rates.

From a tech perspective, I always find it interesting to read stories about the ways EHR software can help prevent medical errors. The basic analysis usually points to things like drug-to-drug interaction checking, drug-to-allergy interaction checking, and other clinical decision support tools. No doubt simple checks like these can have an impact on the number of medical errors in a healthcare organization. We’ll leave the discussion of alert fatigue for another day.
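To show how simple the core of such a check can be, here’s a minimal sketch of a drug-drug interaction check at order entry. The tiny interaction table is a hypothetical stand-in for the licensed knowledge bases real EHRs use.

```python
# Toy interaction table; real systems license a clinical knowledge base.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "hyperkalemia risk",
}

def check_new_order(new_drug, active_meds):
    """Yield an alert for each known interaction with the patient's active meds."""
    for med in active_meds:
        warning = INTERACTIONS.get(frozenset({new_drug, med}))
        if warning:
            yield f"ALERT: {new_drug} + {med}: {warning}"

for alert in check_new_order("aspirin", ["warfarin", "metformin"]):
    print(alert)  # ALERT: aspirin + warfarin: increased bleeding risk
```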

Very few people would argue against the concept that having the right information at the right time will help doctors and nurses reduce medical errors. Ideally, that’s what technology should help facilitate. Plus, technology should help analyze massive amounts of health data (both personal and general) in order to facilitate the provider in their care of the patient. In many cases, that’s exactly what technology can and does do for healthcare. However, we’re not living in an ideal world. Technology can also increase the number of medical errors when implemented poorly or improperly.

In some cases, EHR software perpetuates misinformation and leads to providers having the wrong information at the wrong time. Sometimes the clinical decision support algorithms fail. I could go on and on about the potential issues. These are a problem and now that EHR software is a major part of most health systems, we’re going to see the number of medical errors due to EHR software increase. However, in doing so, we shouldn’t forget that paper had its own medical error issues as well.

Another major cause of medical errors related to EHR software is providers developing an over-reliance on the software for clinical decision making. This concern is often couched as “new doctors don’t know how to see patients without an EHR.” I think this concern only partially captures the risk of medical errors we could see if we’re not careful about our over-reliance on technology in the care we provide patients.

Just this weekend I had this experience in my own personal life. We were headed to a new restaurant on Saturday night. We plugged the address into the GPS and started following the instructions it gave us to get to the restaurant. After turning into an apartment complex, we knew that we’d relied a little too much on technology and it had led us astray.

The banter between my wife and me was telling. As the GPS told us to turn into the apartment complex, I told my wife that something didn’t feel right about the directions. She pointed out that the GPS said to turn there. It was easy to succumb to her reliance on the technology rather than follow my own intuition and experience to navigate a better route.

In my wife and I’s case, nothing too serious was on the line (although the kids were getting antsy in the back of the car). Sure, it took us about 5 more minutes to get to the restaurant, but we made it without any major harm. The same isn’t true in healthcare where if providers aren’t careful, their over reliance on technology can cause medical errors that could even lead to loss of life. Plus, group think about technologies ability (or inabilities) can also cause trouble.

Like most things in life, we can take any of these approaches too far. We can’t be irrational about any specific approach since these are complex problems which require a detailed approach to understanding and mitigating their impact. Sometimes technology can be the solution to medical errors, but it can also be the problem if we’re not careful. It always takes the right balance to make sure we’re reducing medical errors as much as possible while not causing new ones.

Time To Leverage EHR Data Analytics

Posted on May 5, 2016 | Written By Anne Zieger

For many healthcare organizations, implementing an EHR has been one of the largest IT projects they’ve ever undertaken. And during that implementation, most have decided to focus on meeting Meaningful Use requirements, while keeping their projects on time and on budget.

But it’s not good to stay in emergency mode forever. So at least for providers that have finished the bulk of their initial implementation, it may be time to pay attention to issues that were left behind in the rush to complete the EHR rollout.

According to a recent report by PricewaterhouseCoopers’ Advanced Risk & Compliance Analytics practice, it’s time for healthcare organizations to focus on a new set of EHR data analytics approaches. PwC argues that there is significant opportunity to boost the value of EHR implementations by using advanced analytics for pre-live testing and post-live monitoring. Steps it suggests include the following:

  • Go beyond sample testing: Typical EHR implementation testing strategies sample the underlying system build and records, and that may not be enough, as build efforts may remain incomplete and end-user workflow-specific testing may be occurring simultaneously. Consider using new data mining and visualization analytics tools to conduct more thorough tests and spot trends.
  • Conduct real-time surveillance: Use data analytics programs to review upstream and downstream EHR workflows to find gaps, inefficiencies and other issues. This allows providers to design analytic programs using existing technology architecture.
  • Find RCM inefficiencies: Rather than relying on static EHR revenue cycle reports, which make it hard to identify root causes of trends and concerns, conduct interactive assessments of RCM issues. By creating dashboards with drill-down capabilities, providers can increase collections by scoring patient invoices, prioritizing the invoices with the highest scores and calculating the bottom-line impact of missing payments (see the sketch after this list).
  • Build a continuously-monitored compliance program: Use a risk-based approach to data sampling and drill-down testing. Analytics tools can allow providers to review multiple data sources under one dashboard and identify high-risk patterns in critical areas such as billing.
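Here’s a minimal sketch of the invoice-scoring idea from the third bullet: rank open patient invoices by a simple score so staff work the highest-impact ones first. The weighting formula is my own assumption for illustration, not PwC’s model.

```python
# Made-up open invoices; a real dashboard would pull these from the RCM system.
invoices = [
    {"id": "A100", "balance": 1200.0, "days_outstanding": 95},
    {"id": "A101", "balance": 300.0, "days_outstanding": 30},
    {"id": "A102", "balance": 5400.0, "days_outstanding": 62},
]

def score(inv):
    # Assumed weighting: larger balances and older invoices score higher.
    return inv["balance"] * (1 + inv["days_outstanding"] / 365)

# Work the queue from highest score to lowest.
for inv in sorted(invoices, key=score, reverse=True):
    print(inv["id"], round(score(inv), 2))
```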

It’s worth noting, at this point, that while these goals seem worthy, only a small percentage of providers have the resources to create and manage such programs. Sure, vendors will probably tell you that they can pop a solution in place that will get all the work done, but that’s seldom the case in reality. Not only that, a surprising number of providers are still unhappy with their existing EHR, and are now living in replacing those systems despite the cost. So we’re hardly at the “stop and take a breath” stage in most cases.

That being said, it’s certainly time for providers to get out of whatever defensive crouch they’ve been in and get proactive. For example, it certainly would be great to leverage EHRs as tools for revenue cycle enhancement, rather than the absolute revenue drain they’ve been in the past. PwC’s suggestions certainly offer a useful look on where to go from here. That is, if providers’ efforts don’t get hijacked by MACRA.

2 Major Problems with MACRA

Posted on May 4, 2016 | Written By John Lynn

Everyone’s started to dive into the 10 million page MACRA (that might be an exaggeration, but it feels about that long) and over the next months we’ll be sure to talk about the details a lot more. However, I know that many healthcare organizations are tired of going through incredibly lengthy regulations before they’re final. Makes sense that people don’t want to go through all the details just for them to change.

As I look at MACRA from a very high level, I see at least two major problems with how MACRA will impact healthcare.

Loss of EHR Innovation
First, much like meaningful use and EHR certification, MACRA is going to suck the life out of EHR development teams. For the past 2-3 years, EHR roadmaps have consisted of basically nothing but conforming to meaningful use and EHR certification. Throw in ICD-10 development for good measure, and EHR development teams have essentially been coding their applications to a government standard instead of to customer requests and unique innovations.

Just today I heard the Founder of SOAPware, Randall Oates, MD, say “I’m grieving MACRA to a great degree.” He’s grieving because he knows that for many months his company won’t be able to focus on innovation, but will instead focus on meeting government requirements. In fact, he said as much when he said, “We don’t have the liberty to be innovative and creative.” And no, meeting government regulations in an innovative way doesn’t meet that desire.

I remember going to lunch with a very small EHR vendor a year or so ago. I first met him pre-meaningful use and he loved being able to develop a unique EHR platform that made a doctor more efficient. He kept his customer base small so that he could focus on the needs of a small group of doctors. Fast forward to our lunch a year or so ago. He’d chosen to become a certified EHR and make it so his customers could attest to meaningful use. Meaningful use made it so he hated his EHR development process and he had lost all the fire he’d had to really create something beautiful for doctors.

The MACRA requirements will continue to suck the innovation out of EHR vendors.

New Layers of Work With No Relief
When you look at MACRA, we get all of these new regulations and requirements but no real relief from the old models. It’s great to speak hypothetically about the move to value-based reimbursement, but we’re only dipping our toe in those waters, so we can’t replace all of the old reimbursement requirements. In some ways it makes sense why CMS would take a cautious approach to entering the value-based world. However, MACRA does very little to reduce the burden on the backs of physicians and healthcare organizations. In fact, in many ways it adds to their reporting burden.

Yes, there was some relief offered with meaningful use moving away from the all-or-nothing approach, along with a small reduction in the number of measures. However, when it comes to value-based reimbursement, MACRA seems to just add more reporting burdens on doctors without removing any of the old-fashioned fee-for-service requirements.

MACRA is not like ICD-10. Once ICD-10 was implemented, you could see how ICD-9 and the skills required for that code set would eventually be fully replaced, so you wouldn’t need that skill or capability anymore. The same doesn’t seem to be true with value-based care. There’s no sign that value-based care will fully replace anything. Instead, it just adds another layer of complexity, regulation, and reporting to an already highly regulated healthcare economic system.

This is why it’s no surprise that many are saying that MACRA will be the end of small practices. At scale, they’re onerous. Without scale, these regulations can be the death of a practice. It’s not like you can stop doing something else and learn the new MACRA regulations. No, MACRA is mostly additive without removing a healthcare organization’s previous burdens. Watch for more practices to leave Medicare. Although, even that may not be a long term solution since most commercial payers seem to follow Medicare’s lead.

While I think that CMS and the people who work there have their hearts in the right place, these two problems leave me really afraid of what’s to come in health IT. Over the past few months, EHR vendors were finally feeling some freedom to listen to their customers and develop something new and unique. I was excited to see how EHR vendors would make their software more efficient and support better care. MACRA will likely hijack those efforts.

On the other side of the fence, doctors are getting more and more burnt out. These new MACRA regulations just add one more burden to their backs without removing any of the ones that bothered them before. Both of these problems don’t paint a pretty picture for the future of healthcare.

The great part is that MACRA is currently just a proposed rule. CMS has the opportunity to fix these problems. However, it will require them to take a big picture look at the regulation as opposed to just looking at the impact of an individual piece. If they’re willing to focus MACRA on the big wins and cut out the parts with questionable or limited benefits, then we could get somewhere. I’m just not sure if Andy Slavitt and company are ready to say “Scalpel!” and start cutting.

The Downside of Interoperability

Posted on May 2, 2016 | Written By Anne Zieger

It’s hard to argue that achieving health data interoperability is not important — but it comes with risks. And I’ve seen little discussion of the fact that interoperability may actually increase the chance that a major attack could hit a wide swath of healthcare providers. It might be extreme to suggest that we put off such efforts until we step up the industry’s security status, but the problem shouldn’t be ignored either.

Sure, data interoperability is a critical goal for healthcare providers of all stripes. While there’s room to argue about how it should be accomplished, particularly over whether providers or patients should drive health data management, there’s no question it needs to get done. There’s little doubt that most efforts to coordinate care will fall flat if providers are operating with incomplete information.

And what’s more, with the demand for interoperability baked into MACRA, we pretty much have no choice but to make it happen anyway. To my knowledge, HHS has proposed neither carrot nor stick to convince providers to come on board – nor has it defined “widespread” interoperability to my knowledge — but the agency has to achieve something by 2018, and that means change will come.

That being said, I’m struck by how little industry concern there seems to be about the extent to which interoperability can multiply the possibility of a breach occurring. Unfortunately, security is only as good is the weakest link in the chain, and data sharing increases the length of the chain exponentially. Of course, the risk varies a great deal depending on who or what the data-sharing intermediary is, but the fact remains that a connected network is a connected network.

The problem only gets worse if interoperability is achieved by integrating applications. I’m no software engineer, but I’m pretty sure that the more integrated providers’ infrastructure is, the more vulnerabilities they share. To be fair, hospitals theoretically vet their partners, but that defeats the purpose of universal data sharing, doesn’t it?

And even if every provider in the universal data sharing network practices good security hygiene, they can still get attacked. So it’s not just a matter of requiring participants to comply with some network security standard or meet some certification criteria. Given the massive incentives attackers have to steal health data (and lock it up with ransomware), nobody can hold out forever.

The bottom line is that I believe we should discuss the matter of security in a fully-connected health data sharing network more often.

Yes, we almost certainly need to press ahead and simply find a way to contain the risks. We simply can’t afford our fragmented healthcare system, and data interoperability offers perhaps the best possible chance of pulling it back together.

But before we plunge into the fray, it only makes sense to stop and consider all of the risks involved and how they should be addressed. After all, universal interconnection exposes a virtually infinite number of potential points of failure to cybercrooks. Let’s put some solutions on the table before it’s too late.

Medical Device Security At A Crossroads

Posted on April 28, 2016 | Written By Anne Zieger

As anyone reading this knows, connected medical devices are vulnerable to attacks from outside malware. Security researchers have been warning healthcare IT leaders for years that network-connected medical devices had poor security in place, ranging from image repository backups with no passwords to CT scanners with easily-changed configuration files, but far too many problems haven’t been addressed.

So why haven’t providers addressed the security problems? It may be because neither medical device manufacturers nor hospitals are set up to address these issues. “The reality is both sides — providers and manufacturers — do not understand how much the other side does not know,” said John Gomez, CEO of cybersecurity firm Sensato. “When I talk with manufacturers, they understand the need to do something, but they have never had to deal with cyber security before. It’s not a part of their DNA. And on the hospital side, they’re realizing that they’ve never had to lock these things down. In fact, medical devices have not even been part of the IT group and hospitals.

Gomez, who spoke with Healthcare IT News, runs one of two companies backing a new initiative dedicated to securing medical devices and health organizations. (The other coordinating company is healthcare security firm Divurgent.)

Together, the two have launched the Medical Device Cybersecurity Task Force, which brings together a grab bag of industry players including hospitals, hospital technologists, medical device manufacturers, cybersecurity researchers and IT leaders. “We continually get asked by clients about the best practices for securing medical devices,” Gomez told Healthcare IT News. “There is little guidance and a lot of misinformation.”

The task force includes 15 health systems and hospitals, including Children’s Hospital of Atlanta, Lehigh Valley Health Network, Beebe Healthcare and Intermountain, along with tech vendors Renovo Solutions, VMware Inc. and AirWatch.

I mention this initiative not because I think it’s huge news, but rather, as a reminder that the time to act on medical device vulnerabilities is more than nigh. There’s a reason why the Federal Trade Commission, and the HHS Office of Inspector General, along with the IEEE, have launched their own initiatives to help medical device manufacturers boost cybersecurity. I believe we’re at a crossroads; on one side lies renewed faith in medical devices, and on the other nothing less than patient privacy violations, harm and even death.

It’s good to hear that the Task Force plans to create a set of best practices for both healthcare providers and medical device makers which will help get their cybersecurity practices up to snuff. Another interesting effort they have underway in the creation of an app which will help healthcare providers evaluate medical devices, while feeding a database that members can access to studying the market.

But reading about their efforts also hammered home to me how much ground we have to cover in securing medical devices. Well-intentioned, even relatively effective, grassroots efforts are good, but they’re only a drop in the bucket. What we need is nothing less than a continuous knowledge feed between medical device makers, hospitals, clinics and clinicians.

And why not start by taking the obvious step of integrating the medical device and IT departments to some degree? That seems like a no-brainer. But unfortunately, the rest of the work to be done will take a lot of thought.

The Need for Speed (In Breach Protection)

Posted on April 26, 2016 | Written By

The following is a guest blog post by Robert Lord, Co-founder and CEO of Protenus.
The speed at which a hospital can detect a privacy breach could mean the difference between a brief, no-penalty notification and a multi-million dollar lawsuit.  This month it was reported that health information from 2,000 patients was exposed when a Texas hospital took four months to identify a data breach caused by an independent healthcare provider.  A health system in New York similarly took two months to determine that 2,500 patient records may have been exposed as a result of a phishing scam and potential breach reported two months prior.

The rise in reported breaches this year, from phishing scams to stolen patient information, only underscores the risk of lag times between breach detection and resolution. Why are lags of months and even years so common? And what can hospitals do to better prepare against threats that may reach the EHR layer?

Traditional compliance and breach detection tools are not nearly as effective as they need to be. The most widely used methods of detection involve either infrequent random audits or extensive manual searches through records following a patient complaint. For example, if a patient suspects that his medical record has been inappropriately accessed, a compliance officer must first review EMR access data from the various systems involved. Armed with a highlighter (or a large Excel spreadsheet), the officer must then analyze thousands of rows of access data and cross-reference this information with the officer’s implicit knowledge about the types of people who have permission to view that patient’s records. Finding an inconsistency – a person who accessed the records without permission – can take dozens of hours of menial work per case. Another issue with investigating breaches based on complaints is that there is often no evidence that the breach actually occurred. Nonetheless, the hospital is legally required to investigate all claims in a timely manner, and such investigations are costly and time-consuming.
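To make that cross-referencing step concrete, here’s a minimal sketch of the check a compliance officer performs by hand: flag every chart access by someone who isn’t on the patient’s care team. The log rows and roster are made-up examples, not the export format of any particular EMR.

```python
# Made-up access log rows and care-team roster for illustration.
access_log = [
    {"user": "dr_smith", "patient": "p123", "time": "2016-04-01T09:15"},
    {"user": "clerk_jones", "patient": "p123", "time": "2016-04-01T22:40"},
]
care_team = {"p123": {"dr_smith", "nurse_lee"}}

# Flag any access by a user who is not on that patient's care team.
suspicious = [row for row in access_log
              if row["user"] not in care_team.get(row["patient"], set())]
for row in suspicious:
    print("Review:", row)  # clerk_jones accessed p123 outside the care team
```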

According to a study by the Ponemon Institute, it takes an average of 87 days from the time a breach occurs to the time the officer becomes aware of the problem, and, given the arduous task at hand, it then takes another 105 days for the officer to resolve the issue. In total, it takes approximately 6 months from the time a breach occurs to the time the issue is resolved. Additionally, if a data breach occurs but a patient does not notice, it could take months – or even years – for someone to discover the problem. And of course, the longer it takes the hospital to identify a problem, the higher the cost of identifying how the breach occurred and remediating the situation.

In 2013, Rouge Valley Centenary Hospital in Scarborough, Canada, revealed that the contact information of approximately 8,300 new mothers had been inappropriately accessed by two employees. Since 2009, the two employees had been selling the contact information of new mothers to a private company specializing in Registered Education Savings Plans (RESPs). Some of the patients later reported that days after coming home from the hospital with their newborn child, they started receiving calls from sales representatives at the private RESP company. Marketing representatives were extremely aggressive, and seemed to know the exact date of when their child had been born.

The most terrifying aspect of this story is how the hospital was able to find out about the data breach: remorse and human error! One employee voluntarily turned himself in, while the other accidentally left patient records on a printer. Had these two events not happened, the scam could have continued for much longer than the four years it did before it was finally discovered.

Rouge Valley Hospital is currently facing a $412 million lawsuit over this breach of privacy. Arguably even more damaging is that it has lost the trust of patients who relied on the hospital for care and for the confidentiality of their medical treatments.

As exemplified by the ramifications of the Rouge Valley Hospital breach and the new breaches discovered almost weekly in hospitals around the world, the current tools used to detect privacy breaches in electronic health records are not sufficient. A system needs to have the ability to detect when employees are accessing information outside their clinical and administrative responsibilities. Had the Scarborough hospital known about the inappropriately viewed records the first time they had been accessed, they could have investigated earlier and protected the privacy of thousands of new mothers.

Every person who seeks a hospital’s care has the right to privacy and the protection of their medical information. However, due to the sheer volume of patient records accessed each day, it is impossible for compliance officers to efficiently detect breaches without new and practical tools. Current rule-based analytical systems often overburden officers with alerts and are only a minor improvement over manual detection methods.

We are in the midst of a paradigm shift, with hospitals taking a more proactive and layered approach to health data security. New technology that uses machine learning and big data science to review each access to medical records will replace traditional compliance technology and streamline threat detection and resolution cycles from months to a matter of minutes, making identifying a privacy breach or violation as simple and fast as the action that may have caused it in the first place. Understanding how to select and implement these next-generation tools will be a new and important challenge for the compliance officers of the future, but one they can no longer afford to delay.
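For a flavor of what machine learning on access data can mean in practice, here’s a minimal sketch using an off-the-shelf anomaly detector on synthetic access features. It illustrates the general approach only, not Protenus’s actual technology; the features, data, and contamination setting are all assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one chart access described by simple features: hour of day,
# 1 if the user is on the patient's care team, and records touched that day.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.integers(8, 18, 500),   # daytime access
    np.ones(500),               # on the care team
    rng.integers(1, 20, 500),   # typical daily volume
])
odd = np.array([[2, 0, 400]])   # 2 a.m., off-team, bulk access

# Fit on routine accesses, then score the unusual one.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(odd))       # [-1] means flagged for compliance review
```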

Protenus is a health data security platform that protects patient data in electronic medical records for some of the nation’s top-ranked hospitals. Using data science and machine learning, Protenus technology uniquely understands the clinical behavior and context of each user that is accessing patient data to determine the appropriateness of each action, elevating only true threats to patient privacy and health data security.