Measuring Success or Failure of an EMR Implementation

A reader of EMR and HIPAA asked the following interesting question:

I was wondering if you had, or had heard of, anyone coming up with a way to measure whether the EHR implementation was successful, other than "it's in!" I'm trying to help some clients define this but can't seem to find anyone who has done this. I'm thinking something like:
Were all staff trained prior to go live?
Were project goals achieved? etc

Here’s my response that I hope you’ll find useful as well:
It’s an interesting question. I’d suggest you download my free EMR Selection e-Book.

In the book, I cover the various areas where a practice can get benefit from implementing an EMR. I suggest that each practice evaluate which of the benefits they are looking to achieve with their EMR implementation. Then, it works out nicely that it’s the criteria you can use for selecting an EMR and also for measuring how successful the EMR implementation has been.

That’s how I’d approach measuring the success or failure of an EMR implementation. Of course, you could also add in any unforeseen events (good and bad) that happened during the EMR implementation too.

The real key is to establish a set of goals or expectations for what you want to get out of the EMR implementation so you have a way to evaluate the EMR software and the EMR implementation. Then, it's good to actually look at these criteria after the implementation to see if you fell short of those goals and what you could do to achieve them.

Implementing an EMR is a living, breathing thing. The best EMR implementations are evolving and improving as you continue to roll out more features of an EMR or better utilize the existing features. Not to mention all the new features that an EMR vendor will roll out as they upgrade their software.

About the author

John Lynn

John Lynn is the Founder of HealthcareScene.com, a network of leading Healthcare IT resources. The flagship blog, Healthcare IT Today, contains over 13,000 articles with over half of the articles written by John. These EMR and Healthcare IT related articles have been viewed over 20 million times.

John manages Healthcare IT Central, the leading career Health IT job board. He also organizes the first of its kind conference and community focused on healthcare marketing, Healthcare and IT Marketing Conference, and a healthcare IT conference, EXPO.health, focused on practical healthcare IT innovation. John is an advisor to multiple healthcare IT companies. John is highly involved in social media, and in addition to his blogs can be found on Twitter: @techguy.

8 Comments

  • I would say that a good barometer is how many calls your support team is receiving on a daily basis. If you're getting flooded with hundreds of calls a day, the implementation was probably botched. Also, how many encounters are being generated daily, how many reports, how many active prescriptions, etc.? Usage stats will be a clear indicator as well, and you can even go as far as generating usage reports based on usernames within different departments.
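To make the usage-stats idea concrete, here is a minimal sketch in Python. The log format, field names, and sample records are all hypothetical — a real report would read from the EMR's audit-log export, whatever form that takes:

```python
from collections import Counter

# Hypothetical usage log: each entry is (username, department, action).
# In practice these rows would come from the EMR's audit-log export.
usage_log = [
    ("dr_smith", "cardiology", "encounter_created"),
    ("dr_smith", "cardiology", "prescription_written"),
    ("nurse_lee", "cardiology", "encounter_created"),
    ("dr_patel", "pediatrics", "encounter_created"),
]

def usage_by_department(log):
    """Count EMR actions per department -- a rough adoption indicator."""
    return dict(Counter(dept for _user, dept, _action in log))

def usage_by_user(log):
    """Count EMR actions per username, for per-user adoption reports."""
    return dict(Counter(user for user, _dept, _action in log))

print(usage_by_department(usage_log))  # {'cardiology': 3, 'pediatrics': 1}
```

The same pattern extends to counting by action type (encounters vs. prescriptions vs. reports) to see which features are actually being used.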

  • I think there can be many different metrics to attempt to somehow define this. But in my opinion all of it is a bunch of noise! As an EMR trainer, the only metric that really matters to me is whether the providers and staff are happier after implementing the EMR, AND whether they would recommend the product to others. If the answer to these questions is yes, it was a success. No, it was a failure. Pure and simple. The only variable is at what point post-implementation you ask this question. Right away? 6 months? 1-2 yrs??

  • Leor,
    Productivity is a good measure to use.

    Steve,
    The time frame is an important question. The best measures are probably those taken both right away and six months after. Although, it depends on the type of consultant you hired and what you asked them to do.

  • I think productivity is definitely a key to measuring the success of an EMR implementation. This includes the efficiency created by the EMR or the workflow (whether it is significantly slower, relatively faster, or no improvement whatsoever). As Steve mentioned, the reaction from the physicians as well as pure statistics (revenue, number of patients, patient satisfaction, successful claim processing, errors, and the list goes on) also play a key role in identifying success versus failure. Many scholarly articles refer to improved revenue over time being a result of EMR implementation, claiming that we "should" be able to take on more patients with the workflow being more manageable. Revenue, however, is hard to determine in a smaller time frame.

    So with regard to a timeframe in which we can see success, I would say it varies per practice. A practice full of elderly physicians (and by elderly I like to think of "more experienced" or "more established" practices) who haven't grown up with computers, used them on a regular basis, or even touched one, is going to be more likely to view an EMR in a negative light. This obviously isn't everyone, but for me, when it comes to actual results, we may have accurate results after 6 months if that group or clinic already has a majority of physicians who are computer savvy. It might take longer to find successful results for a group that doesn't necessarily favor computers in the same light. Regardless of training, physicians with less computer experience tend to get frustrated more easily than those with that necessary experience.

  • I am of the mindset that ADOPTION needs to replace GO LIVE as the focus. That’s the best way I can say it in one sentence. Think PROGRAM instead of PROJECT. With ICD-10, HIPAA 5010, things like Patient-Centered Home and Accountable Care Organization, and just the normal addition of bells & whistles by the vendor in new releases – EMR will always be changing, always have new training occurring, and must ALWAYS be tracking metrics.

    As with any effort, key metrics are identified and constantly measured & reported throughout the life of the program. I am an advocate of posting measures BY PROVIDER in a public place so that folks can view progress (or lack thereof). They can be labeled Dr. A, Dr. B, etc., to avoid blatant ridicule – but the providers eventually figure out amongst themselves who is who on the charts & graphs.
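The "Dr. A, Dr. B" labeling above can be sketched as a small relabeling step. The metric here (say, the share of notes closed within 24 hours) and the provider names are made up for illustration:

```python
import string

def anonymize_providers(metrics):
    """Relabel per-provider metrics as 'Dr. A', 'Dr. B', ... for public posting.

    `metrics` maps real provider names to a metric value. Labels are
    assigned in sorted-name order so the mapping stays stable from one
    reporting period to the next.
    """
    return {
        f"Dr. {label}": metrics[name]
        for label, name in zip(string.ascii_uppercase, sorted(metrics))
    }

# Hypothetical metric: fraction of notes closed within 24 hours.
scores = {"Nguyen": 0.92, "Adams": 0.78, "Baker": 0.85}
print(anonymize_providers(scores))
# {'Dr. A': 0.78, 'Dr. B': 0.85, 'Dr. C': 0.92}
```

Sorting by name keeps the labels consistent between reports, which matters if the same chart is posted month after month.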

  • Wes,
    I think this person was a consultant for a small practice where the assignment was a project and not so much a program. Although, I really like that thinking even for a consultant. A consultant that just sees it as a project won’t take the same care as someone who’s doing a program.

    Measures by provider in a public place. I like that thinking.

  • Thank you all for your replies!
    What I'm trying to come up with is a way to measure the implementation from the standpoint of the practice. Something that can be measured, like a survey or checklist of some sort. It's easy to just say, "Yes, it was a success," but I'd like to show WHY it was a success (or failure). For example: billing, number of encounters, productivity, whether the training was sufficient, etc.
    Thanks again for all your responses!
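One way to turn that into something measurable is a weighted yes/no checklist. The goals and weights below are hypothetical examples, not a standard instrument — each practice would substitute its own goals from the selection process:

```python
# Hypothetical post-implementation checklist: each goal gets a weight and
# a yes/no result; the score is the weighted fraction of goals achieved.
checklist = [
    ("All staff trained before go-live",               2, True),
    ("Project goals documented and reviewed",          1, True),
    ("Billing volume back to baseline within 90 days", 2, False),
    ("Support call volume below agreed threshold",     1, True),
]

def implementation_score(items):
    """Weighted share of checklist goals that were met."""
    total = sum(weight for _goal, weight, _met in items)
    achieved = sum(weight for _goal, weight, met in items if met)
    return achieved / total

print(f"{implementation_score(checklist):.0%}")  # 67%
```

The unmet items double as the improvement list: each "False" row says exactly why the implementation fell short and what to work on next.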

  • John,

    Setting goals, as you suggest, is absolutely required. However, there are goals and there are goals. What's important is to distinguish between the quantitative and the qualitative.

    I would set up at the beginning of the selection process, a dashboard of milestones, deliverables and goals. I would pick five or six quantitative goals and a similar number of qualitative.

    I also would insist on a contract that allows for phased, performance-based acceptance, so the product would have to measure up to its own specs and to MU capture and reporting, etc.
