After my previous post on EMR company ratings, one of my readers pointed me to a resource (pdf) that sort of rates the various long term care EMR software vendors. There are 64 vendors listed, and not all are EMR vendors, but it’s an interesting way to approach listing the various long term care software providers. Basically, it’s a list of vendors with a mark in each column that matches the software’s features.
Of course, the real problem with this type of resource is that it’s just a grid with no qualitative information. The grid works great when you’re talking about static details like database or supported operating systems. However, when you’re talking about various functions in an EMR software, you need some more qualitative information and not just a check box.
For example, it would be simple to put a check box next to ePrescribing for various vendors, but not all ePrescribing features are created equal. This requires some qualitative information and no doubt can involve multiple perspectives.
The question is, how do you rate the various EMR features in a meaningful way?
Simple, you give the job to a federal taskforce, who then release 25 criteria for meaningful ratings for public comment…. 😉 Seriously, though, the problem reminds me of my efforts to rate help documentation software: there’s the requisite Google search and wading through different company claims, but how do you know whether a given product actually does what you, personally, need it to?
I ended up finding a great resource called the HAT (Help Authoring Tool) Matrix that allowed side-by-side, in-depth comparisons of all the programs on the market. I could see detailed information on how they stacked up against each other, and adjust which categories mattered most to me for comparison. I got my few remaining questions answered through the HATT Yahoo group.
Sounds like that’s the kind of resource we need for EHRs.