SaaS EMR vs. Client Server EMR and AAFP in Denver

I knew that my previous post about the cost to update an EMR would bring out the people who back the SaaS EMR model and those who back the Client Server EMR model. As I’ve said before, it’s one of the most heated debates you can have in the EMR space.

I realized in the comments of that post why it’s such a heated topic. It’s because once an EMR vendor chooses to go down one path or the other, it’s nearly impossible to switch paths. Why? Because if you do choose to switch, you basically have to code a new application all over again. The switching costs are enormous. So, only a few software companies (let alone EMR software companies) ever change from one to the other.

Considering the high switching costs, an EMR vendor that is SaaS based has a strong vested interest in the benefits and upside of the SaaS model of software development. The same is true for client server EMR companies and the benefits and upside of the client server model.

This entrenching around a software development methodology (which they can’t change) is what makes discussing each model so interesting. Each party dutifully makes the most of whichever software development methodology they’ve been given.

Of course, from the clinical perspective it’s sometimes hard to cut through all this discussion and get good information on the real pros and cons of each model.

In that vein, I’m looking for a couple of EMR and HIPAA readers who would be interested in making the case for one or the other. All you’d need to do is create a guest blog post on the pros and cons of your preferred model. If needed, you’d also be welcome to write a response post to the other model’s post as well.

If this interests you, leave a comment or let me know on my Contact Us page. I think this could be really interesting.

On a different note, it looks like I’m going to be attending the AAFP conference in Denver next week. Is anyone else planning to be there? Anything I should know about the conference to get the most out of it?

About the author

John Lynn

John Lynn is the Founder of HealthcareScene.com, a network of leading Healthcare IT resources. The flagship blog, Healthcare IT Today, contains over 13,000 articles with over half of the articles written by John. These EMR and Healthcare IT related articles have been viewed over 20 million times.

John manages Healthcare IT Central, the leading career Health IT job board. He also organizes the first-of-its-kind conference and community focused on healthcare marketing, the Healthcare and IT Marketing Conference, as well as EXPO.health, a healthcare IT conference focused on practical healthcare IT innovation. John is an advisor to multiple healthcare IT companies. John is highly involved in social media, and in addition to his blogs can be found on Twitter: @techguy.

19 Comments

  • Once again agreeing with you about the benefits of each – SaaS and Client/Server/ASP – each has its own benefits and one shoe does not fit all. Having said that, I will send you my thoughts, and obviously I will take up for SaaS and list the pros and cons. Cheers

  • Anthony,
    I look forward to seeing your comments.

    The response to this post has been pretty cool. A lot of people are interested in participating. So, I’ll probably just open it up to everyone who wants to participate. Then, I’ll do 3 posts:
    -Client Server EMR (in house server accessed through client)
    -SaaS EMR (Web based hosted solution)
    -ASP EMR (Client Server in a data center using some remote desktop application)

    This should cover some interesting ground.

  • John,
    We will be at AAFP next week as well. I would encourage you to stop by our booth in the technology center so we can share with you how our solutions will capture the best of both worlds (Client/Server & SaaS).

    We (PDS Cortex) are constantly communicating with our physicians and medical practices to understand their ideas and concerns so that we can meet the evolving needs of this market. With over 38 years in the medical industry, our ability to adapt to our customers’ requirements is what has set us apart.

    Hope to see you there.

  • Race,
    I’ll definitely stop by. I expect I’ll be spending most of my time in the technology center talking with vendors like yourself in their booths.

    I’ll send you an email with my contact info as well.

  • With our PROGRESSION EHR we offer both SaaS and local server options for our clients. It is a web-based application at heart. One of the big problems with SaaS, of course, is security, so these clients all have VPNs for access. Another problem with the SaaS model is poor Internet service. Even though we have redundant high-bandwidth connections, the offices in BFE may only have DSL that drops in and out. We therefore offer a variation of SaaS where, if Internet access is a problem, we place a server in their office and administer it for them. I know this technically doesn’t qualify as SaaS, but it gives them the budgeting benefits without the administration headaches of owning their own server.

    So for pros of SaaS, I would say:
    1. Less up-front cost
    2. Fewer system administration headaches
    3. Quick implementation (assuming they have an existing network)

    For the cons of SaaS:
    1. Slower performance
    2. At the mercy of the Internet
    3. Costs more over time.

    For a local server model the client typically will purchase a license. This results in more up-front cost but lower cost over time, because the recurring costs are less. Because a server must be configured and set up for them, it may delay implementation by a couple of weeks. Of course, the biggest benefits of a local server are reliability and performance. You are not at the mercy of a “Bad Internet Day”. If the practice has gone totally paperless and has scanned all old charts into the system, this performance increase can be monumental.

  • Hybrid, hybrid, hybrid!!! Why can’t we let go of the old school client/server-only mindset and instead use the best of both (all) worlds? Hybrid is both SaaS and client/server based. It is both remote and local. It is resilient and facilitates business continuity. Use the browser, use RDP (or any terminal GUI client), use database async replication technology, use virtual machines, use open source software (PHP, Apache, MySQL, Postgres, Linux, etc.). But most important, you have to get the highest system resiliency with EMR systems, so failover and asynchronicity should be built in. Bring it on.

  • Hybrid Wins: Physicians can read paper charts with a flashlight during a hurricane. In a hurricane, SaaS and client/server systems fail. Hybrid software runs stand-alone on a local computer using local data and “syncs” with a backup/remote/common data store when possible. SaaS runs on someone else’s computer who you don’t know, at a location that you don’t know, managed by people you don’t know, who have access to your data (bet you didn’t know that either), who make many copies that you don’t know about, and send them to places you don’t know (like India). Hybrid systems will function as long as the laptop has battery (2-8 hrs) and can be recharged from a vehicle’s cigarette lighter. SaaS and client/server systems have many points of failure; hybrids very few, if any. So if you intend to maximize:
    1) Speed – Hybrid Wins – no latency
    2) Security – Hybrid Wins – no cloud-based storage
    3) Flexibility – Hybrid Wins – hybrids support custom software per user, not cookie cutter (all the same)
    4) Reliability – Hybrid Wins – if your PC works, the hybrid will work
    5) Cost Savings – Hybrid Wins – no long-term support contracts; you can service the systems yourself.

  • Tripp,
    I don’t think it’s quite as clear cut as that. A simple example: not all hybrid EHRs support custom software. Many hybrid EHRs are just as cookie cutter as the rest.

  • I tried to post earlier but it bombed out. So, here it is (again)…

    Hybrid does not exactly work like that. Here are some of my thoughts and knowledge of hybrid (in mission critical ERP systems):

    1. Speed

    Hybrid means a locally functioning server with the possibility of using a remote (or SaaS) server in the event of a failure. When the local server is in use there is no (external network access) latency; but that does not necessarily mean the ERP system will be fast if there is a design flaw in its operation and architecture. When the local server fails, the remote server (or SaaS) becomes the failover and serves up the ERP processes via either manual or automatic switchover, and may even have limited functionality due to segregated networks. Latency is in effect, but it still depends on the type of subscription used to gain external access to the remote server (or SaaS) provider. Higher quality-of-service network access will, evidently, cost more.

    2. Security

    This will depend on how the software is designed (manufacturer), how the system is used (operations), and the service/security agreement with the remote service provider (remote server data storage). Routine security audits, security-minded service contracts, and HIPAA compliance checks are necessary to mitigate this problem.

    3. Flexibility (or Software Customizations)

    There is no such thing as a cookie cutter program. Everything has to be customized somehow. Business mission critical applications always have to be customized to fit a practice. This has nothing to do with hybrid or SaaS; it is more about the flexibility of the program to accommodate additional internal software development.

    4. Reliability

    Again, hybrid means the remote server functions as fault tolerance for the local server. If the system is designed (again, in the software development and architectural design) for active/active failover, then both local and remote servers can be used actively and be fault tolerant to each other. An active/active design is better, since both systems assist in processing the operations and both act as redundant servers. However, the whole system should always be designed with N+1 redundancy (operational limit), meaning each server (in a 2-server failover) should be allowed to take only half the maximum load.
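
    To make that N+1 sizing concrete, here is a minimal arithmetic sketch (my own illustration, not from any particular EMR product; the function name is invented):

    ```python
    def n_plus_one_sizing(peak_load, servers, tolerated_failures=1):
        """N+1 sizing: the surviving servers must absorb the full peak load.

        Returns (required per-server capacity, normal-operation utilization).
        Assumes load is shared evenly across all active servers.
        """
        survivors = servers - tolerated_failures
        if survivors < 1:
            raise ValueError("need at least one surviving server")
        capacity = peak_load / survivors                # each server must handle this much alone
        utilization = (peak_load / servers) / capacity  # fraction of capacity used when all are up
        return capacity, utilization

    # A 2-server active/active pair with a 100 req/s peak: each server must be able
    # to carry the full 100 req/s alone, so in normal operation each runs at only 50%.
    print(n_plus_one_sizing(100, 2))  # (100.0, 0.5)
    ```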

    5. Cost savings

    This is a bigger philosophy. It is about using the right tools for the right job. Information technology is a big field; a mission critical system should be serviced by professionals, not by yourself, or by doctors, or by clinicians. Not that I am saying you definitely can’t do it. But I would rather have somebody else do it faster and better than I could by learning how to do it. Anyway, you can count your ROI later, after hours of downtime.

  • Hey JojoP, I too had my original submission cause a server crash (apparently)…

    Hybrid EHR systems are by definition combinations of the best features of solution options that were once thought to be mutually exclusive. Example: once you could only buy either an electric car or a petrol car, but now you can buy a hybrid car with the best features of both. Some hybrid EHR systems do not have a server. This is because the “server” is the single most expensive, complex, performance-compromising bottleneck in an EHR system, and it has to be replaced about every 3 years. When the EHR is slow or down, all users suffer until someone who is highly intelligent, highly skilled, and highly expensive can get around to fixing it.

    So by eliminating the server dependency, the resulting hybrid EHR runs very fast (no latency) locally on a PC/tablet on local data. All data is secure on the local PC with full disk encryption, so there is no risk of data loss and no necessity to report a stolen PC (HUGE BENEFIT). Flexibility is maximized because each machine can be tuned to have the necessary features, like biometrics or hardware keys for compliance with FDA requirements for e-prescribing narcotics (ASP-based systems will suffer here). Reliability is maximized by distributed computing: if one PC has an issue such as a virus, then all the other PCs continue to function correctly, whereas in a C/S or ASP environment all users suffer at the same time.

    Cost savings are achieved by eliminating (approximate figures here) the $5K+ server, the $3K+ server-class OS, the $150K/yr server tech, the $130K/yr DBA, the $125K/yr GUI developer, the OS CAL requirements, the OS patch/service pack administration requirements, the $10K/yr server room air-conditioning bill, the $1K UPS, and another $15K/yr+ in all the networking requirements that a full-time network connection needs.

  • Tripp,

    In distributed computing the programs reside (or at least run) locally on the system. You have not mentioned where the data resides, though. Will it all be local, as it is entered on each system? Or sent to a central database server? If the former, then how are you going to share the data around? If the latter, then you would still need a dedicated server, thereby going back to a central computing approach.

    The beauty of central computing is that resiliency can be built once on the central server (or probably twice if you want high availability). The (desktop) PCs can simply be (commonly installed) browser clients, and no resiliency is required for them, since PCs are commodity hardware. You can use another PC or keep a gold spare.

    There are still cost savings that can be achieved using newer infrastructure technology like VMs; I’ll leave the details aside. As another example, Google uses cheap off-the-shelf components, not name-brand gear, to run their state-of-the-art software engines.

    Then, if I had to ensure (in your distributed computing example) that the software programs and data (if they stay on each PC) are kept safe, intact, and secure on each PC, I don’t think it would be practical to install resiliency components on 100 or so of them. Add to that the maintenance costs, and most likely you will reach the same figures (cost) that you iterated above.

    In My Opinion.

    JP

  • Hey JP, thanks for your excellent questions:

    1) Distributed computing means that multiple autonomous computers independently perform computations and then share the results via a network. In a hybrid system the data may initially reside on a remote store, or locally, but to be processed it must be local data, and it is therefore copied from a remote store if necessary.
    2) Central database server means that a server-class computer is running a database that functions as a central store of all related data. In a hybrid system this central store may exist, but it is optional. If it does exist it is “forensic” and processes transactions (encounters) after they have been completed. There is no dependency on a full-time, live connection to the database to retrieve data.
    3) Central storage means that some “off-site” storage is necessary to collect all data for archival purposes. In a hybrid system, PCs go about creating files as they perform encounters, and then at some point when a network connection is available they “synchronize” what changed since the last sync with the central store (see the sketch after this list). Billing data is also sent along at this time. This store need not be a server or have a database; it can be a Network Attached Storage device (NAS), available from Costco in the 3TB range for about $200.00.

    4) Resiliency means the system is robust and reliable, and it is measured in “up time” or Mean Time Between Failures (MTBF). In a central computing environment there are many points of failure. Example: if you are standing in the “wiring” closet of a typical practice, you can disable a central computing system 100+ different ways (points of failure). Just simply unplug a power strip, router, switch, modem, etc., or use the power switch to turn them off, or disconnect a patch cable or fiber link. Or there could be a fiber cut, or a lightning strike, or a traffic accident that brings down a power/data pole, etc. In a hybrid system each PC operates autonomously, with no need for a full-time network connection; it has all it needs to work all the time. And if something bad happens to one PC, all the rest of them continue to work; in contrast, in a centralized system all users suffer at the same time.

    5) VMs are “virtual machines” that are used to economically create isolated computing environments on a single physical server. However, if the physical server has a power or data problem, then all the VMs on that physical box fail.

    6) Google does provide many business-class environments, but security is a critical issue. Where is the data? Who has access to it? How many copies are there? If you push your data into the “cloud” you just gave it away, and if it’s PHI you just created a huge liability for yourself that could cost you $1.5M in penalties. Don’t do it.

    7) The cost of maintaining many PCs is the principal economic reason for the development and deployment of ASP solutions. However, the real world of physician practices requires a lot of hardware. Hardware requires drivers. ASPs suffer in this environment. Card scanners read driver’s licenses and insurance cards and populate demographic data. Document scanners, printers, fax machines, credit card processing machines, biometrics, security dongles, headsets for voice recognition, imaging (X-ray, MRI, ultrasound, etc.) are all hardware in use that can and should be connected to the EHR. The PC is the hardware platform of choice, so we are stuck with maintaining the PC.
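
    To illustrate the “synchronize what changed since the last sync” idea from point 3, here is a minimal sketch, assuming a flat-file chart store and a NAS share mounted as a normal directory (all paths and the marker-file name are hypothetical):

    ```python
    import shutil
    import time
    from pathlib import Path

    STATE_FILE = Path.home() / ".emr_last_sync"   # hypothetical marker recording the last sync time

    def sync_changed_files(local_dir: Path, nas_dir: Path) -> None:
        """Copy every file modified since the last successful sync to the NAS share."""
        last_sync = float(STATE_FILE.read_text()) if STATE_FILE.exists() else 0.0
        for path in local_dir.rglob("*"):
            if path.is_file() and path.stat().st_mtime > last_sync:
                dest = nas_dir / path.relative_to(local_dir)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(path, dest)          # copy2 preserves timestamps
        STATE_FILE.write_text(str(time.time()))   # only update after a full pass succeeds

    # e.g. sync_changed_files(Path("C:/Charts"), Path("//nas/charts_backup"))
    ```

    A real deployment would of course also need encryption in transit and at rest, conflict handling, and audit logging before it could be considered HIPAA-appropriate.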

  • In response to your bulleted items:

    1. I doubt that local computing can sustain the load required to perform database-related operations.
    2. Databases can grow large and also require multiple spindles (for low latency). Plus, data that is stored on a local PC is not fault tolerant, since the PC has a single drive.
    3. If the central database server is not a full-fledged RDBMS-compliant DB server, then the data stored cannot have “atomicity”. Redundant data can be stored on the storage or, worse, overwrite information. The SQL server is the controller of the data.
    4. All layers of redundancy must be implemented on centralized computing, since data and operations must be fully protected by all means, including against deliberate sabotage.
    5. A VM-hosted architecture can be designed so that there is failover. VMware is now used in many Fortune 500 companies.
    6. Encryption technologies. Do you use online banking? It is the same concept. In HIT they are now forming HIE standards that will address the electronic exchange of health information.
    7. This problem lies in the design of the software interface. A program that is designed to do this is generically called a thin-client application (not to be confused with thin client PCs). The thin-client software will interface with the electronic device and has a software buffer that will send/receive information to the central server. It is not a new problem, and it has been solved before in bar code scanning technologies.

    My 7c

  • Hey JP, this is a fascinating exchange.

    1) Hybrid systems may or may not have a database; in the case of no database, there is no load. In the case of a database, the database processing typically occurs on some central machine that contains the database, so once again there is no load on the PC.
    2) Hybrid systems may store files locally using a flat file system, like you would store a Word document or Excel spreadsheet. These files are then “synced” using standard backup software that looks for files changed since the last backup/sync. This typically occurs automatically each evening, when the provider is asleep and network traffic is low, to a backup system within a 20-30 minute drive. While hard disk failures do occur, at most you risk losing the work done so far today; the rest of your files were backed up last night and can easily be retrieved in their entirety (not downloaded off the net).
    3) Hybrid systems may use “transactional” data that is atomically complete. There are no referential integrity issues, no contextual sensitivity, no chronological “race conditions”. It is simply a single XML file that contains all the data necessary to capture what happened in the encounter and successfully bill for the work performed (see the sketch after this list).
    4) Hybrid systems avoid centralized redundancy requirements by eliminating the concept of “central”. Each machine operates autonomously and, when it’s convenient, exchanges information with a central data store. The central data store can easily be replicated, but it is only necessary that you replicate the data.
    5) Virtual machines are great things, and I use them in many installs. However, in the small practices where hybrid systems typically appear, this is a level of complexity that translates to cost, which can be avoided and thus usually is.
    6) Hybrid systems may or may not live in the cloud, but we must address security as a primary concern, even if physicians rank it as the lowest concern when purchasing an EHR (see this week’s Mercom report). Online banking benefits from a very mature and time-proven security environment that still has holes, such as “drafts”, which is what identity thieves use to drain your bank account. PHI is of even greater value because it is far more complete and makes it easy for identity thieves to find answers to security questions. However, our exposure as “Business Associates”, and the exposure of physicians, is not just limited to the contents of our bank accounts: tack on another $1.5M in penalties and possibly jail. This kicks the security problem up several notches… While the evolving HIE standards are great, they are only as good as their implementation.
    7) Yes, it looks like we agree: thin client software must be deployed and maintained on each individual PC, then configured to interface with all the necessary hardware, and thus we have just defeated the primary economic driver for developing and deploying the ASP architecture.
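
    As a rough sketch of what such a self-contained encounter file could look like (the element names and fields here are invented for illustration, not XLEMR’s actual schema):

    ```python
    import datetime
    import xml.etree.ElementTree as ET

    def write_encounter(patient_id, provider, note, cpt_codes):
        """Write one atomically complete encounter file.

        Everything needed to document and bill the visit lives in this single
        file, so there are no cross-file referential-integrity or ordering issues.
        """
        today = datetime.date.today().strftime("%y%m%d")   # YYMMDD date code
        root = ET.Element("encounter", date=today, patient=patient_id, provider=provider)
        ET.SubElement(root, "note").text = note
        billing = ET.SubElement(root, "billing")
        for code in cpt_codes:
            ET.SubElement(billing, "cpt").text = code
        filename = f"{today} History.xml"
        ET.ElementTree(root).write(filename, encoding="utf-8", xml_declaration=True)
        return filename

    # write_encounter("P0042", "Dr. Example", "Follow-up visit, BP stable.", ["99213"])
    ```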

  • Okay, now that we’ve defined what we want (we may not agree on what defines a hybrid), here is a description of my hybrid EMR.

    1. A minimum of 2 centralized servers. The remote central server will act as the database/application archival server and as a failover server. The local central server will be the main database/application server, where data updates, reads, and application back-end execution run.

    2. There will be station PCs with loadable modules (thin client software). A PC may require a direct interface to external medical equipment (for example, an IV machine), which is handled by the loadable module. Transactions with the PC are handled as lazy updates, unless immediate transfer to the central server is required. This architecture will have the least amount of latency.

    3. Data and application updates on the local central server are transferred (replicated) to the remote central server at an agreed-upon interval, e.g. every 5 to 15 minutes (RPO).

    4. All PC communications will relay to the local central server. If the local central server fails, then all communications will be diverted to the remote central server (see the sketch further below). If this is the case, then there may be latency for updates requiring immediate transfer. There will be intelligence in the loaded modules such that only medical operations that require immediate updates will be usable. All other operations will have limited function.

    5. If the connection to the remote central server fails, then all local operations should still function, since the main local central server is still functional.

    6. Backup and archival of data and applications should happen from the remote central server, to prevent load on the main central server. Archives can be kept at the remote server facility, since it is already distant from the main site.

    7. Communications between the main and remote sites should use strong encryption technologies as allowed by HIPAA. If the remote site is administered by third-party co-location services, then a service agreement regarding security should be contracted. Routine security audits on all aspects of the architecture should be implemented. Local site security can be implemented by using HTTPS-enabled protocols or encrypted RDP connections. All local storage access should be disabled from within the application. All handheld or remote access devices should be limited to console access only, to prevent caching of data locally on the handheld device.

    8. All central servers will be redundant from floor to ceiling, including the network infrastructure, fault tolerant, and secured in data centers.

    This is my view of a high-level mission critical system installation, if only it existed. Most infrastructure can accommodate these requirements. However, the software application (EMR) should have tie-ins (hooks) to the high-reliability functions. Usually, the problem with mission critical software is that redundancy and business continuity are left to 3rd-party applications. Disaster recovery hooks must be in place in the mission critical software so that overall redundancy can be attained.
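
    As a minimal sketch of how a station module might implement points 2 and 4 (this is my own illustration of the routing logic, not JP’s actual implementation; the transport functions are placeholders):

    ```python
    import queue

    class StationClient:
        """Prefer the local central server; divert to the remote one on failure."""

        def __init__(self, send_local, send_remote):
            # send_local / send_remote stand in for whatever secure transport
            # the real system uses (HTTPS, encrypted RDP channel, etc.).
            self.send_local = send_local
            self.send_remote = send_remote
            self.lazy_queue = queue.Queue()    # non-urgent updates, flushed in batches

        def submit(self, update, urgent=False):
            if not urgent:
                self.lazy_queue.put(update)    # lazy update: defer until the next flush
                return
            try:
                self.send_local(update)        # normal path: local central server
            except ConnectionError:
                self.send_remote(update)       # failover path: remote central server
                                               # (expect extra latency here)

        def flush(self):
            """Push all deferred updates, using the same failover rules."""
            while not self.lazy_queue.empty():
                self.submit(self.lazy_queue.get(), urgent=True)
    ```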

    JP
    hitme@lagunarty.com

  • Hey JP, sounds like you have a very robust and well designed system. My hybrid system is designed to be as simple and “low tech” as possible. Since training is the largest barrier to EHR adoption, we needed a low cost, stable user interface that had been proven to be “intuitive” in as many diverse international environments as possible, so we selected Microsoft Office.

    XLEMR uses the MS Outlook Calendar for scheduling; Contacts for referrals, patients, pharmacies, etc.; and Tasks for follow-up reminders. MS Word handles progress notes, prescriptions, correspondence, and other documentation. MS Excel handles forms, calculations, charts, graphs, CMS E&M coding compliance, etc. MS Access handles reporting, and MS Windows file folders hold the patient charts. We built an Add-In for Excel that is a menu of functions that automate the workflow.

    Primary storage is local to each Tablet PC’s 300GB drive and backed up to a central LAN or WAN store using Secure FTP. Each Tablet has a local USB drive as an additional local backup. The whole EHR, with PACS server and DICOM viewer, can be installed from a 1GB jump drive (or downloaded) in about 10 minutes on any machine with Office already installed. We have interfaced the free e-prescribing system with NEPSI and built bi-directional interfaces to all labs, including internal CBL machines. We are currently developing an interface to Microsoft HealthVault for MU compliance.

    Since most offices already have their existing forms in Word, it’s very easy for us to customize, and a few of our clients actually do their own customization. Local maintenance and support is easily obtained, and there is no need for special technical knowledge. We believe in the KISS principle. Tripp@XLEMR.com

  • Tripp,

    That really sounds simple to me, and that is awesome. Using tools that most computer users already know – MS Office – will take a chunk of the training effort away.

    If I may have a take on that:

    1. MS Office is really simplistic. In fact, designing a system from MS Office and adjusting it to a higher level of function is, in my opinion, much harder than designing from a collaboration software platform and calling MS Office for tools and utilities as required.

    2. Using Windows files/folders for charts will not give enough reporting or sorting capability (indexing, filtering, file name limits, etc.).

    3. USB drive ports can be a source of security breaches.

    4. I’ve had too many scalability problems with MS Access; it does not scale well. This is the reason Microsoft has the MS SQL product.

    5. Exporting from Word, or from any other word processing program, is not usually a big problem.

    6. I can attest that even the simplest utilities, when used with a complex system such as an EMR, will likely generate about the same number of technical support calls. It is the larger system that needs to be resilient; in that respect, using simpler client utilities is a workable approach.

    But hey, if it works, then it must be doing something properly. These are just my opinions.

    JojoP

  • Hey JP,

    Excellent comments again!

    1) XLEMR uses MS Office as the “user interface”, so the experience is “Wow, I already know how this works”; all higher functionality is performed in our libraries and utilities. Actually, it’s been super easy and fast for us to develop because so much code is available online for free.
    2) For reporting, sorting, and indexing, we solved this problem using two very simple concepts:
    a. Date Coding – All “documents” are prefixed with a date code YYMMDD.
    Example: 090727 is year 2009, month 07 (July), 27th day
    The Benefits of Date Coding are numerous:
    1) Fast – There are no slashes or dashes required to be typed
    2) Sort – Files are automatically sorted chronologically by Windows Explorer
    3) Overwrite Protection – It’s unlikely to overwrite good data with bad data
    4) History – You’ll have the history of all improvements to your documents
    5) Unique Names – Each file will have a unique name so there is no ambiguity about which file is the correct file.
    6) What changed today – It’s easy to get a list of what improvements were made today.
    7) Informative – You don’t have to open the file or use properties to know what date the file was modified.
    8) Auto Grouping – If there is more than one file in a group of files they will all be located next to each other in the folder listing
    9) Consistency – All files in XLEMR use this standard so it’s easy to write programs to manipulate these files.
    10) Archiving – You can easily choose to archive really old files. (A tiny date-coding sketch follows this list.)
    b. XML History – All “encounters” create a “YYMMDD History.xml” file that contains all the discrete data elements in transactional format.
    The result of this simplicity is that we have a comprehensive chronological trail of all activity that can be very quickly processed by many tools, such as Access, MySQL, Crystal Reports, etc. The beauty of using XML is that every day our systems improve in some unique way for each client, and frequently it’s just a simple “add this new data item” not previously captured. So we just add it. There is no “overhead” of a database schema change, GUI update, or dump, re-load, and re-index. It’s just done in a few minutes, frequently while the client watches us do it. AND it’s available for use immediately; there’s no waiting on the next scheduled release.

    3) Security is a comprehensive responsibility; all data ports, not just USB, must be managed, so we scan all data upon connection or transfer. We also offer a USB-based PHR that has on-board virus scanning capabilities.
    4) We use MySQL as well; like you said, it just depends on the scale of the problem.
    5) TRUE
    6) Well, from my PMP and MBA training we learned (repeatedly) that complexity is directly proportional to cost and duration and inversely proportional to resiliency and reliability. Said another way, simple solutions cost less, work better, and are faster to implement. So our primary architectural focus and approach is to drive down complexity to the maximum extent possible, to achieve the lowest possible cost, the shortest possible durations, and the highest possible reliability and resiliency. This has worked incredibly well for us and our clients.
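
    For what it’s worth, the date-coding convention from point 2a is easy to demonstrate: a plain string sort of YYMMDD-prefixed names is automatically a chronological sort (within a century). A tiny illustration, not XLEMR’s actual code:

    ```python
    import datetime

    def date_coded(name, when=None):
        """Prefix a document name with its YYMMDD date code, e.g. '090727 Progress Note.doc'."""
        when = when or datetime.date.today()
        return f"{when:%y%m%d} {name}"

    files = [
        date_coded("Progress Note.doc", datetime.date(2009, 7, 27)),
        date_coded("Rx Refill.doc", datetime.date(2009, 1, 5)),
        date_coded("Lab Review.doc", datetime.date(2008, 12, 1)),
    ]
    print(sorted(files))  # a plain string sort is also chronological:
    # ['081201 Lab Review.doc', '090105 Rx Refill.doc', '090727 Progress Note.doc']
    ```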

  • Hybrid might also mean more cost. You maintain the local data (backup, servers, electricity, hardware) and then you also have to somehow subsidize the cloud storage. You also have the same disadvantage as cloud computing, i.e., security. It is the best and the worst of both worlds.
    I don’t think there is a perfect solution. If you keep your data locally, you are at the mercy of the vendor, maintenance fees, and hardware/software cost increases. If you store it in the cloud, there is a security issue, and if the connection goes down, you go down. I’ve had both happen, and I can tell you that the client-side failure was the more unpleasant of the two experiences. Not only did I have to plug in a whole bunch of cash to buy software, maintenance, and modules, and keep my own backup, but I ended up paying for a new server when it crashed, and I had to pay for an upgrade to Windows Server, client licenses, etc. As much as I hate cloud based computing, I’ve got no choice, because it is more cost effective, and decreasing reimbursements mean that I will have to tighten my belt. I also would like to keep a greater chunk of the stimulus money so I can at least be on the receiving end of this software fleecing. P.S. Paper charts still rule!
