
Speeding Sepsis Response by Integrating Key Technology

Posted on January 26, 2015 | Written By

Stephen Claypool, M.D., is Vice President of Clinical Development & Informatics, Clinical Solutions, with Wolters Kluwer Health and Medical Director of its Innovation Lab. He can be reached at
Three-week-old Jose Carlos Romero-Herrera was rushed to the ER, lethargic and unresponsive with a fever of 102.3. His mother watched helplessly as doctors, nurses, respiratory therapists and assorted other clinicians frantically worked to determine what was wrong with an infant who just 24 hours earlier had been healthy and happy.

Hours later, Jose was transferred to the PICU where his heart rate remained extremely high and his blood pressure dangerously low. He was intubated and on a ventilator. Seizures started. Blood, platelets, plasma, IVs, and multiple antibiotics were given. Still, Jose hovered near death.

CT scans, hourly blood draws and EEGs brought no answers. Despite all the data and knowledge available to the clinical team fighting for Jose’s life, it was two days before the word “sepsis” was uttered. By then, his tiny body was in septic shock and had swelled to four times its normal size. The baby was switched from a ventilator to an oscillator. He received approximately 16 different IV antibiotics, along with platelets, blood, plasma, seizure medications and diuretics.

“My husband and I were overwhelmed at the equipment in the room for such a tiny little person. We were still in shock about how we’d just sat there and enjoyed him a few hours ago and now were being told that we may not be bringing him back home with us,” writes Jose’s mother, Edna, who shared the story of her baby’s 30-day ordeal as part of the Sepsis Alliance’s “Faces of Sepsis” series.

Jose ultimately survived. Many do not. Three-year-old Ivy Hayes went into septic shock and died after being sent home from the ER with antibiotics for a UTI. Larry Przybylski’s mother died just days after complaining of a “chill” that she suspected was nothing more than a 24-hour bug.

Sepsis is the body’s overwhelming, often-fatal immune response to infection. Worldwide, there are an estimated 8 million deaths from sepsis, including 750,000 in the U.S. At $20 billion annually, sepsis is the single most expensive condition treated in U.S. hospitals.

Hampering Efforts to Fight Sepsis

Two overarching issues hamper efforts to drive down sepsis mortality and severity rates.

First, awareness among the general population is surprisingly low. A recent study conducted by The Harris Poll on behalf of Sepsis Alliance found that just 44% of Americans had ever even heard of sepsis.

Second, the initial presentation of sepsis can be subtle and its common signs and symptoms are shared by multiple other illnesses. Therefore, along with clinical acumen, early detection requires the ability to integrate and track multiple data points from multiple sources—something many hospitals cannot deliver due to disparate systems and siloed data.

While the Sepsis Alliance focuses on awareness through campaigns including Faces of Sepsis and Sepsis Awareness Month, hospitals and health IT firms are focused on reducing rates by arming clinicians with the tools necessary to rapidly diagnose and treat sepsis at its earliest stages.

A primary clinical challenge is that sepsis escalates rapidly, leading to organ failure and septic shock, resulting in death in nearly 30 percent of patients. Every hour without treatment significantly raises the risk of death, yet early screening is problematic. Though much of the data needed to diagnose sepsis already reside within EHRs, most systems don’t have the necessary clinical decision support content or informatics functionality.

There are also workflow issues. Inadequate cross-shift communication, challenges in diagnosing sepsis in lower-acuity areas, limited financial resources and a lack of sepsis protocols and sepsis-specific quality metrics all contribute to this intractable issue.

Multiple Attack Points

Recognizing the need to attack sepsis from multiple angles, our company is testing a promising breakthrough in the form of POC Advisor™. The program is a holistic approach that integrates advanced technology with clinical change management to prevent the cascade of adverse events that occur when sepsis treatment is delayed.

This comprehensive platform is currently being piloted at Huntsville Hospital in Alabama and John Muir Medical Center in California. It works by leveraging EHR data and automated surveillance, clinical content and a rules engine driven by proprietary algorithms to begin the sepsis evaluation process. Mobile technology alerts clinical staff to evaluate potentially septic patients and determine a course of treatment based on their best clinical judgment.

For a truly comprehensive solution, it is necessary to evaluate specific needs at each hospital. That information is used to expand sepsis protocols and add rules, often hundreds of them, to improve sensitivity and specificity and reduce alert fatigue by assessing sepsis in complex clinical settings. These additional rules take into account comorbid medical conditions and medications that can cause lab abnormalities that may mimic sepsis. This helps to ensure alerts truly represent sepsis.

The quality of these alerts is crucial to clinical adoption. They must be both highly specific and highly sensitive in order to minimize alert fatigue. In the case of this specific system, a 95% specificity and sensitivity rating has been achieved by constructing hundreds of variations of sepsis rules. For example, completely different rules are run for patients with liver disease versus those with end-stage renal disease. Doing so ensures clinicians only get alerts that are helpful.
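To make the idea of condition-specific rule variations concrete, here is a minimal sketch of how a screening engine might swap rule sets based on comorbidities. All thresholds, field names, and the rule structure are illustrative assumptions, not POC Advisor’s actual logic or clinical guidance.

```python
# Hypothetical sketch of condition-specific sepsis screening rules.
# Thresholds and record fields are invented for illustration only.

def flag_sepsis(patient):
    """Return True if the patient's vitals/labs trip the rule set
    chosen for their comorbidities."""
    v = patient["vitals"]
    labs = patient["labs"]
    conditions = patient["conditions"]

    # Baseline SIRS-style screening criteria (illustrative thresholds).
    criteria = [
        v["temp_c"] > 38.3 or v["temp_c"] < 36.0,
        v["heart_rate"] > 90,
        v["resp_rate"] > 20,
        labs["wbc"] > 12.0 or labs["wbc"] < 4.0,
    ]

    # Comorbidity-specific adjustments: e.g., creatinine is not a useful
    # organ-dysfunction signal in end-stage renal disease, and baseline
    # lab abnormalities in liver disease can mimic sepsis.
    if "esrd" in conditions:
        organ_dysfunction = labs["lactate"] > 2.0  # ignore creatinine
    elif "liver_disease" in conditions:
        organ_dysfunction = labs["lactate"] > 4.0  # higher bar for baseline abnormalities
    else:
        organ_dysfunction = labs["lactate"] > 2.0 or labs["creatinine"] > 2.0

    return sum(criteria) >= 2 and organ_dysfunction
```

The point of the branching is exactly what the text describes: a patient with end-stage renal disease never fires an alert on a creatinine value alone, which is one way a vendor could suppress false positives without lowering sensitivity for other populations.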

Alerts are also coupled with the best evidence-based recommendations so the clinical staff can decide which treatment path is most appropriate for a specific patient.

The Human Element

To address the human elements impacting sepsis rates, the system in place includes clinical change management to develop best practices, including provider education and screening tools and protocols for early sepsis detection. Enhanced data analytics further manage protocol compliance, public reporting requirements and real-time data reporting, which supports system-wide best practices and performance improvement.

At John Muir, the staff implemented POC Advisor within two medical/surgical units for patients with chronic kidney disease and for oncology patient populations. Four MEDITECH interfaces sent data to the platform, including lab results, pharmacy orders, Admit Discharge Transfer (ADT) and vitals/nursing documentation. A clinical database was created from these feeds, and rules were applied to create the appropriate alerts.

Nurses received alerts on a VoIP phone and then logged into the solution to review the specifics and determine whether they agreed with the alerts based on their clinical training. The system prompted the nursing staff to respond to each one, either through acknowledgement or override. If acknowledged, suggested guidance regarding the appropriate next steps was provided, such as alerting the physician or ordering diagnostic lactate tests, based on the facility’s specific protocols. If an alert was overridden, a reason had to be entered; all overrides were logged, monitored and reported. If no action was taken, a repeat alert fired, typically within 10 minutes. If repeat alerts were not acted upon, they were escalated to supervising personnel.
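The acknowledge/override/escalate workflow described above can be sketched as a small state machine. Timings, state names, and the class interface are assumptions for illustration; they are not the vendor’s API.

```python
# Illustrative sketch of the alert lifecycle: fire -> acknowledge/override,
# with repeat and escalation when no action is taken. Not a real product API.

REPEAT_AFTER_S = 600  # re-fire after roughly 10 minutes without action

class SepsisAlert:
    def __init__(self, patient_id, now):
        self.patient_id = patient_id
        self.fired_at = now
        self.status = "pending"  # pending -> acknowledged | overridden | escalated
        self.override_reason = None
        self.log = [("fired", now)]  # every transition is logged for reporting

    def acknowledge(self, now):
        self.status = "acknowledged"
        self.log.append(("acknowledged", now))
        # Facility-specific guidance would be surfaced here,
        # e.g. notify the physician or order a lactate test.

    def override(self, reason, now):
        if not reason:
            raise ValueError("an override reason must be entered")
        self.status = "overridden"
        self.override_reason = reason
        self.log.append(("overridden", now))

    def tick(self, now):
        """Re-fire unactioned alerts; escalate if still ignored."""
        if self.status != "pending":
            return None
        if now - self.fired_at >= 2 * REPEAT_AFTER_S:
            self.status = "escalated"
            self.log.append(("escalated", now))
            return "notify_supervisor"
        if now - self.fired_at >= REPEAT_AFTER_S:
            self.log.append(("repeated", now))
            return "repeat_alert"
        return None
```

Requiring a reason on every override is what makes the logging useful downstream: the override reasons themselves become data for tuning rule sensitivity.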

Over the course of the pilot, the entire John Muir organization benefited from significant improvements on several fronts:

  • Nurses were able to see how data entered into the EHR was used to generate alerts
  • Data could be tracked to identify clinical process problems
  • Access to clinical data empowered the quality review team
  • Nurses reported being more comfortable communicating quickly with physicians based on guidance from the system and from John Muir’s standing policies

Finally, physicians reported higher confidence in the validity of information relayed to them by the nursing staff because they knew it was being communicated based on agreed-upon protocols.

Within three months, John Muir experienced significant improvements related to key sepsis compliance rate metrics. These included an 80% compliance with patient screening protocols, 90% lactate tests ordered for patients who met screening criteria and 75% initiation of early, goal-directed therapy for patients with severe sepsis.

Early data from Huntsville Hospital is equally promising, including a 37% decline in mortality on patient floors where POC Advisor was implemented. Thirty-day readmissions have declined by 22% on screening floors, and data suggest documentation improvements resulting from the program may positively impact reimbursement levels.

This kind of immediate outcome is generating excitement at the pilot hospitals. Though greater data analysis is still necessary, early indications are that a multi-faceted approach to sepsis holds great promise for reducing deaths and severity.

Big Brother Or Best Friend?

Posted on April 9, 2014 | Written By

Kyle is CoFounder and CEO of Pristine, a VC backed company based in Austin, TX that builds software for Google Glass for healthcare, life sciences, and industrial environments. Pristine has over 30 healthcare customers. Kyle blogs regularly about business, entrepreneurship, technology, and healthcare at

The premise of clinical decision support (CDS) is simple and powerful: humans can’t remember everything, so enter data into a computer and let the computer render judgement. So long as the data is accurate and the rules in the computer are valid, the computer will be correct the vast majority of the time.

CDS is commonly implemented in computerized provider order entry (CPOE) systems across most order types – labs, drugs, radiology, and more. A simple example: most pediatric drugs require weight-based dosing. When physicians order drugs for pediatric patients using CPOE, the computer should validate the dose of the drug against the patient’s weight to ensure the dose is in the acceptable range. Given that the computer has all of the information necessary to calculate acceptable dose ranges, and the fact that it’s easy to accidentally enter the wrong dose into the computer, CDS at the point of ordering delivers clear benefits.
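The weight-based dose check described above reduces to a simple range comparison at order entry. A minimal sketch, with the caveat that the drug table and mg/kg ranges below are made-up illustrative values, not clinical guidance:

```python
# Sketch of weight-based dose range checking at order entry.
# The mg/kg ranges here are invented for illustration only.

DOSE_RANGES_MG_PER_KG = {
    # drug: (min mg/kg/dose, max mg/kg/dose) -- illustrative values
    "amoxicillin": (20.0, 45.0),
    "acetaminophen": (10.0, 15.0),
}

def check_pediatric_dose(drug, ordered_dose_mg, weight_kg):
    """Return 'ok' or a human-readable warning for the ordering clinician."""
    low, high = DOSE_RANGES_MG_PER_KG[drug]
    per_kg = ordered_dose_mg / weight_kg
    if per_kg < low:
        return f"warning: {per_kg:.1f} mg/kg is below the {low}-{high} mg/kg range"
    if per_kg > high:
        return f"warning: {per_kg:.1f} mg/kg exceeds the {low}-{high} mg/kg range"
    return "ok"
```

The check is cheap precisely because the computer already holds both inputs, the ordered dose and the recorded weight, which is the author’s point about why CDS at the point of ordering is such an easy win.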

The general notion of CDS – checking to make sure things are being done correctly – is the same fundamental principle behind checklists. In The Checklist Manifesto, Dr. Atul Gawande successfully argues that the challenge in medicine today is not in ignorance, but in execution. Checklists (whether paper or digital) and CDS are realizations of that reality.

CDS in CPOE works because physicians need to enter orders to do their job. But checklists aren’t as fundamentally necessary for any given procedure or action. The checklist can be skipped, and the provider can perform the procedure at hand. Thus, the fundamental problem with checklists is that they insert a layer of friction into workflows: running through the checklist. If checklists could be implemented seamlessly, without introducing any additional workflow friction, they would be more widely adopted and adhered to. The basic problem is that people don’t want to go back to the same repetitive formula for tasks they feel comfortable performing. Given the tradeoff between patient safety and efficiency, checklists have only been seriously discussed in high-acuity, high-risk settings such as surgery and ICUs. It’s simply not practical to implement checklists for low-risk procedures. But even in high-acuity environments, many organizations continue to struggle to implement checklists.

So…. what if we could make checklists seamless? How could that even be done?

Looking at CPOE CDS as a foundation, there are two fundamental challenges: collecting data, and checking against rules.

Computers can already access EMRs to retrieve all sorts of information about the patient. But computers don’t yet have any ability to collect data about what providers are and aren’t physically doing at the point of care. Without knowing what’s physically happening, computers can’t present alerts based on skipped or incorrect steps of the checklist. The solution would likely be based on a Kinect-like system that can detect movements and actions. Once the computer knows what’s going on, it can cross-reference what’s happening against what’s supposed to happen given the context of care delivery and issue alerts accordingly.
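Once observed actions are available (however they are sensed), the cross-referencing step itself is straightforward. A hedged sketch, with invented step names, of auditing an observed action stream against an expected checklist:

```python
# Hypothetical sketch: compare observed actions (e.g. from a Kinect-like
# sensor) against an ordered checklist and flag skipped or pending steps.
# Step names are invented for illustration.

def audit_checklist(expected, observed):
    """Return alerts for checklist steps that were skipped or not yet done."""
    alerts = []
    next_expected = 0
    for action in observed:
        if action not in expected:
            continue  # unrelated activity; ignore it
        idx = expected.index(action)
        if idx > next_expected:
            # Steps between the last completed step and this one were skipped.
            for skipped in expected[next_expected:idx]:
                alerts.append(f"skipped step: {skipped}")
        next_expected = max(next_expected, idx + 1)
    for remaining in expected[next_expected:]:
        alerts.append(f"step not yet performed: {remaining}")
    return alerts
```

For example, if the expected sequence is hand washing, gloving, site sterilization, incision, and the sensor only observes hand washing followed by sterilization, the audit flags the skipped gloving step before the incision ever happens.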

What’s described above is an extremely ambitious technical undertaking. It will take many years to get there. There are already a number of companies trying to address this in primitive forms: SwipeSense detects whether providers clean their hands before seeing patients, and the CHARM system uses Kinect to detect hand movements and ensure surgeries are performed correctly.

These early examples are a harbinger of what’s to come. If preventable mistakes are the biggest killer within hospitals, hospitals need to implement systems to identify and prevent errors before they happen.

Let’s assume that the tech evolves for an omniscient benevolent computer that detects errors and issues warnings. Although this is clearly desirable for patients, what does this mean for providers? Will they become slaves to the computer? Providers already face challenges with CPOE alert fatigue. Just imagine do-anything alert fatigue.

There is an art to telling people that they’re wrong. In order to successfully prevent errors, computers will need to learn that art. Additionally, there must be a cultural shift to support the fact that when the computer speaks up, providers should listen. Many hospitals still struggle today with implementing checklists because of cultural issues. There will need to be a similar cultural shift to enable passive omniscient computers to identify errors and warn providers.

I’m not aware of any omniscient computers that watch people all day and warn them that they’re about to make a mistake. There could be such software for workers in nuclear power plants or other critical jobs in which the cost of being wrong is devastating. If you know of any such software, please leave a comment.

A Hospital Perspective on Meaningful Use from Encore Health Resources

Posted on February 18, 2014 | Written By

John Lynn is the Founder of the blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of and John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

The following is a guest blog post by Karen Knecht in response to the question I posed in my “State of the Meaningful Use” call to action.

If MU were gone (ie. no more EHR incentive money or penalties), which parts of MU would you remove from your EHR immediately and which parts would you keep?

Karen Knecht
Karen Knecht
Chief Innovation Officer at Encore Health Resources

It’s an interesting question you’ve posed on MU, and I think you have generated some great discussion on this topic, such as last week’s response by Dr. Sherling from the perspective of an eligible provider.

My colleagues and I would like to provide an eligible hospital perspective.  The industry is now three-plus years down the path of implementing “certified EHRs.”  There was a need to kick-start the digitization of healthcare in this country and create a common infrastructure to drive change, and MU has done that.  For example, establishing standards for data capture is critical for unified reporting and analysis.  Would the industry establish and adopt these standards without a program like MU?

But working with many large healthcare organizations representing several hundred individual hospitals in their MU programs, there are clearly many lessons learned and opportunities to improve for the future, even if the MU program were to go away.

Overall, there are no MU objectives that we would discount as having no value.  However, there are some that have served their time and others that are ahead of their time.

For the parts to continue, we see a high level of value in the CPOE, Barcoded Medication Administration, Medication Reconciliation and Clinical Decision Support objectives, as they are making tangible contributions to patient care.  However, we would recommend timeline delay due to additional capital outlay as well as complexity of workflow.  This would give more time for deeper and broader adoption.

For the parts to no longer measure in the same way, we would start by simplifying and removing the objectives that are topped out: the ones that are already hardwired in most organizations such as Vital Signs, Demographics, and Smoking Status.  This is no different than the current process for removing quality measures from reporting requirements once they have been well adopted — and HITPC is in agreement about this.  In their meeting last week where they discussed proposed Stage 3 measures, they were saying much the same thing.  Even if you stop measuring these things explicitly, they will continue to be electronically documented.

Second, we could see removing objectives that are now standard for “certified” EHRs.  For example, the time and effort to document the Drug Formulary, Drug-Drug, and Drug-Allergy checking functionality, for the sole purpose of meeting the MU objective, is not well spent.  Another example is the lab results stored as discrete values, which are part and parcel of any lab system in existence.

Other objectives that are causing great concern among many hospitals are the ones dealing with providing and exchanging information electronically.  It would be helpful to reconsider the expectations for these objectives, since many are finding out that implementing a patient portal without a sound patient engagement strategy is not going to be enough to ensure that 5% of patients will actually access their records.  Hospitals should have a portal and secure messaging capability, but it doesn’t seem realistic to put thresholds on patient utilization.  As the old saying goes, “You can lead a horse to water, but you can’t make it drink.”

Additionally, the requirement for Direct exchange to transmit summaries of care is cumbersome and actually a step backwards for those entities that are part of an HIE and are currently exchanging data among members. For most others, it is really only practical to implement with an ambulatory physician partner. The sad fact is that nursing homes, SNFs, and other entities where hospitals commonly transfer patients are not included in the EHR incentive program and do not have the technology necessary to participate in Direct exchange in a meaningful way.

And finally, we think all aspects of electronic quality measures should be rethought.  We love the idea of calculating these measures electronically, but they need to be appropriately validated and re-addressed in the context of the poor data collection that is occurring.  Perhaps CMS should consider another voluntary incentive program for facilities that have fully implemented all their clinical documentation.  Given the change that is proposed to the physician quality reporting programs as a result of the SGR fix, perhaps a similar refinement of the IQR and VBP programs along with MU should be considered.

See other responses to this question here and please reach out to us if you’re interested in providing a response to the question.

Epic API, EMR Market Saturation, and Faith in Clinical Decision Support

Posted on September 22, 2013 | Written By

John Lynn

Vince asks a good question. I wrote a little bit previously about the Epic User Group Meeting where they announced the Epic API. I definitely think there’s still a lot of missed opportunity for the announced Epic API, but hopefully what they’ve released is successful so it encourages them to open up their API much more.

I’ve been writing a lot lately about the changing EMR marketplace (see Golden Age of EHR Over). This prediction offers a new insight I hadn’t covered: the market could shrink because many of the larger purchases are already done, and that could slow EHR spending.

I wish I could find this talk. I’m really interested in how properly implemented clinical decision support can save lives. That’s what this should be about anyway, no?

Health IT Tweet Roundup – Neil Versel Edition

Posted on July 21, 2013 | Written By

John Lynn

As you know, each weekend I like to do a roundup of interesting tweets and add a bit of commentary. This time I thought it would be fun to grab some tweets from just one person, Neil Versel. Neil has been doing a number of really great posts on his blog Meaningful Health IT News lately (Full Disclosure: Neil’s blog is part of the Healthcare Scene blog network). The following tweets highlight some of Neil’s recent blog posts.

I agree that Blue Button Plus is a great step forward for Blue Button. This post is particularly interesting because Neil didn’t see the promise of Blue Button before the changes were made and it was rebranded as Blue Button Plus.

This is a great discussion on the meaningful use requirements and Blue Button’s role in them. Join in if you have some knowledge on the area about what your EHR is doing.

Neil’s right about people who don’t cover healthcare regularly not understanding many of the true dynamics at play. I do find it interesting that Neil is such a fan of clinical decision support. I still think it’s in such an infant state. I can’t wait for much more advanced clinical decision support.

The New Healthcare Team: GE & Microsoft

Posted on December 13, 2011 | Written By

John Lynn

Editor’s Note: The following is a guest post by Jeremy Bikman. You can read more about the GE and Microsoft Venture on EMR and EHR.

Guest Post: Jeremy Bikman is Chairman at KATALUS Advisors, a strategic consulting firm focused on the healthcare vertical. We help vendors grow, guide hospitals into the future, and advise private equity groups on their investments. Our clients are found in North America, Europe, and Asia.

Healthcare is being held hostage and it doesn’t even know it.

It is held hostage by burdensome regulations, by archaic practices, and (oddly enough) by technology itself. In this age of Facebook, Twitter, and LinkedIn, when anybody with internet access can connect with somebody on the other side of the globe and share personal information with the click of a mouse, you could still visit a hospital in the next town over and find it unable to procure your personal health information anywhere near as easily or as quickly.

Healthcare, globally and locally, is utterly huge and mind-blowingly complex, and thus absolutely needs the very best innovation of everybody involved. Yet, healthcare technology companies almost universally deliver products which are built on closed-minded concepts. They lock down their platforms, creating real barriers to interoperability, patient data exchange, and actual innovation. This is the present reality within, and across, practically every hospital on earth. The recently announced joint venture between GE and Microsoft offers hope of an alternate reality, one where hospitals can bring together data streams from all over the enterprise, while utilizing new innovations and technology as they see fit, including different best-of-breed sources.

Giving Hospitals a New Choice
There are huge flaws in how technology is delivered in healthcare today, flaws which impact quality of care within a hospital and across the entire industry irrespective of country or region. While the rest of the tech world is moving towards open platforms and collaborative delivery models, healthcare seems to be stuck in the dark ages of single-source solutions which compel all-or-nothing investments to the tune of millions and millions of dollars. Too often those investments fail. But, the more important question is why must hospitals be forced into all-or-nothing decisions in the first place? Why must they choose between integration and functionality, between a single platform, however mediocre, and a best-of-breed mix? We believe those are questions of the antiquated past and that brave new innovation can deliver a new avenue for hospitals who refuse to be painted into a corner. Hospitals shouldn’t have to choose between apples and oranges. They want, and should be able to get, both.

The Basics of the Joint Venture
Selected product lines from both companies’ health groups will be part of the new company. These products were chosen for their specific focus on “empowering connected patient-centric care.”

GE is contributing an interoperable clinical data model and decision support system via Qualibria. GE’s eHealth is an HIE solution in use at a large number of sites in North America. Microsoft is bringing Amalga to the table, a data aggregation platform that facilitates interoperability and a host of other advanced capabilities. Vergence and expreSSO come through Microsoft’s acquisition of Sentillion and provide strong context management and single sign-on solutions. The strategy appears to be one of leveraging Microsoft’s platform technology (Amalga) to underpin GE’s clinical depth (Qualibria, eHealth). Additionally, this model will allow hospitals and vendors to integrate best-of-breed 3rd-party products into the ecosystem as they see fit. This mix of products and capabilities will enable a true best-of-breed environment to emerge while still retaining the core elements of integration. This ecosystem will be powered by the partnership’s own applications and those built by ISVs. No other major vendor offers this unique model and set of abilities, although Allscripts is the one traditional EMR vendor building a strategy of accepting 3rd-party solutions.

Tackling the Big Problems

No one is saying that this joint venture is guaranteed to be a resounding success. However, we applaud the visionary model and risks this new team is taking. It looks like they want to address all the big hairy obstacles that every provider organization, region, and nation is facing. Big data? Absolutely. Enterprise analytics and business intelligence? Yes. Clinical decision support? For sure. Population management? You bet. Nobody else in the industry has shown they can tackle these issues even though every hospital is clamoring for this type of model. So why not this joint venture between GE and Microsoft? We say good luck, and more power to them.

The principals of KATALUS Advisors have worked with hundreds of healthcare organizations, vendors, and other consulting firms across the globe. The opinions expressed here are our own and are not intended to promote any specific vendor and do not reflect those of any other organization or individual.

The “Smart EMR” Differentiator

Posted on October 25, 2011 | Written By

John Lynn

As I’ve been able to talk to more and more EMR companies, I’ve been trying to figure out a way to differentiate the various EHR software. In fact, when I meet with EHR software companies, instead of having them show me a full demo, I ask them to show me the feature(s) that set their EHR apart from the other 300+ EHR companies out there. I must admit that it’s always interesting to see what they choose, sometimes because what they show me isn’t that interesting or different. Many of my EMR company-specific posts come from these experiences.

Today at MGMA as I went from one EHR company to another I started to get an idea for what might be the future differentiation between EHR companies. I’m calling it: “Smart EMR.”

You can be sure that I’ll be writing about my thoughts on Smart EMR software many more times in the future. However, the basic idea is that far too many EHR systems are just basic translations from paper to electronic. Sure, some of them do a pretty good job of capturing data in granular data elements (something not possible on paper), but that’s far from my idea of what a future Smart EMR will need to accomplish.

I’m sure that many of those reading this post immediately started to think about the idea of clinical decision support. Certainly clinical decision support will be one important element of a Smart EMR, but I think that’s barely even the beginning of how a Smart EMR will need to work in the future. Clinical decision support as it’s been described to date focuses far too much on how a clinician’s discretely entered data elements can support the care they provide. That’s far too narrow a view of how an EMR will improve the patient-doctor interaction.

Without going into all the detail, EHR software is going to have to learn to accept and process a number of interesting external data sources. One example could be all the data that a patient has in a PHR. Another could be patient data collected using various personal medical devices like a blood pressure cuff, an EKG, and a blood glucose meter. Not to mention more consumer-centric data devices and apps such as RunKeeper, Fitbit, sleep tracking, mood tracking, etc.

Another example of an external source could be access to a community health data repository. Why shouldn’t community trends in healthcare be part of the patient care process? None of this is far-fetched, since we’re collecting this data today and it will become more and more mainstream over time. Something we can’t do today, but likely will in the future, is incorporate things like genomics. Imagine how personalized healthcare will change when an EHR needs to know and be able to process your genome in order to support proper care.

I don’t claim to know all the sources, but I think that gives you a flavor of what a Smart EMR will have to process in the future. I’ll be interested to see which EHR companies see this change coming and are able to execute on it. Many of the current innovations in EHR have been pretty academic. The Smart EMR I describe above will be much more complicated and will require specific skills and resources to do right.

Guest Post: Overcoming EMR Integration Challenges

Posted on September 15, 2011 | Written By

John Lynn is the Founder of the blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of and John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Dan Neuwirth is the CEO of MedCPU, provider of the innovative MedCPUAdvisor™ platform, which offers decision support applications for clinical guidelines, Meaningful Use, and care pathways. The platform captures the complete clinical picture in real time, including narrative text and structured data, to deliver accurate clinical and compliance guidance.

There’s no question that healthcare needs to adopt new technology that makes us more effective and efficient and curbs costs, like Electronic Medical Record (EMR) solutions and Clinical Decision Support (CDS) systems. Yet providers of all sizes continue to find it challenging to integrate existing HIT systems with EMRs for a variety of reasons. As our industry evolves, technology solutions need to be smarter and enable seamless integration.

EMR and HIPAA guest author Susan White has covered in depth how the lack of connectivity standards affects EMR integration. There are no mandated standards for EMR vendors to follow, making it hard to coordinate data sharing between medical devices and other systems (including from one EMR to another), even at the same facility. Because these systems operate in disparate ways, critical clinical information is often lost or stuck in silos. Most importantly, the information is not where clinicians need it most: at their fingertips, in an exam room, with a patient.
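
To make the silo problem concrete, here is a toy illustration of the mismatch that arises when no connectivity standard is mandated: two invented vendor exports describe the same lab result with entirely different field names, so an integrator must maintain a per-vendor mapping into a shared schema. Every vendor name and field below is hypothetical:

```python
# Two (hypothetical) vendor exports describing the same lab result.
vendor_a = {"pt": "123", "test": "HbA1c", "result": 6.9, "units": "%"}
vendor_b = {"patient_id": "123", "lab_name": "HbA1c", "val": "6.9", "uom": "%"}

# Per-vendor field mappings into one shared schema -- the kind of glue code
# that a mandated standard would make unnecessary.
FIELD_MAPS = {
    "vendor_a": {"pt": "patient_id", "test": "name", "result": "value", "units": "unit"},
    "vendor_b": {"patient_id": "patient_id", "lab_name": "name", "val": "value", "uom": "unit"},
}

def normalize(record: dict, vendor: str) -> dict:
    """Translate a vendor-specific record into the shared schema."""
    mapping = FIELD_MAPS[vendor]
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    out["value"] = float(out["value"])  # vendors even disagree on types
    return out

# Only after normalization do the two records become comparable.
print(normalize(vendor_a, "vendor_a") == normalize(vendor_b, "vendor_b"))
```

Multiply that mapping table by hundreds of vendors and device types and the cost of the missing standards becomes obvious.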

This lack of data sharing is a pervasive concern. One Markle report finds that roughly 80 percent of both consumers and physicians want hospitals and doctors to be required to share information that improves coordination of care, cuts unnecessary costs, and reduces medical errors.

In 2010, more than $88 billion was spent on developing and implementing EHRs, health information exchanges (HIEs) and other health IT initiatives. When you consider that the average 10-physician practice spends more than $137,000 per year on prior authorizations and pharmacy callbacks alone, you’ll have to agree that the lack of data integration and sharing gets very costly. And although I agree with John Halamka, who recently wrote that these challenges exist because healthcare is inherently more complicated than other industries, I am a strong believer that many of them can be overcome with smart technology.

We need smart, flexible solutions that capitalize on existing technologies and require minimal integration. Technologies that employ advanced screen extraction, for example, enable several important improvements in the clinical decision support space, such as capturing and analyzing both free text and structured data. Too often, though, such solutions are rendered ineffective because they either lack compatibility with leading EMR systems or are too difficult to integrate.

As the industry evolves, developing robust protocols for capturing both structured and unstructured data, along with standards for data integration and sharing, will become increasingly important. With all the data points created about patients every day, we will need a consistent, secure, and reliable way to capture and share patient data among all systems and healthcare providers. What is your experience? What are the top data capture and integration challenges your organization faces? I look forward to continuing the dialogue and hearing your feedback.

Jeopardy!’s Watson Computer and Healthcare

Posted on May 25, 2011 | Written By John Lynn

I’m sure that, like many of you, I was completely intrigued by the demonstration of the Watson computer competing against the best Jeopardy! stars. It was amazing to watch not only how Watson was able to come up with the answer, but also how quickly it reached the correct one.

The hype at the IBM booth at HIMSS was really strong, since it had been announced that healthcare was one of the first places IBM wanted to implement the Watson technology (read more about Watson in healthcare in this AP article). However, I found the most interesting conversation about Watson at the Nuance booth while talking to Dr. Nick Van Terheyden. The idea of combining the Watson technology with Nuance’s voice recognition and natural language processing makes for a really compelling product offering.

One of the keys in the AP article above, also mentioned by Dr. Nick from Nuance, was that the Watson technology would be applied differently in healthcare than it was on Jeopardy!. In healthcare it wouldn’t try to make the decision and provide the correct answer for you. Instead, Watson would present a number of possible answers along with the likelihood of each one being the issue.
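
A toy sketch of that “ranked possibilities” style of output might look like the following. The conditions and scores are invented for illustration and have nothing to do with how Watson actually computes its confidences:

```python
# Hypothetical confidence scores a decision-support engine might assign to
# candidate diagnoses given the evidence it has processed. The engine
# presents a ranked list rather than a single answer.
candidates = {
    "Condition A": 0.55,
    "Condition B": 0.25,
    "Condition C": 0.15,
    "Condition D": 0.05,
}

def ranked(scores: dict, top_n: int = 3):
    """Return the top-N possibilities sorted by likelihood, highest first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

for name, p in ranked(candidates):
    print(f"{name}: {p:.0%}")
```

The design choice matters: by surfacing a ranked list with confidences instead of one verdict, the tool leaves the final judgment with the clinician.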

Some of this takes me back to Neil Versel’s posts about clinical decision support and doctors’ resistance to CDS. There’s no doubt that the Watson technology is another form of clinical decision support, but there’s little about it that takes power away from the doctor’s decision making. It certainly could influence a doctor’s ability to provide care, but that’s a great thing. That’s not to say I want doctors constantly second-guessing themselves, or relying solely on the information that Watson or some related technology provides. It’s like most clinical tools: used properly, they can provide a great benefit to the doctor using them; used improperly, they can lead to issues. However, it’s quite clear that the Watson technology does little to take away from doctors’ decision making. In fact, I’d say it empowers doctors to do what they do better.

Personally I’m very excited to see technologies like Watson implemented in healthcare. Plus, I think we’re just at the beginning of what will be possible with this type of computing.

Skills in Search As Valuable as Memorization

Posted on May 6, 2011 | Written By John Lynn

Neil’s article about Unrealistic Expectations about Clinical Decision Support made me think about how important knowing where to find information can be in so many different situations. In fact, remembering where to search might be more valuable and useful than strict memorization of everything.

The core point is that, with very rare exceptions, the human mind can only store and recall so much information. However, if you only have to remember where to find a certain piece of information, that’s much easier to remember. For example, many of my readers probably don’t realize that I have a network of TV blogs. I get a lot of credit on those websites for listing out the music in those shows. The funny thing is that I’m not all that good at identifying songs. However, I am great at searching for and finding the information.

Why can’t we accept this from doctors? Why do we expect doctors to know everything, rather than accepting that they don’t know everything but do know where to find out more? Many people actually can accept this.

Of course, many people might appropriately ask the question, “If my doctor’s just going to look up the information, why don’t I just look it up myself?”

There are quite a few reasons why it’s not the same. Let me give just one of them: while doctors don’t know everything, they have been trained to identify the relevant information. Understanding what’s relevant turns out to be incredibly valuable when trying to solve a problem.

How about an example for comparison’s sake. Many Windows users are quite familiar with what’s affectionately called the Windows “Blue Screen of Death.” To the untrained eye, the blue screen of death is a daunting screen that delivers an overload of error messages about what went wrong with your computer. As an IT person, I can quickly identify the one or two lines that are actually relevant to the problem and find a possible solution.

While certainly not a perfect comparison, I think the skills a trained doctor uses to identify a medical issue are similar to the above scenario. The funny thing is that no one would have any issue with me searching for how to solve the problem the blue screen of death identifies. However, many are uncomfortable with the idea of their doctor doing a similar search.

This isn’t to say that patients shouldn’t participate in their own care; that’s a related but different topic. However, I echo Neil’s call for patients to be more accepting of doctors who use clinical decision support and other tools that help them provide better care. Not to mention his call for doctors not to be afraid to admit when they don’t know everything, but to show that they have the tools, resources, and skills to provide great patient care.