
Alexa Can Truly Give Patients a Voice in Their Health Care (Part 1 of 3)

Posted on October 16, 2017 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The leading pharmaceutical and medical company Merck, together with Amazon Web Services, has recently been exploring the potential health impacts of voice interfaces and natural language processing (NLP) through an Alexa Diabetes Challenge. I recently talked to the five finalists in this challenge. This article explores the potential of new interfaces to transform the handling of chronic disease, and what the challenge reveals about currently available technology.

Alexa, of course, is the ground-breaking system that brings everyday voice interaction with computers into the home. Most of its uses are trivial (you can ask about today’s weather or change channels on your TV), but one must not underestimate the immense power of combining artificial intelligence with speech, one of the most basic and essential human activities. The potential of this interface for disabled or disoriented people is particularly intriguing.

The diabetes challenge is a nice focal point for exploring the more serious contribution made by voice interfaces and NLP. Because of the alarming global spread of this illness, the challenge also presents immediate opportunities that I hope the participants succeed in productizing and releasing into the field. Using the challenge’s published criteria, the judges today announced Sugarpod from Wellpepper as the winner.

This article will list some common themes among the five finalists, look at the background about current EHR interfaces and NLP, and say a bit about the unique achievement of each finalist.

Common themes

Overlapping visions of goals, problems, and solutions appeared among the finalists I interviewed for the diabetes challenge:

  • A voice interface allows more frequent and easier interactions with at-risk individuals who have chronic conditions, potentially achieving the behavioral health goal of helping a person make the right health decisions on a daily or even hourly basis.

  • Contestants seek to integrate many levels of patient intervention into their tools: responding to questions, collecting vital signs and behavioral data, issuing alerts, providing recommendations, delivering educational background material, and so on.

  • Services in this challenge go far beyond interactions between Alexa and the individual. The systems commonly anonymize and aggregate data in order to perform analytics that they hope will improve the service and provide valuable public health information to health care providers. They also facilitate communication of crucial health data between the individual and her care team.

  • Given the use of data and AI, customization is a big part of the tools. They are expected to determine the unique characteristics of each patient’s disease and behavior, and adapt their advice to the individual.

  • In addition to Alexa’s built-in language recognition capabilities, Amazon provides the Lex service for sophisticated text processing. Some contestants used Lex, while others drew on their own prior research and built their own natural language processing engines. (A minimal sketch of calling Lex appears just after this list.)

  • Alexa never initiates a dialog; it responds only when the user wakes it up. The device can present a visual or audio notification when new material is waiting, but it still depends on the user to request the content. Thus, contestants are using other channels to deliver reminders and alerts, such as messaging on the individual’s cell phone or alerting a provider.

  • Alexa is not HIPAA-compliant, but may achieve compliance in the future. This would help health services turn their voice interfaces into viable products and enter the mainstream.
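
To make the Lex bullet above concrete, here is a minimal sketch of what a skill’s backend might do with Amazon Lex: send the patient’s free-text utterance to a bot and read back the matched intent and slot values. The bot name, alias, and user ID are hypothetical, and none of this is taken from the finalists’ actual code.

import boto3

lex = boto3.client("lex-runtime")  # Lex (V1) runtime client

def interpret_utterance(text, user_id):
    response = lex.post_text(
        botName="DiabetesCoachBot",   # hypothetical bot name
        botAlias="prod",              # hypothetical alias
        userId=user_id,
        inputText=text,
    )
    # Lex returns the matched intent, any extracted slot values,
    # and a message to speak or display back to the user.
    return {
        "intent": response.get("intentName"),
        "slots": response.get("slots", {}),
        "reply": response.get("message"),
    }

print(interpret_utterance("My blood sugar was 180 after lunch", "patient-123"))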

Some background on interfaces and NLP

The poor state of current computing interfaces in the medical field is no secret–in fact, it is one of the loudest and most insistent complaints from doctors, aired on sites such as KevinMD. You can visit Healthcare IT News or JAMA regularly and read the damning indictments.

Several factors can be blamed for this situation, including unsophisticated electronic health records (EHRs) and arbitrary reporting requirements from the Centers for Medicare & Medicaid Services (CMS). Natural language processing may provide one of the technical solutions to these problems. The NLP services from Nuance are already famous. An encouraging study finds substantial time savings from using NLP to enter doctors’ insights. And on the other end–where doctors are searching the notes they previously entered for information–a service called Butter.ai uses NLP for intelligent searches. Unsurprisingly, the American Health Information Management Association (AHIMA) looks forward to the contributions of NLP.

Some app developers are now exploring voice interfaces and NLP on the patient side. I covered two such companies, including the one that ultimately won the Alexa Diabetes Challenge, in another article. In general, developers using these interfaces hope to eliminate the fuss and abstraction in health apps that frustrate many consumers, thereby reaching new populations and interacting with them more frequently, with deeper relationships.

The next two parts of this article turn to each of the five finalists, to show the use they are making of Alexa.

Wellpepper and SimplifiMed Meet the Patients Where They Are Through Modern Interaction Techniques

Posted on August 9, 2017 | Written By Andy Oram

Over the past few weeks I found two companies seeking out natural and streamlined ways to connect patients with their doctors. Many of us have started using web portals for messaging–a stodgy communication method that involves logins and lots of clicking, often just to retrieve a message such as "Test submitted. No further information available." Web portals are better than unnecessary office visits or days of playing phone tag, and so are the various secure messaging apps (incompatible with one another, unfortunately) found in the online app stores. But Wellpepper and SimplifiMed are trying to bring us a bit further into the twenty-first century, through voice interfaces and natural language processing.

Wellpepper’s Sugarpod

Wellpepper recently ascended to finalist status in the Alexa Diabetes Challenge, which encourages research into the use of Amazon.com’s popular voice-activated device, Alexa, to improve the lives of people with Type 2 Diabetes. For this challenge, Wellpepper enhanced its existing service to deliver messages over Amazon Echo and interview patients. Wellpepper’s entry in the competition is an integrated care plan called Sugarpod.

The Wellpepper platform is organized around a care plan, and covers the entire cycle of treatment, such as delivering information to patients, managing their medications and food diaries, recording information from patients in the health care provider’s EHR, helping them prepare for surgery, and more. Messages adapt to the patient’s condition, attempting to present the right tone for adherent versus non-adherent patients. The data collected can be used for analytics benefitting both the provider and the patient–valuable alerts, for instance.

It must be emphasized at the outset that Wellpepper’s current support for Alexa is just a proof of concept. It cannot be rolled out to the public until Alexa itself is HIPAA-compliant.

I interviewed Anne Weiler, founder and CEO of Wellpepper. She explained that using Alexa would be helpful for people who have mobility problems or difficulties using their hands. The prototype proved quite popular, and people seem willing to open up to the machine. Alexa has some modest affective computing features; for instance, if the patient reports feeling pain, the device may respond with "Ouch!"

Wellpepper is clinically validated. A study of patients with Parkinson’s found that those using Wellpepper improved their mobility by 9%, whereas those without it declined by 12%. Wellpepper patients adhered to treatment plans 81% of the time.

I’ll end this section by mentioning that integration with EHRs offers limited information of value to Wellpepper. Most EHRs don’t yet accept patient data, for instance. And how can you tell whether a patient was admitted to a hospital? It should be in the EHR, but Sugarpod has found the information to be unavailable. It’s especially hidden if the patient is admitted through a different health care provider; interoperability is a myth. Weiler said that Sugarpod doesn’t depend on the EHR for much information, using a much more reliable source instead: it asks the patient!

SimplifiMed

SimplifiMed is a chatbot service that helps clinics automate routine tasks such as appointments, refills, and other aspects of treatment. CEO Chinmay A. Singh emphasized to me that it is not an app, but a natural language processing tool that operates over standard SMS messaging. SimplifiMed enables a doctor’s landline phone to communicate via text messages and routes patients’ messages to a chatbot capable of understanding natural language and partial sentences. The bot interacts with the patients to understand their needs, and helps them accomplish the task quickly. The result is round-the-clock access to the service with no waiting on the phone, a huge convenience for busy patients.
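
As a rough illustration of the kind of routing such a chatbot performs (this is not SimplifiMed’s engine, just a toy sketch), an incoming SMS can be matched against a handful of intents and mined for a date:

import re

# A toy intent router: match an incoming SMS against a few keyword-based intents
# and pull out anything that looks like a date. Real NLP engines are far richer.
INTENTS = {
    "appointment": ["appointment", "appt", "schedule", "book", "reschedule"],
    "refill": ["refill", "prescription", "rx"],
    "billing": ["bill", "invoice", "payment"],
}

DATE_PATTERN = re.compile(
    r"\b(\d{1,2}/\d{1,2}|jan\w*|feb\w*|mar\w*|apr\w*|may|jun\w*|"
    r"jul\w*|aug\w*|sep\w*|oct\w*|nov\w*|dec\w*)\s*\d{0,2}\b"
)

def route_sms(message):
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(word in text for word in keywords):
            date = DATE_PATTERN.search(text)
            return intent, date.group(0) if date else None
    return "unknown", None

print(route_sms("Need to reschedule my appt to 3/14"))     # ('appointment', '3/14')
print(route_sms("Can I get a refill on my metformin?"))    # ('refill', None)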

SimplifiMed also collects insurance information when the patient signs up, and the patient can use the interface to change the information. Eventually, they expect the service to analyze a patient’s symptoms in light of data from the EHR and help the patient decide whether to come in to see the doctor.

SMS is not secure, but HIPAA is not violated because the patient can choose what to send to the doctor, and the chatbot’s responses contain no personally identifiable information. Between the doctor and the SimplifiMed service, data is sent in encrypted form. Singh said that the company built its own natural language processing engine because it didn’t want to share sensitive patient data with an outside service.
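
For readers curious what "sent in encrypted form" can look like in practice, here is an illustrative sketch using the Python cryptography package’s Fernet recipe. This is an assumption for illustration only; the article does not describe SimplifiMed’s actual mechanism, which in practice would likely combine TLS in transit with encryption at rest and managed keys.

from cryptography.fernet import Fernet

# Illustrative only: encrypt a message payload so it is unreadable in transit.
key = Fernet.generate_key()        # in production the key would live in a key vault
cipher = Fernet(key)

payload = b"Patient J.D. requests a metformin refill; pharmacy on file."
token = cipher.encrypt(payload)    # opaque ciphertext exchanged between service and clinic
print(cipher.decrypt(token))       # only a holder of the key can recover the text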

Due to the complexity of care, insurance requirements, and regulations, a doctor today needs support from multiple staff members: front desk, MA, biller, etc. MACRA and value-based care will increase the burden on staff without providing the income to hire more. Automating routine activities adds value to clinics without breaking the bank.

Earlier this year I wrote about another company, HealthTap, that had added Alexa integration. This trend toward natural voice interfaces, which the Alexa Diabetes Challenge finalists are also pursuing, along with the natural language processing that they and SimplifiMed are implementing, could put health care on track to a new era of meeting patients where they are now. The potential improvements to care are considerable, because patients are more likely to share information, take educational interventions seriously, and become active participants in their own treatment.

Hands-On Guidance for Data Integration in Health: The CancerLinQ Story

Posted on June 15, 2017 | Written By Andy Oram

Institutions throughout the health care field are talking about data sharing and integration. Everyone knows that improved care, cost controls, and expanded research require institutions that hold patient data to share it safely. The American Society of Clinical Oncology’s CancerLinQ, one of the leading projects applying data analysis to find new cures, has tackled data sharing with a large number of health providers and discovered just how labor-intensive it is.

CancerLinQ fosters deep relationships and collaborations with the clinicians from whom it takes data. The platform turns around results from analyzing the data quickly, giving the clinicians insights they can put to immediate use to improve the care of cancer patients. Issues in collecting, storing, and transmitting data intertwine with other discussion items around cancer care. Currently, CancerLinQ isolates the data from each institution, and de-identifies patient information in order to let it be shared among participating clinicians. CancerLinQ LLC is a wholly-owned nonprofit subsidiary of ASCO, which has registered CancerLinQ as a trademark.

Help from Jitterbit

In 2015, CancerLinQ began collaborating with Jitterbit, a company devoted to integrating data from different sources. According to Michele Hazard, Director of Healthcare Solutions, and George Gallegos, CEO, their company can recognize data from 300 different sources, including electronic health records. At the beginning, the diversity and incompatibility of EHRs were a real barrier. It took them several months to figure out each of the first EHRs they tackled, but now they can integrate a new one quickly. Oncology care data, the key input needed by CancerLinQ, is a Jitterbit specialty.

One of the barriers raised by EHRs is licensing. The vendor has to "bless" direct access to the EHR and to data imported from external sources. HIPAA and licensing agreements also make tight security a priority.

Another challenge in processing the data is finding records held at different institutions and accurately matching them to the correct patient.
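
A bare-bones sketch shows why this is hard: normalizing a name and birth date into a key catches formatting differences, but nicknames, maiden names, and typos defeat it, which is why real systems (and this is not Jitterbit’s actual algorithm) layer on probabilistic scoring and manual review. The records below are invented.

from datetime import date

# Deterministic patient matching: normalize the name and birth date into a key
# and compare records from two institutions.
def match_key(record):
    last = record["last_name"].strip().lower()
    first = record["first_name"].strip().lower()
    dob = record["dob"].isoformat()
    return (last, first, dob)

hospital_a = {"first_name": "Robert",  "last_name": "Smith",  "dob": date(1954, 3, 2)}
clinic_b   = {"first_name": " robert", "last_name": "SMITH ", "dob": date(1954, 3, 2)}

# Same key despite differences in case and whitespace; a nickname ("Bob"), a
# transposed birth date, or a maiden name would defeat this exact-match key.
print(match_key(hospital_a) == match_key(clinic_b))   # True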

Although the health care industry is moving toward the FHIR standard, and a few EHRs already expose data through FHIR, others have idiosyncratic formats and support older HL7 standards in different ways. Many don’t even have an API yet. In some cases, Jitterbit has to export the EHR data to a file, transfer it, and unpack it to discover the patient data.
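
For the EHRs that do expose FHIR, the retrieval side can be as simple as a REST search. The sketch below uses a hypothetical endpoint and identifier system purely for illustration; vendors without an API still require the export-and-unpack route just described.

import requests

# Minimal FHIR retrieval sketch: search the Patient resource by medical record
# number and return the matching resources from the result Bundle.
FHIR_BASE = "https://ehr.example-hospital.org/fhir"   # hypothetical endpoint

def fetch_patient(mrn):
    resp = requests.get(
        f"{FHIR_BASE}/Patient",
        params={"identifier": f"urn:example:mrn|{mrn}"},   # hypothetical identifier system
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()                      # FHIR returns a Bundle of matching resources
    return [entry["resource"] for entry in bundle.get("entry", [])]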

Lack of structure

Jitterbit had become accustomed to looking in different databases to find patient information, even when EHRs claimed to support the same standard. One doctor may put key information under “diagnosis” while another enters it under “patient problems,” and doctors in the same practice may choose different locations.

Worse still, doctors often ignore the structured fields that were meant to hold important patient details and just dictate or type the information into a free-text note. CancerLinQ anticipated this, unpacking the free text through optical character recognition (OCR) and natural language processing (NLP), a branch of artificial intelligence.

It’s understandable that a doctor would evade the use of structured fields. Just think of the position she is in, trying to keep a complex cancer case in mind while half a dozen other patients sit in the waiting room for their turn. In order to use the structured field dedicated to each item of information, she would have to first remember which field to use–and if she has privileges at several different institutions, that means keeping the different fields for each hospital in mind.

Then she has to get access to the right field, which may take several clicks and require movement through several screens. The exact information she wants to enter may or may not be available through a drop-down menu. The exact abbreviation or wording may differ from EHR to EHR as well. And to carry through a commitment to using structured fields, she would have to go through this thought process many times per patient. (CancerLinQ itself looks at 18 Quality eMeasures today, with the plan to release additional measures each year.)

Finally, what is the point of all this? Up until recently, the information would never come back in a useful form. To retrieve it, she would have to retrace the same steps she used to enter the structured data in the first place. Simpler to dump what she knows into a free-text note and move on.

It’s worth mentioning that this Babel of health care information imposes negative impacts on the billing and reimbursement process, even though the EHRs were designed to support those very processes from the start. Insurers have to deal with the same unstructured data that CancerLinQ and Jitterbit have learned to read. The intensive manual process of extracting information adds to the cost of insurance, and ultimately of the entire health care system. The recent eClinicalWorks scandal, which resembles Volkswagen’s cheating on auto emissions and will probably spill out to other EHR vendors as well, highlights the failings of health data.

Making data useful

The key to unblocking this information logjam is deriving insights from data that clinicians can immediately see will improve their interventions with patients. This is what the CancerLinQ team has been doing. They run analytics that suggest what works for different categories of patients, then return the information to oncologists. The CancerLinQ platform also explains which items of data were input to these insights, and urges the doctors to be more disciplined about collecting and storing the data. This is a human-centered, labor-intensive process that can take six to twelve months to set up for each institution. Richard Ross, Chief Operating Officer of CancerLinQ, calls the process "trench warfare," not because it’s contentious but because it is slow and requires determination.

Of the 18 measures currently requested by CancerLinQ, one of the most critical data elements driving the calculation of multiple measures is staging information: where the cancerous tumors are and how far the cancer has progressed. Family history, treatment plan, and treatment recommendations are other examples of measures gathered.

The data collection process has to start by determining how each practice defines a cancer patient. The CancerLinQ team builds this definition into its request for data. Sometimes they submit “pull” requests at regular intervals to the hospital or clinic, whereas other times the health care provider submits the data to them at a time of its choosing.

Some institutions enforce workflows more rigorously than others. So in some hospitals, CancerLinQ can persuade the doctors to record important information at a certain point during the patient’s visit. In other hospitals, doctors may enter data at times of their own choosing. But if they understand the value that comes from this data, they are more likely to make sure it gets entered, and that it conforms to standards. Many EHRs provide templates that make it easier to use structured fields properly.

When accepting information from each provider, the team goes through a series of steps and does a check-in with the provider at each step. The team evaluates the data against a different criterion at each stage: completeness, accuracy of coding, the number of patients reported, and so on. By providing quick feedback, they can help the practice improve its reporting.

The CancerLinQ/Jitterbit story reveals how difficult it is to apply analytics to health care data. Few organizations can afford the expertise they apply to extracting and curating patient data. On the other hand, CancerLinQ and Jitterbit show that effective data analysis can be done, even in the current messy conditions of electronic data storage. As the next wave of technology standards, such as FHIR, fall into place, more institutions should be able to carry out analytics that save lives.

The Perfect EHR Workflow – Video EHR

Posted on May 12, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I’ve been floating this idea out there for years (2006 to be exact), but I’d never put it together in one consolidated post that I could point to when talking about the concept. I call it the Video EHR and I think it could be the solution to many of our current EHR woes. I know that many of you will think it’s a bit far fetched and in some ways it is. However, I think we’re culturally and technically almost to the point where the video EHR is a feasible opportunity.

The concept is very simple. Put video cameras in each exam room and have those videos replace your EHR.

Technical Feasibility
Of course there are some massive technical challenges to make this a reality. However, the cost of everything related to this idea has come down significantly. The cost of HD video cameras is negligible. The cost of video storage is extremely cheap and getting cheaper every day. Bandwidth is cheaper and higher quality, with so much potential to grow as more cities get fiber connectivity. If this were built on the internal network instead of the cloud, bandwidth would be an easily solved issue.
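
A quick back-of-the-envelope calculation backs up the storage point. Assuming (and these are my assumed figures, not measurements) two 1080p cameras at roughly 5 Mbps each and a 15-minute visit:

# Back-of-the-envelope storage estimate with assumed figures.
BITRATE_MBPS = 5          # assumed H.264 1080p stream per camera
VISIT_MINUTES = 15        # assumed average visit length
CAMERAS = 2               # assumed cameras per exam room

bits = BITRATE_MBPS * 1_000_000 * VISIT_MINUTES * 60 * CAMERAS
gigabytes = bits / 8 / 1_000_000_000
print(f"{gigabytes:.1f} GB per visit")   # about 1.1 GB per visit with these assumptions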

When talking costs, it’s important to note that there would be increased costs over the current documentation solutions. No one is putting in high quality video cameras and audio equipment to record their visits today. Not to mention wiring the exam room so that it all works. So, this would be an added cost.

Otherwise, the technology is all available today. We can easily record, capture and process HD video and even synchronize it across multiple cameras, etc. None of this is technically a challenge. Voice recognition and NLP have progressed significantly so you could process the audio file and convert it into granular data elements that would be needed for billing, clinical decision support, advanced care, population health, etc. These would be compiled into a high quality presentation layer that would be useful for providers to consume data from past visits.

Facial recognition technology has also progressed to the point that we could use these videos to help address the patient identification and patient matching problems that plague healthcare today. We’d have to find the right balance between trusting the technology and human verification, but it would be much better and likely more convenient than what we have today.
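
The matching step itself is straightforward once a facial recognition model produces embeddings; the sketch below compares a captured embedding against enrolled patients with a plain distance check. The model, the enrollment store, and the 0.6 threshold are all placeholders, and a real deployment would add the human verification mentioned above.

import numpy as np

# Compare a face embedding captured by the exam-room camera against stored
# embeddings for scheduled patients. The embeddings would come from a face
# recognition model (not shown); the threshold is an arbitrary placeholder.
def identify(captured, enrolled, threshold=0.6):
    best_id, best_dist = None, float("inf")
    for patient_id, stored in enrolled.items():
        dist = np.linalg.norm(captured - stored)   # distance between embedding vectors
        if dist < best_dist:
            best_id, best_dist = patient_id, dist
    # Accept only a sufficiently close match; otherwise fall back to human verification.
    return best_id if best_dist < threshold else None

enrolled = {"patient-123": np.random.rand(128), "patient-456": np.random.rand(128)}
print(identify(enrolled["patient-123"] + 0.01, enrolled))   # "patient-123": still closest and under threshold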

Imagine the doctor walking into the exam room, where the video cameras have already identified the patient and identify the doctor as she walks in. Then the patient’s medical record could be automatically pulled up on the doctor’s tablet, available as she’s ready to see the patient.

Plus, does the doctor even need a tablet at all? Could they instead use big digital signs on the walls, voice controlled by a Siri- or Alexa-like AI solution? I can already hear, "Alexa, pull up John Lynn’s cholesterol lab results for the past year." Next thing you know, a nice chart of my cholesterol appears on the big screen for both doctor and patient to see.

Feels pretty far fetched, but all of the technology I describe is already here. It just hasn’t been packaged in a way that makes sense for this application.

Pros
Ideal Workflow for Providers – I can think of no better workflow for a doctor or nurse. Assuming the tech works properly (and that’s a big assumption I’ll discuss in the cons), the provider walks into the exam room and engages with the patient. Everything is documented automatically. Since it’s video, I mean literally everything would be documented automatically. The providers would just focus on engaging with the patient, learning about their health challenges, and addressing their issues.

Patient Experience – I’m pretty sure patients wouldn’t know what to do if their doctor or nurse was solely focused on them and wasn’t stuck with their head in a chart or in their screen. It would totally change patients’ relationship with their doctors.

Reduced Liability – Since you literally would have a multi-angle video and audio recording of the visit, you’d have the proof you’d need to show that you had offered specific instructions or warned of certain side effects; any number of medical malpractice issues could be resolved by a quick look at the video from the visit. The truth will set you free, and you’d literally have the truth about what happened during the visit on video.

No Click Visit – This really is part of the "Ideal Workflow" section, but it’s worth pointing out all the things that providers do today to document in their EHR. The biggest complaint is the number of clicks a doctor has to do. In the video EHR world, where everything is recorded and processed to document the visit, you wouldn’t have any clicks.

Ergonomics – I’ve been meaning to write a series of posts on the health consequences doctors are experiencing thanks to EHR software. I know many who have reported major back trouble due to time spent hunched over their computer documenting in the EHR. You can imagine the risk of carpal tunnel and other hand and wrist issues that are bound to come up. All of this gets resolved if the doctor literally walks into the exam room and just sees the patient. Depending on how the Video EHR is implemented, the doctor might have to still spend time verifying the documentation or viewing past documentation. However, that could most likely be done on a simple tablet or even using a “Siri”-like voice implementation which is much better ergonomically.

Learning – In mental health this happens all the time. Practicum students are recorded giving therapy, and then a seasoned counselor advises them on how they did. No doubt we could see some of the same learning benefits in a medical practice. Sometimes that would be through peer review, but also just the mere fact of a doctor watching themselves on camera.

Cons
Privacy – The biggest fear with this idea is that most people think this is or could be a major privacy issue. They usually ask the question, “Will patients feel comfortable doing this?” On the privacy front, I agree that video is more personal than granular data elements. So, the video EHR would have to take extreme precautions to ensure the privacy and security of these videos. However, from an impact standpoint, it wouldn’t be that much different than granular health information being breached. Plus, it’s much harder to breach a massive video file being sent across the wire than a few granular text data elements. No doubt, privacy and security would be a challenge, but it’s a challenge today as well. I don’t think video would be that much more significant.

As to the point of whether patients would be comfortable with a video in the exam room, no doubt there would need to be a massive culture shift. Some may never reach the point that they’re comfortable with it. However, think about telemedicine. What are patients doing in telemedicine? They’re essentially having their patient visit on video, streamed across the internet and a lot of society is very comfortable with it. In fact, many (myself included) wish that telemedicine were more widely available. No doubt telemedicine would break down the barriers when it comes to the concept of a video EHR. I do acknowledge that a video EHR takes it to another level and they’re not equal. However, they are related and illustrate that people’s comfort in having their medical visits on video might not be as far fetched as it might seem on the surface.

Turns out that doctors will face the same culture shift challenge as patients and they might even be more reluctant than patients.

Trust – I believe this is currently the biggest challenge with the concept of a video EHR. Can providers trust that the video and audio will be captured? What happens if it fails to capture? What happens if the quality of the video or audio isn’t very good? What if the voice recognition or NLP isn’t accurate and something bad happens? How do we ensure that everything that happens in the visit is captured accurately?

Obviously there are a lot of challenges associated with ensuring the video EHR’s ability to capture and document the visit properly. If it doesn’t, it will lose providers’ and patients’ trust and it will fail. However, it’s worth remembering that we don’t necessarily need it to be perfect. We just need it to be better than our current imperfect status quo. We also need to design the video EHR to avoid making mistakes and to warn about possible missing information so that it can be addressed properly. No doubt this would be a monumental challenge.

Requires New Techniques – A video EHR would definitely require modifications in how a provider sees a patient. For example, there may be times when the patient or the doctor needs to be positioned a certain way to ensure the visit gets documented properly. You can already see one of the cameras being a portable camera that can be used for close-up shots of rashes or other medical issues so that they’re documented properly.

No doubt providers would have to learn new techniques on what they say in the exam room to make sure that things are documented properly. Instead of just thinking something, they’ll have to ensure that they speak clinical orders, findings, diagnosis, etc. We could have a long discussion on the impact for good and bad of this type of transparency.

Double Edged Sword of Liability – While reduced liability is a pro, liability could also be a con for a video EHR. Having the video of a medical visit can set you free, but it can also be damning. If you practice improper medicine, you won’t have anywhere to hide. Plus, given our current legal environment, even well-intentioned doctors could get caught in challenging situations if the technology doesn’t work quite right or the video is taken out of context.

Reality Check
I realize this is a massive vision with a lot of technical and cultural challenges that would need to be overcome. Although, when I first came up with the idea of a video EHR ~10 years ago, it was even more far fetched. Since then, so many things have come into place that make this idea seem much more reasonable.

That said, I’m realistic that a solution like this would likely start with some sort of half and half solution. The video would be captured, but the provider would need to verify and complete the documentation to ensure its accuracy. We couldn’t just trust the AI engine to capture everything and be 100% accurate.

I’m also interested in watching the evolution of remote scribes. In many ways, a remote scribe is a human doing the work of the video EHR AI engine. It’s an interesting middle ground which could illustrate the possibilities and also be a small way to make patients and providers more comfortable with cameras in the exam room.

I do think our current billing system and things like meaningful use (or now MACRA) are still a challenge for a video EHR. The documentation requirements for these programs are brutal and could make the video EHR workflow lose its luster. Could it be done to accommodate the current documentation requirements? Certainly, but it might take some of the polish off the solution.

There you have it. My concept for a video EHR. What do you think of the idea? I hope you tear it up in the comments.

Why Will Medical Professionals Use Laptops?

Posted on February 4, 2014 | Written By

Kyle is CoFounder and CEO of Pristine, a VC backed company based in Austin, TX that builds software for Google Glass for healthcare, life sciences, and industrial environments. Pristine has over 30 healthcare customers. Kyle blogs regularly about business, entrepreneurship, technology, and healthcare at kylesamani.com.

Steve Jobs famously said that “laptops are like trucks. They’re going to be used by fewer and fewer people. This transition is going to make people uneasy.”

Are medical professionals truck drivers or bike riders?

We have witnessed truck drivers turn into bike riders in almost every computing context:

Big businesses used to buy mainframes. Then they replaced mainframes with mini computers. Then they replaced minicomputers with desktops and servers. Small businesses began adopting technology in meaningful ways once they could deploy a local server and clients at reasonable cost inside their businesses. As web technologies exploded and mobile devices became increasingly prevalent, large numbers of mobile professionals began traveling with laptops, tablets and smartphones. Over the past few years, many have even stopped traveling with laptops; now they travel with just a tablet and smartphone.

Consumers have been just as fickle, if not more so. They adopted build-it-yourself computers, then Apple IIs, then mid tower desktops, then laptops, then ultra-light laptops, and now smartphones and tablets.

Mobile is the most under-hyped trend in technology. Mobile devices – smartphones, tablets, and soon, wearables – are occupying an increasingly larger percentage of total computing time. Although mobile devices tend to have smaller screens and fewer robust input methods relative to traditional PCs (see why the keyboard and mouse are the most efficient input methods), mobile devices are often preferred because users value ease of use, mobility, and access more than raw efficiency.

The EMR is still widely conceived of as a desktop-app with a mobile add-on. A few EMR companies, such as Dr Chrono, are mobile-first. But even in 2014, the vast majority of EMR companies are not mobile-first. The legacy holdouts cite battery, screen size, and lack of a keyboard as reasons why mobile won’t eat healthcare. Let’s consider each of the primary constraints and the innovations happening along each front:

Battery – Batteries are the only computing component that isn’t doubling in performance every 2-5 years. Battery density continues to improve at a measly 1-2% per year. The battery challenge will be overcome through a few means: huge breakthroughs in battery density, and increasing efficiency in all battery-hungry components: screens and CPUs. We are on the verge of the transition to OLED screens, which will drive an enormous improvement in energy efficiency in screens. Mobile CPUs are also about to undergo a shift as OEMs’ values change: mobile CPUs have become good enough that the majority of future CPU improvements will emphasize battery performance rather than increased compute performance.

Lack of a keyboard – Virtual keyboards will never offer the speed of physical keyboards. The laggards miss the point that providers won’t have to type as much. NLP is finally allowing people to speak freely. The problem with keyboards isn’t the characteristics of the keyboard, but rather the existential presence of the keyboard itself. Through a combination of voice, natural language processing, and scribes, doctors will type less and yet document more than ever before. I’m friends with the CEOs of at least half a dozen companies attempting to solve this problem across a number of dimensions. Given how challenging and fragmented the technology problem is, I suspect we won’t see a single winner, but a variety of solutions, each with unique compromises.

Screen size – We are on the verge of foldable, bendable, and curved screens. These traits will help resolve the screen size problem on touch-based devices. As eyeware devices blossom, screen size will become increasingly trivial because eyeware devices have such an enormous canvas to work with. Devices such as the MetaPro and AtheerOne will face the opposite problem: data overload. These new user interfaces can present extremely large volumes of robust data across 3 dimensions. They will mandate a complete re-thinking of presentation and user interaction with information at the point of care.

I find it nearly impossible to believe that laptops have more than a decade of life left in clinical environments. They simply do not accommodate the ergonomics of care delivery. As mobile devices catch up to PCs in terms of efficiency and perceived screen size, medical professionals will abandon laptops in droves.

This begs the question: what is the right form factor for medical professionals at the point of care?

To tackle this question in 2014 – while we’re still in the nascent years of wearables and eyeware computing – I will address the question “what software experiences should the ideal form factor enable?”

The ideal hardware* form factor of the future is:

Transparent: The hardware should melt away and the seams between hardware and software should blur. Modern tablets are quite svelte and light. There isn’t much more value to be had by improving portability of modern tablets; users simply can’t perceive the difference between .7lb and .8lb tablets. However, there is enormous opportunity for improvements in portability and accessibility when devices go handsfree.

Omni-present, yet invisible: There is way too much friction separating medical professionals from the computers that they’re interacting with all day long: physical distance (even the pocket is too far) and passwords. The ideal device of the future is friction free. It’s always there and always authenticated. In order to always be there, it must appear as if it’s not there. It must be transparent. Although Glass isn’t there just yet, Google describes the desired paradox eloquently when describing Glass: “It’s there when you need it, and out of sight when you don’t.” Eyeware devices will trend this way.

Interactive: despite their efficiency, PC interfaces are remarkably un-interactive. Almost all interaction boils down to a click on a pixel location or a keyboard command. Interacting with healthcare information in the future will be diverse and rich: natural physical movements, subtle winks, voice, and vision will all play significant roles. Although these interactions will require some learning (and un-learning of bad behaviors) for existing staff, new staff will pick them up and never look back.

Robust: Mobile devices of the future must be able to keep up with medical professionals. The devices must have shift-long battery life and be able to display large volumes of complex information at a glance.

Secure: This is a given. But I’ll emphasize it as physical security becomes increasingly important in light of the number of unencrypted hospital laptops being stolen or lost.

Support 3rd party communications: As medicine becomes increasingly complex, specialized, and team-based, medical professionals will share even more information with one another, patients, and their families. Medical professionals will need a device that supports sharing what they’re seeing and interacting with.

I’m fairly convinced (and to be fair, highly biased as CEO of a Glass-centric company) that eyeware devices will define the future of computer interaction at the point of care. Eyeware devices have the potential to exceed tablets, smartphones, watches, jewelry, and laptops across every dimension above, except perhaps 3rd party communication. Eyeware devices are intrinsically personal, and don’t accommodate others’ prying eyes. If this turns out to be a major detriment, I suspect the problem will be solved through software to share what you’re seeing.

What do you think? What is the ideal form factor at the point of care?

*Software tends to dominate most health IT discussions; however, this blog post is focused on ergonomics of hardware form factors. As such, this list avoids software-centric traits such as context, intelligence, intuition, etc.

EMR Jobs, Olympic EMR, EMR O/S, EHR Dictation, and EMR Purchasing

Posted on May 27, 2012 | Written By John Lynn

You can see we have a jam packed weekend Twitter round up. There were a lot of interesting topics being discussed this week in healthcare social media. As usual, we’ll do our best to provide some of the more interesting tweets. Not to mention we’ll add a bit of our own commentary to provide some background and understanding about the tweets as well.

Now without further ado, a few EMR and healthcare IT tweets for your reading pleasure:


I saw this job tweeted. I didn’t necessarily find this job all that unique, but it’s an interesting contrast to see all the EMR jobs tweeted out, posted on the EMR and EHR Job board, and posted to the Healthcare Scene LinkedIn group. Compare that with experiences like this one posted on EMR Thoughts. It’s such a conundrum that so many don’t have jobs while many can’t find qualified EMR talent.


GE Centricity has been the choice of the USOC for a few years now. I’d love to go to London to see it in action first hand. Anyone want to sponsor that? I do LOVE watching the Olympics!


Does the operating system really matter anymore? I’m finding that the operating system matters less and less. OK, with most client-server products you need a certain operating system, but with most well-done SaaS EHRs it doesn’t matter. I’ve reinstalled a few computers recently myself, and all I do is reinstall my browser and hook up Dropbox, and I have probably 90% of what I need.


The subhead on the article describes the link between EHRs and dictation better: "Doctors who dictate their clinical notes before they’re entered into an EHR have lower quality of care scores than those who type or enter structured data directly into the EHR, according to Partners Healthcare researchers." I’m always skeptical of these studies, particularly because they usually have a much narrower focus but provide for a great headline.

Plus, I think it’s still early for the NLP (natural language processing) and CLU (clinical language understanding) technology that will extract more data from unstructured text in real time to support quality care measures. Let’s look at this in 3 years and we’ll see if voice and narrative text are commonplace or gone the way of the dinosaurs.


I’m sure that this number is lower than many ambulatory EMR companies expect. It’s certainly much less than ONC would predict. I personally predict the number is a bit low. I expect we’ll see a few more EHR purchases than 7-8%, but probably not more than 15%.

EHR Charting in Another Language

Posted on January 13, 2012 | Written By John Lynn

I recently started to think about some of the implications associated with multiple languages in an EHR. One of my readers asked me how EHR vendors correlated data from those charting in Spanish and those charting in English. My first response to this question was, “How many doctors chart in Spanish?” Yes, this was a very US centric response since obviously I know that almost all of the doctors in Latin America and other Spanish speaking countries chart in Spanish, but I wonder how many doctors in the US chart in Spanish. I expect the answer is A LOT more than I realize.

Partial evidence of this is that about a year ago HIMSS announced a Latino Health IT Initiative. From that initiative there is now a HIMSS Latino Community web page and also a HIMSS Latino Community Workshop at the HIMSS Annual Conference in Las Vegas. I’m going to have to find some time to try to learn more about the HIMSS Latino Community. My Espanol is terrible, but I know enough that I think I could enjoy the event.

After my initial reaction, I then started wondering how you would correlate data from another language. So much for coordinated care. I wonder what a doctor does if he asks for his patient’s record and it is all in Spanish. That’s great if all of your doctors know Spanish, but in the US at least I don’t know of any community that has doctors who know Spanish in every specialty. How do they get around it? I don’t think those translation services you can call are much help.

Once we start talking about automated patient records, the language issue becomes more of a problem. Although, maybe part of that problem is solved if you could use standards like ICD-10, SNOMED, etc. A code is a code is a code regardless of what language it is in, and computers are great at matching up those codes. Although, if these standards are not used, then forget trying to connect the data even through natural language processing (NLP). Sure, the NLP could be bilingual, but has anyone done that? My guess is not.
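
A tiny, made-up example shows why coded data sidesteps the language problem: two charts written in English and Spanish line up as soon as both carry the same ICD-10 code, while the free-text notes would need bilingual NLP to compare.

# "A code is a code": the patient records here are invented, but E11.9 is the
# ICD-10 code for type 2 diabetes without complications in any language.
chart_en = {"patient": "A123", "note": "Follow-up for type 2 diabetes.", "icd10": "E11.9"}
chart_es = {"patient": "A123", "note": "Seguimiento de diabetes tipo 2.", "icd10": "E11.9"}

def same_problem(a, b):
    # The coded field matches with a simple equality test.
    return a["icd10"] == b["icd10"]

print(same_problem(chart_en, chart_es))   # True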

All of this might start to really matter more when we’re talking about public health issues as we aggregate data internationally. Language becomes a much larger issue in this context and so it begs for an established set of standards for easy comparison.

I’d be interested to hear about other stories and experiences with EHR charting in Spanish or another language. I bet the open source EHRs have some interesting solutions, similar to the open source projects I know well. I look forward to learning more about the challenge of multiple languages.

Clinical Data Abstraction to Meet Meaningful Use – Meaningful Use Monday

Posted on November 21, 2011 | Written By John Lynn

In many of our Meaningful Use Monday series posts we focused on a lot of the details around the meaningful use regulations. In this post I want to highlight one of the strategies that I’ve seen a bunch of EHR vendors and other EHR related companies employing to meet meaningful use. It’s an interesting concept that will be exciting to see play out.

The idea is what many are calling clinical data abstraction. I’ve actually heard some people refer to it as other names as well, but clinical data abstraction is the one that I like most.

I’ve seen two main types of clinical data abstraction: automated and manual. In the first, your computer or server goes through the clinical content and, using some combination of natural language processing (NLP) and other technology, identifies the important clinical data elements in a narrative passage. In the second, a trained medical professional pulls out the various clinical data elements.

I asked one vendor that is working on clinical data abstraction whether they thought that automated, computer-generated clinical abstraction would be the predominant means or whether some manual abstraction will always be necessary. They were confident that we could get there with automated computer abstraction of the clinical data. I’m not so confident. I think that, as with transcription, the computer could help speed up the abstraction, but there might still need to be someone who checks and verifies the data abstraction.

Why does this matter for meaningful use?
One of the challenges of meaningful use is that it really wants to know that you’ve documented certain discrete data elements. It’s not enough for you to just document the smoking status in a narrative paragraph. You have to not only document the smoking status, but your EMR has to have a way to report that you have documented the various meaningful use measures. Enter clinical data abstraction.
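
As a toy sketch of what the automated flavor of abstraction does for a single measure, the snippet below scans a narrative for smoking language and emits a discrete status. The phrase list and the category names (patterned loosely on the meaningful use smoking-status values) are simplified illustrations, nowhere near a production NLP engine.

import re

# Map narrative phrases to a discrete smoking status. Rules are checked in order.
RULES = [
    (r"\bnever smok", "never smoker"),
    (r"\b(quit|former smoker|ex-smoker)\b", "former smoker"),
    (r"\b(smokes|current smoker|pack[- ]a[- ]day)\b", "current every day smoker"),
]

def abstract_smoking_status(note):
    text = note.lower()
    for pattern, status in RULES:
        if re.search(pattern, text):
            return status
    return "unknown if ever smoked"

print(abstract_smoking_status("Pt quit smoking in 2009, denies alcohol use."))
# -> former smoker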

Proponents of clinical data abstraction argue that clinical data abstraction provides the best of both worlds: narrative with discrete data elements. It’s an interesting argument to make since many doctors love to see and read the narrative. However, all indications are that we need discrete data elements in order to improve patient care and see some of the other benefits of capturing all this healthcare data. In fact, the future Smart EMR that I wrote about before won’t be possible without these discrete healthcare data elements.

So far I believe that most people who have demonstrated meaningful use haven’t used clinical data abstraction to meet the various meaningful use measures. Although, it’s an intriguing story to tell and could be an interesting way for doctors to meet meaningful use while minimizing changes to their workflow.

Side Note: Clinical data abstraction is also becoming popular when scanning old paper charts into your EHR. Although, that’s a topic for a future post.

Jeopardy!’s Watson Computer and Healthcare

Posted on May 25, 2011 | Written By John Lynn

I’m sure like many of you, I was completely intrigued by the demonstration of the Watson computer competing against the best Jeopardy! stars. It was amazing to watch not only how Watson was able to come up with the answer, but also how quickly it was able to reach the correct answer.

The hype at the IBM booth at HIMSS was really strong, since it had been announced that healthcare was one of the first places where IBM wanted to implement the Watson technology (read more about the Watson technology in healthcare in this AP article). However, I found the most interesting conversation about Watson in the Nuance booth when I was talking to Dr. Nick Van Terheyden. The idea of combining the Watson technology with the voice recognition and natural language processing technologies that Nuance has available makes for a really compelling product offering.

One of the key points in the AP article above, also mentioned by Dr. Nick from Nuance, was that the Watson technology in healthcare would be applied differently than it was on Jeopardy!. In healthcare it wouldn’t try to make the decision and provide the correct answer for you. Instead, the Watson technology would be about providing you a number of possible answers and the likelihood of each answer being the issue.
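
A trivial sketch captures that design difference: rather than returning one answer, a decision-support engine of this kind hands the doctor a ranked list of hypotheses with confidence scores. The conditions and numbers below are invented for illustration.

# Ranked hypotheses with confidence scores, leaving the decision to the doctor.
hypotheses = {
    "Lyme disease": 0.62,
    "Rheumatoid arthritis": 0.21,
    "Viral arthritis": 0.09,
    "Gout": 0.05,
}

for condition, score in sorted(hypotheses.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{condition:<22} {score:.0%}")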

Some of this takes me back to Neil Versel’s posts about clinical decision support and doctors’ resistance to CDS. There’s no doubt that the Watson technology is another form of clinical decision support, but there’s little about the Watson technology that takes power away from the doctor’s decision making. It certainly could have an influence on a doctor’s ability to provide care, but that’s a great thing. Not that I want doctors constantly second-guessing themselves, or relying solely on the information that Watson or some other related technology provides. It’s like most clinical tools: when used properly, they can provide a great benefit to the doctor using them; when used improperly, they can lead to issues. However, it’s quite clear that Watson technology does little to take away from the decision making of doctors. In fact, I’d say it empowers doctors to do what they do better.

Personally I’m very excited to see technologies like Watson implemented in healthcare. Plus, I think we’re just at the beginning of what will be possible with this type of computing.

Nuance and MModal – Natural Language Processing Expertise

Posted on July 23, 2010 | Written By John Lynn

Many of you might remember that one of the most interesting things I saw at HIMSS this year was the natural language processing that was being done by MModal. In case you don’t know what I’m talking about, check out this video interview of MModal that I did at HIMSS. I still think there really could be something to the idea of retaining the narrative that dictation provides while also pulling out the granular data elements in that narrative.

With that background, I found it really interesting when I was on LinkedIn the other day and saw that Dr. Nick van Terheyden, the same guy I interviewed in the video linked above, had switched companies. Nick’s profile on LinkedIn had him listed as working for Nuance instead of MModal. I guess this shouldn’t have been a surprise. Nuance has a lot of skin in the natural language processing game, and it seemed to me that MModal had the technology that would make it a reality. So, now Dr. Nick van Terheyden is the Chief Medical Information Officer for Nuance.

I’d say this is a really good move by Nuance and I’m sure Nick is being richly rewarded as well. Nick was one of the most interesting people that I met at HIMSS this year. I’ll be certain to search him out at next year’s event to hear the whole story. Luckily, I also found out that Nick is blogging about voice recognition in healthcare on his blog Voice of the Doctor. I always love it when smart people like Nick start blogging.