Alexa Can Truly Give Patients a Voice in Their Health Care (Part 3 of 3)

Posted on October 20, 2017 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Earlier parts of this article set the stage for understanding what the Alexa Diabetes Challenge is trying to achieve and how some finalists interpreted the mandate. We examine three more finalists in this final section.

DiaBetty from the University of Illinois-Chicago

DiaBetty focuses on a single, important aspect of diabetes: the effect of depression on the course of the disease. This project, developed by the Department of Psychiatry at the University of Illinois-Chicago, does many of the things that other finalists in this article do–accepting data from EHRs, dialoguing with the individual, presenting educational materials on nutrition and medication, etc.–but with an emphasis on asking about mood and addressing the impact that depression-like symptoms can have on behavior affecting Type 2 diabetes.

Olu Ajilore, Associate Professor and co-director of the CoNECt lab, told me that his department benefited greatly from close collaboration with bioengineering and computer science colleagues who, before DiaBetty, worked on another project that linked computing with clinical needs. Although the team used some built-in capabilities of Alexa, they may move to Lex or another AI platform and build a stand-alone device. Their next step is to design reliable clinical trials, checking the effect of DiaBetty on health outcomes such as medication compliance, visits, and blood sugar levels, as well as on cost reductions.

T2D2 from Columbia University

Just as DiaBetty explores the impact of mood on diabetes, T2D2 (which stands for “Taming Type 2 Diabetes, Together”) focuses on nutrition. Far more than sugar intake is involved in the health of people with diabetes. Elliot Mitchell, a PhD student who led the T2D2 team under Assistant Professor Lena Mamykina in the Department of Biomedical Informatics, told me that the balance of macronutrients (carbohydrates, fat, and protein) is important.

T2D2 is currently a prototype, developed as a combination of an Alexa Skill and a chatbot based on Lex. The Alexa Skills Kit handles voice interactions. Both the Skill and the chatbot communicate with a back end that handles accounts and logic. Although the project draws on related Columbia University technology for diabetes self-management, both the NLP and the voice interface were developed specifically for the Alexa Diabetes Challenge. The T2D2 team included people from the disciplines of human-computer interaction, data science, nursing, and behavioral nutrition.
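
To make that division of labor concrete, here is a minimal sketch of how an Alexa Skill front end could hand a spoken reading off to such a back end. The intent name, slot name, and back-end URL are illustrative assumptions, not T2D2's actual interface.

```python
# Minimal sketch of an Alexa Skill handler (AWS Lambda style) that logs a
# blood sugar reading and forwards it to a hypothetical T2D2-style back end.
# Intent and slot names ("LogBloodSugarIntent", "GlucoseValue") and the
# back-end URL are illustrative assumptions, not T2D2's real interface.
import json
import urllib.request

BACKEND_URL = "https://example.com/api/readings"  # hypothetical back end

def lambda_handler(event, context):
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "LogBloodSugarIntent"):
        value = request["intent"]["slots"]["GlucoseValue"]["value"]
        # Forward the reading to the back end that owns accounts and logic.
        payload = json.dumps({"user": event["session"]["user"]["userId"],
                              "glucose_mg_dl": value}).encode("utf-8")
        req = urllib.request.Request(BACKEND_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)
        speech = f"I logged a blood sugar of {value}."
    else:
        speech = "Tell me your blood sugar or what you ate."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```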

The user invokes Alexa to tell it blood sugar values and the contents of meals. T2D2, in response, offers recipe recommendations and other advice. Like many of the finalists in this article, it looks back at meals over time, sees how combinations of nutrients matched changes in blood sugar, and personalizes its food recommendations.

For each patient, before it gets to know that patient’s diet, T2D2 can make food recommendations based on what is popular in their ZIP code. It can adjust these as it watches the patient’s choices and records comments on its recommendations (for instance, “I don’t like that food”).
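
A toy sketch of that cold-start approach might look like the following; the ZIP-code data, scoring, and feedback weights are invented for illustration and are not T2D2's algorithm.

```python
# Cold-start idea: recommend foods popular in the patient's ZIP code, then
# adjust as the patient reacts. Data and scoring are invented for illustration.
from collections import defaultdict

ZIP_POPULARITY = {                      # hypothetical aggregate data
    "10032": {"lentil soup": 0.8, "grilled chicken salad": 0.7, "oatmeal": 0.6},
}

class FoodRecommender:
    def __init__(self, zip_code):
        self.scores = dict(ZIP_POPULARITY.get(zip_code, {}))
        self.feedback = defaultdict(float)

    def record_comment(self, food, liked):
        # "I don't like that food" pushes the item down; praise pushes it up.
        self.feedback[food] += 0.3 if liked else -0.5

    def recommend(self, n=2):
        ranked = sorted(self.scores,
                        key=lambda f: self.scores[f] + self.feedback[f],
                        reverse=True)
        return ranked[:n]

rec = FoodRecommender("10032")
rec.record_comment("lentil soup", liked=False)
print(rec.recommend())   # salad and oatmeal now outrank lentil soup
```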

Data is also anonymized and aggregated for both recommendations and future research.

The care team and family caregivers are also involved, although less intensely than in some of the other finalists’ designs. The patient can give caregivers a one-page report with a plot of blood sugar by time and day for the previous two weeks, along with goals, progress made, and questions. The patient can also connect her account and share key medical information with family and friends, a feature called the Supportive Network.

The team’s next phase is to run studies that evaluate some of the assumptions they made when developing T2D2, and to improve it for eventual release into the field.

Sugarpod from Wellpepper

I’ll finish this article with the winner of the challenge, already covered by an earlier article. Since that article’s publication, according to Wellpepper founder and CEO Anne Weiler, the company has integrated some of Sugarpod’s functions into a bathroom scale. When a person stands on the scale, it takes an image of their feet and uploads it to sites that both the individual and their doctor can view. A machine learning image classifier can check the photo for problems such as diabetic foot ulcers. The scale interface can also ask the patient for quick information, such as whether they took their medication and what their blood sugar is. Extended conversations are avoided, under the assumption that people don’t want to have them in the bathroom. The company designed its experiences to be integrated throughout the person’s day: stepping on the scale and answering a few questions in the morning, interacting with the care plan on a mobile device at work, and checking notifications and messages with an Echo device in the evening.
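
As a rough illustration of the image-classification step, the sketch below runs a foot photo through a binary classifier. The model file, architecture choice, and preprocessing are assumptions; Wellpepper's actual classifier is not public.

```python
# Sketch of the image-classification step described above: a scale photographs
# the feet and a binary classifier flags possible ulcers. The weights file
# "foot_ulcer_resnet18.pt" and the preprocessing are illustrative assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # [no ulcer, ulcer]
model.load_state_dict(torch.load("foot_ulcer_resnet18.pt"))  # hypothetical weights
model.eval()

def flag_foot_image(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)[0]
    # A high score would be surfaced to both the patient and the clinician.
    return {"ulcer_probability": float(probs[1])}
```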

Any machine that takes pictures can arouse worry when installed in a bathroom. During the challenge, after talking with people with diabetes, Wellpepper added a light that goes on whenever the camera is taking a picture.

This kind of responsiveness to patient representatives in the field will determine the success of each of the finalists in this challenge. They all strive for behavioral change through connected health, and this strategy is completely reliant on engagement, trust, and collaboration by the person with a chronic illness.

The potential of engagement through voice is just beginning to be tapped. There is evidence, for instance, that serious illnesses can be diagnosed by analyzing voice patterns. As we come up on the annual Connected Health Conference this month, I will be interested to see how many participating developers share the common themes that turned up during the Alexa Diabetes Challenge.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 2 of 3)

Posted on October 19, 2017 | Written By Andy Oram

The first part of this article introduced the problems of computer interfaces in health care and mentioned some current uses for natural language processing (NLP) for apps aimed at clinicians. I also summarized the common goals, problems, and solutions I found among the five finalists in the Alexa Diabetes Challenge. This part of the article shows the particular twist given by each finalist.

My GluCoach from HCL America in Partnership With Ayogo

There are two levels from which to view My GluCoach. On one level, it’s an interactive tool exemplifying one of the goals I listed earlier–intense engagement with patients over daily behavior–as well as the theme of comprehensiveness. Abhishek Shankar, a Vice President at HCL Technologies America, divided the interactions that My GluCoach offers into three types (a brief routing sketch follows the list):

  • Teacher: the service can answer questions about diabetes and pull up stored educational materials.

  • Coach: the service can track behavior by interacting with devices and prompt the patient to eat differently or go out for exercise. In addition to asking questions, a patient can set up Alexa to deliver alarms at particular times, a feature My GluCoach uses to deliver advice.

  • Assistant: the service can provide conveniences to the patient, such as ordering a cab to take her to an appointment.
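
Here is the routing sketch mentioned above: a minimal dispatcher that maps voice intents to the three roles. The intent names and handler bodies are illustrative assumptions, not HCL's actual skill definition.

```python
# Minimal sketch of routing Alexa intents to the Teacher/Coach/Assistant roles.
# Intent names, slot shapes, and responses are invented for illustration.
def teacher(question):
    return "Here is an overview of how carbohydrates affect blood sugar."

def coach(activity_summary):
    steps = activity_summary.get("steps", 0)
    return "Nice walk today!" if steps >= 5000 else "How about a short walk after dinner?"

def assistant(appointment):
    return f"I have requested a ride for your {appointment} appointment."

HANDLERS = {
    "AskDiabetesQuestionIntent": lambda slots: teacher(slots["question"]),
    "DailyCheckInIntent":        lambda slots: coach(slots["activity"]),
    "BookRideIntent":            lambda slots: assistant(slots["appointment"]),
}

def handle_intent(name, slots):
    handler = HANDLERS.get(name)
    return handler(slots) if handler else "Sorry, I can't help with that yet."

print(handle_intent("DailyCheckInIntent", {"activity": {"steps": 6200}}))
```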

On a higher level, My GluCoach fits into broader services offered to health care institutions by HCL Technologies as part of a population health program. In creating the service HCL partnered with Ayogo, which develops a mobile platform for patient engagement and tracking. HCL has also designed the service as a general health care platform that can be expanded over the next six to twelve months to cover medical conditions besides diabetes.

Interactions with outside data and the use of machine learning, another theme I discussed earlier, are key to My GluCoach. For its demo at the challenge, My GluCoach took data about exercise from a Fitbit. It can potentially work with any device that shares information, and HCL plans to integrate the service with common EHRs. As My GluCoach gets to know the individual who uses it over months and years, it can tailor its responses more and more intelligently to the learning style and personality of the patient.
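
As a sketch of that kind of device integration, the snippet below pulls a day's activity summary from the Fitbit Web API and turns it into a coaching prompt. The endpoint reflects Fitbit's public API as I understand it; token handling, error handling, and the advice rule are simplified assumptions.

```python
# Pull a day's activity summary from Fitbit so coaching logic can react to it.
# OAuth token acquisition is omitted; the advice rule is a toy assumption.
import datetime
import json
import urllib.request

ACCESS_TOKEN = "user-oauth-token"   # obtained via OAuth 2.0, not shown here

def daily_steps(date=None):
    date = date or datetime.date.today().isoformat()
    url = f"https://api.fitbit.com/1/user/-/activities/date/{date}.json"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        summary = json.load(resp)["summary"]
    return summary.get("steps", 0)

def exercise_prompt(steps):
    return ("You are ahead of your goal today." if steps >= 7000
            else "A ten-minute walk would help you reach today's goal.")
```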

Patterns of eating, medical compliance, and other data are not the only input to machine learning. Shankar pointed out that different patients require different types of interventions. Some simply want to be given concrete advice and told what to do. Others want to be presented with information and then make their own decisions. My GluCoach will hopefully adapt to whatever style works best for the particular individual. This affective response–together with a general tone of humor and friendliness–will win the trust of the individual.

PIA from Ejenta

PIA, which stands for “personal intelligent agent,” manages care plans, delivering information to the affected patients as well as their care teams and concerned relatives. It collects medical data and draws conclusions that allow it to generate alerts if something seems wrong. Patients can also ask PIA how they are doing, and the agent will respond with personalized feedback and advice based on what the agent has learned about them and their care plan.

I talked to Rachna Dhamija, who worked on the team that developed PIA as the founder and CEO of Ejenta. (The name Ejenta is a version of the word “agent” that entered the Bengali language as slang.) She said that the AI technology had been licensed from NASA, which had developed it to monitor astronauts’ health and other aspects of flights. Ejenta helped turn it into a care coordination tool, with web and mobile interfaces, used at a major HMO to treat patients with chronic heart failure and high-risk pregnancies. Ejenta expanded the platform to include an Alexa interface for the diabetes challenge.

As a care management tool, PIA records targets such as glucose levels, goals, medication plans, nutrition plans, and action parameters such as how often to take measurements using the devices. Each caregiver, along with the patient, has his or her own agent, and caregivers can monitor multiple patients. The patient has very granular control over sharing, telling PIA which kinds of data can be sent to each caregiver. Access rights must be set on the web or a mobile device, because allowing Alexa to be used for that purpose might let someone trick the system into thinking he was the patient.
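
A small sketch of that granular sharing control might look like this; the data types and names are illustrative, and this is not Ejenta's actual data model.

```python
# The patient grants each caregiver access to specific data types, and the
# agent filters what it shares accordingly. Names and types are illustrative.
from dataclasses import dataclass, field

@dataclass
class SharingPolicy:
    # e.g. {"Dr. Lee": {"glucose", "weight"}, "spouse": {"weight"}}
    grants: dict = field(default_factory=dict)

    def allow(self, caregiver, data_type):      # set via web/mobile, never by voice
        self.grants.setdefault(caregiver, set()).add(data_type)

    def can_view(self, caregiver, data_type):
        return data_type in self.grants.get(caregiver, set())

policy = SharingPolicy()
policy.allow("Dr. Lee", "glucose")
policy.allow("spouse", "weight")
print(policy.can_view("spouse", "glucose"))   # False: never granted
```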

Besides Alexa, PIA takes data from devices (scales, blood glucose monitors, blood pressure monitors, etc.) and from EHRs in a HIPAA-compliant manner. Because the service cannot wake up Alexa, it currently delivers notifications, alerts, and reminders by sending a secure message to the provider’s agent. The provider can then contact the patient by email or mobile phone. The team plans to integrate PIA with an Alexa notifications feature in the future, so that PIA can proactively communicate with the patient via Alexa.

PIA goes beyond the standard rules for alerts, allowing alerts and reminders to be customized based on what it learns about the patient. PIA uses machine learning to discover what is normal activity (such as weight fluctuations) for each patient and to make predictions based on the data, which can be shared with the care team.
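
To illustrate the "learn what is normal, then alert" idea, here is a deliberately simple sketch that flags a weight reading far outside a patient's own recent baseline. The window and threshold are assumptions, and PIA's real models are surely more sophisticated.

```python
# Flag a new weight reading that deviates sharply from the patient's own
# recent baseline. Window size and z-score threshold are toy assumptions.
import statistics

def weight_alert(history, new_reading, window=14, z_threshold=2.5):
    recent = history[-window:]
    if len(recent) < 5:
        return None                       # not enough data to judge what is normal
    mean = statistics.mean(recent)
    stdev = statistics.pstdev(recent) or 0.1
    z = (new_reading - mean) / stdev
    if abs(z) >= z_threshold:
        return f"Weight {new_reading} lb is unusual for this patient (z={z:.1f})."
    return None

history = [182.1, 181.8, 182.4, 182.0, 181.9, 182.2, 182.3]
print(weight_alert(history, 188.5))       # flags a sudden jump for the care team
```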

The final section of this article covers DiaBetty, T2D2, and Sugarpod, the remaining finalists.

Alexa Can Truly Give Patients a Voice in Their Health Care (Part 1 of 3)

Posted on October 16, 2017 | Written By Andy Oram

The leading pharmaceutical and medical company Merck, together with Amazon Web Services, has recently been exploring the potential health impacts of voice interfaces and natural language processing (NLP) through an Alexa Diabetes Challenge. I recently talked to the five finalists in this challenge. This article explores the potential of new interfaces to transform the handling of chronic disease, and what the challenge reveals about currently available technology.

Alexa, of course, is the ground-breaking system that brings everyday voice interaction with computers into the home. Most of its uses are trivial (you can ask about today’s weather or change channels on your TV), but one must not underestimate the immense power of combining artificial intelligence with speech, one of the most basic and essential human activities. The potential of this interface for disabled or disoriented people is particularly intriguing.

The diabetes challenge is a nice focal point for exploring the more serious contribution made by voice interfaces and NLP. Because of the alarming global spread of this illness, the challenge also presents immediate opportunities that I hope the participants succeed in productizing and releasing into the field. Using the challenge’s published criteria, the judges today announced Sugarpod from Wellpepper as the winner.

This article will list some common themes among the five finalists, look at the background about current EHR interfaces and NLP, and say a bit about the unique achievement of each finalist.

Common themes

Overlapping visions of goals, problems, and solutions appeared among the finalists I interviewed for the diabetes challenge:

  • A voice interface allows more frequent and easier interactions with at-risk individuals who have chronic conditions, potentially achieving the behavioral health goal of helping a person make the right health decisions on a daily or even hourly basis.

  • Contestants seek to integrate many levels of patient intervention into their tools: responding to questions, collecting vital signs and behavioral data, issuing alerts, providing recommendations, delivering educational background material, and so on.

  • Services in this challenge go far beyond interactions between Alexa and the individual. The systems commonly anonymize and aggregate data in order to perform analytics that they hope will improve the service and provide valuable public health information to health care providers. They also facilitate communication of crucial health data between the individual and her care team.

  • Given the use of data and AI, customization is a big part of the tools. They are expected to determine the unique characteristics of each patient’s disease and behavior, and adapt their advice to the individual.

  • In addition to Alexa’s built-in language recognition capabilities, Amazon provides the Lex service for sophisticated text processing. Some contestants used Lex, while others drew on other research they had done building their own natural language processing engines.

  • Alexa never initiates a dialog, merely responding when the user wakes it up. The device can present a visual or audio notification when new material is present, but it still depends on the user to request the content. Thus, contestants use other channels to deliver reminders and alerts, such as messaging the individual’s cell phone or alerting a provider (a sketch of this workaround follows this list).

  • Alexa is not HIPAA-compliant, but may achieve compliance in the future. This would help health services turn their voice interfaces into viable products and enter the mainstream.
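
Here is the workaround sketch promised above: because Alexa does not initiate conversations, reminders are pushed through other channels, and Alexa simply answers when the user asks. The channel functions below are stubs standing in for an SMS gateway or a secure provider message.

```python
# Deliver reminders out of band, since Alexa cannot start a conversation.
# send_sms() and notify_provider() are stubs; a real system would plug in an
# SMS gateway or a secure clinical messaging service.
import datetime

def send_sms(phone, text):                # stub for an SMS gateway
    print(f"SMS to {phone}: {text}")

def notify_provider(provider_id, text):   # stub for a secure provider message
    print(f"Secure message to provider {provider_id}: {text}")

def deliver_reminder(patient, text, urgent=False):
    stamp = datetime.datetime.now().strftime("%H:%M")
    if urgent:
        notify_provider(patient["provider_id"], f"[{stamp}] {patient['name']}: {text}")
    else:
        send_sms(patient["phone"], f"[{stamp}] {text}")

patient = {"name": "Pat", "phone": "+15555550123", "provider_id": "dr-77"}
deliver_reminder(patient, "Time to check your blood sugar.")
```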

Some background on interfaces and NLP

The poor state of current computing interfaces in the medical field is no secret–in fact, it is one of the loudest and most insistent complaints by doctors on sites like KevinMD. You can visit Healthcare IT News or JAMA regularly and read the damning indictments.

Several factors can be blamed for this situation, including unsophisticated electronic health records (EHRs) and arbitrary reporting requirements from the Centers for Medicare & Medicaid Services (CMS). Natural language processing may provide one of the technical solutions to these problems. The NLP services by Nuance are already famous. An encouraging study finds substantial time savings from using NLP to enter doctors’ insights. And on the other end–where doctors are searching the notes they previously entered for information–a service called Butter.ai uses NLP for intelligent searches. Unsurprisingly, the American Health Information Management Association (AHIMA) looks forward to the contributions of NLP.
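
As a toy example of NLP-assisted search over previously entered notes (in the spirit of, but not based on, the services mentioned above), the sketch below ranks notes against a free-text query with TF-IDF similarity.

```python
# Rank clinical notes against a free-text query with TF-IDF similarity.
# The notes are invented; real services use far richer clinical NLP.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = [
    "Patient reports improved morning glucose after metformin dose change.",
    "Foot exam: small callus left heel, no ulceration, sensation intact.",
    "Discussed carbohydrate counting and referred to nutritionist.",
]

def search_notes(query, top_k=1):
    vec = TfidfVectorizer().fit(notes + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(notes))[0]
    ranked = sorted(zip(scores, notes), reverse=True)
    return [note for _, note in ranked[:top_k]]

print(search_notes("foot exam findings"))   # surfaces the foot exam note
```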

Some app developers are now exploring voice interfaces and NLP on the patient side. I covered two such companies, including the one that ultimately won the Alexa Diabetes Challenge, in another article. In general, developers using these interfaces hope to eliminate the fuss and abstraction in health apps that frustrate many consumers, thereby reaching new populations and interacting with them more frequently, with deeper relationships.

The next two parts of this article turn to each of the five finalists, to show the use they are making of Alexa.

We Don’t Use the Context We Have in Healthcare

Posted on June 1, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I was recently looking at all the ways consumer technology has been using the context of our lives to make things better. Some obvious examples are things like Netflix which knows what shows we watch and recommends other shows that we might enjoy. Amazon knows what we’ve bought before and what we’re searching for and can use those contexts to recommend other things that we might want to consider. I know I’ve used that feature a lot to evaluate which item was the best for me to purchase on Amazon.

Everywhere we turn in our consumer lives, our context is being used to provide a better experience. Sometimes this shows up in creepy ways, like the time a certain cleaning product was mentioned in my kitchen and then I saw an ad for it on a website I was visiting. Was it just coincidence, or did Alexa hear me talking about it and then make the recommendation to buy based on that data? Yes, some of this stuff can be a bit creepy and even concerning. However, I personally love the era of personalization, which generally makes our lives better.

While this is happening everywhere in our personal lives, healthcare has been slow to adopt similar technologies. Far too often we’re treated in healthcare without taking the context of our needs into account. Sometimes this is as simple as a healthcare provider not taking time to look at the chart. Other times we deny a patient’s request that we add their medical records to our own record, or we store them in a place where no one will ever actually access them.

Those are just the basic ways we fail to use context to better serve patients. More advanced ways are when we deny patients the opportunity to share their patient-generated health data, or we don’t use the health data they’re providing. Many people are working on pushing out social data, which can provide a lot of context into why a patient is experiencing health issues or how we could better treat them. This is only going to grow larger, but we’re doing a poor job of finding ways to seamlessly incorporate this data into the care that’s being provided.

One of the big challenges of AI is that it has a hard time understanding context. However, humans have a unique ability to include context in the decisions they make. Our interfaces should take this into account so that humans have the information they need to be able to make the proper contextual decisions. At least until the robots get smart enough to do it themselves.

Have you seen other places where healthcare didn’t use the context of the situation and should have used it? How about examples where we use context very effectively?

The Perfect EHR Workflow – Video EHR

Posted on May 12, 2016 | Written By John Lynn

I’ve been floating this idea out there for years (2006 to be exact), but I’d never put it together in one consolidated post that I could point to when talking about the concept. I call it the Video EHR and I think it could be the solution to many of our current EHR woes. I know that many of you will think it’s a bit far fetched and in some ways it is. However, I think we’re culturally and technically almost to the point where the video EHR is a feasible opportunity.

The concept is very simple. Put video cameras in each exam room and have those videos replace your EHR.

Technical Feasibility
Of course there are some massive technical challenges to make this a reality. However, the cost of everything related to this idea has come down significantly. The cost of HD video cameras is negligible. Video storage is extremely cheap and getting cheaper every day. Bandwidth is cheaper and higher quality, with plenty of room to grow as more cities get fiber connectivity. If this were built on the internal network instead of the cloud, bandwidth would be an easily solved issue.

When talking costs, it’s important to note that there would be increased costs over the current documentation solutions. No one is putting in high quality video cameras and audio equipment to record their visits today. Not to mention wiring the exam room so that it all works. So, this would be an added cost.

Otherwise, the technology is all available today. We can easily record, capture and process HD video and even synchronize it across multiple cameras, etc. None of this is technically a challenge. Voice recognition and NLP have progressed significantly so you could process the audio file and convert it into granular data elements that would be needed for billing, clinical decision support, advanced care, population health, etc. These would be compiled into a high quality presentation layer that would be useful for providers to consume data from past visits.
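
To illustrate the "audio to granular data elements" step, here is a toy sketch that assumes the speech has already been transcribed and uses simple extraction rules as a stand-in for real clinical NLP. The patterns and fields are invented for illustration.

```python
# Pull structured values out of a visit transcript with simple rules, as a
# stand-in for real clinical NLP. Transcript, patterns, and fields are toys.
import re

transcript = ("Blood pressure today is 128 over 82, weight 191 pounds. "
              "Let's continue metformin 500 milligrams twice daily.")

def extract_elements(text):
    elements = {}
    bp = re.search(r"(\d{2,3})\s+over\s+(\d{2,3})", text)
    if bp:
        elements["blood_pressure"] = f"{bp.group(1)}/{bp.group(2)}"
    wt = re.search(r"weight\s+(\d{2,3})\s+pounds", text, re.I)
    if wt:
        elements["weight_lb"] = int(wt.group(1))
    med = re.search(r"(metformin)\s+(\d+)\s+milligrams", text, re.I)
    if med:
        elements["medication"] = {"name": med.group(1), "dose_mg": int(med.group(2))}
    return elements

print(extract_elements(transcript))
# {'blood_pressure': '128/82', 'weight_lb': 191,
#  'medication': {'name': 'metformin', 'dose_mg': 500}}
```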

Facial recognition technology has also progressed to the point that we could use these videos to help address the patient identification and patient matching problems that plague healthcare today. We’d have to find the right balance between trusting the technology and human verification, but it would be much better and likely more convenient than what we have today.
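
A minimal sketch of that face-matching idea: compare an embedding computed from a camera frame against enrolled patient embeddings, and fall back to human verification below a confidence threshold. The embeddings are assumed to come from some off-the-shelf face-recognition model.

```python
# Match a camera-frame face embedding against enrolled patients, with a
# confidence threshold that falls back to human verification. Embedding
# values are toy numbers; a real system would use a face-recognition model.
import numpy as np

ENROLLED = {                          # patient_id -> stored face embedding
    "patient-001": np.array([0.12, 0.88, 0.45, 0.10]),
    "patient-002": np.array([0.77, 0.05, 0.31, 0.62]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(frame_embedding, threshold=0.95):
    best_id, best_score = max(
        ((pid, cosine(frame_embedding, emb)) for pid, emb in ENROLLED.items()),
        key=lambda pair: pair[1])
    if best_score >= threshold:
        return best_id                    # confident automatic match
    return None                           # fall back to human verification

print(identify(np.array([0.13, 0.86, 0.44, 0.11])))   # matches patient-001
```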

Imagine the doctor walking into the exam room, where the video cameras have already identified the patient and identify the doctor as she walks in. Then the patient’s medical record could be automatically pulled up on the doctor’s tablet, available as she’s ready to see the patient.

Plus, does the doctor even need a tablet at all? Could they instead use big digital signs on the walls, voice controlled by a Siri- or Alexa-like AI solution? I can already hear, “Alexa, pull up John Lynn’s cholesterol lab results for the past year.” Next thing you know, a nice chart of my cholesterol appears on the big screen for both doctor and patient to see.

Feels pretty far fetched, but all of the technology I describe is already here. It just hasn’t been packaged in a way that makes sense for this application.

Pros
Ideal Workflow for Providers – I can think of no better workflow for a doctor or nurse. Assuming the tech works properly (and that’s a big assumption I’ll discuss in the cons), the provider walks into the exam room and engages with the patient. Everything is documented automatically. Since it’s video, I mean literally everything would be documented automatically. The providers would just focus on engaging with the patient, learning about their health challenges, and addressing their issues.

Patient Experience – I’m pretty sure patients wouldn’t know what to do if their doctor or nurse was solely focused on them and wasn’t stuck with their head in a chart or in their screen. It would totally change patients’ relationship with their doctors.

Reduced Liability – Since you would literally have a multi-angle video and audio recording of the visit, you’d have the proof you need to show that you offered specific instructions or warned of certain side effects; any number of medical malpractice issues could be resolved by a quick look at the video from the visit. The truth will set you free, and you’d literally have the truth about what happened during the visit on video.

No Click Visit – This really is part of the “Ideal Workflow” section, but it’s worth pointing out all the things that providers do today to document in their EHR. The biggest complaint is the number of clicks a doctor has to do. In the video EHR world where everything is recorded and processed to document the visit you wouldn’t have any clicks.

Ergonomics – I’ve been meaning to write a series of posts on the health consequences doctors are experiencing thanks to EHR software. I know many who have reported major back trouble due to time spent hunched over their computer documenting in the EHR. You can imagine the risk of carpal tunnel and other hand and wrist issues that are bound to come up. All of this gets resolved if the doctor literally walks into the exam room and just sees the patient. Depending on how the Video EHR is implemented, the doctor might have to still spend time verifying the documentation or viewing past documentation. However, that could most likely be done on a simple tablet or even using a “Siri”-like voice implementation which is much better ergonomically.

Learning – In mental health this happens all the time. Practicum students are recorded giving therapy, and then a seasoned counselor advises them on how they did. No doubt we could see some of the same learning benefits in a medical practice. Sometimes that would come through peer review, but also just from the mere fact of a doctor watching themselves on camera.

Cons
Privacy – The biggest fear with this idea is that most people think this is or could be a major privacy issue. They usually ask the question, “Will patients feel comfortable doing this?” On the privacy front, I agree that video is more personal than granular data elements. So, the video EHR would have to take extreme precautions to ensure the privacy and security of these videos. However, from an impact standpoint, it wouldn’t be that much different than granular health information being breached. Plus, it’s much harder to breach a massive video file being sent across the wire than a few granular text data elements. No doubt, privacy and security would be a challenge, but it’s a challenge today as well. I don’t think video would be that much more significant.

As to the point of whether patients would be comfortable with a video in the exam room, no doubt there would need to be a massive culture shift. Some may never reach the point that they’re comfortable with it. However, think about telemedicine. What are patients doing in telemedicine? They’re essentially having their patient visit on video, streamed across the internet and a lot of society is very comfortable with it. In fact, many (myself included) wish that telemedicine were more widely available. No doubt telemedicine would break down the barriers when it comes to the concept of a video EHR. I do acknowledge that a video EHR takes it to another level and they’re not equal. However, they are related and illustrate that people’s comfort in having their medical visits on video might not be as far fetched as it might seem on the surface.

Turns out that doctors will face the same culture shift challenge as patients and they might even be more reluctant than patients.

Trust – I believe this is currently the biggest challenge with the concept of a video EHR. Can providers trust that the video and audio will be captured? What happens if it fails to capture? What happens if the quality of the video or audio isn’t very good? What if the voice recognition or NLP isn’t accurate and something bad happens? How do we ensure that everything that happens in the visit is captured accurately?

Obviously there are a lot of challenges associated with ensuring the video EHR’s ability to capture and document the visit properly. If it doesn’t, it will lose providers’ and patients’ trust and it will fail. However, it’s worth remembering that we don’t necessarily need it to be perfect. We just need it to be better than our current imperfect status quo. We also need to design the video EHR to avoid making mistakes and to warn about possibly missing information so that it can be addressed properly. No doubt this would be a monumental challenge.

Requires New Techniques – A video EHR would definitely require modifications in how a provider sees a patient. For example, there may be times when a patient or the doctor needs to be positioned a certain way to ensure the visit gets documented properly. You can imagine one of the cameras being a portable camera used for close-up shots of rashes or other medical issues so that they’re documented properly.

No doubt providers would have to learn new techniques on what they say in the exam room to make sure that things are documented properly. Instead of just thinking something, they’ll have to ensure that they speak clinical orders, findings, diagnosis, etc. We could have a long discussion on the impact for good and bad of this type of transparency.

Double Edged Sword of Liability – While reduced liability is a pro, liability could also be a con for a video EHR. Having the video of a medical visit can set you free, but it can also be damning as well. If you practice improper medicine, you won’t have anywhere to hide. Plus, given our current legal environment, even well intentioned doctors could get caught in challenging situations if the technology doesn’t work quite right or the video is taken out of context.

Reality Check
I realize this is a massive vision with a lot of technical and cultural challenges that would need to be overcome. Then again, when I first came up with the idea of a video EHR roughly 10 years ago, it was even more far-fetched. Since then, so many things have come into place that make this idea seem much more reasonable.

That said, I’m realistic that a solution like this would likely start with some sort of half and half solution. The video would be captured, but the provider would need to verify and complete the documentation to ensure its accuracy. We couldn’t just trust the AI engine to capture everything and be 100% accurate.

I’m also interested in watching the evolution of remote scribes. In many ways, a remote scribe is a human doing the work of the video EHR AI engine. It’s an interesting middle ground which could illustrate the possibilities and also be a small way to make patients and providers more comfortable with cameras in the exam room.

I do think our current billing system and things like meaningful use (or now MACRA) are still a challenge for a video EHR. The documentation requirements for these programs are brutal and could make the video EHR workflow lose its luster. Could it be done to accommodate the current documentation requirements? Certainly, but it might take some of the polish off the solution.

There you have it. My concept for a video EHR. What do you think of the idea? I hope you tear it up in the comments.

AI (Artificial Intelligence) in EMR Software

Posted on December 2, 2011 | Written By John Lynn

Today I had an interesting conversation with MEDENT. It’s an EHR company that’s in only 8 states. I could actually write a whole post on just their approach to EMR software and the EMR market in general. They take a pretty unique approach to the market. They’ve exercised some restraint in their approach that I haven’t seen from many other EHR vendors. I’ll be interested to see how that plays out for them.

Their market approach aside, I was really intrigued by their approach to dealing with ICD-10. They actually described their approach to ICD-10 as similar to how Google handled search: there’s all this information out there (or, you could say, all these new codes), so they wanted to build a simple interface that would be able to easily and naturally filter the information (or, in this case, the codes). A unique way of looking at the challenge of so many new ICD-10 codes.

However, that was just the base use case; it didn’t yet include what I consider applying AI (Artificial Intelligence) to really improve a user interface. The simple example they gave had to do with collecting data from their users about what they typed and which codes they actually selected. This real-time feedback is then fed back into the algorithm to improve how quickly you can get to the code you’re actually trying to find.
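
A toy sketch of that feedback loop: a basic ICD-10 text search whose ranking is boosted by the codes users actually picked for a given query. The code list and boost weights are invented for illustration, and this is not MEDENT's algorithm.

```python
# Basic ICD-10 text search whose ranking is boosted by which codes users
# actually selected for a query. Codes, weights, and scoring are illustrative.
from collections import defaultdict

ICD10 = {
    "E11.9":  "Type 2 diabetes mellitus without complications",
    "E11.65": "Type 2 diabetes mellitus with hyperglycemia",
    "E10.9":  "Type 1 diabetes mellitus without complications",
}

selection_counts = defaultdict(lambda: defaultdict(int))  # query -> code -> picks

def record_selection(query, code):
    selection_counts[query.lower()][code] += 1

def search(query):
    q = query.lower()
    def score(item):
        code, desc = item
        text_hits = sum(tok in desc.lower() for tok in q.split())
        return text_hits + 2 * selection_counts[q][code]   # feedback boost
    return [code for code, _ in sorted(ICD10.items(), key=score, reverse=True)]

record_selection("type 2 diabetes", "E11.9")
record_selection("type 2 diabetes", "E11.9")
print(search("type 2 diabetes")[0])   # E11.9 rises to the top for this query
```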

One interesting thing about incorporating this feedback from actual user experiences is that you could even create a customized personal experience in the EMR. In fact, that’s basically what Google has done with search personalization (i.e., when you’re logged in, it knows your search history and details, so it can personalize the search results for you). When you start personalizing, though, you still have to make sure that the out-of-the-box experience is good. Plus, in healthcare you could do some great personalization around specialties as well, which could be really beneficial.

I’d heard something similar from NextGen at the user group meeting applied to coding. The idea of tracking user behavior and incorporating those behaviors into the intelligence of the EMR is a fascinating subject to me. I just wonder how many other places in an EMR these same principles can apply.

I see these types of movements as part of the larger move to “Smart EMR Software.”