
Machine Learning, Data Science, AI, Deep Learning, and Statistics – It’s All So Confusing

Posted on November 30, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

It seems like these days every healthcare IT company out there is saying they’re doing machine learning, AI, deep learning, etc. So many companies are using these terms that they’ve started to lose meaning. The problem is that people are using these labels regardless of whether they really apply. Plus, we all have different definitions for these terms.

In my own search to understand the differences, I found this great tweet from Ronald van Loon that looks at this world and tries to better define it:

In that tweet, Ronald also links to an article that looks at some of the differences. I liked this part he took from Quora:

  • AI (Artificial intelligence) is a subfield of computer science that was created in the 1960s, and it was (is) concerned with solving tasks that are easy for humans, but hard for computers. In particular, a so-called Strong AI would be a system that can do anything a human can (perhaps without purely physical things). This is fairly generic, and includes all kinds of tasks, such as planning, moving around in the world, recognizing objects and sounds, speaking, translating, performing social or business transactions, creative work (making art or poetry), etc.
  • Machine learning is concerned with one aspect of this: given some AI problem that can be described in discrete terms (e.g. out of a particular set of actions, which one is the right one), and given a lot of information about the world, figure out what is the “correct” action, without having the programmer program it in. Typically some outside process is needed to judge whether the action was correct or not. In mathematical terms, it’s a function: you feed in some input, and you want it to produce the right output, so the whole problem is simply to build a model of this mathematical function in some automatic way. To draw a distinction with AI, if I can write a very clever program that has human-like behavior, it can be AI, but unless its parameters are automatically learned from data, it’s not machine learning.
  • Deep learning is one kind of machine learning that’s very popular now. It involves a particular kind of mathematical model that can be thought of as a composition of simple blocks (function composition) of a certain type, and where some of these blocks can be adjusted to better predict the final outcome.

Is that clear for you now? Would you suggest different definitions? Where do you see people using these terms correctly and where do you see them using them incorrectly?
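If it helps to see that distinction in code, here’s a minimal sketch in Python using scikit-learn. The toy features, labels, and “risk” framing are invented purely for illustration; the point is simply that the model’s parameters are learned from example data rather than programmed by hand, which is the line the Quora answer draws between AI in general and machine learning.

```python
# A toy sketch of machine learning as a "learned function": the model's parameters
# are fit from example data rather than programmed by hand. The features, labels,
# and risk framing below are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: [age, number of chronic conditions] -> 1 if "high risk", else 0
X = [[34, 0], [51, 2], [67, 4], [45, 1], [72, 5], [29, 0]]
y = [0, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                    # parameters are learned from the data

print(model.predict([[60, 3]]))    # the learned function maps a new input to an output
```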

Searching EMR For Risk-Related Words Can Improve Care Coordination

Posted on September 18, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

Though healthcare organizations are working on the problem, they’re still not as good at care coordination as they should be. It’s already a problem and will only get worse under value-based care schemes, in which the ability to coordinate care effectively could be critical for providers.

Admittedly, there’s no easy way to solve care coordination problems, but new research suggests that basic health IT tools might be able to help. The researchers found that digging out important words from EMRs can help providers target patients needing extra care management and coordination.

The article, which appears in JMIR Medical Informatics, notes that most care coordination programs have a blind spot when it comes to identifying cases demanding extra coordination. “Care coordination programs have traditionally focused on medically complex patients, identifying patients that qualify by analyzing formatted clinical data and claims data,” the authors wrote. “However, not all clinically relevant data reside in claims and formatted data.”

For example, they say, relying on formatted records may cause providers to miss psychosocial risk factors such as social determinants of health, mental health disorders, and substance abuse disorders. “[This data is] less amenable to rapid and systematic data analyses, as these data are often not collected or stored as formatted data,” the authors note.

To address this issue, the researchers set out to identify psychosocial risk factors buried within a patient’s EHR using word recognition software. They used a tool known as the Queriable Patient Inference Dossier (QPID) to scan EHRs for terms describing high-risk conditions in patients already in care coordination programs.

After going through the review process, the researchers found 22 EHR-available search terms related to psychosocial high-risk status. Finding nine or more of these terms in a patient’s EHR predicted that the patient would meet criteria for participation in a care coordination program. Presumably, this approach allowed care managers and clinicians to find patients who hadn’t been identified by existing care coordination outreach efforts.
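To make the approach concrete, here’s a rough sketch of the kind of keyword scan the study describes. The nine-term threshold comes from the article; the handful of terms below are invented stand-ins rather than the study’s actual 22 search terms (with a full 22-term list the threshold is reachable), and QPID itself is far more capable than a simple substring match.

```python
# A rough sketch of the keyword-scanning idea described above. The nine-term threshold
# comes from the article; the terms listed here are invented stand-ins, not the study's
# actual 22 search terms (the real list of 22 makes a nine-term threshold reachable).
RISK_TERMS = {
    "homeless", "substance abuse", "depression", "anxiety",
    "food insecurity", "domestic violence", "missed appointments",
}
THRESHOLD = 9  # flag patients whose notes contain nine or more distinct risk terms

def flag_for_care_coordination(notes: str, terms=RISK_TERMS, threshold=THRESHOLD) -> bool:
    text = notes.lower()
    matched = {term for term in terms if term in text}
    return len(matched) >= threshold
```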

I think this article is valuable, as it outlines a way to improve care coordination programs without leaping over tall buildings. Obviously, we’re going to see a lot more emphasis on harvesting information from unstructured data with tools like artificial intelligence and natural language processing. That makes sense. After all, these technologies allow healthcare organizations to enjoy both the clear organization of structured data and the analytical options available when examining pure data sets. You can have your cake and eat it too.


Still, it’s good to know that you can get meaningful information from EHRs using a comparatively simple tool. In this case, parsing patient medical records for a couple dozen keywords helped the authors find patients that might have otherwise been missed. This can only be good news.

Yes, there’s no doubt we’ll keep on pushing the limits of predictive analytics, healthcare AI, machine learning and other techniques for taming wild databases. In the meantime, it’s good to know that we can make incremental progress in improving care using simpler tools.

More About Artificial Intelligence in Healthcare – #HITsm Chat Topic

Posted on August 8, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

We’re excited to share the topic and questions for this week’s #HITsm chat happening Friday, 8/11 at Noon ET (9 AM PT). This week’s chat will be hosted by Prashant Natarajan (@natarpr) on the topic of “More About Artificial Intelligence in Healthcare.” Be sure to also check out Prashant’s HIMSS best-selling book Demystifying Big Data and Machine Learning for Healthcare to learn about his perspectives and insights into the topic.

Healthcare transformation requires us to continually look at new and better ways to manage insights, both within and outside the organization. Increasingly, the ability to glean and operationalize new insights efficiently as a byproduct of an organization’s day-to-day operations is becoming vital to hospitals’ and health systems’ ability to survive and prosper. One of the long-standing challenges in healthcare informatics has been the ability to deal with the sheer variety and volume of disparate healthcare data and the increasing need to derive veracity and value out of it.

The potential for big data in healthcare – especially given the trends discussed earlier – is as bright as in any other industry. The benefits that big data analytics, AI, and machine learning can provide for healthier patients, happier providers, and cost-effective care are real. The future of precision medicine, population health management, clinical research, and financial performance will include an increased role for machine-analyzed insights, discoveries, and all-encompassing analytics.

This chat explores participants’ thoughts and feelings about the future of artificial intelligence in the healthcare industry and how healthcare organizations might leverage artificial intelligence to discover new business value, use cases, and knowledge.

Note: For the purposes of this chat, “artificial intelligence” can mean predictive analytics, machine learning, big data analytics, natural language processing, and contextually intelligent agents.

Questions we will explore in this week’s #HITsm chat include:
T1: What words or short phrases convey your current thoughts & feelings about ‘artificial intelligence’ in the healthcare space? #HITsm #AI

T2: What are big & small steps healthcare can take to leverage big data & machine learning for population health & personalized care? #HITsm

T3: Which areas of healthcare might be most positively impacted by artificial intelligence? #HITsm #AI

T4: What are some areas within healthcare that will likely NOT be improved or replaced by artificial intelligence? #HITsm #AI

T5: What lessons learned from early days of ‘advanced analytics’ must not be forgotten as use of artificial intelligence expands? #HITsm #AI

Bonus: How is your organization preparing for the application and use of artificial intelligence in healthcare? #HITsm #AI

Upcoming #HITsm Chat Schedule
8/18 – Diversity in HIT
Hosted by Jeanmarie Loria (@JeanmarieLoria) from @advizehealth

8/25 – Consumer Data Liquidity – The Road So Far, The Road Ahead
Hosted by Greg Meyer (@Greg_Meyer93)

We look forward to learning from the #HITsm community! As always, let us know if you’d like to host a future #HITsm chat or if you know someone you think we should invite to host.

If you’re searching for the latest #HITsm chat, you can always find it and the full schedule of upcoming chats here.

Hands-On Guidance for Data Integration in Health: The CancerLinQ Story

Posted on June 15, 2017 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Institutions throughout the health care field are talking about data sharing and integration. Everyone knows that improved care, cost controls, and expanded research require institutions that hold patient data to share it safely. The American Society of Clinical Oncology’s CancerLinQ, one of the leading projects applying data analysis to find new cures, has tackled data sharing with a large number of health providers and discovered just how labor-intensive it is.

CancerLinQ fosters deep relationships and collaborations with the clinicians from whom it takes data. The platform quickly turns around results from analyzing that data, giving clinicians insights they can put to immediate use to improve the care of cancer patients. Issues in collecting, storing, and transmitting data intertwine with other discussion items around cancer care. Currently, CancerLinQ isolates the data from each institution, and de-identifies patient information in order to let it be shared among participating clinicians. CancerLinQ LLC is a wholly-owned nonprofit subsidiary of ASCO, which has registered CancerLinQ as a trademark.


Help from Jitterbit

In 2015, CancerLinQ began collaborating with Jitterbit, a company devoted to integrating data from different sources. According to Michele Hazard, Director of Healthcare Solutions, and George Gallegos, CEO, their company can recognize data from 300 different sources, including electronic health records. At the beginning, the diversity and incompatibility of EHRs was a real barrier. It took them several months to figure out each of the first EHRs they tackled, but now they can integrate a new one quickly. Oncology care, the key data needed by CancerLinQ, is a Jitterbit specialty.


One of the barriers raised by EHRs is licensing. The vendor has to “bless” direct access to the EHR and to data imported from external sources. HIPAA and licensing agreements also make tight security a priority.

Another challenge in processing the data is finding records held by different institutions and accurately matching them to the correct patient.
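As a rough illustration of that matching problem, here’s a minimal sketch of one common starting point: normalize a few demographic fields and compare records across institutions. Real record-linkage systems add probabilistic scoring, phonetic encoding, and manual review queues; nothing here reflects Jitterbit’s actual implementation.

```python
# A minimal sketch of demographic normalization as a first pass at patient matching.
# Real record-linkage systems are far more sophisticated; this only illustrates the idea.
def normalize(record: dict) -> tuple:
    return (
        record["last_name"].strip().lower(),
        record["first_name"].strip().lower()[:1],  # compare on first initial only
        record["dob"],                             # e.g. "1956-03-14"
    )

def likely_same_patient(rec_a: dict, rec_b: dict) -> bool:
    return normalize(rec_a) == normalize(rec_b)

# Records typed slightly differently at two institutions can still match.
a = {"last_name": "Smith ", "first_name": "Margaret", "dob": "1956-03-14"}
b = {"last_name": "smith", "first_name": "M.", "dob": "1956-03-14"}
print(likely_same_patient(a, b))  # True
```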

Although the health care industry is moving toward the FHIR standard, and a few EHRs already expose data through FHIR, others have idiosyncratic formats and support older HL7 standards in different ways. Many don’t even have an API yet. In some cases, Jitterbit has to export the EHR data to a file, transfer it, and unpack it to discover the patient data.
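For contrast with the export-and-unpack path, here’s a hedged sketch of what retrieving data through a standard FHIR REST API can look like. The base URL is a placeholder, and real integrations require authentication and vendor-specific configuration.

```python
# A hedged sketch of pulling data through a FHIR REST API. The endpoint is hypothetical,
# and real-world calls need authentication and vendor-specific setup.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"  # hypothetical endpoint

def fetch_patient_conditions(patient_id: str) -> list:
    """Search the Condition resource for everything recorded on one patient."""
    resp = requests.get(
        f"{FHIR_BASE}/Condition",
        params={"patient": patient_id},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]
```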

Lack of structure

Jitterbit had become accustomed to looking in different databases to find patient information, even when EHRs claimed to support the same standard. One doctor may put key information under “diagnosis” while another enters it under “patient problems,” and doctors in the same practice may choose different locations.

Worse still, doctors often ignore the structured fields that were meant to hold important patient details and just dictate or type the information into a free-text note. CancerLinQ anticipated this, unpacking the free text through optical character recognition (OCR) and natural language processing (NLP), a branch of artificial intelligence.

It’s understandable that a doctor would avoid the use of structured fields. Just think of the position she is in, trying to keep a complex cancer case in mind while half a dozen other patients sit in the waiting room for their turn. In order to use the structured field dedicated to each item of information, she would have to first remember which field to use–and if she has privileges at several different institutions, that means keeping the different fields for each hospital in mind.

Then she has to get access to the right field, which may take several clicks and require movement through several screens. The exact information she wants to enter may or may not be available through a drop-down menu. The exact abbreviation or wording may differ from EHR to EHR as well. And to carry through a commitment to using structured fields, she would have to go through this thought process many times per patient. (CancerLinQ itself looks at 18 Quality eMeasures today, with the plan to release additional measures each year.)

Finally, what is the point of all this? Up until recently, the information would never come back in a useful form. To retrieve it, she would have to retrace the same steps she used to enter the structured data in the first place. Simpler to dump what she knows into a free-text note and move on.

It’s worth mentioning that this Babel of health care information imposes negative impacts on the billing and reimbursement process, even though the EHRs were designed to support those very processes from the start. Insurers have to deal with the same unstructured data that CancerLinQ and Jitterbit have learned to read. The intensive manual process of extracting information adds to the cost of insurance, and ultimately to the entire health care system. The recent eClinicalWorks scandal, which resembles Volkswagen’s cheating on auto emissions and will probably spill out to other EHR vendors as well, highlights the failings of health data.

Making data useful

The key to unblocking this information logjam is deriving insights from data that clinicians can immediately see will improve their interventions with patients. This is what the CancerLinQ team has been doing. They run analytics that suggest what works for different categories of patients, then return the information to oncologists. The CancerLinQ platform also explains which items of data were input to these insights, and urges the doctors to be more disciplined about collecting and storing the data. This is a human-centered, labor-intensive process that can take six to twelve months to set up for each institution. Richard Ross, Chief Operating Officer of CancerLinQ, calls the process “trench warfare,” not because it’s contentious but because it is slow and requires determination.

Of the 18 measures currently requested by CancerLinQ, one of the most critical data elements driving the calculation of multiple measures is staging information: where the cancerous tumors are and how far the cancer has progressed. Family history, treatment plan, and treatment recommendations are other examples of measures gathered.
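As a toy illustration of why staging buried in a free-text note is still recoverable, here’s a simple regular-expression pass for TNM staging. CancerLinQ’s actual pipeline relies on OCR and far more capable NLP; this only shows the basic idea of pulling one structured element out of narrative text.

```python
# A toy regular-expression pass for TNM staging in free text. CancerLinQ's real pipeline
# uses OCR and much more capable NLP; this only demonstrates the basic idea.
import re

TNM_PATTERN = re.compile(r"\b(T[0-4][a-c]?)\s*(N[0-3][a-c]?)\s*(M[01])\b", re.IGNORECASE)

def extract_tnm(note: str):
    """Return the (T, N, M) staging tuple if one appears in the note, else None."""
    match = TNM_PATTERN.search(note)
    return match.groups() if match else None

print(extract_tnm("Path consistent with invasive ductal carcinoma, staged T2 N1 M0."))
# -> ('T2', 'N1', 'M0')
```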

The data collection process has to start by determining how each practice defines a cancer patient. The CancerLinQ team builds this definition into its request for data. Sometimes they submit “pull” requests at regular intervals to the hospital or clinic, whereas other times the health care provider submits the data to them at a time of its choosing.

Some institutions enforce workflows more rigorously than others. So in some hospitals, CancerLinQ can persuade the doctors to record important information at a certain point during the patient’s visit. In other hospitals, doctors may enter data at times of their own choosing. But if they understand the value that comes from this data, they are more likely to make sure it gets entered, and that it conforms to standards. Many EHRs provide templates that make it easier to use structured fields properly.

When accepting information from each provider, the team goes through a series of steps and does a check-in with the provider at each step. The team evaluates the data at a separate stage for each criterion: completeness, accuracy of coding, the number of patients reported, and so on. By providing quick feedback, they can help the practice improve its reporting.
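Here’s a simplified sketch of what that stepwise feedback loop might look like in code. The criteria and messages are illustrative assumptions, not CancerLinQ’s actual checks.

```python
# A simplified sketch of a stepwise data-quality feedback loop. The criteria and
# messages are illustrative assumptions, not CancerLinQ's actual checks.
def check_completeness(records: list) -> str:
    missing = sum(1 for r in records if not r.get("diagnosis"))
    return f"{missing} of {len(records)} records are missing a diagnosis"

def check_patient_count(records: list, expected: int) -> str:
    return f"received {len(records)} patients; the practice reported {expected}"

def run_feedback_cycle(records: list, expected_count: int) -> list:
    # Each stage produces feedback the practice can act on before its next submission.
    return [check_completeness(records), check_patient_count(records, expected_count)]

print(run_feedback_cycle([{"diagnosis": "C50.9"}, {}], expected_count=3))
```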

The CancerLinQ/Jitterbit story reveals how difficult it is to apply analytics to health care data. Few organizations can afford the expertise they apply to extracting and curating patient data. On the other hand, CancerLinQ and Jitterbit show that effective data analysis can be done, even in the current messy conditions of electronic data storage. As the next wave of technology standards, such as FHIR, falls into place, more institutions should be able to carry out analytics that save lives.

#TransformHIT Think Tank Hosted by DellEMC

Posted on April 5, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.


DellEMC has once again invited me back to participate in the 6th annual #TransformHIT Healthcare Think Tank event happening Tuesday, April 18, 2017 from Noon ET (9 AM PT) – 3 PM ET (Noon PT). I think I’ve been lucky enough to participate in 5 of the 6 years and I’ve really enjoyed every one of them. DellEMC does a great job bringing together really smart, interesting people and encourages a sincere, open discussion of major healthcare IT topics. Plus, they do a great job making it so everyone can participate, watch, and share virtually as well.

This year they asked me to moderate the Think Tank which will be a fun new adventure for me, but my job will be made easy by this exceptional list of people that will be participating:

  • John Lynn (@techguy)
  • Paul Sonnier (@Paul_Sonnier)
  • Linda Stotsky (@EMRAnswers)
  • Joe Babaian (@JoeBabaian)
  • Dr. Joe Kim (@DrJosephKim)
  • Andy DeLaO (@cancergeek)
  • Dan Munro (@danmunro)
  • Dr. Jeff Trent (@TGen)
  • Shahid Shah (@ShahidNShah)
  • Dave Dimond (@NextGenHIT)
  • Mike Feibus (@MikeFeibus)

This panel is going to take on three hot topics in the healthcare industry today:

  • Consumerism in Healthcare
  • Precision Medicine
  • Big Data and AI in Healthcare

The great thing is that you can watch the whole #TransformHIT Think Tank event remotely on Livestream (recording will be available after as well). We’ll be watching the #TransformHIT tweet stream and messages to @DellEMCHealth during the event as well if you want to ask any questions or share any insights. We’ll do our best to add outside people’s comments and questions into the discussion. The Think Tank is being held in Phoenix, AZ, so if you’re local there are a few audience seats available if you’d like to come watch live and meet any of the panelists in person. Just let me know in the comments or on our contact us page and I can give you more details.

If you have an interest in healthcare consumerism, precision medicine, or big data and AI in healthcare, then please join us on Tuesday, April 18, 2017 from Noon ET (9 AM PT) – 3 PM ET (Noon PT) for the live stream. It’s sure to be a lively and interesting discussion.

What If Your Doctor Knew All Your Health Searches?

Posted on June 30, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Back in 2013, the Pew Research Internet Project found that 72% of internet users looked online for health information. This was well before the most recent update to Dr. Google. It’s only a matter of time before those health searches end up going through some sort of AI solution (Siri, Alexa, Galaxy, etc.) we bring into the home.

Imagine if we connected this font of health information and questions together with the healthcare establishment. What if your doctor had access to all of the health related searches you were doing? Might he be able to provide better service to you and your family?

Yes, I realize that this idea will be extremely controversial. There are some major privacy challenges and issues with this idea, but there’s also a lot of potential benefits. It seems a little bit hypocritical that we ask doctors to be open and transparent with our health records if we as patients aren’t going to be open and transparent with our medical concerns. Certainly, we should be able to control what and with whom we share this information, but I believe that many will be willing to share it with their doctors.

Yes, this will require a pretty dramatic shift in how our medical professionals will handle a patient visit. However, if I’ve been doing a bunch of searches around back pain, imagine how different my visit to the doctor for an earache would be. Could that provide the opportunity for the doctor to talk to me about my back pain searches?

It’s fascinating to think how this is almost the complete opposite of the office visit today. I’ve seen doctors that wanted to only deal with one issue at a time. Those doctors have learned the special dance that allows them to avoid talking about more than the presenting concern. Many doctors learn essentially a new language that makes sure that they get in and out of the exam room quickly without bringing up the rabbit hole of potential health problems a patient might be actually experiencing.

That’s the reality of today’s medicine. This is what we pay them to do. That’s changing with things like CCM (chronic care management), where a healthcare provider is paid to dig in a little deeper. It’s certainly not enough to fully change these behaviors.

Until reimbursement fully changes over to doctors getting paid to keep you healthy, knowing your health searches won’t be of interest to most doctors. However, once reimbursement changes, a doctor will become much more interested in what’s really ailing you. Your online searches certainly will say a lot about your health, both physical and mental.