Are Healthcare Data Streams Rich Enough To Support AI?

As I’ve noted previously, artificial intelligence and machine learning applications are playing an increasingly important role in healthcare. The two technologies are central to some intriguing new data analytics approaches, many of which are designed to predict which patients will suffer from a particular ailment (or progress in that illness), allowing doctors to intervene.

For example, at New York-based Mount Sinai Hospital, executives are kicking off a predictive analytics project designed to predict which patients might develop congestive heart failure, as well as to care more effectively for those who already have. The hospital is working with AI vendor CloudMedx, whose platform will generate predictions by mining the organization’s EMR for clinical clues, as well as by analyzing data from implantable medical devices, health tracking bands and smartwatches to forecast the patient’s future status.

However, I recently read an article questioning whether all health IT infrastructures are capable of handling the influx of data that is part and parcel of using AI and machine learning — and it gave me pause.

Artificial intelligence, writes Elizabeth O’Dowd in HIT Infrastructure, functions on collected data, and the more data an AI solution has access to, the more successful the implementation will be. And there are some questions as to whether healthcare IT departments can integrate this data, especially Internet of Things datapoints from wearables and other personal devices.

After all, O’Dowd notes, for the AI solution to crawl data from IoT wearables, mobile apps and other connected devices, the data must be integrated into the patient’s medical record in a format which is compatible with the organization’s EMR technology. Otherwise, the organization’s data analytics solution won’t be able to process the data, and in turn, the AI solution won’t be able to evaluate it, she writes.
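To make the kind of format conversion O’Dowd describes a bit more concrete, here’s a rough sketch of how a raw wearable reading might be normalized into a FHIR Observation resource before being fed to an EMR. The device payload shape, patient ID and field names are all assumptions for the example, not any particular vendor’s format:

```python
# Hypothetical sketch: normalizing a wearable heart-rate reading into a
# minimal FHIR-style Observation so an EMR-compatible pipeline can ingest it.
# The incoming payload shape and patient ID are illustrative assumptions.

def wearable_to_fhir_observation(device_reading: dict, patient_id: str) -> dict:
    """Map a raw wearable payload to a minimal FHIR Observation resource."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": "8867-4",          # LOINC code for heart rate
                "display": "Heart rate",
            }]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": device_reading["timestamp"],
        "valueQuantity": {
            "value": device_reading["bpm"],
            "unit": "beats/minute",
            "system": "http://unitsofmeasure.org",
            "code": "/min",
        },
    }

reading = {"timestamp": "2017-06-01T08:30:00Z", "bpm": 72}
observation = wearable_to_fhir_observation(reading, "12345")
```

The point isn’t the specific fields — it’s that every device feed needs some mapping like this before an analytics layer, let alone an AI solution, can do anything with it.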

Without a doubt, O’Dowd has raised some important issues here. But the real question, as I see it, is whether such data integration is really the biggest bottleneck AI and machine learning must pass through before becoming accessible to a wide range of users. For example, healthcare AI vendor Lumiata offers a FHIR-compliant API to help organizations integrate such data, which is certainly relevant to this discussion.

It seems to me that giving the AI every possible scrap of data to feed on isn’t the be-all and end-all, and may actually be less important than the clinical rationale developers use to back up their work. In other words, in the case of Lumiata and its competitors, it appears that creating a firm foundation for the predictions is still as much the work of clinicians as it is of AI.

I guess what I’m getting at here is that while AI is doubtless more effective at predicting events when it has access to more data, using the data we do have and letting skilled clinicians manage it is still quite valuable. So let’s not back off on harvesting the promise of AI just because we don’t have all the data in hand yet.

About the author

Anne Zieger

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

1 Comment

  • So long as incoming data is in digital format, a generic data exchanger is capable of mapping the data for import to an EMR that accommodates user-definable database fields.

    If the incoming data is not in a format that the data exchanger recognizes, a parser must be written (typically not a big deal).

    Many times the mapping is not 1:1 (e.g., the parser must be able to extract data elements from memo text).
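The commenter’s point about non-1:1 mapping can be sketched with a toy parser that pulls a discrete data element out of free-form memo text so it can be mapped to an EMR field. The memo format and the blood-pressure example are assumptions for illustration only:

```python
import re

# Illustrative sketch: extracting a structured data element (blood pressure)
# from free-form memo text, as a data-exchange parser might do when the
# mapping to EMR fields is not 1:1. The memo wording is an assumed example.
BP_PATTERN = re.compile(r"\bBP[:\s]+(\d{2,3})\s*/\s*(\d{2,3})\b", re.IGNORECASE)

def extract_blood_pressure(memo: str):
    """Return (systolic, diastolic) from memo text, or None if not found."""
    match = BP_PATTERN.search(memo)
    if match is None:
        return None
    return int(match.group(1)), int(match.group(2))

memo = "Pt seen for follow-up. BP: 128/84, denies chest pain."
vitals = extract_blood_pressure(memo)  # -> (128, 84)
```

Real parsers would of course need to handle far messier text, but as the commenter says, writing one is typically not a big deal.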
