
IBM Watson Health Layoffs Suggest AI Strategy Isn’t Working

Posted on June 6, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

IBM Watson Health is apparently making massive cuts to its staff, in a move suggesting that its healthcare AI isn’t working.

Watson Health leaders have positioned AI (which they call “cognitive computing”) as the solution to many of the healthcare industry’s problems. IBM pitched Watson technology as a revolutionary tool which could get to the root of difficult medical problems.

Over time, however, it’s begun to look like this wasn’t going to happen, at least for the present. Among other high-profile goofs, IBM Watson has struggled with applying the supercomputing tech to oncology, which was one of its main goals.

Now IBM Watson Health has slashed up to 70% of its staff, according to sources speaking to The Register. The site reports that most of the layoffs are cutting staff within companies IBM has bought in an effort to build out its healthcare credentials. These include medical data company Truven, acquired in 2016 for $2.6 billion; medical imaging firm Merge, bought in 2015 for $1 billion; and healthcare management firm Phytel, the site reports.

The cuts reflect a major strategic shift for Watson Health, which was one of IBM’s flagship divisions until recently. Having invested heavily in businesses that might have helped it dominate the health IT world, IBM now appears to be rethinking its all-in approach.

That being said, no one has suggested that IBM Watson Health will disappear in a puff of smoke. IBM corporate leaders seem dedicated to an AI future. However, if this report is correct, Watson Health is being reorganized completely. That’s not too much of a surprise: given how hyped the division was, it would have been almost impossible for it to live up to expectations.

To me, this suggests that rolling out healthcare AI tools might call for a completely different business model. Rather than applying brute force supercomputing tools to enterprise healthcare issues, it may be better to build from the ground up.

For example, consider Google’s approach to healthcare AI supercomputing. UK-based DeepMind is building relationships and products from the ground up. Working with the National Health Service, DeepMind Health is bringing mobile tools and AI research to hospitals. Its mobile health tools include Streams, a secure mobile phone app which feeds critical medical information to doctors and hospitals.

In my opinion, the future of AI in healthcare will look more like the DeepMind model and less like IBM Watson’s top-down approach. Building out AI-based tools and platforms for physicians and nurses first just makes sense.

Healthcare AI Needs a Breadth and Depth of Data

Posted on May 17, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Today I’m enjoying the New England HIMSS Spring Conference including an amazing keynote session by Dale Sanders from Health Catalyst. Next week I’ll be following up this blog post with some other insights that Dale shared at the New England HIMSS event, but today I just wanted to highlight one powerful concept that he shared:

Healthcare AI Needs a Breadth and Depth of Data

As part of this idea, Dale shared the following image to illustrate how much data is really needed for AI to effectively assess our health:

Dale pointed out that in healthcare today we really only have access to the data in the bottom right corner. That’s not enough data for AI to be able to properly assess someone’s health. Dale also suggested the following about EHR data:

Long story short, EHR data is not going to be enough to truly assess someone’s health. As Google recently demonstrated, a simple algorithm with more data is much more powerful than a sophisticated algorithm with less data. While we think we have a lot of data in healthcare, we really don’t have that much. Dale Sanders made a great case for why we need more data if we want AI to be effective in healthcare.

What are you doing in your organization to collect data? What are you doing to get access to this data? Does collection of all of this data scare anyone? How far away are we from this data driven, AI future? Let us know your thoughts in the comments.

Google And Fitbit Partner On Wearables Data Options

Posted on May 7, 2018 | Written By Anne Zieger

Fitbit and Google have announced plans to work together, in a deal intended to “transform the future of digital health and wearables.” While the notion of transforming digital health is hyperbole even for companies the size of Google and Fitbit, the pairing does have plenty of potential.

In a nutshell, Fitbit and Google expect to take on both consumer and enterprise health projects that integrate data from EMRs, wearables and other sources of patient information. Given the players involved, it’s hard to doubt that at least something neat will emerge from their union.

Among their first projects, the pair plans to use Google’s new Cloud Healthcare API to connect Fitbit data with EMRs. Of course, readers will know that it’s one thing to say this and another to actually do it, but gross oversimplifications aside, the idea is worth pursuing.
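
To make that concrete, here’s a minimal sketch of what pushing a single wearable reading into an EMR-facing FHIR store via the Cloud Healthcare API could look like. This is my illustration, not code from either company: the project, dataset, and store names are hypothetical, and a production integration would use proper coding systems and error handling.

```python
# A minimal sketch (hypothetical project/dataset/store names, plain-text code
# instead of a proper LOINC coding) of writing one day of Fitbit step data
# into a Cloud Healthcare API FHIR store as an Observation resource.
import requests
import google.auth
import google.auth.transport.requests

FHIR_STORE = (
    "https://healthcare.googleapis.com/v1/projects/my-project"
    "/locations/us-central1/datasets/wearables/fhirStores/fitbit-demo/fhir"
)

def post_step_count(patient_id: str, steps: int, date: str) -> dict:
    """Create a FHIR Observation recording a daily step count."""
    # Authenticate with Application Default Credentials.
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    credentials.refresh(google.auth.transport.requests.Request())

    observation = {
        "resourceType": "Observation",
        "status": "final",
        "code": {"text": "Daily step count"},  # real code systems omitted for brevity
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": date,
        "valueQuantity": {"value": steps, "unit": "steps/day"},
    }
    resp = requests.post(
        f"{FHIR_STORE}/Observation",
        json=observation,
        headers={
            "Authorization": f"Bearer {credentials.token}",
            "Content-Type": "application/fhir+json",
        },
    )
    resp.raise_for_status()
    return resp.json()
```

The appeal of going through FHIR rather than a proprietary feed is that once the reading lands in the store, any EMR or analytics tool that speaks FHIR can consume it.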

Also, using services such as those offered by Twine Health– a recent Fitbit acquisition — the two companies will work to better manage chronic conditions such as diabetes and hypertension. Twine offers a connected health platform which leverages Fitbit data to offer customized health coaching.

Of course, as part of the deal Fitbit is moving to the Google Cloud Platform, which will supply the expected cloud services and engineering support.

The two say that moving to the Cloud Platform will offer Fitbit advanced security capabilities which will help speed up the growth of its Fitbit Health Solutions business. They also expect to make inroads in population health analysis. For its part, Google notes that it will bring its AI, machine learning capabilities and predictive analytics algorithms to the table.

It might be worth a small caution here. Google makes a point of saying it is “committed” to meeting HIPAA standards, and that most Google Cloud products already do. That “most” qualifier would make me a little nervous as a provider, but I know, why worry about these niceties when big deals are afoot? Fair warning, though: when a company makes general statements like this about meeting HIPAA standards, it probably means it already employs high security standards, likely better than HIPAA’s. It does not mean the company complies with HIPAA, since HIPAA is about more than security: it requires a contractual relationship between provider and business associate, along with the liability that comes with being a business associate.

Anyway, to round out all of this good stuff, Fitbit and Google said they expect to “innovate and transform” the future of wearables, pairing Fitbit’s brand, community, data and high-profile devices with Google’s extreme data management and cloud capabilities.

You know folks, it’s not that I don’t think this is interesting. I wouldn’t be writing about it if I didn’t. But I do think it’s worth pointing out how little this news announcement really says.

Yes, I realize that when partnerships begin, they are by definition all big ideas and plans. But when giants like Google, much less Fitbit, have to fall back on words like innovate and transform (yawn!), the whole thing is still pretty speculative. Just sayin’.

Privacy Fears May Be Holding Back Digital Therapeutics Adoption

Posted on May 3, 2018 | Written By Anne Zieger

Consumers were already afraid that their providers might not be able to protect the privacy of their health data. Given the daily news coverage of large data breaches, and since the Facebook data scandal blew up, consumers may be even less likely to try out new digital health approaches.

For example, a new study by innovation consultancy Enspektos has concluded that patients may be afraid to adopt digital therapeutics options. Many fear that the data might be compromised or the technology may subject them to unwanted personal surveillance.

Without a doubt, digital therapeutics could have a great future. Possibilities include technologies such as prescription drugs with embedded sensors tracking medication compliance, as well as mobile apps that could potentially replace drugs. However, consumers’ appetite for such innovations may be diminishing as consumer fears over data privacy grow.

The research, which was done in collaboration with Savvy Cooperative, found that one-third of respondents fear that such devices will be used to track their behavior in invasive ways or that the data might be sold to a third party without their permission. As the research authors note, it’s hard to deny that the Facebook affair has ratcheted up these concerns.

Other research by Enspektos includes some related points:

  • Machine-aided diagnosis is growing as AI, wearables and data analytics are combined to predict and treat diseases
  • The deployment of end-to-end digital services is increasing as healthcare organizations work to create comprehensive platforms that embrace a wide range of conditions

It’s worth noting that it’s not just consumers who are worried about new forms of hacker intrusions. Industry CIOs have been fretting as it’s become more common for cybercriminals to attack healthcare organizations specifically. In fact, just last month Symantec identified a group known as Orangeworm that is breaking into x-ray, MRI and other medical equipment.

If groups like Orangeworm have begun to attack medical devices — something cybersecurity experts have predicted for years — we’re looking at a new phase in the battle to protect hospital devices and data. If one cybercriminal decides to focus on healthcare specifically, it’s likely that others will as well.

It’s bad enough that people are worried about the downsides of digital therapeutics. If they really knew how insecure their overall medical data could be going forward, they might be afraid to even sign in to their portal again.

More Ways AI Can Transform Healthcare

Posted on April 25, 2018 | Written By Anne Zieger

You’ve probably already heard a lot about how AI will change healthcare. Me too. Still, given its potential, I’m always interested in hearing more, and the following article struck me as offering some worthwhile ideas.

The article, which was written by Humberto Alexander Lee of Tesser Health, looks at ways in which AI tools can reduce data complexity and detect patterns which would be difficult or even impossible for humans to detect.

His list of AI’s transformative powers includes the following:

  • Identifying diseases and providing diagnoses

AI algorithms can predict when people are likely to develop heart disease far more accurately than humans. For example, at Google healthcare technology subsidiary Verily, scientists created an algorithm that can predict heart disease by looking at the back of a person’s eyes and pinpoint early signs of specific heart conditions.

  • Crowdsourcing treatment options and monitoring drug response

As wearable devices and mobile applications mature, and data interoperability improves thanks to standards such as FHIR, data scientists and clinicians are beginning to generate new insights using machine learning (see the toy sketch after this list). This is leading to customizable treatments that can provide better results than existing approaches.

  • Monitoring health epidemics

While performing such a task would be virtually impossible for humans, AI and AI-related technologies can sift through staggering pools of data, including government intelligence and millions of social media posts, and combine them with ecological, biogeographical and public health information, to track epidemics. In some cases, this process will predict health threats before they blossom.

  • Virtual assistance helping patients and physicians communicate clearly

AI technology can improve communication between patients and physicians, including by creating software that simplifies patient communication, in part by transforming complex medical terminology into digestible information. This helps patients and physicians engage in a meaningful two-way conversation using mobile devices and portals.

  • Developing better care management by improving clinical documentation

Machine learning technology can improve documentation, including user-written patient notes, by analyzing millions of rows of data and letting doctors know if any data is missing or clarification is needed on any procedures. Also, Deep Neural Network algorithms can sift through information in written clinical documentation. These processes can improve outcomes by identifying patterns almost invisible to human eyes.
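
To make the “monitoring drug response” idea above a little more concrete, here is a toy model of the kind of analysis involved: predicting treatment response from wearable-derived features. Everything here is invented for the sketch, including the features, labels, and data, and it stands in for no specific vendor’s method.

```python
# A toy model of the "monitoring drug response" idea: entirely synthetic data,
# illustrating the shape of the analysis rather than any real product.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Features a wearable-plus-FHIR pipeline might surface for each patient.
X = np.column_stack([
    rng.normal(70, 10, n),      # resting heart rate (bpm)
    rng.normal(7000, 2500, n),  # mean daily steps
    rng.normal(6.5, 1.2, n),    # average hours of sleep
])
# Synthetic label: "responded to treatment", loosely tied to activity and sleep.
y = ((X[:, 1] > 6000) & (X[:, 2] > 6.0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The point of the sketch is how ordinary the modeling step is once clean, interoperable data exists; the hard work lives in the data pipeline, not the classifier.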

Lee is so bullish on AI that he believes we can do even more than he has described in his piece. And generally speaking, it’s hard to disagree with him that there’s a great deal of untapped potential here.

That being said, Lee cautions that there are pitfalls we should be aware of when we implement AI. What risks do you see in widespread AI implementation in healthcare?

London Doctors Stage Protest Over Rollout Of App

Posted on April 18, 2018 | Written By Anne Zieger

We all know that doctors don’t take kindly to being forced to use health IT tools. Apparently, that’s particularly the case in London, where a group of general practitioners recently held a protest to highlight their problems with a telemedicine app rolled out by the National Health Service.

The doctors behind the protest are unhappy with the way the NHS structured its rollout of the smartphone app GP at Hand, which they say has created extra work and confusion among patients.

The service, which is run by UK-based technology company Babylon Health, launched in November of last year. Using the app, patients can either have a telemedicine visit or schedule an in-person appointment with a GP’s office. Telemedicine services are available 24/7, and patients can be seen in minutes in some cases.

GP at Hand seems to be popular with British consumers. Since its launch, over 26,000 patients have registered for the service, according to the NHS.

However, to participate in the service, patients are automatically de-registered from their existing GP office when they register for GP at Hand. Many patients don’t seem to have known this. According to the doctors at the protest, they’ve been getting calls from angry former patients demanding that they be re-registered with their existing doctor’s office.

The doctors also suggest that the service gets to cherry-pick healthier, more profitable patients, leaving their own practices weighed down with sicker, costlier ones. “They don’t want patients with complex mental health problems, drug problems, dementia, a learning disability or other challenging conditions,” said protest organizer Dr. Jackie Applebee. “We think that’s because these patients are expensive.” (Presumably, Babylon is paid out of a separate NHS fund than the GPs.)

Are there lessons here for US-based healthcare providers? Perhaps so.

Of course, the National Health Service model is substantially different from the way care is delivered in this country, so the administrative challenges involved in rolling out a similar service could be much different. But this news does offer some lessons to consider nonetheless.

For one thing, it reminds us that even in a system much different than ours, financing and organizing telemedicine services can be fraught with conflict. Reimbursement would be an even bigger issue than it seems to have been in the UK.

It’s also worth noting that the NHS and Babylon Health faced a storm of patient complaints about the way the service was set up. It’s entirely possible that any US-based efforts would generate their own string of unintended consequences, the magnitude of which would be multiplied by the fact that there’s no national entity coordinating such a rollout.

Of course, individual health systems are figuring out how to offer telemedicine and blend it with access to in-person care. But it’s telling that insurers with a national presence such as CIGNA or Humana aren’t plunging into telemedicine with both feet. At least none of them have seen substantial success in their efforts. Bottom line, offering telehealth is much harder than it looks.

Thoughts on Privacy in Health Care in the Wake of Facebook Scrutiny

Posted on April 13, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

A lot of health IT experts are taking a fresh look at the field’s (abysmal) record in protecting patient data, following the shocking Cambridge Analytica revelations that cast a new and disturbing light on privacy practices in the computer field. Both Facebook and others in the computer field who would love to emulate its financial success are looking for general lessons that go beyond the oddities of the Cambridge Analytica mess. (Among other things, the mess involved a loose Facebook sharing policy that was tightened up a couple of years ago, and a purported “academic researcher” who apparently violated Facebook’s terms of service.)

I will devote this article to four lessons from the Facebook scandal that apply especially to health care data–or more correctly, four ways in which Cambridge Analytica reinforces principles that privacy advocates have known for years. Everybody recognizes that the problems modern data sharing practices pose to public life are hard, even intractable, and I will have to content myself with helping to define the issues rather than presenting solutions. The lessons are:

  • There is no such thing as health data.

  • Consent is a meaningless concept.

  • The risks of disclosure go beyond individuals to affect the whole population.

  • Discrimination doesn’t have to be explicit or conscious.

The article will now lay out each concept, how the Facebook events reinforce it, and what it means for health care.

There is no such thing as health data

To be more precise, I should say that there is no hard-and-fast distinction between health data, financial data, voting data, consumer data, or any other category you choose to define. Health care providers are enjoined by HIPAA and other laws to fiercely protect information about diagnoses, medications, and other aspects of their patients’ lives. But a Facebook posting or a receipt from the supermarket can disclose that a person has a certain condition. The compute-intensive analytics that data brokers, marketers, and insurers apply with ever-growing sophistication are aimed at revealing these things. If the greatest impact on your life is that a pop-up ad for some product appears on your browser, count yourself lucky. You don’t know what else someone is doing with the information.

I feel a bit of sympathy for Facebook’s management, because few people anticipated that routine postings could identify ripe targets for fake news and inflammatory political messaging (except for the brilliant operatives who did that messaging). On the other hand, neither Facebook nor the US government acted fast enough to shut down the behavior and tell the public about it, once it was discovered.

HIPAA itself is notoriously limited. If someone can escape being classified as a health care provider or a provider’s business associate, they can collect data with abandon and do whatever they like (except in places such as the European Union, where laws hopefully require them to use the data for the purpose they cited while collecting it). App developers consciously strive to define their products in such a way that they sidestep the dreaded HIPAA coverage. (I won’t even go into the weaknesses of HIPAA and subsequent laws, which fail to take modern data analysis into account.)

Consent is a meaningless concept

Even the European Union’s new regulation (the much-publicized General Data Protection Regulation, or GDPR) allows data collection to proceed after user consent. Of course, data must be collected for many purposes, such as payment and shipping at retail web sites. And the GDPR–following a long-established principle of consumer rights–requires further consent if the site collecting the data wants to use it beyond its original purpose. But it’s hard to imagine what use data will be put to, especially a couple of years in the future.

Privacy advocates have known, ever since “terms of service” became ubiquitous, that few people read them before they press the Accept button. And this is a rational ignorance. Even if you read the tiresome and legalistic terms of service (I always do), you are unlikely to understand their implications. So the problem lies deeper than tedious verbiage: even the most sophisticated user cannot predict what’s going to happen to the data she consented to share.

The health care field has advanced farther than most by installing legal and regulatory barriers to sharing. We could do even better by storing all health data in a Personal Health Record (PHR) for each individual instead of at the various doctors, pharmacies, and other institutions where it can be used for dubious purposes. But all use requires consent, and consent is always on shaky grounds. There is also a risk (although I think it is exaggerated) that patients can be re-identified from de-identified data. But both data sharing and the uses of data must be more strictly regulated.

The risks of disclosure go beyond individuals to affect the whole population

The illusion that an individual can offer informed consent is matched by an even more dangerous illusion that the harm caused by a breach is limited to the individual affected, or even to his family. In fact, data collected legally and pervasively is used daily to make decisions about demographic groups, as I explained back in 1998. Democracy itself took a bullet when Russian political agents used data to influence the British EU referendum and the US presidential election.

Thus, privacy is not the concern of individuals making supposedly rational decisions about how much to protect their own data. It is a social issue, requiring a coordinated regulatory response.

Discrimination doesn’t have to be explicit or conscious

We have seen that data can be used to draw virtual red lines around entire groups of people. Data analytics, unless strictly monitored, reproduce society’s prejudices in software. This has a particular meaning in health care.

Discrimination against many demographic groups (African-Americans, immigrants, LGBTQ people) has been repeatedly documented. Very few doctors would consciously aver that they wish harm to people in these groups, or even that they dismiss their concerns. Yet it happens over and over. The same unconscious or systemic discrimination will affect analytics and the application of its findings in health care.

A final dilemma

Much has been made of Facebook’s policy of collecting data about “friends of friends,” which draws a wide circle around the person giving consent and infringes on the privacy of people who never consented. Facebook did end the practice that allowed Global Science Research to collect data on an estimated 87 million people. But the dilemma behind the “friends of friends” policy is how inextricably it embodies the premise behind social media.

Lots of people like to condemn today’s web sites (not just social media, but news sites and many others–even health sites) for collecting data for marketing purposes. But as I understand it, the “friends of friends” phenomenon lies deeper. Finding connections and building weak networks out of extended relationships is the underpinning of social networking. It’s not just how networks such as Facebook can display to you the names of people they think you should connect with. It underlies everything about bringing you in contact with information about people you care about, or might care about. Take away “friends of friends” and you take away social networking, which has been the most powerful force for connecting people around mutual interests the world has ever developed.

The health care field is currently struggling with a similar demonic trade-off. We desperately hope to cut costs and tame chronic illness through data collection. The more data we scoop up and the more zealously we subject it to analysis, the more we can draw useful conclusions that create better care. But bad actors can use the same techniques to deny insurance, withhold needed care, or exploit trusting patients and sell them bogus treatments. The ethics of data analysis and data sharing in health care require an open, and open-eyed, debate before we go further.

Texting Patients Is OK Under HIPAA, as long as you…

Posted on March 6, 2018 | Written By

Mike Semel is a noted thought leader, speaker, blogger, and best-selling author of HOW TO AVOID HIPAA HEADACHES. He is the President and Chief Security Officer of Semel Consulting, focused on HIPAA and other compliance requirements; cyber security; and Business Continuity planning. Mike is a Certified Business Continuity Professional through the Disaster Recovery Institute, a Certified HIPAA Professional, Certified Security Compliance Specialist, and Certified Health IT Specialist. He has owned or managed technology companies for over 30 years; served as Chief Information Officer (CIO) for a hospital and a K-12 school district; and managed operations at an online backup company.

OCR Director Severino Makes Policy from the Podium

Speaking at the HIMSS health IT conference in Las Vegas on Tuesday, Roger Severino, Director of the US Department of Health and Human Services Office for Civil Rights (OCR), the HIPAA enforcement agency, said that health care providers may share Protected Health Information (PHI) with patients through standard text messages. Providers must first warn their patients that texting is not secure, gain the patients’ authorization, and document the patients’ consent.

In 2013, the HIPAA Omnibus Final Rule allowed healthcare providers to communicate Electronic Protected Health Information (ePHI) with patients through unencrypted e-mail, if the provider informs the patient that their e-mail service is not secure, gains the patient’s authorization to accept the risk, and documents the patient’s consent.

A HIMSS audience member asked Severino why the OCR hasn’t issued similar guidance for text messaging with patients. “I don’t see a difference,” Severino said. “I think it’s empowering the patient, making sure that their data is as accessible as possible in the way they want to receive it, and that’s what we want to do.”

“Wow! That’s a big change,” said Tom Leary, Vice President of Government Relations for HIMSS. “That’s wonderful. Actually, the physician community has been clamoring for clarification on that for several years now. Our physician community will be very supportive of that.”

The 2013 OCR guidance for e-mails, and Severino’s announcement about text messages, only apply to communications with patients. All HIPAA Covered Entities and Business Associates are still forbidden to use unsecured communications tools to communicate with each other.
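
Putting Severino’s comments and the 2013 e-mail guidance side by side, the patient-facing workflow is the same for both channels: warn the patient the channel isn’t secure, get their authorization, document it, and only then send. Here’s a minimal sketch of what that gate could look like in code; the consent store and SMS helper are hypothetical stand-ins, not a real messaging API or legal advice.

```python
# A minimal sketch of the patient-texting gate described above: warn, get
# authorization, document it, and only then send. The consent store and the
# send_sms helper are hypothetical stand-ins, not a real messaging API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TextingConsent:
    patient_id: str
    warned_not_secure: bool  # patient was told SMS is unencrypted
    authorized: bool         # patient accepted the risk anyway
    recorded_at: datetime    # when the consent was documented

consents: dict[str, TextingConsent] = {}  # stand-in for a persistent audit store

def record_consent(patient_id: str) -> None:
    """Document that the patient was warned and authorized texting."""
    consents[patient_id] = TextingConsent(
        patient_id=patient_id,
        warned_not_secure=True,
        authorized=True,
        recorded_at=datetime.now(timezone.utc),
    )

def send_sms(patient_id: str, message: str) -> None:
    print(f"(sketch) texting {patient_id}: {message}")  # hypothetical gateway call

def send_phi_text(patient_id: str, message: str) -> None:
    """Refuse to text PHI unless documented consent is on file."""
    consent = consents.get(patient_id)
    if not (consent and consent.warned_not_secure and consent.authorized):
        raise PermissionError("No documented texting consent on file")
    send_sms(patient_id, message)
```

The design point is that the documentation step is enforced in software rather than left to workflow habit, which is also what an auditor would want to see.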

Messages sent through free e-mail services are not private. Google’s Gmail Terms of Service allow Google to “use…reproduce…communicate, publish…publicly display and distribute” your e-mail messages. Health care providers must use encrypted e-mail or secure e-mail systems to communicate ePHI outside of their organizations.

In 2012, a small medical practice was penalized $100,000 for sharing patient information through free Internet services, including e-mail. According to the resolution agreement, Phoenix Cardiac Surgery “daily transmitted ePHI from an Internet-based email account to workforce members’ personal Internet-based email accounts.”

While the OCR may be best-known for its HIPAA enforcement, it has pushed healthcare organizations to lower barriers that have prevented patients from obtaining their medical records. The Omnibus Rule required health care providers to only recover actual costs when providing patients with copies of their records.

In its 2016 guidance, the OCR set a $6.50 limit (inclusive of all labor, supplies, and postage) for health care providers “that do not want to go through the process of calculating actual or average allowable costs for requests for electronic copies of PHI maintained electronically.”

The federal requirement to recover actual costs, or a flat fee of $6.50, supersedes state laws that allowed providers to charge for medical record searches and per-page fees. Maine caps the cost at $250 for a medical record, far above the federal $6.50 flat fee.

Some Of The Questions I Plan To Ask At #HIMSS18

Posted on February 23, 2018 | Written By Anne Zieger

As always, this year’s HIMSS event will feature enough noise, sound and color to overwhelm your senses for months afterward. And talk about a big space to tread — I’ve come away with blisters more than once after attending.

Nonetheless, in my book it’s always worth attending the show. While no one vendor or session might blow you away, finding out directly what trends and products generated the most buzz is always good. The key is not only to attend the right educational sessions or meet the right people but to figure out how companies are making decisions.

Here are some of the questions that I hope to ask (and hopefully find answers to) at the show. If you have other questions to suggest, I’d love to bring them with me — the way I see it, the more the merrier!

-Anne

Blockchain

Vendors: What functions does blockchain perform in your solution, and what are the benefits of these additions? What made blockchain the best technology choice for getting the job done? What challenges have you faced in developing a platform that integrates blockchain technology, and how are you addressing them? Is blockchain the most cost-efficient way of accomplishing the task you have in mind? What problems is blockchain best suited to address?

Providers: Have you rolled out any blockchain-based systems? If you haven’t currently deployed blockchain technology, do you expect to do so in the future? When do you think that will happen? How will you know when it’s time to do so? What benefits do you think it will offer to your organization, and why? Do you think blockchain implementations could generate a significant level of additional server infrastructure overhead?

AI

Vendors: What makes your approach to healthcare AI unique and/or beneficial? What is involved in integrating your AI product or service with existing provider technology, and how long does it usually take? Do providers have to do this themselves or do you help? Did you develop your own algorithms, license your AI engine or partner with someone else to deliver it? Can you share any examples of how your customers have benefited by using AI?

Providers: What potential do you think AI has to change the way you deliver care? What specific benefits can AI offer your organization? Do you think healthcare AI applications are maturing, and if not how will you know when they have? What types of AI applications potentially interest you, and are you pilot-testing any of them?

Interoperability

Vendors: How does your solution overcome the barriers that still remain to full health data sharing between all healthcare industry participants? What do you think are the biggest interoperability challenges the industry faces? Does your solution require providers to make any significant changes to their infrastructure or call for advanced integration with existing systems? How long does it typically take for customers to go live with your interoperability solution, and how much does it cost on average? In an ideal world, what would interoperability between health data partners look like?

Providers: Do you consider yourself to have achieved full, partial or little/no health data interoperability between you and your partners? Are you happy with the results you’ve gotten from your interoperability efforts to date? What are the biggest benefits you’ve seen from achieving full or partial interoperability with other providers? Have you experienced any major failures in rolling out interoperability? If so, what damage did they do if any? Do you think interoperability is a prerequisite to delivering value-based care and/or population health management?

What topics are you looking forward to hearing about at #HIMSS18? What questions would you like asked? Share them in the comments and I’ll see what I can do to find answers.

Radiology Centers Poised To Adopt Machine Learning

Posted on February 8, 2018 | Written By Anne Zieger

As with most other sectors of the healthcare industry, it seems likely that radiology will be transformed by the application of AI technologies. Of course, given the euphoric buzz around AI it’s hard to separate talk from concrete results. Also, it’s not clear who’s going to pay for AI adoption in radiology and where it is best used. But clearly, AI use in healthcare isn’t going away.

This notion is underscored by a new study by Reaction Data suggesting that both technology vendors and radiology leaders believe that widespread use of AI in radiology is imminent. The researchers argue that radiology AI applications are a “have to have” rather than a novel experiment, though survey respondents seem a little less enthusiastic.

The study, which included 133 respondents, focused on the use of machine learning in radiology. Researchers connected with a variety of relevant professionals, including directors of radiology, radiologists, techs, chiefs of radiology and PACS administrators.

It’s worth noting that the survey population was a bit lopsided. For example, 45% of respondents were PACS admins, while each of the other respondent types represented less than 10%. Also, 90% of respondents were affiliated with hospital radiology centers. Still, the results offer an interesting picture of how participants in the radiology business are looking at machine learning.

When asked how important machine learning was for the future of radiology, one-quarter of respondents said that it was extremely important, and another 59% said it was very or somewhat important. When the data was sorted by job title, it showed that roughly 90% of imaging directors said that machine learning would prove very important to radiology, followed by just over 75% of radiology chiefs. Radiology managers came in at around 60%. Clearly, the majority of radiology leaders surveyed see a future here.

About 90% of radiology chiefs reported being extremely familiar with machine learning, as were 75% of techs and roughly 60% of radiologists. A bit counterintuitively, less than 10% of PACS administrators reported being that familiar with the technology, though this does follow from the previous results indicating that only about half were enthused about machine learning’s importance.

All of this is fine, but adoption is where the rubber meets the road. Reaction Data found that 15% of respondents said they’d been using machine learning for a while and 8% said they’d just gotten started.

Many more centers were preparing to jump in. Twelve percent reported that they were planning on adopting machine learning within the next 12 months, 26% of respondents said they were 1 to 2 years away from adoption and another 24% said they were 3+ years out. Just 16% said they don’t think they’ll ever use machine learning in their radiology center.

For those who do plan to implement machine learning, top uses include analyzing lung imaging (66%), chest x-rays (62%), breast imaging (62%), bone imaging (41%) and cardiovascular imaging (38%). Meanwhile, among those who are actually using machine learning in radiology, breast imaging is by far the most common use, with 75% of respondents saying they used it in this case.
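
For readers curious what sits behind those use cases, here is a deliberately small sketch of the sort of image classifier these centers would be evaluating: a binary “suspicious vs. normal” model over grayscale image tiles. The architecture and input size are illustrative assumptions on my part and bear no relation to any shipping radiology product.

```python
# A deliberately small sketch of the kind of classifier behind these use cases:
# a binary "suspicious vs. normal" model over grayscale image tiles. The
# architecture and input size are illustrative, not any shipping product.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(input_shape=(224, 224, 1)) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # probability a tile is suspicious
    ])
    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC()],
    )
    return model

model = build_classifier()
model.summary()
```

In practice the hard part isn’t the architecture; it’s assembling enough labeled, de-identified images to train it well, which is exactly the breadth-and-depth-of-data problem discussed earlier in this section.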

Clearly, applying machine learning or other AI technologies will be tricky in any sector of medicine. However, if the survey results are any indication, the bulk of radiology centers are prepared to give it a shot.