
London Doctors Stage Protest Over Rollout Of App

Posted on April 18, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

We all know that doctors don’t take kindly to being forced to use health IT tools. Apparently, that’s particularly the case in London, where a group of general practitioners recently held a protest to highlight their problems with a telemedicine app rolled out by the National Health Service.

The doctors behind the protest are unhappy with the way the NHS structured its rollout of the smartphone app GP at Hand, which they say has created extra work and confusion among patients.

The service, which is run by UK-based technology company Babylon Health, launched in November of last year. Using the app, patients can either have a telemedicine visit or schedule an in-person appointment with a GP’s office. Telemedicine services are available 24/7, and patients can be seen in minutes in some cases.

GP at Hand seems to be popular with British consumers. Since its launch, over 26,000 patients have registered for the service, according to the NHS.

However, to participate in the service, patients are automatically de-registered from their existing GP office when they register for GP at Hand. Many patients don’t seem to have known this. According to the doctors at the protest, they’ve been getting calls from angry former patients demanding that they be re-registered with their existing doctor’s office.

The doctors also suggest that the service gets to cherry-pick healthier, more profitable patients, leaving traditional practices to absorb the costlier ones. “They don’t want patients with complex mental health problems, drug problems, dementia, a learning disability or other challenging conditions,” said protest organizer Dr. Jackie Applebee. “We think that’s because these patients are expensive.” (Presumably, Babylon is paid out of a separate NHS fund than the GPs.)

Are there lessons here for US-based healthcare providers? Perhaps so.

Of course, the National Health Service model is substantially different from the way care is delivered in this country, so the administrative challenges involved in rolling out a similar service could be much different. But this news does offer some lessons to consider nonetheless.

For one thing, it reminds us that even in a system much different than ours, financing and organizing telemedicine services can be fraught with conflict. Reimbursement would be an even bigger issue than it seems to have been in the UK.

It’s also worth noting that the NHS and Babylon Health faced a storm of patient complaints about the way the service was set up. It’s entirely possible that any US-based efforts would generate their own string of unintended consequences, the magnitude of which would be multiplied by the fact that there’s no national entity coordinating such a rollout.

Of course, individual health systems are figuring out how to offer telemedicine and blend it with access to in-person care. But it’s telling that insurers with a national presence such as Cigna or Humana aren’t plunging into telemedicine with both feet; at least, none has seen substantial success in its efforts so far. Bottom line: offering telehealth is much harder than it looks.

Thoughts on Privacy in Health Care in the Wake of Facebook Scrutiny

Posted on April 13, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

A lot of health IT experts are taking a fresh look at the field’s (abysmal) record in protecting patient data, following the shocking Cambridge Analytica revelations that cast a new and disturbing light on privacy practices in the computer field. Both Facebook and others in the computer field who would love to emulate its financial success are trying to look at general lessons that go beyond the oddities of the Cambridge Analytica mess. (Among other things, the mess involved a loose Facebook sharing policy that was tightened up a couple years ago, and a purported “academic researcher” who apparently violated Facebook’s terms of service.)

I will devote this article to four lessons from the Facebook scandal that apply especially to health care data–or more correctly, four ways in which Cambridge Analytica reinforces principles that privacy advocates have known for years. Everybody recognizes that the risks modern data sharing practices pose to public life are hard, even intractable, and I will have to content myself with helping to define the issues, not present solutions. The lessons are:

  • There is no such thing as health data.

  • Consent is a meaningless concept.

  • The risks of disclosure go beyond individuals to affect the whole population.

  • Discrimination doesn’t have to be explicit or conscious.

The article will now lay out each concept, how the Facebook events reinforce it, and what it means for health care.

There is no such thing as health data

To be more precise, I should say that there is no hard-and-fast distinction between health data, financial data, voting data, consumer data, or any other category you choose to define. Health care providers are enjoined by HIPAA and other laws to fiercely protect information about diagnoses, medications, and other aspects of their patients’ lives. But a Facebook posting or a receipt from the supermarket can disclose that a person has a certain condition. The compute-intensive analytics that data brokers, marketers, and insurers apply with ever-growing sophistication are aimed at revealing these things. If the greatest impact on your life is that a pop-up ad for some product appears on your browser, count yourself lucky. You don’t know what else someone is doing with the information.

I feel a bit of sympathy for Facebook’s management, because few people anticipated that routine postings could identify ripe targets for fake news and inflammatory political messaging (except for the brilliant operatives who did that messaging). On the other hand, neither Facebook nor the US government acted fast enough to shut down the behavior and tell the public about it, once it was discovered.

HIPAA itself is notoriously limited. If someone can escape being classified as a health care provider or a provider’s business associate, they can collect data with abandon and do whatever they like (except in places such as the European Union, where laws hopefully require them to use the data for the purpose they cited while collecting it). App developers consciously strive to define their products in such a way that they sidestep the dreaded HIPAA coverage. (I won’t even go into the weaknesses of HIPAA and subsequent laws, which fail to take modern data analysis into account.)

Consent is a meaningless concept

Even the European Union’s new regulation (the much-publicized General Data Protection Regulation, or GDPR) allows data collection to proceed after user consent. Of course, data must be collected for many purposes, such as payment and shipping at retail web sites. And the GDPR–following a long-established principle of consumer rights–requires further consent if the site collecting the data wants to use it beyond its original purpose. But it’s hard to imagine what use data will be put to, especially a couple of years in the future.

Privacy advocates have known from the beginning of the ubiquitous “terms of service” that few people read them before they press the Accept button. And this is a rational ignorance. Even if you read the tiresome and legalistic terms of service (I always do), you are unlikely to understand their implications. So the problem lies deeper than tedious verbiage: even the most sophisticated user cannot predict what’s going to happen to the data she consented to share.

The health care field has advanced farther than most by installing legal and regulatory barriers to sharing. We could do even better by storing all health data in a Personal Health Record (PHR) for each individual instead of at the various doctors, pharmacies, and other institutions where it can be used for dubious purposes. But all use requires consent, and consent is always on shaky grounds. There is also a risk (although I think it is exaggerated) that patients can be re-identified from de-identified data. But both data sharing and the uses of data must be more strictly regulated.

The risks of disclosure go beyond individuals to affect the whole population

The illusion that an individual can offer informed consent is matched by an even more dangerous illusion that the harm caused by a breach is limited to the individual affected, or even to his family. In fact, data collected legally and pervasively is used daily to make decisions about demographic groups, as I explained back in 1998. Democracy itself took a bullet when Russian political agents used data to influence the British EU referendum and the US presidential election.

Thus, privacy is not the concern of individuals making supposedly rational decisions about how much to protect their own data. It is a social issue, requiring a coordinated regulatory response.

Discrimination doesn’t have to be explicit or conscious

We have seen that data can be used to draw virtual red lines around entire groups of people. Data analytics, unless strictly monitored, reproduce society’s prejudices in software. This has a particular meaning in health care.

Discrimination against many demographic groups (African-Americans, immigrants, LGBTQ people) has been repeatedly documented. Very few doctors would consciously aver that they wish people harm in these groups, or even that they dismiss their concerns. Yet it happens over and over. The same unconscious or systemic discrimination will affect analytics and the application of its findings in health care.

A final dilemma

Much has been made of Facebook’s policy of collecting data about “friends of friends,” which draws a wide circle around the person giving consent and infringes on the privacy of people who never consented. Facebook did end the practice that allowed Global Science Research to collect data on an estimated 87 million people. But the dilemma behind the “friends of friends” policy is how inextricably it embodies the premise behind social media.

Lots of people like to condemn today’s web sites (not just social media, but news sites and many others–even health sites) for collecting data for marketing purposes. But as I understand it, the “friends of friends” phenomenon lies deeper. Finding connections and building weak networks out of extended relationships is the underpinning of social networking. It’s not just how networks such as Facebook can display to you the names of people they think you should connect with. It underlies everything about bringing you in contact with information about people you care about, or might care about. Take away “friends of friends” and you take away social networking, which has been the most powerful force for connecting people around mutual interests the world has ever developed.

The health care field is currently struggling with a similar demonic trade-off. We desperately hope to cut costs and tame chronic illness through data collection. The more data we scoop up and the more zealously we subject it to analysis, the more we can draw useful conclusions that create better care. But bad actors can use the same techniques to deny insurance, withhold needed care, or exploit trusting patients and sell them bogus treatments. The ethics of data analysis and data sharing in health care require an open, and open-eyed, debate before we go further.

Texting Patients Is OK Under HIPAA, as long as you…

Posted on March 6, 2018 | Written By

Mike Semel is a noted thought leader, speaker, blogger, and best-selling author of HOW TO AVOID HIPAA HEADACHES . He is the President and Chief Security Officer of Semel Consulting, focused on HIPAA and other compliance requirements; cyber security; and Business Continuity planning. Mike is a Certified Business Continuity Professional through the Disaster Recovery Institute, a Certified HIPAA Professional, Certified Security Compliance Specialist, and Certified Health IT Specialist. He has owned or managed technology companies for over 30 years; served as Chief Information Officer (CIO) for a hospital and a K-12 school district; and managed operations at an online backup company.

OCR Director Severino Makes Policy from the Podium

Speaking at the HIMSS health IT conference in Las Vegas on Tuesday, Roger Severino, Director of the US Department of Health and Human Services Office for Civil Rights (OCR), the HIPAA enforcement agency, said that health care providers may share Protected Health Information (PHI) with patients through standard text messages. Providers must first warn their patients that texting is not secure, gain the patients’ authorization, and document the patients’ consent.

In 2013, the HIPAA Omnibus Final Rule allowed healthcare providers to communicate Electronic Protected Health Information (ePHI) with patients through unencrypted e-mail, if the provider informs the patient that their e-mail service is not secure, gains the patient’s authorization to accept the risk, and documents the patient’s consent.
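In software terms, the conditions Severino and the 2013 e-mail rule describe boil down to a three-part consent record that must be complete before PHI goes over an unsecured channel. Here is a minimal sketch of that gate; the class and function names are my own illustration, not drawn from any OCR guidance:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class PatientConsent:
    """The three steps OCR describes: warn the patient that the channel
    is not secure, obtain the patient's authorization, and document it."""
    warned_of_risk: bool = False
    authorized: bool = False
    documented_at: Optional[datetime] = None

def may_message_phi(consent: PatientConsent) -> bool:
    """PHI may go by standard text or unencrypted e-mail only if all
    three steps are on file for this patient."""
    return (consent.warned_of_risk
            and consent.authorized
            and consent.documented_at is not None)
```

A messaging system built this way refuses to send until the full consent trail exists, which is exactly the documentation an auditor would ask for.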

A HIMSS audience member asked Severino why the OCR hasn’t issued similar guidance for text messaging with patients. “I don’t see a difference,” Severino said. “I think it’s empowering the patient, making sure that their data is as accessible as possible in the way they want to receive it, and that’s what we want to do.”

“Wow! That’s a big change,” said Tom Leary, Vice President of Government Relations for HIMSS. “That’s wonderful. Actually, the physician community has been clamoring for clarification on that for several years now. Our physician community will be very supportive of that.”

The 2013 OCR guidance for e-mails and Severino’s announcement about text messages apply only to communications with patients. HIPAA Covered Entities and Business Associates are still forbidden to use unsecured communications tools to communicate with each other.

Messages sent through free e-mail services are not private. Google’s Gmail Terms of Service allow Google to “use…reproduce…communicate, publish…publicly display and distribute” your e-mail messages. Health care providers must use encrypted e-mail or secure e-mail systems to communicate ePHI outside of their organizations.

In 2012, a small medical practice was penalized $100,000 for sharing patient information through free Internet services, including e-mail. According to the resolution agreement, Phoenix Cardiac Surgery “daily transmitted ePHI from an Internet-based email account to workforce members’ personal Internet-based email accounts.”

While the OCR may be best-known for its HIPAA enforcement, it has pushed healthcare organizations to lower barriers that have prevented patients from obtaining their medical records. The Omnibus Rule required health care providers to recover only actual costs when providing patients with copies of their records.

In its 2016 guidance, the OCR set a $6.50 limit (inclusive of all labor, supplies, and postage) for health care providers “that do not want to go through the process of calculating actual or average allowable costs for requests for electronic copies of PHI maintained electronically.”

The federal requirement to recover actual costs, or a flat fee of $6.50, supersedes state laws that allowed providers to charge for medical record searches and per-page fees. Maine, for example, caps the cost of a medical record at $250, far above the federal $6.50 flat fee.
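The two pricing options described above can be reduced to a few lines of logic: a provider that calculates its actual allowable costs may charge that amount, and one that skips the calculation may fall back on the flat fee. This is a toy sketch under that reading of the guidance; the function name and structure are illustrative, not from any OCR document:

```python
from typing import Optional

FLAT_FEE = 6.50  # OCR's 2016 flat-fee option for electronic copies of ePHI

def allowable_charge(actual_cost: Optional[float] = None) -> float:
    """What a provider may charge a patient for an electronic copy of PHI.

    A provider that calculates actual allowable costs (labor, supplies,
    postage) may charge that amount; one that does not want to do the
    calculation may simply charge the $6.50 flat fee. State-law search
    or per-page fees above these amounts are superseded either way."""
    return actual_cost if actual_cost is not None else FLAT_FEE
```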


Some Of The Questions I Plan To Ask At #HIMSS18

Posted on February 23, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

As always, this year’s HIMSS event will feature enough noise, light and color to overwhelm your senses for months afterward. And talk about a big space to tread — I’ve come away with blisters more than once after attending.

Nonetheless, in my book it’s always worth attending the show. While no one vendor or session might blow you away, finding out directly what trends and products generated the most buzz is always good. The key is not only to attend the right educational sessions or meet the right people but to figure out how companies are making decisions.

Below are some of the questions that I hope to ask (and hopefully find answered) at the show. If you have other questions to suggest, I’d love to bring them with me — the way I see it, the more the merrier!

-Anne

Blockchain

Vendors: What functions does blockchain perform in your solution and what are the benefits of these additions? What made blockchain the best technology choice for getting the job done? What challenges have you faced in developing a platform that integrates blockchain technology, and how are you addressing them? Is blockchain the most cost-efficient way of accomplishing the task you have in mind? What problems is blockchain best suited to address?

Providers: Have you rolled out any blockchain-based systems? If you haven’t currently deployed blockchain technology, do you expect to do so in the future? When do you think that will happen? How will you know when it’s time to do so? What benefits do you think it will offer to your organization, and why? Do you think blockchain implementations could generate a significant level of additional server infrastructure overhead?

AI

Vendors: What makes your approach to healthcare AI unique and/or beneficial? What is involved in integrating your AI product or service with existing provider technology, and how long does it usually take? Do providers have to do this themselves or do you help? Did you develop your own algorithms, license your AI engine, or partner with someone else to deliver it? Can you share any examples of how your customers have benefited by using AI?

Providers: What potential do you think AI has to change the way you deliver care? What specific benefits can AI offer your organization? Do you think healthcare AI applications are maturing, and if not how will you know when they have? What types of AI applications potentially interest you, and are you pilot-testing any of them?

Interoperability

Vendors:  How does your solution overcome barriers still remaining to full health data sharing between all healthcare industry participants? What do you think are the biggest interoperability challenges the industry faces? Does your solution require providers to make any significant changes to their infrastructure or call for advanced integration with existing systems? How long does it typically take for customers to go live with your interoperability solution, and how much does it cost on average? In an ideal world, what would interoperability between health data partners look like?

Providers: Do you consider yourself to have achieved full, partial or little/no health data interoperability between you and your partners? Are you happy with the results you’ve gotten from your interoperability efforts to date? What are the biggest benefits you’ve seen from achieving full or partial interoperability with other providers? Have you experienced any major failures in rolling out interoperability? If so, what damage did they do if any? Do you think interoperability is a prerequisite to delivering value-based care and/or population health management?

What topics are you looking forward to hearing about at #HIMSS18? What questions would you like asked? Share them in the comments and I’ll see what I can do to find answers.

Radiology Centers Poised To Adopt Machine Learning

Posted on February 8, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

As with most other sectors of the healthcare industry, it seems likely that radiology will be transformed by the application of AI technologies. Of course, given the euphoric buzz around AI it’s hard to separate talk from concrete results. Also, it’s not clear who’s going to pay for AI adoption in radiology and where it is best used. But clearly, AI use in healthcare isn’t going away.

This notion is underscored by a new study by Reaction Data suggesting that both technology vendors and radiology leaders believe that widespread use of AI in radiology is imminent. The researchers argue that radiology AI applications are a “have to have” rather than a novel experiment, though survey respondents seem a little less enthusiastic.

The study, which included 133 respondents, focused on the use of machine learning in radiology. Researchers connected with a variety of relevant professionals, including directors of radiology, radiologists, techs, chiefs of radiology and PACS administrators.

It’s worth noting that the survey population was a bit lopsided. For example, 45% of respondents were PACS admins, while each of the other respondent types represented less than 10%. Also, 90% of respondents were affiliated with hospital radiology centers. Still, the results offer an interesting picture of how participants in the radiology business are looking at machine learning.

When asked how important machine learning was for the future of radiology, one-quarter of respondents said that it was extremely important, and another 59% said it was very or somewhat important. When the data was sorted by job title, it showed that roughly 90% of imaging directors said that machine learning would prove very important to radiology, followed by just over 75% of radiology chiefs. Radiology managers came in at around 60%. Clearly, the majority of radiology leaders surveyed see a future here.

About 90% of radiology chiefs reported being extremely familiar with machine learning, as did 75% of techs and roughly 60% of radiologists. A bit counterintuitively, less than 10% of PACS administrators reported that level of familiarity with the technology.

All of this is fine, but adoption is where the rubber meets the road. Reaction Data found that 15% of respondents said they’d been using machine learning for a while and 8% said they’d just gotten started.

Many more centers were preparing to jump in. Twelve percent reported that they were planning on adopting machine learning within the next 12 months, 26% of respondents said they were 1 to 2 years away from adoption and another 24% said they were 3+ years out.  Just 16% said they don’t think they’ll ever use machine learning in their radiology center.

For those who do plan to implement machine learning, top uses include analyzing lung imaging (66%), chest x-rays (62%), breast imaging (62%), bone imaging (41%) and cardiovascular imaging (38%). Meanwhile, among those who are actually using machine learning in radiology, breast imaging is by far the most common use, with 75% of respondents saying they used it in this case.

Clearly, applying the use of machine learning or other AI technologies will be tricky in any sector of medicine. However, if the survey results are any indication, the bulk of radiology centers are prepared to give it a shot.

Nuance Communications Focuses on Practical Application of AI Ahead of HIMSS18

Posted on January 31, 2018 | Written By

Colin Hung is the co-founder of the #hcldr (healthcare leadership) tweetchat one of the most popular and active healthcare social media communities on Twitter. Colin speaks, tweets and blogs regularly about healthcare, technology, marketing and leadership. He is currently an independent marketing consultant working with leading healthIT companies. Colin is a member of #TheWalkingGallery. His Twitter handle is: @Colin_Hung.

Is there a hotter buzzword than Artificial Intelligence (AI) right now? It dominated the discussion at the annual RSNA conference late last year and will undoubtedly be on full display at the upcoming HIMSS18 event next month in Las Vegas. One company, Nuance Communications, is cutting through the hype by focusing their efforts on practical applications of AI in healthcare.

According to Accenture, AI in healthcare is defined as:

A collection of multiple technologies enabling machines to sense, comprehend, act and learn so they can perform administrative and clinical healthcare functions. Unlike legacy technologies that are only algorithms/tools that complement a human, health AI today can truly augment human activity.

One of the most talked about applications of AI in healthcare is in the area of clinical decision support. By analyzing the vast stores of electronic health data, AI algorithms could assist clinicians in the diagnosis of patient conditions. Extend this idea a little further and you arrive in a world where patients talk to an electronic doctor who can determine what’s wrong and make recommendations for treatment.

Understandably, there is a growing concern around AI as a replacement for clinician-led diagnosis. This is more than simply fear of losing jobs to computers; questions are rightfully being asked about the datasets used to train AI algorithms and whether or not they are truly representative of patient populations. Detractors point to the recent embarrassing example of the “racist soap dispenser” – a viral video posted by Chukwuemeka Afigbo – as an example of how easy it is to build a product that ignores an entire portion of the population.

Nuance Communications, a leading provider of voice and language solutions for businesses and consumers, believes in AI. For years Nuance has been a pioneer in applying natural language processing (NLP) to assist physicians and healthcare workers. Since NLP is a specialized area of AI, it was natural (excuse the pun) for Nuance to expand into the world of AI.

Wisely Nuance chose to avoid using AI to develop a clinical decision support tool – a path they could have easily taken given how thousands use their PowerScribe platform to dictate physician notes. Instead, they focused on applying AI to improve clinical workflow. Their first application is in radiology.

Nuance embedded AI into their radiology systems in three specific ways:

  1. Using AI to help prioritize the list of unread images based on need. Traditionally images are read on a first-in, first-out basis (with the exception being emergency cases). Now an AI algorithm analyzes the patient data and prioritizes the images based on acuity. Thus, images for patients that are more critical rise to the top. This helps Radiologists use their time more effectively.
  2. Using AI to display the appropriate clinical guidelines to the Radiologist based on what’s being read from the image. As information is being transcribed through PowerScribe, the system analyzes the input in real-time and displays the guideline that matches. This helps to drive consistency and saves time for the Radiologist who no longer has to manually look up the guideline.
  3. Using AI to take measurements of lesion growth. Here the system analyzes the image of lesions and determines their size which is then displayed to the Radiologist for verification. This helps save time.
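Nuance’s actual implementation isn’t public, but the first item in the list above amounts to replacing a first-in, first-out reading queue with a priority queue keyed on a model-predicted acuity score. As a rough sketch (the scores, tuple layout, and function name below are all hypothetical):

```python
import heapq

def prioritized_worklist(studies):
    """Reorder unread imaging studies so the most acute cases are read
    first, instead of the traditional first-in, first-out order.

    `studies` is a list of (study_id, arrival_order, acuity) tuples,
    where acuity is a model-predicted score (higher = more critical).
    """
    # heapq is a min-heap, so negate acuity to pop the most critical
    # study first; arrival_order breaks ties in FIFO fashion.
    heap = [(-acuity, arrival, study_id)
            for study_id, arrival, acuity in studies]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, study_id = heapq.heappop(heap)
        order.append(study_id)
    return order
```

Usage: given `[("A", 1, 0.2), ("B", 2, 0.9), ("C", 3, 0.5)]`, the most critical study ("B") moves to the front of the reading list even though it arrived after "A".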

“There is a real opportunity here for us to use AI to not only improve workflows,” says Karen Holzberger, Vice President and General Manager of Diagnostic Solutions at Nuance. “But to help reduce burnout as well. Through AI we can reduce or eliminate a lot of small tasks so that Radiologists can focus more on what they do best.”

Rather than try to use AI to replace Radiologists, Nuance has smartly used AI to eliminate mundane and non-value-add tasks in radiology workflow. Nuance sees this as a win-win-win scenario. Radiologists are happier and more effective in their work. Patients receive better care. Productivity improves the healthcare system as a whole.

The Nuance website states: “The increasing pressure to produce timely and accurate documentation demands a new generation of tools that complement patient care rather than compete with it. Powered by artificial intelligence and machine learning, Nuance solutions build on over three decades of clinical expertise to slash documentation time by up to 45 percent—while improving quality by 36 percent.”

Nuance recently doubled down on AI, announcing the creation of a new AI marketplace for medical imaging. Researchers and software developers can put their AI-powered applications in the marketplace and expose them to the 20,000 Radiologists who use Nuance’s PowerScribe platform. Radiologists can download and use the applications they want or find interesting.

Through the marketplace, AI applications can be tested (both from a technical perspective as well as from a market acceptance perspective) before a full launch. “Transforming the delivery of patient care and combating disease starts with the most advanced technologies being readily available when and where it counts – in every reading room, across the United States,” said Peter Durlach, senior vice president, Healthcare at Nuance. “Our AI Marketplace will bring together the leading technical, research and healthcare minds to create a collection of image processing algorithms that, when made accessible to the wide array of radiologists who use our solutions daily, has the power to exponentially impact outcomes and further drive the value of radiologists to the broader care team.”

Equally important is the dataset the marketplace will generate. With 20,000 Radiologists from organizations around the world, the marketplace has the potential to be the largest, most diverse imaging dataset available to AI researchers and developers. This diversity may be key to making AI more universally applicable.

“AI is a nice concept,” continued Holzberger. “However, in the end you have to make it useful. Our customers have repeatedly told us that if it’s useful AND useable they’ll use it. That’s true for any healthcare technology, AI included.”

Federal Advisors Say Yes, AI Can Change Healthcare

Posted on January 26, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

The use of AI in healthcare has been the subject of scores of articles and endless debate among industry professionals over its benefits. The fragile consensus seems to be that while AI certainly has the potential to accomplish great things, it’s not ready for prime time.

That being said, some well-informed healthcare observers disagree. In an ONC blog post, thought leaders from the agency, AHRQ and the Robert Wood Johnson Foundation argue that, over the long term, AI could play an important role in the future of healthcare.

The group of institutions asked JASON, an independent group of scientists and academics who advise the federal government on science and technology issues, to look at AI’s potential. JASON’s job was to look at the technical capabilities, limitations and applications for AI in healthcare over the next 10 years.

In its report, JASON concluded that AI has broad potential for sparking significant advances in the industry and that the time may be right for using AI in healthcare settings.

Why is now a good time to deploy AI in healthcare? JASON offers a list of reasons, including:

  • Frustration with existing medical systems
  • Universal use of networked smart devices by the public
  • Acceptance of at-home services provided by companies like Amazon

But there’s more to consider. While the above conditions are necessary, they’re not enough to support an AI revolution in healthcare on their own, the researchers say. “Without access to high-quality, reliable data, the promise of AI will not be realized,” JASON’s report concludes.

The report notes that while we have access to a flood of digital health data which could fuel clinical applications, it will be important to address the quality of that data. There are also questions about how health data can be integrated into new tools. In addition, it will be important to make sure the data is accessible, and that data repositories maintain patient privacy and are protected by strong security measures, the group warns.

Going forward, JASON recommends the following steps to support AI applications:

  • Capturing health data from smartphones
  • Integrating social and environmental factors into the data mix
  • Supporting AI technology development competitions

According to the blog post, ONC and AHRQ plan to work with other agencies within HHS to identify opportunities. For example, the FDA is likely to look at ways to use AI to improve biomedical research, medical care and outcomes, as well as how it could support emerging technologies focused on precision medicine.

And in the future, the possibilities are even more exciting. If JASON is right, the more researchers study AI applications, the more worthwhile options they’ll find.

A Learning EHR for a Learning Healthcare System

Posted on January 24, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Can the health care system survive the adoption of electronic health records? When the HITECH Act drove the mass adoption of EHRs starting in 2009, we all hoped they would propel hospitals and clinics into a 21st-century consciousness. Instead, EHRs threaten to destroy those who have adopted them: the doctors whose work environment they degrade and the hospitals that they are pushing into bankruptcy. But the revolution in artificial intelligence that’s injecting new insights into many industries could also create radically different EHRs.

Here I define AI as software that, instead of dictating what a computer system should do, undergoes a process of experimentation and observation that creates a model to control the system, hopefully with far greater sophistication, personalization, and adaptability. Breakthroughs achieved in AI over the past decade now enable things that seemed impossible a bit earlier, such as voice interfaces that can both recognize and produce speech.

AI has famously been used by IBM Watson to make treatment recommendations. Analyses of big data (which may or may not qualify as AI) have saved hospitals large sums of money and even–finally, what we’ve been waiting for!–made patients healthier. But I’m talking in this article about a particular focus: the potential for changing the much-derided EHR. As many observers have pointed out, current EHRs are mostly billion-dollar file cabinets in electronic form. That epithet doesn’t even characterize them well enough–imagine instead a file cabinet that repeatedly screamed at you to check what you’re doing as you thumb through the papers.

How can AI create a new electronic health record? Major vendors have announced virtual assistants (see also John’s recent interview with MEDITECH, which mentions their interest in virtual assistants) to make their interfaces more intuitive and responsive, so there is hope that they’re watching other industries and learning from machine learning. I don’t know what the vendors are basing these assistants on, but in this article I’ll describe how some vanilla AI techniques could be applied to the EHR.

How a Learning EHR Would Work

An AI-based health record would start with the usual dashboard-like interface. Each record consists of hundreds of discrete pieces of data, such as age, latest blood pressure reading, a diagnosis of chronic heart failure, and even ZIP code and family status–important public health indicators. Each field of data would be called a feature in traditional AI. The goal is to find which combination of features–and their values, such as 75 for age–most accurately predicts what a clinician does with the EHR. With each click or character typed, the AI model looks at all the features, discards the bulk of them that are not useful, and uses the rest to present the doctor with fields and information likely to be of value.
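The prediction loop described above can be sketched in a few lines of Python. This is a toy illustration, not any vendor’s actual method: the field names, form names, and the simple per-feature vote counting are all hypothetical stand-ins for a real learned model trained on far more features.

```python
from collections import Counter, defaultdict

class LearningEHRModel:
    """Toy sketch: learn which form a clinician opens next from
    discrete record features, via naive per-feature vote counting."""

    def __init__(self):
        # (feature, value) -> Counter of forms opened in that context
        self.counts = defaultdict(Counter)

    def observe(self, record, form_opened):
        """Record one clinician action against each feature of the chart."""
        for feature, value in record.items():
            self.counts[(feature, value)][form_opened] += 1

    def predict(self, record):
        """Return the most likely next form and a crude confidence score."""
        votes = Counter()
        for feature, value in record.items():
            votes.update(self.counts[(feature, value)])
        if not votes:
            return None, 0.0
        form, n = votes.most_common(1)[0]
        return form, n / sum(votes.values())

model = LearningEHRModel()
model.observe({"diagnosis": "CHF", "age_range": "70-80"}, "cardiology_flowsheet")
model.observe({"diagnosis": "CHF", "age_range": "60-70"}, "cardiology_flowsheet")
model.observe({"diagnosis": "melanoma", "age_range": "50-60"}, "oncology_plan")

form, confidence = model.predict({"diagnosis": "CHF", "age_range": "70-80"})
```

A production system would replace the vote counting with a proper classifier and feature-selection step, but the shape is the same: observe what clinicians actually do, then surface the forms and fields the model expects them to need.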

The EHR will probably learn that the forms pulled up by a doctor for a heart patient differ from those pulled up for a cancer patient. One case might focus on behavior, another on surgery and medication. Clinicians certainly behave differently in the hospital from how they behave in their home offices, or even how they behave in another hospital across town with different patient demographics. A learning EHR will discover and adapt to these differences, while also capitalizing on the commonalities in the doctor’s behavior across all settings, as well as how other doctors in the practice behave.

Clinicians like to say that every patient is different: well, with AI tracking behavior, the interface can adapt to every patient.

AI can also make use of messy and incomplete data, the well-known weaknesses of health care. But it’s crucial, to maximize predictive accuracy, for the AI system to have access to as many fields as possible. Privacy rules, however, dictate that certain fields be masked and others made fuzzy (for instance, specifying age as a range from 70 to 80 instead of precisely 75). Although AI can still make use of such data, it might be possible to provide more precise values through data sharing agreements strictly stipulating that the data be used only to improve the EHR–not for competitive strategizing, marketing, or other frowned-on exploitation.
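As a rough illustration of the masking and fuzzing described above (the field names and bucket width are invented for this sketch, and real de-identification must follow HIPAA’s rules, not this toy):

```python
def fuzz_record(record, masked_fields=("name", "ssn"), age_bucket=10):
    """Drop directly identifying fields and coarsen age into a range.
    Hypothetical field names; illustrative only, not a HIPAA method."""
    out = {}
    for field, value in record.items():
        if field in masked_fields:
            continue  # mask: omit the field entirely
        if field == "age":
            lo = (value // age_bucket) * age_bucket
            out["age_range"] = f"{lo}-{lo + age_bucket}"  # e.g. 75 -> "70-80"
        else:
            out[field] = value
    return out

shared = fuzz_record({"name": "Pat Smith", "age": 75, "diagnosis": "CHF"})
```

An AI model trained on the fuzzed output loses some precision, which is exactly the trade-off the data sharing agreements mentioned above would try to soften.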

A learning EHR would also be integrated with other innovations that increase available data and reduce labor–for instance, devices worn by patients to collect vital signs and exercise habits. This could free up doctors to spend less time collecting statistics and more time treating the patient.

Potential Impacts of AI-Based Records

What we hope for is interfaces that give the doctor just what she needs, when she needs it. A helpful interface includes autocompletion for data she enters (one feature of a mobile solution called Modernizing Medicine, which I profiled in an earlier article), clear and consistent displays, and prompts that are useful instead of distracting.

Abrupt and arbitrary changes to interfaces can be disorienting and create errors. So perhaps the EHR will keep the same basic interface but use cues such as changes in color or highlighted borders to suggest to the doctor what she should pay attention to. Or it could occasionally display a dialog box asking the clinician whether she would like the EHR to upgrade and streamline its interface based on its knowledge of her behavior. This intervention might be welcome because a learning EHR should be able to drastically reduce the number of alerts that interrupt the doctors’ work.

Doctors’ burdens should be reduced in other ways too. Current blind and dumb EHRs require doctors to enter the same information over and over, and even to resort to the dangerous practice of copy and paste. Naturally, observers who write about this problem take the burden off of the inflexible and poorly designed computer systems, and blame the doctors instead. But doing repetitive work for humans is the original purpose of computers, and what they’re best at doing. Better design will make dual entries (and inconsistent records) a thing of the past.

Liability

Current computer vendors disclaim responsibility for errors, leaving it up to the busy doctor to verify that the system carried out the doctor’s intentions accurately. Unfortunately, it will be a long time (if ever) before AI-driven systems are accurate enough to give vendors the confidence to take on risk. However, AI systems have an advantage over conventional ones: they can assign a confidence level to each decision they make. Therefore, they could show the doctor how much the system trusts itself, and a high degree of doubt could let the doctor know she should take a closer look.
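A minimal sketch of how such a confidence level might be surfaced (the threshold and message wording are invented for illustration, not taken from any product):

```python
def present_with_confidence(suggestion, confidence, threshold=0.90):
    """Attach the model's self-assessed confidence to each suggestion,
    flagging low-confidence ones so the doctor takes a closer look."""
    if confidence >= threshold:
        return f"{suggestion} (confidence {confidence:.0%})"
    return f"REVIEW {suggestion}: confidence only {confidence:.0%}"

high = present_with_confidence("pull up cardiology flowsheet", 0.97)
low = present_with_confidence("pull up cardiology flowsheet", 0.55)
```

The point is not the formatting but the contract: every automated action carries a number the doctor can use to decide how much scrutiny it deserves.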

One of the popular terms that have sprung up over the past decade to describe health care reform is the “learning healthcare system.” A learning system requires learning on every level and at every stage. Because nobody likes the designs of current EHRs, clinicians should be happy to try a new EHR with a design based directly on their behavior.

UPMC Sells Oncology Analytics Firm To Elsevier

Posted on January 22, 2018 | Written By Anne Zieger

Using analytics tools to improve cancer treatment can be very hard. That struggle is exemplified by the problems faced by IBM Watson Health, which dove into the oncology analytics field a few years ago but made virtually no progress in improving cancer treatment.

With any luck, however, Via Oncology will be more successful at moving the needle in cancer care. The company, which offers decision support for cancer treatment and best practices in cancer care management, was just acquired by information analytics firm Elsevier, which plans to leverage the company’s technology to support its healthcare business.

Elsevier’s Clinical Solutions group works to improve patient outcomes, reduce clinical errors and optimize cost and reimbursements for providers. Via Oncology, a former subsidiary of the University of Pittsburgh Medical Center, develops and implements clinical pathways for cancer care. Via Oncology spent more than 15 years as part of UPMC prior to the acquisition.

Via Oncology’s Via Pathways tool relies on evidence-based content, developed by oncologists, to create clinical algorithms covering 95% of cancer types treated in the US. In addition to using the content as a basis for algorithm development, Via Oncology shares it with physicians and their staff through its Via Portal, a decision support tool which integrates with provider EMRs.

According to Elsevier, Via Pathways covers more than 2,000 unique patient presentations with clinical algorithms and recommendations for all major aspects of cancer care. The system can also offer nurse triage and symptom tracking, cost information analytics, quality reporting and medical home tools for cancer centers.

According to the prepared statement issued by Elsevier, UPMC will continue to be a Via Oncology customer, which makes it clear that the healthcare giant wasn’t dumping its subsidiary or selling it for a fire sale price.

That’s probably because in addition to UPMC, more than 1,500 oncology providers in community, hospital and academic settings hold Via Pathways licenses. What makes this model particularly neat is that these cancer centers are working collaboratively to improve the product as they use it. Too few specialty treatment professionals work together this effectively, so it’s good to see Via Oncology leveraging user knowledge this way.

While most of this seems clear, I was left with the question of what role, if any, genomics plays in Via Oncology’s strategy. While it may be working with such technologies behind the scenes, the company didn’t mention any such initiatives in its publicly-available information.

This approach seems to fly in the face of existing trends and in particular, physician expectations. For example, a recent survey of oncologists by medical publication Medscape found that 71% of respondents felt genomic testing was either very important or extremely important to their field.

However, Via Oncology may have something up its sleeve and is waiting for it to be mature before it dives into the genomics pool. We’ll just have to see what it does as part of Elsevier.

Are there other areas beyond cancer where a similar approach could be taken?

Key Articles in Health IT from 2017 (Part 2 of 2)

Posted on January 4, 2018 | Written By Andy Oram

The first part of this article set a general context for health IT in 2017 and started through the year with a review of interesting articles and studies. We’ll finish the review here.

A thoughtful article suggests a positive approach toward health care quality. The author stresses the value of organic change, although using data for accountability has value too.

An article extolling digital payments actually said more about the out-of-control complexity of the US reimbursement system. It may or may not be coincidental that the article appeared one day after the CommonWell Health Alliance announced an API whose main purpose seems to be to facilitate payment and other data exchanges related to law and regulation.

A survey by KLAS asked health care providers what they want in connected apps. Most apps currently just display data from a health record.

A controlled study revived the concept of Health Information Exchanges as stand-alone institutions, examining the effects of emergency departments using one HIE in New York State.

In contrast to many leaders in the new Administration, Dr. Donald Rucker received positive comments upon acceding to the position of National Coordinator. More alarm was raised about the appointment of Scott Gottlieb as head of the FDA, but a later assessment gave him high marks for his first few months.

Before Dr. Gottlieb got there, the FDA was already loosening up. The 21st Century Cures Act instructed it to keep its hands off many health-related digital technologies. After kneecapping consumer access to genetic testing and then allowing it back into the ring in 2015, the FDA advanced consumer genetics another step this year with approval for 23andMe tests about risks for seven diseases. A close look at another DNA site’s privacy policy, meanwhile, warns that their use of data exploits loopholes in the laws and could end up hurting consumers. Another critique of the Genetic Information Nondiscrimination Act has been written by Dr. Deborah Peel of Patient Privacy Rights.

Little noticed was a bill authorizing the FDA to be more flexible in its regulation of digital apps. Shortly after, the FDA announced its principles for approving digital apps, stressing good software development practices over clinical trials.

No improvement has been seen in the regard clinicians have for electronic records. Subjective reports condemned the notorious number of clicks required. A study showed they spend as much time on computer work as they do seeing patients. Another study found the ratio to be even worse. Shoving the job onto scribes may introduce inaccuracies.

The time spent might actually pay off if the resulting data could generate new treatments, increase personalized care, and lower costs. But the analytics that are critical to these advances have stumbled in health care institutions, in large part because of the perennial barrier of interoperability. Even so, analytics are showing scattered successes.

Deloitte published a guide to implementing health care analytics. And finally, a clarion signal that analytics in health care has arrived: WIRED covers it.

A government cybersecurity report warns that health technology will likely soon contribute to the stream of breaches in health care.

Dr. Joseph Kvedar identified fruitful areas for applying digital technology to clinical research.

The Government Accountability Office, terror of many US bureaucracies, came out with a report criticizing the sloppiness of quality measures at the VA.

A report by leaders of the SMART platform listed barriers to interoperability and the use of analytics to change health care.

To improve the lower outcomes seen by marginalized communities, the NIH is recruiting people from those populations to trust the government with their health data. A policy analyst calls on digital health companies to diversify their staff as well. Google’s parent company, Alphabet, is also getting into the act.

Specific technologies

Digital apps are part of most modern health efforts, of course. A few articles focused on the apps themselves. One study found that digital apps can improve depression. Another found that an app can improve ADHD.

Lots of intriguing devices are being developed.

Remote monitoring and telehealth have also been in the news.

Natural language processing and voice interfaces are becoming a critical part of spreading health care.

Facial recognition is another potentially useful technology. It can replace passwords or devices to enable quick access to medical records.

Virtual reality and augmented reality seem to have some limited applications to health care. They are useful foremost in education, but also for pain management, physical therapy, and relaxation.

A number of articles hold out the tantalizing promise that interoperability headaches can be cured through blockchain, the newest hot application of cryptography. But one analysis warned that blockchain will be difficult and expensive to adopt.

3D printing can be used to produce models for training purposes as well as surgical tools and implants customized to the patient.

A number of other interesting companies in digital health can be found in a Fortune article.

We’ll end the year with a news item similar to one that began the article: serious good news about the ability of Accountable Care Organizations (ACOs) to save money. I would also like to mention three major articles of my own.

I hope this review of the year’s articles and studies in health IT has helped you recall key advances or challenges, and perhaps flagged some valuable topics for you to follow. 2018 will continue to be a year of adjustment to new reimbursement realities touched off by the tax bill, so health IT may once again languish somewhat.