
Patient Billing And Collections Process Needs A Tune-Up

Posted on October 1, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

A new study from a patient payments vendor suggests that many healthcare organizations haven’t optimized their patient billing and collections process, a vulnerability which has persisted despite their efforts to crack the problem.

The survey found that while the entire billing and collections process was flawed, respondents said that collecting patient payments was the toughest problem, followed by the need for better tools and technologies.

Another issue was the nature of their collections efforts. Sixty percent of responding organizations use collections agencies, an approach which can establish an adversarial relationship between patient and provider and perhaps drive consumers elsewhere.

Yet another concern was long delays in issuing bills to patients. The survey found that 65% of organizations take more than 60 days on average to collect patient payments, and 40% wait more than 90 days.

These results align with other studies of patient payments, all of which echo the notion that the patient collection process is far from what it should be.

For example, a study by payment services vendor InstaMed found that more than 90% of consumers would like to know what their payment responsibility is prior to a provider visit. Worse, very few consumers even know what their deductible, co-insurance and out-of-pocket maximums are, making it more likely that they will be hit with a bill they can’t afford.

As with the Cedar study, InstaMed’s research found that providers are waiting a long time to collect patient payments, with three-quarters of organizations waiting a month to close out patient balances.

Not only that, investments in revenue cycle management technology aren’t necessarily enough to kickstart patient payment volumes. A survey done last year by the Healthcare Financial Management Association and vendor Navigant found that while three-quarters of hospitals said that their RCM technology budget was increasing, they weren’t necessarily getting the ROI they’d hoped to see.

According to the survey, 77% of hospitals with fewer than 100 beds and 78% of hospitals with 100 to 500 beds planned to increase their RCM spending. Their areas of investment included business intelligence analytics, EHR-enabled workflow or reporting, revenue integrity, coding and physician/clinician documentation options.

Still, process improvements seem to have had a bigger payoff. These hospitals are placing a lot of faith in revenue integrity programs, with 22% saying that revenue integrity was a top RCM focus area for this year. Those who had already put such a program in place said that it offered significant benefits, including increased net collections (68%), greater charge capture (61%) and reduced compliance risks (61%).

As I see it, the key takeaways here are that making sure patients know what to expect financially and putting programs in place to improve internal processes can have a big impact on patient payments. Still, with consumers financing a lot of their care these days, getting their dollars in the door will likely remain an issue. After all, you can’t get blood from a stone.

Applying AI Based Outlier Detection to Healthcare – Interview with Dr. Gidi Stein from MedAware

Posted on September 17, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Most people who receive healthcare understand that healthcare is as much art as it is science. We don’t expect our doctors to be perfect or know everything because the human body is just too complex and there are so many factors that influence health. What’s hard for patients to understand is when obvious human errors occur. This is especially true when technology or multiple layers of humans should have caught the obvious.

This is exactly why I was excited to interview Dr. Gidi Stein, CEO and Co-founder of MedAware. As stated on their website, their goal is to eliminate prescription errors. In the interview below, you’ll learn more about what MedAware and Dr. Stein are doing to achieve this goal.

Tell us a little about yourself and MedAware.

Early in my career, I worked in the Israeli high-tech industry and served as CTO and Chief Architect of several algorithm-rich startups. However, after many years working in technology, I decided to return to school and study medicine. In 2002, I graduated from Tel Aviv University Medical School and went on to specialize in internal medicine, treating patients and teaching students and residents in one of Israel’s largest hospitals.

After working as a physician for several years, I heard a heartbreaking story, which ultimately served as my motivation and inspiration to found MedAware. A physician was treating a 9-year-old boy who suffered from asthma. To treat the symptoms, the physician entered the electronic prescribing environment and selected Singulair, a standard treatment for asthma, from the drop-down menu. Unfortunately, he accidentally clicked Sintrom, an anticoagulant (blood thinner). Tragically, neither the physician, the pharmacist nor the parent caught this error, which resulted in the boy’s untimely death. This avoidable, medication-related complication and death was caused by a typo.

Having worked as a physician for many years, I had a difficult time accepting that, with all the medical intervention and technological support we rely on, our healthcare system was not intelligent enough to prevent errors like this. This was a symptom of a greater challenge: how can we identify and prevent medication-related complications before they occur? Given my combined background in technology and medicine, I knew that there must be a solution to eliminate these types of needless errors. I founded MedAware to transform patient safety and save lives.

Describe the problem with prescription-based medication errors that exists today.  What’s the cause of most of these errors?

Every year in the U.S. alone, there are 1.5 million preventable medication errors, which result in patient injury or death. In fact, medication errors are the third leading cause of death in the US, and errors related to incorrect prescriptions are a major part of these. Today’s prescription-related complications fall into two main categories: medication errors that occur at the point of order entry (like the example of the 9-year-old boy) and errors that result from evolving adverse drug events (ADEs). Point-of-order-entry errors are a consequence of medication reconciliation challenges, typos, incorrect dosage input and other clinical inconsistencies.

Evolving ADEs are, in fact, the bulk of the errors that occur – almost 2/3 of errors are those that happen after a medication was correctly prescribed. These are often the most catastrophic errors, as they are completely unforeseen, and don’t necessarily result from physician error. Rather, they occur when a patient’s health status has changed, and a previously safe medication becomes unsafe.

MedAware uses AI to detect outlier prescriptions.  It seems that everything is being labeled AI, so how does this work and how effective is it at detecting medication errors?

AI is best used to analyze large-scale data to identify patterns and outliers to those patterns. The common theme in industries such as aviation, cybersecurity and credit card fraud is that they are rich with millions of transactions, 99.99% of which are fine. But a small fraction of them are hazardous, and these dangers most often occur in new and unexpected ways. In these industries, AI is used to crunch millions of transactions, identify patterns, and most importantly, identify outliers to those patterns as potential hazards with high accuracy.

Medication safety is similar to these industries. Here too, millions of medications are prescribed and dispensed every day, and in 99.99% of cases, the right medication is prescribed and dispensed to the right patient. But, on rare occasions, an unexpected error or oversight may put patients at risk. MedAware analyzes millions of clinical records to identify errors and oversights as statistical outliers to the normal behavioral patterns of providers treating similar patients. Our data shows that this methodology identifies errors and ADEs with high accuracy and clinical relevance, and that most of the errors found by our system would not have been caught by any other existing system.
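
To make the outlier idea concrete, here is a minimal sketch of the general approach, with invented field names and a made-up threshold. It is not MedAware’s actual model, just an illustration of flagging a prescription that peers almost never order for similar patients.

```python
# Illustrative sketch only: flag prescriptions that are statistical outliers
# relative to how peers treat similar patients. Field names and the threshold
# are hypothetical; this is not MedAware's actual algorithm.
from collections import Counter

def build_prescribing_profile(records):
    """Count how often each drug is prescribed per patient profile
    (here: age band + sex + primary diagnosis)."""
    profile_counts = Counter()
    profile_totals = Counter()
    for r in records:
        profile = (r["age_band"], r["sex"], r["diagnosis"])
        profile_counts[(profile, r["drug"])] += 1
        profile_totals[profile] += 1
    return profile_counts, profile_totals

def is_outlier(new_rx, profile_counts, profile_totals, min_rate=0.001):
    """Flag a new prescription if peers almost never order this drug
    for patients with the same profile."""
    profile = (new_rx["age_band"], new_rx["sex"], new_rx["diagnosis"])
    total = profile_totals.get(profile, 0)
    if total == 0:
        return False  # no peer data for this profile; cannot judge
    rate = profile_counts.get((profile, new_rx["drug"]), 0) / total
    return rate < min_rate

# Example: Viagra ordered for a 2-year-old would have a near-zero peer rate
# and be surfaced to the prescriber for review.
```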

Are most of the errors you find obvious errors that a human could have detected but just missed or are you finding surprising errors as well?  Can you share some stories of what you’ve found?

The errors that we find are obvious errors; any physician would agree that they are indeed erroneous. These include prescribing chemotherapy to healthy individuals, failing to stop anticoagulation for a bleeding patient, prescribing birth control pills to a 70-year-old male and prescribing Viagra to a 2-year-old baby. All of these are obvious errors, so why didn’t the prescribers pick these up? The answer is simple: they are human, and humans err, especially when they are less experienced and overworked. Our software is able to mirror back to the providers the crowdsourced behavioral patterns of their peers and identify outliers to these patterns as errors.

You recently announced a partnership with Allscripts and their dbMotion interoperability solution.  How does that work and what’s the impact of this partnership?

Today’s healthcare systems have created a reality where patient health information can be scattered across multiple health systems, infrastructures and EHRs. The dbMotion health information exchange platform aggregates and harmonizes that scattered patient data, delivering the information clinicians need in a usable and actionable format at the point of care, within the provider’s native and familiar workflow. With dbMotion, all of the patient’s records are in one place. MedAware sits on top of the dbMotion interoperability platform as a layer of safety, monitoring the thousands of clinical inputs in the system and issuing warnings with even greater accuracy. MedAware catches medication errors that would otherwise have been missed due to a decentralized patient health record. In addition to identifying prescription-based medication errors, MedAware can also notify physicians of patients who are at risk of opioid addiction.

This partnership will allow any institution using Allscripts’ dbMotion to easily implement MedAware’s system in a streamlined manner, with each installation being quick and effortless.

Once MedAware identifies a prescription error, how do you communicate that information back to the provider? Do you integrate your solution with the EHR vendor?

Yes, MedAware is integrated with EHR platforms. This is necessary both for error detection and for communicating the warning to the provider. There are two intervention scenarios: 1) Synchronous – when errors are caught at the point of order entry, a popup alert appears within the EHR user interface, without disturbing the provider’s workflow, and the provider can choose to accept or reject the alert. 2) Asynchronous – the error/ADE is caught following a change in the patient’s clinical record (e.g., a new lab result or vital sign), long after the prescription was entered. These alerts are displayed as a physician’s task, within the physician’s workflow and the EHR’s user interface.
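
As a rough sketch of the two intervention paths described above (the EHR hooks and names here are invented, not MedAware’s actual integration), an alert could be routed either as an immediate popup at order entry or as a queued task when a later data change triggers it:

```python
# Hypothetical sketch of routing a detected issue through the two
# intervention paths described above; the EHR object and its methods
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Alert:
    patient_id: str
    drug: str
    reason: str

def route_alert(alert: Alert, at_order_entry: bool, ehr):
    if at_order_entry:
        # Synchronous: show a popup in the ordering workflow and let the
        # prescriber accept or reject it on the spot.
        decision = ehr.show_popup(alert.patient_id,
                                  f"{alert.drug}: {alert.reason}")
        return decision  # e.g., "accepted" or "rejected"
    # Asynchronous: the prescription already exists; file the warning as a
    # task in the responsible physician's worklist instead.
    ehr.create_task(alert.patient_id,
                    f"Review {alert.drug}: {alert.reason}")
    return "queued"
```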

What’s next for MedAware?  Where are you planning to take this technology?

The next steps for us are:

  1. Scale our current technology to grow to 20 million lives analyzed by 2020
  2. Create additional patient-safety-centered solutions for providers, such as opioid dependency risk assessment, gaps in care and trend projection analysis.
  3. Share our life-saving insights directly with those who need them most – consumers.

Does NLP Deserve To Be The New Hotness In Healthcare?

Posted on August 30, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Lately, I’ve been seeing a lot more talk about the benefits of using natural language processing technology in healthcare. In fact, when I Googled the topic, I turned up a number of articles on the subject published over the last several weeks. Clearly, something is afoot here.

What’s driving the happy talk? One case in point is a new report from health IT industry analyst firm Chilmark Research laying out 12 possible use cases for NLP in healthcare.

According to Chilmark, some of the most compelling options include speech recognition, clinical documentation improvement, data mining research, computer-assisted coding and automated registry reporting. Its researchers also seem to be fans of clinical trial matching, prior authorization, clinical decision support and risk adjustment and hierarchical condition categories, approaches it labels “emerging.”

From what I can see, the highest profile application of NLP in healthcare is using it to dig through unstructured data and text. For example, a recent article describes how Intermountain Healthcare has begun identifying heart failure patients by reading data from 25 different free text documents stored in the EHR. Clearly, exercises like these can have an immediate impact on patient health.
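
As a toy illustration of this kind of free-text mining (a minimal sketch, not Intermountain’s actual pipeline), a rule-based pass over clinical notes might flag likely heart failure mentions for review:

```python
# Minimal, illustrative sketch of rule-based free-text screening for heart
# failure mentions in clinical notes. Real systems use far richer NLP
# (negation handling, section detection, machine learning); this is not any
# vendor's or health system's actual pipeline.
import re

HF_PATTERNS = [
    r"\bheart failure\b",
    r"\bcongestive heart failure\b",
    r"\bCHF\b",
    r"\breduced ejection fraction\b",
    r"\bEF\s*(?:of\s*)?([0-3]?\d)\s*%",   # e.g., "EF 30%"
]
NEGATIONS = [r"\bno evidence of\b", r"\bdenies\b", r"\bruled out\b"]

def screen_note(note_text: str) -> bool:
    """Return True if the note likely mentions heart failure affirmatively."""
    text = note_text.lower()
    for pattern in HF_PATTERNS:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            # Crude negation check: look a few words back from the match.
            window = text[max(0, match.start() - 40):match.start()]
            if not any(re.search(neg, window) for neg in NEGATIONS):
                return True
    return False

notes = [
    "Impression: acute on chronic CHF exacerbation, EF 30% on last echo.",
    "No evidence of heart failure; chest pain is musculoskeletal.",
]
print([screen_note(n) for n in notes])  # [True, False]
```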

However, stories like the above are actually pretty unusual. Yes, healthcare organizations have been working to use NLP to mine text for some time, and it seems like a very logical way to filter out critical information. But is there a reason that NLP use even for this purpose isn’t as widespread as one might think? According to one critic, the answer is yes.

In a recent piece, Dale Sanders, president of technology at Health Catalyst, goes after the use of comparative data, predictive analytics and NLP in healthcare, arguing that their benefits to healthcare organizations have been oversold.

Sanders, who says he came to healthcare with a deep understanding of NLP and predictive analytics, contends that NLP has had “essentially no impact” on healthcare. “We’ve made incremental progress, but there are fundamental gaps in our industry’s data ecosystem – missing pieces of the data puzzle – that inherently limit what we can achieve with NLP,” Sanders argues.

He doesn’t seem to see this changing in the near future either. Given how much money has already been sunk in the existing generation of EMRs, vendors have no incentive to improve their capacity for indexing information, Sanders says.

“In today’s EMRs, we have little more than expensive word processors,” he writes. “I keep hoping that the Googles, Facebooks and Amazons of the world will quietly build a new generation EMR.” He’s not the only one, though that’s a topic for another article.

I wish I could say that I side with researchers like Chilmark that see a bright near-term future for NLP in healthcare. After all, part of why I love doing what I do is exploring and getting excited about emerging technologies with high potential for improving healthcare, and I’d be happy to wave the NLP flag too.

Unfortunately, my guess is that Sanders is right about the obstacles that stand in the way of widespread NLP use in our industry. Until we have a more robust way of categorizing healthcare data and text, searching through it for value can only go so far. In other words, it may be a little too soon to pitch NLP’s benefits to providers.

Can Providers Survive If They Don’t Get Population Health Management Right?

Posted on August 27, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Most providers know that they won’t succeed with population health management unless they get some traction in a few important areas — and that if they don’t, they could face disaster as their share of value-based payments grows. The thing is, getting PHM right is proving to be a mind-boggling problem for many.

Let’s start with some numbers which give us at least one perspective on the situation.

According to a survey by Health Leaders Media, 87% of respondents said that improving their population health management chops was very important. Though the article summarizing the study doesn’t say this explicitly, we all know that they have to get smart about PHM if they want to have a prayer of prospering under value-based reimbursement.

However, it seems that the respondents aren’t making nearly as much PHM progress as they’d like. For example, just 38% of respondents told Health Leaders that they attributed 25% or more of their organization’s net revenue to risk-based pop health management activities, a share which has fallen two percent from last year’s results.

More than half (51%) said that their top barrier to successfully deploying or expanding pop health programs was up-front funding for care management, IT and infrastructure. They also said that engaging patients in their own care (45%) and getting meaningful data into providers’ hands (33%) weren’t proving to be easy tasks.

At this point it’s time for some discussion.

Obviously, providers grapple with competing priorities every time they try something new, but the internal conflicts are especially clear in this case.

On the one hand, it takes smart care management to make value-based contracts feasible. That could call for a time-consuming and expensive redesign of workflow and processes, patient education and outreach, hiring case managers and more.

Meanwhile, no PHM effort will blossom without the right IT support, and that could mean making some substantial investments, including custom-developed or third-party PHM software, integrating systems into a central data repository, sophisticated data analytics and a whole lot more.

Putting all of this in place is a huge challenge. Usually, providers lay the groundwork for a next-gen strategy in advance, then put infrastructure, people and processes into place over time. But that’s a little tough in this case. We’re talking about a huge problem here!

I get it that vendors began offering off-the-shelf PHM systems or add-on modules years ago, that one can hire consultants to change up workflow and that new staff should be on-board and trained by now. And obviously, no one can say that the advent of value-based care snuck up on them completely unannounced. (In fact, it’s gotten more attention than virtually any other healthcare issue I’ve tracked.) Shouldn’t that have done the trick?

Well, yes and no. Yes, in that in many cases, any decently-run organization will adapt if they see a trend coming at them years in advance. No, in that the shift to value-based payment is such a big shift that it could be decades before everyone can play effectively.

When you think about it, there are few things more disruptive to an organization than changing not just how much it’s paid, but when and how it’s paid and what it has to do in return. Yes, I too am sick of hearing tech startups beat the term “disruptive” to death, but I think it applies in a fairly material sense this time around.

As readers will probably agree, health IT can certainly do something to ease the transition to value-based care. But HIT leaders won’t get the chance if their organization underestimates the scope of the overall problem.

An Interesting Overview Of Alphabet’s Healthcare Investments

Posted on June 27, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Recently I’ve begun reading a blog called The Medical Futurist which offers some very interesting fare. In addition to some intriguing speculation, it includes some research that I haven’t seen anywhere else. (It is written by a physician named Bertalan Mesko.)

In this case, Mesko has buried a shrewd and well-researched piece on Alphabet’s healthcare investments in an otherwise rambling article. (The rambling part is actually pretty interesting on its own, by the way.)

The piece offers a rather comprehensive update on Alphabet’s investments in and partnerships with healthcare-related companies, suggesting that no other contender in Silicon Valley is investing in this sector as heavily as Alphabet’s GV (formerly Google Ventures). I don’t know for certain whether he’s right about this, but it’s probably true.

By Mesko’s count, GV has backed almost 60 health-related enterprises since the fund was first kicked off in 2009. These investments include direct-to-consumer genetic testing firm 23andme, health insurance company Oscar Health, telemedicine venture Doctor on Demand and Flatiron Health, which is building an oncology-focused data platform.

Mesko also points out that GV has had an admirable track record so far, with five of the companies it first backed going public in the last year. I’m not sure I agree that going public is per se a sign of success – a lot depends on how the IPO is received by Wall Street – but I see his logic.

In addition, he notes that Alphabet is stocking up on intellectual resources. The article cites research by Ernst & Young reporting that Alphabet filed 186 healthcare-related patents between 2013 and 2017.

Most of these patents are related to DeepMind, which Google acquired in 2014, and Verily Life Sciences (formerly Google Life Sciences). While these deals are interesting in and of themselves, on a broader level the patents demonstrate Alphabet’s interest in treating chronic illnesses like diabetes and the use of bioelectronics, he says.

Meanwhile, Verily continues to work on a genetic data-collecting initiative known as the Baseline Study. It plans to leverage this data, using some of the same algorithms behind Google’s search technology, to pinpoint what makes people healthy.

It’s a grand and somewhat intimidating picture.

Obviously, there’s a lot more to discuss here, and even Mesko’s in-depth piece barely scratches the surface of what can come out of Alphabet and Google’s health investments. Regardless, it’s worth keeping track of their activity in the sector even if you find it overwhelming. You may be working for one of those companies someday.

Healthcare AI Needs a Breadth and Depth of Data

Posted on May 17, 2018 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Today I’m enjoying the New England HIMSS Spring Conference, including an amazing keynote session by Dale Sanders from Health Catalyst. Next week I’ll follow up this blog post with some other insights that Dale shared at the New England HIMSS event, but today I just wanted to highlight one powerful concept that he shared:

Healthcare AI Needs a Breadth and Depth of Data

As part of this idea, Dale shared the following image to illustrate how much data is really needed for AI to effectively assess our health:

Dale pointed out that in healthcare today we really only have access to the data in the bottom right corner. That’s not enough data for AI to be able to properly assess someone’s health. Dale also suggested the following about EHR data:

Long story short, the EHR data is not going to be enough to truly assess someone’s health. As Google recently proved, a simple algorithm with more data is much more powerful than a sophisticated algorithm with less data. While we think we have a lot of data in healthcare, we really don’t have that much data. Dale Sanders made a great case for why we need more data if we want AI to be effective in healthcare.
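
To illustrate the “more data beats a fancier model” point in a toy setting (synthetic data, purely illustrative, and not the Google result referenced above), compare a simple classifier trained on a large sample with a more complex one trained on a small one:

```python
# Toy illustration of "a simple algorithm with more data can beat a
# sophisticated algorithm with less data". Synthetic data only; results
# will vary, but the data-starved model typically lags here.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=50_000, n_features=30,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Sophisticated model, but starved of data (only 200 training examples).
small_data_model = GradientBoostingClassifier(random_state=0)
small_data_model.fit(X_train[:200], y_train[:200])

# Simple model with the full training set (40,000 examples).
big_data_model = LogisticRegression(max_iter=1000)
big_data_model.fit(X_train, y_train)

print("boosted trees, 200 rows :",
      accuracy_score(y_test, small_data_model.predict(X_test)))
print("logistic reg, 40k rows  :",
      accuracy_score(y_test, big_data_model.predict(X_test)))
```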

What are you doing in your organization to collect data? What are you doing to get access to this data? Does collection of all of this data scare anyone? How far away are we from this data driven, AI future? Let us know your thoughts in the comments.

Google And Fitbit Partner On Wearables Data Options

Posted on May 7, 2018 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

Fitbit and Google have announced plans to work together, in a deal intended to “transform the future of digital health and wearables.” While the notion of transforming digital health is hyperbole even for companies the size of Google and Fitbit, the pairing does have plenty of potential.

In a nutshell, Fitbit and Google expect to take on both consumer and enterprise health projects that integrate data from EMRs, wearables and other sources of patient information. Given the players involved, it’s hard to doubt that at least something neat will emerge from their union.

Among the first items on the agenda, the pair plans to use Google’s new Cloud Healthcare API to connect Fitbit data with EMRs. Of course, readers will know that it’s one thing to say this and another to actually do it, but gross oversimplifications aside, the idea is worth pursuing.
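
To make that concrete, wearable readings typically flow into clinical systems as FHIR resources. Below is a minimal, illustrative sketch of packaging a daily step count as a FHIR Observation and POSTing it to a FHIR endpoint; the server URL and patient reference are placeholders, not Google’s or Fitbit’s actual integration.

```python
# Illustrative sketch only: represent a wearable step count as a FHIR
# Observation and send it to a FHIR server. The endpoint is a placeholder;
# authentication, error handling and real resource profiles are omitted.
import requests

FHIR_BASE = "https://example-fhir-server.test/fhir"  # placeholder URL

observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "activity"}]}],
    "code": {"coding": [{"system": "http://loinc.org",
                         "code": "55423-8",
                         "display": "Number of steps in 24 hour Measured"}]},
    "subject": {"reference": "Patient/example"},      # placeholder patient
    "effectiveDateTime": "2018-05-07",
    "valueQuantity": {"value": 8432, "unit": "steps"},
}

resp = requests.post(f"{FHIR_BASE}/Observation", json=observation,
                     headers={"Content-Type": "application/fhir+json"})
print(resp.status_code)
```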

Also, using services such as those offered by Twine Health – a recent Fitbit acquisition – the two companies will work to better manage chronic conditions such as diabetes and hypertension. Twine offers a connected health platform which leverages Fitbit data to offer customized health coaching.

Of course, as part of the deal Fitbit is moving to the Google Cloud Platform, which will supply the expected cloud services and engineering support.

The two say that moving to the Cloud Platform will offer Fitbit advanced security capabilities, which will help speed up the growth of its Fitbit Health Solutions business. They also expect to make inroads in population health analysis. For its part, Google notes that it will bring its AI, machine learning capabilities and predictive analytics algorithms to the table.

It might be worth a small caution here. Google makes a point of saying it is “committed” to meeting HIPAA standards, and that most Google Cloud products already do. That “most” qualifier would make me a little nervous as a provider, but I know, why worry about such niceties when big deals are afoot? Fair warning, though: when a company makes general statements like this about meeting HIPAA standards, it probably means it already employs strong security practices, likely stronger than HIPAA requires. It also means the company probably doesn’t comply with HIPAA itself, since HIPAA is about more than security and requires a contractual relationship between the provider and the business associate, along with the liability that comes with being a business associate.

Anyway, to round out all of this good stuff, Fitbit and Google said they expect to “innovate and transform” the future of wearables, pairing Fitbit’s brand, community, data and high-profile devices with Google’s extreme data management and cloud capabilities.

You know folks, it’s not that I don’t think this is interesting. I wouldn’t be writing about it if I didn’t. But I do think it’s worth pointing out how little this news announcement says, really.

Yes, I realize that when partnerships begin, they are by definition all big ideas and plans. But when giants like Google, much less Fitbit, have to fall back on words like innovate and transform (yawn!), the whole thing is still pretty speculative. Just sayin’.

Thoughts on Privacy in Health Care in the Wake of Facebook Scrutiny

Posted on April 13, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

A lot of health IT experts are taking a fresh look at the field’s (abysmal) record in protecting patient data, following the shocking Cambridge Analytica revelations that cast a new and disturbing light on privacy practices in the computer field. Both Facebook and others in the computer field who would love to emulate its financial success are trying to look at general lessons that go beyond the oddities of the Cambridge Analytica mess. (Among other things, the mess involved a loose Facebook sharing policy that was tightened up a couple years ago, and a purported “academic researcher” who apparently violated Facebook’s terms of service.)

I will devote this article to four lessons from the Facebook scandal that apply especially to health care data – or more correctly, four ways in which Cambridge Analytica reinforces principles that privacy advocates have known for years. Everybody recognizes that the problems modern data sharing practices pose for public life are hard, even intractable, and I will have to content myself with helping to define the issues, not present solutions. The lessons are:

  • There is no such thing as health data.

  • Consent is a meaningless concept.

  • The risks of disclosure go beyond individuals to affect the whole population.

  • Discrimination doesn’t have to be explicit or conscious.

The article will now lay out each concept, how the Facebook events reinforce it, and what it means for health care.

There is no such thing as health data

To be more precise, I should say that there is no hard-and-fast distinction between health data, financial data, voting data, consumer data, or any other category you choose to define. Health care providers are enjoined by HIPAA and other laws to fiercely protect information about diagnoses, medications, and other aspects of their patients’ lives. But a Facebook posting or a receipt from the supermarket can disclose that a person has a certain condition. The compute-intensive analytics that data brokers, marketers, and insurers apply with ever-growing sophistication are aimed at revealing these things. If the greatest impact on your life is that a pop-up ad for some product appears on your browser, count yourself lucky. You don’t know what else someone is doing with the information.

I feel a bit of sympathy for Facebook’s management, because few people anticipated that routine postings could identify ripe targets for fake news and inflammatory political messaging (except for the brilliant operatives who did that messaging). On the other hand, neither Facebook nor the US government acted fast enough to shut down the behavior and tell the public about it, once it was discovered.

HIPAA itself is notoriously limited. If someone can escape being classified as a health care provider or a provider’s business associate, they can collect data with abandon and do whatever they like (except in places such as the European Union, where laws hopefully require them to use the data for the purpose they cited while collecting it). App developers consciously strive to define their products in such a way that they sidestep the dreaded HIPAA coverage. (I won’t even go into the weaknesses of HIPAA and subsequent laws, which fail to take modern data analysis into account.)

Consent is a meaningless concept

Even the European Union’s new regulation (the much-publicized General Data Protection Regulation, or GDPR) allows data collection to proceed after user consent. Of course, data must be collected for many purposes, such as payment and shipping at retail web sites. And the GDPR – following a long-established principle of consumer rights – requires further consent if the site collecting the data wants to use it beyond its original purpose. But it’s hard to imagine what use data will be put to, especially a couple years in the future.

Privacy advocates have known, since the beginning of the ubiquitous “terms of service,” that few people read them before they press the Accept button. And this is a rational ignorance. Even if you read the tiresome and legalistic terms of service (I always do), you are unlikely to understand their implications. So the problem lies deeper than tedious verbiage: even the most sophisticated user cannot predict what’s going to happen to the data she consented to share.

The health care field has advanced farther than most by installing legal and regulatory barriers to sharing. We could do even better by storing all health data in a Personal Health Record (PHR) for each individual instead of at the various doctors, pharmacies, and other institutions where it can be used for dubious purposes. But all use requires consent, and consent is always on shaky grounds. There is also a risk (although I think it is exaggerated) that patients can be re-identified from de-identified data. But both data sharing and the uses of data must be more strictly regulated.

The risks of disclosure go beyond individuals to affect the whole population

The illusion that an individual can offer informed consent is matched by an even more dangerous illusion that the harm caused by a breach is limited to the individual affected, or even to his family. In fact, data collected legally and pervasively is used daily to make decisions about demographic groups, as I explained back in 1998. Democracy itself took a bullet when Russian political agents used data to influence the British EU referendum and the US presidential election.

Thus, privacy is not the concern of individuals making supposedly rational decisions about how much to protect their own data. It is a social issue, requiring a coordinated regulatory response.

Discrimination doesn’t have to be explicit or conscious

We have seen that data can be used to draw virtual red lines around entire groups of people. Data analytics, unless strictly monitored, reproduce society’s prejudices in software. This has a particular meaning in health care.

Discrimination against many demographic groups (African-Americans, immigrants, LGBTQ people) has been repeatedly documented. Very few doctors would consciously aver that they wish people in these groups harm, or even that they dismiss their concerns. Yet it happens over and over. The same unconscious or systemic discrimination will affect analytics and the application of its findings in health care.

A final dilemma

Much has been made of Facebook’s policy of collecting data about “friends of friends,” which draws a wide circle around the person giving consent and infringes on the privacy of people who never consented. Facebook did end the practice that allowed Global Science Research to collect data on an estimated 87 million people. But the dilemma behind the “friends of friends” policy is how inextricably it embodies the premise behind social media.

Lots of people like to condemn today’s web sites (not just social media, but news sites and many others–even health sites) for collecting data for marketing purposes. But as I understand it, the “friends of friends” phenomenon lies deeper. Finding connections and building weak networks out of extended relationships is the underpinning of social networking. It’s not just how networks such as Facebook can display to you the names of people they think you should connect with. It underlies everything about bringing you in contact with information about people you care about, or might care about. Take away “friends of friends” and you take away social networking, which has been the most powerful force for connecting people around mutual interests the world has ever developed.

The health care field is currently struggling with a similar demonic trade-off. We desperately hope to cut costs and tame chronic illness through data collection. The more data we scoop up and the more zealously we subject it to analysis, the more we can draw useful conclusions that create better care. But bad actors can use the same techniques to deny insurance, withhold needed care, or exploit trusting patients and sell them bogus treatments. The ethics of data analysis and data sharing in health care require an open, and open-eyed, debate before we go further.

Small Grounds for Celebration and Many Lurking Risks in HIMSS Survey

Posted on March 12, 2018 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

When trying to bypass the breathless enthusiasm of press releases and determine where health IT is really headed, we can benefit from a recent HIMSS survey, released around the time of their main annual conference. They managed to get responses from 224 managers of health care facilities – which range from hospitals and clinics to nursing homes – and 145 high-tech developers that fall into the large categories of “vendors” and “consultants.” What we learn is that vendors are preparing for major advances in health IT, but that clinicians are less ready for them.

On the positive side, both the clinicians and the vendors assign fairly high priority to data analytics and to human factors and design (page 7). In fact, data analytics have come to be much more appreciated by clinicians in the past year (page 9). This may reflect the astonishing successes of deep learning artificial intelligence reported recently in the general press, and herald a willingness to invest in these technologies to improve health care. As for human factors and design, the importance of these disciplines has been repeatedly shown in HxRefactored conferences.

Genomics ranks fairly low for both sides, which I think is reasonable given that there are still relatively few insights we can gain from genetics to change our treatments. Numerous studies have turned up disappointing results: genetic testing doesn’t work very well yet, and tends to lead only to temporary improvements. In fact, both clinicians and vendors show a big drop in interest in precision medicine and genetics (pages 9 and 10). The drop in precision medicine, in particular, may be related to the strong association the term has with Vice President Joe Biden in the previous administration, although NIH seems to still be committed to it. Everybody knows that these research efforts will sprout big payoffs someday–but probably not soon enough for the business models of most companies.

But much more of the HIMSS report is given over to disturbing perception gaps between the clinicians and vendors. For instance, clinicians hold patient safety in higher regard than vendors (page 7). I view this concern cynically. Privacy and safety have often been invoked to hold back data exchange. I cannot believe that vendors in the health care space treat patient safety or privacy carelessly. I think it more likely that clinicians are using it as a shield to hide their refusal to try valuable new technologies.

In turn, vendors are much more interested in data exchange and integration than clinicians (page 7). This may just reflect a different level of appreciation for the effects of technology on outcomes. That is, data exchange and integration may be complex and abstract concepts, so perhaps the vendors are in a better position to understand that they ultimately determine whether a patient gets the treatment her condition demands. But really, how difficult can it be to understand data exchange? It seems like the clinicians are undermining the path to better care through coordination.

I have trouble explaining the big drops in interest in care coordination and public health (pages 9 and 10), which is worrisome because these things will probably do more than anything to produce healthier populations. The problem, I think, is probably that there’s no reimbursement for taking on these big, hairy problems. HIMSS explains the drop as a shift of attention to data analytics, which should ultimately help achieve the broader goals (page 11).

HIMSS found that clinicians expect to decrease their investments in health IT over the upcoming year, or at least to keep the amount steady (page 14). I suspect this is because they realize they’ve been soaked by suppliers and vendors. Since Meaningful Use was instituted in 2009, clinicians have poured billions of dollars and countless staff time into new EHRs, reaping mostly revenue-threatening costs and physician burn-out. However, as HIMSS points out, vendors expect clinicians to increase their investments in health IT–and may be sorely disappointed, especially as they enter a robust hiring phase (page 15).

Reading the report, I come away feeling that the future of health care may be bright–but that the glow you see comes from far over the horizon.

Three Pillars of Clinical Process Improvement and Control

Posted on February 21, 2018 | Written By

The following is a guest blog post by Brita Hansen, MD, Chief Medical Officer at LogicStream Health.

In a value-based care environment, achieving quality and safety measures is a priority. Health systems must have the capabilities to measure a process following its initial implementation. The reality, however, is that traditional improvement methods are often plagued with lagging indicators that provide little (if any) insight into areas requiring corrective actions. Health systems have an opportunity to make a significant impact on patient care by focusing on three pillars of clinical process improvement and control: quality and safety, appropriate utilization and clinician engagement.

Quality and Safety

Data in a health system’s electronic health record (EHR) typically is not easily accessible. Providers struggle to aggregate the data they need in a timely manner, often with limited resources, thereby hindering efforts to measure process efficacy and consistency. To achieve sustainable quality improvements, clinical leaders must equip their teams with advanced software solutions capable of delivering highly-actionable insights in near-real-time, thereby allowing them to gain a true understanding of clinical processes and how to avoid clinical errors and care variations.

Clinicians need instant insights into what clinical content in their EHR is being used; by whom; and how it affects patient care. This data empowers providers with the ability to continuously analyze and address care gaps and inefficient workflows.

For example, identifying inappropriate uses of Foley catheters that lead to catheter-associated urinary tract infections (CAUTI) allows clinical leaders to make targeted improvements to the care process or to counsel individual clinician outliers on appropriate best practices. This will, in turn, reduce CAUTI rates. To most effectively improve clinical processes, clinicians need software tools that enable them to examine those processes in their entirety, including process steps within the EHR, patient data and the actions of individual clinicians or groups as they interact with the care process every day.

Only with instant insight into how the care process is being followed can clinicians see in real-time what is happening and where to intervene, make the necessary changes in the EHR workflow, then measure and monitor the effects over time to improve care delivery in a sustainable way.

Appropriate Utilization

Verifying appropriate utilization of best practices also plays a critical role in optimizing clinical processes. Yet healthcare organizations often lack the ability to identify and correct the use of obsolete tests, procedures and medications. When armed with dynamic tools that quickly and easily allow any individual to understand the exact location of ordering opportunities for these components, an organization can evaluate its departments, clinicians, and patient populations for ineffective ordering patterns and areas that require greater compliance. By assessing areas in need of intervention, organizations can notify clinicians of the most up-to-date best practices that, when integrated into clinical workflows, will improve care and yield significant cost savings. Through targeted efforts to ensure proper usage of high-cost and high-volume medications, lab tests and other orderables, for example, health systems can achieve significant savings while improving the quality of care delivery.

The benefits of such an approach are reflected in one health system’s implementation of clinical process improvement and control software, which allowed it to more effectively manage the content in its EHR, including oversight of order sets. Specifically, the organization focused on reviewing the rate of tests used to diagnose acute myocardial infarctions (heart attacks). It discovered that physicians were regularly ordering an outdated creatine kinase-MB (CK-MB) lab test along with a newer, more efficient test for no other reason than that it was pre-checked on numerous order sets.

Although the test itself was inexpensive, the high order rate led to massive waste and increased the cost of care. Leveraging the software enabled the organization to quickly identify the problem, then significantly reduce costs and save resources by eliminating an unnecessary test that otherwise would have remained hidden within the EHR.
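
As a rough illustration of that kind of analysis (the column names and data are invented, and this is not LogicStream’s actual software), a quick pass over order data can show how often an outdated test rides along with its replacement and which order sets are driving it:

```python
# Hypothetical sketch: quantify how often an outdated test (e.g., CK-MB)
# is ordered alongside its replacement, and which order sets drive those
# orders. Column names and rows are invented for illustration.
import pandas as pd

orders = pd.DataFrame([
    {"encounter": 1, "test": "CK-MB",    "order_set": "Chest Pain Admission"},
    {"encounter": 1, "test": "Troponin", "order_set": "Chest Pain Admission"},
    {"encounter": 2, "test": "Troponin", "order_set": "ED Cardiac Workup"},
    {"encounter": 3, "test": "CK-MB",    "order_set": "Chest Pain Admission"},
    {"encounter": 3, "test": "Troponin", "order_set": "Chest Pain Admission"},
])

# Share of encounters where CK-MB was ordered alongside troponin.
tests_by_encounter = orders.groupby("encounter")["test"].apply(set)
co_ordered = tests_by_encounter.apply(
    lambda t: "CK-MB" in t and "Troponin" in t).mean()
print(f"CK-MB co-ordered with troponin in {co_ordered:.0%} of encounters")

# Which order sets generate the outdated orders.
print(orders[orders["test"] == "CK-MB"]["order_set"].value_counts())
```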

Clinician Engagement

Enhancing clinician engagement is key to addressing dissatisfaction and burnout, often traced to alert fatigue and a lack of order set optimization within an EHR. The typical health system averages 24 million alert firings per year. Confronted with a high volume of unnecessary warnings, clinicians ignore alerts 49 percent to 96 percent of the time, resulting in poor compliance with care protocols. EHRs often contain an overwhelming number of order sets that can lead to confusion about best practices for patient care and a frustrating amount of choice to navigate. To increase engagement, alerts must be designed to send the right information, to the right person, in the right format, through the right channel, at the right time in the workflow; and order sets should be streamlined and make it easy for clinicians to follow the up-to-date best clinical practices.

For example, one hospital utilized EHR-generated alerts targeting potential cases of sepsis. These alerts, however, were rarely acted upon: they were not specific enough and fired inappropriately at such exhaustive rates that clinicians grew to simply ignore them, creating a clear case of alert fatigue. By fine-tuning the alerts and adjusting the workflow to ensure they were sent to the right clinician at the optimal time, the hospital was able to achieve and maintain nearly full compliance with its initiative. As early detection and treatment of sepsis increased, the hospital also reduced length of stay in its intensive care unit. Data-driven targeted interventions were developed to address outliers whose actions were driving unnecessary variation in the process.
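
As a simplified illustration of the kind of tuning described above (the thresholds, field names and routing hooks are invented for the example, not the hospital’s actual rules), a sepsis alert might be made more specific by requiring several criteria together and routing it to the responsible clinician:

```python
# Simplified, hypothetical sketch of a tuned sepsis alert: require several
# criteria together (instead of firing on any single abnormal value) and
# route the alert to the responsible clinician's worklist. Thresholds and
# field names are illustrative only.
def sepsis_alert(vitals: dict, labs: dict) -> bool:
    """Fire only when multiple SIRS-style criteria are met at once."""
    criteria = [
        vitals.get("temp_c", 37.0) > 38.3 or vitals.get("temp_c", 37.0) < 36.0,
        vitals.get("heart_rate", 0) > 90,
        vitals.get("resp_rate", 0) > 20,
        labs.get("wbc", 8.0) > 12.0 or labs.get("wbc", 8.0) < 4.0,
    ]
    return sum(criteria) >= 2 and labs.get("lactate", 0.0) >= 2.0

def route(patient_id: str, fired: bool, worklist) -> None:
    # Send the alert to the patient's assigned clinician rather than as a
    # broadcast popup, at the point in the workflow where it can be acted on.
    if fired:
        worklist.add_task(patient_id, "Possible sepsis: review and assess")

# Example: multiple abnormal values plus elevated lactate -> alert fires.
print(sepsis_alert({"temp_c": 38.9, "heart_rate": 112, "resp_rate": 24},
                   {"wbc": 15.2, "lactate": 2.4}))  # True
```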

Ultimately, when the three pillars—quality and safety, appropriate utilization and clinician engagement—are used as the building blocks for standardizing and controlling vital clinical processes, multiple objectives can be realized. Empowered with technology that supports these factors, healthcare organizations can truly achieve sustainable, proactive clinical process improvement and control.

Dr. Brita Hansen is a hospitalist at Hennepin County Medical Center in Minneapolis and Assistant Professor of Medicine at the University of Minnesota School of Medicine. Dr. Hansen also serves as Chief Medical Officer of LogicStream Health.