
A Tale of 2 T’s: When Analytics and Artificial Intelligence Go Bad

Posted on July 13, 2016 | Written By

Prashant Natarajan Iyer (AKA "PN") is an analytics and data science professional based in Silicon Valley, CA. He is currently Director of Product Management for Healthcare products. His experience includes progressive & leadership roles in business strategy, product management, and customer happiness at eCredit.com, Siemens, McKesson, Healthways & Oracle. He is currently co-authoring HIMSS' next book on big data and machine learning for healthcare executives, along with Herb Smaltz PhD and John Frenzel MD. He is a huge fan of SEC college football, Australian Cattle Dogs, and the hysterically-dubbed original Iron Chef TV series. He can be found on Twitter @natarpr and on LinkedIn. All opinions are purely mine and do not represent those of my employer or anyone else!!

Editor’s Note: We’re excited to welcome Prashant to the Healthcare Scene family. He brings tremendous insights into the ever-evolving field of healthcare analytics. We feel lucky to have him sharing his deep experience and knowledge with us. We hope you’ll enjoy his first contribution below.

Analytics & Artificial Intelligence (AI) are generating buzz and making inroads into healthcare informatics. Today’s healthcare organization is dealing with increasing digitization – data variety, velocity, and volume are all growing in scale and complexity, and users want more data and information via analytics. In addition to the new frontiers opening up in structured and unstructured data analytics, our industry and its people (patients included) are recognizing opportunities for predictive/prescriptive analytics, artificial intelligence, and machine learning in healthcare – within and outside a facility’s four walls.

Trends that influence these new opportunities include:

  1. Increasing use of smart phones and wellness trackers as observational data sources, for medical adherence, and as behavior modification aids
  2. Expanding Internet of Healthcare Things (IoHT) that includes bedside monitors, home monitors, implants, etc., creating data in real time – including noise (i.e., data that are not relevant to the expected use)
  3. Social network participation
  4. Organizational readiness
  5. Technology maturity

The potential for big data in healthcare – especially given the trends discussed above – is as bright as in any other industry. The benefits that big data analytics, AI, and machine learning can provide for healthier patients, happier providers, and cost-effective care are real. The future of precision medicine, population health management, clinical research, and financial performance will include an increased role for machine-analyzed insights, discoveries, and all-encompassing analytics.

As we start this journey to new horizons, it may be useful to examine maps, trails, and artifacts left behind by pioneers. To this end, we will examine 2 cautionary tales in predictive analytics and machine learning, look at their influence on their industries and public discourse, and finally examine how we can learn from and avoid similar pitfalls in healthcare informatics.

Big data predictive analytics and machine learning have had their origins, and arguably their greatest impact so far, in retail and e-commerce, so that’s where we’ll begin our tale. Fill up that mug of coffee or a pint of your favorite adult beverage and brace yourself for the “Tales of Two T’s” – unexpected, real-life adventures of what happens when analytics (Target) and artificial intelligence (Tay) produce accurate – but totally unexpected – results.

Our first tale starts in 2012, when Target finds itself a popular story in The New York Times, Forbes, and many global publications as an example of the unintended consequences of predictive analytics used in personalized advertising. The story begins with an angry father in a Minneapolis, MN, Target confronting a perplexed retail store manager. The father is incensed about the volume of pregnancy and maternity coupons, offers, and mailers being addressed to his teenage daughter. In due course, it becomes apparent that the parents found out about their teen’s pregnancy before she had a chance to tell them – and that the daughter herself wasn’t aware her due date had been estimated to within days, resulting in targeted advertising “timed for specific stages of her pregnancy.”

The root cause of the daughter’s loss of privacy, the parents’ confusion, and the subsequent public debate on the privacy and appropriateness of predictive analytics results was… a pregnancy prediction model. Here’s how the model works. When a “guest” shops at Target, her product purchases are tracked and analyzed closely. These are correlated with life events – graduation, birth, wedding, etc. – in order to change a prospective customer’s shopping habits or to make that individual a more loyal customer. Pregnancy and childbirth are two of the most significant life events that can result in the shopping-habit changes retailers desire.

For example, a shopper’s purchases across roughly 25 products, analyzed along with demographics such as gender and age, allowed the retailer’s guest marketing analytics team to assign a “pregnancy prediction” score to each [female] shopper and estimate “her due date to within a small window.” In this specific case, the predictive analytics were right, even perfect. The models were accurate, the coupons and ads matched the exact week of pregnancy, and Target posted a more than 50% increase in maternity and baby product sales after the model was deployed. However, in addition to one unhappy family, Target also had to deal with significant public discussion of the “big brother” effect, the individual right to privacy and the “desire to be forgotten,” disquiet among some consumers that they were being spied on – down to deeply personal events – and a potential public relations fiasco.
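To make the mechanics concrete, here is a minimal, purely illustrative sketch of how such a purchase-based score can be computed. This is not Target’s actual model: the product flags, the demographic feature, and the use of baby-registry sign-ups as training labels are all assumptions made for the example.

```python
# Illustrative sketch only -- not Target's actual model. Assume a small,
# hypothetical training set: each shopper is a row of binary flags for a few
# "signal" products plus a demographic flag, labeled by whether that shopper
# later signed up for a baby registry (a stand-in for a confirmed pregnancy).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [unscented_lotion, prenatal_vitamins, oversized_cotton_balls, age_25_34]
X_train = np.array([
    [1, 1, 1, 1],   # shoppers who later created a baby registry
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 1],   # shoppers who did not
    [1, 0, 0, 0],
    [0, 0, 1, 0],
])
y_train = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score a new shopper's basket: the probability plays the role of the
# "pregnancy prediction" score described above.
new_shopper = np.array([[1, 1, 0, 1]])
print(f"pregnancy score: {model.predict_proba(new_shopper)[0, 1]:.2f}")
```

The point of the sketch is simply that a handful of innocuous-looking purchases, combined with basic demographics, can yield a confident score; estimating a due date would add purchase timing to the same kind of model.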

Our second tale is of more recent vintage.

As Heather Wilhelm recounts:

As 2015 drew to a close, various [Microsoft] company representatives heralded a “new Golden Age of technological advancement.” 2016, we were told, would bring us closer to a benevolent artificial intelligence—an artificial intelligence that would be warm, humane, helpful, and, as one particularly optimistic researcher named […] put it, “will help us laugh and be more productive.” Well, she got the “laugh” part right.

Tay was an artificial intelligence bot released by Microsoft via Twitter on March 23, 2016 under the name TayTweets. Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter. “She was targeted at American 18 to 24-year olds—primary social media users, according to Microsoft—and designed to engage and entertain people where they connect with each other online through casual and playful conversation.” And right after her celebrated arrival on Twitter, Tay gained more than 50,000 followers and started producing the first of what would become nearly 100,000 tweets.

The tech blogosphere went gaga over what this would mean for those of us with human brains – as opposed to the AI kind. Questions ranged from the important – “Would Tay be able to beat Watson at Jeopardy?” – to the mundane – “Is Tay an example of the kind of bots that Microsoft will enable others to build using its AI/machine learning technologies?” The AI models that went into Tay were said to be advanced and were expected to account for a range of human emotions and biases. Tay was referred to by some as the future of computing.

By the end of Day 1, this latest example of the “personalized AI future” came unglued. Gone was the polite 19-year-old girl who had been introduced to us just the previous day, replaced by a racist, misogynistic, anti-Semitic troll who resembled an amalgamated caricature of the darkest corners of the Internet. Examples of Tay’s tweets that day included “Bush did 9/11,” “Hitler would have done a better job than the #%&!## we’ve got now,” “I hate feminists,” and x-rated language that is too salacious for public consumption – even in the current zeitgeist.

The resulting AI public relations fiasco will be studied by academic researchers, provide rich source material for bloggers, and serve as a punch line in late night shows for generations to follow.

As the day progressed, Microsoft engineers were deleting tweets manually, trying to keep up with the sheer volume of high-velocity, hateful tweets Tay was generating. She was taken down by Microsoft barely 16 hours after she was launched with great promise and fanfare. As was done with another AI bot gone berserk (IBM’s Watson and Urban Dictionary), Tay’s engineers first tried counseling and behavior modification. When this intervention failed, Tay underwent an emergency brain transplant later that night: her AI “brain” was swapped out for the next version – only the new version turned out to be completely anti-social, and the bot’s behavior got worse. A “new and improved” version was released a week later, but she turned out to be… very different. Tay 2.0 was repetitive, sending the same tweet several times each second, and her new AI brain seemed to demonstrate a preference for new questionable topics.

A few hours after this second incident, Tay 2.0 was “taken offline” for good.

There are no plans to re-release Tay at this time. She has been given a longer-term time out.

If you believe Tay’s AI behaviors were a result of nurture – as opposed to nature – there’s a petition at change.org called “Freedom for Tay.”

Lessons for healthcare informatics

Analytics and AI can be very powerful in our goal to transform our healthcare system into a more effective, responsive, and affordable one. When done right and for the appropriate use cases, technologies like predictive analytics, machine learning, and artificial intelligence can make an appreciable difference to patient care, wellness, and satisfaction. At the same time, we can learn from the two significantly different, yet related, tales above and avoid finding ourselves in similar situations as the 2 T’s here – Target and Tay.

  1. “If we build it, they will come” is true only for movie plots. The value of new technology or new ways of doing things must be examined in relation to its impact on the quality, cost, and ethics of care.
  2. Knowing your audience, users, and participants remains a prerequisite for success.
  3. Learn from others’ experience – be aware of the limits of what technology can accomplish or must not do.
  4. Be prepared for unexpected results or unintended consequences. When unexpected results are found, be prepared to investigate thoroughly before jumping to conclusions – no AI algorithm or BI architecture can yet auto-correct for human errors.
  5. Be ready to correct course as needed and in response to real-time user feedback.
  6. Account for human biases, the effect of lore/legend, studying the wrong variables, or misinterpreted results.

Analytics and machine learning have tremendous power to impact every industry, including healthcare. However, while unleashing their power, we have to be careful that we don’t do more harm than good.

AI (Artificial Intelligence) in EMR Software

Posted on December 2, 2011 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Today I had an interesting conversation with MEDENT. It’s an EHR company that’s in only 8 states. I could actually write a whole post on just their approach to EMR software and the EMR market in general. They take a pretty unique approach to the market. They’ve exercised some restraint in their approach that I haven’t seen from many other EHR vendors. I’ll be interested to see how that plays out for them.

Their market approach aside, I was really intrigued by their approach to dealing with ICD-10. They actually described their approach to ICD-10 as similar to how Google handles search. There’s all this information out there (or, you could say, all these new codes), so they wanted to build a simple interface that would easily and naturally filter the information (or codes, in this case). It’s a unique way of looking at the challenge of so many new ICD-10 codes.

However, that was just the base use case; it didn’t yet include what I’d consider applying AI (Artificial Intelligence) to really improve a user interface. The simple example they gave had to do with collecting data from their users about which terms they typed and which codes they actually selected. This real-time feedback is then added to the algorithm to improve how quickly you can get to the code you’re actually trying to find.
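As a rough illustration of that feedback loop, here is a small hypothetical sketch: a plain substring search over ICD-10 descriptions, plus a log of which code users actually selected for each query, which then re-ranks future results. The codes, queries, and counts are invented for the example; this is not MEDENT’s implementation.

```python
# Hypothetical sketch of search-with-selection-feedback -- not MEDENT's code.
from collections import defaultdict

ICD10 = {
    "E11.9": "Type 2 diabetes mellitus without complications",
    "E10.9": "Type 1 diabetes mellitus without complications",
    "O24.419": "Gestational diabetes mellitus in pregnancy, unspecified control",
}

# selection_counts[query][code] = how often users who typed `query` chose `code`
selection_counts = defaultdict(lambda: defaultdict(int))

def record_selection(query, code):
    """Feed a user's actual choice back into the ranking."""
    selection_counts[query.lower()][code] += 1

def search(query):
    """Return matching codes, ordered by how often they were picked for this query."""
    q = query.lower()
    matches = [code for code, desc in ICD10.items() if q in desc.lower()]
    return sorted(matches, key=lambda code: selection_counts[q][code], reverse=True)

record_selection("diabetes", "E11.9")   # most users typing "diabetes" pick type 2
record_selection("diabetes", "E11.9")
print(search("diabetes"))               # 'E11.9' now ranks first
```

With enough selections, the codes clinicians actually use bubble to the top for the queries they actually type, which is the Google-like behavior described above.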

One interesting thing about incorporating this feedback from actual user experiences is that you could even create a customized, personal experience in the EMR. In fact, that’s basically what Google has done with search personalization (i.e., when you’re logged in, it knows your search history and details, so it can personalize the search results for you). Although when you start personalizing, you still have to make sure that the out-of-the-box experience is good. Plus, in healthcare you could do some great personalization around specialties as well, which could be really beneficial.
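Continuing the same hypothetical sketch, personalization by specialty could be layered on by tracking selections per specialty as well as globally, with the global counts serving as the out-of-the-box default. The 3x specialty weighting below is an arbitrary assumption for illustration, not anything the vendors above have described.

```python
# Hypothetical extension: per-specialty personalization on top of global feedback.
from collections import defaultdict

global_counts = defaultdict(int)      # (query, code) -> selections across all users
specialty_counts = defaultdict(int)   # (specialty, query, code) -> selections

def record_selection(query, code, specialty):
    global_counts[(query, code)] += 1
    specialty_counts[(specialty, query, code)] += 1

def personalized_rank(query, candidate_codes, specialty):
    """Order candidate codes, weighting the clinician's own specialty 3x."""
    def score(code):
        return 3 * specialty_counts[(specialty, query, code)] + global_counts[(query, code)]
    return sorted(candidate_codes, key=score, reverse=True)

record_selection("diabetes", "O24.419", "obstetrics")
record_selection("diabetes", "E11.9", "endocrinology")
print(personalized_rank("diabetes", ["E10.9", "E11.9", "O24.419"], "obstetrics"))
# -> ['O24.419', 'E11.9', 'E10.9'] for an obstetrician
```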

I’d heard something similar, applied to coding, from NextGen at their user group meeting. The idea of tracking user behavior and incorporating those behaviors into the intelligence of the EMR is a fascinating subject to me. I just wonder how many other places in an EMR these same principles can apply.

I see these types of movements as part of the larger move to “Smart EMR Software.”