A Tale of 2 T’s: When Analytics and Artificial Intelligence Go Bad

The following is a guest blog post by Prashant Natarajan.

Analytics and Artificial Intelligence (AI) are generating buzz and making inroads into healthcare informatics. Today’s healthcare organization is dealing with increasing digitization – data varieties, velocities, and volumes are growing in complexity – and users want more data and information via analytics. In addition to the new frontiers opening up in structured and unstructured data analytics, our industry and its people (patients included) are recognizing opportunities for predictive/prescriptive analytics, artificial intelligence, and machine learning in healthcare – within and outside a facility’s four walls.

Trends that influence these new opportunities include:

  1. Increasing use of smartphones and wellness trackers as observational data sources, as medication-adherence tools, and as behavior-modification aids
  2. An expanding Internet of Healthcare Things (IoHT) – bedside monitors, home monitors, implants, etc. – creating data in real time, including noise (i.e., data that are not relevant to expected usage)
  3. Social network participation
  4. Organizational readiness
  5. Technology maturity

The potential for big data in healthcare – especially given the trends discussed above – is as bright as in any other industry. The benefits that big data analytics, AI, and machine learning can provide for healthier patients, happier providers, and cost-effective care are real. The future of precision medicine, population health management, clinical research, and financial performance will include an increased role for machine-analyzed insights, discoveries, and all-encompassing analytics.

As we start this journey to new horizons, it may be useful to examine maps, trails, and artifacts left behind by pioneers. To this end, we will examine 2 cautionary tales in predictive analytics and machine learning, look at their influence on their industries and public discourse, and finally examine how we can learn from and avoid similar pitfalls in healthcare informatics.

Big data predictive analytics and machine learning have had their origins, and arguably their greatest impact so far, in retail and e-commerce, so that’s where we’ll begin our tale. Fill up that mug of coffee or a pint of your favorite adult beverage and brace yourself for “Tales of Two T’s” – unexpected, real-life adventures of what happens when analytics (Target) and artificial intelligence (Tay) provide accurate – but totally unexpected – results.

Our first tale starts in 2012, when Target finds itself the subject of stories in The New York Times, Forbes, and many global publications as an example of the unintended consequences of predictive analytics used in personalized advertising. The story begins with an angry father in a Minneapolis, MN, Target confronting a perplexed retail store manager. The father is incensed about the volume of pregnancy and maternity coupons, offers, and mailers being addressed to his teenage daughter. In due course, it becomes apparent that the parents found out about their teen’s pregnancy before she had a chance to tell them – and the daughter herself wasn’t aware that her due date had been estimated to within days, resulting in targeted advertising that was “timed for specific stages of her pregnancy.”

The root cause of the loss of the daughter’s privacy, the parents’ confusion, and the subsequent public debate on privacy and the appropriateness of the results of predictive analytics was… a pregnancy prediction model. Here’s how this model works. When a “guest” shops at Target, her product purchases are tracked and analyzed closely. These are correlated with life events – graduation, birth, wedding, etc. – in order to change a prospective customer’s shopping habits or to make that individual a more loyal customer. Pregnancy and childbirth are two of the most significant life events that can result in the shopping-habit modification retailers desire.

For example, a shopper’s 25 product purchases, when analyzed along with demographics such as gender and age, allowed the retailer’s guest marketing analytics team to assign a “pregnancy prediction” score to each [female] shopper and estimate “her due date to within a small window.” In this specific case, the predictive analytics were right – even perfect. The models were accurate, the coupons and ads were appropriate for the exact week of pregnancy, and Target posted a more-than-50% increase in maternity and baby product sales after this predictive analytics capability was deployed. However, in addition to one unhappy family, Target also had to deal with significant public discussion of the “big brother” effect, the individual right to privacy and the “desire to be forgotten,” disquiet among some consumers that they were being spied on – down to deeply personal events – and a potential public relations fiasco.
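To make the mechanics concrete, here is a minimal sketch of purchase-signal scoring. The product names, weights, and logistic scoring are all invented for illustration – the article tells us only that roughly 25 purchase signals plus demographics fed a pregnancy prediction score, not how Target actually combined them.

```python
import math

# Hypothetical signal products and weights -- NOT Target's actual model.
PRODUCT_WEIGHTS = {
    "unscented_lotion": 0.9,
    "prenatal_vitamins": 2.5,
    "large_cotton_balls": 0.6,
    "zinc_supplement": 1.1,
    "scent_free_soap": 0.8,
}

def pregnancy_score(purchases):
    """Sum per-product weights, then squash to a 0-1 score with a logistic."""
    total = sum(PRODUCT_WEIGHTS.get(p, 0.0) for p in purchases)
    bias = -2.0  # baseline offset: a random basket should score low
    return 1.0 / (1.0 + math.exp(-(total + bias)))

# A basket heavy with signal products scores high...
print(pregnancy_score(["prenatal_vitamins", "unscented_lotion", "zinc_supplement"]))
# ...while an unrelated basket scores low.
print(pregnancy_score(["dog_food", "batteries"]))
```

The uncomfortable lesson of the Target story lives in that second line: nothing in a score like this knows, or cares, who will see the coupons it triggers.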

Our second tale is of more recent vintage.

As Heather Wilhelm recounts:

As 2015 drew to a close, various [Microsoft] company representatives heralded a “new Golden Age of technological advancement.” 2016, we were told, would bring us closer to a benevolent artificial intelligence—an artificial intelligence that would be warm, humane, helpful, and, as one particularly optimistic researcher named […] put it, “will help us laugh and be more productive.” Well, she got the “laugh” part right.

Tay was an artificial intelligence bot released by Microsoft via Twitter on March 23, 2016 under the name TayTweets. Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter. “She was targeted at American 18 to 24-year olds—primary social media users, according to Microsoft—and designed to engage and entertain people where they connect with each other online through casual and playful conversation.” Right after her celebrated arrival on Twitter, Tay gained more than 50,000 followers and began producing what would eventually approach 100,000 tweets.
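What "learn from interacting with human users" can mean in practice is worth pausing on. The toy bot below is purely illustrative – it is not Microsoft's design – but it shows the core failure mode: a bot that absorbs user input verbatim is trivially poisoned by coordinated trolling, and even a crude blocklist changes the outcome.

```python
import random

class EchoLearner:
    """Toy chatbot that adds user messages to its reply vocabulary."""

    def __init__(self, blocklist=()):
        self.blocklist = set(blocklist)
        self.memory = ["hello!"]  # seed vocabulary

    def learn(self, message):
        # Unfiltered learning: anything users say can become a future reply,
        # unless a word in the message appears on the blocklist.
        if not (self.blocklist & set(message.lower().split())):
            self.memory.append(message)

    def reply(self):
        return random.choice(self.memory)

naive = EchoLearner()
guarded = EchoLearner(blocklist={"badword"})

for msg in ["nice day!", "badword badword"]:
    naive.learn(msg)
    guarded.learn(msg)

print("badword badword" in naive.memory)    # True: the naive bot is poisoned
print("badword badword" in guarded.memory)  # False: the filter held
```

A real deployment needs far more than a blocklist, of course – the point is only that "learning from the crowd" without a safety layer hands the crowd the keyboard.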

The tech blogosphere went gaga over what this would mean for those of us with human brains – as opposed to the AI kind. Questions ranged from the important – “Would Tay be able to beat Watson at Jeopardy?” – to the mundane – “Is Tay an example of the kind of bots that Microsoft will enable others to build using its AI/machine learning technologies?” The AI models that went into Tay were stated to be advanced and were expected to account for a range of human emotions and biases. Tay was referred to by some as the future of computing.

By the end of Day 1, this latest example of the “personalized AI future” came unglued. Gone was the polite 19-year-old girl introduced to us just the previous day, replaced by a racist, misogynistic, anti-Semitic troll who resembled an amalgamated caricature of the darkest corners of the Internet. Examples of Tay’s tweets that day included “Bush did 9/11,” “Hitler would have done a better job than the #%&!## we’ve got now,” “I hate feminists,” and X-rated language too salacious for public consumption – even in the current zeitgeist.

The resulting AI public relations fiasco will be studied by academic researchers, provide rich source material for bloggers, and serve as a punch line on late-night shows for generations to come.

As the day progressed, Microsoft engineers were deleting tweets manually, trying to keep up with the sheer volume of high-velocity, hateful tweets Tay was generating. Barely 16 hours after launching with great promise and fanfare, she was taken down by Microsoft. As was done with another AI bot gone berserk (IBM’s Watson after it ingested Urban Dictionary), Tay’s engineers first tried counseling and behavior modification. When this intervention failed, Tay underwent an emergency brain transplant later that night: her AI “brain” was swapped out for the next version – only for the bot’s behavior to turn worse. A “new and improved” version released a week later turned out to be… very different. Tay 2.0 was repetitive, sending the same tweet several times each second, and her new AI brain seemed to demonstrate a preference for new questionable topics.

A few hours after this second incident, Tay 2.0 was “taken offline” for good.

There are no plans to re-release Tay at this time. She has been given a longer-term time out.

If you believe Tay’s AI behaviors were a result of nurture – as opposed to nature – there’s a petition at change.org called “Freedom for Tay.”

Lessons for healthcare informatics

Analytics and AI can be very powerful in our goal to transform our healthcare system into a more effective, responsive, and affordable one. When done right and for the appropriate use cases, technologies like predictive analytics, machine learning, and artificial intelligence can make an appreciable difference to patient care, wellness, and satisfaction. At the same time, we can learn from the two significantly different, yet related, tales above and avoid finding ourselves in situations like those of the 2 T’s here – Target and Tay.

  1. “If we build it, they will come” is true only for movie plots. The value of a new technology or a new way of doing things must be examined in relation to its impact on the quality, cost, and ethics of care.
  2. Knowing your audience, users, and participants remains a prerequisite for success.
  3. Learn from others’ experience – be aware of the limits of what technology can accomplish or must not do.
  4. Be prepared for unexpected results or unintended consequences. When unexpected results are found, be prepared to investigate thoroughly before jumping to conclusions – no AI algorithm or BI architecture can yet auto-correct for human errors.
  5. Be ready to correct course as needed and in response to real-time user feedback.
  6. Account for human biases, the effect of lore and legend, studying the wrong variables, and misinterpreted results.

Analytics and machine learning have tremendous power to impact every industry, including healthcare. However, while unleashing their power, we have to be careful that we don’t do more harm than good.

About Prashant Natarajan
Prashant Natarajan Iyer (AKA “PN”) is an analytics and data science professional based in Silicon Valley, CA. He is currently Director of Product Management for Healthcare products. His experience includes progressive and leadership roles in business strategy, product management, and customer happiness at eCredit.com, Siemens, McKesson, Healthways, and Oracle. He is currently coauthoring HIMSS’ next book on big data and machine learning for healthcare executives, along with Herb Smaltz PhD and John Frenzel MD. He is a huge fan of SEC college football, Australian Cattle Dogs, and the hysterically-dubbed original Iron Chef TV series. He can be found on Twitter @natarpr and on LinkedIn. All opinions are purely mine and do not represent those of my employer or anyone else!!

3 Comments

  • PN, the piece was not only informative, but also entertaining. I was LOL.

    Keep up the good work.

  • Very good blog! Especially the line below was spot-on with regard to the narration of the 2 T’s:
    Learn from others’ experience – be aware of the limits of what technology can accomplish or must not do.

  • Mila, thank you for your kind feedback.

    We wanted to convey, via humor, that analytics is always more than just numbers. On our blog calendar is a future post on “Addressing the Thanksgiving Turkey bias in predictive analytics”
