
How Secure Are Wearables?

Posted on October 1, 2014 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

JaneenB asks a really fantastic question in this tweet. Making sure that wearables are secure is going to be a really hot topic. Yesterday, I was talking with Mac McMillan from Cynergistek and he suggested that the FDA was ready to make medical device security a priority. I’ll be interested to see what the FDA does to try to regulate security in medical devices, but you can see why this is an important thing. Mac also commented that while it’s incredibly damaging for someone to hack a pacemaker like the one Vice President Cheney had (has?), the bigger threat is the 300 pumps installed in a hospital. If one of them can be hacked, they all can be hacked, and the process for updating them is not simple.

Of course, Mac was talking about medical device security from more of an enterprise perspective. Now, let’s think about this across millions of wearable devices that are used by consumers. Plus, many of these consumer wearable devices don’t require FDA clearance and so the FDA won’t be able to impose more security restrictions on them.

I’m not really sure of the answer to this problem of wearable security, although I think two steps in the right direction are possible. First, health wearable companies could build a culture of security into their company and their product. This will add a little expense on the front end, but it will more than pay off on the back end when they avoid security issues that could leave the company in financial ruin. Second, we could use some organization to take on the effort of reporting on the security (or lack thereof) of these devices. I’m not sure if this is a consumer-reports-type organization or a media company, but I think the idea of someone holding organizations accountable is important.

We’re definitely heading towards a world of many connected devices. I don’t think we have a clear picture of what this means from a security perspective.

Has the Google Glass Hype Passed?

Posted on September 23, 2014 | Written By

John Lynn

It seems to me that the hype over Google Glass is done. Enough people started using it that many couldn’t see much value, and in fact, some are wondering if Google will continue to invest in it. They’ve gone radio silent on Google Glass from what I’ve seen. We’ll see if they’re planning to abandon the project or if they’re just reloading.

While the future of Google Glass seems uncertain to me, I think the idea of always-on, connected computing is still alive and well. Whether it’s eyewear, a watch, or some other wearable doesn’t matter to me. Always-on, connected computing is a powerful concept.

I’m also interested in the telemedicine and second-screen approaches that have gotten their start using Google Glass in healthcare. Both of these concepts will be an important part of the fabric of healthcare going forward.

I still remember the wow factor that occurred when I first used Google Glass. It still amazes me today. I just wish it were a little more functional and didn’t hurt my eyes when I used it for long periods.

What do you think of Google Glass and the category of always on computing?  Do you see something I’m missing?

Is The Future of Smart Clothing Modular or Integrated?

Posted on September 4, 2014 | Written By

Kyle is Founder and CEO of Pristine, a company in Austin, TX that develops telehealth communication tools optimized for Google Glass in healthcare environments. Prior to founding Pristine, Kyle spent years developing, selling, and implementing electronic medical records (EMRs) into hospitals. He also writes for EMR and HIPAA, TechZulu, and Svbtle about the intersections of healthcare, technology, and business. All of his writing is reproduced at kylesamani.com

OMSignal recently raised $10M to build sensors into smart clothes. Sensoria recently raised $5M in pursuit of the same mission, albeit using different tactics. Meanwhile, Apple hired the former CEO of Burberry, Angela Ahrendts, to lead its retail efforts.

And Google is pushing Android Wear in a major way, with significant adoption and uptake by OEMs.

Two distinct approaches are evolving in the smart clothing space. OMSignal, Sensoria, and Apple are taking a full-stack, vertical approach. OMSignal and Sensoria are building sensors into clothing and selling their own clothes directly to consumers. Although Apple hasn’t announced anything to compete with OMSignal or Sensoria, it’s clear they’re heading into the smart clothing space in traditional Apple fashion with the launch of Health, the impending launch of the iWatch, and the hiring of Angela Ahrendts.

Google, on the other hand, is licensing Android Wear to OEM vendors in traditional Google fashion: by providing the operating system and relevant Google services to OEMs, who can customize, configure, and compete on retail and marketing. Although Google has yet to announce partnerships with more traditional clothing vendors, it’s inevitable that they’ll license Android Wear to traditional fashion brands that want to produce smart, sensor-laden clothing.

Apple’s vertically integrated model is powerful because it allows Apple to pioneer new markets that require novel implementations of intertwined software and hardware. Pioneering a new form factor is especially difficult when dealing with separate hardware and software vendors and all of the associated challenges: disparate P&Ls, different visions, and unaligned managerial mandates. However, once the new form factor is understood, modular hardware and software companies can quickly optimize each component to drive down costs and create new choices for consumers. This approach has successfully played out in the PC, smartphone, and tablet form factors.

Apple’s model is not well suited to being the market leader in terms of raw volume. Indeed, Apple optimizes for the high end, not the masses, and this strategy has served them well. But it will be interesting to see how they, along with other vertically integrated smart-clothing vendors, approach the clothing market. Fashion is already an established industry predicated on variety, choice, and personalization; these traits are the antithesis of the Apple model. There’s no way that 20% or even 10% of the population will wear t-shirts, polos, tank tops, dresses, business clothes, etc. (which I’ll collectively call the “t-shirt market”) made by a single company. No one company can single-handedly dominate the t-shirt market; people simply desire too many choices for that to happen.

OMSignal and Sensoria don’t need to worry about this problem as much as Apple, since they’re targeting niche use cases in fitness and health. However, as they scale and set their sights on the mass consumer market, they will need to figure out a strategy to drive massive personalization. Apple, given its scale and brand, will need to address the personalization problem in the t-shirt market before they enter it.

The t-shirt market is going to be exciting to watch over the coming decades. There are enormous opportunities to be had. Let the best companies win!

Feel free to drop a comment with how you think the market will play out. Will the startups open up their sensors to 3rd party clothing companies? Will Apple? How will Google counter?

Where is Voice Recognition in EHR Headed?

Posted on August 22, 2014 | Written By

John Lynn

I’ve long been interested in voice recognition together with EHR software. In many ways it just makes sense to use voice recognition in healthcare. There was so much dictation in healthcare that you’d think the move to voice recognition would be obvious. The reality, however, has been quite different. There are those who love voice recognition and those who hate it.

One of the major problems with voice recognition is how you integrate the popular EHR template documentation methods with voice. Sure, almost every EHR vendor can do free text boxes as well, but to capture all the granular data, doctors have ended up doing a mix of clicking a lot of boxes together with some voice recognition.

A few years ago, I started to see how EHR voice recognition could be different when I saw the Dragon Medical Enabled Chart Talk EHR. It was literally a night and day difference between Dragon on other EHR software and the Dragon embedded into Chart Talk. You could see so much more potential for voice documentation when it was deeply embedded into the EHR software.

Needless to say, I was intrigued when I was approached by the people at NoteSwift. They’d taken a number of EHRs (Allscripts Pro, Allscripts TouchWorks, Amazing Charts, and Aprima) and deeply integrated voice into the EHR documentation experience. From my perspective, it was providing Chart Talk-like voice capabilities in a wide variety of EHR vendors.

To see what I mean, check out this demo video of NoteSwift integrated with Allscripts Pro:

You can see a similar voice recognition demo with Amazing Charts if you prefer. No doubt, one of the biggest complaints with EHR software is the number of clicks that are required. I’ve argued a number of times that the number of clicks is not the issue people make it out to be, or at least that the number of clicks can be offset with proper training and an EHR that provides quick and consistent responses to clicks (see my piano analogy and Not All EHR Clicks Are Evil posts). However, I’m still interested in ways to improve the efficiency of a doctor, and voice recognition is one possibility.

I talked with a number of NoteSwift customers about their experience with the product. First, I was intrigued that the EHR vendors themselves are telling their customers about NoteSwift. That’s a pretty rare thing. When looking at adoption of NoteSwift by these practices, it seemed that doctors’ perceptions of voice recognition are carrying over to NoteSwift. I’ll be interested to see how this changes over time. Will the voice recognition doctors using NoteSwift start going home early with their charts done while the other doctors are still clicking away? Once that happens enough times, you can be sure the other doctors will take note.

One of the NoteSwift customers I talked to did note the following, “It does require them to take the time up front to set it up correctly and my guess is that this is the number one reason that some do not use NoteSwift.” I asked this same question of NoteSwift and they pointed to the Dragon training that’s long been required for voice recognition to be effective (although, Dragon has come a long way in this regard as well). While I think NoteSwift still has some learning curve, I think it’s likely easier to learn than Dragon because of how deeply integrated it is into the EHR software’s terminology.

I didn’t dig into the details of this, but NoteSwift suggested that it was less likely to break during an EHR upgrade as well. Master Dragon users will find this intriguing since they’ve likely had a macro break after their EHR gets upgraded.

I’ll be interested to watch this space evolve. I won’t be surprised if Nuance buys up NoteSwift once they’ve integrated with enough EHR vendors. Then, the tight NoteSwift voice integrations would come native with Dragon Medical. Seems like a good win-win all around.

Looking into the future, I’ll be watching to see how new doctors approach documentation. Most of them can touch type and are used to clicking a lot. Will those new “digital native” doctors be interested in learning voice? Then again, many of them are using Siri and other voice recognition on their phones as well. So, you could make the case that they’re ready for voice-enabled technologies.

My gut tells me that the majority of EHR users will still not opt for a voice enabled solution. Some just don’t feel comfortable with the technology at all. However, with advances like what NoteSwift is doing, it may open voice to a new set of users along with those who miss the days of dictation.

EMR Biometrics, Battleground EHR, and EHR Patient Communication

Posted on June 15, 2014 | Written By

John Lynn


To be honest, I’m not sure whether this tweet is actually about EMR, but it reminded me of biometric integration with EMR. I was absolutely intrigued by it when I first started with EMR. However, I haven’t dug into it in a couple of years and I’ve never really seen it take off. The only exception was this video demo of biometric patient identification, which I think is really interesting and powerful. I’d love to hear if you work at a place that uses biometrics.


Hopefully these modifications can be fit into the $11 billion DoD EHR budget. Like any system change, some people are going to miss the old system and miss specific features of the old system. Especially since AHLTA was built specifically for their needs. Good luck to whoever wins the DoD EHR contract.


I agree with this tweet to some extent. There’s some basis for the comparison with the messaging in EHRs today. The problem is that most of them just aren’t as easy to use as email or text messaging. That’s where I think the comparison falls apart. I look forward to the day when the comparison is accurate.

Big Brother Or Best Friend?

Posted on April 9, 2014 | Written By

Kyle Samani

The premise of clinical decision support (CDS) is simple and powerful: humans can’t remember everything, so enter data into a computer and let the computer render judgment. So long as the data is accurate and the rules in the computer are valid, the computer will be correct the vast majority of the time.

CDS is commonly implemented in computerized provider order entry (CPOE) systems across most order types: labs, drugs, radiology, and more. A simple example: most pediatric drugs require weight-based dosing. When physicians order drugs for pediatric patients using CPOE, the computer should validate the dose of the drug against the patient’s weight to ensure the dose is in the acceptable range. Given that the computer has all of the information necessary to calculate acceptable dose ranges, and the fact that it’s easy to accidentally enter the wrong dose into the computer, CDS at the point of ordering delivers clear benefits.
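To make the weight-based dosing example concrete, here is a minimal sketch of that kind of check. The drug name, dose range, and function name are invented for illustration and don't come from any real formulary or CPOE system.

```python
# Hypothetical weight-based dosing check, the kind of CDS rule a CPOE
# system might run at the point of ordering. All values are illustrative.

ACCEPTABLE_RANGES_MG_PER_KG = {
    # drug: (minimum, maximum) daily dose in mg per kg of body weight
    "amoxicillin": (20.0, 90.0),
}

def validate_pediatric_dose(drug: str, dose_mg: float, weight_kg: float) -> bool:
    """Return True if the ordered dose falls in the weight-adjusted range."""
    low, high = ACCEPTABLE_RANGES_MG_PER_KG[drug]
    dose_per_kg = dose_mg / weight_kg
    return low <= dose_per_kg <= high

# A 600 mg order for a 15 kg patient works out to 40 mg/kg: in range.
print(validate_pediatric_dose("amoxicillin", 600, 15))  # True
# The same 600 mg order for a 5 kg patient is 120 mg/kg: flagged.
print(validate_pediatric_dose("amoxicillin", 600, 5))   # False
```

A real system would also have to handle missing weights, unit mismatches, and route-specific ranges, which is where much of the engineering effort goes.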

The general notion of CDS – checking to make sure things are being done correctly – is the same fundamental principle behind checklists. In The Checklist Manifesto, Dr. Atul Gawande successfully argues that the challenge in medicine today is not in ignorance, but in execution. Checklists (whether paper or digital) and CDS are realizations of that reality.

CDS in CPOE works because physicians need to enter orders to do their job. But checklists aren’t as fundamentally necessary for any given procedure or action. The checklist can be skipped, and the provider can still perform the procedure at hand. Thus, the fundamental problem with checklists is that they insert a layer of friction into workflows: running through the checklist itself. If checklists could be implemented seamlessly, without introducing any additional workflow friction, they would be more widely adopted and adhered to. The basic problem is that people don’t want to go back to the same repetitive formula for tasks they feel comfortable performing. Given the tradeoff between patient safety and efficiency, checklists have only been seriously discussed in high-acuity, high-risk settings such as surgery and ICUs. It’s simply not practical to implement checklists for low-risk procedures. But even in high-acuity environments, many organizations continue to struggle to implement checklists.

So… what if we could make checklists seamless? How could that even be done?

Looking at CPOE CDS as a foundation, there are two fundamental challenges: collecting data, and checking against rules.

Computers can already access EMRs to retrieve all sorts of information about the patient. But computers don’t yet have any ability to collect data about what providers are and aren’t physically doing at the point of care. Without knowing what’s physically happening, computers can’t present alerts based on skipped or incorrect steps of the checklist. The solution would likely be based on a Kinect-like system that can detect movements and actions. Once the computer knows what’s going on, it can cross-reference what’s happening against what’s supposed to happen given the context of care delivery and issue alerts accordingly.
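The cross-referencing step described above can be sketched in a few lines: given the sequence of actions a sensing system reports, compare it against the expected checklist and alert on anything skipped. The checklist items and event names below are made up for illustration; real event detection is, of course, the hard part.

```python
# Toy sketch: flag checklist steps that an observed action stream skipped.
# Checklist items and observed event names are invented for illustration.

SURGICAL_CHECKLIST = ["confirm_patient_id", "hand_hygiene", "site_marking", "timeout"]

def audit_observed_steps(observed: list) -> list:
    """Return alerts for checklist steps that were skipped."""
    alerts = []
    next_expected = 0
    for step in observed:
        if step in SURGICAL_CHECKLIST:
            idx = SURGICAL_CHECKLIST.index(step)
            # Any expected step we jumped past was skipped.
            for missed in SURGICAL_CHECKLIST[next_expected:idx]:
                alerts.append("skipped: " + missed)
            next_expected = max(next_expected, idx + 1)
    # Anything never observed at all is also skipped.
    for missed in SURGICAL_CHECKLIST[next_expected:]:
        alerts.append("skipped: " + missed)
    return alerts

print(audit_observed_steps(["confirm_patient_id", "site_marking", "timeout"]))
# ['skipped: hand_hygiene']
```

The rules engine is the easy half; the ambitious part, as the post says, is the sensing layer that produces the `observed` stream reliably.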

What’s described above is an extremely ambitious technical undertaking. It will take many years to get there. There are already a number of companies trying to address this in primitive forms: SwipeSense detects if providers clean their hands before seeing patients, and the CHARM system uses Kinect to detect hand movements and ensure surgeries are performed correctly.

These early examples are a harbinger of what’s to come. If preventable mistakes are the biggest killer within hospitals, hospitals need to implement systems to identify and prevent errors before they happen.

Let’s assume that the tech evolves to the point of an omniscient, benevolent computer that detects errors and issues warnings. Although this is clearly desirable for patients, what does this mean for providers? Will they become slaves to the computer? Providers already face challenges with CPOE alert fatigue. Just imagine do-anything alert fatigue.

There is an art to telling people that they’re wrong. In order to successfully prevent errors, computers will need to learn that art. Additionally, there must be a cultural shift to support the fact that when the computer speaks up, providers should listen. Many hospitals still struggle today with implementing checklists because of cultural issues. There will need to be a similar cultural shift to enable passive omniscient computers to identify errors and warn providers.

I’m not aware of any omniscient computers that watch people all day and warn them that they’re about to make a mistake. There could be such software for workers in nuclear power plants or other critical jobs in which the cost of being wrong is devastating. If you know of any such software, please leave a comment.

Effortless EHR Interaction

Posted on April 9, 2013 | Written By

John Lynn

I recently came across a really interesting device called MYO. I really can’t do the device justice, so I’ll just share this video, which does a much better job showing the gesture controls that are possible with the MYO.

I love how it senses even changes in the muscle. I love the description that says it sometimes feels like it responds before you even move, since it senses your muscle before the movement is even done. Pretty amazing.

There have to be so many possible uses for a next generation gesture device like MYO in healthcare. I’ve been thinking a lot about effortless EHR interaction and where it could go. I wonder if MYO and other gesture control systems can dramatically improve a physician’s interaction with an EHR.

Plus, the most exciting thing of all is that I think we’re still in the very early days of what’s going to be possible with gesture control and human-computer interaction in general. Pair this with always-on, ubiquitous computing like what’s being shown with Google Glass and we’re just at the very beginning of the computing revolution.

I guess we’ll see if healthcare decides to lag behind these new technologies or whether we’ll ride the wave of transformation.

Retina Scanning vs. Iris Recognition in Healthcare – Best Technology Seen at AHIMA

Posted on November 1, 2012 | Written By

John Lynn

While at AHIMA, I was lucky enough to meet John Trader from RightPatient (a part of M2SYS Healthcare Solutions). During our meeting he showed me the coolest technology I’ve seen in quite a while. Ever since I first started this blog, I’ve had a serious interest in seeing how biometric solutions could benefit an EHR implementation. I’ve tried fingerprint, facial (and this review), voice, typing, etc., and been amazed by the technology. Facial recognition was probably my favorite despite its weaknesses.

The funny thing is that I always shot down anyone who suggested the use of some sort of eye-related biometric identification. Thinking back to my only reference for retina scanning biometrics (movies like Mission Impossible), I didn’t see how that was going to integrate well with healthcare.

Turns out that I was wrong, and my big mistake was that I was looking at the technology from a doctor, nurse, and front desk staff identification perspective as opposed to a patient identification perspective. Plus, I didn’t get the difference between retina scanning and iris recognition.

With this background, you can imagine my surprise when I fell in love with the RightPatient iris recognition technology that John Trader demoed to me at AHIMA. I shot this short video embedded below where John discusses the differences between retina scanning (the laser scan you see in the movies) and iris recognition. Then, John demos their iris recognition technology.


Much more could be said about how the iris technology works, but I think it’s best deployed at a hospital front desk during registration. Imagine the number of duplicates that could be avoided with good biometric iris recognition. Imagine the insurance abuse that could be avoided with iris recognition.

In the video I only showed one of the models that RightPatient deploys. They have another model that automatically swivels until it locates your iris. It’s hard to explain on the blog, but when you try it firsthand it’s like magic.

Wireless Health Data Collection Innovations Getting Hot

Posted on September 25, 2012 | Written By

Katherine Rourke is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she's served as editor in chief of several healthcare B2B sites.

This week, psfk.com and pharma partner Boehringer Mannheim published a list of the week’s top innovations in healthcare. All were interesting, but I was particularly intrigued by a couple which continue to stretch the boundaries of wireless medicine.

One innovation example comes from a German research team, which has developed a tiny chip (a two-millimeter device shorter than an eyelash) which can sample blood sugar levels by testing tears or sweat. The chip is equipped to transmit the results wirelessly to providers, as well as sending patients alerts on their mobile phone. Even cooler, the chip can be powered wirelessly through radio frequency, keeping it charged for weeks or even months.

Another entirely cool innovation comes from U.S. high school student Catherine Wong, who has invented an ECG made of off-the-shelf electronic components which can broadcast results wirelessly. The device, which could make ECGs available to the two billion-plus people without access to healthcare, picks up heart signals, then transmits them via cellphone to a healthcare provider. The cellphone connects to the ECG using Bluetooth, and heart rhythms display on a smartphone screen thanks to a Java app.

As readers know, the idea of broadcasting test results to remote providers via wireless devices is not a new one. The idea is so hot, in fact, that the FCC is holding a public meeting on September 24 to discuss how to accelerate the adoption of such approaches. (The event will be live streamed at http://www.itif.org/events/recommendations-mhealth-task-force at 2PM Eastern Standard Time.)

After watching projects like these germinate for a number of years, I’m thrilled to see more innovation arising in this sector of the mHealth space. Inventors, keep it coming!

Will Growth In Mobile Use Compromise HIPAA Compliance?

Posted on May 31, 2012 | Written By

Katherine Rourke

There’s little doubt that giving doctors mobile access to data via their personal devices can be valuable. We’ve probably all read case studies in which doctors saved a great deal of time and made the right clinical call because they could reach data via an iPad, smartphone or Android tablet.

And this is as it should be. We’ve been working to push intelligence to the network for at least the two decades I’ve been writing about IT.

That being said, we haven’t yet gotten our arms around the security problems posed by mobile computing during that period, as hard as IT managers have tried. Adding a HIPAA compliance requirement to the mix makes things even more difficult. As John wrote previously, Email is Not HIPAA Secure and Text is Not HIPAA Secure either.

According to one security expert, healthcare providers need to do at least the following to meet HIPAA standards with mobile devices:

  • Protect their private data and ePHI on personal-liable (BYOD) mobile devices;
  • Encrypt all corporate email, data and documents in transit and at rest on all devices;
  • Remotely configure and manage device policies;
  • Apply dynamic policy controls that restrict access to certain data or applications;
  • Enforce strict access controls and data rights on individual apps and services;
  • Continuously monitor device integrity to ensure secure PHI transmission;
  • Protect against malicious applications, malware and cyber threats;
  • Centrally manage policies and configurations across all devices;
  • Generate comprehensive compliance reporting across all mobile devices and infrastructure.
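To give a flavor of what a few of the items above might look like in practice, here is a toy sketch of the kind of per-device policy check a mobile device management system could run. The field names and thresholds are invented for illustration and don't correspond to any real MDM product's API.

```python
# Hypothetical MDM-style compliance check, loosely following the policy
# list above. Field names and thresholds are invented for illustration.

def check_device_compliance(device: dict) -> list:
    """Return the list of policy violations for one enrolled mobile device."""
    violations = []
    if not device.get("storage_encrypted"):
        violations.append("data at rest is not encrypted")
    if not device.get("email_encrypted_in_transit"):
        violations.append("email is not encrypted in transit")
    if device.get("passcode_length", 0) < 6:
        violations.append("passcode too short for access-control policy")
    if device.get("jailbroken"):
        violations.append("device integrity compromised")
    return violations

# A BYOD phone with encryption enabled but a weak passcode.
byod_phone = {"storage_encrypted": True, "email_encrypted_in_transit": True,
              "passcode_length": 4, "jailbroken": False}
print(check_device_compliance(byod_phone))
# ['passcode too short for access-control policy']
```

The compliance-reporting bullet then becomes a matter of aggregating these per-device results across the whole fleet, which is exactly the centralized management the list calls for.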

Just a wild guess here, but my hunch is that very few providers have gone to these lengths to protect the ePHI on clinicians’ devices. In fact, my sense is that if Mr. Bad Guy stole a few iPads or laptops from doctors at random right now, he’d find a wide open field. True, the thief probably couldn’t log into the EMR(s) the physician uses, but any other clinical observations or notes (think Microsoft Office apps) would be in the clear in most cases.

Being a journalist, not a security PhD, I can’t tell you I know what must be done. But having talked to countless IT administrators, I can definitely see that this is a nasty, hairy problem, for many reasons including the following:

– I doubt it’s going to be solved by a single vendor, given the diversity of systems even a modestly large medical practice runs, though I bet you will be (or already are) getting pitches to that effect.

– Two-factor authentication that locks up the device for all but the right user sounds good, but add-ons like, say, biometrics aren’t cheap.

– Add too many login steps to doctors already tired of extra clicks and you may see mass defections away from EMR use.

– Remotely managing and patching security software on devices with multiple operating systems and network capabilities is no joke.

If you feel your institution has gotten a grip on this problem, please do chime in and tell me. Or feel free to be a mean ol’ pessimist like myself. Either way, I’d love to hear some of your experiences in protecting mobile data. Maybe you have a good news story to tell.