
Health Data Standardization Project Proposes “One Record Per Person” Model

Posted on October 13, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

When we sit around the ol’ HIT campfire and swap interoperability stories, many of us have little to do but gripe.

Is FHIR going to solve all of our interoperability problems? Definitely not right away, and who knows if it ever will? Can we get the big EMR vendors to share and share alike? They’ll try, but there’s always a catch. And so on.

I don’t know if the following offers a better story than any of the others, but at least it’s a new one, or at least new to me. Folks, I’m talking about the Standard Health Record, an approach to health data sharing that doesn’t fall precisely into any of the other buckets I’m aware of.

SHR is based at The MITRE Corporation, which also hosts virtual patient generator Synthea. Rather than paraphrase, let’s let the MITRE people behind SHR tell you what they’re trying to accomplish:

The Standard Health Record (SHR) provides a high quality, computable source of patient information by establishing a single target for health data standardization… Enabled through open source technology, the SHR is designed by, and for, its users to support communication across homes and healthcare systems.

Generalities aside, what is an SHR? According to the project website, the SHR specification will contain all information critical to patient identification, emergency care and primary care along with background on social determinants of health. In the future, the group expects the SHR to support genomics, microbiomics and precision medicine.
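
To make that a bit more concrete, here is a rough sketch in Python of what a single standardized, person-centric record might look like. The field names and structure are illustrative assumptions on my part, loosely following the categories above; they are not the actual SHR specification.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only -- these fields loosely mirror the categories the SHR
# project describes (identification, emergency care, primary care, social
# determinants); they are not the actual SHR specification.

@dataclass
class EmergencyInfo:
    blood_type: str
    allergies: List[str] = field(default_factory=list)
    active_medications: List[str] = field(default_factory=list)

@dataclass
class StandardHealthRecordSketch:
    person_id: str              # a single identifier: "one record per person"
    name: str
    date_of_birth: str          # ISO 8601 date string
    emergency: EmergencyInfo
    primary_care_provider: str
    social_determinants: dict   # e.g. housing status, food security

record = StandardHealthRecordSketch(
    person_id="urn:example:person:12345",
    name="Jane Q. Patient",
    date_of_birth="1970-04-02",
    emergency=EmergencyInfo(blood_type="O+", allergies=["penicillin"]),
    primary_care_provider="Dr. Example",
    social_determinants={"housing": "stable", "food_security": "high"},
)
```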

Before we dismiss this as another me-too project, it’s worth giving the collaborative’s rationale a look:

The fundamental problem is that today’s health IT systems contain semantically incompatible information. Because of the great variety of the data models of EMR/EHR systems, transferring information from one health IT system to another frequently results in the distortion or loss of information, blocking of critical details, or introduction of erroneous data. This is unacceptable in healthcare.

The approach of the Standard Health Record (SHR) is to standardize the health record and health data itself, rather than focusing on exchange standards.

As a less-technical person, I’m not qualified to say whether this can be done in a way that will be widely accepted, but the idea certainly seems intuitive.

In any event, no one is suggesting that the SHR will change the world overnight. The project seems to be at the beginning stages, with collaborators currently prototyping health record specifications leveraging existing medical record models. (The current SHR spec can be found here.)

Still, I’d love to see this succeed, because it is at least a fairly straightforward idea. Creating a single source of truth for health data seems like it just might work.

A Hospital CIO Perspective on Precision Medicine

Posted on July 31, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

#Paid content sponsored by Intel.

In this video interview, I talk with David Chou, Vice President, Chief Information and Digital Officer with Kansas City, Missouri-based Children’s Mercy Hospital. In addition to his work at Children’s Mercy, he helps healthcare organizations transform themselves into digital enterprises.

Chou previously served as a healthcare technology advisor with law firm Balch & Bingham and Chief Information Officer with the University of Mississippi Medical Center. He also worked with the Cleveland Clinic to build a flagship hospital in Abu Dhabi, as well as working in for-profit healthcare organizations in California.

Precision Medicine and Genomic Medicine are important topics for every hospital CIO to understand. In my interview with David Chou, he provides the hospital CIO perspective on these topics and offers insights into what a hospital organization should be doing to take part in and be prepared for precision medicine and genomic medicine.

Here are the questions I asked him, if you’d like to skip to a specific topic in the video or check out the full video interview embedded below:

What are you doing in your organization when it comes to precision medicine and genomic medicine?

Will Data Aggregation For Precision Medicine Compromise Patient Privacy?

Posted on April 10, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

Like anyone else who follows medical research, I’m fascinated by the progress of precision medicine initiatives. I often find myself explaining to relatives that in the (perhaps far distant) future, their doctor may be able to offer treatments customized specifically for them. The prospect is awe-inspiring even for me, someone who’s been researching and writing about health data for decades.

Be that as it may, bringing so much personal information together into a giant database poses real problems, suggests Jennifer Kulynych in an article for OUPblog, which is published by Oxford University Press. In particular, assembling a massive trove of individual medical histories and genomes may have serious privacy implications, she says.

In arguing her point, she makes a sobering observation that rings true for me:

“A growing number of experts, particularly re-identification scientists, believe it simply isn’t possible to de-identify the genomic data and medical information needed for precision medicine. To be useful, such information can’t be modified or stripped of identifiers to the point where there’s no real risk that the data could be linked back to a patient.”

As she points out, norms in the research community make it even more likely that patients could be individually identified. For example, while a doctor might need your permission to test your blood for care, in some states it’s quite legal for a researcher to take possession of blood not needed for that care, she says. Those researchers can then sequence your genome and place the data in a research database, and you may never have consented to this, or even know that it happened.

And there are other, perhaps even more troubling ways in which existing laws fail to protect the privacy of patients in researchers’ data stores. For example, current research and medical regulations let review boards waive patient consent or allow researchers to label DNA sequences as “de-identified” data, a practice that flies in the face of mounting evidence that such data carries a real re-identification risk, she writes.

On top of all of this, the technology already exists to leverage this information for personal identification. For example, genome sequences can potentially be re-identified through comparison to a database of identified genomes. Law enforcement organizations have already used genomic data to predict key aspects of an individual’s face, such as eye color and race.
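
To show how simple such a comparison can be in principle, here is a toy Python sketch that matches a “de-identified” marker profile against an identified reference set. The names, markers, and genotypes are all invented, and real re-identification attacks are far more sophisticated; this is only meant to illustrate the basic linkage idea.

```python
# Toy illustration of the re-identification risk described above: a
# "de-identified" set of genetic markers is matched against a database in
# which the same markers are stored alongside names. All data here is made up.

identified_db = {
    "Alice Smith": {"rs123": "AG", "rs456": "CC", "rs789": "TT"},
    "Bob Jones":   {"rs123": "AA", "rs456": "CT", "rs789": "TC"},
}

deidentified_profile = {"rs123": "AG", "rs456": "CC", "rs789": "TT"}

def best_match(profile, db):
    """Return the identified person whose markers overlap the profile most."""
    def overlap(markers):
        return sum(markers.get(snp) == genotype for snp, genotype in profile.items())
    return max(db, key=lambda person: overlap(db[person]))

print(best_match(deidentified_profile, identified_db))  # -> "Alice Smith"
```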

Then there’s the issue of what happens with EMR data storage. As the author notes, healthcare organizations are increasingly adding genomic data to their stores, and sharing it widely with individuals on their network. While such practices are largely confined to academic research institutions today, this type of data use is growing, and could also expose patients to involuntary identification.

Not everyone is as concerned as Kulynych about these issues. For example, a group of researchers recently concluded that a single patient anonymization algorithm could offer a “standard” level of privacy protection to patients, even when the organizations involved are sharing clinical data. They argue that larger clinical datasets that use this approach could protect patient privacy without generalizing or suppressing data in a manner that would undermine its usefulness.

But if nothing else, it’s hard to argue with Kulynych’s central concern: too few rules have been updated to reflect the realities of big genomic and medical data stores. Clearly, state and federal rules need to address the emerging problems associated with big data and privacy. Otherwise, by the time a major privacy breach occurs, neither patients nor researchers will have any recourse.

AMIA Asks NIH To Push For Research Data Sharing

Posted on January 23, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

The American Medical Informatics Association is urging leaders at the NIH to take researchers’ data sharing plans into account when considering grant proposals.

AMIA is responding to an NIH Request for Information (“Strategies for NIH Data Management, Sharing and Citation”) published in November 2016. In the RFI, the agency asked for feedback on how digital scientific data generated by NIH-funded research should be managed and disclosed to the public. It also asked for input on how to set standards for citing shared data and software.

In its response, AMIA said that the agency should give researchers “institutional incentives” designed to boost data sharing and strengthen data management. Specifically, the trade group suggested that NIH make data sharing plans a “scoreable” part of grant applications.

“Data sharing has become such an important proximal output of research that we believe the relative value of a proposed project should include consideration of how its data will be shared,” AMIA said in its NIH response. “By using the peer-review process, we will make incremental improvements to interoperability, while identifying approaches to better data sharing practices over time.”

To help the agency implement this change, AMIA recommended that applicants earmark funds for data curation and sharing as part of the grants’ direct costs. Doing so will help assure that data sharing becomes part of research ecosystems.

AMIA also recommended that NIH offer rewards to scholars who either create or contribute to publicly-available datasets and software. The trade group argues that such incentives would help those who create and analyze data advance their careers. (And this, your editor notes, would help create a virtuous cycle in which data-oriented scientists are available to support such efforts.)

Right now, to my knowledge, few big data integration projects include the kind of front-line research data we’re talking about here.  On the other hand, while few community hospitals are likely to benefit from research data in the near term, academic medical organizations are having a bit more luck, and offer us an attractive picture of how things could be.

For example, look at this project at Vanderbilt University Medical Center which collects and manages translational and clinical research data via an interface with its EMR system.

At Vanderbilt, research data collection is integrated with clinical EMR use. Doctors there use a module within the research platform (known as REDCap) to collect data for prospective clinical studies. Once they get their research project approved, clinicians use menus to map health record data fields to REDCap. Then, REDCap automatically retrieves health record data for selected patients.
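
For readers who like to see the plumbing, here is a minimal Python sketch of how an EHR-to-REDCap handoff of this kind could look. It assumes a REDCap API URL and token and uses a hypothetical field mapping; it is not Vanderbilt’s actual integration, which is configured through REDCap’s own menus.

```python
import json
import requests  # third-party; pip install requests

# Hypothetical mapping from EHR fields to REDCap fields for one study.
FIELD_MAP = {"systolic_bp": "sbp", "hemoglobin_a1c": "a1c"}

def ehr_to_redcap_record(record_id, ehr_data):
    """Translate a dict of EHR values into a REDCap-style flat record."""
    rec = {"record_id": record_id}
    for ehr_field, redcap_field in FIELD_MAP.items():
        if ehr_field in ehr_data:
            rec[redcap_field] = ehr_data[ehr_field]
    return rec

def import_to_redcap(api_url, api_token, records):
    """Push records to a REDCap project via its record-import API call."""
    payload = {
        "token": api_token,
        "content": "record",
        "format": "json",
        "type": "flat",
        "data": json.dumps(records),
    }
    resp = requests.post(api_url, data=payload, timeout=30)
    resp.raise_for_status()
    return resp.text

# Example usage (placeholder URL and token):
# import_to_redcap("https://redcap.example.edu/api/", "MY_TOKEN",
#                  [ehr_to_redcap_record("1001", {"systolic_bp": 128})])
```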

My feeling is that if NIH starts pushing grantees to share data effectively, we’ll see more projects like REDCap, and in turn, better clinical care supported by such research. It looks to me like everybody wins here. So I hope the NIH takes AMIA’s proposal seriously.

IBM Watson Partners With FDA On Blockchain-Driven Health Sharing

Posted on January 16, 2017 | Written By

Anne Zieger is a healthcare journalist who has written about the industry for 30 years. Her work has appeared in all of the leading healthcare industry publications, and she’s served as editor in chief of several healthcare B2B sites.

IBM Watson Health has partnered with the FDA in an effort to create scalable exchange of health data using blockchain technology. The two will research the exchange of owner-mediated data from a variety of clinical data sources, including EMRs, clinical trial data and genomic health data. The researchers will also incorporate data from mobiles, wearables and the Internet of Things.

The initial project planned for IBM Watson and the FDA will focus on oncology-related data. This makes sense, given that cancer treatment involves complex communication between multispecialty care teams, transitions between treatment phases, and potentially, the need to access research and genomic data for personalized drug therapy. In other words, managing the communication of oncology data is a task fit for Watson’s big brain, which can read 200 million pages of text in 3 seconds.

Under the partnership, IBM and the FDA plan to explore how the blockchain framework can benefit public health by supporting information exchange use cases across varied data types, including both clinical trials and real-world data. They also plan to look at new ways to leverage the massive volumes of diverse data generated by biomedical and healthcare organizations. IBM and the FDA have signed a two-year agreement, but they expect to share initial findings this year.

The partnership comes as IBM works to expand its commercial blockchain efforts, including initiatives not only in healthcare, but also in financial services, supply chains, IoT, risk management and digital rights management. Big Blue argues that blockchain networks will spur “dramatic change” for all of these industries, but clearly has a special interest in healthcare.  According to IBM, Watson Health’s technology can access the 80% of unstructured health data invisible to most systems, which is clearly a revolution in the making if the tech giant can follow through on its potential.

According to Scott Lundstrom, group vice president and general manager of IDC Government and Health Insights, blockchain may solve some of the healthcare industry’s biggest data management challenges, including the need for a distributed, immutable patient record that can be secured and shared. In fact, this idea – building a distributed, blockchain-based EMR – seems to be gaining traction among many health IT thinkers.
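
For the non-engineers, here is a toy Python sketch of the core idea behind a blockchain-style record: each entry carries the hash of the previous one, so any tampering with history is detectable. This is only a conceptual illustration of hash chaining, not the IBM/FDA framework or a distributed system.

```python
import hashlib
import json
import time

# Toy sketch of a hash-linked ("blockchain-style") audit chain for record
# updates. It illustrates tamper-evidence only; it is not the IBM/FDA design.

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

chain = [make_block({"event": "record created"}, prev_hash="0" * 64)]
chain.append(make_block({"event": "oncology note added"}, chain[-1]["hash"]))

def block_hash(block):
    """Recompute a block's hash from its contents (excluding the stored hash)."""
    body = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def verify(chain):
    """Any edit to an earlier block breaks the hash links that follow it."""
    for prev, curr in zip(chain, chain[1:]):
        if prev["hash"] != block_hash(prev) or curr["prev_hash"] != prev["hash"]:
            return False
    return chain[-1]["hash"] == block_hash(chain[-1])

print(verify(chain))  # True until any block is altered
```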

As readers may know, I’m neither an engineer nor a software developer, so I’m not qualified to judge how mature blockchain technologies are today, but I have to say I’m a bit concerned about the rush to adopt it nonetheless.  Even companies with a lot at stake  — like this one, which sells a cloud platform backed by blockchain tech — suggest that the race to adopt it may be a bit premature.

I’ve been watching tech fashions come and go for 25 years, and they follow a predictable pattern. Or rather, they usually follow two paths. Go down one, and the players who are hot for a technology put so much time and money into it that they force-bake it into success. (Think, for example, the ERP revolution.) Go down the other road, however, and the new technology crumbles in a haze of bad results and lost investments. Let’s hope we go down the former, for everyone’s sake.

How Precision Medicine Can Save More Lives and Waste Less Money (Part 2 of 2)

Posted on August 10, 2016 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous section of this article looked at how little help we get from genetic testing. Admittedly, when treatments have been associated with genetic factors, testing has often been the difference between life and death. Sometimes doctors can home in with laser accuracy on a treatment that works for someone because a genetic test shows that he or she will respond to that treatment. Hopefully, the number of treatments that we can associate with tests will grow over time.

So genetics holds promise, but behavioral and environmental data are what we can use right now. One sees stories in the trade press all the time such as these:

These studies usually depend on straightforward combinations of data that are easy to get, either from the health care system (clinical or billing data) or from the patient (reports of medication adherence, pain level, etc.).

And we’ve only scratched the surface of the data available to us. Fitness devices, sensors in our neighborhoods, and other input will give us much more. We can also find new applications for data: for instance, to determine whether one institution is overprescribing certain high-cost drugs, or whether an asthma victim is using an inhaler too often, meaning the medication isn’t strong enough. We know that social factors, notably poverty (LGBTQ status is not mentioned in the article, but is another huge contributor to negative health outcomes, due to discrimination and clinician ignorance), must be incorporated into models for diagnosis, prediction, and care.

President Obama has promised that Precision Medicine will feature both genetics and personal information: one million volunteers are being sought to contribute DNA samples along with information on age, race, income, education, sexual orientation, and gender identity.

There are other issues that critics have raised with the Precision Medicine initiative. For instance, its focus on cure instead of prevention weakens its value for long-term public health improvements. We must also remember the large chasm between knowing what’s good for you and doing it. People notoriously fail to change unhealthy behaviors, such as smoking, even when told they are at increased risk. Some experts think people shouldn’t be told their DNA results.

Meanwhile, those genetic databases can be used against you. But let’s consider our context, once again, in order to assess the situation responsibly. The data is being mined by police, but it’s probably not very useful to them, because the DNA segments collected for research are different from what the police are looking for. Behavioral data, if abused, is probably more damning than genetic data.

Just as there are powerful economic forces biasing us toward genetics, social and political considerations weigh against behavioral and environmental data. We all know the weaknesses in the government’s dietary guidelines, heavily skewed by the food industry. And the water disaster in Flint, Michigan showed how cowardice among the guardians of public health, and their resistance to admitting mistakes, raised the costs of a public health failure. Industry lobbying and bureaucratic inertia work together to undermine the simplest and most effective ways of improving health. But let’s get behavioral and environmental measures on the right track before splurging on genetic testing.

How Precision Medicine Can Save More Lives and Waste Less Money (Part 1 of 2)

Posted on August 9, 2016 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

We have all by now seen the hype around the Obama Administration’s high-profile Precision Medicine Initiative and the related Cancer Moonshot, both of which plan to cull behavioral and genomic data on huge numbers of people in a secure manner for health research. Major companies have rushed to take advantage of the funds and spotlight that these initiatives offer. I think they’re a good idea so long as they focus on behavioral and environmental factors. (Scandalously, the Moonshot avoids environmental factors, which are probably the strongest contributors to cancer.) What I see instead is an ill-advised over-emphasis on the genetic aspect of health analytics. This can be seen in announcements from health IT vendors, incubators, and the trade press.

I can see why the big analytics firms are excited about increasing the health care field’s reliance on genomics: that’s where the big bucks are. Sequencing (especially full sequencing) is still expensive, despite dramatic cost reductions over the past decade. And after sequencing, analysis requires highly specialized expertise that relatively few firms possess. I wouldn’t say that genomics is the F-35 of health care, but it is definitely an expensive path to our ultimate goals: reducing the incidence of disease and improving quality of life.

Genomics offers incredible promise, but we’re still waiting to see just how it will help us. The problems that testing turns up, such as Huntington’s, usually lack solutions. One study states, “Despite the success of genome-wide association and whole-exome and whole-genome sequencing (WES/WGS) studies in revealing the DNA variants that underlie the genetic basis of disease, the development of effective treatments for most diseases has remained a challenge.” Another says, “Despite much progress in defining the genetic basis of asthma and atopy [predisposition to getting asthma] in the last decade, further research is required.”

When we think about the value of knowing a gene or a genetic deviation, we are asking: “How much does this help predict the likelihood that I’ll get the disease, or that a particular treatment will work on me?” The most impressive “yes” in this regard probably involves the famous BRCA1 and BRCA2 genes. If you are unlucky enough to have certain mutations of these genes, you have a 70% lifetime risk of developing breast or ovarian cancer. This is why testing for the genes is so popular (as well as contentious from an intellectual property standpoint), and why so many women act on the results.

However–this is my key point–only a small percentage of women who get these cancers have these genetic mutations. Most are not helped by testing for the genes, and a negative result on such a test gives them only a slight extra feeling of relief that they might not get cancer. Still, because the incidence of cancer is so high among the unfortunate women with the mutations, testing is worthwhile. Most of the time, though, testing is not worth much, because the genetic component of the disease is small in relation to lifestyle choices, environmental factors, or other things we might know nothing about.
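
Some rough arithmetic makes the point concrete. The numbers below are round assumptions of mine (a carrier rate of roughly 1 in 400 and an overall lifetime breast cancer risk of about 12%), combined with the 70% carrier risk cited above:

```python
# Illustrative arithmetic only -- the prevalence and baseline-risk figures
# below are round assumptions, not numbers from the article.

carrier_prevalence = 0.0025   # assume ~1 in 400 women carry a high-risk mutation
carrier_risk       = 0.70     # lifetime risk for carriers (figure cited above)
overall_risk       = 0.12     # assumed lifetime breast cancer risk, all women

# Share of all cases that occur in mutation carriers
cases_in_carriers = carrier_prevalence * carrier_risk
share_of_cases = cases_in_carriers / overall_risk

# Residual lifetime risk for women who test negative
noncarrier_risk = (overall_risk - cases_in_carriers) / (1 - carrier_prevalence)

print(f"Share of cases explained by the mutation: {share_of_cases:.1%}")   # ~1.5%
print(f"Lifetime risk after a negative test:      {noncarrier_risk:.1%}")  # ~11.9%
```

Under these assumptions, carriers account for only a percent or two of all cases, and a negative test barely lowers a woman’s baseline risk, which is exactly the pattern described above.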

So, although it’s hard enough already to say with any assurance that a particular gene or combination of genes is associated with a disease, it’s even harder to say that testing will make a big difference. Maybe, as with breast or ovarian cancer, a lot of people will get the disease for reasons unrelated to the gene.

In short, several factors go into determining the value of testing: how often a positive test guarantees a result, how often a negative test guarantees a result, how common the disease is, and more. Is there some way to wrap all these factors up into a single number? Yes, there is: it’s called the odds ratio. The higher the odds ratio, the more helpful (using all the criteria I mentioned) an association is between gene and disease, or gene and treatment. For instance, one study found that certain genes have a significant association with asthma. But the odds ratios were modest: 3.203 and 5.328. One would want something an order of magnitude higher to show that running a test for the genes would have really strong value.
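
Here is the calculation in miniature, with made-up counts, so you can see where such a number comes from:

```python
# Odds ratio from a 2x2 table (the counts are invented for illustration).
#                    disease   no disease
# variant present      a=30        b=70
# variant absent       c=10        d=90

a, b, c, d = 30, 70, 10, 90

odds_with_variant    = a / b   # odds of disease if you carry the variant
odds_without_variant = c / d   # odds of disease if you don't
odds_ratio = odds_with_variant / odds_without_variant   # equivalently (a*d)/(b*c)

print(round(odds_ratio, 3))  # 3.857 -- roughly the range the asthma study reports
```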

This reality check can explain why doctors don’t tend to recommend genetic testing. Many sense that the tests can’t help or aren’t good at predicting most things.

The next section of this article will turn to behavioral and environmental factors.

What Data Do You Need in Order to Guide Behavioral Change?

Posted on June 2, 2016 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

This is an exciting time for the health care field, as its aspirations toward value-based payments and behavioral responses to chronic conditions converge on a more and more precise solution. Dr. Joseph Kvedar has called this comprehensive approach connected health and has formed both a conference and a book around it. BaseHealth, a predictive analytics company in healthcare, has teamed up with TriVita to offer a consumer-based service around this approach, which combines access to peer-reviewed research with fine-tuned guidance that taps into personal health and behavioral data and leverages the individual interests of each participant.

I have previously written about BaseHealth’s assessment engine, which asks individuals for information about their activities, family history, and health conditions in order to evaluate their health profile and risk for common diseases. TriVita is a health coaching service with a wide-ranging assessment tool and a number of products, including cutely named supplements such as Joint Complex and Daily Cleanse. TriVita’s nutritionists, exercise coaches, and other staff are overseen by physicians, but their service is not medical: it does not enter the heavily regulated areas where clinicians practice.

I recently talked with BaseHealth’s CEO, Prakash Menon, and Dan Hoemke, its Vice President of Business Development. They describe BaseHealth’s predictive analytics as input that informs TriVita’s coaching service. What I found interesting is the sets of data that seem most useful for coaching and behavioral interventions.

In my earlier article, I wrote, “BaseHealth has trouble integrating EHR data.” Menon tells me that getting this data has become much easier over the past several months, because several companies have entered the market to gather and combine the data from different vendors. Still, BaseHealth focuses on a few sources of medical data, such as lab and biometric data. Overall, they focus on gathering data required to identify disease risk and guide behavior change, which in turn improves preventable conditions such as heart disease and diabetes.

Part of their choice springs from the philosophy driving BaseHealth’s model. Menon says, “BaseHealth wants to work with you before you have a chronic condition.” For instance, the American Diabetes Association estimated in 2012 that 86 million Americans over the age of 20 had prediabetes. Intervening before these people have developed the full condition is when behavioral change is easiest and most effective.

Certainly, BaseHealth wants to know your existing medical conditions, so they ask you about them when you sign up. Other measurements, such as cholesterol, are also vital to BaseHealth’s analytics. Through a partnership with LabCo, a large diagnostics company in Europe, they are able to tap into lab systems to get these readings automatically. But users in the United States can enter them manually with little effort.

BaseHealth is not immune to the industry’s love affair with genetics and personalization, either. They take about 1500 genetic factors into account, helping them to quantify your risk of getting certain chronic conditions. But as a behavioral health service, Menon points out, BaseHealth is not designed to do much with genetic traits signifying a high chance of getting a disease. They deal with problems that you can do something about–preventable conditions. Menon cites a Health 2.0 presentation (see Figure 1) saying that our health can, on average, be attributed 60 percent to lifestyle, 30 percent to genetics, and 10 percent to clinical interventions. But genetics help to show what is achievable. Hoemke says BaseHealth likes to compare each person against the best she can be, whereas many sites just compare a user against the average population with similar health conditions.


Figure 1. Relative importance of health factors

BaseHealth gets most of its data from conditions known to you, your environment, family history, and more than 75 behavioral factors: your activity, food, over-the-counter meds, sleep activity, alcohol use, smoking, several measures of stress, etc. BaseHealth assessment recommendations and other insights are based on peer-reviewed research. BaseHealth will even point the individual to particular studies to provide the “why” for its recommendations.
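
As a purely hypothetical sketch of how an engine might fold such factors into a single estimate (this is not BaseHealth’s actual model, and the factors and weights are invented), consider a simple relative-risk multiplication:

```python
# A toy relative-risk combination -- one common way an assessment engine
# might fold individual factors into a single risk estimate. This is NOT
# BaseHealth's actual model; factor values and weights are invented.

BASELINE_10YR_RISK = 0.08   # assumed baseline risk for the condition

RELATIVE_RISKS = {          # hypothetical per-factor relative risks
    "smoker": 1.8,
    "low_activity": 1.4,
    "high_risk_genotype": 1.3,
    "poor_sleep": 1.2,
}

def estimated_risk(profile):
    """Multiply the baseline by the relative risk of each factor present."""
    risk = BASELINE_10YR_RISK
    for factor, present in profile.items():
        if present:
            risk *= RELATIVE_RISKS.get(factor, 1.0)
    return min(risk, 1.0)

print(estimated_risk({"smoker": True, "low_activity": True,
                      "high_risk_genotype": False, "poor_sleep": True}))
# -> roughly 0.24 under these made-up numbers
```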

So where does TriVita fit in? Hoemke says that BaseHealth has always stressed the importance of human intervention, refusing to fall into the fallacy that health can be achieved just through new technology. He also said that TriVita fits into the current trend of shifting accountability for health to the patient; he calls it a “health empowerment ecosystem.” As an example of the combined power of BaseHealth and TriVita, a patient can send his weight regularly to a coach, and both can view the implications of the changes in weight–such as changes in risk factors for various diseases–on charts. Some users make heavy use of the coaches, whereas others take the information and recommendations and feel they can follow their plan on their own.

As more and more companies enter connected health, we’ll get more data about what works. And even though BaseHealth and TriVita are confident they can achieve meaningful results with mostly patient-generated data, I believe that clinicians will use similar techniques to treat sicker people as well.

Genomic Medicine

Posted on February 3, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Last month I was lucky to lead a panel discussion on the topic of genomics in medicine at CES. I was joined on the panel by Andy De, Global Managing Director and General Manager for Healthcare and Life Sciences at Tableau, and Aaron Black, Director, Informatics, Inova Translational Medicine Institute. There certainly wasn’t enough time in our session to get to everything that was really happening in genomics, but Andy and Aaron do a great job giving you an idea of what’s really happening with genomics and the baseline of genomic data that’s being set for the future. You can see what I mean in the video below:

Be sure to see all of the conferences where you can find Healthcare Scene.

Envisioning the Future of Personalized Healthcare – Predictive Analytics – Breakaway Thinking

Posted on December 16, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

The following is a guest blog post by Jennifer Bergeron, Learning and Development Manager at The Breakaway Group (A Xerox Company). Check out all of the blog posts in the Breakaway Thinking series.
As 2016 approaches, individuals and organizations are beginning to consider their New Year’s resolutions. In order to make a plan for change, we imagine ways we might reach our goals: “If I eat more vegetables, I’ll lower my cholesterol and have more energy. But if I eat more vegetables and skip the donuts, I will see the same improvements faster!” What if someone could tell us exactly what action or combination of actions would produce which results over a specific timeframe?

In healthcare, predictive analytics is doing just that – providing potential outcomes based on specific factors. The process involves more than gathering statistics that describe group results; it draws on research into patient outcomes to make predictions for individuals. Both technology and statistics are used to sift through these results and turn them into meaningful insights. By combining big data with a patient’s own health information, clinicians can make diagnoses more accurate, improve patient outcomes, and reduce readmission rates.

Predictive analytics is being used to help improve patient safety, predict crises in the ICU, uncover hereditary diseases, and reveal correlations between diseases. Researchers at the University of California Davis are using electronic health record (EHR) data to create an algorithm to warn providers about sepsis. Genomic tests, an example of precision medicine, are now available through at-home DNA testing, which allows individuals to discover hereditary traits through genetic sequencing. Correlations can also be found between illnesses using EHR data; in one study, thirty thousand Type 2 diabetes patients were analyzed to predict their risk of dementia.

BMC Medical Informatics & Decision Making reported on the use of EHRs as a prediction tool for readmission or death among adult patients. The model was built using specific criteria: candidate risk factors had to be available in the EHR system at each hospital, routinely collected and available within 24 hours, and predictive of adverse outcomes.
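
As a rough sketch of the kind of model those criteria describe, the snippet below fits a logistic regression to synthetic data with a few invented EHR-style features. It is illustrative only, not the model from the paper.

```python
# A minimal sketch: a handful of risk factors pulled from the EHR within
# 24 hours of admission, fed to a logistic regression that scores
# readmission-or-death risk. Features and data are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# columns: age, prior admissions in past year, abnormal lab flag (0/1)
X = np.column_stack([
    rng.integers(30, 90, 500),
    rng.integers(0, 5, 500),
    rng.integers(0, 2, 500),
])
# synthetic outcome loosely correlated with the features
y = (0.02 * X[:, 0] + 0.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 1, 500) > 3).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = np.array([[78, 2, 1]])  # hypothetical 78-year-old, 2 prior admissions, abnormal lab
print(f"Predicted readmission/death risk: {model.predict_proba(new_patient)[0, 1]:.2f}")
```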

But predictive analytics can only be as good as the data it uses. Accurate, relevant data is necessary to get valuable information out of the algorithms, and that information can be hard to find, considering that healthcare data is expected to grow from 500 to 25,000 petabytes between 2012 and 2020 (a petabyte is a million billion bytes). In an effort to solve this challenge, more than $1.9 billion of capital has been raised since 2011 to fund companies that can gather, process, and interpret the increasing amount of information.

There are four principles to follow in order to optimize how information is captured, stored, and managed in the EHR system:

  • Ensure that leadership delivers the message to the organization about the importance and future impacts of the EHR system
  • Quickly bring staff up to speed
  • Measure and track the results of the staff’s learning
  • Continue to support and invest in EHR adoption.

The EHR stands as the first point of collection of much of this data. Given the importance of accuracy and consistency, it is critical that EHR education and use is made a priority in healthcare.

Xerox is a sponsor of the Breakaway Thinking series of blog posts. The Breakaway Group is a leader in EHR and Health IT training.