Personalized Medicine: Innovation to Clinical Execution
My story of innovation begins in 1981, when, as a young surgical trainee in Boston, I was shocked to learn that my dreams of joining the pioneers in the exciting new field of heart transplantation had been abruptly shattered—by a policy decision. The trustees of the hospital in which I was training decided to put an indefinite moratorium on performing this emerging procedure because they did not feel it was cost-effective. In response, I travelled across the country to Stanford to be mentored by the father of heart transplantation, Dr. Norman Shumway.
Lesson learned: policy can destroy the innovation ecosystem…or, if we are smart, policy can accelerate it. Replacing the human heart is still state-of-the-art, but just this year a revolutionary paper from the Gladstone Institute in Nature shows how far we have progressed, accelerated by public policy to invest in science. For the first time, non-beating adult cells were reprogrammed into beating cardiomyocytes in a mouse model. Cardiac fibroblasts, essentially inert fibrous tissue, were genetically reprogrammed into newly born, beating, blood-pumping cardiomyocytes through retroviral delivery of reprogramming factors.
That innovation in regenerative medicine means that someday we may not have to cut out the entire human heart and replace it with someone else’s to give a person another chance to live. But let me turn to an even more exciting accelerator of innovation: data. Our nation spends millions on clinical trials demonstrating that a disease such as heart failure can be treated effectively by a new drug. Yet even with all of that convincing evidence, some patients treated with a new drug actually get worse. Why does what appears to be the right treatment sometimes yield worse results?
For decades, this question has eluded health care professionals. But as we learn, we begin to ask different questions. We’ve evolved from asking, “How can we improve our research?” to asking a more fundamental and probing question: “What if what was touted as the right therapy really wasn’t right for this particular patient?” Could this simple idea lie behind the as many as 98,000 lives lost each year to medical errors, as reported by the Institute of Medicine? Or the estimated 30 percent of our health care spending that is wasted?
Human genetic variation is one key to these questions. Many genetic mutations once considered inconsequential are now being identified as critical determinants of how the body functions, how it responds to disease, and how it can best be aided by therapy. Together, these determinants are giving rise to a new approach to medical care, called “personalized medicine.”
With personalized medicine, researchers use sophisticated data analysis to separate a large population of people who used to be treated uniformly into smaller, more discrete “personalized clusters” of people who share important physiological or biochemical variants and who respond in different ways to a particular treatment.
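As a minimal sketch of what that stratification means in practice, consider the toy example below. The patients, the “metabolizer” variant, and the response scores are all invented for illustration; real analyses involve far richer data and formal statistics.

```python
# Illustrative only: hypothetical patients carrying a single-gene
# "metabolizer" variant, with a measured treatment-response score (0-100).
from collections import defaultdict
from statistics import mean

patients = [
    {"id": 1, "variant": "fast",   "response": 82},
    {"id": 2, "variant": "fast",   "response": 75},
    {"id": 3, "variant": "slow",   "response": 31},
    {"id": 4, "variant": "slow",   "response": 40},
    {"id": 5, "variant": "normal", "response": 64},
    {"id": 6, "variant": "normal", "response": 58},
]

# Separate the uniformly treated population into "personalized clusters"
# keyed by the variant each patient carries.
clusters = defaultdict(list)
for p in patients:
    clusters[p["variant"]].append(p["response"])

# Compare average response per cluster: the signal that pooling all
# patients together would hide.
for variant, responses in sorted(clusters.items()):
    print(f"{variant}: mean response {mean(responses):.1f} (n={len(responses)})")
```

Pooled together, these six patients show a middling average response; split into clusters, it becomes clear that one subgroup responds well and another barely responds at all, which is exactly the distinction personalized medicine exploits.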
Every day, we are gaining new knowledge about the role of genetic factors in common adult diseases including gallstones, osteoporosis, osteoarthritis, skin cancer, prostate cancer, migraine headaches, obesity, intellectual disability, Alzheimer’s, arthritis, diabetes, multiple sclerosis, schizophrenia, and hearing loss. To take one example, Alzheimer’s disease alone affects 4 million Americans at a cost of $152 billion per year. The math quickly becomes significant when we consider the impact of improving the lives of just 10 percent of these patients: better understanding how they should be treated, how their version of the disease differs from other versions, and how to avoid medication side effects tied to the personalized characteristics of their disease.
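To make that back-of-the-envelope math concrete, here is the arithmetic using the figures cited above (the 10 percent improvement is purely a hypothetical for illustration):

```python
# Back-of-the-envelope arithmetic using the Alzheimer's figures cited above.
patients = 4_000_000            # Americans with Alzheimer's disease
annual_cost = 152_000_000_000   # total annual cost, in dollars

improved_share = 0.10           # hypothetical: better-targeted care for 10%
patients_helped = int(patients * improved_share)
cost_at_stake = annual_cost * improved_share

print(f"Patients helped: {patients_helped:,}")                    # 400,000 people
print(f"Spending at stake: ${cost_at_stake / 1e9:.1f}B per year") # $15.2B
```

Even a modest 10 percent gain touches hundreds of thousands of lives and billions of dollars a year, which is why the stakes of getting personalization right are so high.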
Here’s how we do it. Vanderbilt University and a host of researchers in the public and private sectors are actively combining publicly available data sources with de-identified patient medical records and genomic data to create massive databases. These databases are being used today to discover the genetic basis of the predilection some people have for specific diseases, and even the genetics of medication metabolism that predict which patients will respond differently to conventional therapies.
Let me share an example from my own specialty of organ transplantation. Researchers at Vanderbilt were aware that a large percentage of patients taking tacrolimus to prevent organ rejection developed dangerous adverse effects. Looking at data from the electronic health record, they discovered wide variation in the blood concentrations of this drug among transplant patients. The massive database of clinical and genetic data was used to identify a set of kidney transplant patients on the drug. Using genome-wide association studies, they identified a specific genetic variant associated with variation in tacrolimus blood levels. Vanderbilt clinicians now test patients for this variant before starting treatment and then carefully monitor tacrolimus blood levels. For the individual patient, potentially severe outcomes, including serious adverse effects, organ rejection, and even death, have become less likely. The impact on costs and quality of life is self-evident. Now that is power in data!
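The Vanderbilt workflow can be caricatured in a few lines: for each candidate variant, compare drug blood levels between carriers and non-carriers and flag large differences. Everything below is invented for illustration; the variant names, the patients, and the simple mean comparison are stand-ins for a real genome-wide association study, which scans millions of variants with formal statistical tests.

```python
# Toy association scan: invented data, not the actual Vanderbilt study.
from statistics import mean

# Each patient: a tacrolimus trough blood level (ng/mL) plus the number of
# copies (0, 1, or 2) of the alternate allele at two fictitious variants.
patients = [
    {"level": 4.1, "rs_fictitiousA": 0, "rs_fictitiousB": 1},
    {"level": 4.5, "rs_fictitiousA": 0, "rs_fictitiousB": 0},
    {"level": 9.8, "rs_fictitiousA": 2, "rs_fictitiousB": 1},
    {"level": 8.9, "rs_fictitiousA": 2, "rs_fictitiousB": 0},
    {"level": 6.7, "rs_fictitiousA": 1, "rs_fictitiousB": 2},
]

def mean_level(variant: str, carriers: bool) -> float:
    """Mean blood level among carriers (>=1 copy) or non-carriers."""
    vals = [p["level"] for p in patients if (p[variant] > 0) == carriers]
    return mean(vals)

# Flag variants where carriers' mean level differs sharply from
# non-carriers' mean level.
for variant in ("rs_fictitiousA", "rs_fictitiousB"):
    diff = mean_level(variant, True) - mean_level(variant, False)
    print(f"{variant}: carrier vs non-carrier difference {diff:+.1f} ng/mL")
```

In this toy data, one variant separates carriers and non-carriers by several ng/mL while the other shows almost no difference; the first is the kind of signal that, once validated, lets clinicians test for a variant before prescribing.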
As exciting as this health care evolution is, it only scratches the surface of what we can do with health care data at this scale. Imagine the speed with which we could learn about diseases if there were more national disease registries—e.g., registries to accelerate learning from those early heart transplants. Imagine how efficient a case manager might be if she had ready access to data for an individual from every relevant patient encounter at all potential sites of care.
Innovation coupled with clinical execution and implementation at our research universities is leading to better value for the ecosystem of health care delivery and optimum care for each individual patient.