2016 Weeknotes

Lost in Translation: Are we utilizing the best models of human disease?

We are all aware of the time and cost of getting a drug to market: at least 12 years and roughly $1.8 billion (2010 figures).1 Far fewer people are aware of the failure rates along the way. The failure of a molecular entity in the drug development pipeline is categorized by when the failure occurs: in the preclinical or the clinical phase.2 Only about 35% of molecular entities make it from the preclinical phase into human (clinical) trials and, depending on the disease, roughly 90% of those that enter human trials will fail.1,2 In other words, only about 3-4% of the molecules that begin preclinical development ever reach the market. Why do so many drugs fail? Clearly this is a complicated question, but increasingly researchers and policymakers are asking whether the preclinical phase of drug development is contributing to the high attrition (failure) rates seen in the clinical phases.

Preclinical research involves both in vitro testing (analyses in test tubes and petri dishes) and in vivo testing (analyses within living organisms). To get the best drugs to market efficiently, we try to build on existing knowledge during the preclinical phase, hoping to identify the most promising molecules to advance into the clinical phase. We can only do that if our science is top notch: reproducible and predictive.

An experiment is said to be reproducible when more than one lab or researcher, using the same methods and materials, can replicate it and achieve the same results.

Transparent, reproducible basic science is foundational in the research community. It is important that this science is right because it largely dictates which molecules, pathways, or modes of action are pursued in therapeutic drug development. Notably, transparency and reproducibility do not, by themselves, make a result a scientific “truth”, nor does their absence make it a “fallacy”. That said, these two factors play a significant role on the road to finding the actual scientific “truth”. Another important factor is the use of best research practices, including appropriate models and sound experimental design. Preclinical research that is not reproducible, is not performed and reported transparently, or does not follow best research practices limits, intentionally or unintentionally, the value of the resulting data and outcomes.

In early 2014, a commentary from the director and principal deputy director of the National Institutes of Health (NIH)3 acknowledged a growing concern among scientists that the system itself was partly responsible for the irreproducibility of, by one estimate, over 50% of preclinical research in the literature, at a cost of roughly $28 billion per year.4 Collins and Tabak identified several causes for this lack of reproducibility: the system of incentives (“publish or perish”), poor training in experimental design, and poor reporting of experimental methods. Further, they identified animal models as an area of research particularly susceptible to reproducibility troubles. The NIH has since engaged with the research and publishing communities to create strategies and initiatives to improve the situation.

Why rely so heavily on animal models, which, given our advances in technology, may seem archaic and crude?

      1. Historically, animal models have been used in toxicity testing since the late 1930s, when Congress passed the Food, Drug, and Cosmetic Act (FDCA) of 1938 in response to the sulfanilamide tragedy. Since then, evidence of safety (typically from animal toxicity studies) has been required before a drug is allowed on the market. While costly and time-consuming, animal testing is dramatically less costly and complex than testing on humans, and for obvious ethical reasons it comes first. Thus, pharmaceutical companies prefer that if and when drugs do fail, they fail fast and hard during preclinical testing, so that less time, money, and effort are spent on something that will not be useful or profitable.
      2. Logistically and financially speaking, lab buildings and infrastructure have largely been built to support animal housing and research. The pharmaceutical industry has been quicker than academia to move away from animal models in preclinical research, partly because the up-front expense of procuring new equipment is easier for industry to absorb. Furthermore, even if alternatives to animal testing are more efficient (cheaper and faster), it is still unclear how to integrate the two types of data.
      3. Culturally, animal models are well known. Technological advances in in vitro systems are relatively recent compared to animal research. For years, animal research has been considered the gold standard because, frankly, that is all we knew. So, understandably, some scientists are still skeptical of alternative methods.

However, given failure rates during the clinical phases, it is reasonable to question the predictivity of the animal models that companies and universities have grown accustomed to using in the drug discovery phase, and further, to wonder whether these models are pointing researchers in the wrong direction.

Predictivity in this context refers to the ability of an animal model to accurately identify a target, or to respond to a molecule, in a way that predicts successful clinical outcomes in humans.

Using predictive models in preclinical drug development is critical to pursuing the molecules and targets that will hold up in human trials. The literature shows that when animal models are used to decide whether a molecule should enter the drug pipeline, those decisions are too often wrong. As one article points out, of over 200 different interventions that were effective in a mouse model of Alzheimer’s disease, zero were shown to be effective in human trials.1 This is just one of several important examples from the published literature. Given this evidence, it seems these animal models are, if not predictive of the wrong molecules, certainly not predictive of the right ones.

In June 2013, the NIH’s Scientific Management Review Board (SMRB) held a meeting on assessing the value of biomedical research, in response to a 2012 request from the NIH director. Speaking on the “Value of Federally-Funded Biomedical Research in the Development of Medical Interventions and Treatments,” and from experience, Elias Zerhouni (NIH director from 2002 to 2008) concluded with three lessons. One was that science has relied too heavily on animal data: “[in over-utilizing knockout mice] We have moved away from studying human disease in humans… We all drank the Kool-Aid on that one, me included. The problem is that it hasn’t worked, and it’s time we stopped dancing around the problem…We need to refocus and adapt new methodologies for use in humans to understand disease biology in humans.”5

This is not to say that there have been no reproducible animal models with positive impacts on medicine and public health, and I am not suggesting we throw the baby out with the bathwater. Instead, I am calling into question the extent to which animal models are funded and relied upon in drug discovery. Given such high attrition rates, perhaps something is being lost in translation? While there is a place for animal models in drug discovery, our expectations of these models must be managed. Some have been useful in identifying pathological mechanisms, but fewer have proven predictive for drug development, and that distinction must be recognized by young and veteran scientists alike. Since money and time have not been convincing, what will it take, in an already pinched funding situation, to focus resources more efficiently and effectively?

1 Garner, J. (2014). The significance of meaning: why do over 90% of behavioral neuroscience results fail to translate to humans, and what can we do to fix it? ILAR Journal, 55(3), 438-456.

2 Paul, S. M., et al. (2010). How to improve R&D productivity: the pharmaceutical industry’s grand challenge. Nature Reviews Drug Discovery, 9(3), 203-214.

3 Collins, F. S., & Tabak, L. A. (2014). NIH plans to enhance reproducibility. Nature, 505(7485), 612–613.

4 Freedman, L. P., Cockburn, I. M., & Simcoe, T. S. (2015). The Economics of Reproducibility in Preclinical Research. PLoS Biology, 13(6), e1002165. http://doi.org/10.1371/journal.pbio.1002165

5 https://nihrecord.nih.gov/newsletters/2013/06_21_2013/story1.htm

6 https://www.nih.gov/research-training/rigor-reproducibility

By:
Michele Palopoli