Wait-and-See Approach Best for Newly Approved Meds
I am a new hospitalist, out of residency for two years, and feel very uncertain about using new or recently approved medications on my patients. Do you have any suggestions about how or when new medications should be used in practice?
–David Ray, MD
Dr. Hospitalist responds:
I certainly can understand your trepidation about using newly approved medications. Although our system of evaluating and approving medications for clinical use is considered the most rigorous in the world, 16 so-called novel medications were pulled from the market between 2000 and 2010, which works out to roughly 6% of the total approved during that period. All in all, not a bad ratio, but the number of patients harmed by a single high-profile dud can be enormous.
I think there are several major reasons why we see adverse issues with medications that have survived the rigors of the initial FDA approval process. First, many human drug trials are conducted in developing countries, where study populations tend to be more genetically homogeneous and liability for injuries is far lower than in the U.S. Many researchers have acknowledged the significant role of pharmacogenomics and the fact that each patient's physiology and pathology is unique. Couple these factors with the tendency to test drugs one at a time in relatively young, healthy cohorts (conditions that rarely match how medications are actually used in the U.S.), and one can quickly see how complex the equation becomes.
Another reason is the weight given to clinical trials. All clinicians should be familiar with the phases (0 to 4) of human drug trials and the process by which the FDA analyzes them. The FDA usually requires two “adequate and well-controlled” trials confirming that a drug is safe and effective before approving it for sale to the public. Once a drug completes Phase 3, an extensive statistical analysis is conducted to ensure that the drug’s demonstrated benefit is real and not the result of chance. But as it turns out, because the measured effects in most clinical trials are so small, chance is very hard to rule out.
This was astutely demonstrated in a 2005 article published in the Journal of the American Medical Association (2005;294(2):218-228). John P. Ioannidis, MD, examined the results of 49 high-profile clinical-research studies, 45 of which had found the proposed intervention effective. Of those 45, seven (16%) were contradicted by subsequent studies, and seven others had reported effects stronger than those found in subsequent studies. Of the 26 randomized controlled trials that were followed up by larger trials, the initial finding was entirely contradicted in three cases (12%); in another six cases (23%), the benefit turned out to be less than half of what had initially been reported.
In most instances, it wasn’t the therapy that changed but the sample size. In fact, many clinicians and biostatisticians believe that many more so-called “evidence-based” practices and medications would be legitimately challenged if subjected to rigorous follow-up studies.
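To see why sample size alone can make an apparently strong benefit shrink or even vanish, consider a minimal simulation sketch. Everything in it is hypothetical and illustrative only (the drug, the 30% vs. 36% response rates, and the trial sizes are assumptions, not figures from the studies discussed above):

```python
import random

# Hypothetical drug: true response rate rises from 30% (control) to 36% (treated),
# a small but real absolute benefit of 6 percentage points.
TRUE_CONTROL_RATE = 0.30
TRUE_TREATMENT_RATE = 0.36

def observed_benefit(n_per_arm: int) -> float:
    """Simulate one two-arm trial and return the observed absolute risk difference."""
    control_responders = sum(random.random() < TRUE_CONTROL_RATE for _ in range(n_per_arm))
    treated_responders = sum(random.random() < TRUE_TREATMENT_RATE for _ in range(n_per_arm))
    return (treated_responders - control_responders) / n_per_arm

random.seed(42)
for n in (50, 500, 5000):
    # Run 20 trials at each size and look at the spread of observed benefits.
    estimates = [observed_benefit(n) for _ in range(20)]
    print(f"n={n:>4} per arm: observed benefit ranges from "
          f"{min(estimates):+.1%} to {max(estimates):+.1%}")
```

With only 50 patients per arm, individual trials can show anything from no benefit (or harm) to double or triple the true effect; with 5,000 per arm, the estimates cluster tightly around the real 6-point benefit. A small early trial that happens to land on the high end of that spread is exactly the kind of initially impressive result that a larger follow-up study later cuts down to size.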
In my own experience as a hospitalist, I can think of two areas where the general medical community accepted initial studies only to refute them later: perioperative use of beta-blockers and inpatient glycemic control.
In light of the many high-profile medications that have been pulled from the market, I don’t like being in the first group to jump on the bandwagon. My general rule is to wait three to five years after a drug has been released before prescribing it for patients. As always, there are exceptions. In instances where a new medication has profound or life-altering potential (e.g., the newer anticoagulants or gene-targeted therapies for certain cancers) and the benefits clearly justify the risks, I’m all in!