By S. L. Baker, features writer
Consumers are constantly told how complicated it is to get a new drug on the market. After all, researchers have to jump through all sorts of hoops to assure safety before new therapies are approved for the public, right?
It turns out they may be missing some of those hoops or not jumping through some of the most important ones.
In fact, huge red flags are being raised about how drugs are tested and approved in two new studies, including one just published in the May 4th issue of the Journal of the American Medical Association (JAMA).
A case in point: it turns out that only about half of the new prescription medications pushed onto the market over the last decade had the proper data assembled for the U.S. Food and Drug Administration - yet the FDA approved them anyhow.
The information in question is known specifically as comparative effectiveness data. And it is - or should be - a very big deal when it comes to deciding whether a drug should be approved and sold to the public.
According to the Institute of Medicine, comparative effectiveness data is defined as the "generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care."
In other words, how does a new drug stack up against other treatments -- is it more beneficial, safer, or does it have more potential dangers?
Comparative effectiveness information on drugs is especially important when doctors are making decisions about whether to prescribe a med, and to whom, soon after a drug is approved. That's because when Big Pharma medications first hit the market, physicians must rely on what drug companies and the FDA tell them about a medication. It takes a while for real-life reports to come in and for reactions and side effects (including deaths related to a drug) to become clear.
Also, there are usually not data from large head-to-head trials comparing multiple treatments available when a medication first hits the marketplace. "Comparative effectiveness is taking on an increasingly important role in U.S. health care, yet little is known about the availability of comparative efficacy data for drugs at the time of their approval in the United States," according to background information in the new JAMA study.
It's not as if there's no money to come up with this information, either. In 2009, Congress allocated $1.1 billion of taxpayers' money to comparative effectiveness research.
For the JAMA study, researcher Nikolas H. Goldberg and colleagues from Brigham and Women's Hospital and Harvard Medical School in Boston investigated the proportion of recently approved drugs that had comparative efficacy data available at the time the FDA authorized them for sale in the U.S. They also examined the availability of this information over time and by therapeutic indication, using publicly available approval packages from the FDA's online database of new drug products (dubbed new molecular entities, or NMEs, for short) approved between 2000 and 2010.
The researchers found that of the 197 eligible NMEs approved between 2000 and 2010, only about half had comparative efficacy data available at the time they were approved for marketing.
Meanwhile, another recent study throws needed light on the limited data behind the safety and effectiveness of some Big Pharma drugs.
Research led jointly by Alexander Tsai of Harvard University and Nicholas Rosenlicht of the University of California San Francisco, just published in PLoS Medicine, zeroed in on aripiprazole, a medication prescribed to treat bipolar disorder.