People who believe in “evidence-based medicine” say that double-blind clinical trials are the best form of evidence. Generally this is said by people who know very little about such trials. One reason they are not always the best form of evidence is that data may be missing. Nowadays more data is missing than in the past:
By [missing data] he [Thomas Marciniak] means participants who withdrew their consent to continue participating in the trial or went “missing” from the dataset and were not followed up to see what happened to them. Marciniak says that this has been getting worse in his 13 years as an FDA drug reviewer and is something that he has repeatedly clashed with his bosses about.
“They [his bosses] appear to believe that they can ignore missing and bad data, not mention them in the labels, and interpret the results just as if there was no missing or bad data,” he says, adding: “I have repeatedly asked them how much missing or bad data would lead them to distrust the results and they have consistently refused to answer that question.”
In one FDA presentation, he charted an increase in missing data in trials set up to measure cardiovascular outcomes.
“I actually plotted out what the missing data rates were in the various trials from 2001 on,” he adds. “It’s virtually an exponential curve.”
Another sort of missing data involves what is measured. In one study of whether a certain drug (losartan) increased cancer, lung cancer wasn’t counted as cancer. In another case, involving Avandia, a diabetes drug, “serious heart problems . . . were not counted in the study’s tally of adverse events.”
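Both kinds of missingness bias results the same way: if data go missing more often when something bad happens, an analysis of the remaining participants understates harm. A toy simulation sketches the point (all numbers hypothetical, not from any actual trial):

```python
import random

random.seed(0)

N = 10_000                # participants on the drug (hypothetical)
TRUE_EVENT_RATE = 0.10    # assumed true adverse-event rate
# Assumption: participants who suffer an event are more likely to
# drop out or go "missing" from the dataset than those who don't.
DROPOUT_IF_EVENT = 0.50
DROPOUT_IF_OK = 0.05

observed_events = 0
observed_total = 0
for _ in range(N):
    event = random.random() < TRUE_EVENT_RATE
    dropout_p = DROPOUT_IF_EVENT if event else DROPOUT_IF_OK
    if random.random() < dropout_p:
        continue  # missing: never counted, as if the participant never enrolled
    observed_total += 1
    observed_events += event

# The complete-case rate comes out well below the true 10%,
# because the events disproportionately vanished with the dropouts.
print(f"true rate:     {TRUE_EVENT_RATE:.3f}")
print(f"observed rate: {observed_events / observed_total:.3f}")
```

The exact numbers are made up; the mechanism is the point. Ignoring the missing participants and reading the observed rate at face value, as Marciniak says his bosses do, roughly halves the apparent harm under these assumptions.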
Here is a presentation by Marciniak. At one point, he asks the audience, Why should you believe me rather than the drug company (GSK)? His answer: “Neither my job nor (for me) $100,000,000’s are riding on the results.” It’s horrible, but true: Our health care system is almost entirely run by people who make more money (or make the same amount of money for less work) if they exaggerate its value — if they ignore missing data and bad side effects, for example. Why the rest of us put up with this in the face of overwhelming evidence of exaggeration (for example, tonsillectomies) is an interesting question.
Thanks to Alex Chernavsky.