In a comment on an article in The Scientist, someone tells a story with profound implications:
I participated in 1992 NCI SWOG 9005 Phase 3 [clinical trial of] Mifepristone for recurrent meningioma. The drug put my tumor in remission when it regrew post surgery. However, other more despairing patients had already been grossly weakened by multiple brain surgeries and prior standard brain radiation therapy which had failed them before they joined the trial. They were really not as young, healthy and strong as I was when I decided to volunteer for a “state of the art” drug therapy upon my first recurrence. . . . I could not get the names of the anonymous members of the Data and Safety Monitoring committee who closed the trial as “no more effective than placebo”. I had flunked the placebo the first year and my tumor did not grow for the next three years I was allowed to take the real drug. I finally managed to get FDA approval to take the drug again in Feb 2005 and my condition has remained stable ever since according to my MRIS.
Apparently the drug did not work for most participants in the trial, which led to the conclusion "no more effective than placebo," but it did work for him.
The statistical tests used to decide whether a drug works are not sensitive to this sort of outcome: most patients not helped, a few patients helped greatly. (Existing tests, such as the t test, work best when both groups, treatment and placebo, are normally distributed, whereas this outcome makes the treatment group non-normal, which reduces the test's sensitivity.) It is quite possible to construct analyses that would be more sensitive to this pattern than existing tests, but this has not been done. It is also quite possible to run a study that produces, for each patient, a p value for the null hypothesis of no effect (a number that helps you decide whether that particular patient was helped), but this too has not been done.
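The loss of sensitivity can be shown with a small simulation. Using made-up numbers (not data from the trial): compare a drug that helps everyone a little against a drug that helps only 10% of patients, but a lot, with the same average benefit in both cases. The t test detects the concentrated benefit less often, because the responders inflate the treatment group's variance and make it non-normal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_trial(responder_frac, effect, n=50):
    """Simulate one trial: n placebo and n treated patients.

    Outcomes are standard normal; a fraction of treated patients
    (the responders) get an added benefit. Returns the t-test p value.
    """
    placebo = rng.normal(0, 1, n)
    treated = rng.normal(0, 1, n)
    k = int(round(responder_frac * n))
    treated[:k] += effect
    return stats.ttest_ind(treated, placebo).pvalue

# Same average benefit (0.3 sd per patient) in both scenarios:
# everyone helped a little vs. 10% of patients helped a lot.
n_sims = 2000
power_uniform = np.mean([one_trial(1.0, 0.3) < 0.05 for _ in range(n_sims)])
power_mixture = np.mean([one_trial(0.1, 3.0) < 0.05 for _ in range(n_sims)])
print(f"power, everyone helped a little: {power_uniform:.2f}")
print(f"power, 10% helped a lot:        {power_mixture:.2f}")
```

Despite identical mean benefit, the simulated power is lower in the 10%-responder scenario, so a trial of that drug is more likely to end with "no more effective than placebo" even though some patients were helped dramatically.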
Since these new analyses would benefit drug companies, their absence is curious.