The Trouble With Rigor

This is an easy question: When writing down numbers, when is it bad to be precise? Answer: When you exceed the precision to which the numbers were measured. If a number was measured with a standard error of 5 (say), don’t record it as 150.323.

But this, apparently, is a hard question: When planning an experiment, when is it bad to be rigorous? Answer: When the effort involved is better used elsewhere. I recently came across the following description of a weekend conference for obesity researchers (December 2006, funded by the National Institute of Diabetes & Digestive & Kidney Diseases):

Obesity is a serious condition that is associated with and believed to cause much morbidity, reduced quality of life, and decreased longevity. . . . Currently available treatments are only modestly efficacious and rigorously evaluating new (and in some cases existing) treatments for obesity are clearly in order. Conducting such evaluations to the highest standards and so that they are maximally informative requires an understanding of best methods for the conduct of randomized clinical trials in general and how they can be tailored to the specific needs of obesity research in particular. . . . We will offer a two-day meeting in which leading obesity researchers and methodologists convene to discuss best practices for randomized clinical trials in obesity.

“Rigorously evaluating new treatments”? How about evaluating them at all? Evaluation of new treatments (such as new diets) is already so difficult that it almost never occurs; here is a conference about how to make such evaluations more difficult.

This mistake happens in other areas, too, of course. Two research psychiatrists have complained that misguided requirements for rigor have badly slowed the search for new treatments for bipolar disorder.