In a letter linked to by Nature, Daniel Kahneman told social psychologists that they should worry about the repeatability of what are called “social priming effects”. For example, after you see words associated with old age you walk more slowly. John Bargh of Yale University is the most prominent researcher in the study of these effects. Many people first heard about them in Malcolm Gladwell’s “Blink”.
Questions have been raised about the robustness of priming results. The storm of doubts is fed by several sources, including the recent exposure of fraudulent researchers [who studied priming], general concerns with replicability that affect many disciplines, multiple reported failures to replicate salient results in the priming literature, and the growing belief in the existence of a pervasive file drawer problem [= studies with inconvenient results are not published] that undermines two methodological pillars of your field: the preference for conceptual over literal replication and the use of meta-analysis.
He went on to propose a complicated scheme in which Lab B would see if a result from Lab A can be repeated, then Lab C would see if the result from Lab B can be repeated, and so on. A non-starter: too complex and too costly. What Kahneman proposes requires substantial graduate student labor and will not help the grad students involved get a job — in fact, “wasting” their time (as they will see it) makes it harder for them to get a job. I don’t think anyone believes grad students should pay for the sins of established researchers.
I completely agree there is a problem. It isn’t just social priming research. You’ve heard the saying: “1. Fast. 2. Cheap. 3. Good. Choose 2.” When it comes to psychology research, it’s “1. True. 2. Career. 3. Simple. Choose 2.” Overwhelmingly researchers choose Career and Simple. There isn’t anything wrong with choosing to have a career (= publish papers), so I put a lot of blame for the current state of affairs on journal policies, which put enormous pressure on researchers to choose Simple. Hardly any journals in psychology publish (a) negative results, (b) exact replications, or (c) complex sets of results (e.g., where Study 1 finds X and apparently identical Study 2 does not find X). The percentage of psychology papers with even one of these characteristics is about 0.0%. You could look at several thousand and not find a single instance. My proposed solution to the problem pointed out by Kahneman is new journal policies: 1. Publish negative results. 2. Publish (and encourage) exact replications. 3. Publish (and encourage) complexity.
Such papers exist. I previously blogged about a paper that emphasized the complexity of findings in “choice overload” research — the study of the finding that too many choices can have bad effects. Basically it concluded that the original result was wrong (“mean effect size of virtually zero”), except perhaps in special circumstances. Unless you read this blog — and have a good memory — you are unlikely to have heard of the revisionist paper. Yet I suspect almost everyone reading this has heard of the original result. A friend of mine, who has a Ph.D. in psychology from Stanford, told me he considered Sheena Iyengar, the researcher most associated with the original result, the greatest psychologist of his generation. Iyengar wrote a book (“The Art of Choosing”) about the result. I found nothing in it about the complexities and lack of repeatability.
Why is personal science important? Because personal scientists — people doing science to help themselves, e.g., sleep better — ignore 2. Career and 3. Simple.