Assorted Links

Thanks to Brent Pottenger, Phil Alexander, dearime, and Casey Manion.

5 Replies to “Assorted Links”

  1. I love Earth Clinic.

    I do have a question about Kombucha. I’ve only seen it at Whole Foods, and I’ve tried it a couple of times. I was wondering if you think store-bought kombucha is good enough. When I googled it, I found people making their own; do you make your own or buy it somewhere? Do you think the kind from Whole Foods would be just as good as homemade? Also, how much would you suggest drinking per day? The bottles from Whole Foods are pretty large, at least 20 oz I think.

    1. Yes, Whole Foods kombucha is good enough. I prefer to make my own, partly because it tastes better and partly because it’s much cheaper (the ingredients are practically free) and more convenient. Try half a bottle per day. That might be enough to notice benefits.

  2. Seth, I’d really like to hear your thoughts about the Stapel case. I’ve been talking to a lot of people, both at work and on Google+ and Facebook. One thing that keeps coming up is the question of how to make science more robust (that, at least, is my take on the discussions about open data and publication of direct replications): perhaps not to prevent fraud, because there will always be the temptation for some to cheat, but to limit the damage.

    I always appreciate what you have to say (whether or not I agree), and I know you have been involved with detecting fraud.

    1. Given that Stapel supposedly faked so many papers, I would like to know if the numbers in his papers pass the distribution-of-first-digits test (Benford’s law) that has been used to detect accounting fraud. If he made the numbers up, they should fail that test, whereas the same sort of digits from a random sample of similar papers should pass it. That would be a way to learn to detect fraud in the future. My experience with fraud detection suggests that fraudulent data is pretty easy to notice.

      A good exercise would be to ask for this or that paper: How can we test whether the data in this paper are real? Perhaps now that journals are more likely to require raw data, that first-digit test can be more sensitive since it can be done on the raw data.

      It would be interesting to give grad students (or someone else) five similar papers with the names removed, one of them by Stapel, and ask them which is most likely to be faked.

      I also wonder what other researchers in his area thought of his research before the fraud was detected. Now it is too late to find out, I suppose.
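
      The first-digit test mentioned above can be sketched in a few lines. This is a minimal illustration, not the exact procedure used in forensic accounting: it compares observed leading-digit frequencies against the Benford’s-law expectation P(d) = log10(1 + 1/d) with a chi-squared statistic, and the function names are my own.

      ```python
      import math
      from collections import Counter

      def first_digit(x):
          """Return the leading nonzero digit of a number."""
          s = str(abs(x)).lstrip("0.")
          return int(s[0])

      def benford_chi2(numbers):
          """Chi-squared statistic comparing observed first-digit
          counts to the Benford's-law expectation P(d) = log10(1 + 1/d).
          Larger values mean a worse fit to Benford's law."""
          counts = Counter(first_digit(x) for x in numbers)
          n = len(numbers)
          chi2 = 0.0
          for d in range(1, 10):
              expected = n * math.log10(1 + 1 / d)
              chi2 += (counts.get(d, 0) - expected) ** 2 / expected
          return chi2
      ```

      Naturally occurring multiplicative data (say, powers of 2) scores low on this statistic, while digits distributed uniformly score high; a formal test would compare the statistic to a chi-squared distribution with 8 degrees of freedom. Whether faked psychology data would actually fail such a test is an open empirical question, which is the point of the exercise above.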

  3. Thanks, Seth.

    And, yes, it is too late, but, for what it is worth, I can give you my n=1 experience from having attempted to adapt one of Stapel’s paradigms, plus a kind of filtered impression of current opinion. I’m kind of peripheral when it comes to the field. I’m interested in emotion, and I was interested in modeling emotion (from a kind of dynamical-systems perspective), so I ended up being accepted into a joint PhD at IU [Indiana University?], where I did a lot of cog sci/math psych classes as well as social cognition. They were very accepting of my somewhat unusual position. I’m still very peripheral, for various reasons.

    Stapel visited during my first couple of years (so this is the late 90s). He gave a talk; the details surrounding it have by now mostly faded to short propositions in my memory. A couple of things stand out (they will be relevant). He had done work showing that the Ebbinghaus illusion is influenced by social categories: instead of using the regular circles, they had used line drawings, which in some instances depicted people of different social status (they also used trucks). Rich Shiffrin was incredulous. Not, as I recall, in a “fake data” way, but more in a “must be a fluke” way.

    But the result of the visit was that my adviser, Paula Niedenthal, thought we could adapt this paradigm for investigating our own interest: how emotional state influences perception of emotional stimuli. Her former grad student put the stimuli together at his lab, and I did the actual running of subjects. We did two versions of it, and nothing panned out, which was unremarkable; I had just run five studies based on a promising pilot, and nothing had panned out there either. I think all of us have experienced that (I keep thinking about your comment on your experience with rats and data).

    Also, we are all very familiar with results that make us incredulous and are hard to replicate. At the time, the priming work from Bargh was something people wondered about. I’ve done some work on that (we tried to tease apart whether the effects we see from felt emotion were from the emotion itself or from the priming of emotion concepts that comes along with the feeling). That was ten versions of studies that eventually got published (and the conclusion is that it is the emotion, not the priming).

    I read Stapel’s paper about the Ebbinghaus work at the time, and none of it struck me as fishy. I still don’t know whether this was one of the papers where he faked data; I’m now simply assuming it was. But I must say I was quite shocked when I heard that he had been fabricating data for a long, long, long time, possibly because I had worked with that paradigm myself.

    I’m reading a lot of discussions about this (and I’m working on starting a discussion at my university, where I want to focus on how to make the science more robust, more to lessen the impact of the inevitable cheaters than to catch them out). Some of those in the middle of the field are (understandably) very defensive. Some discuss ideas about how to prevent this from happening again. Some debate whether this is a failure of the process or not (I’m reminded of the book Plastic Fantastic). One person claims that Stapel’s work wasn’t that big a deal: more flash than theoretical advance. It is hard to make much of that claim, considering that there are myriad reasons why someone would think so now.

    The anecdote I told is my only encounter with his work, so take it for what it is worth.
