Nassim Taleb on Incompetent Experts

Nassim Taleb said this:

I was in Korea last week with a collection of suit-wearing hotshots. On a panel sat Takatoshi Kato, IMF Deputy Managing Director. Before the discussion he gave us a powerpoint lecture showing the IMF projections for 2010, 2011, …, 2014. I could not control myself and got into a state of rage. I told the audience that the next time someone from the IMF shows you projections for some dates in the future, to show us what they PROJECTED for 2008 and 2009 in 2004, 2005, …, and 2007. They would then verify that Mr. Takatoshi and his colleagues provide a prime illustration to the “expert problem”: they serve as experts while offering the scientific reliability of astrologers. Anyone relying on them is a turkey.

This allowed me to show the urgency of my idea of robustness. We cannot get rid of charlatans. My point is that we need to build a society robust to charlatanism and expert-error [emphasis added], one in which Mr. Takatoshi and his staff can be as incompetent as they want without endangering the general public. We need less reliance on these people and the Obama administration has been making us more dependent on the “expert problem”.

I completely agree. This highlights two hidden strengths of self-experimentation.

First, the more you can rely on data about yourself, the less you need to rely on data from other people, which until recently (before internet forums) almost always came to you through experts (usually doctors), who filtered it to suit their purposes. When I was a grad student, I had acne. My dermatologist prescribed tetracycline, a powerful and dangerous antibiotic. Studying myself, I quickly figured out that tetracycline didn’t work. My dermatologist had failed to figure that out — that it didn’t work in at least some cases. In his practice, he must have encountered examples of this, but he ignored them. It served his purposes to think it worked. That’s one sort of filtering: ignoring inconvenient data. Self-experimentation made me less reliant on my dermatologist.

Second, self-experimentation allows researchers such as me to do innovative research in some area without getting permission from experts in that area. Self-experimentation is very cheap; no grant is required. A self-experimenter can be as heretical as he cares to be. My research on weight control has been breezily dismissed by nutrition professors, for example. Obviously they wouldn’t fund it. The Animal Care and Use Committee at UC Berkeley turned down my application to do rat research about it — my ideas couldn’t possibly be true, they said. My research on mood isn’t just utterly different from what clinical psychologists and psychiatrists say to each other in meetings and papers; it also, at first glance, sounds absurd. Self-experimentation allowed me to do it. That’s another sort of filtering: control of what research gets done.

I don’t think conventional research in nutrition, clinical psychology, or psychiatry is worthless — far from it. I think it is very valuable. (For one thing, it helped me see that my self-experimental conclusions, as unorthodox as they were, had plenty of empirical support.) What is hard for outsiders to grasp is how what they see — what they read in magazines and newspapers and even books — is heavily filtered to conform to a party line. Plenty of research supports the Shangri-La Diet, for example (such as research about the set point theory of weight control), but you are unlikely to read about it in, say, The New Yorker because it doesn’t fit conventional ideas. Plenty of conventional research supports my ideas about mood, but you are unlikely to read about that research because it doesn’t support the party line of “dopamine imbalance” causing depression or whatever. This is what Leonard Syme taught his public-health students — that the party line was a lot more questionable than an outsider would ever guess. They hadn’t heard that before. (And it was unpleasant: Uncertainty is unpleasant.) This is a third sort of filtering: What data reaches outsiders.

I never had a teacher like Leonard Syme — I’ve never even heard of someone else doing what he did — but self-experimentation taught me the same thing. I came to see the fragility of mainstream claims about all sorts of things related to health. As Taleb says, we are used to thinking the charlatans are on the fringes. But they’re not — there’s plenty of them at the centers of power.

Thanks to Dave Lull. Frontline’s recent show “The Warning” makes the same point as Taleb, that there is great incompetence at the highest levels of power.

6 Replies to “Nassim Taleb on Incompetent Experts”

  1. Tim Conrad, who teaches a course on algorithmic bioinformatics, repeats at every lecture the mantra:
    Don’t trust your theories or other people’s data. You don’t know whether they are true or false.
    I like him 🙂

  2. In my experience, I have been able to trust other people’s data most of the time. The Shangri-La Diet derived from trusting my own theory. It’s the party line — the consensus — you can’t trust. That’s quite different than what Conrad is saying. Did Conrad give any examples to support the idea that you shouldn’t trust other people’s data?

  3. I have noticed that some experts totally ignore personal experience. They ignore facts that are sitting in front of them.
    The world needs more people who can think for themselves, but this is discouraged. It’s wise, for example, not to be a genius as geniuses are often persecuted. Experts often become oppressive because they want people to take their advice. My opinion is that the experts are not usually the best and brightest.
    I accept the counsel of experts, but I believe that no one will take better care of me than I will.
    One thing I have always wondered is why some people are listened to while even wiser people are ignored.

  4. How appropriate with Thanksgiving right around the corner. Don’t be a turkey! “The Black Swan” was one of the best and most influential books I read this summer. Right up there with the Primal Blueprint by Mark Sisson and Good Calories, Bad Calories by Gary Taubes. And the Shangri-La diet of course. *wink*

  5. Seth,

    I think self-experimentation is valuable. In fact, I’m trying the SLD right now (why not? right?).

    HOWEVER, I think it’s very important to make something very clear to people.

    If you’re going to encourage self-experimentation, I think you need to impress two other things on people:

    1. Some understanding of statistical validity.
    2. Skeptical habits of mind.

    It’s not enough to form a hypothesis, and then test it, noticing only the positive confirmations. If we don’t carefully state our hypothesis, the vagueness will allow the confirmation bias to have full sway.

    And confirmation bias is a huge ugly monster. Take all the other logical fallacies humans commit, and I’d argue that the harm done by confirmation bias outweighs them all.

    That gives us very good reason to be skeptical of our own theories and our own experiments.

    Once we test a hypothesis, and get some confirmation, we should also try to think of all the plausible competing explanations we can. And, if we’re committed to proportioning our confidence to the evidence, we should design tests that will decide among competing explanations.

    And it takes a lot longer than most people think to do all of that properly.
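    As a concrete sketch of the first point, here is what a minimal statistical check of a self-experiment could look like in Python — a simple permutation test on before/after measurements. The daily ratings below are invented purely for illustration:

    ```python
    import random

    # Hypothetical self-tracking data (invented for illustration):
    # daily mood ratings before and after starting an intervention.
    before = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.1, 2.7]
    after = [3.6, 3.9, 3.5, 3.8, 3.4, 3.7, 4.0, 3.6]

    def mean(xs):
        return sum(xs) / len(xs)

    def permutation_test(a, b, n_iter=10_000, seed=0):
        """One-sided permutation test: how often does a random
        relabeling of the pooled data produce a mean difference
        at least as large as the observed one?"""
        rng = random.Random(seed)
        observed = mean(b) - mean(a)
        pooled = a + b
        count = 0
        for _ in range(n_iter):
            rng.shuffle(pooled)
            diff = mean(pooled[len(a):]) - mean(pooled[:len(a)])
            if diff >= observed:
                count += 1
        return count / n_iter

    p = permutation_test(before, after)
    print(f"observed difference: {mean(after) - mean(before):.2f}")
    print(f"one-sided p-value:   {p:.4f}")
    ```

    Even a tiny p-value here only rules out chance. It says nothing about competing explanations — placebo effects, seasonal changes, other habits that changed at the same time — which need their own tests.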

    I say this not because I think self-experimentation is dangerous, but because I think encouraging people to have confidence in poorly conducted self-experimentation is dangerous.

    I guess I would advise people to hold their theories very loosely until they get the kind of evidence that would justify more confidence.


    P.S. Keep up the good work. On balance, I like your approach and your writings.

  6. A lot of data contains assumptions that the person who measured the data made.
    Taleb, for example, discusses the saying “What doesn’t kill you makes you stronger” in The Black Swan and notes that there’s a selection bias at work.

    The “voodoo” social neuroscience paper would be another example of an analysis showing that a lot of data is wrong because someone made a mistake when measuring it.

    Therefore it’s always important to be aware that the assumptions that underlie some data might be flawed.

    If you start to believe that the consensus in your own head is necessarily better than the consensus among experts, there’s probably confirmation bias at work.
    Truth is always complicated, and you should always be aware of the assumptions that you make but can’t prove.
    In your case of self-experimentation, for example, you assume that the short-term and long-term effects of interventions like eating more omega-3 are similar.

Comments are closed.