In the latest issue of American Scientist, Andrew Gelman (an old friend) and Kaiser Fung criticize Freakonomics and Superfreakonomics by Steve Levitt and Stephen Dubner (who wrote about my work). Although the article is titled “Freakonomics: What Went Wrong?” none of the supposed errors are in Freakonomics. You can get an idea of the conclusions from the title and this sentence: “How could an experienced journalist and a widely respected researcher slip up in so many ways?”
Gelman and Fung examine a series (“so many ways”) of what they consider mistakes. I will comment on each of them.
1. The case of the missing girls. I agree with Gelman and Fung: Levitt and Dubner accepted Emily Oster’s research too uncritically.
2. The risk of driving a car. I think Gelman and Fung miss the point. Yes, the claim (driving drunk is safer than walking drunk) was not well-supported by the evidence provided because the comparison was so confounded. However, I read the whole example differently. I didn’t think that Levitt and Dubner thought drunk people should drive. I thought their point was more subtle — that comparisons are difficult (“look how we can reach a crazy conclusion”).
3. Stars are made, not born. I think Gelman and Fung fail to see the big picture. The birth-month effect in professional sports, which Gelman and Fung dismiss as “very small,” is of great interest to many people, if not to Gelman and Fung. It suggests what Levitt and Dubner and Gladwell and others say: Early success matters. That’s not obvious at all. There are lots of similar associations in epidemiology. Such associations have been the first evidence for many important conclusions, such as that smoking causes lung cancer. Are professional sports important? Maybe. But epidemiology and epidemiological methods are surely important. By learning about this effect, we learn about them. Lots of smart people fail to take epidemiology seriously enough (e.g., “correlation does not equal causation”).
4. Making the majors and hitting a curve ball. Gelman and Fung point out that one sentence is misleading. One sentence. This is called praising with faint damns.
5. Predicting terrorists. Gelman and Fung say that the terrorist prediction algorithm of a man named Ian Horsley, which Levitt and Dubner seem to take seriously, is not practical. But their review fails to convince me it was presented as practical. Since there are no data about how well the algorithm works, and Levitt and Dubner are all about data….
6. The climate change dust-up. I agree with Gelman and Fung that Nathan Myhrvold’s geoengineering ideas are unimportant. (My view of Myhrvold’s patent trolling.) But in this case, I’d say both sides — Gelman and Fung and Levitt and Dubner — miss what’s really important, namely that the usual claims that humans are dangerously warming the planet are held far too strongly. The advocates of this view are far too sure of themselves. I have blogged about this many times. In a nutshell, the climate models that we are supposed to trust have never been shown to persuasively predict the climate ten or twenty years from now (or even one year from now). There is no good reason to believe them. That Levitt and Dubner seem to take that stuff seriously is the only big criticism I have of their work. At least in that geoengineering stuff Levitt and Dubner were dissenting from conventional wisdom. Gelman and Fung do not. They fail to realize that something we’ve been told thousands of times is nonsense (in the sense of being wildly overstated). It was Levitt and Dubner’s comments about this that led me to look closely at all that climate-change scare stuff. I was surprised how poor the evidence was.
The biggest problem with Gelman and Fung’s critique is that they say nothing about the great contribution of Steve Levitt to economics. They fail to grasp that he has made economics considerably more of a science, if by science you mean a data-driven enterprise as opposed to an ideologically driven or prestige-driven one (mathematics is prestigious; the more difficult, the more prestigious). He did so by pioneering a new way to use data to learn interesting things. His method is essentially epidemiological, except his methods are considerably better (better matching, less formulaic) and his topics much more diverse (e.g., sumo wrestling) than mainstream epidemiology. A large fraction of prestige economics is math, divorced from empirical tests. This stuff wins Nobel Prizes, but, in my and many other people’s opinion, contributes very little to understanding. (Psychology has had the same too-much-math, too-little-data problem — minus the Nobel Prizes, of course.) To persuade a big chunk of an entire discipline to pay more attention to data is a huge accomplishment.
Levitt’s methodological innovation makes Freakonomics far from what Gelman and Fung call “pop statistics.” It is actually an amusing and well-written record of something close to a revolution. In the 1980s, a friend of mine at UC Berkeley took an introductory economics class. She told me a little of what the teacher said in class. All theory. What about data? I said. It’s a strange science that doesn’t care about data. My friend went to office hours. She asked the instructor (a Berkeley economics professor): What about data? Don’t worry about data, he replied. Gelman and Fung fail to appreciate what economics used to be like. The ratio of strongly asserted ideas to persuasive data used to be very large. Now it is much smaller.
Thanks to Ashish Mukharji.