The Link Between Lead and Crime

In the 1960s, a Caltech geochemist named Clair Patterson made the case that there had been worldwide contamination of living things by lead, due to the lead in gasoline. There were great increases in the amount of lead in fish and human skeletons, for example. More than anyone else, he was responsible for the elimination of lead in gasoline. (By coincidence, this was just shown on the new Cosmos TV series.) A professor of pediatrics at the University of Pittsburgh named Herbert Needleman did some of the most important toxicology, linking lead exposure (presumably from paint) and IQ in children. Children with more lead in their teeth had lower IQ scores. The importance of this finding is shown by the fact that he was accused of scientific misconduct. Continue reading “The Link Between Lead and Crime”

Hobby versus Job: Casa Pepe Guest House, Seoul

Yesterday I was in Seoul, at Casa Pepe Guest House. Sensationally good at a very low price. It really is a guest house — attached to a house — with a separate entrance. There are four rooms, with shared kitchen and bathroom. The owner is a renowned chef. The first evening he brought salad and wine from his (Japanese) restaurant. The first morning, he invited me to come with him to buy fish at the Seoul fish market. Every morning, he made breakfast — something different each time.

I found it through an online listing. On their map, it was off by itself. I thought that meant bad location, but the opposite was true. It is the sort of good location you cannot normally get. It is near the Blue House (Korea’s White House) and many foreign embassies and is very safe. Dozens of interesting restaurants and cafes are nearby. (Even more than the rest of Seoul.) The neighborhood is the Beverly Hills of Korea, with better (and cheaper) restaurants and less pretentious architecture. Casa Pepe started about a year ago, with a remodelling. Everything is new and clean. The floor is heated. The building is up a steep path and has a nice view of streets, hills and houses. Free laundry. All for less than $50/day.

During my stay I briefly overlapped with a Tsinghua student (how could that possibly happen?) but otherwise I was the only person.

Why is it so nice? The owner said, “It’s my hobby.” I think that explains it.

I’ve said that doing a job and doing science are fundamentally incompatible. Any job requires steady and repeated output. You do the same thing over and over. The goal of science is discovery — and a discovery is inherently unpredictable and unrepeatable. (Art is a job with science-like elements — and artists were the first scientists.) Casa Pepe Guest House illustrates another side of the job/science conflict: A job is inherently conformist. You give people, especially customers and your boss, what they expect. Science is inherently nonconformist. The more a discovery challenges “what everyone knows”, the better. Hobbies make this point because they can vary more than jobs. If you make tables as a hobby, for example, your tables can vary more than if you make tables for a living. Casa Pepe is way outside (better) what one expects from a rented room.

Another way Casa Pepe is unusual is that it is very hard to find, even if you study the directions. I found it by knocking on a neighbor’s door. The neighbor called Casa Pepe. Someone from Casa Pepe came to meet the neighbor and me on the street — it was too hard to tell the neighbor where it was. Here are better directions. From Incheon Airport, take airport bus 6112 to the Hangsun University stop. Go to Exit 6 of the nearby subway station (Hangsun University Station on Line 4). Walk up the street (Seongbuk-ro) indicated by Exit 6 — toward the hills. After walking about 13 minutes, where the road veers right, you will see a sign that says Seongbuk-ro 19-gil (gil = side street), which points almost exactly to a steep concrete path on the left perpendicular to the street. It is the width of a driveway. Go up about 40 meters. Casa Pepe is on the right — a white house with a red door, with a sign that says “casa pepe”. Don’t be misled by the fact that the listed address is not on Seongbuk-ro 19-gil.

The Trouble with Critics of Science, Such as John Ioannidis

I haven’t been interested in the work of John Ioannidis because it seems unrelated to discovery. Ioannidis says too many papers are “wrong”. I don’t know how the fraction of “wrong” papers is related to the rate of discovery. For example, what percentage of “wrong” papers produces the most discovery? Ioannidis doesn’t seem to think about this. Yet that is the goal of science — better understanding. Not “right” papers. Continue reading “The Trouble with Critics of Science, Such as John Ioannidis”

Truth to Power: Eric Lander’s Reddit AMA

A year ago, Eric Lander, who identified himself as “President and Founding Director of the Broad Institute of Harvard and MIT [and] one of the principal leaders of the Human Genome Project, directing the largest center in the international project” did a Reddit AMA (Ask Me Anything). One of the questions did not go as expected:

Question As an advisor to the President, what is being done or do you think will be done to increase the attractiveness of students finishing PhD programs in science?

Lander We need to shorten the time for getting a PhD and for a first faculty job. Young people should get out into the scientific world early, when they have lots of fresh ideas. We should encourage grants to young scientists and should encourage them to take big risks. When you’re taking big risks, science is amazingly fun.

The response to this answer was very negative.

With all due respect, this is a ludicrous statement. . . The true problem is the way in which you fund science. You fund projects and proposals. In order to get these projects funded, the preliminary data has to be essentially the whole project being done. Then you fund at a 6 percent line. It leads to cronyism in the peer review process and a general sense of despair in scientists. How about you radically change the funding system for PIs?

I too am disappointed with Dr. Lander’s response to possibly THE most important question here regarding training basic scientists.

Do you truly believe this? . . . There is no reason to encourage more students to go into science if there is not enough government funding to support their careers.

Alas, this is not important. It just pleased me that someone questioned Dr. Lander’s absurd claims, which he makes often. “We should encourage young scientists to take big risks”. Yes, I agree. But does he really believe this? Does he really believe that someone coming up for tenure should take big risks?

Nick Szabo is Satoshi Nakamoto, the Inventor of Bitcoin

There were many funny things about Leah Goodman’s claim in Newsweek that a California engineer invented bitcoin. One was her observation that he put two spaces after a period — just like the inventor of bitcoin. Another was her observation that his relatives said he was “brilliant”, without giving any examples. His brilliance had remained perfectly hidden — until now. A third was her conclusion that he was obsessed with secrecy and distrusted government — just like the inventor of bitcoin (according to her). Felix Salmon was quite wrong when he said there are some very strange coincidences and the pieces of her argument “fit elegantly together”. Actually, her argument is worthless from top to bottom. Salmon was right, however, when he said that the engineer’s English shows he couldn’t possibly have invented bitcoin. As Salmon says, Goodman ignored this itty-bitty problem.

Who is the inventor of bitcoin? I’m sure it’s Nick Szabo, a former law professor at George Washington University. This idea first surfaced a few months ago in an anonymous blog post based on textual analysis. Szabo used certain phrases in the original bitcoin description far more than a bunch of other possible candidates. That is real evidence. The hypothesis that Szabo is the inventor passes several other tests as well: Continue reading “Nick Szabo is Satoshi Nakamoto, the Inventor of Bitcoin”
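The textual analysis behind that anonymous post is easy to illustrate. Here is a toy sketch of a phrase-frequency comparison in Python; the marker phrases and candidate texts below are invented placeholders, not the actual analysis:

```python
# Toy version of a phrase-frequency authorship comparison. The marker
# phrases and candidate texts are invented placeholders, not the real data.

def phrase_rate(text, phrases):
    """Occurrences of each marker phrase per 1000 words of text."""
    words = len(text.split())
    return {p: 1000.0 * text.lower().count(p) / words for p in phrases}

markers = ["trusted third party", "timestamp server"]  # hypothetical markers

candidates = {
    "candidate_a": "A trusted third party is avoided by using a timestamp server.",
    "candidate_b": "The bank settles all payments between the two accounts.",
}

for name, text in candidates.items():
    print(name, phrase_rate(text, markers))
```

A real analysis would use distinctive phrases from the bitcoin white paper and large writing samples from each candidate, but the logic, counting how often each candidate uses the markers, is the same.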

More Muscle Strength, Less Cancer

A 2009 study followed about 9000 men for 10-20 years. It found that strength (how much you can bench and leg press) measured at the start of the study was associated with likelihood of dying of cancer during the study. Men in the upper two-thirds of the study population in strength had 40% less cancer mortality. This might be the most surprising result:

Further adjustment for BMI, percent body fat, waist circumference, or cardiorespiratory fitness had little effect on the association. The associations of BMI, percent body fat, or waist circumference with cancer mortality did not persist after further adjusting for muscular strength.

In other words, muscle strength was a better predictor than several similar measures (BMI, etc.) and these other measures stopped predicting when corrected for muscle strength. Muscle strength is closely connected to something important.
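What "stopped predicting when corrected" means can be shown with simulated data (Python; the numbers are invented, not the study's): a variable can predict an outcome on its own yet stop predicting once a correlated, more direct predictor is adjusted for.

```python
# Simulated illustration (not the study's data): BMI predicts the outcome
# only through its correlation with strength, so its coefficient collapses
# once strength enters the model.
import numpy as np

rng = np.random.default_rng(0)
n = 9000
strength = rng.normal(size=n)               # direct predictor of the outcome
bmi = 0.7 * strength + rng.normal(size=n)   # merely correlated with strength
risk = strength + rng.normal(size=n)        # outcome driven by strength alone

def first_coef(y, predictors):
    """Least-squares coefficient of the first predictor (intercept included)."""
    A = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(A, y, rcond=None)[0][1]

print("BMI alone:      %.2f" % first_coef(risk, [bmi]))            # clearly nonzero
print("BMI + strength: %.2f" % first_coef(risk, [bmi, strength]))  # near zero
```

That collapse is the pattern the study reports for BMI, percent body fat and waist circumference once muscular strength is adjusted for.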

Men who are stronger by and large exercise more, no doubt. Yet muscle strength is determined by resistance training, not aerobic exercise — and it is aerobic exercise (and to some extent walking) that has been promoted by countless experts since the 1960s and the invention of the concept of aerobics. Jogging reduces how much time you have for resistance training.

These findings interest me because I do a lot of resistance training — stand on one leg to exhaustion several times per day — purely to sleep better. By improving something easy to measure (sleep), these data suggest I have also been improving something hard to measure (chance of dying from cancer). Not surprising, but reassuring.

My data also suggest two different possible reasons for the strength-cancer association. One is that men who exercise more sleep better as a result; better sleep, better immune function, less cancer. Another possibility is that strength is a marker for good sleep. Among men who do equal amounts of exercise, those who sleep better will be stronger.

From The Breviary.

Charles Dickens, Demons, and Personal Science

In a review of biographies of Charles Dickens I found this:

In 1849 he showed a short account of his early years to his close friend John Forster, revealing a story he never told his own family: the shame-inducing months he spent, while his father was in a debtor’s prison, as a 12-year-old “laboring hind” in a factory that bottled shoe-blacking.

Suddenly I understood why he wrote Oliver Twist and why it is so good. Budding writers are told write what you know. They should be told write what you feel bad about.

The work of James Pennebaker has shown the benefits  of even small amounts of self-disclosure. No doubt this is why all sorts of psychotherapy, supposedly based on enormously different theories, help roughly the same amount: All involve self-disclosure. I see this effect as something built into us by evolution  to increase self-disclosure. Talking about bad experiences helps your listeners avoid what happened to you. To motivate such disclosures, evolution has built into us something that causes us to feel better after we talk this way.

Scientists are not told study what you know and they are especially not told study what you feel bad about. Scientists are mostly men, of course, and that sort of thing makes men uncomfortable. My personal science, however, suggests the correctness of this idea.

Assorted Links

  • Dangers of Splenda. Never use it in baked goods.
  • Overdiagnosis of attention deficit disorder. “So many medical professionals benefit from overprescribing that it is difficult to find a neutral source of information. . . . The F.D.A. has cited every major A.D.H.D. drug, including the stimulants Adderall, Concerta, Focalin and Vyvanse, for false and misleading advertising since 2000, some of them multiple times.”
  • David Suzuki, prominent environmentalist, former genetics professor, founder of the David Suzuki Foundation, once voted the greatest living Canadian, is asked a question about climate change that turns out to be surprisingly hard.
  • Confucius Peace Prize. Awarded to Putin because Russia makes China look good?
  • Top 10 retractions of 2013. There is a website for retractions (Retraction Watch) but no website for discoveries that could have been made but weren’t, except maybe this blog. I’m not joking. I am far more alarmed by lack of progress than retractions.

Thanks to Dave Lull.

Science Critics Are Human: Cautionary Tale

One reason personal science is a good idea is it is simple and immediate (in the sense of near). You study one person, you do experiments (easier to interpret than surveys), you can easily repeat the experiment (so you are not confused by secular trends — big changes over time — and implausible statistical assumptions), you are aware of unusual events during the experiment (so you are less confused by anomalous results and outliers), you are close to the data collection (so you understand the limits and error rates of the measurements). These elements make good interpretation of your data much easier. Professional science generally lacks some of these elements. For example, the person who writes the paper may not have collected the data. This makes it harder to understand what the data mean.

I hear criticism of (professional) science more now than ten years ago. Lack of replicability, for example. What I rarely hear — actually, never — is how often science critics make big blunders. As far as I can tell, as often as those they criticize. This is not to say they are wrong — who knows. Just overstated.

An example is a critique of salt and blood pressure studies I read recently. Many people say salt raises blood pressure. The critique, by Michael Alderman, a professor of epidemiology at Albert Einstein College of Medicine, said, not so fast. The title is: “Salt, blood pressure and health: a cautionary tale.” It’s a good review, with lots of interesting data, but the reviewer, at the same time he is criticizing others, makes a major blunder.

He describes a study in which people were placed on a low-salt diet. Their blood pressure was measured twice, before the diet (Time 1) and after they had been on the diet for quite a while (Time 2). Comparison of the two readings showed a wide range of changes. Some people’s blood pressure went up, some people’s blood pressure stayed the same, and some people’s blood pressure went down. Alderman called this result “enormous variation between individuals on the effect of salt on pressure”. Oh no! He assumes that if your blood pressure is different at Time 2 than Time 1, it was because of the change in dietary salt. There are dozens of possible reasons a person’s blood pressure might differ at the two times (leaving aside measurement error, another possibility). Dozens of things that affect blood pressure were not kept constant.

Had there been a second group that did not change their diet and was also measured at Time 1 and Time 2 — and had the subjects given the low-salt diet showed a larger spread of Time 2/Time 1 difference scores than the no-change group, then you could reasonably conclude that there was variation in the response to the low-salt diet. To conclude “enormous variation” you’d want to see an enormous increase in difference-score variability. But there was no second group.

This is not some small detail. Alderman actually believes there is great variation in response to salt reduction. It is the main point of his article. Spy magazine had a great column called Review of Reviewers, which reviewed book and movie reviewers. Unfortunately there is no such thing in science.

Frontlines of Personal Science: Confirmation of After Dinner Sweets Effect

During the last week I have looked into the possibility that my sleep can be further improved — in addition to the bedtime honey improvement — by eating a similar amount of sugar (fructose and glucose) a few hours before bedtime. After I accidentally slept better than usual, I tried to determine why. Several things had been unusual the day before. Two tests (here and here) pointed to the sugar (honey or banana) a few hours before bedtime.

Last night (Christmas Eve) I tried again. I ate a banana (132 g, peeled) about 3 hours (7 pm) before I fell asleep (10 pm). I fell asleep within a minute and woke up, after an apparently dreamless night, feeling perfectly rested. On my 0-100 percentage scale (100% = completely rested, no detectable tiredness), which I have been using for about 8 years, it was the first ever 100%.  I had slept about 6 hours, a good amount of time.

To celebrate, I had a cup of black tea. I didn’t need it to wake up but I like the taste. I reflected that countless people had drunk tea or coffee to wake up. I had found a better way.

Discovery that an hours-before-bedtime sweet improves sleep (in addition to bedtime honey — that’s what’s interesting) is significant not just for the obvious practical reason (better sleep) but also because it is the confirmation of a prediction. After I slept unusually well, I thought of six possible reasons. The notion that sugar improves sleep pointed to one of them. The results of every test I’ve done (three nights) have agreed with that prediction. I believe the only real test of a theory (such as an explanation) is whether it makes correct predictions — especially, whether it leads to the discovery of new cause-effect relationships. Many things people say haven’t passed that test. An example is weight control. That low-carb diets cause weight loss has been known since the 1800s. Many explanations have been proposed; not one has made correct predictions, as far as I know. In contrast, my theory of weight control led me to three new ways to lose weight (sushi, low-glycemic foods, and fructose water).

I doubt it’s a placebo effect because the sleep improvement has happened whether I expect it or not. A commenter named Paolo Paiva, after reading my posts about this, realized something similar had happened to him:

Today I told my wife how deep I had slept and connected it to the 1 tbspoon of honey and 1 tbspoon of apple cider vinegar mixed with half a cup of water before bed (it tastes really good). Then I saw this post and remembered that yesterday I had had banana flour pancakes topped with honey 3 hours before bedtime!

Thanks, Paolo. May you continue to sleep well. May the rest of you sleep equally well.

Merry Christmas!

Front Lines of Personal Science: More Progress on Sleep

To recap: Three days ago I slept extremely well, better than usual. I wondered why. What had made the difference? That day (the day before the night I slept so well) had been different from previous days in at least five ways (e.g., chocolate, new brand of honey). I repeated four of them, and did not sleep better than usual. That suggested the remaining difference — I had eaten yogurt, blueberries (125 g) and honey (8 g?) a few hours before bedtime — was responsible. (Every night I had 1 tablespoon — 20 g — honey at bedtime. It wasn’t that.) Then I repeated all five elements, including yogurt, blueberries (125 g) and honey (14 g) two hours before bedtime. I woke up wired (jittery). Very rested, but wired, which wasn’t pleasant. Too much sugar, perhaps.

The next night I had a banana roughly two hours before bedtime. (In addition, I repeat, to 1 tablespoon honey at bedtime.)  A banana has about 6 g glucose, 6 g fructose, and 3 g sucrose, similar to 1 tablespoon honey. I had a strong craving for something sweet at that time, which was new to me — I almost never eat dessert. In the evening I had more brain power than usual. Yet at bedtime I fell asleep quickly, in about a minute. 

The next morning I woke up and felt great. Almost perfectly rested, neither tired nor wired. Even though I’d only slept 4.7 hours, a bit low for me. It really was the yogurt, blueberries and honey — almost surely their sugar, which is almost all they have in common with a banana — that had made me sleep so well.

Conclusion: For the best sleep, have sugar after dinner and sugar at bedtime. By sugar I mean a glucose/fructose mixture but for all I know sucrose would work, too.

Modern Cargo Cult Science: Evidence-Based Medicine, Science Fiction in China

In a graduation speech, Richard Feynman called certain intellectual endeavors “cargo cult science,” meaning they had the trappings of science but not the substance. One thing he criticized was rat psychology. He was wrong about that. Sure, as Feynman complained, lots of rat psychology experiments have led nowhere, just as lots of books aren’t good. But you need to publish lots of bad books to support the infrastructure necessary to publish a few good ones. The same is true of rat psychology experiments. A few are very good. The bad make possible the good. Rat psychology experiments, especially those by Israel Ramirez and Anthony Sclafani, led me to a new theory of weight control, which led me to the Shangri-La Diet.

Cargo cult science does exist.  The most important modern example is evidence-based medicine. Notice how ritualistic it is and how little progress medicine has made since it became popular. An evidence-based medicine review of tonsillectomies failed to realize they were worse than voodoo. Voodoo, unlike a tonsillectomy, does not damage your immune system. The evidence-based medicine reviewers appeared not to know that tonsils are part of the immune system. Year after year, the Nobel Prize in Medicine or Physiology tells the world, between the lines of the press release, that once again medical researchers have failed to make progress on any major disease, as the prize is always given for work with little or no practical value. In the 1950s, the polio vaccine was progress; so was figuring out that smoking causes lung cancer (which didn’t get a Nobel Prize). There have been no comparable advances since then. Researchers at top medical schools remain profoundly unaware of what causes heart disease, most cancers, depression, bipolar disorder, obesity, diabetes and so on.

I came across cargo-cult thinking recently in a talk by Neil Gaiman:

I was in China in 2007, at the first party-approved science fiction and fantasy convention in Chinese history. And at one point I took a top official aside and asked him Why? SF had been disapproved of for a long time. What had changed?

It’s simple, he told me. The Chinese were brilliant at making things if other people brought them the plans. But they did not innovate and they did not invent. They did not imagine. So they sent a delegation to the US, to Apple, to Microsoft, to Google, and they asked the people there who were inventing the future about themselves. And they found that all of them had read science fiction when they were boys or girls.

I know about Chinese engineers at Microsoft and Google in Beijing. They want to leave the country. An American friend, who worked at Microsoft, was surprised by the unanimity of their desire to leave. I wasn’t surprised. Why innovate or invent if the government might seize your company? Which is the main point of Why Nations Fail. Allowing science fiction in China doesn’t change that.

Thanks to Claire Hsu.

Association of Sleep and Chronic Illness

A recent PatientsLikeMe survey found a strong correlation between chronic illness and poor sleep. Here are the most interesting results:

PatientsLikeMe survey respondents in the U.S. (n=3,284) . . . are almost nine times more likely to [have] insomnia than the general adult population. . . . PatientsLikeMe members with health conditions experience [each] of the four symptoms of insomnia [= trouble falling asleep, trouble staying asleep, early awakening, and waking up not rested] at twice the rate of the general adult population.

This supports my view that bad sleep causes illness. The correlations could have plausibly been the other way (better sleep among survey respondents). People sleep more when sick. Whatever makes sick people sleep more might also make them fall asleep faster and wake up less often. Continue reading “Association of Sleep and Chronic Illness”

“Science is the Belief in the Ignorance of Experts” — Richard Feynman

“Science is the belief in the ignorance of experts,” said the physicist Richard Feynman in a 1966 talk to high-school science teachers. I think he meant science is the belief in the fallibility of experts. In the talk, he says science education should be about data —  how to gather data to test ideas and get new ideas — not about conclusions (“the earth revolves around the sun”). And it should be about pointing out that experts are often wrong. I agree with all this.

However, I think the underlying idea — what Feynman seems to be saying — is simply wrong. Did Darwin come up with his ideas because he believed experts (the Pope?) were wrong? Of course not. Did Mendel do his pea experiments because he didn’t trust experts? Again, of course not. Darwin and Mendel’s work showed that the experts were wrong but that’s not why they did it. Nor do scientists today do their work for that reason. Scientists are themselves experts. Do they do science to reveal their own ignorance? No, that’s blatantly wrong. If science is the belief in the ignorance of experts, and X is the belief in the ignorance of scientists, what is X? Our entire economy is based on expertise. I buy my car from experts in making cars, buy my bread from bread-making experts, and so on. The success of our economy teaches us we can rely on experts. Why should high-school science teachers say otherwise? If we can rely on experts, and science rests on the assumption that we can’t, why do we need scientists? Is Feynman saying experts are wrong 1% of the time, and that’s why we need science?

I think what Feynman actually meant (but didn’t say clearly) is science protects us against self-serving experts. If you want to talk about the protection-against-experts function of science, the heart of the matter isn’t that experts are ignorant or fallible. It is that experts, including scientists, are self-serving. The less certainty in an area, the more experts in that area slant or distort the truth to benefit themselves.  They exaggerate their understanding, for instance. A drug company understates bad side effects. (Calling this “ignorance” is too kind.) This is common, non-obvious, and worth teaching high-school students. Science journalists, who are grown-ups and should know better, often completely ignore this. So do other journalists. Science (data collection) is unexpectedly powerful because experts are wrong more often than a naive person would guess. The simplest data collection is to ask for an example.

When Genius by James Gleick (a biography of Feynman) was published, I said it should have been titled Genius Manqué. This puzzled my friends. Feynman was a genius, I said, but lots of geniuses have had a bigger effect on the world. I heard Feynman himself describe how he came to invent Feynman diagrams. One day, when he was a graduate student, his advisor, John Wheeler, phoned him. “Dick,” he said, “do you know why all electrons have the same charge? Because they’re the same electron.” One electron moves forward and backward in time creating all the electrons we observe. Feynman diagrams came from this idea. The Feynman Lectures on Physics were a big improvement over standard physics books — more emotional, more vivid, more thought-provoking — but contain far too little about data, in my opinion. Feynman failed to do what he told high school teachers to do.

The Blindness of Scientists: The Problem isn’t False Positives, It’s Undetected Positives

Suppose you have a car that can only turn right. Someone says, Your car turns right too much. You might wonder why they don’t see the bigger problem (can’t turn left).

This happens in science today. People complain about how well the car turns right, failing to notice (or at least say) it can’t turn left. Just as a car should turn both right and left, scientists should be able to (a) test ideas and (b) generate ideas worth testing. Tests are expensive. To be worth the cost of testing, an idea needs a certain plausibility. In my experience, few scientists have clear ideas about how to generate ideas plausible enough to test. The topic is not covered in any statistics text I have seen — the same books that spend many pages on how to test ideas. Continue reading “The Blindness of Scientists: The Problem isn’t False Positives, It’s Undetected Positives”

Showers and the Ecology of Knowledge

In a recent post, I said a well-functioning system will produce both optimality and complexity. I meant important systems like our bodies, economies, and formal education. If you look at the nutrition advice provided by the United States Department of Agriculture — the food pyramid, the food plate, the recommended daily allowances, and the associated reports — you will find nothing that increases the complexity of metabolism inside our bodies (in particular, the diversity of metabolic pathways). The advice is all optimality — for example, the best amounts of various micronutrients. The people behind the USDA advice, reflecting the thinking of the best nutrition scientists in the world, utterly fail to grasp the importance of complexity. Half of nutrition research — or more than half, since the topic has been so neglected — should be about how to increase internal complexity. In practice, almost none of it is. It’s obvious, I think, that the microbes within us are very important for health. They are mostly in our intestines and must be heavily influenced by what we eat. How did they get there? How can their number be increased? How can their diversity be increased?

The absence is especially striking because the point is so simple. To solve actual problems, you need both optimality and complexity. Showers — what we use to take a shower — provide an example. You want to adjust the water temperature. If you try to do this while taking a shower, it can be hard because of the delay between changing the hot/cold water proportions and feeling the effects. It is better to use the bathtub (lower) tap to set the temperature (measuring it with your wrist) and only after you’ve optimized the temperature, shift the water to the shower head. The bathtub tap produces simple output (a single stream of water) that is easy to optimize. The shower head produces more complex output that is harder to optimize but does a better job of washing (an actual problem). You need both bathtub tap (for optimization) and shower head (for complexity) to do a good job solving the problem. Likewise, we need both an understanding of necessary nutrients (Vitamin A, etc.), which can be optimized, and an understanding of microbes, which cannot be optimized but can be made more complex, to make good decisions about what food to eat. Ordinary food is the hardware, you might say; and microbes are the software.
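The shower example can even be simulated. A minimal sketch (Python; the gain, delay and target temperature are made-up numbers) shows why adjusting against a delayed reading fails where adjusting against an immediate one succeeds:

```python
# Proportional adjustment toward a target temperature, with and without
# a feedback delay. Gain, delay and target are made-up numbers.

def adjust(target, delay, steps=20, gain=0.8):
    """Repeatedly nudge the setting toward the target, using a reading
    that lags `delay` steps behind the actual setting."""
    temp = 0.0
    history = [0.0] * (delay + 1)
    for _ in range(steps):
        felt = history[-(delay + 1)]  # what you feel now was set earlier
        temp += gain * (target - felt)
        history.append(temp)
    return history

immediate = adjust(40, delay=0)  # bathtub tap: you feel changes at once
delayed = adjust(40, delay=3)    # shower head: changes arrive late

print("immediate: settles near the target, max %.1f" % max(immediate))
print("delayed:   overshoots and oscillates, max %.1f" % max(delayed))
```

With no delay, the same simple rule settles quickly; with a few steps of delay, it overshoots wildly. That is why setting the temperature at the tap, where feedback is immediate, works better than fiddling under the shower head.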


Sunlight and Heart Disease

Vitamin D and Cholesterol: The Importance of the Sun (2009) by David Grimes, a British doctor, contains more than a hundred graphs and tables. Most of the book is about heart disease.  Grimes argues that a great deal of heart disease is due to too little Vitamin D, usually due to too little sunlight. I recently blogged about other work by Dr. Grimes — about the rise and fall of heart disease.

Part of the book is about problems with the cholesterol hypothesis (high cholesterol causes heart disease).  One study found that in men aged 56-65, there was no relationship between death rate and cholesterol level over the next thirty years, during which almost all of them died (Figure 29.2). There is a positive correlation between death rate and cholesterol level for younger men (aged 31-39). The same pattern is seen with women, except that women 60 years or older show the “wrong” correlation: women in the lowest quartile of cholesterol level have by far the highest death rate (Figure 29.5). A female friend of mine in England, who is almost 60, was recently told by her doctor that her cholesterol is dangerously high.

The book was inspired by Grimes’ discovery of a correlation between latitude and heart disease: People who lived further north had more heart disease. This association is clear in the UK, for example (Figure 32.4). Controlling for latitude, he found a correlation between hours of sunshine and heart disease rate (Table 32.3): Towns with more sunshine had less heart disease. No doubt you’ve heard that dietary fat causes heart disease. In the famous Seven Countries study, there was indeed a strong correlation between percent calories from fat and heart disease death rate (Figure 30.2). You haven’t heard that in the same study there was a strong correlation between latitude and dietary fat intake (Figure 30.8): People in the north ate more fat than people in the south. The fat-heart disease correlation in that study could easily be due to a connection between latitude and heart disease. The correlation between latitude and heart disease, on the other hand, persists when diet is controlled for.

Grimes convinced me that the latitude/sunshine correlation with heart disease reflects something important. It is large, appears in many different contexts, and has resisted explanation via confounds. Maybe sunshine reduces heart disease by increasing Vitamin D, as Grimes argues, or maybe by improving sleep — the more sunshine you get, the deeper (= better) your sleep. Sleep is enormously important in fighting off infection, and a variety of data suggest that heart disease has a microbial aspect. As long-time readers of this blog know, I take Vitamin D3 at a fixed time (8 am) every morning, thereby improving both my Vitamin D status and my sleep.

Grimes and his book illustrate my insider/outsider rule: To make progress, you need to be close enough to the subject (enough of an insider) to have a good understanding but far enough away (enough of an outsider) to be able to speak the truth. As a doctor, Grimes is close to the study of disease etiology. However, he’s a gastroenterologist, not a cardiologist or epidemiologist. This allows him to say whatever he wants about the cause of heart disease. He won’t be punished for heretical ideas.


Academic Job Advice: Be Able to Say Why You Study What You Study

Recently I interviewed two job candidates for an assistant professor position at Tsinghua. I asked both of them: “Why did you decide to study this?” (this = their field of research). One had no answer at all. The other had an answer that didn’t make sense. I didn’t mean it as a tough question. If they had said “because that’s what people were doing where I did my postdoc,” I would have been perfectly happy. If that were the answer, I might have asked “why does your advisor study it?” — to which “I don’t know” would have been a perfectly acceptable reply. Of course, there are better answers.

When I was a graduate student, I read Adventures of a Mathematician by Stanislaw Ulam (a very well-written book). One of the book’s comments impressed me: that John von Neumann was able to distinguish the main lines of growth of the tree of mathematics from the branches. My research was about how rats measure time. The relevance to big questions in the psychology of learning wasn’t obvious. I wondered: Am I studying something important? Or something that will be irrelevant in twenty years? My advisor didn’t seem to have thought about this.

When I interviewed for jobs at various universities, no one asked me “Why do you study this?” But it was still a question worth answering. As a grad student I had no choice. But eventually I would have a choice: I could continue to study how rats measure time, or I could study something else. (Eventually I did change — to studying what controls variation in behavior.)

Here’s what I would say now about how to choose a research topic.

What’s best is a new method. If you can use a new method to answer questions in your field, do that. The cheaper, easier and more available the method, the better. As a graduate student, I developed a new way to study how rats measure time, which I called the peak procedure. It made it easier to determine if an experimental treatment affected an animal’s internal clock.

What’s second best is a new experimental effect: a new way to change something of interest. The bigger, cheaper, newer, and more surprising the effect, the better. Using the peak procedure, my colleagues and I discovered a large and surprising effect (at a certain time during the peak procedure, the variability of bar-press duration — how long a rat holds down the bar when pressing it — became much larger). When I first saw the result, I assumed it was due to a software mistake. It turned out to be a window into what controls the variability of behavior — an easy way of studying that. In that sense it was also a new method.

I don’t know if the two job candidates I interviewed were doing either of these two things. Maybe not. My broader point is that if you don’t have a good understanding of how to choose a research topic you will have to retreat to studying something simply because others are studying it. Which is exactly the wrong thing to do if you want to be an innovator and a leader.