PayPal, the [Empty Promise] Way to Pay

A few minutes after I sent a credit-card payment for a Wii via PayPal, I phoned PayPal asking them to cancel the transaction. “Your payment did not go through,” I was told. “Did not.” So there was no need to cancel it. I was safe. To warn others, I wrote my earlier post about this.

Well, I was misinformed. My payment was not unauthorized, i.e., dead — it was, rather, not-yet-authorized. When I phoned PayPal, it could have been canceled but it wasn’t. A few days later it went through. Maybe I am easily amazed but this is amazing. At PayPal customer service, the account history screen seen by employees does not distinguish between two meanings of unauthorized: “authorization failed” and “not yet authorized”. What is this, 1960?

I was pissed. I called PayPal and was told in part that this was somehow my fault. I should have known [something]. To file a dispute I must call another number. I called that number. I filed the dispute. You’re safe, I was told. Will I have to call again? I asked. No, I was told.

Well, I wasn’t safe. Although I won the dispute, there was no money in the seller’s account. A possibility that hadn’t been mentioned. Too bad for me.

So I phoned my credit card company. I was told I should get my money back from either PayPal or the credit-card company. Fearing more untrustworthiness from PayPal, I emptied my PayPal account.

Bonus: PayPal helping scammers, from the comments:

Same story! $300 for a Wii and a Nintendo DS! The day after my husband ordered, I tried to go back to the website and order some games. I got a message saying the company “could not accept PayPal at this time.” I then emailed PayPal’s customer service and asked them if this meant the company was not reputable and I should not do business with them. PayPal said, and I quote, “It does not mean that they are not reputable, they could be experiencing problems with their internet connection to PayPal.”

The Financial System and the Immune System

In this interview, Nassim Taleb says, as he has often said, that booms and busts are a fact of financial life; what we should do is make the financial system robust against them. He put it like this:

Capitalism will always produce shocks and crashes. I want a society that has a buffer against shocks.

Likewise, I say bacteria are a fact of life. To be healthy we need to make our bodies resistant to them — which means having a well-functioning immune system.

These are not subtle or difficult points. What interests me is the difficulty that experts have appreciating them. To repeat a story I’ve told before on this blog, a few years ago I noticed that the UC Berkeley School of Public Health had a wide-ranging epidemiology course taught by someone I knew. I phoned him. “Will the course cover what makes us more or less susceptible to infection?” I asked. “No,” he said. I wasn’t exactly surprised — I have never seen this topic covered in any epidemiology textbook or even any epidemiology research paper — but still it is an amazing omission. They know we have an immune system, they just don’t think it matters! There’s an elephant in the room, and they’re ignoring it.

The parallel point about the financial system is that there is no study of what makes a financial system robust against shocks. Somehow finance professors, like epidemiology professors, haven’t grasped that something is missing.

Here are two more vast areas of ignorance:

1. Scientists know a lot about how to test ideas. They know almost nothing about how to come up with ideas worth testing. When a good way to generate ideas comes along — such as self-experimentation — they are dismissive. This is truly crippling: without new ideas to test, interesting new experimental effects go undiscovered. Experimental psychology suffers from this problem. Experimental psychologists could self-experiment, but they don’t.

2. Economists know very little about how to generate new businesses — what makes the rate of new-business generation high or low. I came across a 500-page introductory economics textbook that had three empty paragraphs on the topic. Without new businesses to solve the problems created by old businesses (such as pollution), your society is in real trouble. The problems will pile up unsolved. This is what Jane Jacobs saw so clearly in The Economy of Cities and Jared Diamond completely missed in Collapse.

How to Avoid Infection: Something I Didn’t Know

A book called Survival of the Cleanest (2005) by Jacob I. T. Van Der Merwe is about how to avoid infection. As far as I could tell from Google Books, it says nothing about how to boost your immune function. It is all about avoiding public bathrooms and washing your hands frequently, and it points out the many ways in which we can get infected (e.g., by touching shopping carts). It is heartfelt but I didn’t find it persuasive. There was almost no data about the efficacy of the book’s thousands of suggestions.

Here is something I couldn’t find in the book. A few months ago, I noticed that my eyes itched. Apparently I had some sort of infection. My eyes almost never itch and this happened to coincide with something else very rare: I hadn’t changed the pillowcases on my bed in a few weeks. So I started changing my pillowcases more often. The itching went away and hasn’t returned. My explanation: The pillowcases were acting as staging areas for the bacteria. Ordinarily my immune system would fight them off but on the pillowcases they were safe. The pillowcases shifted the balance of power.

Survival of the Cleanest does say “correctly laundering clothes kills germs and drastically reduces the risk of infection” but since this particular bit of vague advice (what’s “correctly”?) is mixed with a thousand other bits of advice, such as avoiding doorknobs, it doesn’t get the attention it deserves. For what it’s worth, when I do laundry I do a second cycle without soap, in order to get a really good rinse. I’m less interested in killing germs than I am in washing them off.

Does Bad Medicine Drive Out Good? The Case of Eczema

In an article on weight regulation I read this:

One subject . . . developed symptoms possibly related to EFA [essential fatty acid] deficiency (ie, mild eczema relieved by the addition of fat to the diet).

In other words, the subject — in a metabolic ward at Rockefeller University where everything he ate was supplied by the researchers — developed eczema when fed a zero-fat diet. When fat was added, the eczema disappeared. The researchers understood that not enough fat in your food can cause eczema. This research was done around 1960. The conclusion is supported by dozens of reports from people doing the Shangri-La Diet who said that when they started drinking oil their skin improved. Dry areas disappeared. I found the same thing myself. (And judging by the large fraction of people who have dry skin, a lot of people aren’t eating enough fat.)

The notion that eczema can be cured by eating more fat — perhaps high in omega-3 — could hardly be simpler. Around 1960, at least some doctors understood this (in a situation, I admit, where it was easy to understand). Yet here is how eczema is treated today, according to Bottom Line/Women’s Health (April 2009, p. 9):

Eczema (dry, itchy, swollen skin) usually is treated with topical anti-inflammatory cream twice daily during flare-ups. Patients who applied tacrolimus (Protopic) twice weekly to lesion-prone areas even when no lesions were visible went 142 days between flare-ups, on average . . . versus 15 days for placebo users. Tacrolimus can cause nausea and muscle pain and may increase skin cancer risk — ask your doctor about the pros and cons of preventative eczema treatment.

The information comes from a study done by Sakari Reitamo, a professor of dermatology at University of Helsinki, and others published recently in Allergy.

The surface things — the things that impress many readers — appear good: large sample, big difference between groups, peer-reviewed journal, good university. Yet once you know that eczema can be cured by eating more fat, the whole thing sounds Orwellian.

Genes Or Environment . . . Or Environment?

Forty or fifty years ago, psychologists and other scientists talked about “genes” determining this or that. (James Watson still talks this way.) A certain percentage of the variation of this or that (e.g., intelligence) was attributed to “genes”. Hardly anyone outside genetics or behavior genetics knew what this meant, but many people thought they did. In reaction to the huge misunderstanding (e.g., those who said intelligence was “80% genetic” but did not know what this meant), psychologists began to talk about gene-environment interaction. “Is the area of a rectangle determined by its height or its width?” they like to say.

But notice how fact-free this view is. A tiny number of studies have observed gene-environment interactions, but such studies are very difficult to do. I think this has made it hard to realize something basic and important. Years ago, I heard a talk about squirrel circadian rhythms by Patricia DeCoursey, the scientist who introduced the concept of phase-response curves. At her talk, she showed results from about 15 squirrels. She tested each one — with an emphasis on individual results that resembles self-experimentation — to determine how much light it needed to become entrained to a 24-hour light/dark cycle. One squirrel needed much stronger light than the others.

Here was an interesting finding that another scientist might have missed. What did it mean? Because the squirrels lived under very similar conditions (e.g., identical diets), it was almost surely a genetic difference. Let’s assume it was. In nature, sunlight is plenty strong; the lab light was weaker. In nature, the genetic difference wouldn’t make an observable difference. It became visible only under artificial conditions, and only because those conditions didn’t supply enough of something important (sunlight). In other words, the newly visible genetic difference implied there was something lacking in the artificial conditions. The genetic difference implied the environment mattered. The opposite of the usual interpretation.
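The squirrel example can be sketched as a toy simulation. All the numbers below (light levels, thresholds, the one unusual squirrel) are invented purely to illustrate the logic, not taken from DeCoursey’s data:

```python
# Toy model: a genotype difference in light sensitivity is invisible under
# strong (natural) light and becomes visible only under weak (lab) light.
# Every number here is invented for illustration.

def entrains(light_lux, threshold_lux):
    """A squirrel entrains to the light/dark cycle if the light it
    receives exceeds its genetically determined threshold."""
    return light_lux >= threshold_lux

# 15 squirrels: 14 typical, 1 needing much stronger light (hypothetical values)
thresholds = [100] * 14 + [2000]

natural_sunlight = 50_000  # lux, roughly full daylight
weak_lab_light = 500       # lux, roughly indoor lighting

outdoors = [entrains(natural_sunlight, t) for t in thresholds]
indoors = [entrains(weak_lab_light, t) for t in thresholds]

print(sum(outdoors))  # prints 15: all entrain, the genetic difference is invisible
print(sum(indoors))   # prints 14: the deprived environment reveals the difference
```

Under adequate (natural) light all 15 genotypes behave identically; the difference surfaces only when the environment is deficient, which is the point of the paragraph above.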

I don’t know any reason to think this is an unusual case. Aaron Blaisdell told me a story that shows its relevance to human health. Aaron is unusually sensitive to sunlight. Until recently, he could only spend 5 or 10 minutes in the sun before it became unpleasant. The condition is genetic. His mother has it; her father had it. It’s called Erythropoietic Protoporphyria. It is autosomal-dominant. Scientists even know where the gene is. That’s where the understanding of most scientists stops. A genetic condition. Recently, however, Aaron drastically changed his diet with great results, as noted earlier. At the same time as the dietary changes, his sun sensitivity got much better. He can now stay in the sun for an hour or more without discomfort. This is a gene-environment interaction, of course, but of a particular sort: The genetic effect showed there was something wrong with the environment, just as it did in DeCoursey’s experiment.

Sure, there’s always genetic variation — it’s just usually hard to see. The wrong environment makes it much easier to see. It reveals a range of genotypes, all of which would be harmless in the right environment. So when you come across a “genetic disorder” such as Erythropoietic Protoporphyria, it is likely to imply an environmental problem. No one had ever told Aaron or his mother or her father that their condition suggested that environmental changes would help them.

Antibiotics and Debt: Sources of Weakness

Alexander Fleming, the Scottish bacteriologist who discovered penicillin, the first antibiotic, served in the military during World War I. According to Happy Accidents (2007) by Morton Meyers, soldiers in that war often died from infections in relatively minor wounds. Rather than conclude that something was wrong with their immune systems, and wonder why, Fleming — unsurprisingly for a bacteriologist — began to think we needed more substances that killed bacteria. Nearly a century later, the blind spot still exists. A few years ago I noticed that a wide-ranging course on epidemiology was being taught in the UC Berkeley School of Public Health. I knew the professor. I asked him, “Will the course cover what makes the immune system weak or strong?” “No,” he said. You will look in vain for that topic in any epidemiology text. To call it a blind spot is being nice: half the subject — the more important half — is being ignored. And that is in Schools of Public Health, which favor prevention. Medical schools are worse.

In an editorial in today’s Financial Times, Nassim Nicholas Taleb and Mark Spitznagel point out that debt is inherently destabilizing because it creates less room for error. Financial professionals and economists, including those at the very top, don’t realize this:

Alan Greenspan, former Federal Reserve chairman, tried playing with the business cycle to iron out bubbles, but it eventually got completely out of control. Bubbles and fads are part of cultural life. We need to do the opposite to what Mr Greenspan did: make the economy’s structure more robust to bubbles.

Taleb and Spitznagel note that the dotcom bubble, when it burst, had only minor consequences. That’s because it was an equity bubble rather than a debt bubble. The stimulus package is just more debt: public rather than private. It doesn’t reduce the source of the problem: A too-fragile system. A great point — fascinating how rarely I hear it.

Just as Greenspan failed to understand the problem and chose the wrong lever to pull, so did Fleming and a million doctors and medical/drug researchers. They have tried to deal with a too-fragile system by killing bacteria. Bacteria, like financial bubbles and fads, are part of life. We need to make our bodies more robust to them. Fermented foods do that. By killing off bacteria inside our bodies, antibiotics do the opposite: Make us even more fragile.

Progress Announced in Scurvy Research

From here:

“Cure just around the corner”

WASHINGTON (Reuters) – Scientists from the National Scurvy Institute (NSI) recently convened a 5-day conference to assess progress in the War on Scurvy. . . .
A cure is just around the corner, announced a spokesman at the conference. Over the past 30 years the NSI, the NIH (National Institutes of Health), and the ASS (American Scurvy Society) have spent over $30 billion on scurvy research. Pharmaceutical companies have over 80 new drugs in development to combat scurvy, according to the FDA.

The 5-year survival rate after diagnosis of scurvy is over 50%, up from 30% just 20 years ago, although 500,000 Americans continue to die of the disease each year. . . .
Risk factors for scurvy include cigarette smoking, diets high in saturated fat, and long ocean voyages. Sailors are particularly at risk for the disease. A researcher at the University of Washington has speculated that there may be a substance in sea water that triggers the disease. . . .
Researchers at the University of Maryland, working on the Human Genome Project, have identified a “Scurvy gene.” From this it may be possible to develop a test to identify individuals at risk for the disease.

Scurvy doctors have long emphasized the importance of frequent screenings for scurvy in at-risk individuals. The disease can be effectively treated if detected early enough. Men and women over 40 should get regular checkups.

Conventional treatments for scurvy include frequent gum cleanings to combat the bleeding associated with the disease, surgical amputation of atrophied limbs that have been ravaged by the disease, and stimulants to combat the lassitude characteristic of the disease.

A pilot research program has been proposed to NSI that would study a possible connection between Vitamin C and scurvy. A study conducted on 20,000 Americans at the University of Florida showed a substantially higher rate of scurvy in people who don’t eat fruits and vegetables. Dr. Henry Jacobson, assistant director of NSI, was quick to point out that no such connection has ever been scientifically proven. Vitamin C as a treatment for scurvy remains on NSI’s “unproven remedies” list. Clinical trials conducted in the 70’s showed no effect of Vitamin C on scurvy, added an NSI spokesman.

In related news, officials at the American Pellagra Society (APS) have designated the month of May as “Pellagra Awareness Month”…

Teaching Kids to Cook

Outside Berkeley Whole Foods I encountered this cooking camp in session — they teach kids 8-12 years old to cook in two-week sessions, 4 hours/day. I love the idea. I think childhood obesity is due to eating ditto foods (foods, usually factory-made, that taste exactly the same each time) — teaching someone how to cook is a good way to reduce that.

I asked if they included any fermented foods in the curriculum. “Tomorrow we’re making tofu,” said one of the counselors — a Nutrition major at UC Berkeley. Tofu is not a fermented food, I said. She wasn’t sure what a fermented food was.

Yes, Canker Sores Prevented (and Cured) by Omega-3

Here is a comment left on my earlier canker-sore post by a reader named Ted:

I found out quite by accident WALNUTS get rid of [canker sores] quite quickly. The first sign of an ulcer I chew walnuts and leave the paste in my mouth for a little while (30 seconds or so).

The first time was by accident, my ulcers disappeared so quickly I knew it had to be something I ate. And the only thing I had eaten differently the past day was walnuts.

Flaxseed oil and walnuts differ in lots of ways but both are high in omega-3. My gums got much better around the time I started taking flaxseed oil. I neither noticed nor expected this; my dentist pointed it out. Several others have told me the same thing. Tyler Cowen’s gums got dramatically better. One reader started and stopped and restarted flaxseed oil, making it blindingly clear that the gum improvement is caused by flaxseed oil. There is plenty of reason to think the human diet was once much higher in omega-3. All this together convinces me that omega-3 can both prevent and cure canker sores. Not only that, I’m also convinced that canker sores are a sign of omega-3 deficiency. You shouldn’t just get rid of them with walnuts; you should change your diet. Omega-3 has other benefits (better brain function, less inflammation, probably others).

Let’s say I’m right about this — canker sores really are prevented and cured by omega-3. Then there are several things to notice.

1. Web facilitation. The discovery was made possible by the internet. My initial interest in flaxseed oil came from reading the Shangri-La Diet forums. I didn’t have to read a single book about the Aquatic Ape theory; I could learn enough online. Tyler Cowen’s experience was in his blog. Eric Vlemmix contacted me by email. No special website was involved.

2. Value of self-experimentation. My flaxseed oil self-experimentation played a big part, although it had nothing to do with mouth health. These experiments showed dramatic benefits — so large and fast that something in flaxseed oil, presumably omega-3, had to be a necessary nutrient. Because of these results, I blogged about omega-3 a lot, which is why Eric emailed me about his experience.

3. Unconventional evidence. All the evidence here, not just the self-experimentation, is what advocates of evidence-based medicine and other evidence snobs criticize. Much of it is anecdotal. Yet the evidence snobs have, in this case, nothing to show for their snobbery. They missed this conclusion completely. Nor do you need a double-blind study to verify/test this conclusion. If you have canker sores, you simply drink flaxseed oil or eat walnuts and see if they go away. Maybe this omnipresent evidence snobbery is . . . completely wrong? Maybe this has something to do with the stagnation in health research?

4. Lack of credentials. No one involved with this conclusion is a nutrition professor or dentist or medical doctor, as far as I know. Apparently you don’t need proper credentials to figure out important things about health. Of course, we’ve been here before: Jane Jacobs, Elaine Morgan.

5. Failure of “trusted” health websites. Health websites you might think you could trust missed this completely. The Mayo Clinic website lists 15 possible causes — none of them involving omega-3. (Some of them, we can now see, are correlates of canker sores, also caused by lack of omega-3.) If canker sores can be cured with walnuts, the Mayo list of treatments reads like a list of scurvy cures from the Middle Ages. The Harvard Medical School health website is even worse. “Keep in mind that up to half of all adults have experienced canker sores at least once,” it says. This is supposed to reassure you. Surely something this common couldn’t be a serious problem.

6. Failure of the healthcare establishment. Even worse, the entire healthcare establishment, with its vast resources, hasn’t managed to figure this out. Canker sores are not considered a major health problem, no, but, if I’m right, that too is a mistake. They are certainly common. If they indicate an important nutritional deficiency (too little omega-3), they become very important and their high prevalence is a major health problem.
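The simple test described in point 3 — add omega-3, remove it, add it back, and watch the symptom — can be sketched as an n-of-1 (ABA-style) comparison. All the symptom data below are invented for illustration; nothing here is from an actual log:

```python
# Hypothetical n-of-1 sketch: track canker-sore days during alternating
# "off" and "on" (flaxseed oil / walnuts) phases, then compare rates.
# All data invented for illustration.

phases = {
    "off_1": [1, 1, 0, 1, 1, 0, 1],  # 1 = canker sore present that day
    "on_1":  [1, 0, 0, 0, 0, 0, 0],
    "off_2": [0, 1, 1, 1, 0, 1, 1],
    "on_2":  [0, 0, 0, 0, 0, 0, 0],
}

def rate(days):
    """Fraction of days with a canker sore."""
    return sum(days) / len(days)

off = phases["off_1"] + phases["off_2"]
on = phases["on_1"] + phases["on_2"]
print(f"off: {rate(off):.2f}  on: {rate(on):.2f}")  # prints: off: 0.71  on: 0.07
```

The off-on-off-on alternation is what makes this more persuasive than a single before/after observation: if the symptom tracks the phases, coincidence becomes an increasingly strained explanation.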

The Nutritional Wisdom of Young Chicks

After I wrote that young children may be picky eaters because they are offered unhealthy food, some readers disagreed. But here is another example:

I myself have been amazed to see hungry young chicks refuse to touch a purified diet until we added thiamin, which we discovered to have been accidentally omitted from a published formula.

From Kenneth Carpenter’s excellent Beriberi, White Rice, and Vitamin B (2000), p. 193. If young chicks can better judge the nutritional quality of food than nutrition professors, perhaps young children can, in some situations, better judge the nutritional quality of food than their parents. And rightly decide that food their parents think is healthy isn’t so healthy.

The wisdom of the one-year-old picky eater.

What Causes Asthma? Not What the Tovars Think

From Joyce Cohen’s The Hunt column:

For reasons unknown, Florida didn’t agree with little Noah Tovar. Since his toddler years, Noah, now 7, had suffered terribly from asthma. His parents, Jari and Selene Tovar, moved their family several times, trying to escape the mold or pollen or whatever it was that caused his breathing problems. Nothing helped much.

Noah’s parents didn’t know, I can tell, about a 1992 study of childhood asthma and allergies in Germany. Maybe childhood asthma is caused by air pollution, the researchers thought. Let’s test that idea by comparing a clean West German city (Munich) with a dirty East German one (Leipzig). Here’s one of the results:

The lifetime prevalence of asthma diagnosed by a doctor was 7.3% (72) in Leipzig and 9.3% (435) in Munich.

Less asthma in the dirty city! The difference wasn’t significant, but differences for related conditions, such as hay fever and rhinitis (runny nose), were in the same direction and were significant. Hay fever was much rarer in Leipzig.

Noah’s asthma cleared up, to his parents’ surprise, on a trip to New York. So the family moved to New York.

Even though “everyone was under the impression that New York would cause him more distress, it was just the opposite,” Mrs. Tovar said. “Not one doctor nor myself can explain what it is.”

Mrs. Tovar’s doctors are badly out of date. The hygiene hypothesis has been around since the 1990s, supported by plenty of data that, like the German study, show that childhood allergies are less common in dirtier environments. Noah is better in New York because New York air is dirtier than Florida air — that’s the obvious explanation.

In The Probiotic Revolution (2007) by Gary Huffnagle with Sarah Wernick, which I’ve mentioned earlier, Dr. Huffnagle, a professor of immunology at the University of Michigan, describes a self-experiment he did:

Could probiotics relieve something as tenacious as my lifelong allergies and asthma? I decided to take a probiotic supplement and make a few simple changes to my diet, just to see what happened. Yogurt became my new breakfast and my new bedtime snack. I also upped my intake of fruits and vegetables. Whenever possible, I substituted whole grains for processed ones. And I tried to cut back on sugar. [Why he made the non-probiotic changes is not explained. In another part of the book he says he also increased his spice intake.] No big deal.

Because I doubted this little experiment would work, I didn’t mention it to anyone, not even my wife. And I didn’t bother to record my allergy symptoms. . . . My “aha” moment came after about a month: I’d spent the evening writing a grant proposal, a box of tissues at my side. After all these years, I knew to be prepared for the inevitable sneezing and runny nose caused by my mold allergies, which kicked up at night. But when I finished working and cleared the table, I realized I hadn’t touched the tissues. And as I looked back on the previous month, I could see other changes. This wasn’t my first sneeze-free evening; I hadn’t needed my asthma inhaler for several months. To my astonishment, the experiment had been a great success.

This is a great and helpful story. Only after I read it did I realize I’d had a similar experience. I’ve never had serious allergies but I used to sneeze now and then in my apartment and my nose would run a lot; I went through more than one box of Kleenex in a month, sometimes four tissues in a single morning. In January, I made just one change: I started to eat lots more fermented foods (yogurt, kimchi, kefir, etc.). My sneezing and Kleenex use are now almost zero.

The Tovars can live wherever they want, I’m sure, if they feed their son plenty of fermented food.

Previous post about childhood allergies and fermented food.

More After the column appeared, someone wrote to the Tovars:

Funny, same thing happened to me. I moved from England, where I had chronic asthma, to New York City, where I had none. Stayed in NY for twenty years asthma-free, then moved back to England with my wife for the last ten years, and my asthma has returned the whole time I’ve been back.

The Experts Speak: Nutrition

I have nothing against a paleolithic diet, but I think its advocates, like many experts, are overconfident. It’s not easy to know which features of a diet that varies in 20 ways from modern diets are the crucial ones. I came across this while reading about paleolithic diets:

The general gist of eating like a caveman—namely, focusing on foods in their whole, natural state, is not going to get much argument. “It comes down to the advice your mother gave you,” says Leonard [William Leonard, chair of the anthropology department at Northwestern University]. “Eat a balanced diet and a diversity of foods.”

I beg to differ.

1. Whole, natural state. I find flaxseed oil very helpful. It supplies omega-3 missing from my diet, but presumably present in diets that contained lots of seafood or vegetation-fed meat. Flaxseed oil is not food in a whole and natural state.

2. Whole, natural state. I find fermented food very helpful. Bacteria break down food, making it less whole. Modern food of all sorts is unnaturally low in bacteria (due to refrigeration, food safety laws, shelf-life requirements, etc.), just as modern meat is unnaturally low in omega-3. Fermented food is unnaturally high in bacteria, correcting the deficit.

3. The advice your mother gave you. Traditional diets, yes, what your mom thinks, no. When I was growing up we ate margarine instead of butter — poor choice. We had skim milk, not whole milk — poor choice. The absence of butter and whole milk is, if Weston Price is right, why my teeth are slightly crooked. We ate almost no fermented food — very poor choice. (Which I suspect is why I had mild allergies.) We rarely ate fish — poor choice. And yet we didn’t have a TV — very good, very unusual choice. Even my mom, who thought for herself far more than most moms, had serious misconceptions about nutrition. Given the epidemic of childhood obesity, not to mention less visible increases in autism, allergies, and ADHD, I am very skeptical that the average kid’s mom knows what to eat.

4. Eat a balanced diet. Plenty of communities in excellent health eat diets that American experts would describe as not balanced at all — no fruit for example, or too much dairy. Eskimos and the Swiss in isolated villages studied by Weston Price are two examples. Price found that a wide range of diets, most violating one or more popular nutritional precepts, produced excellent health.

5. A diversity of foods. Several healthy communities studied by Price did not eat a wide range of foods. The human diet became a lot more diverse around the time of the “broad-spectrum revolution” — broad-spectrum meaning wider range of food. Around that time human height decreased. Apparently the new, more diverse diet was less healthy than the old diet. An anthropology professor might know this.

The title of this post comes from the book The Experts Speak which is full of examples of how experts were wildly wrong.

Brainwashing in High Places: Genes and Disease

From an article by Nicholas Wade in the NY Times:

Since the human genome was decoded in 2003, researchers have been developing a powerful method for comparing the genomes of patients and healthy people, with the hope of pinpointing the DNA changes responsible for common diseases.

This method, called a genomewide association study, . . . has been disappointing in that the kind of genetic variation it detects has turned out to explain surprisingly little of the genetic links to most diseases.

Wade means the genetic variation is surprisingly poor at distinguishing healthy people and sick people. That is the empirical result.

Unlike the rare diseases caused by a change affecting only one gene, common diseases like cancer and diabetes are caused by a set of several genetic variations in each person.

This is the faith-based statement. Wade knows this how? What about the possibility that cancer and diabetes are caused by environmental differences? That there are consistent environmental differences (e.g., dietary differences) between those who get cancer and those who don’t?

I know of no evidence that common diseases like cancer and diabetes are caused by several genetic variations in each person. I know of a lot of evidence that they are caused by the wrong environment — lung cancer caused by smoking, for example.

Preachers say: If you do X, you will go to heaven. In other words, do something that helps me (the preacher) now and you will benefit later. It has been an effective argument. This is what the geneticists have been doing. They say to granting agencies — who believe what they read in the NY Times — if you give us money now, we will find the genetic basis of Disease X. Just as there was no clear reason to believe the preachers’ claims, there was no clear reason to believe the geneticists’ predictions. Unfortunately for them, their predictions, unlike the preachers’ claims, can be shown to be wrong.

The success of my self-experimentation at solving common problems led me to think the environment is more powerful than NY Times readers, or at least NY Times reporters, had been led to believe. Good news for people with problems but bad news for scientists who want large grants. My research was essentially free.

History Repeating Itself: Fear of Bacteria

In the late 1800s in the United States, babies started developing scurvy; there was a veritable plague. It turned out that the vast majority of victims were being fed milk that had been heat treated (as suggested by Pasteur) to control bacterial disease. Pasteurization was effective against bacteria, but it destroyed the Vitamin C.

From a history of nutrition. Now children are probably getting all sorts of immune disorders, such as hay fever, for the same core reason: fear of bacteria.

Nobel-Prize Cluelessness (stomach ulcers)

Wherein the Nobel Prize is given for discoveries that are misleading. From a New Scientist article about medical self-experimentation:

Junior doctor Barry Marshall was sure the medical establishment was wrong about the cause of stomach ulcers. The received wisdom was that they were caused primarily by lifestyle factors, but Marshall and pathologist Robin Warren were sure that the bacterium Helicobacter pylori was to blame.

It turned out that Helicobacter pylori was present in half the stomachs in the world — only a tiny fraction of which developed ulcers. So much for causation. Marshall and Warren did not consider that lifestyle factors might cause immune efficiency to go down, leading to increased growth of the bacterium. In a famous example of self-experimentation, Marshall ingested a giant amount of the supposedly dangerous bacterium — but, uh-oh, didn’t get an ulcer.

Thanks to JR Minkel.

Will Like vs. Might Love vs. Might Hate

What to watch? Entertainment Weekly has a feature called Critical Mass: the ratings of seven critics are averaged. Those averages are the critical response that most interests me. Rotten Tomatoes also computes averages over critics, on a 0-100 scale. In recent months, my favorite movie was Gran Torino, which rated 80 at Rotten Tomatoes (quite good). Slumdog Millionaire, which I also liked, got a 94 (very high).

Is an average the best way to summarize several reviews? People vary a lot in their likes and dislikes — what if I’m looking for a movie I might like a lot? Then the maximum (best) review might be a better summary measure; if the maximum is high, it means that someone liked the movie a lot. A score of 94 means that almost every critic liked Slumdog Millionaire, but the more common score of 80 is ambiguous: Were most critics a bit lukewarm or was wild enthusiasm mixed with dislike? Given that we have an enormous choice of movies — especially on Rotten Tomatoes — I might want to find five movies that someone was wildly enthusiastic about and read their reviews. Movies that everyone likes (e.g., 94 rating) are rare.

Another possibility is that I’m going to the movies with several friends and I just want to make sure no one is going to hate the chosen movie. Then I’d probably want to see the minimum ratings, not the average ratings.

So: different questions, wildly different “averages”. I have never heard a statistician or textbook make this point except trivially (if you want the “middle” number choose the median, a textbook might say). The possibility of “averages” wildly different from the mean or median is important because averaging is at the heart of how medical and other health treatments are evaluated. The standard evaluation method in this domain is to compare the mean of two groups — one treated, one untreated (or perhaps the two groups get two different treatments).
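A toy sketch makes the point concrete. The scores below are made up for illustration: two hypothetical movies with the same mean rating, one a lukewarm consensus, the other a mix of enthusiasm and dislike.

```python
from statistics import mean

# Hypothetical critic scores (0-100) for two movies; both average 80.
consensus = [80, 81, 79, 80, 80, 81, 79]   # lukewarm agreement
polarized = [98, 97, 60, 62, 96, 63, 84]   # wild enthusiasm mixed with dislike

for name, scores in (("consensus", consensus), ("polarized", polarized)):
    # "Might I love it?" -> max; "might someone hate it?" -> min;
    # the usual summary -> mean, which cannot tell these two movies apart.
    print(f"{name}: mean={mean(scores):.0f} max={max(scores)} min={min(scores)}")
```

The mean is 80 in both cases; only the max (98 vs. 81) and the min (60 vs. 79) reveal which movie someone might love and which movie no one will hate.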

If there is time to administer only one treatment, then we probably do want the treatment most likely to help. But if there are many treatments available and there is time to administer more than one treatment — if the first one fails, try another, and so on — then it is not nearly so obvious that we want the treatment with the best mean score. Given big differences from person to person, we might want to know what treatments worked really well with someone. Conversely, if we are studying side effects, we might want to know which of two treatments was more likely to have extremely bad outcomes. We would certainly prefer a summary like the minimum (worst) to a summary like the median or mean.

Outside of emergency rooms, there is usually both a wide range of treatment choices and plenty of time to try more than one. (Suppose, for example, you want to lower your blood pressure.) In such situations, extreme outcomes, even if rare, become far more important than averages. You want to avoid the extremely bad (even if rare) outcomes, such as antidepressants that cause suicide. And if a small fraction of people respond extremely well to a treatment that leaves most people unchanged, you want to know that, too. Non-experts grasp this, I think. This is why they are legitimately interested in anecdotal evidence, which does a better job than means or medians of highlighting extremes. It is also why medical experts who deride “anecdotal evidence” are like people trying to speak a language they don’t know — and don’t realize they don’t know. (Their cluelessness is enshrined in a saying: the plural of anecdote is not data.) It is the medical experts, who have read the textbooks but fail to understand their limitations, whose understanding has considerable room for improvement.

What One Economist Has Learned From the Financial Crisis

Three things, he said:

  1. Finance professors have all been working for hedge funds. Their research has been about how to price derivatives and options. In other areas of economics, the research topics are much broader and include policy questions.
  2. Macroeconomics hasn’t made progress since the 1930s.
  3. Recommendations about what to do about the crisis, even from economics professors, are based on very little that they learned in graduate school. They hardly differ from lay opinions. Listening to his colleagues’ recommendations, he expected them to be backed up by something solid. They weren’t.

Standing, Sleep, and Stereotype Threat

Part of my long self-experimentation paper was about a connection between standing and sleep. If I stood a lot (more than 8 hours), I slept better.

Why might this be? I argued that if you use sleep to maintain muscles, you will begin to need sleep to maintain muscles. (And the more you use a muscle, the more maintenance it needs. Thus the stand/sleep connection.) Catherine Johnson describes here a parallel process: Because men opened doors for her (in college), she began to need them to open doors for her. In situations where she was stereotypically expected to be weak, she actually became weaker (mentally).

However much sense this makes, it is not part of conventional thinking. Should we fight germs by killing them? Of course, says the conventional problem solver. The notion that germs might keep us strong isn’t part of the discussion. Let me be more explicit: if you make everything clean, you may begin to need everything clean. The overwhelming evidence for the hygiene hypothesis shows that this line of thinking is reasonable.

So that’s three examples of a general principle, an advanced version of “use it or lose it”.

If you think this is somehow obvious, let me ask: What about terrorism? Should we simply try to eliminate it? Or is the question of how to respond more complex?

The Twilight of Expertise (medical doctors)

Long ago the RAND Corporation ran an experiment, the Health Insurance Experiment, which found that additional medical spending provided no additional health benefit (except in a few cases). People who didn’t like the implication that ordinary medical care was at least partly worthless could say that it was only at the margin that the benefits stopped. This was unlikely but possible. Now a non-experimental study has found essentially the same thing:

To that end, Orszag has become intrigued by the work of Mitchell Seltzer, a hospital consultant in central New Jersey. Seltzer has collected large amounts of data from his clients on how various doctors treat patients, and his numbers present a very similar picture to the regional data. Seltzer told me that big-spending doctors typically explain their treatment by insisting they have sicker patients than their colleagues. In response he has made charts breaking down the costs of care into thin diagnostic categories, like “respiratory-system diagnosis with ventilator support, severity: 4,” in order to compare doctors who were treating the same ailment. The charts make the point clearly. Doctors who spent more — on extra tests or high-tech treatments, for instance — didn’t get better results than their more conservative colleagues. In many cases, patients of the aggressive doctors stay sicker longer and die sooner because of the risks that come with invasive care.

Perhaps the doctors who ordered the high-tech treatments, when questioned about their efficacy, would have responded as my surgeon did to a similar question about the surgery she recommended (and would make thousands of dollars from): The studies are easy to find, just use Google. (There were no studies.)

It’s like the RAND study: Defenders of doctors will say that some of them didn’t know what they were doing but the rest did. But that’s the most doctor-friendly interpretation. A more realistic interpretation is that a large fraction of the profession doesn’t care much about evidence. In everyday life, evidence is called feedback. If you are driving and you don’t pay attention to and fix small deviations from the middle of the road, eventually you crash. You don’t need a double-blind clinical trial not to crash your car — a lesson the average doctor, the average medical school professor, and the average Evidence-Based-Medicine advocate haven’t learned.