A few days ago I wrote about a study that suggested that people who’d had bariatric surgery were at much higher risk of liver poisoning from acetaminophen than everyone else. I learned about the study from an article by Erin Allday in the San Francisco Chronicle. The article included this:
At this time, there is no reason for bariatric surgery patients to be alarmed, and they should continue using acetaminophen if that’s their preferred pain medication or their doctor has prescribed it.
This was nonsense. The evidence for a correlation between bariatric surgery and risk of acetaminophen poisoning was very strong. Liver poisoning is very serious. Anyone who’s had bariatric surgery should reduce their acetaminophen intake.
Who had told Allday this nonsense? The article attributed it to “the researchers” and “weight-loss surgeons”. I wrote Allday to ask.
She replied that everyone she’d spoken to for the article had told her that people with bariatric surgery shouldn’t be alarmed. She did not understand why I considered the statement (“no need for alarm”) puzzling. I replied:
The statement is puzzling because it is absurd. The evidence that acetaminophen is linked to liver damage in people with bariatric surgery is very strong. Perhaps the people you spoke to didn’t understand that. The size of the sample (“small”) is irrelevant. Statisticians have worked hard to be able to measure the strength of the evidence independent of sample size. In this case, their work reveals that the evidence is very strong.
If the experts you spoke to (a) didn’t understand statistics and (b) were being cautious, that would be forgivable. That’s not the case here. They (a) don’t understand statistics and (b) are being reckless. With other people’s health. It’s fascinating, and very disturbing, that all the experts you spoke to were like this.
I have no reason to think that the people Allday talked to were more ignorant than typical doctors. I expect researchers to be better at statistics than average doctors. One possible explanation of what Allday was told is that most doctors, given a test of basic statistical concepts, would flunk. Not only do they fail to understand statistics, they don’t understand that they don’t understand. Another possible explanation is that most doctors have a strong “doctors do everything right” bias, even when it endangers patients. Either way, bad news.
“17 percent had had weight-loss surgery. . . . Less than 1 percent of the general population has had the surgery.”
The general-population rate might be misleading. What if bariatric patients have a much higher probability of taking acetaminophen? What the researchers should have compared is the rate of liver poisoning among acetaminophen users in general vs. acetaminophen users who have had bariatric surgery. Maybe some researchers really do not understand statistics.
Seth: Sure, the evidence could be improved. This is always true. The sort of evidence I discuss resembles the first evidence that smoking causes lung cancer.
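To make the previous commenter’s point concrete, here is a toy calculation. The 17 percent and less-than-1-percent figures are the ones quoted from the article; the acetaminophen-use rates are invented purely for illustration.

```python
# Toy calculation. Only the 17% and ~1% figures come from the article;
# everything else is a made-up assumption for illustration.

frac_surgery_among_cases = 0.17    # share of poisoning cases with prior bariatric surgery (from the article)
frac_surgery_in_population = 0.01  # share of the general population with the surgery (from the article)

# Bayes' rule: since poisoning is rare, this odds ratio approximates the
# risk ratio of poisoning for surgery patients vs. everyone else.
naive_ratio = (
    (frac_surgery_among_cases / (1 - frac_surgery_among_cases))
    / (frac_surgery_in_population / (1 - frac_surgery_in_population))
)
print(f"naive risk ratio, surgery vs. not: {naive_ratio:.1f}")  # about 20

# Hypothetical adjustment: suppose bariatric patients were twice as likely
# to take acetaminophen at all. Conditioning on use rescales the ratio.
use_rate_surgery = 0.60  # assumed
use_rate_other = 0.30    # assumed
per_user_ratio = naive_ratio * (use_rate_other / use_rate_surgery)
print(f"risk ratio among acetaminophen users: {per_user_ratio:.1f}")  # about 10
```

Even under that hypothetical adjustment the ratio stays far above 1, so differential use would change the size of the association, not its existence.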
Seth, please consider rewriting/expanding this blog post into an Op-Ed piece. It belongs in the NY Times.
I second what Tom said.
If your only information about a study is what’s contained in a press article about it, it’s a bad idea to start criticizing researchers who are familiar with the study.
“Statisticians have worked hard to be able to measure the strength of the evidence independent of sample size. In this case, their work reveals that the evidence is very strong.”
That claim is wrong. Observational studies frequently produce results that don’t replicate even when the p values are under 0.05. See Ioannidis.
Maybe acetaminophen is more effective in people who had bariatric surgery and as a result those people are more likely to take acetaminophen than the general population.
More importantly, how many people who take acetaminophen actually develop liver poisoning? How many bariatric surgery patients who don’t take acetaminophen develop liver poisoning?
How does that risk compare to the advantages that acetaminophen provides?
Instead of contacting the reporter, how about contacting the person behind the study and asking why they think the reporter wrote “there is no reason for bariatric surgery patients to be alarmed”?
Seth: By “strength of evidence” I meant the p value. The p value is very low. Statisticians have indeed worked hard to compute p values independent of sample size. This study unquestionably makes use of acetaminophen look more dangerous for those who have had bariatric surgery. Someday the uncertainties will be resolved; until then, a causal explanation of these results (bariatric surgery makes acetaminophen more dangerous) remains plausible. Which is why acetaminophen should be avoided, or at least avoided more than before these results.
Thank you for your suggestion that I contact the researcher. That’s a good idea. Of course I can do both: contact the reporter and the researcher.
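To illustrate the p-value point in the reply above, here is a minimal sketch. The number of poisoning cases is hypothetical (the post doesn’t give the actual counts); the 17 percent and 1 percent figures are from the article.

```python
# Hedged sketch: how a small sample can still yield a tiny p value.
# n_cases is assumed; 17% and ~1% are the figures quoted in the article.
from scipy.stats import binom

n_cases = 100           # hypothetical number of poisoning cases in the study
k_with_surgery = 17     # 17% of them had had bariatric surgery
background_rate = 0.01  # <1% of the general population has had the surgery

# One-sided p value: chance of seeing 17 or more surgery patients among
# 100 cases if surgery were unrelated to poisoning.
p_value = binom.sf(k_with_surgery - 1, n_cases, background_rate)
print(f"p = {p_value:.1e}")  # on the order of 1e-15
```

This says nothing about confounding or causation; it only shows that when the observed rate is this far above the background rate, even a modest sample produces an extremely small p value.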
By synchronicity, Prof. Bruce Charlton today put up a blog post about how, when he did research in epidemiology, he came to understand that his fellow researchers don’t understand and cannot interpret statistics, including not only the researchers running clinical trials but even the professional statisticians.
http://charltonteaching.blogspot.com/2012/12/the-uk-census-in-rationalistic-secular.html
So why would anyone expect doctors to?
Speaking of incompetent mathematicians, does anyone remember that infamous newspaper column written by Marilyn vos Savant? She outlined a brain-teaser with a very counter-intuitive solution (the puzzle is called the Monty Hall problem). Many people, including some professors of mathematics, sent her nasty letters, claiming that she was wrong. She was not wrong. You can see some of the letters here:
http://marilynvossavant.com/game-show-problem/
Here, for example, is a note from someone named E. Ray Bobo, Ph.D., of Georgetown University:
An interesting follow-up question is whether any of Marilyn’s critics apologized to her later. Apparently, not many did:
http://answers.google.com/answers/threadview/id/510729.html
A friend of mine is a doctor who majored in math at Berkeley as an undergrad. He said that in med school and residency, being the one doctor in the room who actually can do and understand mathematics was kind of like having a superpower.
Marilyn’s answer is wrong, as are all the rest.
The true answer here is kind of meta, because the probabilities depend heavily on why the host does what he does and on what he would have done if another door had been chosen.
All answers assume that the host behaves in a specific way, and when they assume differently, they get different probabilities.
The true answer is that there is not enough information to calculate probabilities.
The REALLY interesting part is how this answer escapes virtually all people, even experts. It seems to be some kind of blind spot in normal humans.
First, @Kim, you’re so right. There’s really no clear answer to the Monty Hall problem, at least at the level of real life people on the real life show. I wonder why “smart” people can’t see that?
Secondly, I don’t really have enough information to tell, but I doubt that these experts are as bad at statistics as you’re saying. A more likely explanation is that they don’t feel that they have the authority to change policy regarding bariatric patients and acetaminophen. It’s the whole “trusting authority” problem. Don’t trust regular people with information. It might cause a panic!! Just let this new information about bariatric patients slowly percolate up the chain of command to whoever makes policy announcements and in 4 or 5 years, you’ll see bariatric surgeons telling their patients to avoid acetaminophen. Problem solved! [sarcasm]
Seth: “They don’t feel they have the authority to change policy.” That’s a good way to put it. That’s what I was trying to get at with my second possible explanation, they have a strong “doctors do everything right” bias.
I could not agree with you more. My mom is currently in ICU. She is a type 2 diabetic. Her blood glucose is tested every 6 hours. No matter the level, she gets insulin – the amount depends on what “the sliding scale” tells the nurse to do.
Her blood sugars are below 100 about half the time and they STILL give her insulin. I argued over and over with the nurse and MD on staff to NOT give her insulin if it was 120 or below. I finally had to sign a release. And they still did not like it.
Simply, MD’s (nurses and therapists) are unable to think. And when they think like you suggest (as they do), it’s a bad combination.
Marilyn is wrong. Look at the chart on her page. Games 3 and 6 are never played, because the host does not open the door with the auto. She just lost track of her variables: "Door #3" is the door that the host opens, not literally Door #3, in the games where the host opened another door.
Nope, I am wrong. I ran a simulation.
With no switching you have a .33 chance of picking correctly. It does not matter if a door is revealed.
You have a .66 chance of picking the incorrect door. In this case the host reveals the other incorrect choice, and a switch puts you on the correct door, guaranteed.
If you picked the right door (.33) initially, then the switch moves you to the wrong choice.
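A minimal version of the simulation the commenter describes, assuming the standard host who always opens a goat door the contestant didn’t pick:

```python
# Monty Hall simulation, standard host behaviour: the host always opens a
# door that the contestant did not pick and that hides a goat.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # about 1/3
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # about 2/3
```

Over many trials the two strategies converge to roughly 1/3 and 2/3, matching the commenter’s .33 and .66.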
@Kim
The true answer is not “meta” or ambiguous, because the host’s actions are described with no ambiguity in the problem statement. There are three doors, and after the contestant makes his choice the host opens a door that does not have the car behind it.
Matt, there is a LOT of ambiguity in this problem as stated by Marilyn vos Savant. She states that the host knows what’s behind the doors. This means the host can change his behaviour depending on that, and this dependence can change the probabilities a lot. For instance, the host could open another door only when you choose the one hiding the car, to tempt you away.
She gives us just one scenario: You choose door 1, host opens door 3, showing a goat.
But since we lack information about what the host does depending on where the car is, we cannot calculate probabilities. We do not know how likely other scenarios are.
To simplify: Why did the host open that door?
Answer: We do not know.
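Kim’s objection can be illustrated with the same kind of simulation. Here the host follows a hypothetical “tempting” policy, opening a second door only when the contestant has already picked the car; this behaviour is not part of the setup Marilyn assumed.

```python
# Hypothetical "tempting host": he opens another door only when the
# contestant's first pick is the car, hoping to lure them away.
import random

def play_tempting_host(switch: bool):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    if pick != car:
        return None  # this host opens nothing; no offer to switch is made
    opened = random.choice([d for d in doors if d != pick])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

results = [play_tempting_host(switch=True) for _ in range(100_000)]
offered = [r for r in results if r is not None]
print("win rate when switching, given an offer:", sum(offered) / len(offered))  # 0.0
```

Under that policy, switching loses every time an offer is made, which is the sense in which the answer depends on what the host would have done in other scenarios.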
Kim, the mathematicians (and others) who objected to Marilyn’s answer didn’t do so based on the ambiguity that you described. They seemed to assume the scenario that Marilyn intended (though perhaps didn’t express unambiguously). So I think we can still conclude that the objectors were spectacularly wrong.
Alex, yes.
In other words, they guessed what she meant, luckily got that right, and then proceeded to think wrongly, even after being corrected. To me, that behaviour seems very typical, very human, and quite disappointing.
To me, the Monty Hall problem was very confusing, and the confusion did not go away when I saw the explanations. When the absence of information was pointed out to me, everything became obvious: all the other explanations turned out to be just variations on how people fill in that blank without noticing it, like the blind spot, of which this may be a cognitive equivalent.
I agree with Tom too.
What was the difference in risk of the outcome for the two groups in the study? A large correlation doesn’t suggest anything to me either, because there’s no quantification of the size of the association.
Seth: I don’t understand the question. Which two groups? The essential finding is that people who’d had bariatric surgery seemed to have about 20 times higher risk of acetaminophen poisoning than everyone else.
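To put that 20-times figure in absolute terms, here is a back-of-envelope sketch; the baseline rate below is entirely hypothetical (neither the post nor the article gives one), so the output only shows the shape of the calculation.

```python
# Back-of-envelope conversion of relative risk to absolute risk.
# Only the ~20x multiplier comes from the discussion above; the baseline
# rate is an assumption made up for illustration.
baseline_annual_risk = 1 / 30_000  # assumed annual risk of acetaminophen liver poisoning for a typical user
relative_risk = 20                 # approximate figure discussed above

surgery_patient_risk = baseline_annual_risk * relative_risk
print(f"typical user:      ~1 in {round(1 / baseline_annual_risk):,}")
print(f"bariatric patient: ~1 in {round(1 / surgery_patient_risk):,}")
```

Whether a shift like that outweighs acetaminophen’s benefits is the trade-off an earlier commenter raised; the hypothetical numbers only illustrate how a relative risk translates into absolute risk.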