Plagiarism by British Drug Tsar

Leslie Iversen, a retired Oxford professor of pharmacology, is Chair of the British government’s Advisory Council on the Misuse of Drugs. He is also a Fellow of the Royal Society, a foreign associate of the National Academy of Sciences, and chairman of the board and director of Acadia Pharmaceuticals, San Diego.

In 2008, Oxford University Press published a book by Iversen called Speed, Ecstasy, Ritalin: The Science of Amphetamines. Four passages in it closely match text from a website about MDMA (Ecstasy). The duplicated material was on the website in 2002.

Assorted Links

  • meaning-based computing
  • academic plagiarism. “One of my own students turned in a paper on ‘Great Expectations’ which was an exact copy of Dorothy Van Ghent’s essay – an essay so celebrated that I recognized it right off and, at the first opportunity, raised the issue with my student. ‘Shit!’ she said. ‘I paid seventy-five dollars for that.’”
  • The dark side of fermentation.
  • I am very pleased to see that Edward Jay Epstein is writing a book about the 9-11 Commission.

Research Fraud in China

From the New York Times:

Last December, a British journal that specializes in crystal formations announced that it was withdrawing more than 70 papers by Chinese authors whose research was of questionable originality or rigor. . . . “Even fake papers count because nobody actually reads them,” said Mr. Fang, who is more widely known by his pen name, Fang Zhouzi, and whose Web site, New Threads, has exposed more than 900 instances of fakery, some involving university presidents and nationally lionized researchers.

Recently a Tsinghua colleague asked me to fix the English in his paper. Most paragraphs needed a few changes in every sentence, but here and there were whole paragraphs with no mistakes. Presumably he copied them from somewhere else. The material in them was boring — it was like copying from the phone book — so it was hard to care (he wasn’t taking credit for anyone else’s ideas), but I wonder if he realized how obvious it was. I don’t mean this is typical. I have looked at several other papers by Chinese authors and found no patches of perfect English.

The article begins with a false claim by a Chinese doctor — and of course such claims are truly damaging. In my experience, false claims by American doctors are common. An example is my surgeon recommending an operation that, she said, evidence showed would benefit me. There was no such evidence. One value of self-experimentation is that you can find out whether a medicine works, rather than take your doctor’s word for it. I became impressed with self-experimentation when it showed me that an acne medicine (tetracycline, an antibiotic) my dermatologist had prescribed didn’t work. Not at all. He didn’t express any doubts when he prescribed it. Call it forensic DNA testing (e.g., The Innocence Project) for the rest of us.

Perhaps the Chinese people, faced with even more false claims than Americans, can benefit even more from self-experimentation.

Thanks to Tim Beneke.

Shamelessness in Chinese Academia

Professor Wang Hui, a Tsinghua faculty member in the Chinese Language Department, was accused of plagiarism several months ago. You can read about it here. Professor Wang is no stranger to controversy:

Wang Hui was involved in controversy following the results of the Cheung Kong Dushu Prize in 2000. The prize was set up by Sir Li Ka-shing, which awards one million RMB in total to be shared by the winners. The 3 recipients of the prize in 2000 were Wang Hui, who served as the coordinator of the academic selection committee of the prize, Fei Xiaotong, the Honorary Chairman of the committee, and Qian Liqun, another committee member. Wang Hui was then the editor-in-chief of Dushu magazine, which was the administrative body of the prize.

He awarded the prize to himself! And to his fellow committee members. Wang was editor-in-chief of Dushu for ten years. During that time, he published many hard-to-understand articles by his friends. The influence of the magazine shrank considerably.

Academic Horror Story (Duke University)

Duke University officials have known since 2009 that there were serious problems with Anil Potti’s research — serious enough to suggest that it is fraudulent. Here is how one researcher put it:

The Duke investigators said their data showed that expression of a particular gene, ERCC1, correlated with response to some agents. However, the commercial microarray chip the Duke investigators said they used in their experiments does not include that gene. “I admit this is one for which I do not have a simple, charitable explanation,” [said] Dr. Baggerly.

Potti, you may remember, lied about having received a Rhodes Scholarship. Duke’s first investigation found him innocent.

Later events caused Duke officials to reconsider. They are still making up their minds. This is a horror story because a clinical trial based on Potti’s research is in progress. A hundred cancer patients are getting treated according to Potti’s research — that is, according to research that is probably fraudulent. Duke has done nothing to warn the patients or stop the trial.

The whole thing reminds me of UC Berkeley researchers taking weeks to tell a woman she had a large lump in her brain. As if their legal liability were more important than her life.

Is Science Self-Correcting?

Lots of scientists say science is self-correcting. In a way this is surely true: a non-scientist wouldn’t understand the issues. If anyone corrects scientific fraud, it will be a scientist. In another way, this is preventive stupidity: it reassures and reduces the intelligence of those who say it, helping them ignore the fact that they have no idea how much fraud goes undetected. If only 1% of fraud is corrected, it is misleading to say science is self-correcting. A realistic view of scientific self-correction is that there is no reward for discovering fraud and plenty of grief involved: the possibility of retaliation, the loss of time (it won’t help you get another grant), and the dislike of bearing bad news. So whenever fraud is uncovered it’s a bit surprising and bears examination.

What I notice is that science is often corrected by insider/outsiders — people with enough (insider) knowledge and (outsider) freedom to correct things. As I’ve said before, Saul Sternberg and I were free to severely criticize Ranjit Chandra. Because we were psychologists and he was a nutritionist, he couldn’t retaliate against us. Leon Kamin, an outsider to personality psychology, was free to point out that Cyril Burt faked data. (To his credit, Arthur Jensen, an insider, also pointed in this direction, although not as clearly.) The Marc Hauser case provides another example: research assistants and a graduate student in Hauser’s lab uncovered the deception. They knew a lot about the research yet had nothing invested in it and little to lose from losing Hauser’s support. This is another example of why insider/outsiders are important.

Plastic Fantastic by E. S. Reich

Plastic Fantastic: How the Biggest Fraud in Physics Shook the Scientific World by Eugenie Samuel Reich, a science writer, tells how a young physicist named Jan Hendrik Schoen, working at Bell Labs on making electronic devices from organic materials, managed to fool the physics community for several years, publishing many papers with made-up data in Science and Nature. This podcast summarizes the story, with the new detail that after his disgrace — even his Ph.D. was revoked — Schoen managed to get a job as an air-conditioning engineer in Germany.

I enjoyed the book, partly for the drama, partly for the physics, and partly for the light it sheds on the culture of physics and Bell Labs. When anyone says “science is self-correcting” I’m amused because, as the speaker must know, the amount of fraud that goes uncorrected is unknown. It may be far larger than the amount that is detected.

The author’s website.

The Marc Hauser Case

It would have been harsh to title this post “Marc Hauser, RIP”. However, unless the following is shown to be in error, I’ll never believe anything he writes or has written:

According to the document that was provided to The Chronicle, the experiment in question was coded by Mr. Hauser and a research assistant in his laboratory. A second research assistant was asked by Mr. Hauser to analyze the results. When the second research assistant analyzed the first research assistant’s codes, he found that the monkeys didn’t seem to notice the change in pattern. In fact, they looked at the speaker more often when the pattern was the same. In other words, the experiment was a bust.

But Mr. Hauser’s coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.

The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. “I don’t feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder,” he wrote.

A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it. After several back-and-forths, it became plain that the professor was annoyed.

“i am getting a bit pissed here,” Mr. Hauser wrote in an e-mail to one research assistant. “there were no inconsistencies! let me repeat what happened. i coded everything. then [a research assistant] coded all the trials highlighted in yellow. we only had one trial that didn’t agree. i then mistakenly told [another research assistant] to look at column B when he should have looked at column D. … we need to resolve this because i am not sure why we are going in circles.”

The research assistant who analyzed the data and the graduate student decided to review the tapes themselves, without Mr. Hauser’s permission, the document says. They each coded the results independently. Their findings concurred with the conclusion that the experiment had failed: The monkeys didn’t appear to react to the change in patterns.

They then reviewed Mr. Hauser’s coding and, according to the research assistant’s statement, discovered that what he had written down bore little relation to what they had actually observed on the videotapes. He would, for instance, mark that a monkey had turned its head when the monkey didn’t so much as flinch. It wasn’t simply a case of differing interpretations, they believed: His data were just completely wrong.

As word of the problem with the experiment spread, several other lab members revealed they had had similar run-ins with Mr. Hauser, the former research assistant says. This wasn’t the first time something like this had happened. There was, several researchers in the lab believed, a pattern in which Mr. Hauser reported false data and then insisted that it be used.

If taken literally, this description seems to imply that Hauser was making up data — writing down results much more favorable to his career than the actual results — and not realizing it! As if someone else were marking the data sheet. Since the videotapes were coded by more than one person, you might think the fabrication/delusion/whatever would come to light, but he did it anyway! And then got “a bit pissed” when things didn’t work out perfectly.

I would love to hear Hauser’s side of this story, and see the videotapes being coded. So far Hauser has said nothing to make me doubt the straightforward interpretation: He made up data. After Saul Sternberg and I published a paper implying that Ranjit Chandra had made up data, Chandra retired.

Derek Bickerton says Hauser “fell victim to a soon-to-be-outdated view of evolution”. I am more interested in what this says about Harvard and Hauser’s co-authors. In particular, I wonder what Noam Chomsky, one of Hauser’s co-authors, will say. The incident makes Chomsky look bad. Hauser appears to be a person who pushes aside the truth of things. That Chomsky wrote a major paper with him suggests that Chomsky failed to notice this.

Thanks to Dave Lull and Language Log.

Assorted Links

  • A new paper debunks Michael Mann’s Hockey Stick global temperature graph. “Climate scientists have greatly underestimated the uncertainty of proxy-based reconstructions and hence have been overconfident in their models.” Very well written.
  • “Obscure, contemporary ethics books . . . were actually about 50% more likely to be missing than non-ethics books.” Paper. The study was done entirely online and covered 32 large university libraries.
  • Gladys Reid, Australian discoverer of benefits of feeding zinc to farm animals. “Reid was reluctant to make direct dose recommendations after claiming the Director General of Agriculture had told her she would be taken to court for misleading practices if she did. However she won followers from farming wives in particular. Many would call asking for zinc advice after tiring of seeing suffering livestock and husbands on the brink of suicide from crippling stock and production losses.”
  • Using a treadmill while working
  • The Potti Scandal continues
  • How loud are Sunchips?

Thanks to Don Sheridan and Melissa Francis.

Animal Cognition Paper Retracted

A paper in Cognition by Harvard professor Marc Hauser and others has been retracted:

The paper tested cotton-top tamarin monkeys’ ability to learn generalized patterns, an ability that human infants had been found to have, and that may be critical for learning language. The paper found that the monkeys were able to learn patterns, suggesting that this was not the critical cognitive building block that explains humans’ ability to learn language.

The note to be published about the retraction says almost nothing about why: “An internal examination at Harvard University . . . found that the data do not support the reported findings.”

Several other papers from Hauser’s lab have also been questioned.

The usual explanation would be that someone in Hauser’s lab made the results better than they actually were. A co-author of the paper said Hauser had told him “there were problems with the videotape record of the study”. That’s consistent with the usual explanation: someone edited the tapes (via deletions) to make the results appear better than they were. But it’s also possible that many tapes are missing, which might be an accident. When The New Yorker archives were moved from Building A to Building B several years ago, much of the archive was lost.

Thanks to Aaron Blaisdell.

The Joan Evans Scandal

I came across the Potti scandal while trying to find out about the trouble faced by a woman named Joan Evans because a statistical analysis couldn’t be reproduced. Robert Gentleman had mentioned this in a talk at the Joint Statistical Meetings in Vancouver. Look for The Cancer Letter, Gentleman said.

I now realize that Joan Evans is Joe Nevins, who co-authored a major paper with Potti.

Speaking of Potti, members of the Duke administration are said to “have warned people not to even Google the name ‘Anil Potti.’”

The Potti Scandal

Anil Potti, a Duke University associate professor who does cancer research, turns out to have fabricated numerous details on applications for research money. The first fabrication to be noticed was his claim to have received a Rhodes Scholarship.

This is interesting because Duke had previously investigated him:

Late last year [2009], there was a crescendo that caused Duke to stop clinical trials on three of his research programs, two involving lung cancer and one involving breast cancer. In each program, Potti was giving patients chemotherapy — determining what drugs might work best and in what dosage — based upon his genome research.

In January Duke let these programs resume after an internal review. [emphasis added] And these are the precise programs where Duke — for the second time — has now suspended new [emphasis added] enrollments. . . . In an official statement on the winter review, Duke said it had determined Potti’s approaches were “viable and likely to succeed.”

Someone who appears to be a total fraud is called to Duke’s attention — and they find him innocent! This is what happened with the SEC and Madoff, and with Memorial University and Ranjit Chandra. Chandra’s research assistant, a nurse, told Memorial something was wrong, and Memorial did nothing, or very little. Chandra then sued the nurse. He went on to write the paper that Saul Sternberg and I investigated.

Someone lies on his resume — it happens. That a prestigious institution like Duke let him continue to get away with it after it was brought to their attention, possibly endangering patients and surely wasting vast resources — that is not so well-known. So far, the New York Times has covered only the false-resume side of the story. You may recall how poorly Duke responded to charges against its lacrosse team.

As this unfolded, Duke had the following headline on its website: “Crisis management 101: What can BP CEO Hayward’s mistakes teach us”. From a CNN story in which a Duke expert was quoted.

Duke.Fact.Checker notes that Potti’s papers have at least 26 co-authors! Many with M.D.s, who have told or will tell thousands of trusting patients “you should take Drug X”. The patient endangerment is not trivial.

The Cancer Letter on Potti. Another issue of The Cancer Letter about it.

Unlikely Data

Connoisseurs of scientific fraud may enjoy David Grann’s terrific article about an art authenticator in the current New Yorker and this post about polling irregularities. What are the odds that two such articles would appear at almost the same time?

I suppose I’m an expert, having published several papers about data that was too unlikely. With Saul Sternberg and Kenneth Carpenter, I’ve written about problems with Ranjit Chandra’s work. I also wrote about problems with some learning experiments.

Harvard Student Almost Gets Away With It

If Adam Wheeler, a former Harvard student, hadn’t applied for a Rhodes Scholarship, it appears he would have gotten away with four years of academic dishonesty. While at Harvard, he won several prizes. On his Rhodes application, he listed “numerous books he had co-authored, lectures he had given, and courses he had taught”. “Numerous books”? Yet this is how he was caught:

A Harvard professor first became suspicious of Wheeler while reviewing his application for the Rhodes scholarship. He discovered that Wheeler had plagiarized his piece almost completely from the work of another professor.

His “piece”? What’s that? When you apply for a Rhodes Scholarship you don’t submit an academic article as part of your application. Why didn’t the reviewer check whether the “numerous books” that a college senior claimed to have written actually existed? What’s next, a sixth-grader says she’s won a Nobel Prize and a Harvard prof doesn’t notice a problem?

Like Wheeler, Ranjit Chandra was caught toward the very end of his academic career. My impression of Chandra’s case is that, as he repeatedly escaped detection, the falsifications became more extreme.

The Hockey Stick Illusion

Recently a WSJ columnist told this story:

I was chatting with a friend who, over the years, has helped her kids slog through the obligatory science-fair projects.

“The experiments never turned out the way they were supposed to, and so we were always having to fudge the results so that the projects wouldn’t be screwy. I always felt guilty about that dishonesty,” she said, “but now I feel like we were doing real science.”

Yes, science with a human touch. The Hockey Stick Illusion by Andrew Montford (sent to me by the publisher) is a great book because it tells a great story. That story has a hero (Stephen McIntyre) and a villain (Michael Mann) and illustrates a basic truth about the world: A consensus of the “best people” can be wrong. This point was first made, as far as I know, by The Emperor’s New Clothes. It was later made by the Asch experiment (about line-length judgments). It’s not obvious; Elizabeth Kolbert and her editors at The New Yorker, not to mention Bill McKibben, have yet to understand it. (“No one has ever offered a plausible account of why thousands of scientists at hundreds of universities in dozens of countries would bother to engineer a climate hoax,” Kolbert recently wrote, with the permission of her editors.) It’s a sad comment on our education system that I first learned it via self-experimentation. My results showed that an acne medicine that my dermatologist prescribed didn’t work — a possibility for which my dermatologist (in consensus with other dermatologists) hadn’t allowed. As truths go, this one is scary: It means you have to think for yourself. But it is also the most liberating truth I know.

The Hockey Stick Illusion tells how McIntyre, skeptical of Mann’s hockey-stick result (a sharp increase in global temperature to unprecedented levels during the 20th century), tried to get the data and computer code that Mann used. Mann put him off. He still hasn’t released the computer code he used. Mann found a hockey stick where none existed because (a) he used principal-components analysis to summarize a lot of temperature series (bad idea), (b) he used that method in an unusual way, making a bad idea worse — he centered the series on the recent calibration period rather than on the full record (a rough sketch of this effect appears at the end of this post) — and (c) one of his time series had a serious problem. After McIntyre noticed these problems and pointed them out, the real story begins: How did everyone react? Much as a reader of The Emperor’s New Clothes would expect. Nature denied it. The Washington Post denied it. Most climate scientists denied it (and continue to). Montford started writing the book before Climategate, whose overall message was the same — that climate scientists have been distorting the truth, that the case for man-made global warming is far weaker than they say, that a consensus of experts can be wrong. As Montford puts it,

None of the corruption and bias and flouting of rules we have seen in this story [and in the Climategate emails] would have been necessary if there is, as we are led to believe, a watertight case that mankind is having a potentially catastrophic effect on the climate.

Climategate and the story within The Hockey Stick Illusion are bad news for some very powerful people, such as Al Gore and those who gave him a Nobel Prize, but are helpful to the rest of us. When Big Shot X says “This is incredibly clear, everyone knows this” . . . maybe they’re wrong.
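For readers who want to see what the centering issue can do, here is a minimal sketch in Python. It is not Mann’s code (which, as noted above, has not been released) and not McIntyre’s actual analysis; the series, sizes, and parameter values are made up purely for illustration. It shows how centering proxy-like series on a recent “calibration” window, rather than on the full record, tends to pull a hockey-stick-shaped first principal component out of trendless red noise.

import numpy as np

rng = np.random.default_rng(0)
n_years, n_series, calib = 600, 50, 100   # made-up sizes, for illustration only

def ar1(n, phi=0.9):
    # trendless red noise (an AR(1) series)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

# Columns stand in for proxy series; none contains a real trend.
X = np.column_stack([ar1(n_years) for _ in range(n_series)])

# Conventional PCA: center each series on its full-period mean.
X_full = X - X.mean(axis=0)

# "Short-centered" PCA: center each series on the mean of the last `calib` years only.
X_short = X - X[-calib:].mean(axis=0)

def first_pc(A):
    # first principal component (time scores) via SVD
    u, s, _ = np.linalg.svd(A, full_matrices=False)
    return u[:, 0] * s[0]

def stick_index(pc):
    # crude "hockey stick index": how far the calibration-period mean of PC1
    # sits from its whole-series mean, in units of its own standard deviation
    return abs(pc[-calib:].mean() - pc.mean()) / pc.std()

print("full centering :", round(stick_index(first_pc(X_full)), 2))
print("short centering:", round(stick_index(first_pc(X_short)), 2))
# The short-centered PC1 typically scores much higher: a hockey-stick shape
# manufactured from pure noise by the choice of centering period alone.

The point of the sketch is narrow: it does not reproduce Mann’s reconstruction; it only shows that the centering choice by itself favors whichever series happen to drift during the calibration window, which is problem (b) described above.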

Assorted Links

  • Matt Ridley reviews The Hockey Stick Illusion. “One of the best science books in years. It exposes in delicious detail . . . how a great scientific mistake of immense political weight was perpetrated, defended and camouflaged by a scientific establishment that should now be red with shame.” Of the response to Stephen McIntyre’s damning critique: “I find the reaction of the scientific establishment more shocking than anything. . . . Shut-eyed denial.”
  • Answer to medical mystery is food allergies. If doctors can’t recognize food allergies, they are even further from understanding their cause.
  • Der Spiegel looks skeptically at man-made global warming. Will Elizabeth Kolbert (the New Yorker writer) ever realize she’s been credulous?
  • Low cholesterol bad? “Cholesterol levels in men with dementia and, in particular, those with Alzheimer disease had declined at least 15 years before the diagnosis and remained lower than cholesterol levels in men without dementia throughout that period.” Body weight also declines before the diagnosis.

Thanks to Peter Spero.

Assorted Links

Thanks to Vic Sarjoo, Anne Weiss, and Marian Lizzi.

Climate Science Slowly Becomes Less Settled

Andrew Gelman, in a comment on the previous post, said that he believes the science of climate change is “much more settled” than I do. He’s right — in the sense that I believe the state of the world is different (less certain) than claimed. Andrew sees correct certainty; I see false certainty. Because science slowly becomes more accurate, I think the science will slowly shift toward “less settled” — a prediction I don’t think Andrew would make. Here’s an example of such a shift. According to the Mail on Sunday, Phil Jones

admit[s] that there is little difference between global warming rates in the Nineties and in two previous periods since 1860 and accept[s] that from 1995 to now there has been no statistically significant warming. He also leaves open the possibility, long resisted by climate change activists, that the ‘Medieval Warm Period’ from 800 to 1300 AD, and thought by many experts to be warmer than the present period, could have encompassed the entire globe.

Phil Jones slowly shifts.