A Chinese friend of mine is a chemistry major. In one of her classes, the textbook was so hard to understand she said the authors are “trying to confuse you.” They use difficult words, for example. A Berkeley art history major told me much the same thing. In her reading assignments, she said, the writers couldn’t write a sentence without a few big words. They were trying to impress readers, she believed.
Oral rehydration therapy (ORT) is given to people (usually children) suffering from diarrhea, which before ORT was often fatal. It is very simple: The sufferer drinks water with sugar and salt ad libitum (as much as they want). You probably haven’t heard of ORT — at least, I hadn’t. Everyone has heard of antibiotics. Yet “in 10 years [ORT] saved more lives than penicillin had in 40.” Infant diarrhea was once (and may still be) the main cause of death in poor countries.
A history of its discovery supports several things I’ve said on this blog. One is Thorstein Veblen’s point about the disdain among professional scientists for useful research:
ORT might also have been developed long before 1968 but for the attitudes of the dominant medical establishment toward practical experimentation, which the Cholera Research Laboratory and the National Institutes for Health shared. Nalin believes that “the people at the lab … got kudos for the extent to which [their] work was not practical. As soon as it became practical it was discarded like a soiled towel–it was too common, too hands-on… so the prestige went to people who measured trans-intestinal fluxes or electrical currents”.
No one who has attended an elite law school, medical school, or graduate program in education will be surprised by this.
Another is the great resistance among the medical establishment to cheap and effective solutions:
The formidable and persistent ignorance of the Western medical establishment, which continues over twenty-five years after the discovery of ORT, is phenomenal. While its refusal to advocate ORT may be due in part to the notion that ORT is only necessary for people in the developing world, its actions appear to be driven also by financial considerations. Most hospitals do not train physicians in the use of ORT since they have no financial reason to do so. [I think “since” overstates what is known — Seth] The use of intravenous therapy, which often involves keeping a dehydrated child overnight, assures [greater] insurance reimbursement. Sending children home with ORT would [reduce] profits. Furthermore, recent studies show that diarrhoeal illness among the elderly may incur even greater health care costs that could also be reduced by the use of ORT. At a time of heated discussion about cost-containment in health care, it seems all the more ironic and egregious that a superior, cheap, and proven therapy [fails to replace] a far more expensive one. Estimates based on the cost of hospitalizations and physician visits suggest that ORT could save billions of dollars annually.
As an example of the resistance of American doctors to a better therapy, an ORT researcher, who had used it on Apache reservations in America, told this story:
I had an anthropologist friend who adopted an Apache child from the [Arizona] reservation where we were working. He used to be the anthropologist on the reservation. And then he [left the reservation and] went to Arkansas to teach and the Apache child came down with severe diarrhea and he called me up and he said desperately, “Look, my son’s in the hospital and they’re giving him all sorts of intravenous fluids. The diarrhea’s not stopping, he’s losing weight, they’re not feeding him. I know that you did this work in Arizona [on the reservation] and it didn’t look like that. . . . Would you call this professor of pediatrics and just collegiately talk to him?” So I called up the professor and told him that in our experience with Apache children this is what we found and here’s the publication and so on. And he said to me, “Doctor, doctor, our [Arkansas] children are not the same as your [reservation] children”. He was treating an Apache child from the same reservation.
Shades of Downton Abbey (where Lady Sybil died because a London doctor was listened to instead of a rural doctor).
“Why Quantified Self Matters” is the title of a talk I gave yesterday at a Quantified Self conference in Beijing. I gave six examples of things I’d discovered via self-tracking and self-experiment (self-centered moi?), such as how to lose weight (the Shangri-La Diet) and be in a better mood. I said that the Quantified Self movement matters because it supports that sort of thing, i.e., personal science, which has several advantages over professional science. The Quantified Self movement supports learning from data, in contrast to trusting experts.
If I’d had more time, I would have said that personal science and professional science have different strengths. Personal science is good at both the beginning of research (when a new idea has not yet been discovered) and the end of research (when a new idea, after having been confirmed, is applied in everyday life). It is a good way to come up with plausible new ideas and a good way to develop them (assess their plausibility when they are still not very plausible, figure out the best dose, the best treatment details). That’s the beginning of research. Personal science is also a good way to take accepted ideas and apply them in everyday life (e.g., a medical treatment, an idea about deficiency disease) because it fully allows for human diversity (e.g., a medicine that works for most people doesn’t work for you, you have an allergy, whatever). That’s the end of research.
Professional science works well, better than personal science, when an idea is in a middle range of plausibility — quite plausible but not yet fully accepted. At that point it fits a professional scientist’s budget. Their research must be expensive (Veblen might have coined the term conspicuous research, in addition to “conspicuous consumption” and “conspicuous leisure”) and only quite plausible ideas are worth expensive tests. It also fits their other needs, such as avoidance of “crazy” ideas and a steady stream of publishable results (because ideas that are quite plausible are likely to produce usable results when tested). Professional science is also better than personal science for studying all sorts of “useless” topics. They aren’t actually useless but the value is too obscure and perhaps the research too expensive for people to study them on their own (e.g., I did research on how rats measure time).
In other words, the Quantified Self movement matters because it gives all of us a new scientific tool. A way to see what the scientific tools we already have cannot easily see.
Many people think that personal genomics will change medicine. Doctors will choose treatments based on your genome, learning your genome will tell you what diseases you are at high risk of so you can take precautions, and so on. One person who believes this is Eric Topol. In his new book, The Creative Destruction of Medicine, he writes:
The biggest leap came in the first decade of the twenty-first century. The six billion bases of the human genome were sequenced, and this led to the discovery of the underpinnings of over one hundred common diseases, including most cancers, heart disease, diabetes, autoimmune disorders, and neurologic conditions.
Here is the founder of a company that makes sequencers: “I believe that the impact on the medical community of whole human genome sequencing at a cost comparable to a comprehensive blood test will be profound.”
I disagree. I have seen nothing that suggests genes make a big difference in any common disease and plenty that suggests environment makes a big difference. My self-experimentation led me to one powerful environmental factor after another, for example. Biologists have invested heavily in the study of genes for reasons that have nothing to do with practical applications, as Thorstein Veblen would be the first to point out.
In 1999, New Yorker staff writer Michael Specter wrote an admiring article about a neurology professor named Kari Stefansson. Stefansson had returned to his native Iceland to take advantage of Iceland’s genetic homogeneity to find genes for common diseases. “In the past, drugs were discovered almost by chance,” Specter wrote, as if this would soon change. The wishful thinking involved is indicated by passages like this:
[Stefansson] and Gulcher selected the five per cent of Icelanders among the hundreds of thousands in their genealogical database who had lived the longest— most of them over ninety. The database allowed the two scientists to seek an answer to a simple question: Are these people who live so long related to each other more often than the average in Iceland? The answer quickly became apparent. People over ninety are much more closely related to each other than people in the general population are, and their children are more likely to live longer than the children of others. That provides strong evidence that the trait is inherited.
“Strong” evidence? The “people over ninety” observation is strong evidence that longevity is inherited only if relatives share nothing but genes. The “their children are more likely” observation is strong evidence of genetic control only if parents pass on to their children only genes. Both assumptions are highly unlikely. For example, surely an Icelandic person lives closer to his relatives than to randomly selected Icelanders.
The article quotes no one with my view (geneticists are overstating the practical value of their work), but it does say that “Stefansson set out to raise capital at a time when investors had become skeptical about the many unfulfilled promises made by companies claiming that genetic research would solve the ills of humanity.”
Will reality overtake hype? Here is an indication this is happening:
Kari [Stefansson], a neurologist, was a Harvard professor when he co-founded deCODE in 1996. Two years later, Iceland’s parliament gave deCODE access to one of the country’s unique resources—health records of the genetically homogenous population. DeCODE debuted on the NASDAQ stock exchange in 2000, and it made dramatic discoveries of genetic factors associated with cancer, heart disease and other conditions. But the company never turned a profit and filed for bankruptcy protection in 2009.
In a recent post I wondered what’s a good word to describe the next step in economic progress after specialization — when making/doing X is done by the general public (not as a job) instead of just by paid specialists (as a job). For example, the introduction of cheap cameras allowed the general public, not just professional photographers, to take pictures. Personal science is an example of such a shift, of course. Thank you for your many suggestions, such as laitization, deguilding, promethization, and several more. The combination of Keimpe Wiersma’s suggestion (DIY) and wobbly’s suggestion (deguilding) led me to DIYing and DIYization.
DIYing, I learned, is an existing word with a different meaning (to do DIY). Although ordinary DIY (Home Depot) is associated with men, women appear to use DIYing far more than men and they use it to describe traditionally feminine activities (see this). For example, there is a blog DIYing To Be Domestic by a woman. This is irrelevant to whether I use it — it’s just interesting.
DIYization is much rarer. It appears in a 2005 essay called “Scandinavian Dreams: DIY, Democratisation and IKEA” where it refers not to a change in an activity but to a change in society — toward more DIY. IKEA, says the essayist, is an example of “the DIYization of society.”
DIYing is shorter. DIYization is more self-explanatory, less likely to be confused with dying, and makes clearer the connection with specialization. Not to mention it is more pompous — more Veblenesque. In the last chapter of The Theory of The Leisure Class, Veblen used long rare words to say that academics show off their uselessness by using long rare words.
SOPA is an example of what Thorstein Veblen called “the vested interests” trying to prevent change. In an essay called “The Vested Interests and the Common Man” he pointed out “the existence of powerful vested interests which stand to gain from the persistence of the existing, but outdated system of law and custom.” Jane Jacobs said much the same thing. The most important conflict in any society, she wrote at the end of The Economy of Cities, isn’t between the rich and poor or management and labor; it is between those who benefit from the status quo and those who benefit from change. If those who benefit from the status quo usually win, problems stack up unsolved.
Thorstein Veblen might have gloated that this 2011 article — about the uselessness of law schools and legal scholarship — so thoroughly supports what he wrote in a book published in 1899 (see the last chapter of The Theory of the Leisure Class). Why are law schools useless? Because law professors feel compelled to imitate the rest of academia, which glorifies uselessness:
“Law school has a kind of intellectual inferiority complex, and it’s built into the idea of law school itself,” says W. Bradley Wendel of the Cornell University Law School, a professor who has written about landing a law school teaching job. “People who teach at law school are part of a profession and part of a university. So we’re always worried that other parts of the academy are going to look down on us and say: ‘You’re just a trade school, like those schools that advertise on late-night TV. You don’t write dissertations. You don’t write articles that nobody reads.’ And the response of law school professors is to say: ‘That’s not true. We do all of that. We’re scholars [i.e., useless], just like you.’ ”
Yeah. As I’ve said, there’s a reason for the term ivory tower. And seemingly useless research has value. Glorifying useless research has the useful result of diversifying research, causing a wider range of research directions to be explored. Many of my highly-useful self-experimental findings started or received a big boost from apparently useless research.
The pendulum can swing too far, however, and it has. A large fraction of health researchers, especially medical school researchers, have spent their entire careers refusing to admit, at least in public, the uselessness of what they do. Biology professors have some justification for useless research; medical school professors have none, especially given all the public money they get. Like law professors, they prefer prestige and conformity. The rest of us pay an enormous price for their self-satisfaction (“I’m scientific!” they tell themselves) and peace of mind. The price we pay is stagnation in the understanding of health. Like clockwork, every year the Nobel Prize in Medicine is given to research that has done nothing or very close to nothing to improve our health. And every year, like clockwork, science journalists (all of them!) fail to notice this. If someone can write the article I just quoted about law schools, why can’t even one science journalist write the same thing about medical schools — where it matters far more? What’s their excuse?
To rehabilitate his reputation, Alfred Nobel, in his will, established the Nobel Prizes, the crucial element of which was that they honor the most useful research. Nobel wanted to be associated with good works. This has become a considerable problem for the committee that awards the Physiology and Medicine prize because, if you haven’t noticed, the most prestigious research — the stuff done at great expense in gleaming new laboratories — isn’t useful. The uselessness of high-prestige academic research was emphasized by Thorstein Veblen in The Theory of the Leisure Class. Unfortunately Nobel died shortly before it was published.
For a long time, the Nobel prize-winning research in Medicine hasn’t provided significant help with major health problems (depression, obesity, diabetes, cancer, stroke, heart disease, etc.). Sometimes it has been a tiny bit helpful. Most often the prize-winning research has been, at the time of the award, no clear help at all. This is one of those years. The press release announcing the 2011 prize tries to hide this important truth. Here is the “what use is it?” section of this year’s press release:
From fundamental research to medical use
The discoveries that are awarded the 2011 Nobel Prize have provided novel insights into the activation and regulation of our immune system. They have made possible the development of new methods for preventing and treating disease, for instance with improved vaccines against infections and in attempts to stimulate the immune system to attack tumors. These discoveries also help us understand why the immune system can attack our own tissues, thus providing clues for novel treatment of inflammatory diseases.
“They have made possible the development of new methods for preventing and treating disease.” False (and, uh, just a wee bit grandiose). Such development was already possible. Note what isn’t said: “They led to new methods for preventing and treating disease.”
“Improved vaccines against infections.” I have heard nothing about this, in spite of the plural (vaccines rather than vaccine). In any case, this is faint praise because the improvement might be a small percentage. If you know whether this claim is true, please leave a comment. Again note what isn’t said: “New vaccines”. According to this article, the work led to a vaccine against prostate cancer. (With no noticeable benefit so far.) Does the press release writer think prostate cancer is infectious?
“Attempts to stimulate the immune system to attack tumors.” Attempts? As in failed attempts? Apparently.
The final sentence (“These discoveries also help us understand . . .”) is out of place. The section is about actually helping people (“medical use”), not ivory-tower stuff like “providing clues”. Whoever wrote this is like a student with not enough to say trying to meet a teacher’s minimum word count.
There you have it. The practical value of the research awarded the most prestigious prize in the world — a prize that Alfred Nobel’s will said should be given to “those who, during the preceding year, shall have conferred the greatest benefit on mankind.”
As for making your immune system work better, I am sure there are two simple, practical, and powerful ways to do so: deepen your sleep and eat fermented foods.
I like this essay by Brad DeLong about the failure of economics professors. They didn’t just fail to predict the recent economic crisis; as far as he can tell, they have also failed to learn from it. If you are naive, of course this is astonishing — but DeLong is not naive. Yet he is “astonished”. That’s interesting.
It’s hard to imagine DeLong doesn’t know what I am about to say. I imagine anybody with any academic sophistication is aware of it — especially economists. As Thorstein Veblen (an economist) pointed out in The Theory of the Leisure Class (1899), a great deal of what professors do, including economics professors, is about signaling high status. In economics, this is done by being highly mathematical. (Same in statistics. In art history, it is done by using big words. In engineering it is done by being theoretical. In many areas of science, it is done by using expensive equipment and having a large lab. In many fields it is done by being useless — e.g., preferring “pure” research over “applied” research.) This is no mystery. Economists think a lot about signaling. Michael Spence wrote an influential paper (which included Veblen’s phrase “conspicuous consumption”) and book about it, for example, for which he won a Nobel Prize. (More examples from economics.) But DeLong ignores the signaling of economists. Let me propose why economists haven’t taken the steps DeLong is astonished they haven’t taken: Because it would make them more useful and less mathematical. Thereby signaling lower status.
Why is signaling so common? It is basic biology, yes. But it is also convenient. Here is what Veblen didn’t say: It is so much easier to signal than to make progress. Among animals, it is much easier to signal you will win a fight than to actually win one. Among professors, it is easier to use big words than to write clearly. DeLong wants economists to choose progress over signaling. Shouldn’t an economist, of all people, expect the lower-priced option to be chosen?
Felix Salmon argues that fancy restaurants often manage to make their food a Veblen good — something that becomes more desirable when the price goes up. Restaurant food is a way to show off your wealth, in other words.
- “As reprehensible as many [drug] industry practices are, I [Marcia Angell] believe the behavior of much of the medical profession is even more culpable.”
- Richard Posner, judge and erstwhile economist, reveals he is unfamiliar with Thorstein Veblen.
- Notes on nonviolence by Gene Sharp. “They struggled against slavery, achieved voting rights for the disenfranchised, seceded from empires, undermined dictatorships, broke down racial segregation, strengthened exploited workers with the tools of the strike and economic boycott, restored independence to colonized nations, freed intended victims of the Holocaust, spread and defended civil liberties, achieved higher standards of living, ended discrimination, paralyzed an empire, and even defeated totalitarian systems.”
- Careerism in the military. Many similarities with careerism in health science. Careerism in health science is one reason many health problems, such as obesity and autism, are getting worse.
A friend of mine majored in design at Tsinghua and is now working as a designer. Her opinion of her education has gone down. Designers from other schools are better trained than she is, she sees.
At Tsinghua, her teachers denigrated learning to use this or that software program. To design something using a computer program was to be a design farmer, they said. They preferred to talk about big ideas. “I hate big ideas,” said my friend.
Her comments reminded me of law professors who would rather teach philosophy than how to be a lawyer (and are surprised when students play solitaire during class) and education professors who don’t teach their students how to teach.
From an excellent Atlantic article about John Ioannidis, who has published several papers saying that medical research is far less reliable than you might think:
A different oak tree at the site provides visitors with a chance to try their own hands at extracting a prophecy. “I [bring] all the researchers who visit me here, and almost every single one of them asks the tree the same question,” Ioannidis tells me . . . “‘Will my research grant be approved?’”
A good point. I’d say his main contribution, based on this article, is pointing out the low rate of repeatability of major medical findings. Until someone actually calculated that rate, it was hard to know what it was, unless you had inside experience. The rate turned out to be lower than a naive person might think. It was not lower than an insider might think, which explains the lack of disagreement:
David Gorski . . . noted in his prominent medical blog that when he presented Ioannidis’s paper on [lack of repeatability of] highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.”
I also like the way Ioannidis has emphasized the funding pressure that researchers face, as in that story about the oak tree. Obviously it translates into pressure to get positive results, which translates into overstatement.
I also think his critique of medical research has room for improvement:
1. Black/white thinking. He talks in terms of right and wrong. (“We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine.”) This is misleading. There is signal in all that medical research he criticizes; it’s just not as strong a signal as the researchers claimed. In other words the research he says is “wrong” has value. He’s doing the same thing as all those meta-analyses that ignore all research that isn’t of “high quality”.
2. Nihilism (which is a type of black/white thinking). For example,
How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.
I’ve paid a lot of attention to health-related research and benefited greatly. Many of the treatments I’ve studied through self-experimentation were based on health-related research. An example is omega-3. There is plenty of research suggesting its value and this encouraged me to try it. Likewise, there is plenty of evidence supporting the value of fermented foods. That evidence and many other studies (e.g., of hormesis) paint a large consistent picture.
3. Bias isn’t the only problem, but, in this article, he talks as if it is. Bias is a relatively minor problem: you can allow for it. Other problems you can’t allow for. One is the Veblenian tendency to show off. Thus big labs are better than small ones, regardless of which would make more progress. Big studies better than small, expensive equipment better than cheap, etc. And, above all, useless is better than useful. The other is a fundamental misunderstanding about what causes disease and how to fix it. A large fraction of health research money goes to researchers who think that studying this or that biochemical pathway or genetic mechanism will make a difference — for a disease that has an environmental cause. They are simply looking in the wrong place. I think the reason is at least partly Veblenian: To study genes is more “scientific” (= high-tech = expensive) than studying environments.
Thanks to Gary Wolf.
Nassim Taleb recently criticized the Nobel Prize in Economics:
According to Taleb, there are a number of mistaken ideas about forecasting and measuring risk, which all contribute to events like the 2008 global crisis. The Nobel prize, he says, has given them a stamp of approval, allowing them to propagate.
It isn’t just economics. As I’ve said before, the Nobel Prize in medicine was not given for the discovery that smoking causes lung cancer. It was not given for the discovery that lack of folate causes birth defects. Both enormously useful. It has been given for several discoveries, such as the connection between telomeres and aging, with (so far) little or no practical value.
This is no mystery. The Nobel Prize must be prestigious, therefore must honor high-prestige research. Veblen argued long ago that in academia high prestige correlates with low practical value. Just today I told a friend Veblen’s idea that professors use jargon for the same reason men wear ties — to show off how useless they are. The economics research (“Harry Markowitz, William Sharpe, Robert Merton, Myron Scholes, Robert Engle, Franco Modigliani and Merton Miller”) that Taleb is criticizing was high prestige. The so-far-useless biology that has received a Nobel Prize was high prestige; the highly-useful epidemiology that didn’t receive the prize was low prestige.
Thanks to Dave Lull.
One of the biggest ironies I hope you will take away from reading this book is that many of the tools likely to yield the strongest classroom results remain essentially beneath the notice of our theories and theorists of education.
Lemov continues with an example: Teaching students how to distribute classroom materials, such as handouts. This can save a lot of time. Then he adds:
Unfortunately this dizzyingly efficient technique — so efficient it is all but a moral imperative for teachers to use it — remains beneath the notice of our avatars of educational theory. There isn’t a school of education that would stoop to teach its aspiring teachers how to train their students to pass out papers.
The last chapter of Veblen’s Theory of the Leisure Class is about just this — the importance that professors (like everyone else) place on status display and how this interferes with their effectiveness. The connection with self-experimentation is that no matter how effective it is, no psychology department would stoop to teach it. Or, at least, that’s the current state of affairs.
The book’s index doesn’t include Veblen, although it does include Richard Thaler.
I’ve blogged many times about the desire of professors to show off and how it interferes with being useful. It doesn’t just make them bad teachers, it makes them bad scientists. Here’s an example from economics (via Marginal Revolution):
“The mainstream of academic research in macroeconomics puts theoretical coherence and elegance first, and investigating the data second,” says Mr. Rogoff. For that reason, he says, much of the profession’s celebrated work “was not terribly useful in either predicting the financial crisis, or in assessing how it would play out once it happened.”
“[Academic economists] almost pride themselves on not paying attention to current events,” he says.
Pure Veblen, who in Theory of the Leisure Class provided many examples of people, including professors, priding themselves on being useless. Men wear ties, he said, to show they don’t do manual labor (which is clearly useful).
My research is closer to biology, where you can say the same thing: much of the profession’s celebrated work has not been terribly useful. Yesterday I gave an example (the oncogene theory of cancer).
A new Jeffrey Eugenides short story quotes Derrida. Quote 1:
In that sense it is the Aufhebung of other writings, particularly of hieroglyphic script and of the Leibnizian characteristic that had been criticized previously through one and the same gesture.
Quote 2:

What writing itself, in its nonphonetic moment, betrays, is life. It menaces at once the breath, the spirit, and history as the spirit’s relationship with itself. It is their end, their finitude, their paralysis.
“A little Derrida goes a long way and a lot of Derrida goes a little way,” said a friend of mine who was a graduate student in English. These quotes show why. In Theory of the Leisure Class, Veblen argued that professors write like this (and assign such stuff to their students) to show status. I have yet to hear a convincing refutation of this explanation, or a plausible alternative. Is there one?
Veblen was saying that professors are like everyone else. Think of English professors as a model system. Their showing-off is especially clear. It’s pretty harmless, too, but when a biology professor (say) pursues a high-status line of research about some disease rather than a low-status but more effective one, it does — if it happens a lot — hurt the rest of us. Sleep researchers, for example, could do lots of self-experimentation but don’t, presumably because it’s low-status. And poor sleep is a real problem. Throughout medical school labs, researchers are studying the biochemical mechanism and genetic basis of this or that disorder. I’m sure this is likely to be less effective in helping people avoid that disorder than studying its environmental roots, but such lines of research allow the researchers to request expensive equipment and work in clean isolated laboratories — higher status than cheap equipment and getting your hands dirty. I don’t mean high-status research shouldn’t happen; we need diversity of research. But, like the thinking illustrated by the Derrida quotes, there’s too much of it. A little biochemical-mechanism research goes a long way and a lot of biochemical-mechanism research goes a little way.
An assistant manager at Marshall Fields, the Chicago department store, told Gawker the following story:
I was walking through the floor, and I hear a voice call my name. . . . Once she started speaking to me, I realized it was Oprah. Honestly, she is unrecognizable without the spackle/wig. Anyway, she was very nice, and asked me if I would offer my opinion on a china pattern she was looking at for her house. It was Villeroy and Boch (German, middle-range) “Petite Fleur.” Very cute, kind of French-country, with a small, scattered floral design. I said, “What’s not to like?” Oprah responded, “Well, it’s not that expensive, and I don’t want people who come to my house to think I’m cheap.”
Andrew Gelman writes:
If I had to come up with one statistical tip that would be most useful to you–that is, good advice that’s easy to apply and which you might not already know–it would be to use transformations. Log, square-root, etc.–yes, all that, but more! I’m talking about transforming a continuous variable into several discrete variables (to model nonlinear patterns such as voting by age) and combining several discrete variables to make something [more] continuous (those “total scores” that we all love). And not doing dumb transformations such as the use of a threshold to break up a perfectly useful continuous variable into something binary. I don’t care if the threshold is “clinically relevant” or whatever–just don’t do it. If you gotta discretize, for Christ’s sake break the variable into 3 categories.
I agree (and wrote an article about it). Transforming data is so important that intro stats texts should have a whole chapter on it — but instead barely mention it. A good discussion of transformation would also include use of principal components to boil down many variables into a much smaller number. (You should do this twice — once with your independent variables, once with your dependent variables.) Many researchers measure many things (e.g., a questionnaire with 50 questions, a blood test that measures 10 components) and then foolishly correlate all independent variables with all dependent variables. They end up testing dozens of likely-to-be-zero correlations for significance. Thereby effectively throwing all their data away — when you do dozens of such tests, none can be trusted.
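To make the principal-components point concrete, here is a toy sketch. The data are invented (10 questionnaire items, 5 blood measures, all partly reflecting one made-up latent trait), so treat it as an illustration of the idea, not a recipe for any particular study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Invented data: 10 questionnaire items and 5 blood measures that each
# partly reflect the same underlying trait, plus independent noise.
trait = rng.normal(size=n)
items = 0.7 * trait[:, None] + rng.normal(size=(n, 10))
blood = 0.5 * trait[:, None] + rng.normal(size=(n, 5))

# Naive approach: 10 x 5 = 50 separate correlations, each weak, and none
# trustworthy after that many significance tests.
naive = np.array([[np.corrcoef(items[:, i], blood[:, j])[0, 1]
                   for j in range(5)] for i in range(10)])

def first_pc_score(x):
    # Score of each row on the first principal component
    # (via SVD of the column-centered data matrix).
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[0]

# Better: boil each side down to one component score, then test once.
r = np.corrcoef(first_pc_score(items), first_pc_score(blood))[0, 1]

print(f"median |r| across the 50 naive tests: {np.median(np.abs(naive)):.2f}")
print(f"|r| between the two component scores: {abs(r):.2f}")
```

The single correlation between component scores is far stronger than the typical item-by-item correlation, because each component score averages away much of the per-measure noise.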
My explanation why this isn’t taught differs from Andrew’s. I think it’s pure Veblen: professors dislike appearing useful and like showing off. Statistics professors, like engineering professors, do less useful research than you might expect, so they are less aware than you might expect of how useful transformations are. And because most transformations don’t involve esoteric math, writing about them doesn’t allow you to show off.
In my experience, not transforming your data is at least as bad as throwing half of it away, in the sense that your tests will be that much less sensitive.
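A toy simulation (all numbers invented) makes the sensitivity point concrete: with skewed data, a t-test on log-transformed values detects a real group difference far more often than the same test on the raw values.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims, n = 500, 20          # number of simulations; per-group sample size
t_crit = 2.02                # approx. two-sided 5% cutoff for a t-test, 38 df

def t_stat(a, b):
    # Pooled two-sample t statistic (equal group sizes).
    sp2 = (a.var(ddof=1) + b.var(ddof=1)) / 2
    return (a.mean() - b.mean()) / np.sqrt(sp2 * 2 / len(a))

hits_raw = hits_log = 0
for _ in range(n_sims):
    # Skewed (lognormal) measurements; group b is genuinely higher,
    # by a multiplicative factor.
    a = rng.lognormal(mean=0.0, sigma=1.5, size=n)
    b = rng.lognormal(mean=1.0, sigma=1.5, size=n)
    hits_raw += abs(t_stat(a, b)) > t_crit
    hits_log += abs(t_stat(np.log(a), np.log(b))) > t_crit

power_raw = hits_raw / n_sims
power_log = hits_log / n_sims
print(f"detection rate, raw data:        {power_raw:.2f}")
print(f"detection rate, log-transformed: {power_log:.2f}")
```

On the log scale the difference is a plain shift with well-behaved noise, so the test finds it much more reliably; on the raw scale the long right tail inflates the variance and buries the same difference.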
Describing those who made money in the subprime mortgage market, Michael Lewis said this:
They were outsiders to the market that they were betting on. And in addition, they were, in many cases, personally curious people, not clubbable members of the group. And I think that was a key to the success. I think that the fact that they didn’t feel compelled in any way, on any level, to think like other people gave them an advantage.
This is what Thorstein Veblen said about Jews in a 1917 essay titled “The Intellectual Pre-eminence of Jews in Modern Europe.” Being outsiders gave them freedom of thought. Lewis may have read that essay. A few years ago, he compiled an anthology of economic classics, one of which was Veblen’s Theory of the Leisure Class. I mentioned this essay earlier.