How Little We Know: Big Gaps in Psychology and Economics

Seth’s final paper “How Little We Know: Big Gaps in Psychology and Economics” is published in a special issue of the International Journal of Comparative Psychology (Vol. 27, Issue 2, 2014). The issue is about behavioral variability and is dedicated to Seth. The abstract of the paper follows:

A rule about variability is that reducing expectation of reward increases variation of the form of rewarded actions. This rule helps animals learn what to do at a food source, something that is rarely studied. Almost all instrumental learning experiments start later in the learning-how-to-forage process. They start after the animal has learned where to find food and how to find it. Because of the exclusions (no study of learning where, no study of learning how), we know almost nothing about these two sorts of learning. The study of human learning by psychologists has a similar gap. Motivation to contact new material (curiosity) is not studied. Walking may increase curiosity, some evidence suggests. In economics, likewise, research is almost all about how people use economically valuable knowledge. The creation and spread of knowledge are rarely studied.

The family is grateful to Aaron Blaisdell Ph.D. who completed final edits to Seth’s final manuscript for publication.

Who is the Smartest Person Who Believes Climate Change Fear-Mongering?

A few days ago I read about Apple CEO Tim Cook’s response to a shareholder complaint about sustainability programs:

At a shareholders meeting on Friday, CEO Tim Cook angrily defended Apple’s environmentally-friendly practices against a request from the conservative National Center for Public Policy Research (NCPPR) to drop those practices if they ever became unprofitable.

I support the practices Cook defended. But the incident was summarized by a headline writer like this: “Tim Cook tells off climate change deniers.” I am a climate change denier in the sense that I don’t believe that there is persuasive evidence that humans are dangerously warming the planet.

The headline — not what actually happened — reminded me of something surprising and puzzling I noticed soon after I became an assistant professor at Berkeley. I attended several colloquium talks — hour-long talks about research, usually by a visitor, a professor from somewhere else — every week. Now and then the speaker would omit essential information. Such as what the y axis was. Or what the points were. The missing information made it impossible to understand what the speaker was saying.

I didn’t expect graduate students to interrupt to ask for the missing info but surely, I thought, one of the five or eight professors in the room would. We all need to know this, I thought. Yet none of them spoke up. I cannot think of a single example of a professor speaking up when this happened (except me). Even now I am unsure why this happened (and no doubt still happens). Maybe it reflects insecurity.

I mention climate change on this blog because it is interesting that so many intelligent supposedly independent-thinking people actually believe, or claim to believe, that humans are dangerously warming the planet. The evidence for the supposedly undeniable claim (“97% of scientists agree!”) is indistinguishable from zero. Of course journalists, such as Elizabeth Kolbert of The New Yorker and Bill McKibben (a former journalist), are often English majors and intimidated by scientists. I don’t expect them to question what scientists say, although questioning authority is half their job. Of course actual climate scientists do not dissent, for fear of career damage. It is when smart people who are not journalists or climate scientists take this stuff seriously that I am impressed. Just as I was impressed by Berkeley professors who did nothing when they didn’t understand what they were being told.

It seems to me that the smarter you are the more easily you can see that climate change fear-mongering is nonsense. There must be some other important human quality (conformity? religiosity? diffidence? status-seeking? fear of failure?) that interferes with intelligence in non-trivial ways. To try to figure out what the quality is, I ask: who is the smartest person you know who believes global warming fear-mongering? Does that person do other extreme or unusual things? These might shed light on what the intelligence-opposing personality trait is.

People talk about intelligence quite often (“you’re so smart!” “she’s very bright”). Many people, including me, think it matters. There are tests for it. But this other trait, which can negate intelligence and therefore is just as important…not so much. In my experience, not at all. My fellow Berkeley professors were very smart. But they were also something else, much less apparent.

The Conditioned-Tolerance Explanation of “Overdose” Death

I recently blogged about Shepard Siegel’s idea that heroin “overdose” deaths — such as Philip Seymour Hoffman’s — are often due to a failure of conditioned tolerance. In the 1970s and 80s, Siegel proposed that taking a drug in Situation X causes learning of a situation-drug association. Due to this association, Situation X alone (no drug) will cause an internal response opposite to the drug effect. For example, coffee wakes us up. If you repeatedly drink coffee in Situation X, exposure to Situation X without coffee will make you sleepy. As the learned response opposing the drug effect grows, larger amounts of the drug can be tolerated and the user needs larger amounts of the drug to get the same overall (apparent) effect — the same high, for example. Trying to get the same high, users take larger and larger amounts. But if you take a really large amount of the drug and don’t simultaneously evoke the opposing response, you may die. What is called “overdose” death may be due to a failure to evoke the conditioned response in the opposite direction.
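Siegel’s account can be sketched as a toy model. This is my illustration, not anything from Siegel’s papers, and all the numbers (learning rate, asymptote) are made up; it only shows the logic: the familiar context acquires an opposing response that grows with each exposure, so the net effect of a fixed dose shrinks there, while the same dose taken in a novel context meets no opposing response.

```python
# Toy model of Siegel-style conditioned tolerance (illustrative numbers only).

def opposing_response(n_exposures, asymptote=0.8, rate=0.3):
    """Strength of the learned context-drug opposing response.

    Grows toward `asymptote` with each drug exposure in the same context."""
    return asymptote * (1 - (1 - rate) ** n_exposures)

def net_effect(dose, n_exposures, familiar_context=True):
    """Net drug effect: raw dose effect minus the conditioned opposing
    response, which is evoked only in the familiar context."""
    opposing = opposing_response(n_exposures) if familiar_context else 0.0
    return dose * (1 - opposing)

dose = 1.0
for n in (0, 5, 20):
    # Net effect of the same dose shrinks as exposures accumulate.
    print(n, round(net_effect(dose, n, familiar_context=True), 3))

# The same dose in a NOVEL context meets no opposing response:
print(round(net_effect(dose, 20, familiar_context=False), 3))  # prints 1.0, the full dose
```

On this sketch, a long-time user who has escalated the dose to overcome tolerance, then takes that dose somewhere new, gets something close to the full, un-opposed effect — the proposed mechanism of “overdose” death.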

Siegel’s Science paper about this — a demonstration with rats — appeared in 1982. Since then, plenty of evidence suggests the idea is important.

Philip Seymour Hoffman’s and Cory Monteith’s Deaths From Heroin: Why?

Philip Seymour Hoffman, the great actor, was found dead a few days ago with a needle in his arm. Last year, Cory Monteith, the actor, died in similar circumstances. Why did they die? It was hardly the first time they’d taken heroin.

Starting in the 1970s, Shepard Siegel, a psychology professor at McMaster University, did a series of rat experiments that showed that drug tolerance and craving involve a large amount of Pavlovian conditioning. Repeated exposure to Drug X (e.g., by injection) in Situation Y (e.g., your bedroom at 11 p.m.) will cause learning of an association between X and Y. This association has two effects. First, when exposed to Y, you will crave X. Second, when you take Drug X in Situation Y, the effect of the drug is diminished. You become “tolerant” to it.

Assorted Links

Thanks to Aaron Blaisdell and Peter Lewis.

Signaling and Higher Education: Email With Bryan Caplan

I recently emailed back and forth with Bryan Caplan about a signaling view of higher education, which Bryan elaborates in these slides. I wrote to him:

Having looked at your slides, I would say we pretty much agree. I think employers have little control over the content of college education and, as you say, use quality of college because it works better than IQ tests and the like.

Perhaps we also agree that just as British aristocrats have a lot less power now than they did 200 years ago — the message of Downton Abbey — so are American college professors slowly losing power. MOOCs are one example, blogs are another. Parents and professors are quite happy with the current system, students and employers are not, and they are gaining power. That is my theory, anyway.
I think a signaling explanation does a very good job of explaining why sense of humor matters so much, especially in mate choice. Sense of humor = Nature’s IQ test. Sense of humor signals problem-solving ability, which really matters but is hard to measure directly. I used to think that we have two basic tasks in life, manipulating things and manipulating other people (long ago nobody was depressed, etc.), and that they were really different.

Queen Late

When a Chinese friend of mine was in first grade, she was habitually late for school. Usually about ten minutes. Her mom took her to school on a bike. One day she was 20 minutes late. The door was closed. My friend opened the door. “May I come in?” she asked the teacher. The teacher came to the door. She took my friend to the front of the class. “Here is Queen Late (迟到大王),” she said.

Everyone laughed, including my friend. She thought it was a funny thing to say, not mean. The name stuck. Many years later, she was called Queen Late by those who knew her in primary school. Her teacher was not a great wit. Other students at other schools were called the same thing. It was/is a standard joke.

Sometimes I think Chinese have, on average, a better sense of humor than Americans, but who really knows? A more interesting contrast is how lateness is handled. At UC Berkeley, about 20 years ago, I attended a large lecture class (Poli Sci 3, Comparative Politics) taught by Ken Jowitt, a political science professor. Jowitt was considered an excellent lecturer, which was why I was there, but he was also famous for being hard on students who came in late. When I was there, a student came in late. Jowitt interrupted what he was saying to point out the offender and said something derogatory.  I don’t remember what Jowitt said but I do remember thinking — as someone who also taught large lecture classes where students came in late — that he was making a mountain, an unattractive mountain, out of a molehill. It didn’t occur to me to wonder how he could have dealt with the problem in a way that made everyone laugh.


Drawing a Line Where No Line Was Needed: GQ Editor Defends Hugo Boss

The comedian Russell Brand, at a GQ awards show in London, “joked” — according to Brand, it was a joke — that the sponsor of the event, Hugo Boss, clothed the Nazis. Fine. More interesting to me was something that happened later. According to Brand, the following conversation took place:

GQ editor Dylan Jones: What you did was very offensive to Hugo Boss.

Brand: What Hugo Boss did was very offensive to the Jews.

Sure, Jones was upset. But nothing in his job description requires him to defend Hugo Boss. Especially in the least nuanced possible way. In contrast to Brand’s criticism of Boss, which makes Brand look good, Jones’s criticism of Brand, if it has any effect at all (probably not), makes Jones look foolish. He did not make his remark out of carefully-calibrated self-interest.

Jones’s comment interests me because now and then something in my head pushes me to do two things I know are unwise:

1. Tell someone else what to do when there is no reason to think they want my advice.

2. Simplify a complicated situation.

Jones did both things. I try to resist — try to say nothing — but am not always successful. Maybe professors are fond of teaching what they call “critical thinking” because it allows them to indulge Desire #1. On the face of it, appreciative thinking — especially nuanced appreciation — seems at least as important, but I have never heard a professor say he teaches that.

Deirdre McCloskey and Me

In an appreciation of Ronald Coase, I came across an article by Deirdre McCloskey, the economist. It reminded me of our back-and-forth emails in 2007 about her and Lynn Conway’s treatment of Michael Bailey, who had written a book they hated. I reread the emails and found them still interesting, especially McCloskey’s claim that she and Conway have/had no special power. Is there a variant of sophistry that refers to self-deception? You can read the whole correspondence: McCloskey’s version, which omits my final email, or my version (“McCloskey and Me: A Back-and-Forth”, plus plenty of context — my article starts on p. 117 of the 139 pp.).

Thank god she and Conway failed to end Bailey’s career. The Man Who Would Be Queen (pdf) — about male homosexuals and cross-dressers — remains the best psychology book I have ever read. Last year I assigned my Tsinghua students to read a third of it (any third they wanted). One student said it was so good she read the whole thing.

A Little-Noticed Male/Female Difference: Pressure to Conform

In Americanah, Chimamanda Adichie’s new novel, she writes (p. 240):

Ojiugo wore orange lipstick and ripped jeans, spoke bluntly, and smoked in public, provoking vicious gossip and dislike from other girls, not because she did those things but because she dared to without having lived abroad, or having a foreign parent, those qualities that would have made them forgive her lack of conformity.

Here is another example, from a profile of Claire Danes:

She changed schools twice, “fleeing one mean girl only to find another incarnation of that same girl in the next school.” She was targeted for her looks, her nerdy curiosity, her refusal to conform.

My impression is that these examples illustrate a large male/female difference: Women will commonly criticize another woman for lack of conformity (unless somehow “earned”); men are much less likely to criticize another man this way. When women do it, it is called being catty. There is no equivalent term when men do it — presumably because no one invents a term for something that doesn’t happen.

I have never seen this mentioned in the literature on male/female differences (nor in Sheryl Sandberg’s Lean In). It isn’t easy to explain. Could it be learned? Well, in my experience girls are under more pressure to “act a certain way” than boys (Japan is an example), but I can’t explain that, either, nor can I see why that would translate to women putting pressure on other women to conform.

One reason this tendency is hard to explain is its effect on leadership. Putting pressure on other women to conform makes it harder for women to become leaders — leadership is the opposite of conformity. Making it harder for women to be leaders makes it easier for men to be leaders. It is hard to see how this particular effect (there are many others) benefits women.

Rewarding Criticism Put Nicely Produced Long-Lasting Change

Eliezer Yudkowsky, I’m told, used to be a not-nice critic. The problem was his delivery: “blunt, harsh, not sufficiently tempered by praise for praiseworthy things” (Alicorn Finley). However, this changed about a year ago, when Anna Salamon and Alicorn Finley decided to try to train him to be nicer. Alicorn describes it like this:

Me, Eliezer, Anna, and Michael Blume were all sitting in my and Michael’s room (where we lived two houses ago) working on, I think it was, a rationality kata [= way of doing things], and we were producing examples and critiquing each other.  Eliezer sometimes critiqued in a motivation-draining way, so we started offering him M&Ms when he put things more nicely.  (We also claimed M&Ms when we accomplished small increments of what we were working on.)

Eliezer added:

Some updates on that story. M&M’s didn’t work when I tried to reward myself with them later, and I suspect several key points:

1)  The smiles/approval from the (highly respected) friends feeding me the M&Ms probably counted for more than the taste sensation.

2)  Being overweight, M&Ms on their own would be associated with shame/guilt/horror/wishing I never had to eat again etc.

3)  Others have also reported food rewards not working.  One person says that food rewards worked for them after they ensured that they were hungry and could only eat via food rewards.

4)  I suspect that the basic reinforcement pattern will only work for me if I reward above-average performance or improvement in performance (positive slope) rather than trying to reward constant performance, because only this makes me feel that the reward is really ‘deserved’.

Also:

  • Andrew Critch advises that ‘step zero’ in this process is to make sure that you have good internal agreement on wanting the change before rewarding movements in the direction of the change
  • The Center for Applied Rationality (CFAR) has some experience learning to teach this.
  • CFAR has excellent workshops but not much published/online material.  A good mainstream book is Don’t Shoot the Dog by Karen Pryor.

I like this example because the change was long-lasting and important.

Suicidal Gestures at Princeton: A Staggering Increase

A friend of mine knows a retired head of psychological services at Princeton University. She told him that in the 1970s, there were one or two suicidal gestures per year. Recently, however, there have been one or two per day.

Something is terribly, horribly wrong. Maybe the increase is due to something at Princeton. For example, maybe new dorms are more isolating than the old dorms they replaced. Or maybe the increase has nothing to do with Princeton. For example, maybe the increase is due to antidepressants, much more common now than in the 1970s.

Whatever the cause, it would help all Princeton students, present and future, and probably millions of others, if the problem were made public so that anyone, not just a vanishingly small number of people, could try to solve it. It isn’t even clear that anyone is trying to explain/understand/learn from the increase.

Princeton almost surely has records that show the increase. If, as is likely, Princeton administrators never allow the increase to be documented, it will be a tragedy. It is an extraordinary and unprecedented clue about what causes suicidal gestures. Nothing in all mental health epidemiology has found a change by a factor of a hundred or more — much less a mysterious huge change.

The increase is an unintended consequence of something else, but what? Because it is so large, there must be something extremely important that most people, or at least Princeton administrators, don’t understand about mental health. The answer might involve seeing faces at night. I found that seeing faces in the morning produced an enormous boost in mood and that faces at night had the opposite effect. I cannot say, however, why seeing faces at night would have increased so much from the 1970s to now.

More about Give and Take by Adam Grant

Yesterday I commented on Give and Take by Adam Grant, a professor at Wharton who teaches organizational psychology.

When Grant was a graduate student (at the University of Michigan), he was asked to help people at the university’s fund-raising call center raise more money. They call alumni, asking for money. The person who ran the center had tried the usual motivational tactics, such as offering bonuses. They hadn’t worked.

Grant noticed that most of the money being raised went for scholarships. He tried various ways of making the call center employees aware that the money they raised helped students directly. The most effective way turned out to be a 5-minute meeting with a scholarship recipient. This had a staggering effect:

The average caller doubled in calls per hour and minutes on the phone per week . . . Revenue quintupled: callers averaged $412 [per week] before meeting the scholarship recipient and more than $2000 afterward.

A huge effect — and a useful huge effect. And one that is not even hinted at in countless introductory psychology books. Notice that the physical conditions of the job and the “physical” payoff (the salary) didn’t change. All that changed was employees’ mental models of their job.

I conclude that people are far more motivated by a desire to help others than you would ever guess from reading psychology textbooks — and, even more, from reading economics textbooks. Grant says nothing about this, at least in the book, but I’d guess that the employees were considerably happier at their jobs as well. You might think that there has been so much research on job design that there were no big effects left to be discovered. You’d be wrong.

Give and Take by Adam Grant

The publisher sent me a copy of Give and Take by Adam Grant after I sent several emails asking for a review copy. I expected it to be the best book about psychology in many years and it is.

The book’s main theme is the non-obvious advantages of being a “giver” (someone who helps others without concern about payback). Grant teaches at Wharton, whose students apparently enter Wharton believing (or are taught there?) that this is a poor strategy. With dozens of studies and stories, Grant argues that the truth is more complicated — that a giver, properly focussed, does better than others. Whether this reflects cause and effect (Grant seems to say it does) I have no idea. Perhaps “givers” are psychologically unusually sophisticated in many ways, not just a relaxed attitude toward payback, and that is why some of them do very well.

Assorted Links

Thanks to Greg Pomerantz and Casey Manion.

“Brain Games are Bogus”: More Trouble for Posit Science

Posit Science’s training — which I raised questions about — is aimed at older people. However, it would be surprising if brain games have no effect until you reach a certain age. More plausible is that they never provide substantial benefits — at least, benefits broad enough and strong enough and long-lasting enough to be worth the training time (one hour/day for many weeks).

I read a Posit Science paper, with older subjects, that seemed to me to show that its training had little lasting benefit. The stated conclusions of the paper were more positive. Too bad the head of Posit Science didn’t answer most of my questions.

Thanks to Alex Chernavsky.


Posit Science: Does It Work? (Continued)

In an earlier post I asked 15 questions about Zelinski et al. (2011) (“Improvement in memory with plasticity-based adaptive cognitive training: results of the 3-month follow-up”), a study done to measure the efficacy of the brain training sold by Posit Science. The study asked if the effects of training were detectable three months after it stopped. Henry Mahncke, the head of Posit Science, recently sent me answers to a few of my questions.

Most of my questions he declined to answer. He didn’t answer them, he said, because they contained “innuendo”. My questions were ordinary tough (or “critical”) questions. Their negative slant was not at all hidden (in contrast to innuendo). For the questions he didn’t answer, he substituted less critical questions. I give a few examples below. Unwillingness to answer tough questions about a study raises doubts about it.

His answers raised more doubts.

Consistent- versus Inconsistent-Handed Predicts Better than Right- versus Left-Handed

At Berkeley, Andrew Gelman and I taught a freshman seminar about left-handedness. Half the students were left-handed. We did two fascinating studies with them that found that left-handers tend to have left-handed friends. I kick myself for not publishing those results, which I bring up in conversation again and again.

After the class ended I got a call from a journalist who was writing an article about ridiculous classes. I told him the left-handedness class had value as a way of introducing methodological issues but all I cared about was that his article be accurate. He decided not to include our class in his examples.

Stephen Christman, who got his Ph.D. from Berkeley (and did quirky interesting stuff even as a graduate student), and two colleagues have now published a paper that is a considerable step forward in the understanding of handedness. They argue that what really matters is not the direction of handedness but its consistency. The terms left-handed and right-handed hide a confound. Right-handers almost all have very consistent handedness (they do everything with the right hand). In contrast, left-handers much more often have inconsistent handedness: they do some things with the left hand, some with the right. I am a good example. I write with my right hand, bat and throw left-handed, play tennis left-handed, ping-pong right-handed. In fact, I am right-wristed and left-armed. When something involves wrist movement (writing, ping-pong) I use my right hand. When something involves arm movement (batting, throwing a ball, tennis), I use my left hand. Right-handers are much more similar to each other than left-handers.

Christman and his co-authors point to two things: 1. When you can get enough subjects to unconfound the two variables, it turns out that consistency of handedness is what makes the difference. Consistent left-handers resemble consistent right-handers.  2. Consistency of handedness predicts many things. Inconsistent-handers are less authoritarian than consistent-handers. They show more of a placebo effect. They have better memory for paragraphs. And on and on — about 20 differences. It isn’t easy to say what all these differences have in common but maybe inconsistent-handers are more flexible in their beliefs. (Which would explain the friendship findings in our handedness class.)

I think about these differences as another example of how every economy needs diversity and our brains have been shaped to provide it, one idea underlying my theory of human evolution. Presidents of the United States are left-handed much more than the general population. For example, Obama is left-handed. The difference between Presidents and everyone else is overwhelming and must mean something. Yet left-handers die younger. I would say that in any group of people you need a certain fraction, not necessarily large, to be open-minded and realistic. That describes inconsistent-handers (who are usually left-handed). These people make good leaders because they will respond to changing conditions. People who are not open-minded make good followers. Just as important as realism is cooperation, ability to work together toward a common goal.


Posit Science: More Questions

Posit Science is a San Francisco company, started by Michael Merzenich (UCSF) and others, that sells access to brain-training exercises aimed at older adults. Their training program, they say, will make you “remember more”, “focus better”, and “think faster”. A friend recently sent me a 2011 paper (“Improvement in memory with plasticity-based adaptive cognitive training: results of the 3-month follow-up” by Elizabeth Zelinski and others, published in the Journal of the American Geriatrics Society) that describes a study about Posit Science training. The study asked if the improvements due to training are detectable three months after training stops. The training takes long enough (1 hour/day in the study) that you wouldn’t want to do it forever. The study appears to have been entirely funded by Posit Science.

I found the paper puzzling in several ways. I sent the corresponding author and the head of Posit Science a list of questions:

1. Isn’t it correct that after three months there was no longer reliable improvement due to training according to the main measure that was chosen by you (the investigators) in advance? If so, shouldn’t that have been the main conclusion (e.g., in the abstract and final paragraph)?

2. The training is barely described. The entire description is this: “a brain plasticity-based computer program designed to improve the speed and accuracy of auditory information processing and to engage neuromodulatory systems.” To learn more, readers are referred to a paper that is not easily available — in particular, I could not find it on the Posit Science website. Because the training is so briefly described, I was unable to judge how much the outcome tests differ from the training tasks. This made it impossible for me to judge how much the training generalizes to other tasks — which is the whole point. Why wasn’t the training better described?

3. What was the “ET [experimental treatment] processing speed exercise”? It sounds like a reaction-time task. People will get faster at any reaction-time task if given extensive practice on that task. How is such improvement relevant to daily life? If it is irrelevant, why is it given considerable attention (one of the paper’s four graphs)?

4. According to Table 2, the CSRQ (Cognitive Self-Report Questionnaire) questions showed no significant improvement in trainees’ perceptions of their own daily cognitive functioning, although the p value was close to 0.05. Given the large sample size (~500), this failure to find significant improvement suggests the self-report improvements were small or zero. Why wasn’t this discussed? Is the amount of improvement suggested by Posit Science’s marketing consistent with these results?

5. Is it possible that the improvement subjects experienced was due to the acquisition of strategies for dealing with rapidly presented auditory material, and especially for focusing on the literal words (rather than on their meaning, as may be the usual approach taken in daily life)? If so, is it possible that the skills being improved have little value in daily life, explaining the lack of effect on the CSRQ?

6. In the Methods section, you write “In the a priori data analysis plan for the IMPACT Study, it was hypothesized that the tests constituting the secondary outcome measure would be more sensitive than the RBANS given their larger raw score ranges and sensitivity to cognitive aging effects.” Do the initial post-training tests (measurements of the training effect soon after training ended) support this hypothesis? Why aren’t the initial post-training results described so that readers can see for themselves if this hypothesis is plausible? If you thought the “secondary outcome measure would be more sensitive than the RBANS” why wasn’t the secondary outcome measure the primary measure?

7. The primary outcome measure was some of the RBANS (Repeatable Battery for the Assessment of Neuropsychological Status). Did subjects take the whole RBANS or only part of it? If they took the whole RBANS, what were the results with the rest of the RBANS (the subtests not included in the primary outcome measure)?

8. The data analysis refers to a “secondary composite measure”. Why that particular composite and not any of the many other possible composite measures? Were other secondary composite measures considered? If so, were p values corrected for this?

9. If Test A resembles training more closely than Test B, Test A should show more effect of training (at any retention interval) than Test B. In this case Test A = the RBANS auditory subtests and Test B = the secondary composite measure.  In contrast to this prediction, you found that Test B showed a clearer training effect (in terms of p value) than Test A. Why wasn’t this anomaly discussed (beyond what was said in the Methods section)?

10. Were any tests given the subjects not described in this report? If there were other tests, why were their results not described?

11. The secondary composite measure is composed of several memory tests and called “Overall Memory”. The Posit Science website says their training will not only help you “remember more” but also “think faster” and “focus better”. Why weren’t tests of thinking speed (different from the training tasks) and focus included in the assessment?

12. Do the results support the idea that the training causes trainees to “focus better”?

13. The Posit Science homepage suggests that their training increases “intelligence”. Was intelligence measured in this study? If not, why not?

14. Do the results support the idea that the training causes trainees to become more intelligent?

15. The only test of thinking speed included in the assessment appears to be a reaction-time task that was part of the training. Are you saying that getting faster on one reaction-time task after lots of practice with that task shows that your training causes trainees to “think faster”?
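The reasoning behind question 4 — that with roughly 500 subjects, a p value just above 0.05 can only correspond to a small effect — can be sketched numerically. This is my back-of-the-envelope illustration, not a calculation from the paper; I assume a simple two-group comparison with equal group sizes, which may not match the study’s actual design or analysis.

```python
# Rough sketch: in a two-group design, t ≈ d * sqrt(n/2) where d is
# Cohen's d and n is the per-group sample size, so d ≈ t * sqrt(2/n).
import math

def approx_cohens_d(t, n_per_group):
    """Effect size (Cohen's d) implied by a t statistic in a
    two-group comparison with n subjects per group."""
    return t * math.sqrt(2.0 / n_per_group)

# p ~ 0.05 (two-sided) corresponds to t ~ 1.96 in a large sample.
# ~500 subjects split into two groups of ~250:
d = approx_cohens_d(1.96, 250)
print(round(d, 2))  # 0.18 -- "small" by the usual d = 0.2 benchmark
```

So even if the CSRQ difference had just crossed the 0.05 threshold, the implied effect on self-reported daily functioning would still have been small — which is the point of the question.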

Update: Henry Mahncke, the head of Posit Science, said that he would be happy to answer these questions by phone. I replied that I was sure many people were curious about the answers and written answers would be much easier to share.

Further update: Mahncke replied that he would prefer a phone call and that some of the questions seemed to him hard to answer in writing. He said nothing about the sharing problem. I repeated my belief that many people are interested in the answers and that a phone call would be hard to share. I offered to rewrite any questions that seemed hard to answer in writing.

Earlier questions for Posit Science.


Assorted Links

Thanks to Paul Nash, Grace Liu and Anne Weiss.