In a recent post I described an amazingly influential class on epidemiology taught by Leonard Syme, a professor in the UC Berkeley School of Public Health. Andrew Gelman (“inspiring”) and Matthew Henty (“THIS is how to learn”) were impressed. To find out more about the class, I asked Syme a few questions:
1. What gave you the idea of teaching the course this way?
I was struck by the fact that we can’t do classic experiments in epidemiology. We can’t assign one randomly selected group of babies to be smokers for the rest of their lives and another random group to refrain. Instead, we have to study people as we find them (in religious groups, in jobs, in various locations, in various marital statuses, etc.) and then try to adjust statistically for the things we think might be confounders. In general, we end up with evidence that is not very good, and the burden on us is to assess the data very, very carefully. I have defined epidemiology as the activity of evaluating lousy data as best we can. The class merely illustrated this issue. The theme of the class was how so many bright and caring people could come to such different conclusions looking at basically the same data. The lesson was that we really needed to be clear about our biases and expectations and that we needed to think about the data as carefully as possible. I thought the class should have been called “The Sociology of Knowledge”.
2. What were a few of the accepted ideas that you covered?
a. Everyone knew that high-fat diets were related to serum lipids and coronary heart disease. The data then (and now) do not support that belief.
b. Everyone knew that the surgical treatment of breast cancer required radical surgery. There was a rumor that lumpectomy would do as good a job but few people believed that. The evidence showed that a more limited procedure was just as good.
c. Some people had been calling for research on the relationship between race and IQ. Most scholars argued that no good could come from such research and were refusing to fund it. How do we decide what is worth studying? Because there might be harm?
d. A major national clinical trial on the treatment of diabetes showed no effect, but it turned out the randomization procedures were seriously flawed. People in the treated group consistently had higher risk factors to begin with, and this doomed the trial. How do we take account of the fact that randomization is a method, not a guaranteed result? Unbalanced randomizations will occur with predictable regularity. This study led to the idea of stratification in sampling.
e. Everyone knew that multiphasic screening was good to do. It detected disease early. The evidence did not support this. The evidence showed that early detection means you live longer with the disease but you still die on Thursday morning at 10 AM. You just knew about it longer.
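The claim in (d), that chance imbalance is a predictable feature of simple randomization, is easy to check with a small simulation. This is a minimal sketch, not anything from the course: the sample sizes, the 0.5-standard-deviation threshold, and the assumption that the baseline risk factor is normally distributed are all invented for illustration.

```python
import random
import statistics

def imbalance_rate(n_per_arm=20, n_trials=10_000, threshold=0.5, seed=42):
    """Simulate simple randomization of 2 * n_per_arm patients whose baseline
    risk factor is N(0, 1), and count how often the two arms' baseline means
    differ by more than `threshold` standard deviations purely by chance."""
    rng = random.Random(seed)
    imbalanced = 0
    for _ in range(n_trials):
        risks = [rng.gauss(0.0, 1.0) for _ in range(2 * n_per_arm)]
        rng.shuffle(risks)  # the randomization: first half treated, rest control
        treated = risks[:n_per_arm]
        control = risks[n_per_arm:]
        if abs(statistics.mean(treated) - statistics.mean(control)) > threshold:
            imbalanced += 1
    return imbalanced / n_trials

# With only 20 patients per arm, a sizeable baseline imbalance shows up in
# roughly one trial in ten; with hundreds per arm it becomes rare.
print(imbalance_rate())
```

Running it with larger arms shows the imbalance rate falling toward zero, which is why small trials in particular need stratified (or otherwise constrained) randomization rather than trust in the method alone.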
There were 10 sessions like this. Three hours each! Students (n = 15-20) had to read hundreds of pages each week and had to present their case with great frequency – probably 3 or 4 times during the semester. The only rule for presentations was that people could not summarize the papers. Everyone had already read everything and they had to get on with the argument.
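The screening point in (e) is the standard lead-time bias argument, and it comes down to simple arithmetic. The ages below are hypothetical, chosen only to make the effect concrete; they are not from any study discussed in the course.

```python
# Lead-time bias with invented numbers: the same patient dies at the
# same age no matter when the disease is detected.
death_age = 70
diagnosis_age_symptoms = 67   # disease found when symptoms appear
diagnosis_age_screening = 62  # screening finds the same disease earlier

survival_after_symptoms = death_age - diagnosis_age_symptoms    # 3 years
survival_after_screening = death_age - diagnosis_age_screening  # 8 years

# Measured "survival after diagnosis" jumps from 3 to 8 years, yet the
# patient dies at exactly the same age. The extra 5 years are lead time:
# knowing about the disease longer, not living longer.
print(survival_after_symptoms, survival_after_screening)
```

This is why screening trials have to compare mortality (or age at death), not survival time measured from diagnosis.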
3. How long did you teach the course? Did the course change over the years? If so, how?
I taught the course for 12 years. It changed each year only because I updated the literature on particular issues and because I found a new issue that I thought might be more interesting than one of the older topics. But the way in which the course was organized did not change.
4. Apart from lots of epidemiology, what did you learn from teaching the course? For example, did you learn anything about teaching?
I’m not sure. As a teacher, my emphasis has always been on challenging people to think hard about issues. My favorite definition of a good book is one that forces you to do your own thinking. When I lecture, I get very nervous when I see people taking notes. What are they writing? What I’m saying? Not good. Unless they are writing things down so that they can refute my points later on.