Aynsley Kellow is a professor of political science at the University of Tasmania. In an interview five years ago, he said, about global warming, “we’ve got a much broader range of choice to respond to a problem that is much more uncertain than certain people who are pushing the issue would have us believe.”
As in a protection racket, the people trying to scare you benefit from you being scared:
I did a study of electricity planning, including here in Tasmania, the good old Hydro Electric Commission in the old days – and the logic was much the same; they would produce forecasts of [hard-to-meet] future demand which were then taken as immutable, and then they would try and justify particular policy responses to those. In the case here it was with hydro dam construction.
I learned about Professor Kellow’s work from a comment about status-trading among scientists. I wrote to him to ask what work of his was being referred to. He replied:
I think the reference is just to my 2007 book (Science and Public Policy: The Virtuous Corruption of Virtual Environmental Science), where I write about the shrinking size of groups which possess expertise, the effect of the communications revolution in establishing close networks of cooperation, and the effect of this on quality-assurance processes like peer review. The prevailing paradigm then becomes a ‘club good’ from the defense of which all members benefit (in status, grant success and career advancement).
The problem is exacerbated by some of the circumstances revealed by Climategate: not just pressure on editors, and influence in being IPCC lead authors, but peer review in climate journals where submitting authors nominate reviewers, the identity of authors is known to those approached to referee papers, and so on. I am so accustomed to double-blind peer review that I found it hard to believe that this was a common practice.
When we add this to the lack of disclosure of raw data and code, we have serious reliability problems underlying science upon which we are basing very costly policy. We know in social science research the potential for subjective factors to obtrude into data manipulation even when researchers do not consciously mean for this to happen, so we often see data preparation and analysis performed by independent teams, and emphasise transparency, disclosure of methods, double-blind peer review, and so on.
That’s a good point about single-blind peer review. I agree, it should all be double-blind, no exceptions. In psychology, authors don’t know the names of reviewers, but reviewers know the names of authors. You can request double-blind review, but then your paper enters the review process with a “paranoid” label attached.