Posts Tagged ‘peer review’

Revkin on Steig, O’Donnell, peer review and solid scientific basics

February 12, 2011

Andy Revkin wrote a good overview of the recent kerfuffle between Eric Steig and Ryan O’Donnell. His piece centres on contrasting the conflicting views at the edge of scientific development with the well-understood basics of scientific knowledge that make up the big picture:

I also hope that tussles at the edges of understanding, where data are scant or uncertainty is high, don’t distract the public too much from the basics of climate science, which are boringly undisputed yet still speak of a rising risk that sorely needs addressing.

That’s a very important point to make, and I applaud Revkin for doing so. Media attention to new results (which are usually disputed to some extent) can sometimes paint a skewed picture of the scientific knowledge in the field as a whole, which tends to be underreported. That’s why Revkin’s framing here is important, as it drives home the fact that a dispute at the edge of knowledge (spatial statistics as applied to Antarctic temperature trends) does not mean that the whole theory of climate change is suddenly disputed. Revkin:

Everything laid out above tends to draw attention away from the broad and deep body of work pointing to a growing and long-lasting human influence on the climate system.

Revkin does however exhibit a misunderstanding of peer review when he writes:

The exchanges between Steig and O’Donnell do raise questions about peer review, given that Steig has said he was an early anonymous reviewer (…)

This got quite a few people riled up. I wrote in to say that I still think it is a

Very good article, and good to see attention to detail not go at the cost of also providing the context of what is known.

One comment:
You say this whole argument raises questions about peer review. But in fact, it is completely normal, or even expected, that authors whose paper is being criticized are among the reviewers. They are most familiar with the issues, plus it enables the editor to hear both sides.

Of course the editor needs to be aware that this reviewer is the one being critiqued, and weigh the review accordingly against reviews from more disinterested parties. Revkin has since posted the response of Louis Derry, an editor of a geosciences journal:

1. Editors make final decisions. Reviewers make recommendations only.

2. It is common for a submission that critiques previous work to be sent to the author of the critiqued work for review. 2a. That emphatically does NOT mean the reviewer has veto power. It means that his/her opinion is worth having. Such a choice is usually balanced by reviewers that editors believe are reasonably independent, and the review of the critiqued is weighted accordingly. Suggestions that asking Steig to review O’Donnell was somehow unethical are utterly without support in normal scientific practice. Obviously, Steig did not have veto power over O’Donnell’s paper.

3. The fact that O’Donnell’s paper went through several rounds of review is absolutely unsurprising and unexceptional. Many papers on far less public topics do the same.

4. Some have questioned why Steig 09 got “more” visibility than O’Donnell 10. The answer is simple. Steig had a “result,” O’Donnell had a technical criticism of methodology.

He also chimes in about the importance of the context as provided by Revkin:

Finally, Revkin’s point that the Steig vs O’Donnell debate is not unusual in the progress of science and does not have much of anything to say about the majority of the evidence is correct. Disagreement about how to model the flight of a Frisbee correctly doesn’t imply that basic aerodynamics are wrong. Disagreement about how many EOFs [empirical orthogonal functions] to use to model Antarctic [temperature] changes doesn’t imply that climate physics is wrong.

The Frisbee comment reminded me of one of my favorite sayings: Observing a bird in the sky doesn’t disprove gravity. The science may not be settled, but solid it is.

Some more things have been said about peer review by others. E.g. Andrew “Bishop Hill” Montford writes in The Hockey Stick Illusion, page 205 (h/t Tim Lambert):

As the CC [Climatic Change] paper was critical of his work, McIntyre was invited to be one of the peer reviewers.

Guess we can all agree on that aspect of peer review now.

Update: John Nielsen-Gammon has some useful things to say about peer review here (on revealing the identity of a reviewer), here (retelling the story and explaining why it made sense to have Steig as a reviewer; quoting Steig; interesting discussion), and here (explaining the dynamics of peer review and making the interesting suggestion of mentoring relative outsiders as they navigate peer review).


The nature of blogging (“having a beer”) vs the nature of science

April 19, 2010

Robert Grumbine (a scientist-blogger well worth reading) explains how the scientific process works, how scientists communicate, and how this differs from blog debates (which he describes as “having a beer”). The following is lifted from his comment at Chris Colose’s blog a while ago. He sets up his argument in response to another commenter (self-identifying as “a genuine skeptic”):

———————————-

As a skeptic, I share your frustration equally, and as a genuine skeptic, and someone who does care about the environment, I am ready any day of the week to have my opinion sway back to believing in / trusting the consensus / IPCC position. Further, I know exactly what would sway me: dialogue, and constructive debate with the skeptics, in particular those skeptics of the ilk of Lindzen, Christy, and, although he refuses the label “skeptic”, Pielke Sr. (…)

(…) I’ve disagreed with Pielke Sr., for instance, but in the scientific norm. Tenderhearted readers, unaccustomed to the scientific norm, might have thought I was awfully hard on Roger. (One did say so.) But his own comment was that he appreciated my constructive discussion. This is a cultural issue that I think the general population does not understand. Normal exchanges, for science, about what’s going on, what’s good, or not, are fairly rough and tumble. It may not be the best thing that science is conducted this way, but it is what it is.

The scientific norm issue is a different matter. The scientific norm is the professional literature, not blog commentary. If you look into Lindzen and the response in the scientific literature, you’ll find that he’s been met properly (by the standards of science, that is). Namely, he suggested his ‘adaptive iris’ idea. This was based on there being a certain relationship (it had to have a particular sign, and large magnitude) between surface temperatures in the tropics and cloudiness (and, for that matter, particular types of cloud). One paper in the scientific literature doesn’t buy you much. Publishing in the literature is the start of the conversation, not a blessing as holy writ. He published, and then got the best possible response — other people used other (better) data sets and observing methods to see if they could get the same answer as he had gotten in his first cut. Unfortunately for his hypothesis, the better data sets erased his effect. Indeed, not only was the magnitude much smaller than he thought, the sign was the opposite of what he thought.

As far as scientific norms go, he got extremely good treatment: a) he did publish his idea (no ‘conspiracy to suppress’) and b) other people took a serious look at it. It is significant work to take a look at somebody else’s new idea. As a scientist, if you can get others to look at your idea, you have done extremely well. As happens in science, perfectly normally, the initial proposition got rejected by more detailed analysis. Since what was at hand was deriving a relationship between observational quantities, and Lindzen is a theoretician, it’s no great surprise or shame that he didn’t get all the niceties on his data sets right. As usual, devils lie in the details, and the responses were from groups familiar with all the devils lying in those details.

Where things became problematic was that, contrary to proper scientific practice, Lindzen didn’t drop his disproven idea. A bit of ‘is so’ publishing (sorry, it was painful to read his response article and this is all I can say of it) in response to the objections was it. And then much complaining outside the scientific literature about conspiracy, scam, censorship, … To be honest, even his original Iris publication was an example of lenient reviewing. There were problems in his data management in the original paper that even I saw (correctly) would be a problem for his idea — and I’m not a tropical person (polar regions mostly), nor, then, a sea surface temperature person, nor then or now one for satellite sensing of clouds. The later publications — in the scientific literature — about his errors confirmed my suspicions, and, unsurprisingly, added a number of problems to what I suspected. But that’s not what you see out in the blog universe.

You can make some headway over at scholar.google.com. A fair amount of the non-scientific world shows up there, but a fair amount of the scientific world is present.


(…) always see an ad hominem attack for what it is (I refer to the attacks by commenters at RealClimate, which were not removed by the editors). In most cases, straw man arguments can also be seen for what they are. And then an argument, “we don’t have to answer that ’cause it wasn’t peer-reviewed” always also increases the lay public’s skepticism. No one takes that response seriously, and again, skepticism can only increase.

I agree that, outside the scientific community, nobody takes it seriously when something didn’t appear in the scientific literature.

That is a problem with outside the scientific community.

Doing science is difficult. Over the past 400 years, the modern scientific method has accumulated a lot of knowledge and understanding. Doing science means changing that body of knowledge and understanding. Sometimes that means saying that even though we used to think that something was the case, it really isn’t. Making that argument successfully is hard work. ‘Even’ the easier argument of making an addition is hard work. It’s hard work because other people have to be able to rely very strongly on everything you say in your paper. (…)

There are two parts to the scientific publication process important for your comment here. One is, to publish in the professional literature about your idea, you have to examine and explain your idea thoroughly. ‘thoroughly’ turns out to be a lot of work. Second is, you have to research all the relevant aspects of your problem and honestly discuss them. The up side of this is, once you’ve finished a proper scientific paper, it can stand for some time. You might turn out to be wrong about something — because there were different data than you used, or a better technique than you used, or … several things. Being shown wrong by later and much more labor-intensive examinations is fine. But being shown wrong because you failed to do your homework is disaster.

In contrast are blog posts and comments. The standard there is what I’ll call ‘chatting over a beer’. If you and I sit down and start talking, both of us with something we like to drink, having a relaxed conversation, that’s wildly different from the scientific literature. Both of us will say what we think, but there’s no concern about you trying tomorrow to write a paper, on which our professional reputations will hang, based on what I say. We’re just chatting. I’ll give you my best answer at the time, but if I’ve forgotten something, or the answer is 25.0 instead of 2.50, eh. Just chatting over a beer. Just a blog comment. If someone started writing a scientific paper based on comments in blogs … all kinds of wild things could show up. The earth is flat, hollow, expanding, 6000 years old, and so on. Somebody, somewhere, in the blogosphere has said all such things.

To do science, we need something much more reliable than ‘anything anybody ever says anywhere’. We even need something better than “well, he’s normally pretty good, so even though he’s never worked on this kind of problem before and doesn’t know how the satellites detect what he’s working with, he _must_ be right anyhow.” That something more is the professional scientific literature.

Within the world of science (all 20 or so of us), it is an extremely telling, and negative, thing that much of what the general public thinks is the case about science is actually based on things which are said _only_ outside the scientific literature. If the speaker had confidence in his statement, he’d try to publish it in the literature. And, if they were right about the ‘conspiracy’, they should at least have a rejection letter and comments from the editor and reviewers to show. Instead, they talk about the conspiracy, but have no rejection letters (_I’ve_ got rejection letters — they’re normal to trying to do science.)

But the public perception is quite different. Still, I have to think that if someone won’t go in front of his professional peers and stand for what he thinks is scientifically correct, he doesn’t really believe it himself. If his only or major audience is people who don’t know the science thoroughly, I have to figure he thinks that’s the only audience who’ll let him get away with whatever it is he’s saying now. Fine for you and me over a beer. Not fine for doing science.

——————————–

Just to add a qualifier: not everything that appears in the scientific literature is necessarily “good science”. Likewise, not everything that appears on a blog is necessarily unscientific or wrong. Just as in a bar, the greatest ideas and insights can be heard there, though amidst a helluva lot of chatter about the weather. Which is actually pretty nice right now. Enjoy your beer!

