PBL survey shows strong scientific consensus that global warming is largely driven by greenhouse gases

Updates:

(5 Sep 2015): US Presidential candidate Rick Santorum used an erroneous interpretation of our survey results on the Bill Maher show. My detailed response to Santorum’s claim is in a newer blogpost. Politifact and Factcheck also chimed in and found Santorum’s claims to be false. The blogpost below goes into detail about how different interpretations could lead to different conclusions and how some interpretations are better supported than others.

As Michael Tobis rightly points out, the level of scientific consensus that you find “depends crucially on who you include as a scientist, what question you are asking, and how you go about asking it”. And on how you interpret the data. We argued that our survey results show a strong scientific consensus that global warming is predominantly caused by anthropogenic greenhouse gases. Others beg to differ. Recent differences of opinion are rooted in different interpretations of the data. Our interpretation is based on how we went about asking certain questions and what the responses indicate.

To quantify the level of agreement with a certain position, it makes most sense to look at the number of people as a fraction of those who answered the question. We asked respondents two questions about attribution of global warming (Q1 asking for a quantitative estimate and Q3 asking for a qualitative estimate; the complete set of survey questions is available here). However, as we wrote in the ES&T paper:

Undetermined responses (unknown, I do not know, other) were much more prevalent for Q1 (22%) than for Q3 (4%); presumably because the quantitative question (Q1) was considered more difficult to answer. This explanation was confirmed by the open comments under Q1 given by those with an undetermined answer: 100 out of 129 comments (78%) mentioned that this was a difficult question.

There are two ways of expressing the level of consensus, based on these data: as a fraction of the total number of respondents (including undetermined responses), or as a fraction of the number of respondents who gave a quantitative or qualitative judgment (excluding undetermined answers). The former estimate cannot exceed 78% based on Q1, since 22% of respondents gave an undetermined answer. A ratio expressed this way gives the appearance of a lower level of agreement. However, this is a consequence of the question being difficult to answer, due to the level of precision in the answer options, rather than it being a sign of less agreement.

Moreover, the results in terms of level of agreement based on Q1 and Q3 are mutually consistent if the undetermined responses are omitted in calculating the ratio; they differ markedly when undetermined responses are included. In the supporting information we provided a table (reproduced below) with results for the level of agreement calculated either as a fraction of the total (i.e., including the undetermined answers) or as a fraction of those who expressed an opinion (i.e., excluding the undetermined answers), specified for different subgroups.

[Table S3 from Verheggen et al. (ES&T, 2014): level of agreement per subgroup, calculated including and excluding undetermined responses]
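
To make the two ways of counting concrete, here is a minimal sketch in Python. The undetermined shares (22% for Q1, 4% for Q3) and the including-undetermined agreement levels (66% and 83%) are the numbers quoted in this post; the "disagree" counts are illustrative placeholders, not exact survey counts.

```python
# Two ways of expressing the level of agreement, per 100 respondents.
# "agree" = more than half of warming attributed to GHGs;
# "undetermined" = I don't know / unknown / other.

def agreement_levels(agree, disagree, undetermined):
    """Return agreement as a percentage of all respondents and as a
    percentage of those who expressed a determinate judgment."""
    total = agree + disagree + undetermined
    determined = agree + disagree
    return 100 * agree / total, 100 * agree / determined

# Q1-like question: many undetermined answers (22 per 100 respondents).
incl, excl = agreement_levels(agree=66, disagree=12, undetermined=22)
print(f"Q1: {incl:.0f}% of all respondents, {excl:.0f}% excluding undetermined")

# Q3-like question: few undetermined answers (4 per 100 respondents).
incl, excl = agreement_levels(agree=83, disagree=13, undetermined=4)
print(f"Q3: {incl:.0f}% of all respondents, {excl:.0f}% excluding undetermined")
```

With these inputs the including-undetermined figures differ markedly (66% vs 83%), while the excluding-undetermined figures are mutually consistent (roughly 85% and 86%), which is the consistency argument made above.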

For the reasons outlined above we consider the results excluding the undetermined responses the most meaningful estimate of the actual level of agreement among our respondents. Indeed, in our abstract we wrote:

90% of respondents with more than 10 climate-related peer-reviewed publications (about half of all respondents), explicitly agreed with anthropogenic greenhouse gases (GHGs) being the dominant driver of recent global warming.

This is the average of the two subgroups with the highest number of self-reported publications for both Q1 and Q3. In our paper we discussed both ways of quantifying the level of consensus, including the 66% number as advocated by Tom Fuller (despite his claims that we didn’t).

Fabius Maximus goes further down still, claiming that the level of agreement with IPCC AR5 based on our survey results is only 43-47%. This result is based on the number of respondents who answered Q1b, asking for the confidence level associated with warming being predominantly greenhouse gas-driven, as a fraction of the total number of respondents who filled out Q1a (whether with a quantitative or an undetermined answer). As Tom Curtis notes, Fab Max erroneously compared our statement to the “extremely likely” statement in AR5, whereas in terms of greenhouse gases AR5 in Chapter 10 considered it “very likely” that they are responsible for more than half the warming. Moreover, our survey was undertaken in 2012, long before AR5 was available, so if respondents had IPCC in mind as a reference, it would have been AR4. If anything, the survey respondents were by and large more confident than IPCC that warming had been predominantly greenhouse gas driven, with over half assigning a higher likelihood than IPCC did in both AR4 and AR5.

[Figure from the PBL background report: responses to Q1b, the confidence level associated with the GHG attribution estimate]

Let me expand on the point of including or excluding the undetermined answers with a thought experiment. Imagine that we had asked whether respondents agreed with the AR4 statement on attribution, yes or no. I am confident that the resulting fraction of yes-responses would (far) exceed 66%. We chose instead to ask a more detailed question, and add other answer options for those who felt unwilling or unable to provide a quantitative answer. On the other hand, imagine if we had respondents choose whether the greenhouse gas contribution was -200, -199, …-2, -1, 0, 1, 2, … 99, 100, 101, …200% of the observed warming. The question would have been very difficult to answer to that level of precision. Perhaps only a handful would have ventured a guess and the vast majority would have picked one of the undetermined answer options (“I don’t know”, “unknown”, “other”). Should we in that case have concluded that the level of consensus is only a meagre few percentage points? I think not, since the result would be a direct consequence of the answer options being perceived as too difficult to meaningfully choose from.
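
A minimal numeric version of this thought experiment, using purely hypothetical shares (90% undetermined, 8% above 50%, 2% below 50%):

```python
# Hypothetical extreme case: answer options so fine-grained that 90 of 100
# respondents pick an undetermined option; 8 pick a value above 50% and
# 2 pick a value below 50%.
agree, disagree, undetermined = 8, 2, 90

print(f"Including undetermined: {100 * agree / (agree + disagree + undetermined):.0f}%")  # 8%
print(f"Excluding undetermined: {100 * agree / (agree + disagree):.0f}%")                 # 80%
```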

Calculating the level of agreement in the way we suggest, i.e. excluding undetermined responses, provides a more robust measure, as it’s relatively independent of the perceived difficulty of having to choose between specific answer options. And, a point the various critics omit, it is consistent with the responses to the qualitative attribution question, which also provides a clear indication of a strong consensus. If you were to insist on including undetermined responses in calculating the level of agreement, then it’s best to only use results from Q3. Tom Fuller’s 66% becomes 83% in that case (the level of consensus for all respondents), showing the lack of robustness of this approach when applied to Q1.

[Figure 1 from Verheggen et al.: responses to Q1, the GHG contribution to global warming]

Some other issues that came up in recent discussions:

See also the basic summary of our survey findings and the accompanying FAQ.

 


73 Responses to “PBL survey shows strong scientific consensus that global warming is largely driven by greenhouse gases”

  1. ...and Then There's Physics Says:

    I think there are a number of issues with what Fabius has done. Using the answers to question 1b to determine the level of consensus (ignoring that this was done pre-AR5) is essentially equivalent to having a confidence interval for each result in an attribution study. In a sense an attribution study is based on using models to determine how likely something is, and the confidence in the attribution comes from the analysis of all the model runs, not from the confidence in each model run. An equivalent would be to simply ask people how much warming they would attribute to anthropogenic influences and determine the confidence from the overall set of answers (essentially as you’ve done). Also getting people to provide a confidence interval for each answer is – in a sense – a double test.

    However, I think there is a more fundamental statistical issue. As I understand it, the attribution studies are essentially hypothesis tests in which you hypothesise that more than 50% of the warming could be non-anthropogenic. The result is that this is only possible in a small fraction of cases (less than 5%) and hence you reject this hypothesis with 95% confidence. Strictly speaking, we shouldn’t really say that we’re 95% sure that more than 50% is anthropogenic because that’s the prosecutor’s fallacy. Of course, since there is only anthropogenic and non-anthropogenic, this subtlety may not be that important, but I do think that, strictly speaking, the attribution studies haven’t shown that we’re 95% sure that it’s anthropogenic; they’ve really shown that we’re 95% sure that it can’t be non-anthropogenic.
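
    (A toy numeric sketch of the prosecutor’s-fallacy distinction drawn above, with purely illustrative probabilities, not numbers from any attribution study: the hypothesis test bounds P(obs | mostly non-anthropogenic), while the statement people want to make concerns P(mostly non-anthropogenic | obs), which also depends on priors.)

    ```python
    # Toy numbers only: H0 = ">50% of warming is non-anthropogenic".
    p_obs_given_h0 = 0.04  # hypothesis test: obs this extreme in <5% of H0 cases
    p_obs_given_h1 = 0.80  # assumed likelihood of the obs under H1 (illustrative)
    prior_h0, prior_h1 = 0.5, 0.5  # neutral priors, purely for illustration

    # Bayes' rule: P(H0 | obs) is not the same thing as P(obs | H0).
    posterior_h0 = (p_obs_given_h0 * prior_h0) / (
        p_obs_given_h0 * prior_h0 + p_obs_given_h1 * prior_h1)
    print(f"P(H0 | obs) = {posterior_h0:.3f}")  # ~0.048 with these toy numbers
    ```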

  2. thomaswfuller2 Says:

    Bart, this was inevitable given the way you’re reporting on the results.

    You’re opening a can of worms. I’m sorry I didn’t see where you reported the 66% figure which should be your headline. Can you show me where it is?

    I’d also like to repeat my request for a look at the raw data.

    Thanks

  3. thomaswfuller2 Says:

    I just found the reference. I will update my post. I apologize for this but the thrust of my criticism is unchanged. The 66% is the principal finding and you obscure it with your emphasis on number of publications.

  4. thomaswfuller2 Says:

    To briefly expand on my opinion, you and Fabius Maximus are both making the same mistake, just in different directions.

    You both are essentially telling the reader to ignore the principal finding of your survey and to pay attention to second order numbers that involve conjecture and unwarranted assumptions.

  5. Editor of the Fabius Maximus website Says:

    Bart,

    Thank you for your detailed reply. Here are a few quick comments.
    “Fabius Maximus goes further down still,”
    That’s not accurate framing. I described the top-level results. Your analysis drills down deeper, excluding some responses and focusing more on self-reported publications. Neither is an a priori superior perspective, of course.

    “As Tom Curtis notes, Fab Max erroneously compared our statement to the “extremely likely” statement in AR5…”
    I made a direct comparison of PBL questions 1a and 1b to the relevant keynote statements in AR4 & AR5. The assertion that climate scientists were unaware of the IPCC’s confidence levels is possible, but is unlikely in my opinion. I’m an amateur and don’t follow climate science closely, but I know them. Also, how much could different definitions of “extremely likely” affect the results? This is speculation — useful insights to guide future research, but a weak rebuttal.
    “Moreover, our survey was undertaken in 2012, long before AR5 was available, so if respondents had IPCC in mind as a reference, it would have been AR4.”
    Yes. However, the IPCC reports are reflections of the state of climate science, not original research. Your survey ran during the process of producing AR5, and so provides a useful test of the consensus of climate scientists presented in AR5. The consensus is, of course, a moving “target”.

    Also, the definitions of confidence levels for “virtually certain”, “extremely likely”, and “very likely” were identical in AR4 and AR5.
    “Our results are in good agreement with other opinion surveys, including e.g. Doran and Kendall-Zimmermann”
    Yes. But from memory, your survey provided far more granular results. Which is why the additional insights it provides are a valuable contribution.
    Re: the focus on self-reported publications as proxy for expertise
    While perhaps the best available proxy, it is obviously only a weak indicator. First, your excellent broad survey probably included both practitioners and academics. Publication rates tell us little about the former; finding a division between the two would be significant (note for future research).

    Second, publication rates vary across fields and tell us nothing about their quality or impact (unlike, for example, H-indexes). A measure that equally weights someone with many publications in low-rated journals (or pay-to-play publications) with Roger Pielke Sr. (H-index of 84) is obviously crude.
    Conclusions about your analysis.
    Detailed slicing and dicing of the data is always useful. But only in the context of the high-level results. I showed these. I find it odd that you did not.

    Which is more accurate or useful? There is no way to provide a definitive answer. People will draw their own conclusions. Hopefully future research will provide additional insights. Social science surveys are probes into the complex dynamics of people’s belief systems, and so provide only incremental data.

  6. ...and Then There's Physics Says:

    Fabius,

    I made a direct comparison of PBL questions 1a and 1b to the relevant keynote statements in AR4 & AR5.

    No, you didn’t really. The confidence interval in the AR4/AR5 statements comes from the distribution of the results and testing this against a hypothesis. Question 1b is “what confidence level would YOUR estimate” which is not equivalent to the confidence interval in the AR4/AR5 statements.

  7. ...and Then There's Physics Says:

    Tom,

    The 66% is the principal finding and you obscure it with your emphasis on number of publications.

    Let’s just ask everyone in the WHOLE world (which should be said in the voice of Blackadder).

  8. Editor of the Fabius Maximus website Says:

    Bart,

    Again, thank you for your reply. Two replies, a narrow one and a broad one.

    “The confidence interval in the AR4/AR5 statements…”

    Yes, no correspondence between studies as different in conception as AR5 and the PBL survey can be exact.

    However, the high level conclusion from (1a + 1b)/respondents is imo a more reliable result than that which requires the extensive slicing and dicing of subgroups — as with yours. But that’s an opinion, subject to debate — and the results of further research (as usual with social science surveys).

    My primary objection to your view is that you appear to consider the high level result I show as “creative accounting” (your words from Twitter) — and your complex math as definitive. That looks to me like motivated reasoning, a demonstration of the dysfunctional nature of the climate science debate — one reason for the lack of public policy action.

    After scores of such discussions — mine, and watching others — I’ve drawn some conclusions. I suspect (speculation) that Marcia McNutt is correct: “The time for debate has ended”. But in a different sense than she intended: How we broke the climate change debates. Lessons learned for the future.

    We should expect no operationally useful results in the foreseeable future. Weather will determine how public policy evolves.

    I hope events prove me wrong. However, I have a good record at predicting such things.

  9. Hans Erren Says:

    Remember the consensus on stomach ulcers and continental drift. Consensus is not scientific proot.

  10. Bart Verheggen Says:

    Hans,

    Of course the existence of a consensus does not constitute scientific proof. But it’s not irrelevant either. A scientific consensus is the logical consequence of the evidence piling up in a certain direction over time. That’s why there is a scientific consensus on e.g. evolution. And yes, also on climate science. On both issues, society at large lags behind the science in terms of accepting the new paradigm.

    Here’s an interesting article about how the scientific consensus emerged re plate tectonics and what that may mean for the scientific consensus re climate change.

    Dismissing a scientific consensus as irrelevant almost unavoidably gets you into conspiracy thinking, sometimes subtly, sometimes less subtly. How else are you gonna explain that so many scientists agree on something that they should know darn well is wrong?

  11. johnrussell40 Says:

    @Hans Erren.

    Are you saying that someone is claiming that consensus is scientific proof? If so, where? Very clearly it’s not: so what’s your point?

    Consensus is just the word applied to indicate that an overwhelming majority have adopted a certain viewpoint. It would appear from your two examples that you’re saying ‘consensus’ means that, in every case, the majority ultimately turn out to be wrong? Which would also mean that mavericks always turn out to be right. Plainly that’s ludicrous. It’s like listing two plane crashes and claiming that it means all planes must crash.

    If your doctors tell you you have an illness, do you assume they’re wrong because they all agree with the diagnosis? There’s no logic behind your comment.

  12. thomaswfuller2 Says:

    The consensus is important. Especially after severely flawed studies using literature searches produced implausibly high percentages and were quickly dissected, it is important to know how robust the consensus really is.

    Bart, your research shows the state of the consensus clearly. The survey was quite good. The sampling, while not perfect, was probably as good as it could possibly be under the circumstances.

    The results are clear. One of the biggest questions in climate science is ‘How many scientists believe that human emissions of CO2 have caused half or more of recent warming?’

    The answer is 66%.

    The only reason to gloss over it–the only reason to label it Q1 and separate the percentage from the content of the question–the only reason to combine the answers to Q1 and Q3 and re-report the results after removing those who answered ‘don’t know’–will be assumed by skeptics to be because the results don’t satisfy your political expectations of the consensus.

    You are now exacerbating the problem your survey sought to resolve.

  13. thomaswfuller2 Says:

    So, Bart, here is my question. What percentage of practicing climate scientists believe half or more of recent warming is caused by human emissions of CO2?

    More on Verheggen et al: Great Survey. Pity about the report… It’s still 66%.

  14. thomaswfuller2 Says:

    You write above, “Undetermined responses (unknown, I do not know, other) were much more prevalent for Q1 (22%) than for Q3 (4%); presumably because the quantitative question (Q1) was considered more difficult to answer. This explanation was confirmed by the open comments under Q1 given by those with an undetermined answer: 100 out of 129 comments (78%) mentioned that this was a difficult question.”

    It is a difficult question. That’s why you’re doing the survey. To get an answer to a difficult question. ‘I don’t know’ is a legitimate answer to this question.

    Surveyors will exclude those not expressing an opinion at times, but only on questions like ‘Is the red car better than the purple car?’ Not on quantitative expressions of scientific understanding. Those answering ‘I don’t know’ need to be included in your calculations of a consensus, precisely because they do not form part of the consensus.

  15. Bart Verheggen Says:

    Tom,

    I disagree with this conclusion of yours:

    Those answering ‘I don’t know’ need to be included in your calculations of a consensus, precisely because they do not form part of the consensus.

    For the reasons given in this post. Many of those who answered I don’t know, unknown, or other, could realistically be expected to form part of the consensus in terms of thinking warming is predominantly human induced. That is clear from comparing the answers to Q1 to the answers to Q3 and from the responses to the open question box with Q1.

    To substantiate your approach, you’d need to answer types of questions such as:

    What is your explanation for the large number of undetermined answers to Q1?

    How would you explain the big difference between Q1 and Q3 based on your preferred approach of including the large fraction of undetermined answers?

    How would you think the same sample of scientists would have responded if we had asked one of the questions I discussed in the post, namely:

    Imagine that we had asked whether respondents agreed with the AR4 statement on attribution, yes or no. I am confident that the resulting fraction of yes-responses would (far) exceed 66%. We chose instead to ask a more detailed question, and add other answer options for those who felt unwilling or unable to provide a quantitative answer. On the other hand, imagine if we had respondents choose whether the greenhouse gas contribution was -200, -199, …-2, -1, 0, 1, 2, … 99, 100, 101, …200% of the observed warming. The question would have been very difficult to answer to that level of precision. Perhaps only a handful would have ventured a guess and the vast majority would have picked one of the undetermined answer options (“I don’t know”, “unknown”, “other”). Should we in that case have concluded that the level of consensus is only a meagre few percentage points? I think not, since the result would be a direct consequence of the answer options being perceived as too difficult to meaningfully choose from.

    Do you disagree with this quoted paragraph? If so, why?

  16. Bart Verheggen Says:

    Fab Max,

    You wrote:

    The assertion that climate scientists were unaware of the IPCC’s confidence levels is possible, but is unlikely in my opinion.

    I never made that assertion. What Curtis correctly noted is that you compared the survey results to IPCC’s “extremely likely” statement, whereas the corresponding statement in both AR4 and AR5 reads “very likely”. But, as you apparently agree, since most people are mostly aware of the top level statements in IPCC and take that as a reference, the point of reference for the respondents was AR4. Most of those who responded GHG > 50% attached a higher confidence to that statement than IPCC did. But ATTP made a good point on how to (not) interpret these confidence limits, something that I should also take to heart.

    Furthermore, our key point of contention is not that I prefer to look at subgroups with relevant publications and you (and Tom) to all respondents (though that is indeed a point of contention); the key point is that you (and Tom) gloss over the fact that there were so many undetermined answers to Q1 and seem intent on arriving at as low a consensus as possible.

    So Fab, suppose we had asked whether respondents thought GHG contributed 1%, 2%, 3%, …. of recent warming, and the result was that 90% gave an undetermined answer (dunno, unknown, other), 8% responded with one of the options 51%, 52%, … and 2% with one of the options below 50%, what would be your conclusion re the level of consensus? Presumably that it’s only 8%, based on your reasoning. I think that would be an incorrect conclusion, to the point of being misleading if it wasn’t accompanied by an explanation of why this percentage was so small.

    My conclusion would be that the level of consensus would be better approximated as 8/(8+2) = 80% in that case, though indeed that would not be very robust with such an extremely large fraction of undetermined answers. Luckily we also asked a more straightforward question based on which we can distill very similar information. And, lo and behold, it also comes in at 80% agreement!

    Would you insist in this hypothetical case that the level of consensus is 8%, without reservation?

  17. Editor of the Fabius Maximus website Says:

    Bart,

    Again, thank you for your reply. This exchange, and those of others elsewhere, have proven quite enlightening.

    I am tied up now; I’ll respond to your question at a later time. However, here are 2 conclusions I’ve learned from the discussion. I believe both are “new” (although nothing is really new, it’s a relative term).

    Please excuse the imprecision and sloppy writing of this comment, as I’m under time pressure from work but wanted to respond promptly to you. It covers the general sense of what I mean.

    (1) Journalists often cite the wrong finding from AR5, not the one most relevant to Obama’s proposals to control CO2.

    Most cite D3 from the Summary for Policymakers: “It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.”

    In a comment at SkS Tom Curtis points to the more relevant finding from page 884 (Chapter 10): “We conclude, consistent with Hegerl et al. (2007b), that more than half of the observed increase in GMST from 1951 to 2010 is very likely due to the observed anthropogenic increase in GHG concentrations.”

    The difference is that the attribution statement for GHG is “very likely”, which AR5 (and AR4) define as at the 90% level. That’s below the usual minimum 95% level required for significance in science and public policy (i.e., “extremely likely” in AR5).

    I’ve asked several climate scientists to comment on this, as it’s over my pay grade to assess.

    But at least climate scientists should insist that journalists get this simple point correct.

    (2) Using the “no true Scotsman” rule to define “climate scientists”.

    The rebuttals by Curtis at SkS and you to a large extent focus on narrowing the definition of “climate scientists.” This undercuts the expressed aim (and IMO a major contribution) of the PBL study, which was to survey “a large group of scientists studying various aspects of global warming and climate change (including impacts and mitigation) and who have published in peer-reviewed or, in a few cases, gray literature.”

    Unfortunately the survey did not provide sufficient data to reliably evaluate professional competence (as I discussed in some detail in an earlier comment), so complex dicing of the data was necessary to find a definition of climate scientist that produced the desired answer.

    It’s a hazardous exercise, at some point becoming confirmation bias (no true Scotsman believes), especially when done with weak data and no prior definition.

    To repeat an illustration from above, concluding that the authors of AR5 WG1 support the conclusions of AR5 WG1 shows institutional bias (e.g., selection bias, allegiance). I don’t believe it reliably tells us anything else, and illustrates the peril of this kind of narrowing.

  18. Editor of the Fabius Maximus website Says:

    Bart,

    “I never made that assertion.” {that climate scientists were unaware of the IPCC’s confidence levels.}

    My apologies. I was too quickly writing replies to too many people.

    That was said by Tom Fuller:

    Verheggen’s Consensus: Not 97%, not 47%. It’s 66%.

  19. thomaswfuller2 Says:

    Hiya Bart,

    It’s a busy day for me too–I will try to respond at length later.

    Really briefly, you asked 1,800 scientists about attribution. They provided an answer.

    That answer is 66% believe half or more of recent warming is due to anthropogenic emissions of CO2.

    That’s your headline.

    Your discussion of publication records and how to classify those who did not provide a numerical response is interesting and may even be relevant.

    But that goes below the headline. The key finding of your survey–the answer to the question that prompted you to conduct the survey, as stated in your introduction–that is your headline.

    Putting sidebar and secondary findings above your headline is not only inappropriate. It shows a lack of faith in your research.

  20. thomaswfuller2 Says:

    Hiya FM, I only said that climate scientists may not have used the IPCC numbers as they were not offered in the survey.

  21. Editor of the Fabius Maximus website Says:

    Tom,

    “But the IPCC offers a definition of 95% certainty in their publications. This definition was not presented to survey respondents prior to asking them about how certain they were. Some may have used the IPCC definition of certainty, some may not.”

    Climate scientists are asked for their agreement about the most famous attribution statement from AR4, itself the most important single report in their field at that time — but you believe their answers used different definitions for the confidence levels. Because amnesia? Because just for fun?

    This isn’t my field, I’m interested in the public policy aspects not the science — and even I know the IPCC’s confidence levels.

    The survey is not well designed on these details, but as a point of ambiguity in the results that is one of the really tiny ones.

  22. thomaswfuller2 Says:

    Hi FM. It may have been a defect of the survey’s design, but you have to go with what you’ve got. There’s a lot of good data there. But when Bart makes undue inferences about the need to drop ‘don’t knows’ or wants to bury the answer to his important question underneath a pile of assumptions regarding the merit of numerous publications I will let him know.

    Similarly, if you make unwarranted assumptions about transferring the IPCC’s confidence statements to a survey that did not include them, I will let you know.

  23. thomaswfuller2 Says:

    Tom,

    I disagree with this conclusion of yours:

    Those answering ‘I don’t know’ need to be included in your calculations of a consensus, precisely because they do not form part of the consensus.

    For the reasons given in this post. Many of those who answered I don’t know, unknown, or other, could realistically be expected to form part of the consensus in terms of thinking warming is predominantly human induced. That is clear from comparing the answers to Q1 to the answers to Q3 and from the responses to the open question box with Q1.”

    Bart–could? Could?

    “To substantiate your approach, you’d need to answer types of questions such as:

    What is your explanation for the large number of undetermined answers to Q1?”

    What is your explanation for the fact that a huge majority of scientists were able to answer the question successfully?

    The explanation is simple. It is a difficult question and many scientists honestly do not know the answer. Many may believe it is possible but not shown or not proven, that half or more of the recent warming is due to human emissions.

    That is why they count as part of the global response to your question and do not count as part of the consensus.

    You continue: “How would you explain the big difference between Q1 and Q3 based on your preferred approach of including the large fraction of undetermined answers?”

    First of all, why do you neglect Q2 in trying to understand Q1 and Q3? In Q2, only 32% say the long term trend has changed. An equal percentage say the trend is masked by short term variation and 24% say it is impossible to state. To me that fully explains the percentage who say they don’t know to Q1 and still attribute warming to concrete causes in Q3.

    You write, “How would you think the same sample of scientists would have responded if we had asked one of the questions I discussed in the post, namely:

    Imagine that we had asked whether respondents agreed with the AR4 statement on attribution, yes or no. I am confident that the resulting fraction of yes-responses would (far) exceed 66%. We chose instead to ask a more detailed question, and add other answer options for those who felt unwilling or unable to provide a quantitative answer. On the other hand, imagine if we had respondents choose whether the greenhouse gas contribution was -200, -199, …-2, -1, 0, 1, 2, … 99, 100, 101, …200% of the observed warming. The question would have been very difficult to answer to that level of precision. Perhaps only a handful would have ventured a guess and the vast majority would have picked one of the undetermined answer options (“I don’t know”, “unknown”, “other”). Should we in that case have concluded that the level of consensus is only a meagre few percentage points? I think not, since the result would be a direct consequence of the answer options being perceived as too difficult to meaningfully choose from.

    Do you disagree with this quoted paragraph? If so, why?”

    Umm, Bart–it doesn’t work that way. The way to quantify a consensus is to count those who raise their hands when you ask them if they agree. It is only those who actively volunteer who form part of the consensus.

    If you think more would agree with the AR4 attribution statements, ask them. Your confidence in the answers you think they would provide is admirable. But don’t pretend that it was asked and answered in your survey. Your argument as reproduced here indicates that the consensus is so weak that only precise phrasing and ruthless pruning of those who don’t give the answer you want can bring it to light.

    If it’s robust (and 66% is robust–it just isn’t 97%) then you don’t have to make excuses for those who say they don’t know.

    I wish you had asked for help on this. There is a battery of questions you can use to get this information.

    As it stands, you are saying those who answer ‘I don’t know’ should be eliminated from the total. That’s an incorrect choice for honest analysis. I submit that many highly credentialed, qualified and good climate scientists honestly told you that they do not know what percentage of recent warming can be attributed to human emissions. Because they do not know they do not form part of the consensus.

  24. thomaswfuller2 Says:

    As for the publications thingy, that’s just hand-waving. ‘Look over here, the numbers are higher!’

    You need to show why you think higher numbers of (self-declared) publications are an indicator of a higher level of expertise for it even to be relevant. And you don’t even try.

    Younger scientists may have been educated with more up-to-date information and even techniques. They may be far more expert than old fuddy-duddies who sit in a room writing papers, thirty years after learning anything new. (I’m not saying that’s common, just an obvious possibility that should make you cautious about using this metric.)

    A brilliant scientist might write, as a single author, one paper that sheds significant light on a subject, while her colleague might get his name onto 15 different papers as a co-author without doing anything significant.

    Authors who don’t agree with the consensus may be keeping their head down. Worse, they may face a wall of dissent from the consensus when they seek to publish.

  25. Bart Verheggen Says:

    Tom,

    In effect you (and FM) assume that the large fraction of respondents who answered with an “undetermined” answer (dunno, unknown, other) don’t agree with the consensus position (that recent warming is predominantly due to anthropogenic GHG). In our interpretation we assumed that we don’t know whether they agree or disagree. Note that we do not assume that they agree; we make no such assumption.

    You write:

    The explanation is simple. It is a difficult question and many scientists honestly do not know the answer. Many may believe it is possible but not shown or not proven, that half or more of the recent warming is due to human emissions.

    But Tom, we didn’t literally ask whether they agree with the IPCC statement on attribution. We asked them to specify a range of 25% width, with three different ranges all amounting to agreeing with the attribution statement (GHG > 50%). To have to choose between these ranges (as opposed to merely stating whether they agreed with the statement) is a very different cup of tea. That is the point I was trying to make with my hypothetical alternative questions. The question “do you think more than half of recent warming can be attributed to GHG?” would have been much easier to answer and we would have gotten a higher fraction of “determined” answers (for lack of a better word). I’ve repeated ad nauseam why we’re confident that that’s the case, and you keep ignoring that.

    Why do we neglect Q2? In the article we focused on attribution and some miscellaneous issues; not on trend significance. But clearly, question 2 was interpreted differently by different people, as we state in the background report: “The reference timescale ‘preceding decades’ is imprecise. This probably contributed to this question having been interpreted differently by different respondents, as reflected by the responses given.” I.e. those saying “slightly higher than before” presumably had the whole instrumental period in mind, whereas others (and we) had a shorter timespan of only a few decades in mind. This different interpretation by different respondents makes it nearly impossible to deduce anything meaningful from this question. It doesn’t explain in the slightest what you’re claiming it does.

    Since you haven’t answered my question regarding my hypothetical alternative question, let me ask again in a slightly different way (as also posed earlier to FM):

    Suppose we had asked whether respondents thought GHG contributed 1%, 2%, 3%, …. of recent warming, and the result was that 90% gave an undetermined answer (dunno, unknown, other), 8% responded with one of the options 51%, 52%, … and 2% with one of the options below 50%, what would be your conclusion re the level of consensus? Presumably that it’s only 8%, based on your reasoning. I think that would be an incorrect conclusion, to the point of being misleading if it wasn’t accompanied by an explanation of why this percentage was so small.

    My conclusion would be that the level of consensus would be better approximated as 8/(8+2) = 80% in that case, though indeed that would not be very robust with such an extremely large fraction of undetermined answers. Luckily we also asked a more straightforward question based on which we can distill very similar information. And, lo and behold, it also comes in at 80% agreement!

    Would you insist in this hypothetical case that the level of consensus is 8%, without reservation?

    Looking forward to your answer.

  26. Bart Verheggen Says:

    Tom,

    Your caveats re number of publications are mostly valid. But even if imperfect, it can still be a useful indicator of a respondent’s expertise, especially because we cast such a wide net of respondents. An agricultural scientist who happened to have written an article on how corn yield in Africa may change under a certain global warming scenario has less relevant expertise in the context of our survey than a climate scientist with 50 climate-related publications under their belt.

    It’s the same adage as with models: all models are wrong but some are useful. All criteria for expertise are wrong, but some are useful (depending, as with models, on the context and on the intended purpose). Within one discipline I’m highly critical of the number of publications being so all-important in determining an academic’s performance. But when assessing expertise across many different disciplines it can be a useful metric to distinguish the group by number of publications on a certain interdisciplinary topic, to assess how deeply different people have delved into that interdisciplinary field.

  27. Editor of the Fabius Maximus website Says:

    Bart,

    “In effect you (and FM) assume that the large fraction of respondents who answered with an “undetermined” answer (dunno, unknown, other) don’t agree with the consensus position (that recent warming is predominantly due to anthropogenic GHG).”

    That’s just weak. Your survey tested for those who have a certain set of beliefs. It determined those who said they do have those beliefs. That’s the finding.

    You can guess to your heart’s content about the other respondents. We just don’t know. Social science surveys produce only incremental information. Slicing and dicing to go beyond that is an exercise in confirmation bias.

    My personal opinion — as an experienced user of such data — is that you’ve provided an example of this. But only a light example. I’ve shown other examples of climate scientists aggressively guessing about such things to produce (in their own minds) the desired outcomes of surveys.

    If you would like better data, do what social scientists do — and your team did in this survey. Learn from the prior research and do another round. There is no shortcut to better insights.

  28. Bart Verheggen Says:

    FM,

    You misread what I wrote. You and Tom assume that undetermined answers do not agree with the consensus. I say I don’t know.

    We do assume something else about those ‘undetermined’ respondents, namely that many of them chose an undetermined answer option because choosing between the 6 ranges provided (note that we did not ask respondents whether they think GHG are responsible for more than half the warming or not) was considered too difficult. We provided evidence supporting that assumption.

    You and Tom provided none for your assumption and ignored the evidence for ours.

    Confirmation bias…?

  29. Editor of the Fabius Maximus website Says:

    Bart,

    (1) “You and Tom assume that undetermined answers do not agree with the consensus. I say I don’t know.”

    That’s an odd response to my comment that says exactly the opposite of what you attribute to me: “We just don’t know” {about the others}.

    My post stated the fraction that responded with an explicit answer about GHG attribution AND their confidence level, stated as a fraction of the total responding — and also as a fraction of the total responding excluding “don’t know” answers.

    I’ll say this again: I made no assumptions about the other responses.

    (2) “the key point is that you (and Tom) gloss over the fact that there were so many undetermined answers to Q1 and seem intent on arriving at as low a consensus as possible.”

    Do you refer to the large graphs of the response that dominate my post? I gave only simple math of the numbers shown, assuming that my readers can read a simple bar chart.

    (3) “ignored the evidence for ours.”

    An odd rebuttal, since I have commented twice specifically about your analysis of that here.

    (4) So far I have seen misrepresentations of what I said and guessing about my motives. This discussion might prove fruitful if you’d avoid mind-reading my intent and refer to direct quotes of what I say.

    I am trying to be nice about this, but I haven’t seen a substantive response to my comments, let alone supporting evidence to your accusation of “creative accounting.” In my world that’s a serious claim.

  30. Editor of the Fabius Maximus website Says:

    Apologies for excess bold in my comment. My error with the HTML code.

  31. thomaswfuller2 Says:

    Hi Bart, I’m busy again today (Cat 5 typhoon heading for Taipei, some last minute shopping to do). There are some statements and assumptions in your recent comments that need to be addressed.

    But at a meta level, I repeat that being part of a consensus should be–needs to be–easy to determine, a positive attitude if not statement.

    Having to infer that people who respond ‘don’t know’ to one question and answer something else to another question are part of a consensus is really weak.

    Not as weak as pulling responses out of the survey, but weak.

    It’s not confident research.

    As for the publication thing, you admit that it’s weak. What other cross tabs did you run?

    And when can I look at the raw data? If you ran a cross tab you surely have it. Can I see it? Excel, SPSS, ASCII, whatever.

  32. Bart Verheggen Says:

    FM,

    We look at this very differently indeed, since what you attribute to me is exactly what I see you doing. And vice versa apparently.

    Implicitly you assume that the large fraction of undetermined answers really have no opinion about whether recent warming is predominantly human GHG induced. I argue that we have evidence that many of them aren’t truly agnostic about that issue, but merely wanted to avoid having to pick a very specific range of values. Again, we have evidence for that, which you have studiously ignored, despite your claims to the contrary.

    Talking about guessing motives: You brought up accusations of motivated reasoning and confirmation bias, which I find pretty rich coming from you.

    I’m still interested in how you would interpret the hypothetical responses to the alternative question.

  33. Bart Verheggen Says:

    Tom,

    Hope you keep well amidst the hurricane.

  34. thomaswfuller2 Says:

    Hiya Bart, thanks–we got through it fine, just a little damp.

    Now, about that data… :)

  35. Editor of the Fabius Maximus website Says:

    Bart,

    (1) “since what you attribute to me is exactly what I see you doing.”

    The difference between us is that I give quotes as evidence, so you can explain why I’m wrong. Your replies tend to make assertions and give no evidence. I respond to your comments with quotes showing I said something different. You repeat your assertion. I see no point to this exercise.

    (2) “You brought up accusations of motivated reasoning and confirmation bias”

    (a) I said: “My primary objection to your view is that you appear to consider the high level result I show as ‘creative accounting’ (your words from Twitter) — and your complex math as definitive. That looks to me like motivated reasoning…”

    I don’t believe that requires telepathic powers to evaluate. Readers can decide for themselves. However I understand why you believe it does. I withdraw it, and apologize. I should have stated my objection in more neutral terms, despite my annoyance at your inflammatory statement and refusal to provide any support for it.

    (b) Confirmation bias is an inductive reasoning error. Neither it nor the “no true Scotsman” logical fallacy requires telepathic powers to discern. I gave a specific example:

    “Unfortunately the survey did not provide sufficient data to reliably evaluate professional competence (as I discussed in some detail in an earlier comment), so complex dicing of the data was necessary to find a definition of climate scientist that produced the desired answer. It’s a hazardous exercise, at some point becoming confirmation bias (no true Scotsman believes), especially when done with weak data and no prior definition.”

    (c) You stated that I “seem intent on arriving at as low a consensus as possible.” You give no evidence for this statement, so readers cannot evaluate it. I replied that it’s false; you replied by repeating your assertion, no evidence.

    Also, intent is irrelevant to science (e.g., alchemists had intents that we would consider daft, but made chemical discoveries of note).

    (3) I have given 5 rounds of comments. Typical is my most recent comment in which I made 5 points. Your reply ignored 4 points, and repeated your assertion on the 5th but gave no evidence.

    Concluding note

    You are a busy guy, and I too have things to do. If you want to debate these matters, fine. If not, that’s OK too.

  36. Bart Verheggen Says:

    FM, this is getting a little tedious. You have steadfastly ignored what I consider the crucial difference in our points of view and which I described in a recent comment again. Your reasoning may be intuitively appealing, but you refuse to even consider why there are so many undetermined responses, or answer my question re an alternative type of question with even more answer options and what you would conclude from that. Or have I missed your reply to that, FM? From where I’m sitting it’s you who is arguing by assertion.

    I think the number you arrive at is nowhere near the actual consensus among climate scientists about whether recent warming is predominantly human induced. However, my “creative accounting” comment was in hindsight overly antagonistic and likewise I withdraw it and apologize.

  37. thomaswfuller2 Says:

    Hi all,

    If analyzing subgroups based on their level of expertise had been part of the project design, perhaps the questions should have and perhaps would have been written differently.

    Analyzing by level of expertise is not mentioned as one of the project goals in the Introduction to your paper, nor in any of the material I’ve seen written about it prior to fielding the survey. It appears to be something added after you looked at the results.

    This is not unusual–often data surprises researchers and provides new avenues to explore. But never in 20 years of doing this have I seen it completely eclipse the principal objective of the research.

    Bart, I’m specifically excluding you from what follows–I think one member of your team (John Cook) is an apologist for the worst of the climate activist community and this is primarily aimed at the activist community. Please feel free to correct me–if Mr. Cook was completely neutral and acted like all our best visions of a scientist at work, let me know.

    Because John Cook is lead author of a heavily publicized paper that trumpets a 97% consensus in the literature, I believe that the 66% consensus found in your survey (and repeated in Bray and von Storch 2010) was considered either unhelpful or anomalous. I note you cite other studies but not Bray and von Storch. I am struggling to understand why you would fail to note that another survey conducted in 2008 came up with exactly the same percentage of agreement with the consensus (although the definition was different).

    It appears from what has been written regarding the survey that because the topline agreement with the consensus statement came in at 66% that a decision was reached to highlight the results of other questions.

    That would explain why the topline percentage was reported by question number only and combined with the figure for another question in the only sentence where it was mentioned in the report.

    As there is a clear difference in responses between those with more publications and those with fewer, that became the story that was reported.

    To repeat–it is not wrong or even unusual to note differences between subgroups–that’s why you ask demographic or organographic questions in the first place.

    But to bury the topline finding and focus on the subgroups is something I’ve never seen before. Ever.

    I’ve designed, fielded, analyzed and reported on the results of over a thousand surveys. In addition, I have trained other researchers, coached them, corrected their mistakes (and learned from them) and edited their reports. I am not a scientist but I believe I will claim status of subject matter expert on the technical aspects of quantitative surveys, both consumer and professional.

    You know I have the highest regard for you–for years you were the only consensus blogger I knew that ‘played fairly’ and I have learned a lot of what little I know about climate science here at your blog.

    So I don’t say this lightly. Your survey is good. The reporting of the results is not.

  38. Mal Adapted Says:

    Hans Erren:

    Consensus is not scientific proot.

    JohnRussell40:

    Are you saying that someone is claiming that consensus is scientific proof?

    Uhm, apparently Hans is saying something about “proot”.

  39. Bart Verheggen Says:

    Tom,

    John Cook has been a very good collaborator on this research and none of your insinuations that he tried to rig the results in a particular direction are true. There was never any pressure from him, or attempt from us as a group, to push the results in a certain direction. We just tried to get the most meaningful and best supported answers possible from our collective data, and John has been very helpful.

    The ghost-stories about him don’t ring true to me in the slightest. Please refrain from personal or group-wise accusations (as per my comment policy).

  40. ...and Then There's Physics Says:

    Tom,
    The problem is that the number of publications is relevant. It speaks to a person’s expertise. So, seeing how the level of consensus changes with publication number tells you something of how it changes as those surveyed increase in expertise and experience. To try and argue that the correct level of consensus is 66% is essentially simply saying that if you have a very inclusive survey, you find a consensus level of 66%. If anything, it sets a lower bound. From the same survey, you can conclude that the level of consensus increases to just below 90% for those with 11 or more publications and to above 90% for those with more than 30 publications.

    I also want to second Bart’s comment about John Cook. The stories about John Cook are simply not true – or, there is no evidence to support them and hence they should be assumed untrue until shown otherwise. Those who promulgate these stories are promoting things about another individual that are simply not true and, in my opinion, such behaviour is objectionable, is a form of libel, and should be called out. Oh, wait….

  41. thomaswfuller2 Says:

    Bart, I’m happy to hear that about Mr. Cook.

    As for getting the most meaningful answers out of the survey, you already have my opinion on that. You buried the lede.

    ATTP, you are incorrect. The consensus as shown by this survey is 66%. There were significant differences between subgroups, one of which is reported in the paper.

  42. ...and Then There's Physics Says:

    Tom,

    ATTP, you are incorrect. The consensus as shown by this survey is 66%. There were significant differences between subgroups, one of which is reported in the paper.

    The consensus of all surveyed is 66% if you include undetermined. I can see why you might want to promote this result and ignore all the others. This doesn’t mean, however, that the other results have no value, or do not indicate a stronger consensus amongst those with more expertise which – whether you like it or not – is relevant.

  43. thomaswfuller2 Says:

    ATTP, I don’t know where you think I said that I want to ignore all the other results. I did not. You have a habit of putting words into people’s mouths. I wish you wouldn’t do it to me.

    I think it is immensely valuable to know how the consensus varies between groups, including age, type of institution and job specialization. And yes, also number of publications.

    Those are normally reported on following any commercial survey. I don’t know why it isn’t done here–lack of time, perhaps? Perhaps another paper is in the works?

  44. ...and Then There's Physics Says:

    Tom,

    I don’t know where you think I said that I want to ignore all the other results. I did not. You have a habit of putting words into people’s mouths. I wish you wouldn’t do it to me.

    That is the only conclusion I could draw given your statement that I was incorrect. If you don’t want people to put words in your mouth, stop making statements that lead them to conclude things you don’t like. It is also hard not to conclude this given your statement that “the consensus given this survey is 66%”. This is not complicated. If you want to claim that the overall consensus given all those surveyed and including all those who are undetermined is 66%, fine. That is obvious. However, the consensus of those with more publications and, hence, more expertise is higher than 66%. This, again, is not complicated.

  45. thomaswfuller2 Says:

    And if it had been reported that way I would be praising Bart’s paper. He could have used your exact words.

  46. thomaswfuller2 Says:

    Although in a sidebar, ATTP, feel free to tell us all why more publications means more expertise. See above for some reasons why it might not.

  47. ...and Then There's Physics Says:

    Tom,

    Although in a sidebar, ATTP, feel free to tell us all why more publications means more expertise. See above for some reasons why it might not.

    I didn’t say it means, I said it speaks to someone’s expertise. It’s certainly possible that someone with very few publications does have a great deal of understanding of a topic, but it would be very odd to describe – based on publications alone – someone with 1 or 2 papers as an expert. We’re trying to estimate the level of consensus about a scientific topic amongst relevant scientists. You present perfectly valid reasons as to why having only a few papers doesn’t mean that someone lacks valid understanding of a topic, and why having lots of papers doesn’t rule out a poor understanding. That still doesn’t change that number of papers is a reasonable proxy for expertise in a research area.

  48. dpy6629 Says:

    The number of publications is only a very rough indicator of competence. Citations are perhaps a little better, but still not very good for recent work, say within the last 10 years, and even then it can go badly wrong. The real problem here, and one that ATTP resists addressing, is that there are serious problems in the scientific publication process itself that can result in totally incorrect conclusions in some cases.

    For example, in aerodynamic optimization, there is a whole string of publications by Jameson, a Fellow of the Royal Society and a true pioneer in the field of aerodynamics, applying continuous control theory to computational aerodynamic optimization. These papers ignore the field of numerical optimization, a rigorous and huge field with strong successes in many application areas. Jameson’s papers are large in number and widely cited, but give a very one-sided and in some cases wrong impression of the issues. This string of papers doesn’t address the controversy or even acknowledge its existence. This situation is perhaps more a matter of people not wanting to harm their careers by challenging one of the leaders of the field. We have a recent paper that does challenge these ideas, in which we actually compared several methods on some challenging problems. But I am not holding my breath. In fact, in private, most people say “why would you even consider the continuous optimization approach, since we know it’s not as well suited to computational settings?”

    In this case, a survey of the literature, such as Cook did, would give a very wrong result. An anonymous survey would be better, but would still not give very good results. You would need to survey numerical optimization experts generally (including ones who had no publications in “aerodynamic optimization”), not just people who claim they are experts in “aerodynamic optimization”, which is a much smaller class. One could argue that in fact the general numerical optimization expert is more to be trusted on the issue than the “aerodynamic optimization” expert, whose career may depend on not offending the powerful.

  49. ...and Then There's Physics Says:

    The real problem here, and one that ATTP resists addressing, is that there are serious problems in the scientific publication process itself that can result in totally incorrect conclusions in some cases.

    It’s hard to resist something that hasn’t come up yet.

    I’m not sure that I follow the rest of your argument. There clearly will be publications that are poor and draw incorrect conclusions. However, they would only significantly influence a consensus study if the sample were small and they were over-represented. A sufficiently large sample should address such an issue, unless there were a lot of poor publications, in which case presumably there would be no consensus.
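
    To put a rough number on that: for a proportion p estimated from n independent responses, the standard error is sqrt(p(1-p)/n), so chance over-representation of a small subgroup shrinks quickly as the sample grows. A quick sketch, again using only figures quoted in this thread:

        import math

        def standard_error(p, n):
            # Standard error of a proportion p estimated from n independent responses.
            return math.sqrt(p * (1 - p) / n)

        p = 0.66  # estimated level of agreement, as quoted in this thread
        for n in (50, 500, 1868):
            print(f"n={n:5d}: standard error = {standard_error(p, n):.1%}")
        # n=1868 gives roughly a 1 percentage point standard error, so a
        # handful of aberrant respondents barely moves the estimate.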

  50. thomaswfuller2 Says:

    ATTP, you write, “That still doesn’t change that number of papers is a reasonable proxy for expertise in a research area.”

    How have publication numbers been tested as such a proxy? By whom? I would assume it would have been published.

  51. dpy6629 Says:

    ATTP, my argument should not be hard to follow, but it does require reading carefully what was written. The example is one where a literature review would give a very misleading result because of the internal dynamics of this narrow field, in which what most people will say privately is never said in the literature, because of flaws in the way science has worked over the last 30 years. The Economist, Nature, and the Lancet have all published editorials saying that these problems are quite serious and need to be addressed. If you disagree, you should at least acknowledge the substance of what is being said and give your reasons for disagreeing.

  52. thomaswfuller2 Says:

    And once more, ATTP, you didn’t say the number of publications speaks to expertise.

    What you wrote was “However, the consensus of those with more publications and, hence, more expertise is higher than 66%. This, again, is not complicated.”

  53. ...and Then There's Physics Says:

    Tom,

    And once more, ATTP, you didn’t say the number of publications speaks to expertise.

    The very first time I mentioned this, I did. I assumed that my first mention of this was sufficient to not have to repeat it precisely over and over again. I still make the mistake of assuming that I’m dealing with reasonable people whose goal is not to nitpick. I keep forgetting that I’m not.

    How have publication numbers been tested as such a proxy? By whom? I would assume it would have been published.

    Why? Who would be stupid enough to dispute that a reasonable proxy for expertise in a research area is the number of publications?

    dpy6629,

    The example is one where a literature review would give a very misleading result because of the internal dynamics of this narrow field

    We’re not talking about a narrow field.

  54. thomaswfuller2 Says:

    ATTP, I guess I would number among those stupid people. It’s bad enough that you count numbers of publications for the publish or perish foolishness in academia. I’m surprised that you want to perpetuate the idiocy outside it.

  55. ...and Then There's Physics Says:

    Tom,

    It’s bad enough that you count numbers of publications for the publish or perish foolishness in academia.

    I don’t.

    I’m surprised that you want to perpetuate the idiocy outside it.

    I’m not.

    Seriously, this is not complicated. Expertise is something one gains with time and experience. In academia, publishing one’s research is an important part of the process. Hence, more publications is an indication of how much work someone has been involved with and an indicator of their overall level of expertise. Therefore, number of publications is a reasonable proxy for expertise.

    However, it’s clearly not linear. Someone with a few relevant publications is likely to be relatively new to a field. Someone with 10 or so relevant publications probably has a reasonable amount of expertise. Someone with 30 or more relevant publications has probably spent a good number of years in the field. None of this is perfect, or exact, but it’s reasonable. However, I would certainly not argue that someone with 300 publications has much more expertise than someone with 30. If we’re going from 0-3 to > 30, then I would argue that that is consistent with going from a group with relatively little expertise in the field to one that probably has more expertise. Again, not perfect, but reasonable.

    This doesn’t even seem all that controversial. It would be extremely rare in any field to describe someone relatively new to the field as having a great deal of expertise, but quite normal to do so for someone who has been in the field for a long time. This applies both inside and outside academia.
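
    What I have in mind is the kind of sub-group tabulation the paper reports: bin respondents by self-declared number of relevant publications and compute the level of agreement within each bin. A hypothetical sketch of that tabulation – the records and field names are invented, and only the 0-3 and > 30 bin edges come from this discussion, the intermediate bins being guesses:

        # Invented example records: one per respondent.
        respondents = [
            {"pubs": 2,  "agrees": False},
            {"pubs": 14, "agrees": True},
            {"pubs": 45, "agrees": True},
            # ...
        ]

        bins = [("0-3", 0, 3), ("4-10", 4, 10), ("11-30", 11, 30), ("> 30", 31, None)]

        for label, lo, hi in bins:
            group = [r for r in respondents
                     if r["pubs"] >= lo and (hi is None or r["pubs"] <= hi)]
            if group:
                share = sum(r["agrees"] for r in group) / len(group)
                print(f"{label:>6}: {share:.0%} agreement (n={len(group)})")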

  56. thomaswfuller2 Says:

    ATTP, I’m sure that works for some in academia. How well does that work in the private sector?

    Also, it makes no allowance for younger members of the profession who have received a superior education due to improved knowledge and better pedagogical techniques.

    It falsely rewards those who play the multi-author game and penalizes those who spend more time on a single publication.

    As for it being rare to see someone relatively new to the field as having a great deal of expertise, I would have thought you would be holding up the shining example of the 29-year-old Michael Mann riding his Hockey Stick to lead authorship in an IPCC assessment.

    I believe using publication numbers as a metric is a mistake. I think it is commonly used only because of a paucity of alternatives. Even in academia they are now turning towards leaning on impact factor rather than mere numbers.

  57. ...and Then There's Physics Says:

    Tom,

    It falsely rewards those who play the multi-author game and penalizes those who spend more time on a single publication.

    It does no such thing. I’m not suggesting that individually every person in the group who has > 30 relevant publications is better than, more technically adept than, or more worthy of career advancement than every person in the group with only 0-3 publications. I’m also using the word expertise, not ability or potential. I’m simply pointing out something that I think is relatively self-evident. If you have a large group of researchers, all of whom have > 30 relevant publications, then it is likely that that group has more relevant experience (expertise) than a group in which no one has more than 3 relevant publications.

    As for it being rare to see someone relatively new to the field as having a great deal of expertise, I would have thought you would be holding up the shining example of the 29-year-old Michael Mann riding his Hockey Stick to lead authorship in an IPCC assessment.

    FFS. Is it not possible to have a discussion without someone like you bringing up Michael Mann? Next it will be Al Gore. Also, if Michael Mann were a large group of people, you might have a point. Since he clearly isn’t a large group of people, it’s entirely irrelevant.

    I believe using publication numbers as a metric is a mistake. I think it is commonly used only because of a paucity of alternatives. Even in academia they are now turning towards leaning on impact factor rather than mere numbers.

    If you mean impact factors as normally defined, then you’re illustrating your ignorance. There is a huge push against impact factors (which are meant to measure the impact of the journal in which you publish), as they are regarded as an extremely poor indicator of quality. Also, I’m not using publication numbers as a metric for an individual; I’m simply pointing out that a large group of highly published researchers likely has more experience than a group of researchers who have very few publications. I’m also not using publication numbers as an indicator of quality or ability, simply as an indicator of the general level of expertise of a group.

  58. thomaswfuller2 Says:

    Experience or expertise? Do you think the two are the same?

  59. ...and Then There's Physics Says:

    Experience or expertise? Do you think the two are the same?

    Not precisely, but does it really matter (i.e., are you about to become an irritating pedant)? Someone new to a field could have a great deal of ability and potential, but little experience/expertise. Let’s cut to the chase. This is about groups of researchers, not about individuals. Assessing an individual on the basis of the number of publications is very poor. Trying to distinguish between the abilities/potential of individuals on the basis of number of publications would be equally poor. We’re not doing that, though. We’re considering two groups of researchers, one in which everyone has a relatively large number of relevant publications (> 30) and one in which no one has more than 3 relevant publications. Concluding that the first group likely has more relevant expertise than the latter seems reasonable. Concluding that all of the individuals in the first group are more skillful, have more potential and are more capable than all of the researchers in the second is not.

    I’m, however, rather tired of this discussion. If you want to conclude that we can say nothing (or little) about the relevant expertise of these two groups simply because number of publications is typically a poor metric, go ahead. I will simply conclude that this is another way to pedantically dismiss a result you don’t like.

  60. thomaswfuller2 Says:

    Actually, ATTP, I’m sorry you’re tired. You make good points. I’m not entirely convinced – Bart would have to show that other groupings were not more explanatory – but since we are at odds on so many other points (where I remain convinced you are in error), it would be wrong of me not to say that you are making good points here.

  61. thomaswfuller2 Says:

    As for irritating pedant, I am happy to irritate the advocates of a false Konsensus, but I don’t have enough diplomas to really be pedantic. You could if you wish say boring or repetitive (or boring and repetitive) but pedantic? You’re the professor here.

  62. ...and Then There's Physics Says:

    Tom,

    but since we are at odds on so many other points (where I remain convinced you are in error), it would be wrong of me not to say that you are making good points here.

    I’m not entirely sure what I’m in error about. I think I have two simple claims, one of which seems obvious, given the survey results, and the other seems reasonable, given my understanding of research communities.

    1. The result of this survey indicates that if you choose sub-samples defined by the number of self-declared publications, then the level of consensus increases with increasing number of self-declared publications.

    2. Consider two groups of researchers; Group 1 comprises researchers who all have more than N relevant publications, and Group 2 comprises researchers who all have fewer than n relevant publications. It is likely that Group 1 will comprise researchers with more relevant expertise/experience than Group 2, as long as N is greater than n, n is small (close to 0), and N is large (greater than 10).

    Given 2, claim 1 suggests that the level of consensus increases with increasing expertise/experience. That’s about all.

    You don’t need to have a diploma to be a pedant, and spelling consensus with a K makes you seem silly.

  64. thomaswfuller2 Says:

    I think it would be far more interesting and possibly have more explanatory power to run the crosstabs by length of time and by job specialization for Q’s 1, 2 and 3.

    Fewer inferences would be involved, and if experience/expertise is a factor, it would be easier to see by looking at time in grade or time in post, however it was asked.

  65. ...and Then There's Physics Says:

    Tom,

    I think it would be far more interesting and possibly have more explanatory power to run the crosstabs by length of time and by job specialization for Q’s 1, 2 and 3.

    Sure, but I don’t think that somehow means that what I said above isn’t a reasonable interpretation.

  66. thomaswfuller2 Says:

    But we don’t really read scientific papers to get a ‘reasonable interpretation,’ do we? Not when there is actual data, at least.

  67. ...and Then There's Physics Says:

    Tom,

    We read papers for many reasons. At the end of the day, this is just a consensus study. It doesn’t tell us whether the consensus view is correct or not. Trying to nail down some kind of specific number is probably not all that important. I’ve no objection to further analysis, but I have no idea if the data is actually there or if it’s allowed to be released (presumably those involved have the right not to be identified).

  68. willard (@nevaudit) Says:

    Manual pingback:

    One of the objectives of our survey was indeed to find out with much more detail than before what exactly climate scientists agree on and to what extent.
    However, the number Fabius Maximus arrives at is nowhere near the actual consensus among climate scientists about whether recent warming is predominantly human induced. I replied to his and Tom Fuller’s line of reasoning here:

    PBL survey shows strong scientific consensus that global warming is largely driven by greenhouse gases

    Basically, they ignore the fact that a large fraction (22%) of respondents responded with either I don’t know, unknown, or other, to one of the two attribution questions. We argue that many of these respondents aren’t truly agnostic about that issue, but merely wanted to avoid having to pick a very specific range of values. We back that up with evidence, both in the ES&T article and in the abovementioned blogpost. It’s based on comparing the responses to the other attribution question (which is studiously ignored by critics) and on respondents’ open comments to the first question.

    The conceits of consensus
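
    The cross-check described in the quoted passage can be made concrete: take the respondents who gave an undetermined answer on the quantitative question (Q1) and tabulate their answers to the qualitative question (Q3). A hypothetical sketch, with invented records, field names, and answer categories (the actual comparison is in the ES&T paper and its supporting information):

        from collections import Counter

        # Invented example records: each respondent's answers to the
        # quantitative (Q1) and qualitative (Q3) attribution questions.
        responses = [
            {"q1": "undetermined", "q3": "GHGs dominant"},
            {"q1": "51-75%",       "q3": "GHGs dominant"},
            {"q1": "undetermined", "q3": "undetermined"},
            # ...
        ]

        # Distribution of Q3 answers among those undetermined on Q1:
        q3_given_q1_undetermined = Counter(
            r["q3"] for r in responses if r["q1"] == "undetermined"
        )
        print(q3_given_q1_undetermined)
        # If most such respondents still pick an attribution category on Q3,
        # that supports reading their Q1 answer as "too precise to answer"
        # rather than genuine agnosticism.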

  69. Joshua Says:

    ==> “You both are essentially telling the reader to ignore the principal finding of your survey ”

    Since determining their principal finding is apparently being crowdsourced…

    …I wonder if the principal finding should be that about 12% of the 1,868 respondents indicated a belief that the ACO2 contribution to warming since the mid-20th century is less than 50%.

  70. Joshua Says:

    From Fabius:

    ==> “(b) Confirmation bias is an inductive reasoning error. Neither it nor the “no true Scotsman” logical fallacy requires telepathic powers to discern. I gave a specific example:”

    I would take Fabius’ instructions about fallacious reasoning with a grain of salt. In just one thread, he fallaciously asserts definitive conclusions about longitudinal trends based on nothing other than cross sectional and anecdotal evidence, employs the ad populum fallacy, and tops it off with ad homs.

    Remember the Ebola hysteria. What did we learn from it?

  71. thomaswfuller2 Says:

    Joshua, despite your motivated reasoning to find bad faith in all of your opponents, I wrote that the consensus is a very robust 66% and that “In fact, only 12% indicated that GHGs caused between zero and 50% of warming since the middle of the 20th century.”

    Bart Verheggen’s Survey of Climate Scientists

    Not that that will make any difference in what you write.

  72. 23443 Says:

    What percent believe it’s more than 0%? Then it looks like carbon is creating externalities. Sure seems like grounds for “taxation” (*screams*) to me.
