Posts Tagged ‘97%’

Richard Tol misrepresents consensus studies in order to falsely paint John Cook’s 97% as an outlier

September 24, 2015

John Cook warned me: if you attempt to quantify the level of scientific consensus on climate change, you will be fiercely criticized, even though most of the counterarguments don’t stand up to scrutiny. And so it happened.

The latest in this saga is a comment that Richard Tol submitted to ERL in response to the study by John Cook and colleagues, which found 97% agreement in the scientific literature that global warming is human caused. Tol tries to paint Cook’s 97% as an outlier, but in doing so he misrepresents many other studies, including the survey that I undertook with colleagues in 2012. In his comment and his blogpost he shows the following graph:

Richard Tol misrepresenting existing consensus estimates

Richard Tol comes to very different conclusions regarding the level of scientific consensus than the authors of the respective articles themselves (Oreskes, 2004; Anderegg et al., 2010; Doran and Kendall Zimmerman, 2009; Stenhouse et al., 2013; Verheggen et al., 2014). On the one hand, he uses what he calls “complete sample” results, which in many cases are close to meaningless as an estimate of the actual level of agreement in the relevant scientific community (this applies most strongly to Oreskes and Anderegg et al.). On the other hand, he uses “subsample” results, which in some cases are even more meaningless (the most egregious example being the subsample of outspoken contrarians in Verheggen et al.).

The type of reanalysis Tol has done, if applied to e.g. evolution, would look somewhat like this:

  • Of all evolutionary biology papers in the sample, 75% explicitly or implicitly accept the consensus view on evolution, 25% take no position on evolution, and none reject it. Tol would conclude from this that the consensus on evolution is 75%. This number could easily be brought down to 0.5% if you sample all biology papers and count those that take an affirmative position on evolution as a fraction of the whole (see the sketch after this list). This is analogous to how Tol misrepresented Oreskes (2004).
  • Let’s ask biologists what they think of evolution, but to get an idea of dissenting views let’s also ask some prominent creationists, e.g. from the Discovery Institute. Never mind that half of them aren’t actually biologists. Surprise, surprise, the level of agreement with evolution in this latter group is very low (the real surprise is that it’s not zero). Now let’s pretend that this is somehow representative of the scientific consensus on evolution, alongside subsamples of actual evolutionary biologists. That would be analogous to how Tol misrepresented the “unconvinced” subsample of Verheggen et al (2014).
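To make the arithmetic in the first bullet concrete, here is a minimal sketch. The counts are purely illustrative and simply mirror the evolution analogy above; they are not taken from any of the actual consensus studies.

```python
# Illustrative counts only -- they mirror the evolution analogy above,
# not any of the actual consensus studies.
accept = 750       # papers that explicitly or implicitly accept evolution
no_position = 250  # papers that take no position either way
reject = 0         # papers that reject evolution

# Consensus among papers that actually take a position
taking_position = accept + reject
print(f"Among papers taking a position: {accept / taking_position:.0%}")  # 100%

# Tol-style ratio: accepting papers as a fraction of the whole sample
whole_sample = accept + no_position + reject
print(f"As a fraction of all sampled papers: {accept / whole_sample:.0%}")  # 75%

# Broaden the sample to all of biology, where most papers never mention
# evolution at all, and the same ratio collapses towards zero.
all_biology_papers = 150_000  # hypothetical
print(f"As a fraction of all biology papers: {accept / all_biology_papers:.1%}")  # 0.5%
```

The point is that the number you get depends almost entirely on what you put in the denominator, not on the actual level of agreement among those who take a position.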

Collin Maessen provides a detailed take-down of Richard Tol on his blog, quoting extensively from the scientists whose work was misrepresented by Tol (myself included). The only surveys that are not misrepresented are those by Bray and von Storch (2007; 2010). This is how I am quoted at Collin’s blog RealSkeptic:

Tol selectively quotes results from our survey. We provided results for different subsamples, based on different questions, and based on different ways of calculating the level of agreement, in the Supporting Information with our article in ES&T. Because we cast a very wide net with our survey, we argued in our paper that subgroups based on a proxy for expertise (the number of climate related peer reviewed publications) provide the best estimate of the level of scientific consensus. Tol on the other hand presents all subsamples as representative of the scientific consensus, including those respondents who were tagged as “unconvinced”. This group consists to a large extent of signatories of public statements disapproving of mainstream climate science, many of whom are not publishing scientists. For example, some Heartland Institute staffers were also included. It is actually surprising that the level of consensus in this group is larger than 0%. To claim, as Richard Tol does, that the outcome for this subsample is somehow representative of the scientific consensus is entirely nonsensical.

Another issue is that Richard Tol bases his numbers on just one of the two survey questions about the causes of recent climate change, a form of cherry picking. Moreover, we quantified the consensus as a fraction of those who actually answered the question by providing an estimate of the human greenhouse gas contribution. Tol, on the other hand, quantifies the consensus as a fraction of all those who were asked the question, including those who did not provide such an estimate. We provided a detailed argument for our interpretation in both the ES&T paper and in a recent blogpost.

Tol’s line of reasoning here is similar to his misrepresentation of Oreskes’ results: he takes the number of accepting papers not as a fraction of the papers that take a position, but as a fraction of all papers, including those that take no position on current anthropogenic climate change. Obviously, the latter should be excluded from the ratio, unless one is interested in producing an artificially low, but meaningless, number.

Some quotes from the other scientists:

Oreskes:

Obviously he is taking the 75% number below and misusing it. The point, which the original article made clear, is that we found no scientific dissent in the published literature.

Anderegg:

This is by no means a correct or valid interpretation of our results.

Neil Stenhouse:

Tol’s description omits information in a way that seems designed to suggest—inaccurately—that the consensus among relevant experts is low.

Doran:

To pull out a few of the less expert groups and give them the same weight as our most expert group is a completely irresponsible use of our data.

You can read their complete quotes at RealSkeptic.

See also this Storify of my Twitter discussion with Richard Tol.


PBL survey shows strong scientific consensus that global warming is largely driven by greenhouse gases

August 4, 2015

Updates:

(5 Sep 2015): US Presidential candidate Rick Santorum used an erroneous interpretation of our survey results on the Bill Maher show. My detailed response to Santorum’s claim is in a newer blogpost. PolitiFact and FactCheck.org also chimed in and found Santorum’s claims to be false. The blogpost below goes into detail about how different interpretations could lead to different conclusions and how some interpretations are better supported than others.

As Michael Tobis rightly points out, the level of scientific consensus that you find “depends crucially on who you include as a scientist, what question you are asking, and how you go about asking it”. And on how you interpret the data. We argued that our survey results show a strong scientific consensus that global warming is predominantly caused by anthropogenic greenhouse gases. Others beg to differ. Recent differences of opinion are rooted in different interpretations of the data. Our interpretation is based on how we went about asking certain questions and what the responses indicate.

To quantify the level of agreement with a certain position, it makes most sense to look at the number of people as a fraction of those who answered the question. We asked respondents two questions about attribution of global warming (Q1 asking for a quantitative estimate and Q3 asking for a qualitative estimate; the complete set of survey questions is available here). However, as we wrote in the ES&T paper:

Undetermined responses (unknown, I do not know, other) were much more prevalent for Q1 (22%) than for Q3 (4%); presumably because the quantitative question (Q1) was considered more difficult to answer. This explanation was confirmed by the open comments under Q1 given by those with an undetermined answer: 100 out of 129 comments (78%) mentioned that this was a difficult question.

There are two ways of expressing the level of consensus, based on these data: as a fraction of the total number of respondents (including undetermined responses), or as a fraction of the number of respondents who gave a quantitative or qualitative judgment (excluding undetermined answers). The former estimate cannot exceed 78% based on Q1, since 22% of respondents gave an undetermined answer. A ratio expressed this way gives the appearance of a lower level of agreement. However, this is a consequence of the question being difficult to answer, due to the level of precision in the answer options, rather than it being a sign of less agreement.

Moreover, the results in terms of level of agreement based on Q1 and Q3 are mutually consistent with each other if the undetermined responses are omitted in calculating the ratio; they differ markedly when undetermined responses are included. In the supporting information we provided a table (reproduced below) with results for the level of agreement calculated either as a fraction of the total (i.e., including the undetermined answers) or as a fraction of those who expressed an opinion (i.e., excluding the undetermined answers), specified for different subgroups.
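As a minimal sketch of the two ways of calculating, assume a single attribution question with the hypothetical response counts below; they are invented for the example and do not reproduce the survey data (the actual numbers are in Table S3):

```python
# Hypothetical response counts for one attribution question; invented for
# illustration, not taken from the survey data.
agree = 780          # attribute more than half of the warming to GHGs
other_answers = 120  # other quantitative or qualitative judgments
undetermined = 220   # "unknown", "I do not know", "other"

total = agree + other_answers + undetermined
determined = agree + other_answers

# As a fraction of all respondents, undetermined answers included
print(f"Including undetermined: {agree / total:.0%}")      # 70%

# As a fraction of those who actually expressed a judgment
print(f"Excluding undetermined: {agree / determined:.0%}")  # 87%
```

The more respondents find the answer options too difficult to choose from, the larger the gap between the two numbers becomes, even if the views of those who do answer stay exactly the same.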

Verheggen et al - EST 2014 - Table S3

For the reasons outlined above we consider the results excluding the undetermined responses the most meaningful estimate of the actual level of agreement among our respondents. Indeed, in our abstract we wrote:

90% of respondents with more than 10 climate-related peer-reviewed publications (about half of all respondents), explicitly agreed with anthropogenic greenhouse gases (GHGs) being the dominant driver of recent global warming.

This is the average of the two subgroups with the highest number of self-reported publications for both Q1 and Q3. In our paper we discussed both ways of quantifying the level of consensus, including the 66% number as advocated by Tom Fuller (despite his claims that we didn’t).

Fabius Maximus goes further down still, claiming that the level of agreement with IPCC AR5 based on our survey results is only 43-47%. This result is based on the number of respondents who answered Q1b, asking for the confidence level associated with warming being predominantly greenhouse gas-driven, as a fraction of the total number of respondents who filled out Q1a (whether with a quantitative or an undetermined answer). As Tom Curtis notes, Fab Max erroneously compared our statement to the “extremely likely” statement in AR5, whereas in terms of greenhouse gases AR5 in Chapter 10 considered it “very likely” that they are responsible for more than half the warming. Moreover, our survey was undertaken in 2012, long before AR5 was available, so if respondents had IPCC in mind as a reference, it would have been AR4. If anything, the survey respondents were by and large more confident than IPCC that warming had been predominantly greenhouse gas driven, with over half assigning a higher likelihood than IPCC did in both AR4 and AR5.

PBL background report - Q1b

Let me expand on the point of including or excluding the undetermined answers with a thought experiment. Imagine that we had asked whether respondents agreed with the AR4 statement on attribution, yes or no. I am confident that the resulting fraction of yes-responses would (far) exceed 66%. We chose instead to ask a more detailed question, and add other answer options for those who felt unwilling or unable to provide a quantitative answer. On the other hand, imagine if we had respondents choose whether the greenhouse gas contribution was -200, -199, …-2, -1, 0, 1, 2, … 99, 100, 101, …200% of the observed warming. The question would have been very difficult to answer to that level of precision. Perhaps only a handful would have ventured a guess and the vast majority would have picked one of the undetermined answer options (“I don’t know”, “unknown”, “other”). Should we in that case have concluded that the level of consensus is only a meagre few percentage points? I think not, since the result would be a direct consequence of the answer options being perceived as too difficult to meaningfully choose from.

Calculating the level of agreement in the way we suggest, i.e. excluding undetermined responses, provides a more robust measure, as it is relatively independent of the perceived difficulty of having to choose between specific answer options. And, as the various critics omit, it is consistent with the responses to the qualitative attribution question, which also indicates a strong consensus. If you insist on including undetermined responses in calculating the level of agreement, then it is best to use only the results from Q3. Tom Fuller’s 66% becomes 83% in that case (the level of consensus for all respondents), showing the lack of robustness of this approach when applied to Q1.
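A back-of-envelope check, using only percentages quoted in this post (66% and 83% agreement over all respondents for Q1 and Q3 with undetermined answers included, and 22% and 4% undetermined answers respectively), illustrates that consistency. This is an approximate reconstruction, not a recalculation from the raw survey data:

```python
# Approximate reconstruction from percentages quoted in this post;
# not a recalculation from the underlying survey data.
questions = {
    # question: (agreement including undetermined, share of undetermined answers)
    "Q1 (quantitative)": (0.66, 0.22),
    "Q3 (qualitative)":  (0.83, 0.04),
}

for name, (agree_incl, undetermined) in questions.items():
    agree_excl = agree_incl / (1 - undetermined)
    print(f"{name}: {agree_incl:.0%} including undetermined, "
          f"about {agree_excl:.0%} excluding undetermined")

# Q1: 66% -> ~85%; Q3: 83% -> ~86%. The two questions diverge when
# undetermined answers are counted in the denominator and nearly coincide
# once they are excluded.
```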

Verheggen et al - Figure 1 - GHG contribution to global warming

Some other issues that came up in recent discussions:

See also the basic summary of our survey findings and the accompanying FAQ.

 

