Posts Tagged ‘Richard Tol’

How blogs convey and distort scientific information about polar bears and Arctic sea ice

December 22, 2017

Our article on sea ice and polar bears proved to be a hot-button issue in the blogosphere. This was not entirely unexpected, of course. What is striking though, is that amidst all the criticism nobody has challenged our core finding: blogs on which man-made climate change and its impacts are downplayed are far removed from the scientific literature, at least regarding the topic of shrinking Arctic sea ice and the resulting future threat to polar bears.

What is more, alternative figures prepared by some critics basically underscore this same message (see examples below). That’s not so strange of course, since the signal is so clear: there is hardly any overlap between contrarian blogs and the scientific literature on this topic. Take a look at the pie charts below for the three statements on sea ice and those on polar bears, for the two different groups of blogs (termed denier and science-based blogs, respectively), and the peer-reviewed scientific articles that investigate both polar bears and Arctic sea ice. This is basically an extension of figure 1 in the paper, in which only the two blog categories were shown. Most scientific articles as well as science-based blogs assess Arctic sea ice extent to be shrinking and polar bears to be threatened as a result, while most denier blogs take a contrary view on both sea ice and polar bears. They are poles apart.

You may argue that it was overkill to use an elaborate statistical analysis such as PCA on this dataset. It was used mainly to visualize our results in one figure. All the criticism of the PCA and of the details of how the data were analyzed misses the forest for the trees: there is a clear distinction between blogs, where the group that accepts AGW appears to base its claims on peer-reviewed science, and the group that doesn’t accept AGW does not. The latter group appears to base its claims to a large extent on blogs written by one particular biologist, Susan Crockford, whose views run counter to the relevant ecological literature.

Our paper is first and foremost a characterization of the blogosphere, and how it compares to the scientific literature. We restricted our literature search to scientific articles that investigate both polar bears and sea ice, and that shed light on polar bear ecology and how it may or may not depend on the presence of sea ice. An article such as “Evolutionary roots of iodine and thyroid hormones in cell signaling” does not fit that bill, to name just one example of Crockford’s scientific articles that has been pointed out as evidence of her having published on polar bear ecology. She has not.

Even though it is not the main focus of our paper, we described the scientific context of polar bear ecology and explained how and why polar bears depend on their sea ice habitat (summarized in my previous blog post). As such, we argued that the scientific understanding of Arctic sea ice decline and polar bear ecology is more credible than the viewpoints put forward on contrarian blogs. However, providing new ecological evidence was not the point of this paper. The point was to investigate how our current ecological understanding is conveyed and distorted in the blogosphere.

If some people think that our conclusion is wildly wrong, then they could at least show some evidence to prove their point, right? They probably realize that our conclusion is robust, so instead they nitpick details and try to make it appear as if that undermines our conclusion. It does not.

 

Appendix: A collection of PCA graphs depicting our results, all basically underscoring the main conclusion that one group of blogs correctly conveys our current scientific understanding, while another group of blogs distorts this understanding and promotes a very different viewpoint regarding sea ice and polar bears.

From top to bottom the following PCA figures are shown:

  • As published in the Bioscience paper, in which missing values are replaced by zero after scaling the data
  • List-wise deletion of all records with missing values, considerably reducing overall sample size
  • Using multiple imputation with logistic regression (5 rounds of 40 iterations each)
  • PCA figure of the same data as produced by Richard Tol, where sample size of each location in the graphs is depicted by symbol size
  • PCA figure of the same data as produced by RomanM at ClimateAudit, without information on sample size

As mentioned in the supplemental information with our paper, jittering was applied to our PCA figure to gently offset data with the exact same entries from each other for graphical purposes. Tol uses an alternative method to provide information on sample size for specific data entries, namely via the size of the symbol used in the figure. Whatever your preference, the conclusion drawn from these figures is the same: there is a clear gap between the consensus in the scientific literature and science-based blogs on the one hand, and contrarian blogs on the other hand. We thank Roman Mureika and Richard Tol for underscoring the validity of our conclusion.
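For readers curious what “replacing missing values by zero after scaling” and jittering amount to in practice, here is a minimal NumPy sketch. The data are randomly generated stand-ins for the coded blog/paper positions, not the actual dataset, and the jitter scale is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: rows are documents (blog posts / papers),
# columns are coded positions on six statements (-1 = contrary, 0 = neutral,
# 1 = agree), with roughly 10% of entries missing (NaN).
X = rng.choice([-1.0, 0.0, 1.0], size=(30, 6))
X[rng.random(X.shape) < 0.1] = np.nan

# Step 1: scale each column using only the observed (non-missing) values.
mu = np.nanmean(X, axis=0)
sd = np.nanstd(X, axis=0)
Z = (X - mu) / sd

# Step 2: replace missing values by zero AFTER scaling, i.e. by the
# column mean -- the approach described for the published figure.
Z = np.nan_to_num(Z, nan=0.0)

# Step 3: PCA via singular value decomposition; keep two components.
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :2] * S[:2]

# Step 4: jitter identical rows slightly so points with the exact same
# entries remain distinguishable in a scatter plot.
jittered = scores + rng.normal(scale=0.02, size=scores.shape)

print(scores.shape)  # (30, 2)
```

Alternatives such as list-wise deletion or multiple imputation (as in the other figures above) only change Steps 1–2; the SVD step is the same.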


Consensus on consensus: a synthesis of consensus estimates on human-caused global warming

April 13, 2016

Most scientists agree that current climate change is mainly caused by human activity. That has been repeatedly demonstrated on the basis of surveys of the scientific opinion as well as surveys of the scientific literature. In an article published today in the journal Environmental Research Letters (ERL) we provide a review of these different studies, which all arrive at a very similar conclusion using different methods. This shows the robustness of the scientific consensus on climate change.

This meta-study also shows that the level of agreement that the current warming is caused by human activity is greatest among researchers with the most expertise and/or the most publications in climate science. That explains why literature surveys generally find higher levels of consensus than opinion surveys. After all, experienced scientists who have published a lot about climate change have, generally speaking, a good understanding of the anthropogenic causes of global warming, and they often have more peer-reviewed publications than their contrarian colleagues.

Figure: Level of consensus on human-induced climate change versus expertise in climate science. Black circles are data based on studies of the past 10 years. Green line is a fit through the data.

The initial reason for this review article was a specific comment by Richard Tol on John Cook’s literature survey as published in ERL in 2013. Cook found a 97% consensus on anthropogenic global warming in the scientific literature on climate change. This article has been both vilified and praised. Tol argued that Cook’s study is an outlier, but he did so by misrepresenting most other consensus studies, including the survey I undertook while at the Netherlands Environmental Assessment Agency (PBL). To get the gist of the discussion with Tol see e.g. this Storify I made based on my Twitter exchanges with him (warning: for climate nerds only). Suffice it to say that the authors of these other consensus studies were likewise not impressed by Tol’s caricature of their work. This is how the broad author team for the current meta-analysis arose, which shows that Cook’s literature survey fits well within the spectrum of other studies.

The video below provides a great overview of the context and conclusions of this study:

Surveys show that among the broad group of scientists who work on the topic of climate change the level of consensus is roughly between 83 and 97% (e.g. Doran, Anderegg, Verheggen, Rosenberg, Carlton, Bray, Stenhouse, Pew, Lichter, Vision Prize). If you zoom in on the subset of most actively publishing climate scientists you find a consensus of 97% (Doran, Anderegg). Analyses of the literature also indicate a level of consensus of 97% (Cook) or even 100% (Oreskes). The strength of literature surveys lies in the fact that they sample the prime locus of scientific evidence, and thus they provide the most direct measure of the consilience of evidence. Opinion surveys, on the other hand, can achieve much more specificity about what exactly is agreed upon. The latter aspect – what exactly is agreed upon, and how does that compare to the IPCC report? – is something we investigated in detail in our ES&T article based on the PBL survey.

As evidenced by the many – unfounded – criticisms of consensus studies, this is still a hot topic in the public debate, despite the fact that study after study has confirmed that there is broad agreement among scientists about the big picture: our planet is getting warmer and that is (largely) due to human activity, primarily the burning of fossil fuels. A substantial fraction of the general public, however, is still confused even about the big picture. In politics, schools and media, climate change is often not communicated in accordance with the current scientific understanding, even though the situation here in the Netherlands is not as extreme as e.g. in the US.

Whereas the presence of widespread agreement is obviously not proof of a theory being correct, it can’t be dismissed as irrelevant either: As the evidence accumulates and keeps pointing in the same general direction, the experts’ opinion will logically converge to reflect that, i.e. a consensus emerges. Typically, a theory either rises to the level of consensus or it is abandoned, though it may take considerable time for the scientific community to accept a theory, and even longer for the public at large.

Although science can never provide absolute certainty, it is the best method we have to understand complex systems and risks, such as climate change. If you value science, it is wise not to brush aside broadly accepted scientific insights too easily, unless you have very good arguments for doing so (“extraordinary claims require extraordinary evidence”). I think it is important for proper democratic decision making that the public is well informed about what is scientifically known about important issues such as climate change.

More info/context/reflections:

Dutch version at sister-blog “klimaatverandering”

Column by first author John Cook in Bulletin of the Atomic Scientists

Stephan Lewandowsky on the psychology of consensus

Collin Maessen tells the backstory starting with Richard Tol’s nonsensus

Ken Rice at …And Then There’s Physics

Dana Nuccitelli in the Guardian

Sou at HotWhopper

Amsterdam University College (AUC) news item

 

Richard Tol misrepresents consensus studies in order to falsely paint John Cook’s 97% as an outlier

September 24, 2015

John Cook warned me: if you attempt to quantify the level of scientific consensus on climate change, you will be fiercely criticized. Most of the counterarguments don’t stand up to scrutiny however. And so it happened.

The latest in this saga is a comment that Richard Tol submitted to ERL, as a response to John Cook’s study in which they found 97% agreement in the scientific literature that global warming is human caused. Tol tries to paint Cook’s 97% as an outlier, but in doing so misrepresents many other studies, including the survey that I undertook with colleagues in 2012. In his comment and his blogpost he shows the following graph:

Richard Tol misrepresenting existing consensus estimates

Richard Tol comes to very different conclusions regarding the level of scientific consensus than the authors of the respective articles themselves (Oreskes, 2004; Anderegg et al., 2010; Doran and Kendall Zimmerman, 2009; Stenhouse et al., 2013; Verheggen et al., 2014). On the one hand, he is using what he calls “complete sample” results, which in many cases are close to meaningless as an estimate of the actual level of agreement in the relevant scientific community (this applies most strongly to Oreskes and Anderegg et al). On the other hand he is using “subsample” results, which in some cases are even more meaningless (the most egregious example of which is the subsample of outspoken contrarians in Verheggen et al).

The type of reanalysis Tol has done, if applied to e.g. evolution, would look somewhat like this:

  • Of all evolutionary biology papers in the sample, 75% explicitly or implicitly accept the consensus view on evolution. 25% did not take a position on whether evolution is accepted or not. None rejected evolution. Tol would conclude from this that the consensus on evolution is 75%. This number could easily be brought down to 0.5% if you sample all biology papers and count those that take an affirmative position on evolution as a fraction of the whole. This is analogous to how Tol misrepresented Oreskes (2004).
  • Let’s ask biologists what they think of evolution, but to get an idea of dissenting views let’s also ask some prominent creationists, e.g. from the Discovery Institute. Never mind that half of them aren’t actually biologists. Surprise, surprise, the level of agreement with evolution in this latter group is very low (the real surprise is that it’s not zero). Now let’s pretend that this is somehow representative of the scientific consensus on evolution, alongside subsamples of actual evolutionary biologists. That would be analogous to how Tol misrepresented the “unconvinced” subsample of Verheggen et al (2014).

Collin Maessen provides a detailed take-down of Richard Tol on his blog, quoting extensively from the scientists whose work was misrepresented by Tol (myself included). The only surveys that are not misrepresented are those by Bray and von Storch (2007; 2010). This is how I am quoted at Collin’s blog RealSkeptic:

Tol selectively quotes results from our survey. We provided results for different subsamples, based on different questions, and based on different types of calculating the level of agreement, in the Supporting Information with our article in ES&T. Because we cast a very wide net with our survey, we argued in our paper that subgroups based on a proxy for expertise (the number of climate related peer reviewed publications) provide the best estimate of the level of scientific consensus. Tol on the other hand presents all subsamples as representative of the scientific consensus, including those respondents who were tagged as “unconvinced”. This group consists to a large extent of signatories of public statements disapproving of mainstream climate science, many of whom are not publishing scientists. For example, some Heartland Institute staffers were also included. It is actually surprising that the level of consensus in this group is larger than 0%. To claim, as Richard Tol does, that the outcome for this subsample is somehow representative of the scientific consensus is entirely nonsensical.

Another issue is that Richard Tol bases the numbers he uses on just one of the two survey questions about the causes of recent climate change, i.e. a form of cherry picking. Moreover, we quantified the consensus as a fraction of those who actually answered the question by providing an estimate of the human greenhouse gas contribution. Tol on the other hand quantifies the consensus as a fraction of all those who were asked the question, including those who didn’t provide such an estimate. We provided a detailed argument for our interpretation in both the ES&T paper and in a recent blogpost.

Tol’s line of reasoning here is similar to his misrepresentation of Oreskes’ results: taking the number of accepting papers not as a fraction of the papers that take a position, but as a fraction of all papers, including those that take no position on current anthropogenic climate change. Obviously, the latter should be excluded from the ratio, unless one is interested in producing an artificially low, but meaningless number.
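The effect of this denominator choice is easy to see with illustrative numbers. The counts below mirror the evolution analogy above, not the actual counts of any of the studies discussed:

```python
# Hypothetical abstract counts: papers that accept the consensus,
# papers that take no position, and papers that reject it.
accept, no_position, reject = 75, 25, 0

# Meaningful ratio: acceptance among papers that actually take a position.
take_position = accept + reject
consensus = accept / take_position  # 75 / 75 = 1.0, i.e. 100%

# Deflated ratio: acceptance among ALL papers, implicitly counting
# "no position" as dissent -- the denominator criticized in the text.
deflated = accept / (accept + no_position + reject)  # 75 / 100 = 0.75

print(consensus, deflated)  # 1.0 0.75
```

The underlying evidence is identical in both calculations; only the choice of denominator differs, which is why the second number says nothing about actual dissent.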

Some quotes from the other scientists:

Oreskes:

Obviously he is taking the 75% number below and misusing it. The point, which the original article made clear, is that we found no scientific dissent in the published literature.

Anderegg:

This is by no means a correct or valid interpretation of our results.

Neil Stenhouse:

Tol’s description omits information in a way that seems designed to suggest—inaccurately—that the consensus among relevant experts is low.

Doran:

To pull out a few of the less expert groups and give them the same weight as our most expert group is a completely irresponsible use of our data.

You can read their complete quotes at RealSkeptic.

See also this storify of my twitter discussion with Richard Tol.

To publish BS or not, that’s the question

November 11, 2011

Richard Tol leveled a strong accusation at Judith Curry for highlighting two seriously flawed papers (via Twitter):

Its wrong, but with @JudithCurry lending her authority it becomes disinformation

Judith defended herself in a post where she tries to shift the blame to the mainstream scientists:

 Here is a quiz for you.  How many of these disinformation tactics [a list containing a mix of logical fallacies and avoidance tactics] are used by:

  • JC (moi)
  • Public spokespersons for the IPCC
  • Joe Romm
  • Marc Morano

If that’s not a dog-whistle I don’t know what is. 

Keith has a nice rundown of the discussion, and the ensuing thread over there contains many good comments. He’s got a knack for hosting interesting discussions.

Richard has since laid out his argument as to what’s wrong with the papers in a guest post over at CE.  Basically they’re methodologically flawed:

Using “detrended” fluctuation analysis to study “trends” was a dead giveaway that something is not quite right with these papers.

Tol goes on to write: 

7. There is a substantial body of climate research that is credible — even if it reaches opposite conclusions — but there are also papers (left, right, and center) that are just flawed.
8. If flawed papers reach a certain prominence, they should be debunked. Prominent but flawed research does damage as it misinforms people about climate change. Publicly criticizing such research hardens the existing polarization.
9. If flawed papers linger in obscurity, they should be ignored. The papers are wrong but do no damage. Lifting a flawed paper out of obscurity only to debunk it, is no good to anybody.

Curry takes issue especially with the last statement:

Yours isn’t a statement about science, but about playing politics with science, and reinforces the gatekeeping mentality in climate science that was embarassingly revealed by the CRU emails. (…)

Of course scientists don’t want the public to be misinformed about the science! So if I’m concerned about public understanding of science, I’m automatically “playing politics with science”? Then I sure hope every scientist is.

Judith rightly says that “Of course there are flawed papers that get published.” But why shine the spotlight on them? What’s gained by doing so?

It’s true that such discussions don’t arise over science without policy relevance. Research on the mating behavior of fruit flies won’t result in arguments over whether a flawed paper should be promoted in the public sphere or not.

The differences are that 1) such research is not present in the public sphere, because the public isn’t interested, and 2) even though flawed papers exist in any field, the more a field’s conclusions clash with ideologies, the more attempts will be made to reach opposite conclusions, and thus the more deeply flawed/biased papers will be published. It’s not a coincidence that there’s no fruit-fly version of EIKE or Heartland.

Curry:

Most people don’t come to climate etc. to reinforce their prejudices (there are far too many echo chambers where this is much more satisfyingly accomplished). They come here to learn something by considering the various arguments.

The general tone of comments at CE makes me strongly doubt this last statement.

Tol:

@Anteros
I would agree with you [no harm done by highlighting flawed studies] if climate blogs were exclusively read by well-intentioned, well-informed, and intelligent people.

Richard further shows his mastery in the tweet-universe with one-liners such as

I argue for self-censorship. It is what separates adults from children.

Over at CaS, Roger Pielke Jr makes the point that wrong or bad articles can be a useful teaching tool. And indeed they can. But as Stoat rightly says,

within a managed class structure with someone guiding the discussion, it is fine to discuss flawed texts, for the reason given: it encourages critical thinking. That wasn’t what Curry was doing.

Tol:

Curry took two papers that almost nobody had read, and put them in the limelight.
The papers say 2+2=5.
There are a lot of people who would like to believe that. It is not true.
So now there is yet another dogfight about whether the answer is 3, 4, or 5. We can do without that.
There are plenty of real issues to argue over.

Jonathan Gilligan, consistently thoughtful, writes:

Pielke has said that he views blogs as more like the kind of discussions people conduct over beers at the neighborhood bar, and from that perspective Richard’s criticism makes no more sense than telling the crowd at the pub to leave sports commentary to the experts. 

Tol makes some valid points here, but Pielke is more persuasive. People will read these blogs or not as they choose, and when a blog repeatedly calls attention to crap, its credibility and its audience will adjust to reflect this. Climate Etc. is not The Wall Street Journal, so the greater danger in Curry’s gushing over crap is to Curry’s reputation, not to the public understanding of science.

I have also compared blogs to bar-discussions (quoting Bob Grumbine), but that comparison is about the presence (or lack) of quality control. As Tol rightly says, 

With academics blogging and tweeting, and journalists, and prime ministers, and institutes, departments, agencies and companies, I don’t think there is a one-rule-fits-all.

At CE, thousands of people are listening. Judith’s opinion and her writings make their way to the general public and politicians via mainstream media and Senate hearings as well. In terms of the number of people engaged in the conversation, that is orders of magnitude different from a discussion in a bar. That also means that the risk is twofold: both to Curry’s reputation (her problem) and to the public understanding of science (everyone’s problem, even though Curry tries to belittle that).

Whereas Tol argued based on methodological flaws, Fred Moolten explains why the papers’ conclusions are unsupportable on physical grounds and I made a similar argument:

Conservation of energy precludes the climate from wandering off too far in any direction without being “forced” to by changing boundary conditions. Unless of course the energy is merely being redistributed within the system. Which it isn’t, since all other compartments of the climate system are gaining energy.

The paper’s conclusion that the observed warming is “predominantly a natural 100-year fluctuation” is at odds with conservation of energy.

All very reminiscent of the random walk saga and the Harry Potter theory of climate.

