
Earth’s temperature over the past two million years

October 6, 2016

A new reconstruction of global average temperature over the past two million years has recently appeared in Nature (Snyder, 2016). That is quite a feat and a first for this duration. The figure below, made by Jos Hagelaars, shows Snyder’s temperature reconstruction, combined with the observed warming since 1880 and projected warming until the year 3000 for two IPCC scenarios, RCP6.0 and RCP8.5.

[Figure: Snyder (2016) temperature reconstruction combined with observations and RCP6.0/RCP8.5 projections]

RCP8.5 can be viewed as a “no mitigation” scenario, whereas RCP6.0 would be a “limited mitigation” scenario. It is clear that in both scenarios global warming over the next centuries will take us out of the temperature realm of the past two million years. A similar figure (which I tweeted yesterday), but with temperature projections stopping in the year 2100, can be found here.

Though lauded as a very valuable and novel contribution to the field, Snyder’s reconstruction has also been criticized because the temperature amplitude between glacial and interglacial states appears relatively large (~6 degrees) compared to other recent reconstructions, e.g. by Shakun et al (2012) (~4 degrees). Somewhat relatedly, Snyder estimates the global average temperature during the previous interglacial (the Eemian) to have been warmer than now, whereas e.g. Hansen et al (2016, under review) argue that the two periods were similarly warm. Sea levels, by the way, were 6 to 9 metres higher in the Eemian than they are now. Sea level responds very slowly to a change in temperature, yet another sign of the vast inertia in the climate system.

[Figure: Shakun and Marcott temperature reconstructions combined with HadCRUT4 observations and an A1B projection]

Somewhat overshadowing the actual temperature reconstruction that Snyder presented was her calculation of an earth system sensitivity (ESS) based on a correlation between temperature and CO2 over the past few glacial cycles. The earth system sensitivity denotes the long-term temperature response to a doubling in CO2 concentrations, including e.g. the response of ice sheets (which is typically excluded from the more often used equilibrium climate sensitivity, ECS). She then applied the ESS value of a whopping 9 degrees, obtained from this simple correlation, to the current warming, stating in the abstract:

This result suggests that stabilization at today’s greenhouse gas levels may already commit Earth to an eventual total warming of 5 degrees Celsius (range 3 to 7 degrees Celsius, 95 per cent credible interval) over the next few millennia as ice sheets, vegetation and atmospheric dust continue to respond to global warming.

Here “commit” means that this level of warming would eventually be expected based on current CO2 concentrations.
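Her headline number can be reproduced with back-of-the-envelope arithmetic. A minimal sketch (purely illustrative: the only inputs are Snyder’s 9 °C ESS, round-number CO2 concentrations, and the standard logarithmic forcing relation):

```python
import math

# Assumed round-number inputs (illustrative only):
ESS = 9.0           # degrees C per CO2 doubling (Snyder's earth system sensitivity)
co2_now = 400.0     # ppm, approximate present-day concentration
co2_preind = 280.0  # ppm, pre-industrial concentration

# Radiative forcing scales with the logarithm of CO2, so the number of
# "effective doublings" since pre-industrial times is:
doublings = math.log(co2_now / co2_preind) / math.log(2)

committed_warming = ESS * doublings
print(f"{doublings:.2f} doublings -> ~{committed_warming:.1f} C committed warming")
```

That this simple calculation lands near the quoted 5 °C shows how directly the committed-warming claim follows from the ESS value – which is exactly why the ESS value itself is the crux of the criticism.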

As Gavin Schmidt wrote, this is simply wrong.

The reason I think it’s wrong is that, in her calculation of ESS, she takes the radiative forcing caused by albedo changes (resulting from the massive change in ice coverage between glacial and interglacial states) and treats it as a feedback on the CO2-induced temperature change.

There are two issues with this:

1) In reality, both the changes in albedo (reflectivity) and the changes in CO2 concentration are feedbacks on the orbital forcing. The relation in one direction (a change in earth’s orbit causing a temperature change, which in turn causes albedo and CO2 levels to change) is not necessarily the same as the relation in the reverse direction, as with the current human-induced increase in CO2. Gavin Schmidt makes this point in two consecutive posts at RealClimate (here and here), though you might also want to read Hansen’s take; he has used a similar approach to Snyder’s.

2) The ESS value obtained would (ignoring the more complex first point) perhaps be applicable to a glacial-interglacial transition, but decidedly not to an interglacial-to-‘hyperinterglacial’ transition, in which the ice-albedo feedback would of course be much weaker because far less of the surface is ice-covered.

This second point was also made by James Annan in response to Hansen’s 2008 Target CO2 paper, in which Hansen essentially used the same method as Snyder (but arrived at a smaller ESS value of 6 degrees, because Snyder uses a greater temperature amplitude between glacial and interglacial states). Hansen did note in his paper that “The 6°C sensitivity reduces to 3°C when the planet has become warm enough to lose its ice sheets.”
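Hansen’s remark can be made concrete with rough numbers. The sketch below is illustrative only: the forcing values are round figures of the kind used in this literature, not Snyder’s or Hansen’s exact inputs.

```python
import math

# Illustrative glacial-interglacial numbers (all approximate):
dT = 6.0                  # K, Snyder-like glacial-interglacial temperature amplitude
co2_glacial = 190.0       # ppm
co2_interglacial = 280.0  # ppm
F_2x = 3.7                # W/m2, forcing per CO2 doubling
dF_co2 = F_2x * math.log(co2_interglacial / co2_glacial) / math.log(2)  # ~2.1 W/m2
dF_albedo = 3.5           # W/m2, rough ice-sheet/albedo forcing

# Bookkeeping 1: albedo counted as a forcing alongside CO2 -> Charney-like sensitivity
ecs_like = dT / (dF_co2 + dF_albedo) * F_2x
# Bookkeeping 2: albedo folded into the response to CO2 alone -> earth system sensitivity
ess_like = dT / dF_co2 * F_2x

print(f"with albedo as forcing:  ~{ecs_like:.1f} K per doubling")
print(f"with albedo as feedback: ~{ess_like:.1f} K per doubling")
```

The same 6 K swing yields roughly 4 K per doubling when the albedo change is booked as a forcing, and roughly 10 K per doubling when it is booked as a feedback on CO2 – which is the heart of point 2: applying the latter number to a warm world with little remaining ice overstates the response.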

In other words, using Snyder’s very (and probably too) high ESS value to project future warming is unwarranted and wrong.

Climate inertia

August 9, 2016

Imagine you’re on a supertanker that needs to change its direction in order to avoid a collision. What would you do? Would you continue going full steam ahead until you can see the collision object right in front of you? Or would you try to change course early, knowing that changing a supertanker’s course takes a considerable amount of time?

The supertanker’s inertia means that you have to act in time if you wish to avoid a collision.

The climate system also has a tremendous amount of inertia built in. And as with the supertanker, this means that early action is required if we want to change the climate’s course. This inertia is a crucial aspect of the climate system, both scientifically and societally – though in the latter realm it is very underappreciated. Just do a mental check: when did you last hear or read about the climate’s inertia in mainstream media or from politicians?

Inertia

The inertia of the climate system can be compared to that of a supertanker: if we want to change its course, it’s important to start turning the wheel in the desired direction in time.

Why is this so important? Because intuitively many people might think that as soon as we have substantially decreased our CO2 emissions (which we haven’t), the problem will be solved. It won’t, not by a long shot. Even if we reduce CO2 emissions to zero over a realistic timeframe, the CO2 concentration in the atmosphere – and thus also the global average temperature – will remain elevated for millennia, as can be seen in the figure below. The total amount of carbon we put into the atmosphere over the course of a few hundred years will affect life on this planet for hundreds of thousands of years. And if we want to reduce the amount of warming that we commit the future to, we need to reduce our carbon emissions sooner rather than later. The longer we postpone emission reductions, the stronger those reductions will need to be in order to have the same mitigating effect on long-term warming.

That’s why climate inertia is so important.


Modeled response of the atmospheric CO2 concentration (panel b) and surface air temperature compared to the year 2000 (panel c) to prescribed CO2 emissions (panel a). The CO2 concentration remains elevated long after CO2 emissions have been reduced, because the long-term sinks for CO2 operate very slowly (see e.g. IPCC FAQ 6.2 for an explanation of these carbon sinks). Since CO2 impedes infrared heat loss, for millennia the globe will remain warmer than it was before CO2 concentrations rose. The temperature lags behind the CO2 concentration because of the time it takes for the oceans to warm up. Figure from Zickfeld et al (2013).
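The long tail in panel (b) can be mimicked with a simple impulse-response model for atmospheric CO2: a constant term plus a few decaying exponentials. The coefficients below are illustrative fit values in the style of Joos et al (2013), not the model used by Zickfeld et al:

```python
import math

# Illustrative impulse-response fit: fraction of an emitted CO2 pulse still
# airborne after t years, as a constant term plus three decaying exponentials.
A = [0.217, 0.224, 0.282, 0.276]           # weights (sum to ~1)
TAU = [float("inf"), 394.4, 36.54, 4.304]  # e-folding timescales in years

def airborne_fraction(t_years):
    """Fraction of an emitted CO2 pulse remaining in the atmosphere after t years."""
    return sum(a * math.exp(-t_years / tau) for a, tau in zip(A, TAU))

for t in (10, 100, 1000):
    print(f"after {t:>4} years: {airborne_fraction(t):.0%} of the pulse remains")
```

Roughly a fifth to a quarter of an emitted pulse is still airborne after a thousand years, which is why the concentration curve stays elevated long after emissions stop.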

As I wrote before: Postponing meaningful mitigation action until the shit hits the fan comes with considerable risk, because many changes in climate are not reversible on human timescales. Once you notice the trouble, it’s only the beginning, because of the inertia in the various systems (energy system, carbon cycle and climate system). The conundrum is thus that those who caused the problem are in the best position to solve it, but since the full consequences will not materialize until much later, they have the least incentive to do so.

Over at Bits of Science two Dutch science journalists, Rolf Schuttenhelm and Stephan Okhuijsen, published an interesting piece that focuses on the same issue: we only see a portion of the warming that we have committed ourselves to, due to the thermal inertia provided by the oceans. Just as a pot of water doesn’t immediately boil when we turn on the stove, the oceans take time to warm up as well. And since there’s a lot of water in the oceans, it takes a lot of time.

They included the following nifty graph, with the observed surface temperature but also the eventually expected temperature at the corresponding CO2 concentration (which they dub the ’real global temperature’), based on different approaches to account for warming in the pipeline:


Observed and eventually expected (“real”) temperature at concurrent CO2 concentration, via Bits of Science

This is a nice way to visualize the warming that’s still in the pipeline due to ocean thermal inertia. From a scientific point of view the exact execution and framing can be criticized on certain points (e.g. ECS is extrapolated linearly instead of logarithmically; the interpretation that the recent record warmth is not a peak but rather a ‘correction to the trend line’ depends strongly on exactly how the endpoints of the observed temperatures are smoothed; and the effect of non-CO2 greenhouse gases is excluded from the analysis and discussion), but the underlying point – that more warming is in store than we are currently seeing – is both valid and very important.
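For what it’s worth, the logarithmic version of such a ‘warming in the pipeline’ estimate is a one-liner. All inputs below are round-number assumptions (an ECS of 3 °C, 400 vs 280 ppm, about 1 °C of observed warming), and like the Bits of Science graph this ignores non-CO2 greenhouse gases and aerosols:

```python
import math

ECS = 3.0           # degrees C equilibrium warming per CO2 doubling (assumed)
co2_now = 400.0     # ppm, approximate current concentration
co2_preind = 280.0  # ppm, pre-industrial concentration
observed = 1.0      # degrees C of warming observed so far (approximate)

# Equilibrium warming at today's CO2, using the logarithmic forcing relation:
t_eq = ECS * math.log(co2_now / co2_preind) / math.log(2)
pipeline = t_eq - observed

print(f"equilibrium warming at current CO2: ~{t_eq:.1f} C")
print(f"still in the pipeline: ~{pipeline:.1f} C")
```

With these assumptions about half a degree of further warming is already committed at today’s concentration, before any future emissions are counted.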

Timescales, timescales, timescales. Why art thou missing from the public discussion about global warming?

Update: ClimateInteractive has a good simulation of how this inertia plays out in practice. By moving the slider at the bottom of the figure you can choose between different emission scenarios. In the graphs above it you then see the effect this has on the CO2 concentration, the global average temperature and the sea level, and how this response is damped. The further down the cause-effect chain, the more damped – or better, the more slowed down – the response is. The sea level will continue to rise the longest (even long after the temperature has stabilized or even started decreasing), but will take a while to get going. This simulation only runs to the year 2100 though.

A Dutch version of this post can be found on my sister blog KlimaatVerandering.

New survey of climate scientists by Bray and von Storch confirms broad consensus on human causation

June 22, 2016

Bray and von Storch just published the results of their latest survey of climate scientists. It contains lots of interesting and very detailed information, though some questions are a little biased in my opinion. Still, they find a strong consensus on human causation of climate change: 87.4% of respondents are to some extent convinced that most of recent or near future climate change is, or will be, the result of anthropogenic causes (question v007). Responses were given on a scale from 1 (not at all) to 7 (very much). In line with Bray (2010) a response between 5 and 7 is considered agreement with anthropogenic causation. In their 2008 survey the level of agreement based on the same question was 83.5% and in 2013 it was 80.9%.

How convinced are you that most of recent or near future climate change is, or will be, the result of anthropogenic causes? (v007)
not at all   1     2     3     4     5     6     7   very much

[Figure: distribution of responses to question v007 in the Bray and von Storch 2015 survey]

Question v013 asked a somewhat similar question as we did in our 2012 climate survey, namely the percentage of global warming that is attributable to human activities:

Since 1850, it is estimated that the world has warmed by 0.5 – 0.7 degrees C. Approximately what percent would you attribute to human causes? (v013)

1=0%   2=1-25%   3=25-50%   4=51-75%   5=76-100%

[Figure: distribution of responses to question v013 in the Bray and von Storch 2015 survey]

84.2% of respondents picked one of the two answer options that correspond to the canonical “more than half” or “most” of global warming being human-caused, as per the IPCC. However, the corresponding IPCC statement concerns warming since the 1950s, about which there is much more confidence, whereas this question specifies warming since 1850.

But wait a moment: hasn’t the earth warmed quite a bit more than 0.5-0.7 degrees C since 1850? Yes, it definitely has; we recently breached the 1 degree mark relative to the 1850-1880 average, so the range given in their question is quite outdated. A defensible choice at the time of drafting the survey would have been to quote the latest IPCC number of 0.85 (0.65 to 1.06) degrees of warming over the period 1880-2012, even if temperatures have gone up sharply since then.


Global average surface temperature relative to the 1850-1880 mean. Last annual average shown is 2015; if the first few months of 2016 are a guide, the vertical scale might have to be adapted for 2016. Figure by Jos Hagelaars.

Moreover, the answer options for v013 do not cover the full range of possibilities: natural factors could have caused either warming or cooling. Imagine that natural factors had caused a cooling of 0.1 degrees C since pre-industrial times (which is not at all implausible); then, to achieve closure with the observed warming of 1.0 degrees, anthropogenic factors must have contributed 1.1 degrees, or 110% of the observed warming. We discussed this argument in detail in our ES&T paper emanating from the climate science survey we conducted in 2012.
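The closure argument is simple arithmetic; a sketch with the hypothetical numbers from the paragraph above:

```python
observed_warming = 1.0  # degrees C since pre-industrial times (approximate)
natural = -0.1          # degrees C: hypothetical natural *cooling*

# Closure: observed = natural + anthropogenic
anthropogenic = observed_warming - natural
share = anthropogenic / observed_warming * 100

print(f"anthropogenic contribution: {anthropogenic:.1f} C = {share:.0f}% of observed")
```

Any natural cooling pushes the anthropogenic share above 100%, which is why answer options capped at “76-100%” cannot represent this perfectly plausible case.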

I emailed Dennis Bray about these and other issues after responding to their survey back in 2015. He defended their choice of lowballing the observed warming as being consistent with their previous surveys and not much different from more recent (and also contested) estimates. Strangely, he disagreed with the possibility of a single factor being responsible for more than 100% of the observed warming, even in the hypothetical example above.

Cook et al (myself included) recently wrote an article in which we reviewed the existing ‘consensus’ estimates. This latest Bray and von Storch survey finds a level of consensus on attribution that is consistent with other studies, though towards the lower end of the range. From their description I don’t think there is a bias in their sample of scientists, though there is always the possibility of self-selection, where people might be more likely to respond to a survey if it originates from a source they perceive to be credible. Repeatedly, surveys have found that the level of consensus goes up as you zoom in on a sample of scientists with more relevant expertise. The Bray and von Storch results, like ours, are mostly representative of a broad group of climate-related scientists.

A detail of particular interest to me is that the survey questions included the response option “no answer”. That explains the different sample sizes for different questions (“Number of obs”). It’s probably no coincidence that question v013 (asking for a specific range of percentage contribution) has a smaller sample size (n=587), and by inference more “no answer” responses, than the simpler attribution question v007 (n=640). This is consistent with what we found in our 2012 climate science survey: fewer respondents picked a specific percentage range of attribution than provided a qualitative judgment of it. Admittedly, though, “no answer” (in the Bray and von Storch survey) is less ambiguous in this context than “I don’t know”, “unknown” or “other” (in our survey).

Amidst the questions on science and society, I perceived some questions to have an “anti-consensus” (v069) or “anti-alarmist” (v067) tone, but there were no questions asking about the mirror-image perceptions. Should doomsday stories be investigated before they get out of hand (v067)? Of course. But no question asked whether stories downplaying a scientifically established risk should be investigated. I would likewise have responded: of course. To his credit, Dennis Bray acknowledged in his email that this was an oversight on their part.

Consensus on consensus: a synthesis of consensus estimates on human-caused global warming

April 13, 2016

Most scientists agree that current climate change is mainly caused by human activity. That has been repeatedly demonstrated on the basis of surveys of the scientific opinion as well as surveys of the scientific literature. In an article published today in the journal Environmental Research Letters (ERL) we provide a review of these different studies, which all arrive at a very similar conclusion using different methods. This shows the robustness of the scientific consensus on climate change.

This meta-study also shows that the level of agreement that the current warming is caused by human activity is greatest among researchers with the most expertise and/or the most publications in climate science. That explains why literature surveys generally find higher levels of consensus than opinion surveys. After all, experienced scientists who have published a lot about climate change have, generally speaking, a good understanding of the anthropogenic causes of global warming, and they often have more peer-reviewed publications than their contrarian colleagues.

Figure: Level of consensus on human-induced climate change versus expertise in climate science. Black circles are data based on studies of the past 10 years. Green line is a fit through the data.

The initial impetus for this review article was a specific comment by Richard Tol on John Cook’s literature survey, published in ERL in 2013. Cook found a 97% consensus on anthropogenic global warming in the scientific literature on climate change. This article has been both vilified and praised. Tol argued that Cook’s study is an outlier, but he did so by misrepresenting most other consensus studies, including the survey I undertook while at the Netherlands Environmental Assessment Agency (PBL). To get the gist of the discussion with Tol, see e.g. this storify I made based on my twitter exchanges with him (warning: for climate nerds only). Suffice it to say that the authors of these other consensus studies were likewise not impressed by Tol’s caricature of their work. This is how the broad author team for the current meta-analysis arose, which shows that Cook’s literature survey fits well within the spectrum of other studies.

The video below provides a great overview of the context and conclusions of this study:

Surveys show that among the broad group of scientists who work on the topic of climate change the level of consensus is roughly between 83 and 97% (e.g. Doran, Anderegg, Verheggen, Rosenberg, Carlton, Bray, Stenhouse, Pew, Lichter, Vision Prize). If you zoom in on the subset of most actively publishing climate scientists, you find a consensus of 97% (Doran, Anderegg). Analyses of the literature also indicate a level of consensus of 97% (Cook) or even 100% (Oreskes). The strength of literature surveys lies in the fact that they sample the prime locus of scientific evidence, and thus they provide the most direct measure of the consilience of evidence. Opinion surveys, on the other hand, can achieve much more specificity about what exactly is agreed upon. The latter aspect – what exactly is agreed upon, and how does that compare to the IPCC report – is something we investigated in detail in our ES&T article based on the PBL survey.

As evidenced by the many – unfounded – criticisms of consensus studies, this is still a hot topic in the public debate, despite the fact that study after study has confirmed that there is broad agreement among scientists about the big picture: our planet is getting warmer and that is (largely) due to human activity, primarily the burning of fossil fuels. A substantial fraction of the general public, however, is still confused even about this big picture. In politics, schools and the media, climate change is often not communicated in accordance with the current scientific understanding, even though the situation here in the Netherlands is not as extreme as it is in e.g. the US.

Whereas the presence of widespread agreement is obviously not proof of a theory being correct, it can’t be dismissed as irrelevant either: As the evidence accumulates and keeps pointing in the same general direction, the experts’ opinion will logically converge to reflect that, i.e. a consensus emerges. Typically, a theory either rises to the level of consensus or it is abandoned, though it may take considerable time for the scientific community to accept a theory, and even longer for the public at large.

Although science can never provide absolute certainty, it is the best method we have to understand complex systems and risks, such as climate change. If you value science, it is wise not to brush aside broadly accepted scientific insights too easily, unless you have very good arguments for doing so (“extraordinary claims require extraordinary evidence”). I think it is important for proper democratic decision-making that the public is well informed about what is scientifically known about important issues such as climate change.

More info/context/reflections:

Dutch version at sister-blog “klimaatverandering”

Column by first author John Cook in Bulletin of the Atomic Scientists

Stephan Lewandowsky on the psychology of consensus

Collin Maessen tells the backstory starting with Richard Tol’s nonsensus

Ken Rice at …And Then There’s Physics

Dana Nuccitelli in the Guardian

Sou at HotWhopper

Amsterdam University College (AUC) news item

 

Antarctica: ice gain or loss?

November 25, 2015

Guest post by Dr. Jan Wuite, Enveo, Innsbruck

A new study by NASA scientist Jay Zwally and colleagues in the Journal of Glaciology, which received wide media coverage last month, reports an increase of land ice in Antarctica of 82 gigatons per year during the period 2003-2008. The study was met with much skepticism from other leading scientists in the field, as there are many indications that point to the contrary: ice loss, possibly irreversible. How does this new study fit into that picture, what are the consequences for expected sea level rise, and are these numbers correct? Glaciologist and polar scientist Jan Wuite, working at Enveo in Innsbruck and involved in various international studies related to Antarctica, explains.

One of the adverse consequences of climate change is global sea level rise. At more than 3 mm per year, sea level is currently rising twice as fast as it did during the 20th century. The expectation is a rise of at least 70 cm by the end of this century. The principal causes are clear: the global decline of land ice (mountain glaciers and ice sheets) and the thermal expansion of ocean water (water expands as it warms). To clarify: land ice rests on land and can reach thicknesses of up to several kilometers, in contrast to seasonally restricted sea ice (mainly just frozen ocean water), which is typically only a few meters thick and has no direct influence on sea level. Studies have indicated an increasing contribution to sea level rise from the two largest ice sheets, the Greenland and Antarctic ice sheets.

The largest unknown in future sea level rise is the uncertainty in the predicted response of the Antarctic ice sheet to global warming. As warmer air can hold more moisture, it is possible that increased snow accumulation will compensate for part of the sea level rise. On the other hand, it is also possible that ice will drain faster into the oceans, accelerating it.

There is a lot of ice in Antarctica; in some places the ice thickness reaches well over 4 km. There is enough ice to raise global sea levels by roughly 58 m if it melted completely. But even if only a small part of that melts, it could have a significant impact on coastal communities and ocean circulation. For this reason scientists are very interested in the mass changes of the ice sheets: the mass balance.
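These headline numbers are easy to sanity-check with round figures (all inputs approximate; the naive volume-over-area calculation deliberately ignores the ice that is grounded below sea level, which is why it overshoots the published ~58 m):

```python
# Round-number inputs (illustrative):
ice_volume_km3 = 26.5e6     # km3, approximate Antarctic ice sheet volume
rho_ice_over_water = 0.917  # density ratio: converts km3 of ice to km3 of water
ocean_area_km2 = 3.62e8     # km2, global ocean surface area

# Naive sea-level equivalent (overestimates, since part of the ice already
# sits below sea level and displaces ocean water)
sle_m = ice_volume_km3 * rho_ice_over_water / ocean_area_km2 * 1000  # km -> m
print(f"naive sea-level equivalent: ~{sle_m:.0f} m (published value ~58 m)")

# Zwally's reported mass gain, expressed as a sea-level change
gain_gt_per_yr = 82.0  # Gt/yr; 1 Gt of water occupies 1 km3
slr_mm_per_yr = gain_gt_per_yr / ocean_area_km2 * 1e6  # km -> mm
print(f"82 Gt/yr corresponds to ~{slr_mm_per_yr:.2f} mm/yr of sea level")
```

The second conversion puts Zwally’s disputed 82 Gt/yr in perspective: it amounts to only about 0.2 mm/yr against a total observed rise of more than 3 mm/yr.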

Figure 1. An illustration of key processes determining ice sheet mass balance. Source: Zwally et al figure 1.


From complex clarity to nuanced misunderstanding: Response to Hollin and Pearce

October 24, 2015

Earlier this year, Greg Hollin and Warren Pearce published a short letter in Nature Climate Change entitled Tension between scientific certainty and meaning complicates communication of IPCC reports.

In contrast to their claims, we demonstrate in our comment on this article that the IPCC correctly placed the hottest decade in the context of long-term trends. The IPCC did not dismiss the recent slowdown in surface warming, the so-called “hiatus” or “pause”, as scientifically irrelevant.

Hollin and Pearce’s central premise is nicely encapsulated in the abstract of their paper:

Here we demonstrate that speakers at the press conference for the publication of the IPCC’s Fifth Assessment Report (Working Group 1) attempted to make the documented level of certainty of anthropogenic global warming (AGW) more meaningful to the public. Speakers attempted to communicate this through reference to short-term temperature increases. However, when journalists enquired about the similarly short ‘pause’ in global temperature increase, the speakers dismissed the relevance of such timescales, thus becoming incoherent as to ‘what counts’ as scientific evidence for AGW.

This observation leads them to theorize about the tension between scientific certainty and meaning. But did they actually demonstrate what they claimed they did? We argue in a comment to this article that they did not.

They base their claim that IPCC speakers attempted to communicate ‘meaning’ through reference to short-term temperature increases on this statement by Pachauri:

the decade 2001 onwards having been the hottest, the warmest that we have seen

This statement was explicitly placed in the context of long-term, climatically relevant trends, as indicated by Pachauri’s preceding words and the graph below (IPCC AR5 SPM fig 1) that was prominently shown at the press conference:

each of the last three decades has been successively warmer at the Earth’s surface than any preceding decade since 1850


That is an entirely different ballgame than the short-term variability that underlies the slowdown of the surface warming trend (often referred to as “the pause”). So no, they did not demonstrate in the least that IPCC speakers “relied on temporally local events to increase public meaning”. Hollin and Pearce’s premise is based on misunderstanding the timescales that were discussed.

What about the claim that IPCC speakers dismissed the relevance of the so-called “pause”?

As we write in our comment, five of the 18 journalists asked a question about recent temperature trends; none were ignored. David Rose’s question, which features prominently in Hollin and Pearce’s argument, was not dismissed either. Stocker responded to Rose’s question, followed by Jarraud, who explained why he regarded it as “ill-posed”, reframed it as a well-posed question, and responded to that. See the (freely available) Supplemental Information for more details.

So no, Hollin and Pearce did not demonstrate that the relevance of the slowdown was dismissed. Hotwhopper hammers home this and related points with lots of extra detail, showing that the IPCC message was clearly received by most journalists and that only one journalist who asked a question at the press conference “condemned” the IPCC for supposed dismissal. This journalist, as you might suspect, was David Rose, for whom IPCC bashing is a modus operandi.

The entire premise of their argument thus seems to rest on shaky ground. Their conclusions – e.g. that the IPCC’s credibility is somehow eroded by what Hollin and Pearce see as mixed messages – are therefore not supported.

I’m actually surprised that in their reply to our comment Hollin and Pearce don’t acknowledge their mistaken interpretation regarding timescales, but rather keep digging in their heels on that point. For example, they reiterate that Pachauri’s quote above was “illustrative of references to the warmest decade made by all three speakers”, apparently without realizing, or not considering it relevant, that this and other such references were made in the context of the long-term trend. Very peculiar.

They wrote a blogpost in which they reflect on the meaning of this back and forth in the scientific literature. It’s an interesting read, and they basically argue that their initial letter was based on inductive research: starting with the data, seeing patterns or interesting things, with theories and broader claims integrated later. They claim that this is more common in qualitative social science than in natural science, and that this difference may be at the root of our disagreement. I’m actually quite comfortable with such inductive-style research. I have often started my research by looking at “what the data told me”, as my PhD advisor used to say, especially for a large body of field observations. That was also the approach we took in analyzing the data from our climate science survey.

And Then There’s Physics, also a co-author of the response to Hollin and Pearce, goes into more detail on this point, and rightly wonders

why it makes any difference whether one’s approach is inductive or deductive. What you present should be a reasonable representation of reality, whether you approached it inductively (“the data looks interesting, why is that?”) or deductively (“I have a theory/hypothesis, let me collect some data to test it”). For example, either the IPCC fell into a trap by using one indicator to stress the certainty of AGW while dismissing another essentially equivalent indicator, or they didn’t; either the IPCC dismissed the so-called “pause”, or they didn’t. It can’t really be both.

I hardly think that the difference in interpretation between Hollin and Pearce on the one side and Jacobs et al (myself included, but also e.g. cognitive psychologist Stephan Lewandowsky) on the other boils down to a typical difference in approach between social science and natural science. Rather, it boils down to the former misinterpreting statements relating to timescales and basing the whole remainder of their argument on that false premise.

Update (27 Oct 2015)

Over at Pearce’s blog I replied to Greg Hollin:

Greg,

I re-read your reply, including the SI, and I’m still struggling to understand your point of view regarding timescales.

In the SI, for example, you quote Jarraud as saying
“more temperature records were broken than in any other decade”

with the emphasis (italics) on the word “any”. Doesn’t that point to these temperature records being presented in the context of the longer-term trend (“than in *any* other decade”)? To me the answer would be a clear yes, but apparently it isn’t to you. Could you clarify your position in that respect?

It may very well be that the speakers mentioned (spatially and temporally) local events as *examples* of what climate change might mean for a person’s life – so yes, to make it more societally meaningful. I’m not challenging that. What I’m challenging is your premise that in doing so the IPCC speakers provided an incoherent picture of timescales, presenting a decade’s worth of data as scientifically meaningful on the one hand and as not meaningful on the other. The former, the warmest decade, was consistently put in the context of the long-term trend, even in the quote that you mentioned yourself. So there was no such incoherence.

 

Wanted: blog readers to be interviewed as part of PhD research into climate blogging

October 10, 2015

Guest post by Giorgos Zoukas. As part of his research on climate blogging he would like to interview blog readers. Please contact him if you’d like to participate. He has interviewed me as well as some other climate scientist bloggers. BV

 

Invitation to participate in a PhD research project on climate blogging

My name is Giorgos Zoukas and I am a second-year PhD student in Science, Technology and Innovation Studies (STIS, http://www.stis.ed.ac.uk/) at the University of Edinburgh (http://www.ed.ac.uk/home). This guest post is an invitation to the readers and commenters of this blog to participate in my project.

This is a self-funded PhD research project that focuses on a small selection of scientist-produced climate blogs, exploring the way these blogs connect into, and form part of, broader climate science communication. The research method involves analysis of the blogs’ content, as well as semi-structured in-depth interviewing of both bloggers and readers/commenters.

Anyone who comments on this blog, on a regular basis or occasionally, or anyone who just reads this blog without posting any comments, is invited to participate as an interviewee. The interview will focus on the person’s experience as a climate blog reader/commenter.*

The participation of readers/commenters is very important to this study, one of the main purposes of which is to increase our understanding of climate blogs as online spaces of climate science communication.

If you are interested in getting involved, or if you have any questions, please contact me at: G.Zoukas -at- sms.ed.ac.uk (Replace the -at- with the @ sign)

(Those who have already participated through my invitation on another climate blog do not need to contact me again.)

*The research complies with the University of Edinburgh’s School of Social and Political Sciences Ethics Policy and Procedures, and an informed consent form will have to be signed by both the potential participants (interviewees) and me.

Richard Tol misrepresents consensus studies in order to falsely paint John Cook’s 97% as an outlier

September 24, 2015

John Cook warned me: if you attempt to quantify the level of scientific consensus on climate change, you will be fiercely criticized. Most of the counterarguments, however, don’t stand up to scrutiny. And so it happened.

The latest in this saga is a comment that Richard Tol submitted to ERL, as a response to John Cook’s study in which they found 97% agreement in the scientific literature that global warming is human caused. Tol tries to paint Cook’s 97% as an outlier, but in doing so misrepresents many other studies, including the survey that I undertook with colleagues in 2012. In his comment and his blogpost he shows the following graph:

Richard Tol misrepresenting existing consensus estimates

Richard Tol comes to very different conclusions regarding the level of scientific consensus than the authors of the respective articles themselves (Oreskes, 2004; Anderegg et al., 2010; Doran and Kendall Zimmerman, 2009; Stenhouse et al., 2013; Verheggen et al., 2014). On the one hand, he is using what he calls “complete sample” results, which in many cases are close to meaningless as an estimate of the actual level of agreement in the relevant scientific community (this applies most strongly to Oreskes and Anderegg et al). On the other hand, he is using “subsample” results, which in some cases are even more meaningless (the most egregious example of which is the subsample of outspoken contrarians in Verheggen et al).

The type of reanalysis Tol has done, if applied to e.g. evolution, would look somewhat like this:

  • Of all evolutionary biology papers in the sample, 75% explicitly or implicitly accept the consensus view on evolution. 25% did not take a position on whether evolution is accepted or not. None rejected evolution. Tol would conclude from this that the consensus on evolution is 75%. This number could easily be brought down to 0.5% if you sample all biology papers and count those that take an affirmative position on evolution as a fraction of the whole. This is analogous to how Tol misrepresented Oreskes (2004).
  • Let’s ask biologists what they think of evolution, but to get an idea of dissenting views let’s also ask some prominent creationists, e.g. from the Discovery Institute. Never mind that half of them aren’t actually biologists. Surprise, surprise, the level of agreement with evolution in this latter group is very low (the real surprise is that it’s not zero). Now let’s pretend that this is somehow representative of the scientific consensus on evolution, alongside subsamples of actual evolutionary biologists. That would be analogous to how Tol misrepresented the “unconvinced” subsample of Verheggen et al (2014).

Collin Maessen provides a detailed take-down of Richard Tol on his blog, quoting extensively from the scientists whose work was misrepresented by Tol (myself included). The only surveys which are not misrepresented are those by Bray and von Storch (2007; 2010). This is how I am quoted at Collin’s blog RealSkeptic:

Tol selectively quotes results from our survey. We provided results for different subsamples, based on different questions, and based on different ways of calculating the level of agreement, in the Supporting Information with our article in ES&T. Because we cast a very wide net with our survey, we argued in our paper that subgroups based on a proxy for expertise (the number of climate related peer reviewed publications) provide the best estimate of the level of scientific consensus. Tol on the other hand presents all subsamples as representative of the scientific consensus, including those respondents who were tagged as “unconvinced”. This group consists to a large extent of signatories of public statements disapproving of mainstream climate science, many of whom are not publishing scientists. For example, some Heartland Institute staffers were also included. It is actually surprising that the level of consensus in this group is larger than 0%. To claim, as Richard Tol does, that the outcome for this subsample is somehow representative of the scientific consensus is entirely nonsensical.

Another issue is that Richard Tol bases the numbers he uses on just one of the two survey questions about the causes of recent climate change, i.e. a form of cherry picking. Moreover, we quantified the consensus as a fraction of those who actually answered the question by providing an estimate of the human greenhouse gas contribution. Tol on the other hand quantifies the consensus as a fraction of all those who were asked the question, including those who didn’t provide such an estimate. We provided a detailed argument for our interpretation in both the ES&T paper and in a recent blogpost.

Tol’s line of reasoning here is similar to his misrepresentation of Oreskes’ results: he takes the number of papers accepting the consensus as a fraction not just of the papers that take a position, but of all papers, including those that take no position on current anthropogenic climate change. Obviously, the latter should be excluded from the ratio, unless one is interested in producing an artificially low, but meaningless, number.
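The effect of the choice of denominator is easy to make concrete. Here is a minimal sketch with made-up paper counts (mirroring the evolution analogy above, not the actual counts from any of the surveys):

```python
# Hypothetical paper counts, for illustration only.
accept = 75       # papers explicitly or implicitly endorsing the consensus
reject = 0        # papers rejecting it
no_position = 25  # papers that take no position at all

# Meaningful ratio: endorsements among papers that take a position.
of_position_takers = accept / (accept + reject)

# Deflated ratio: endorsements among ALL papers, which silently
# treats every no-position paper as if it were a dissenting one.
of_all_papers = accept / (accept + reject + no_position)

print(f"{of_position_takers:.0%}")  # 100%
print(f"{of_all_papers:.0%}")       # 75%
```

The same endorsement counts yield 100% or 75% depending purely on whether the no-position papers are kept in the denominator, which is why that choice has to be argued rather than assumed.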

Some quotes from the other scientists:

Oreskes:

Obviously he is taking the 75% number below and misusing it. The point, which the original article made clear, is that we found no scientific dissent in the published literature.

Anderegg:

This is by no means a correct or valid interpretation of our results.

Neil Stenhouse:

Tol’s description omits information in a way that seems designed to suggest—inaccurately—that the consensus among relevant experts is low.

Doran:

To pull out a few of the less expert groups and give them the same weight as our most expert group is a completely irresponsible use of our data.

You can read their complete quotes at RealSkeptic.

See also this storify of my twitter discussion with Richard Tol.

Rick Santorum misrepresents our climate survey results on Bill Maher show

September 2, 2015

In our survey of more than 1800 scientists we found that the large majority agree that recent climate change is predominantly human induced. The article where we discuss our results is publicly available and a brief rundown of the main conclusions is provided in this blogpost.

We were quite surprised to hear a US Presidential candidate, Rick Santorum, make the opposite claim based on our survey. On the Bill Maher show he said:

The most recent survey of climate scientists said about 57 percent don’t agree with the idea that 95 percent of the change in the climate is caused by CO2. (…) There was a survey done of 1,800 scientists, and 57 percent said they don’t buy off on the idea that CO2 is the knob that’s turning the climate. There’s hundreds of reasons the climate’s changed.

What did we actually find in our survey?

In our survey of 1868 scientists studying various aspects of climate change, we asked two questions about the causes of recent global warming. Of all scientists who provided an estimate ~85% think that the influence of human greenhouse gases is dominant, i.e. responsible for more than half of the observed warming. ~15% think greenhouse gases are responsible for less than half of the observed warming. If you zoom in to those respondents with arguably more expertise, the percentage agreeing with human dominated warming becomes 90% or larger.

The existence of a strong scientific consensus about climate change is also clear from previous surveys of scientists and of the scientific literature and from statements of scientific societies. A scientific consensus is a logical consequence of the evidence for a certain position becoming stronger over time.

Rick Santorum’s claim is misleading and wrong because:

1) it is based on a wrong interpretation of just one of the two survey questions about the causes of recent climate change.

2) it is based on the argument that respondents who didn’t provide a specific estimate for the contribution of greenhouse gases (22% of the total number) think that this contribution is small. That is a wrong inference.

3) it is based on the argument that respondents who think it is “very likely” or “likely” or “more likely than not” that greenhouse gases are the dominant cause of recent warming disagree with this dominant influence. That is a wrong inference.
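The arithmetic behind point 2 can be sketched with the rounded figures from this post. Note this is an illustration of the type of inference error only; the 43% figure Santorum relied on came from a further, narrower slicing of the data, so these numbers are not a reconstruction of his exact calculation:

```python
# Rounded figures from the survey discussion above.
gave_estimate = 0.78        # 22% of respondents gave no specific estimate
agree_of_estimators = 0.85  # ~85% of those who did: GHGs caused >50% of warming

# Correct reading: consensus among those who actually answered.
consensus = agree_of_estimators

# Flawed reading: silently count every non-response as disagreement,
# shrinking the apparent agreement without any respondent dissenting.
agree_of_everyone = agree_of_estimators * gave_estimate

print(f"{consensus:.0%}")          # 85%
print(f"{agree_of_everyone:.0%}")  # 66%
```

Treating a non-answer as a "no" manufactures dissent out of silence, which is the same denominator trick discussed for the literature surveys above.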

Politifact also did a fact-check of Santorum’s claim and found it to be false. It gives a very good overview of what’s wrong with Santorum’s claim. Several people are quoted in their article, myself included as well as the blogger Fabius Maximus who came up with the 43% consensus figure used by Santorum to claim a 57% dissensus:

(more…)

A warm 2015 and model–data comparisons

August 7, 2015

Guest post by Jos Hagelaars. Dutch version is here.

Discussions on the Internet regarding climate change are sometimes about scientific details, sometimes about the climate sensitivity of the equilibrium situation hundreds of years from now, but the most prevalent discussion topic is probably the global average temperature. Will it get warmer or colder, is there a temporary slowdown or acceleration in the rise in temperature, are the models correct or not, will the eventual warming of our earth be large or small? New numbers are released on a monthly basis and every month megabytes of text are generated about them. My forecast is that 2015 will again lead to a discussion spike.

The graph above shows the evolution of the global surface temperature anomaly for three datasets, where the average of the period 1981-2010 is defined as 0. For 2015, only data up to and including June are presented. So far 2015 exceeds all other years, and the evolving El Niño makes it likely that 2015 will set a new world record.
(more…)
