Archive for the ‘Climate science’ Category

ClimateDialogue on Climate Sensitivity

May 15, 2014

After a bit of a “hiatus”, ClimateDialogue (CD) has re-opened with a discussion on climate sensitivity. On the one hand, this site is unique in bringing together ‘mainstreamers’ and ‘contrarians’ (both in the organization and in the discussions), hopefully leading both to enhanced clarity on what the (dis)agreements are really about and to decreased polarization. On the other hand, it is controversial because a ‘false balance’ is embedded in its structure (by purposefully inviting contrarian scientists to the discussion, rather than e.g. randomly inviting experts).

Whether the positives or negatives dominate is in the eye of the beholder (opinions about that vary wildly), but also depends very strongly on the participation of the mainstream (both as invited experts and as contributing to the public discussion). See also my initial reflections at the time of the first launch. Discussions on ClimateDialogue will be facilitated and moderated by Bart Strengers (NL Environmental Assessment Agency, PBL) and Marcel Crok (freelance journalist), where the former has a mainstream view of climate science and the latter a contrarian view. I am still involved in the background, as is KNMI (NL Meteorological Institute). ClimateDialogue is funded by the Dutch Ministry of Infrastructure and Environment.

In the current ‘dialogue’ James Annan, John Fasullo and Nic Lewis are discussing their views about climate sensitivity (the equilibrium warming after a doubling of CO2 concentrations, ECS). In the latest IPCC report (AR5) the different and partly independent lines of evidence are combined to conclude that ECS is likely in the range 1.5°C to 4.5°C with high confidence. The figure below shows the ranges and best estimates of ECS in AR5 based on different types of studies, namely:

- the observed or instrumental surface, ocean and/or atmospheric temperature trends since pre-industrial time (a toy version of the energy-budget approach behind such estimates is sketched after this list)

- short-term perturbations of the energy balance such as those caused by volcanic eruptions, included under “instrumental” in the figure

- climatological constraints by comparing patterns of mean climate and variability in models to observations

- ECS as emergent property of global climate models

- temperature fluctuations as reconstructed from palaeoclimate archives

- studies that combine two or more lines of evidence
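The “instrumental” estimates referred to in the first item typically rest on a simple global energy-budget relation: ECS ≈ F2x · ΔT / (ΔF − ΔQ), where ΔT is the observed warming, ΔF the change in radiative forcing and ΔQ the change in the rate of heat uptake (mostly by the ocean). Below is a minimal sketch in Python; the formula is the standard energy-budget relation, but the numbers are illustrative placeholders rather than the values of any particular study:

```python
# Energy-budget estimate of equilibrium climate sensitivity (ECS),
# of the kind used in "instrumental" studies:
#   ECS ~= F_2x * dT / (dF - dQ)
# All numbers below are illustrative placeholders, not the values
# of any particular study.

F_2x = 3.7   # radiative forcing from a doubling of CO2 (W/m^2)
dT   = 0.75  # observed warming since pre-industrial time (K)
dF   = 2.0   # change in total radiative forcing (W/m^2)
dQ   = 0.65  # change in system (mostly ocean) heat uptake (W/m^2)

ecs = F_2x * dT / (dF - dQ)
print(f"ECS estimate: {ecs:.1f} K per CO2 doubling")  # ~2.1 K with these inputs
```

Plugging in different but defensible values for ΔF (which carries a large aerosol uncertainty) and ΔQ moves the answer around considerably, which is part of why this line of evidence alone yields a wide range.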


John Christy, Richard McNider and Roy Spencer trying to overturn mainstream science by rewriting history and re-baselining graphs

February 22, 2014

Who are the Flat Earthers?

Before the advent of modern climatology, common wisdom had it that we tiny humans couldn’t possibly influence climate. Modern science shows we can. Yet John Christy and Richard McNider claim the exact opposite in a recent WSJ op-ed, arguing that their outdated views on climate somehow make them modern-day Galileos (or, in their words: they are the ones declaring that the earth is round while the vast majority of climate scientists persist in thinking the earth is flat). They couldn’t be more wrong.

Back then, scientific evidence slowly overturned the religious-cultural notion that the Earth was the centre of the universe. This resulted in a scientific consensus that the Earth revolves around the sun. More recently, scientific evidence has started overturning the notion that humans can’t possibly influence something as gigantic as the Earth’s climate. This too has resulted in a scientific consensus (though a public consensus is still lagging behind). In both cases, the pre-scientific notion was mostly culture-based, as opposed to evidence-based.

As Jeff Nesbit tweeted: “Being the last scientist to accept established climate science doesn’t make you Galileo.” Quite the opposite indeed.

The Galileo complex also suggests a rather simplistic view of how science progresses. Rather than a lone skeptic overthrowing a scientific (rather than a cultural) consensus, scientific progress is usually a gradual process. New evidence has to be reconciled with the existing mountain of evidence; it doesn’t simply replace it. Observing a bird in the air doesn’t disprove gravity. “Skeptics” and their supporters often bring up Galileo as an example of how the scientific consensus can be wrong, and has been wrong in the past. True enough, though as Carl Sagan said: “they laughed at Galileo, but they also laughed at Bozo the clown”.

Hot spot

Besides their entirely misplaced Galileo framing, Christy and McNider make a range of unsupported and/or incorrect statements. One argument deals with the so-called tropical tropospheric hot spot: the expected stronger warming of the tropical troposphere as compared to the surface. This “hot spot” is independent of the cause of the warming. Yet here is what Christy and McNider write in the WSJ:

(the warming of the deep atmosphere is) the fundamental sign of carbon-dioxide-caused climate change, which is supposedly behind these natural phenomena

But hang on, didn’t Christy acknowledge the basic science that this hot spot is not specific to the greenhouse effect? Yes, he did (in the ClimateDialogue discussion in which he participated):

“Yes, the hot spot is expected via the traditional view that the lapse rate feedback operates on both short and long time scales. (…) it [the hot spot] is broader than just the enhanced greenhouse effect because any thermal forcing should elicit a response such as the “expected” hot spot.”

So why is he claiming something in the WSJ that he knows to be untrue?

Model-observation comparison

It almost goes without saying that any climate model-observation mismatch can have multiple (non-exclusive) causes (as succinctly summarized at RC):

  1. The observations are in error
  2. The models are in error
  3. The comparison is flawed

But rather than doing a careful analysis of the various potential explanations, McNider and Christy, as well as their colleague Roy Spencer, prefer to draw far-reaching conclusions based on a particularly flawed comparison: they shift the modelled temperature anomaly upwards, increasing the discrepancy with observations by around 50%. Using this tactic, Roy Spencer recently showed the following figure on his blog:

[Figure: Roy Spencer’s misleading graph of 90 CMIP5 models’ global surface temperature vs. observations through 2013]

So what did he do? Jos Hagelaars tried to reproduce the different steps involved. A comparison of annual data, using a 1986-2005 baseline, would look as follows:

[Figure (Jos Hagelaars): comparison of CMIP5 models with HadCRUT4 and UAH, annual data, 1986-2005 baseline]

Spencer used a 5 year running mean instead of annual values, which would (should) look as follows:

[Figure (Jos Hagelaars): Spencer’s graph reconstructed, part 1: 5-year running means]

The next step is re-baselining the figure to maximize the visual appearance of a discrepancy: let’s baseline everything to the 1979-1983 average (way too short a period, and seemingly chosen very tactically):

[Figure (Jos Hagelaars): Spencer’s graph reconstructed, part 2: re-baselined to 1979-1983]

Which looks surprisingly similar to Spencer’s trickery-graph. But critiquing Roy Spencer comes at a risk: He may call you a “global warming Nazi”. Those nasty CO2 molecules, that’ll teach them!
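To see how much the choice of baseline alone matters, here is a minimal sketch with synthetic data (the actual CMIP5 and HadCRUT4 series are not reproduced here). Re-baselining shifts each curve by a constant, and with a short, noisy baseline those constants differ enough to change the apparent gap at the end of the record:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2014)

# Synthetic stand-ins: a "model" series warming faster than "observations".
model = 0.025 * (years - 1979) + 0.10 * rng.standard_normal(years.size)
obs   = 0.015 * (years - 1979) + 0.10 * rng.standard_normal(years.size)

def rebaseline(series, start, end):
    """Express a series as anomalies w.r.t. its mean over [start, end]."""
    mask = (years >= start) & (years <= end)
    return series - series[mask].mean()

# Conventional multi-decadal baseline (1986-2005, as used in AR5):
gap_long = rebaseline(model, 1986, 2005)[-1] - rebaseline(obs, 1986, 2005)[-1]

# Very short baseline (1979-1983): its mean is dominated by noise, so the
# curves shift by different constants and the apparent end-of-record gap
# changes, even though the trends in both series are untouched.
gap_short = rebaseline(model, 1979, 1983)[-1] - rebaseline(obs, 1979, 1983)[-1]

print(f"final-year model-obs gap, 1986-2005 baseline: {gap_long:.2f} K")
print(f"final-year model-obs gap, 1979-1983 baseline: {gap_short:.2f} K")
```

Since re-baselining never touches the trends, a short baseline chosen at a point where the observations happen to run warm (or the models cool) is a purely visual way to inflate a discrepancy.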

Many thanks to Jos Hagelaars for the data analysis and figures.

Is Climate Science falsifiable?

February 17, 2014

Guest post by Hans Custers. Dutch version here.

A very, ehhrmm… interesting piece on Variable Variability, Victor Venema’s blog: Interesting what the interesting Judith Curry finds interesting. And I don’t mean interesting in a rhetorical, suggestive way; I mean it is a well-written and well-reasoned article, worth reading.

Victor writes about a meme regularly used by the anti-climate-science campaign, often supported by straw man arguments: that the science of human impacts on climate is not falsifiable. He shows it’s nonsense by giving some examples of how it could be falsified, or, more likely, of how it would already have been falsified if the science were wrong. Victor’s post inspired me to think of more options to falsify generally accepted viewpoints in climate science. If there are any ‘climate change skeptics’ who want to contribute to real science, they might see this as a challenge. Maybe they can come up with a research proposal based on one of these options for falsification, like proper scientists would do.

First, a few more things about falsifiability in general. Bart wrote a concise post about the subject four years ago, explaining that a bird in the sky does not disprove gravity. What looks like a refutation at first might, on second thoughts, be based on a partial or total misunderstanding of the hypothesis. Natural climate forcings and variations do not exclude human impacts; therefore, the existence of these natural factors cannot in itself falsify anthropogenic climate change. A real skeptic is cautious about both scientific evidence and refutations. ‘Climate change skeptics’ like to mention the single black swan that disproves the hypothesis that all swans are white. Of course that is true, unless that single black swan turns out to have been found near some oil spill.

Some of the falsifications I mention later on might be somewhat cheap, or far-fetched. It is not easy to find options to falsify the science of human impacts on climate. Not because climate scientists don’t respect the philosophical principles of science, but simply because there is such a huge amount of evidence. There are not a lot of findings that would disprove all the evidence at once; a scientific revolution of this magnitude happens only very rarely. Whoever thinks differently doesn’t understand how science works.

Andrew Dessler’s testimony on what we know about climate change

January 19, 2014

In his recent testimony, Andrew Dessler reviewed what he thinks “are the most important conclusions the climate scientific community has reached in over two centuries of work”. I think that’s a very good choice to focus on, as the basics of what we know are most important, “at least as to the thrust and direction of policy” (Herman Daly). This focus served as a good antidote to the other witness, Judith Curry, who emphasizes (and often exaggerates) uncertainty to the point of conflating it with ignorance.

Dessler mentioned the following “important points that we know with high confidence”:

1. The climate is warming.

Let’s take this opportunity to show the updated figure by Cowtan and Way, extending their infilling method to the entire instrumental period (pause? which pause?):

[Figure: Cowtan and Way, global average temperature 1850-2012]

2. Most of the recent warming is extremely likely due to emissions of carbon dioxide and other greenhouse gases by human activities.

This conclusion is based on several lines of evidence:

- Anthropogenic increase in greenhouse gases

- Physics of greenhouse effect

- Observed warming roughly matches what is expected

- Important role of CO2 in paleoclimate

- No alternative explanation for recent warming

- Fingerprints of the enhanced greenhouse effect (e.g. stratospheric cooling, which was predicted before it was observed)

Dessler:

Thus, we have a standard model of climate science that is capable of explaining just about everything. Naturally, there are some things that aren’t necessarily explained by the model, just as there’re a few heavy smokers who don’t get lung cancer. But none of these are fundamental challenges to the standard model.

He goes on to explain that the so-called “hiatus” is not a fundamental challenge to our understanding of climate, though it is “an opportunity to refine and improve our understanding of [the interaction of ocean circulation, short-term climate variability, and long-term global warming].”

What about alternative theories? Any theory that wants to compete with the standard model has to explain all of the observations that the standard model can. Is there any model that can even come close to doing that?

No.

And making successful predictions would help convince scientists that the alternative theory should be taken seriously. How many successful predictions have alternative theories made?

Zero.

3. Future warming could be large 

On this point I always emphasize that the amount of future warming depends on a combination of factors (a toy model illustrating their interplay follows after the list):

- the climate forcing (i.e. our emissions and other changes to the earth’s radiation budget)

- the climate sensitivity (the climate system’s response to those forcings)

- the climate response time (how fast the system equilibrates).

Internal (unforced) variability also plays a role, but this usually averages out over long enough timescales.
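The interplay of these factors can be illustrated with a zero-dimensional energy balance model, C·dT/dt = F(t) − λT, in which the feedback parameter λ sets the equilibrium sensitivity (ECS = F2x/λ) and the effective heat capacity C sets the response time. A toy sketch, with illustrative parameter values:

```python
import numpy as np

# Zero-dimensional (one-box) energy balance model:
#   C * dT/dt = F(t) - lam * T
# Illustrative parameters, not tuned to any particular model or dataset.
lam  = 1.2   # climate feedback parameter (W m^-2 K^-1); ECS = F_2x / lam
C    = 8.0   # effective heat capacity (W yr m^-2 K^-1)
F_2x = 3.7   # radiative forcing from a doubling of CO2 (W m^-2)

dt = 0.1                        # time step (years)
t = np.arange(0.0, 200.0, dt)
F = np.full_like(t, F_2x)       # abrupt CO2 doubling at t = 0

T = np.zeros_like(t)
for i in range(1, t.size):      # simple forward-Euler integration
    T[i] = T[i-1] + dt * (F[i-1] - lam * T[i-1]) / C

print(f"equilibrium warming (ECS): {F_2x / lam:.2f} K")
print(f"warming after 10 years:    {T[t <= 10][-1]:.2f} K")  # transient < ECS
print(f"warming after 200 years:   {T[-1]:.2f} K")           # approaches ECS
```

A one-box model equilibrates much faster than the real climate system, which also has a slow deep-ocean component, but it shows the point: the same forcing and sensitivity can produce quite different warming at a given date, depending on the response time.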

4. The impacts of this are profound.

In the climate debate, we can argue about what we know or what we don’t know. Arguing about what we don’t know can give the impression that we don’t know much, even though some impacts are virtually certain.

The virtually certain impacts include:

• increasing temperatures

• more frequent extreme heat events

• changes in the distribution of rainfall

• rising seas

• the oceans becoming more acidic

Time is not our friend in this problem.

Nor is uncertainty.

The scientific community has been working on understanding the climate system for nearly 200 years. In that time, a robust understanding of it has emerged. We know the climate is warming. We know that humans are now in the driver’s seat of the climate system. We know that, over the next century, if nothing is done to rein in emissions, temperatures will likely increase enough to profoundly change the planet. I wish this weren’t true, but it is what the science tells us.

Peter Sinclair posted a video of Andrew Dessler’s testimony. Eli Rabett posted Dessler’s testimony in full.

A key distinction between the two senate hearings was that Andrew Dessler focused on what we know, whereas Judith Curry focused on what we don’t know (though “AndThenTheresPhysics” made the good point that Curry goes far beyond that, e.g. by proclaiming confidence in certain benign outcomes (e.g. regarding sensitivity) while claiming ignorance in areas where we have a half-decent, if incomplete, understanding, e.g. regarding the hiatus). I have argued before that emphasizing (let alone exaggerating) uncertainties is not the road to increasing people’s understanding of the issue: what we do know is much more important to convey, if your goal is to increase the public understanding of scientific knowledge. Alongside that, I argue that much more attention should go to explaining the nature of science, so that e.g. scientific uncertainties can be placed in a proper context.

[Cartoon on uncertainty]

Herman Daly said it as follows, in a quote I’ve used regularly over the past few years:

If you jump out of an airplane you need a crude parachute more than an accurate altimeter.

Arguing whether the altimeter might be off by a few inches is interesting from a scientific/technological perspective, but for the people in the plane it’s mostly a distraction.

Cowtan and Way global average temperature observations compared to CMIP5 models

November 15, 2013

It is well known that the Arctic is warming up much faster than the rest of the globe. As a consequence, datasets which omit this region (HadCRUT and NOAA) underestimate the global warming trend. A new paper by Cowtan and Way addresses this cool bias by using satellite data to fill in these data gaps. They make a good case that this method also improves upon the NASA GISS dataset, which uses extrapolated data from surface stations to partly fill in the data sparse regions. Combining their new method of infilling with the most up-to-date sea surface temperatures gives a substantially larger trend over the last 15 years than the abovementioned datasets do. The temporary slowdown in global surface warming (also dubbed “the pause”) nearly disappears. As Michael Tobis notes:

What this demonstrates is how very un-robust the “slowdown” is.

The corrections don’t amount to a huge change in the overall temperature record, and the new data actually fall inside the uncertainty envelope of HadCRUT4. As the paper correctly states:

While short term trends are generally treated with a suitable level of caution by specialists in the field, they feature significantly in the public discourse on climate change.

In the figure below (made by Jos Hagelaars) the global average temperature as calculated by Cowtan and Way (“C&W hybrid”) is compared to both the HadCRUT4 dataset and the CMIP5 multi-model mean, as well as its 5% and 95% percentile values (RCP8.5). [Update: The figure below has been replaced, since the original was found to be in error during discussions on CA. The confidence interval of this corrected graph is substantially narrower than the erroneous original one. Note that the current graph shows the 5 to 95 percentile range of model runs (i.e. the 90% confidence interval), whereas the previous ones showed the 95% confidence interval. At the bottom of the post a similar figure with both confidence intervals as well as the two-sigma range is shown.]

[Figure: Cowtan & Way and HadCRUT4 vs. CMIP5 RCP8.5, 5-95 percentile range]

Even with these data improvements, recent observations are at the low side of the CMIP5 model range. The comparison of observations to models has to be interpreted with caution, however. Some people like to jump to preferred conclusions, but it’s good to keep in mind that the expected warming at a specific point in time depends on a combination of factors. Any of these factors (as well as shortcomings in the observational data, such as those discussed by Cowtan and Way) could contribute to a mismatch between observations and models:

- radiative forcing

- equilibrium climate sensitivity

- climate response time

- natural unforced variability

The last factor means that one shouldn’t expect the multi-model mean (in which most variability is cancelled out) to be identical to the observations (which are the result of a particular realisation of natural variability).
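For the technically inclined: once the model runs are on a common baseline, the envelope shown in such figures is straightforward to compute. A minimal sketch, assuming the annual global-mean anomalies are already loaded as a (runs × years) array; the data below are synthetic placeholders:

```python
import numpy as np

def model_envelope(runs, lo=5, hi=95):
    """Multi-model mean and percentile range per year.

    runs: 2-D array of shape (n_runs, n_years) holding global-mean
          temperature anomalies on a common baseline.
    Returns (mean, lower, upper), each of shape (n_years,).
    """
    return (runs.mean(axis=0),
            np.percentile(runs, lo, axis=0),
            np.percentile(runs, hi, axis=0))

# Toy example: 90 synthetic "runs" = common trend + independent variability.
rng = np.random.default_rng(42)
years = np.arange(1980, 2014)
runs = 0.02 * (years - 1980) + 0.15 * rng.standard_normal((90, years.size))

mean, lo5, hi95 = model_envelope(runs)
# An observed series is a single realisation of natural variability, so it
# can wander within (or occasionally outside) the 5-95% envelope without
# that implying any model error.
```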

Cowtan and Way made a very clear video in which the main results of their paper are explained in just a few minutes. Highly recommended.

More commentary on the paper on e.g. RC (Rahmstorf), SkS (Cowtan and Way), Guardian (Nuccitelli), P3 (Tobis), Victor Venema, Neven. See also this very useful background information provided by the authors.

[some typos corrected and clarifications added, 16-11. Erroneous figure replaced 21-11.]

Update: Below is a similar figure to the one above, with different confidence intervals for the model runs shown.

[Figure: Cowtan & Way and HadCRUT4 vs. CMIP5 RCP8.5, with additional confidence intervals]

Update 2 (Feb 2014):

Jos Hagelaars added Cowtan and Way’s data for 2013 to a figure comparing observations to model projections:

[Figure (Jos Hagelaars): comparison of CMIP5 with HadCRUT4 and Cowtan & Way, including 2013]

Tropospheric hot spot

August 19, 2013

The current topic under discussion at ClimateDialogue is the tropospheric hot spot: is it there, and if not, so what? Invited discussants are Steven Sherwood of the University of New South Wales in Sydney, Carl Mears of Remote Sensing Systems (working on the RSS satellite product) and John Christy of the University of Alabama in Huntsville (working on the UAH satellite product).

I’ll provide a short overview here (loosely based on the intro over at CD), interspersed with my own and other people’s commentary.

Based on theoretical considerations and simulations with General Circulation Models (GCMs), it is expected that any warming at the surface will be amplified in the upper troposphere. The reason is as follows: more warming at the surface means more evaporation and more convection. Higher in the troposphere the (extra) water vapour condenses and heat is released. Calculations with GCMs show that the lower troposphere warms about 1.2 times faster than the surface. For the tropics, where most of the moisture is, the amplification is larger, about 1.4.

This means that, contrary to what some people claim, the hot spot is not specific to the enhanced greenhouse effect: Any surface warming (or cooling) would be expected to be magnified higher aloft, at least in the tropics. Lindzen says it as follows:

We know that the models are correct in this respect since the hot spot is simply a consequence of the fact that tropical temperatures approximately follow what is known as the moist adiabat. This is simply a consequence of the dominant role of moist convection in the tropics.
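The amplification can be estimated directly from the moist adiabat Lindzen refers to: lift a saturated parcel from the surface for two surface temperatures 1 K apart and compare the temperature change at, say, 300 hPa. Below is a bare-bones sketch; Bolton's formula for saturation vapour pressure is standard, but the calculation deliberately neglects entrainment, freezing and virtual-temperature effects:

```python
import numpy as np

g, Rd, cp, L, eps = 9.81, 287.0, 1004.0, 2.5e6, 0.622

def e_sat(T):
    """Saturation vapour pressure (Pa), Bolton (1980)."""
    Tc = T - 273.15
    return 611.2 * np.exp(17.67 * Tc / (Tc + 243.5))

def moist_lapse(T, p):
    """Saturated adiabatic lapse rate (K/m) at temperature T, pressure p."""
    r = eps * e_sat(T) / (p - e_sat(T))       # saturation mixing ratio
    return (g * (1.0 + L * r / (Rd * T)) /
            (cp + L**2 * r * eps / (Rd * T**2)))

def temp_at(p_target, T_sfc, p_sfc=1000e2, dz=10.0):
    """Integrate T upward along a moist adiabat until p_target (Pa)."""
    T, p = T_sfc, p_sfc
    while p > p_target:
        T -= moist_lapse(T, p) * dz
        p -= p * g / (Rd * T) * dz            # hydrostatic balance
    return T

warming_aloft = temp_at(300e2, 301.0) - temp_at(300e2, 300.0)
print(f"warming at 300 hPa per 1 K surface warming: {warming_aloft:.2f} K")
```

The exact factor depends on height and on the neglected details, but any such calculation gives more warming aloft than at the surface, whatever caused the surface warming.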


EGU General Assembly: The Arctic, Models, and Data

June 7, 2013

Guest post by Heleen van Soest

In April, the annual European Geosciences Union conference was held in Vienna, Austria. Heleen van Soest, MSc student Climate Studies at Wageningen University, attended the conference, and shares some thoughts and tweets (@Hel1vs).

The opening reception, April 7, reveals that geoscientists are fond of beer. I get to talk to some nice people and hand out my first business cards. Yay! I talk with Walter Schmidt, President of the Division on Geosciences Instrumentation and Data Systems, about observations and data. Lesson learned: data are important, but never take them for granted. Especially from satellites: they basically measure counts and voltages. To interpret the numbers and get something useful, we already need models, i.e. algorithms. Usually, model skill is tested against data. Disagreement between them is often blamed on model errors, assumptions, etc. Keep in mind that data might be wrong, too. Fortunately, raw data is increasingly archived as such, together with the algorithms used to interpret them. In that way, data can still be used if the algorithms are updated. I dedicate my first #egu2013 tweet to this conversation and go home. I am happy to find a Va Piano (Italian restaurant) in ‘my’ street. Together with Sherlock Holmes (the book, that is), I eat my pasta.

Tweet: “At #egu2013 opening reception, interesting conversation about models and data: ‘important, but never take them for granted’ (Walter Schmidt)”

Monday, 8 April

Permafrost day. An important issue, as permafrost contains about half of the world’s soil carbon. If permafrost thaws, the organic carbon becomes available for microbes to degrade, resulting in greenhouse gas (methane) emissions that further increase temperatures. This positive feedback is sometimes compared to a time bomb. Modelling studies do show that permafrost will degrade under further warming; Greenland permafrost south of 76°N, for example, is projected to disintegrate this century. However, see RealClimate before you start to worry that this bomb is about to explode.

But today is not only permafrost; I’ve also got something on ice observations.


Long term persistence and internal climate variability

April 30, 2013

After a long hiatus, Climate Dialogue has just opened a second discussion. This time it’s about the presence of long term persistence in timeseries of global average temperature, and its implications (if any) for internal variability of the climate system and for trend significance. This discussion is strongly related to the question of whether global warming could just be a random walk, a question vigorously debated on this blog (including my classic April Fools’ Day post three years ago).

Invited expert participants in the discussion include Rasmus Benestad (of RealClimate fame), Demetris Koutsoyiannis and Armin Bunde. The introduction text here differs slightly from that posted on ClimateDialogue.org.
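To illustrate why persistence matters for trend significance, here is a small synthetic experiment (purely illustrative, not an analysis of actual temperature data). It contrasts trends fitted to white noise with trends fitted to random walks built from the same innovations:

```python
import numpy as np

rng = np.random.default_rng(1)
n_series, n_years = 5000, 130       # roughly the instrumental record length

noise = rng.standard_normal((n_series, n_years))
white = noise                       # independent year-to-year deviations
walk  = noise.cumsum(axis=1)        # random walk: same innovations, summed

t = np.arange(n_years)
def fitted_trends(y):               # OLS slope of every series
    return np.polyfit(t, y.T, 1)[0]

print("spread of trends, white noise:", fitted_trends(white).std())
print("spread of trends, random walk:", fitted_trends(walk).std())
# The random walks produce far larger spurious "trends". If temperature
# behaved like a random walk (or were strongly persistent), significance
# tests that assume independent residuals would be badly overconfident.
```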


The two epochs of Marcott

March 19, 2013

Guest post by Jos Hagelaars. Dutch version is here.

The big picture (or, as some call it: the Wheelchair): global average temperature since the last ice age (20,000 BC) up to the not-too-distant future (2100) under a middle-of-the-road emission scenario.


Figure 1: The temperature reconstruction of Shakun et al (green – shifted manually by 0.25 degrees), of Marcott et al (blue), combined with the instrumental period data from HadCRUT4 (red) and the model average of IPCC projections for the A1B scenario up to 2100 (orange).

Earlier this month an article was published in Science about a temperature reconstruction covering the past 11,000 years. The lead author is Shaun Marcott from Oregon State University; the second author is Jeremy Shakun, who may be familiar from the interesting study published last year on the relationship between CO2 and temperature during the last deglaciation. The reconstruction of Marcott et al is the first to cover the entire Holocene. Naturally the reconstruction is not perfect, and some details will probably change in the future; that is a normal part of the scientific process.


Klotzbach Revisited

March 1, 2013

Guest blog by Jos Hagelaars. Dutch version here.

Datasets of the average surface temperature of the earth, measured by ‘thermometers’, are released by a number of institutes; the most well-known are GISTEMP, HadCRUT and NCDC. Since 1979, temperature data for the lower troposphere, measured by satellites, have been released by the University of Alabama in Huntsville (UAH) and Remote Sensing Systems (RSS).
The temperatures from these two methods of measurement show differences. For instance, the NCDC data indicate a trend over land of 0.27 °C/decade for the period 1979 up to and including 2012, while over the same period the trend over land based upon the UAH satellite data is significantly lower, at 0.18 °C/decade. In contrast, the trends for global temperatures show much smaller differences: for NCDC and UAH these are respectively 0.15 °C/decade and 0.14 °C/decade over the same period.

Big deal? Almost everything related to climate is a ‘big deal’, so it is no surprise that the same applies to these trend differences. In a warming world it is expected that the temperature of the upper troposphere increases at a higher rate than at the surface, regardless of the cause of the warming. The satellite data (UAH and RSS) do not reflect this. Why is the upper troposphere expected to warm at a higher rate, and what is the cause of these trend differences between the surface and satellite temperatures?
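The trends quoted above are ordinary least-squares slopes, expressed per decade. A minimal sketch of that computation, with a synthetic anomaly series standing in for the NCDC or UAH data:

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """OLS trend of annual temperature anomalies, in degrees C per decade."""
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return 10.0 * slope_per_year

# Synthetic stand-in for an annual anomaly series, 1979-2012:
rng = np.random.default_rng(7)
years = np.arange(1979, 2013)
anoms = 0.015 * (years - 1979) + 0.10 * rng.standard_normal(years.size)

print(f"trend: {trend_per_decade(years, anoms):.2f} °C/decade")  # ~0.15 here
```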

The temperature gradient in the troposphere / the ‘lapse rate’

When you go up in the troposphere it gets colder. Rising air cools with increasing altitude, due to the decrease in pressure, by means of so-called adiabatic processes. This temperature gradient is called the lapse rate, a concept one frequently encounters in papers regarding the atmosphere in relation to climate. When the air is dry, this temperature drop is about 10 °C per km. When the air contains water vapor, the vapor condenses to water as the rising air cools, which releases heat of condensation. In this way heat is transported to higher altitudes, and the temperature drop with height becomes smaller. For air saturated with water vapor, this vertical temperature drop is approximately 6 °C per km.

When the earth gets warmer, air can contain more water vapor. This also has an impact on the lapse rate, since more water vapor means more heat transfer to higher altitudes. This effect on the lapse rate is called the lapse rate feedback. More heat at higher altitudes implies more emission of infrared radiation: a negative feedback. This effect is particularly important in the tropics. At higher latitudes the increase in temperature at the surface dominates, so there the change in the lapse rate turns into a positive feedback. See figure 1 (adapted from the climate dynamics webpage of the University of Leuven).


