Long term persistence and internal climate variability


After a long hiatus, Climate Dialogue has just opened a second discussion. This time it’s about the presence of long-term persistence in time series of global average temperature, and its implications (if any) for internal variability of the climate system and for trend significance. This discussion is strongly related to the question of whether global warming could just be a random walk, a question vigorously debated on this blog (including my classic April Fools’ Day post three years ago).

Invited expert participants in the discussion include Rasmus Benestad (of RealClimate fame), Demetris Koutsoyiannis and Armin Bunde. The introduction text here differs slightly from that posted on ClimateDialogue.org.

The Earth is warmer now than it was 150 years ago. This fact itself is uncontroversial. How to interpret this warming, however, is not trivial. The attribution of this warming to anthropogenic causes relies heavily on an accurate characterization of the natural behavior of the system. Here we will discuss how statistical assumptions influence the interpretation of measured global warming.

Agents of change

Global climate can change (say, on time scales > 10 years) due to a variety of processes. For the sake of this discussion, the following processes are distinguished:

–       natural unforced variability (e.g. oscillations or semi-random processes internal to the climate system)

–       natural forced variability (e.g. changes in the output of the sun or in volcanism)

–       anthropogenic forced variability (e.g. changes in greenhouse gas or aerosol concentrations)

Internal variability

Most experts agree that all three types of processes have played a role in changing the Earth’s climate over the past 150 years; it is the relative magnitude of each that is in dispute. The IPCC AR4 report stated that “it is extremely unlikely (<5%) that recent global warming is due to internal variability alone, and very unlikely (<10%) that it is due to known natural causes alone.” This conclusion is based on detection and attribution studies of different climate variables and different ‘fingerprints’, which draw not only on observations but also on physical insight into the climate processes.

The IPCC AR4 definitions of detection and attribution are:

“Detection of climate change is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change.”

“Attribution of causes of climate change is the process of establishing the most likely causes for the detected change with some defined level of confidence.”

The phrase ‘change in some defined statistical sense’ in the definition of detection turns out to be the starting point for our discussion: what is the ‘right’ statistical model (assumption) for concluding whether a change is significant or not? And how does our understanding of internal variability enter into this picture?

According to AR4, “An identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.” Detection is thus concerned with distinguishing the forced from the unforced component (sometimes referred to as the signal and the noise), whereas attribution is concerned with assigning causes to the forced component.

There are different methods for estimating the magnitude of natural climate variability. In one approach, control runs (without climate forcing) are performed with GCMs. Critics wonder whether such control simulations are representative of the real world. In another approach, a statistical analysis is performed on the observed climatic time series itself; here the presence of (natural and anthropogenic) climate forcing is a complicating factor. Some studies have combined both methods and compared modelled and observed time series, as well as their power spectra, as a means to circumvent the influence of climate forcing on the time series (cf. AR4 fig 9.7).

Long term persistence

Critics argue, though, that most if not all changes in the climatological time series are an expression of long-term persistence (LTP). Long-term persistence means there is a long memory in the system, although unlike a random walk it remains bounded in the very long run. There are stochastic/unforced fluctuations on all time scales. More technically, the autocorrelation function goes to zero algebraically (i.e. very slowly). These critics argue that by taking LTP into account, trend significance is reduced by orders of magnitude compared to statistical models that assume short-term persistence (AR(1)), as was applied e.g. in the illustrative trend estimates in table 3.2 of AR4 (Cohn and Lins, 2005[i]; Koutsoyiannis and Montanari, 2007[ii]).
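The contrast between short-term persistence (AR(1)) and LTP can be made concrete by comparing their autocorrelation functions. A minimal sketch, using fractional Gaussian noise as a standard LTP model; the AR(1) coefficient and the Hurst exponent here are illustrative assumptions, not values fitted to any climate record:

```python
import numpy as np

def rho_ar1(k, phi=0.8):
    """Autocorrelation of an AR(1) process at lag k: decays exponentially."""
    return phi ** k

def rho_fgn(k, H=0.9):
    """Autocorrelation of fractional Gaussian noise (a standard LTP model)
    with Hurst exponent H: decays algebraically, ~ H(2H-1) k^(2H-2)."""
    k = np.asarray(k, dtype=float)
    return 0.5 * (np.abs(k + 1) ** (2 * H)
                  - 2 * np.abs(k) ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

# At lag 1 the two models look similar...
print(rho_ar1(1), float(rho_fgn(1)))      # 0.8 vs ~0.74
# ...but at lag 100 the AR(1) memory is gone, while the LTP memory remains.
print(rho_ar1(100), float(rho_fgn(100)))  # ~2e-10 vs ~0.29
```

Both models are strongly correlated from one year to the next; they differ in how long the memory lasts, which is exactly what matters for trend significance.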

This has consequences for attribution as well, since long term persistence is often assumed to be a sign of unforced (internal) variability (e.g. Cohn and Lins, 2005; Rybski et al, 2006). However, LTP can also be a consequence of a deterministic trend (e.g. GCM model output also exhibits LTP). In reaction to Cohn and Lins (2005), Rybski et al. (2006)[iii] concluded that even when LTP is taken into account at least part of the recent warming cannot be solely related to natural factors and that the recent clustering of warm years is very unusual (see also Zorita (2008)[iv]). This translates directly into the question of how important the statistical model used is for determining the significance of the observed trends.
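One way to quantify how much the choice of statistical model matters is through the ‘effective sample size’ of a record: under AR(1) with lag-1 autocorrelation φ, n correlated observations carry roughly as much information as n(1−φ)/(1+φ) independent ones, while under LTP with Hurst exponent H the equivalent number scales like n^(2−2H) (cf. Koutsoyiannis and Montanari, 2007). A back-of-the-envelope sketch, with φ and H as illustrative assumptions rather than estimates from the actual temperature record:

```python
n = 162  # years, e.g. the length of HadCRUT4 at the time of writing

# Short-term persistence: AR(1) with lag-1 autocorrelation phi
phi = 0.6
n_eff_ar1 = n * (1 - phi) / (1 + phi)

# Long-term persistence: fractional Gaussian noise with Hurst exponent H
H = 0.9
n_eff_ltp = n ** (2 - 2 * H)

print(round(n_eff_ar1, 1))  # 40.5 -- still dozens of independent values
print(round(n_eff_ltp, 1))  # 2.8  -- barely any independent information
```

With far fewer effectively independent values, the same observed trend is far less significant, which is the core of the Cohn and Lins (2005) argument.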

Climate Dialogue

Although the IPCC definition of detection seems clear, the phrase ‘change in some defined statistical sense’ leaves a lot of wiggle room. For the sake of a focused discussion, we define the detection of climate change here as showing that some of this change is outside the bounds of internal climate variability. The focus of this discussion is how best to apply statistical methods and physical understanding to the question of whether the observed changes are outside the bounds of internal variability. Discussions about the physical mechanisms governing the internal variability are also welcome.

Specific questions

  1. What exactly is long-term persistence (LTP), and why is it relevant for the detection of climate change?
  2. Is “detection” purely a matter of statistics? And how does the statistical model relate to our knowledge of internal variability?
  3. What is the ‘right’ statistical model to analyse whether there is a detected change or not? What are your assumptions when using that model?
  4. How long should a time series be in order to make a meaningful inference about LTP or other statistical models? How can one be sure that one model is better than the other?
  5. Based on your statistical model of preference, do you conclude that there is a significant warming trend?
  6. Based on your statistical model of preference, what is the probability that the 11 warmest years in a 162-year-long time series (HadCRUT4) all lie in the last 12 years?
  7. If you reject detection of climate change based on your preferred statistical model, would you have a suggestion as to the mechanism(s) that have generated the observed warming?
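For reference on question 6: under the simplest conceivable null model, independent and identically distributed annual values with no persistence at all, the probability can be computed exactly. This is purely illustrative, since the whole point of the discussion is that persistence (short- or long-term) makes the true probability much larger:

```python
from math import comb

n_years, n_top, window = 162, 11, 12

# Under an iid null the ranking of years is a uniform random permutation,
# so the positions of the 11 warmest years form a uniform random
# 11-subset of the 162 years. Favorable outcomes: all 11 positions fall
# inside the final 12-year window.
p_iid = comb(window, n_top) / comb(n_years, n_top)

print(p_iid)  # ~3.4e-16: astronomically small under independence
```

Any realistic null model with persistence raises this number substantially; how far it rises under LTP is precisely what the invited experts disagree about.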

[i] Cohn, T. A., and H. F. Lins (2005), Nature’s style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi:10.1029/2005GL024476.

[ii] Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resour. Res., 43, W05429, doi:10.1029/2006WR005592.

[iii] Rybski, D., A. Bunde, S. Havlin, and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi:10.1029/2005GL025591.

[iv] Zorita, E., T. F. Stocker, and H. von Storch (2008), How unusual is the recent series of warm years?, Geophys. Res. Lett., 35, L24706, doi:10.1029/2008GL036228.



13 Responses to “Long term persistence and internal climate variability”

  1. Bill G Says:

    How is it possible for man to add millions and millions and millions of tons of carbon to the atmosphere and there not be major, severe changes to climate?

    Is there any grounds at all for a debate?

  2. Bart Verheggen Says:

    Bill G,

    A yes/no question would indeed not be very interesting, but the question of how much, on the other hand, is.

    Plus, putting conflicting arguments under scrutiny on a public forum may help non-scientists (and scientists) to determine what are the more robust arguments and interpretations.

  3. Paul S Says:

    Bill G,

    I guess you were thinking beyond this basic point, but if CO2 weren’t a greenhouse gas then adding so much of it to the atmosphere probably wouldn’t have a noticeable effect. I don’t know if there’s any intrinsic reason why the waste product of fossil fuel burning had to be a greenhouse gas; almost a corollary to the anthropic principle?

    But, given that CO2 is a greenhouse gas, it must have some effect on climate. In defence of LTP I can see two, non mutually-exclusive, ways that historical CO2 (and other WMGHGs) increase doesn’t mean a dominant anthropogenic influence on the observed surface temperature increase:

    1) The effect of WMGHGs on climate is small. This would mean invoking negative feedbacks to the initial GHG impetus.

    2) Another important set of anthropogenic emissions that occur due to fossil fuel consumption and other processes are aerosols, which predominantly have a cooling effect on climate. It’s possible that anthropogenic WMGHGs and aerosols have substantially cancelled each other out, meaning that the surface temperature increase would have to be natural.

    One factor that doesn’t get much mention in the Climate Dialogue essays is observed ocean warming (Rasmus Benestad briefly alludes to it), which is determined directly from measurements indicating an increase in ocean heat content, and indirectly through observations of sea level rise. To me this is what probably rules out LTP as a primary driver of historical events, because it shows that the overall energy budget of the Earth atmosphere-ocean system has increased (indeed, increased by more than can be explained by initial WMGHG forcing by itself), which rules out surface temperature change occurring as a function of energy transfer from one part of the system to another. This can only occur due to a change in forcing and/or a shift in the behaviour of clouds and water vapour. Clouds and water vapour are very short lived phenomena in the atmosphere and therefore can only be responses (feedbacks) to local climate conditions.

    If the positive change in TOA energy imbalance is due to an LTP-induced SST warming trend, it would be so because of a tendency towards a net positive feedback to surface warming. If LTP-induced surface warming causes net positive feedbacks, there seems to be absolutely no reason why WMGHG-induced surface warming wouldn’t produce the same response. If we then agree that the climate system has a generally positive feedback response and that WMGHGs have significantly increased over the past 100 years, can we really argue that observed warming has not essentially all been caused by anthropogenic influence?

    Actually there are a couple of possible alternative narratives: If anthropogenic aerosols have almost completely cancelled out anthropogenic WMGHGs then observed warming could be explained by LTP + positive feedbacks alone. However, all that means is that we are yet to experience a strong anthropogenic warming which will occur when the short-lived aerosols are cleaned out. Not exactly good news though…

  4. Paul S Says:

    It’s also worth noting that this is, at heart, a rather arcane topic concerning the extent to which statistical positive attribution of surface temperature warming can be assigned to anthropogenic WMGHGs. If we weren’t technically able to “reject the null” and make a positive attribution, that wouldn’t actually mean WMGHGs are not significantly affecting climate – just that the data we have available are not enough to confirm or deny the effect. Arguably, we could see 3°C of global average warming over the next century and LTP could still be invoked to block a definitive attribution to WMGHGs.

    I would argue, as I have above, that we do have data relating to ocean warming which provides extra evidence to support a positive attribution to WMGHGs. It seems to me this data is usually avoided/ignored by those advocating for LTP. Having said that, it’s also the case that past climate change attribution has also tended to focus almost exclusively on surface temperature, so maybe that avoidance is fair enough and should be used as impetus for future attribution studies to include much more consideration of other data.

  5. MMM Says:

    I don’t have a WordPress account, so I can’t comment directly on ClimateDialogue, but my general impression is that we have two philosophies:

    1) Rasmus, with what I’ll call the “standard” climate scientist approach which is heavy on physical reasoning, and medium-light on statistics.

    2) Demetris and Armin, who are heavy on statistics and very light on physical reasoning.

    As a physical scientist, I naturally find Rasmus more convincing. I think the distinction between forced and unforced variability, as well as a consideration of the whole system rather than one timeseries, is very important. The argument about the medieval warm period demonstrating that climate models are flawed was an example of the rather limited understanding of these issues that Demetris and Armin are presenting.

    However, to help them out, I will say that the Rasmus side has to be a little careful with its logic. It is tempting to argue that, because we monitor both the oceans and the atmosphere, and because basic thermodynamics dictates that energy is conserved, unforced variability would mainly involve transfers of heat between the ocean and the atmosphere, and that therefore, if both the ocean and the atmosphere are warming, the system must be being forced. That’s almost true, but it ignores the possibility that natural variability can change the net energy in the system by changing the planetary albedo: if, for example, a shift in ocean circulation were to reduce average cloud cover, then we could see simultaneous warming in both the atmosphere and the ocean without an “external” forcer such as changes in the sun, volcanoes, or GHG concentrations.

    I do think there is value in more sophisticated statistical approaches becoming more of a default in the climate science world – for example, my impression is that the Berkeley BEST temperature approach is an improvement over the previous temperature syntheses – but I am unconvinced that they’ll make a huge difference (see, again, the BEST analysis, which looks very similar to GISS, CRUTEM, and NCDC). And, unfortunately, there is a tendency for outside statisticians to get the climate world totally wrong (Demetris and Armin are far from the worst offenders here – Wegman comes to mind, of course). Also, there do exist statistically sophisticated climate scientists, but the median understanding of statistics in the community could stand to be improved. (Of course, so could the median code-writing ability, and many other aspects, and there is limited time in the day to become an expert at all things… but there are ways to improve the standard approaches of the community without requiring a huge time sink from individuals.) But I think I begin to stray from my main point here…


    ps. I am also curious how LTP manages to avoid the bounding problems of a random walk…

  6. MMM Says:

    One last point – while cloud changes provide a potential mechanism for natural variability to change ocean & surface temperatures in the same direction, we could presumably use observations of clouds to provide evidence for or against the plausibility of such a mechanism… but again, that’s science, not just blind devotion to statistics.


  7. Paul S Says:

    Looks like Demetris Koutsoyiannis is refusing to understand the point about statistics in the absence of physics, and has decided to characterise himself as a heretic already.

    Oddly enough, Koutsoyiannis actually does give an example of the very point people are making, in his initial response to Armin Bunde’s claim of finding no statistical significance in SST change but strong significance in land temperature change. He (correctly, in my view) points out that finding such different results for the two temperature series doesn’t make sense, because land and sea are strongly coupled. This is an argument that physics trumps Armin Bunde’s purely statistical findings, as others have suggested in relation to Koutsoyiannis’ own work.


    I would suggest that the diurnal temperature range calculated from Tmax−Tmin land temperature measurements (see BEST again) offers a pretty strong indicator that planetary albedo has, if anything, increased over the past 60 years.

    Actually, on that point I would offer a re-framing of the physics/statistics argument, since that seems to be causing some consternation: a clash between a holistic/systems approach and a reductionist approach. The latter appears to want to isolate surface temperature as a dataset in its own right, whereas the former sees surface temperature as one expression of a larger system and therefore always wants to relate that dataset to others.

  8. Bart Verheggen Says:

    MMM, no WordPress account is needed to register at ClimateDialogue.

  9. Bart Verheggen Says:

    Good catch, Paul. I’ve used the example you point out in my reply to Koutsoyiannis.

    MMM, regarding internal variability being able to cause spontaneous changes in cloudiness: I think that’s very unlikely, though perhaps not impossible (hence my insertion of “likely” where I mentioned this redistribution of energy). I concur with what Paul S said about this: “Clouds and water vapour are very short lived phenomena in the atmosphere and therefore can only be responses (feedbacks) to local climate conditions.” Add to that the strong dependence of water vapor concentration on the ambient temperature and the tendency of the relative humidity to remain relatively constant, and I do not see a plausible way for cloudiness to spontaneously increase or decrease by very much. If there were a plausible way for that to happen, I would find the relative stability of the Earth’s climate in the absence of strong forcings (e.g. over the bigger part of the Holocene) quite remarkable. The oceans might have boiled away long ago if cloudiness could just change independently of the ambient temperature.

    Moreover, a spontaneous change in cloudiness would actually be classified as a natural radiative forcing, if I understand it correctly, since it would directly translate (via the albedo) into a change in net irradiance at the top of the atmosphere.

  10. Paul S Says:

    Hi Bart,

    I think I find MMM’s point a bit more persuasive. While clouds and water vapour are best regarded as feedbacks, the specifics of feedback outcomes can vary across different locations – different thresholds etc. – so I would acknowledge a naive possibility that circulation (ocean/atmosphere) shifts could rearrange conditions such that global average low cloud albedo is strongly increased/decreased.

    I also think it’s a stretch to regard such effects as radiative forcing in a standard framework for understanding climate change. To give a practical example, forced historical runs are often compared to unforced pre-industrial control runs in order to demonstrate the difference between forced and unforced change in the model. Given that the pre-industrial control run will likely include fluctuations in global average shortwave absorption at the surface (related to low cloud changes), I don’t think such things could be regarded as radiative forcing in the same way as a CO2 increase.

    But yes, you would need a pretty major persistent shift to produce observed surface and subsurface temperature changes at the same time and the relative stability of the Holocene indicates that isn’t typical behaviour. I would also point to the diurnal temperature range statistic again: if cloud albedo decrease were a primary driver we would expect greater surface temperature increase at daily maximum. What we observe is the opposite.

  11. Bart Verheggen Says:

    Hi Paul,

    Indeed it may not be impossible that such changes would occur on longer time scales, though I find it very implausible because of the strong and quick dependence on temperature and other environmental conditions.

    Whether to call such a hypothetical change a forcing is perhaps a matter of definition, and I’m by no means sure that my interpretation thereof is correct. The distinction between feedback and forcing is in many cases operational in any case.

    But if you consider the following:

    A change in albedo as a result of slowly changing vegetation is called a forcing. This has occurred through anthropogenic activity, but it is conceivable that it could also happen spontaneously (not very plausible, because of a similar dependence on environmental conditions, but let’s consider it for the sake of the argument). The working mechanism and effect would be identical, whether anthropogenic or natural, so in the latter case it should also be classed as a forcing (and not as internal variability).

    Why then would a similar (hypothetical) change in albedo due to cloud cover not be classed as a forcing?

    Would a natural, spontaneous change in greenhouse gas concentrations be called internal variability? I would likewise class it as a forcing.

    Radiative Forcing is defined as a change in the radiation budget (incoming minus outgoing radiation). This is influenced by the solar irradiance, albedo and greenhouse gases (in a simple model represented by the number of layers). A change in any of these which is not a direct consequence of temperature is defined as a forcing; if it is caused by a temperature change it’s a feedback.

  12. DeWitt Payne Says:


    ps. I am also curious how LTP manages to avoid the bounding problems of a random walk…

    Because, unlike a random walk, the autocorrelation term in LTP does eventually go to zero.

  13. WebHubTelescope Says:

    “Because, unlike a random walk, the autocorrelation term in LTP does eventually go to zero.”

    That’s wrong. A pure random walk shows no correlation over a long enough interval. Given a long enough time, the random walker can move infinitely far from the origin. That makes the value of correlation equal to zero.

    An Ornstein-Uhlenbeck process is the kind of random walk that has an autocorrelation that looks like exp(-a|x|). This is bounded, of course.
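    The distinction being debated here can be illustrated with a short simulation: a pure random walk accumulates variance without bound (growing linearly with the number of steps), whereas a mean-reverting Ornstein-Uhlenbeck-type process, written below in its discrete AR(1) form, settles to a finite stationary variance. A sketch with arbitrary illustrative parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_steps, n_runs = 2000, 500
    noise = rng.standard_normal((n_runs, n_steps))

    # Pure random walk: x[t] = x[t-1] + eps  ->  variance grows like t
    walk = np.cumsum(noise, axis=1)

    # Discrete Ornstein-Uhlenbeck / AR(1): x[t] = phi*x[t-1] + eps
    # ->  bounded, stationary variance 1/(1 - phi^2)
    phi = 0.9
    ou = np.zeros((n_runs, n_steps))
    for t in range(1, n_steps):
        ou[:, t] = phi * ou[:, t - 1] + noise[:, t]

    print(walk[:, -1].var())  # ~2000: still growing with t
    print(ou[:, -1].var())    # ~5.3: pinned near 1/(1 - 0.81)
    ```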
