After a long hiatus, Climate Dialogue has just opened a second discussion. This time it’s about the presence of long-term persistence in time series of global average temperature, and its implications (if any) for internal variability of the climate system and for trend significance. This discussion is strongly related to the question of whether global warming could just be a random walk, a question vigorously debated on this blog (including my classic April Fools’ Day post three years ago).
Invited expert participants in the discussion include Rasmus Benestad (of RealClimate fame), Demetris Koutsoyiannis and Armin Bunde. The introduction text here differs slightly from that posted on ClimateDialogue.org.
The Earth is warmer now than it was 150 years ago. This fact itself is uncontroversial. How to interpret this warming, though, is not trivial. The attribution of this warming to anthropogenic causes relies heavily on an accurate characterization of the natural behavior of the system. Here we will discuss how statistical assumptions influence the interpretation of measured global warming.
Agents of change
Global climate can change (say, on time scales > 10 years) due to a variety of processes. For the sake of this discussion, the following processes are distinguished:
– natural unforced variability (e.g. oscillations or semi-random processes internal to the climate system)
– natural forced variability (e.g. changes in the output of the sun or in volcanism)
– anthropogenic forced variability (e.g. changes in greenhouse gas or aerosol concentrations)
Most experts agree that all three types of processes have played a role in changing the Earth’s climate over the past 150 years. It is the relative magnitude of each that is in dispute. The IPCC AR4 report stated that “it is extremely unlikely (<5%) that recent global warming is due to internal variability alone, and very unlikely (<10%) that it is due to known natural causes alone.” This conclusion is based on detection and attribution studies of different climate variables and different ‘fingerprints’, which include not only observations but also physical insights into the climate processes.
The IPCC AR4 definitions of detection and attribution are:
“Detection of climate change is the process of demonstrating that climate has changed in some defined statistical sense, without providing a reason for that change.”
“Attribution of causes of climate change is the process of establishing the most likely causes for the detected change with some defined level of confidence.”
The phrase ‘change in some defined statistical sense’ in the definition of detection turns out to be the starting point for our discussion. After all, what is the ‘right’ statistical model (assumption) for concluding whether a change is significant or not? And how does our understanding of internal variability enter into this picture?
According to AR4, “An identified change is ‘detected’ in observations if its likelihood of occurrence by chance due to internal variability alone is determined to be small.” Detection is thus concerned with distinguishing the forced from the unforced component (sometimes referred to as signal and the noise), whereas attribution is concerned with assigning causes to the forced component.
There are different methods for estimating the magnitude of natural climate variability. In one approach, control runs (without climate forcing) are performed with GCMs. Critics wonder whether such control simulations are representative of the real world. In another approach, a statistical analysis is performed on the observed climatic time series itself. Here the presence of (natural and anthropogenic) climate forcing is a complicating factor. Some studies have combined both methods and compared modelled and observed time series, as well as their power spectra, as a means to circumvent the influence of climate forcing on the time series (cf. AR4 Fig. 9.7).
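The power-spectrum signature involved in such comparisons can be sketched numerically. This is a minimal illustration, not any study’s actual method: the long-memory parameter d = 0.4, the series length and the number of realizations are all illustrative assumptions. A long-memory process has a periodogram that rises toward low frequencies roughly as f^(-2d), i.e. a straight line of slope -2d on a log-log plot, which is one way persistence shows up in a spectrum.

```python
import numpy as np

rng = np.random.default_rng(42)
d, n, reps, K = 0.4, 512, 200, 1000  # illustrative parameters, not fits to data

# MA(infinity) weights of an ARFIMA(0, d, 0) process, truncated at K terms:
# psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k
k = np.arange(1, K + 1)
psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))

# Average the periodogram over many independent realizations.
freqs = np.fft.rfftfreq(n)[1:]           # drop the zero frequency
mean_pgram = np.zeros(len(freqs))
for _ in range(reps):
    x = np.convolve(rng.standard_normal(n + K), psi, mode="valid")  # length n
    mean_pgram += np.abs(np.fft.rfft(x)[1:]) ** 2 / n
mean_pgram /= reps

# Fit the log-log slope over the lowest ~10% of frequencies; for long memory
# the spectrum behaves like f**(-2d) there, so the slope should be near -0.8.
lo = slice(0, max(5, len(freqs) // 10))
slope = np.polyfit(np.log(freqs[lo]), np.log(mean_pgram[lo]), 1)[0]
print(f"low-frequency spectral slope: {slope:.2f} (theory: {-2 * d})")
```

A short-memory (e.g. AR(1)) process would instead show a spectrum that flattens out at low frequencies, which is why the spectral comparison can discriminate between the two.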
Long term persistence
Critics argue, though, that most if not all changes in the climatological time series are an expression of long-term persistence (LTP). Long-term persistence means there is a long memory in the system, although, unlike a random walk, it remains bounded in the very long run. There are stochastic/unforced fluctuations on all time scales. More technically, the autocorrelation function decays to zero algebraically (very slowly) rather than exponentially. These critics argue that taking LTP into account reduces trend significance by orders of magnitude compared to statistical models that assume short-term persistence (AR(1)), as was applied e.g. in the illustrative trend estimates in table 3.2 of AR4 (Cohn and Lins, 2005[i]; Koutsoyiannis and Montanari, 2007[ii]).
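The difference between exponential and algebraic decay can be made concrete with a short numerical sketch (the AR(1) coefficient phi and the long-memory parameter d below are illustrative choices, not estimates for the climate). For an AR(1) process the autocorrelations rho(k) = phi^k are summable; for the standard long-memory model ARFIMA(0,d,0) with 0 < d < 0.5, rho(k) ~ k^(2d-1), so the autocorrelations are not summable — that divergence is the technical meaning of "long memory".

```python
import numpy as np

phi, d = 0.6, 0.4   # illustrative short- and long-memory parameters

# Short-term persistence: AR(1) autocorrelation decays exponentially,
# rho(k) = phi**k, so the sum of autocorrelations converges.
lags = np.arange(1, 100_001)
ar1_sum = np.sum(phi ** lags[:100])      # already converged after ~100 lags
# closed form: phi / (1 - phi)

# Long-term persistence: ARFIMA(0,d,0) autocorrelation decays algebraically,
# rho(k) ~ k**(2d-1); textbook recursion rho(k) = rho(k-1)*(k-1+d)/(k-d).
rho = np.cumprod((lags - 1 + d) / (lags - d))

print(f"AR(1) ACF sum:          {ar1_sum:.3f} (limit {phi / (1 - phi):.3f})")
print(f"LTP ACF sum to lag 1e3: {np.sum(rho[:1000]):.1f}")
print(f"LTP ACF sum to lag 1e5: {np.sum(rho):.1f} (still growing)")
```

The AR(1) sum settles almost immediately, while the LTP partial sums keep growing with the number of lags — a direct numerical counterpart of "fluctuations on all time scales".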
This has consequences for attribution as well, since long-term persistence is often assumed to be a sign of unforced (internal) variability (e.g. Cohn and Lins, 2005; Rybski et al., 2006). However, LTP can also be a consequence of a deterministic trend (e.g. GCM model output also exhibits LTP). In reaction to Cohn and Lins (2005), Rybski et al. (2006)[iii] concluded that, even when LTP is taken into account, at least part of the recent warming cannot be related solely to natural factors, and that the recent clustering of warm years is very unusual (see also Zorita et al. (2008)[iv]). This translates directly into the question of how important the choice of statistical model is for determining the significance of the observed trends.
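How strongly the assumed noise model affects apparent trend significance can be illustrated with a small Monte Carlo sketch (all parameter values are illustrative assumptions, not climate estimates). Trend-free AR(1) and ARFIMA(0,d,0) series are generated, and a naive trend test that assumes independent residuals is applied to each; the test rejects far more often than its nominal 5% level in both cases, and much more so under long memory.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 150, 500          # ~150 "years", 500 synthetic series of each kind
phi, d, K = 0.6, 0.4, 1000  # illustrative persistence parameters

# ARFIMA(0,d,0) MA weights (long memory), truncated at K terms.
k = np.arange(1, K + 1)
psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))

def naive_trend_t(y):
    """OLS trend t-statistic computed as if residuals were independent."""
    t = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    s2 = resid @ resid / (len(y) - 2)
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
    return slope / se

hits_ar1 = hits_ltp = 0
for _ in range(reps):
    # AR(1): short-term persistence, no true trend (100-step burn-in).
    e = rng.standard_normal(n + 100)
    x = np.empty(n + 100)
    x[0] = e[0]
    for t in range(1, n + 100):
        x[t] = phi * x[t - 1] + e[t]
    hits_ar1 += abs(naive_trend_t(x[100:])) > 1.96
    # ARFIMA(0,d,0): long-term persistence, no true trend.
    y = np.convolve(rng.standard_normal(n + K), psi, mode="valid")
    hits_ltp += abs(naive_trend_t(y)) > 1.96

rate_ar1, rate_ltp = hits_ar1 / reps, hits_ltp / reps
print(f"naive 5% test rejects: AR(1) {rate_ar1:.0%}, LTP {rate_ltp:.0%}")
```

Both rejection rates are spurious (the simulated series contain no trend at all), which is the core of the Cohn and Lins argument: the stronger the assumed persistence of the null, the harder it is for any given trend to reach significance.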
Although the IPCC definition of detection seems clear, the phrase ‘change in some defined statistical sense’ leaves a lot of wiggle room. For the sake of a focused discussion we define the detection of climate change here as showing that some of the observed change lies outside the bounds of internal climate variability. The focus of this discussion is how best to apply statistical methods and physical understanding to the question of whether the observed changes are outside those bounds. Discussions about the physical mechanisms governing the internal variability are also welcome.
- What exactly is long-term persistence (LTP), and why is it relevant for the detection of climate change?
- Is “detection” purely a matter of statistics? And how does the statistical model relate to our knowledge of internal variability?
- What is the ‘right’ statistical model to analyse whether there is a detected change or not? What are your assumptions when using that model?
- How long should a time series be in order to make a meaningful inference about LTP or other statistical models? How can one be sure that one model is better than the other?
- Based on your statistical model of preference do you conclude that there is a significant warming trend?
- Based on your statistical model of preference, what is the probability that 11 of the warmest years in a 162-year-long time series (HadCRUT4) all lie in the last 12 years?
- If you reject detection of climate change based on your preferred statistical model, would you have a suggestion as to the mechanism(s) that have generated the observed warming?
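As a baseline for the warm-years question, the probability can be computed exactly under the simplest null hypothesis: exchangeable (e.g. i.i.d.) annual temperatures. This sketch reads the question as referring to the 12 warmest years of the series — an interpretation, since the question does not spell it out. Under exchangeability the ranking of years is a uniform random permutation, so the number of top-12 years falling in the last 12 positions is hypergeometric.

```python
from math import comb

# Under an exchangeable (e.g. i.i.d.) null, the positions of the 12 warmest
# years form a uniform random 12-subset of the 162 years.  P(at least 11 of
# them land in the last 12 positions) is a hypergeometric tail probability.
N, m, w = 162, 12, 12   # series length, window length, number of top years
p = (comb(w, 11) * comb(N - w, m - 11)      # exactly 11 in the last 12 years
     + comb(w, 12) * comb(N - w, m - 12)    # all 12 in the last 12 years
     ) / comb(N, m)
print(f"P(>= 11 of the 12 warmest years in the last 12 | exchangeable) = {p:.2e}")
```

The vanishingly small number this yields is exactly why the choice of null model matters: under a persistent (AR(1) or LTP) null the clustering of warm years is far less improbable, which is the question Rybski et al. (2006) and Zorita et al. (2008) address.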
[i] Cohn, T. A., and H. F. Lins (2005), Nature’s style: Naturally trendy, Geophys. Res. Lett., 32, L23402, doi:10.1029/2005GL024476
[ii] Koutsoyiannis, D., and A. Montanari (2007), Statistical analysis of hydroclimatic time series: Uncertainty and insights, Water Resour. Res., 43, W05429, doi:10.1029/2006WR005592
[iii] Rybski, D., A. Bunde, S. Havlin, and H. von Storch (2006), Long-term persistence in climate and the detection problem, Geophys. Res. Lett., 33, L06718, doi:10.1029/2005GL025591
[iv] Zorita, E., T. F. Stocker, and H. von Storch (2008), How unusual is the recent series of warm years?, Geophys. Res. Lett., 35, L24706, doi:10.1029/2008GL036228