Global average temperature increase GISS HadCRU and NCDC compared

I made some graphs of global temperature change according to the three major compilations based on measured surface temperatures: GISS, HadCRU and NCDC. They are expressed as the temperature difference (“anomaly”) with respect to the 1901-2000 average as the baseline.

Temperatures jiggle up and down, but the overall trend is up: The globe is warming.

To highlight the long-term trend more clearly, below is the same figure with the 11-year running mean added (which stops 5 years short of each endpoint, for lack of data to calculate the mean):

Some people prefer you to only look at the last dozen years:

Often, the last datapoint (representing 2009) is omitted, and only HadCRU temperatures (in blue) are shown, to create the most visually compelling picture for claiming that “global warming has stopped” or even reversed (“blogal cooling”, pun intended).

If however we look at the trend through the average of the three datasets over the period 1975-2009 (during which greenhouse gas forcing was the dominant driver of climate change), we see the following:

The trend over 1975 to 2009 is approximately the same (0.17 +/- 0.03 degrees per decade) for all three temperature series.

The error represents the 95% confidence interval for the trend estimate: if the trend analysis were repeated on many independent realizations of the underlying data, 95% of the estimated intervals would contain the true trend. Here that range is 0.14 to 0.20 degrees per decade.

The thin black lines represent the 95% confidence “prediction bands” for the data: based on the observed variability, 95% of the data are expected to fall within these lines.

The observed yearly variability in global temperatures (sometimes exceeding 0.2 degrees) is such that 10 years is too short to discern the underlying long term trend (0.17 degrees per decade). There is no sign that the warming trend of the past 35 years has recently stopped or reversed.
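For those who want to play with the numbers, the running mean and the trend with its confidence interval can be reproduced in a few lines of Python. This is only a sketch: the anomaly series below is a synthetic stand-in, and you would load the real annual GISS, HadCRU or NCDC values instead.

    import numpy as np
    from scipy import stats

    # Placeholder data: substitute the real annual anomalies (1975-2009) here.
    years = np.arange(1975, 2010)
    rng = np.random.default_rng(0)
    anoms = 0.017 * (years - 1975) + rng.normal(0, 0.1, years.size)

    # 11-year running mean; 'valid' mode drops 5 years at each end.
    running_mean = np.convolve(anoms, np.ones(11) / 11, mode="valid")

    # OLS trend with a 95% confidence interval (t distribution, n-2 dof).
    fit = stats.linregress(years, anoms)
    halfwidth = stats.t.ppf(0.975, years.size - 2) * fit.stderr
    print(f"trend: {10 * fit.slope:.2f} +/- {10 * halfwidth:.2f} degrees per decade")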

 More info:

A major difference between the datasets is that HadCRU omits the Arctic (in effect assuming that it warms at the global average rate), while GISS estimates it by interpolation. I don’t know about NCDC. See also RealClimate and James Hansen.

Similar analysis of GISS, HadCRU and NCDC temperatures up to 2007 by Tamino. Other nifty analyses by Tamino relating to the same theme can be found here, here, here and here.

1998 was a record warm year in large part because of a very strong El Niño event. If the effect of the ENSO cycle is removed, the warming trend becomes even more apparent, see e.g. RealClimate. Other rebuttals of the spurious 1998 claim at SkepticalScience, Coby Beck, Zeke Hausfather, RealClimate, Scott Mandia, Greenfyre (including lots more links) and Peter Sinclair of the “Climate Denial Crock of the Week” YouTube video series.

Four independent statisticians were given the data (up to 2008) and asked to look for trends, without being told what the numbers represented. Not surprisingly, they found no evidence of a downward trend. Story retold e.g. here and here.

Robert Grumbine explains the art of cherrypicking and why it is not science.

Update: If you want higher resolution versions of any of the figures here you can email me via the link on the right (under “pages”).


2,192 Responses to “Global average temperature increase GISS HadCRU and NCDC compared”

  1. Länkar 2010-03-02 Says:

    […] Global average temperature increase GISS HadCRU and NCDC compared […]

  2. Andrew Says:

    Nice graphs; crisp and clear.

    Of course, they only represent surface temperatures (land and water).

    Climate change also involves warming (heating) of subsurface waters, permafrost and land ice. As I recall, the amount of heat energy involved with rising surface temperatures amounts to only about 3% of the total heat change for the globe. That is to say about 97% of the heat from global warming is flowing into subsurface waters, permafrost and land ice.

  3. VS Says:

    Hi Bart,

    Actually, statistically speaking, there is no clear ‘trend’ here, and the Ordinary Least Squares (OLS) trend you estimated up there is simply nonsensical, and has nothing to do with statistics.

    Here is a series of Augmented Dickey-Fuller tests performed on temperature series (lag selection on the basis of a standard entropy measure, the SIC, i.e. the Schwarz Information Criterion), designed to distinguish between deterministic and stochastic trends. This is the first and most essential step in any time series analysis; see for starters Granger’s work at http://nobelprize.org/nobel_prizes/economics/laureates/2003/

    Test results:

    ** CRUTEM3, global mean, 1850-2008:
    Level series, ADF test statistic (p-value<):
    -0.329923 (0.9164)
    First difference series, ADF test statistic (p-value<):
    -13.06345 (0.0000)

    Conclusion: I(1)

    ** GISSTEMP, global mean, 1881-2008:
    Level series, ADF test statistic (p-value<):
    -0.168613 (0.6234)
    First difference series, ADF test statistic (p-value<):
    -11.53925 (0.0000)

    Conclusion: I(1)

    ** GISSTEMP, global mean, combined, 1881-2008:
    Level series, ADF test statistic (p-value<): -0.301710 (0.5752)
    First difference series, ADF test statistic (p-value<): -10.84587 (0.0000)

    Conclusion: I(1)

    ** HADCRUT, global mean, 1850-2008
    Level series, ADF test statistic (p-value<):
    -1.061592 (0.2597)
    First difference series, ADF test statistic (p-value<):
    -11.45482 (0.0000)

    Conclusion: I(1)
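    (If anyone wants to replicate these tests: they take a few lines in Python with statsmodels. The file name below is a placeholder for whichever annual series you download, statsmodels calls the SIC “BIC”, and the choice of deterministic terms — a constant and trend for the levels, a constant for the differences — is one reasonable specification, not the only one:)

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        # Placeholder file: one annual global-mean anomaly per line.
        temps = np.loadtxt("hadcrut_annual.txt")

        for name, series, reg in [("levels", temps, "ct"),
                                  ("first differences", np.diff(temps), "c")]:
            stat, pvalue, *rest = adfuller(series, regression=reg, autolag="BIC")
            print(f"{name}: ADF statistic {stat:.4f}, p-value {pvalue:.4f}")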

    These results are furthermore in line with the literature on the topic. See the following:

    ** Woodward and Gray (1995)
    – reject I(0), don’t test for I(1)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Beenstock and Reingewertz (2009)
    – confirm I(1)

    In other words, global temperature contains a stochastic rather than a deterministic trend, and is, statistically speaking, a random walk. Simply calculating OLS trends and claiming that there is a ‘clear increase’ is non-sense (non-science). According to what we observe, therefore, temperatures might either increase or decrease in the following year (so no ‘trend’).

    There is more. Take a look at Beenstock and Reingewertz (2009). They apply proper econometric techniques (as opposed to e.g. Kaufmann, who performs mathematically/statistically incorrect analyses) for the analysis of such series together with greenhouse forcings, solar irradiance and the like (i.e. the GHG forcings are I(2) and temperatures are I(1), so they cannot be cointegrated, as this makes them asymptotically independent; they therefore have to be related via more general methods, such as polynomial cointegration).

    Any long term relationship between CO2 and global temperatures is rejected. This amounts, at the very least, to a huge red flag.

    Claims of the type you made here are typical of 'climate science'. You guys apparently believe that you need not pay attention to any already established scientific field (here, statistics). In this context, much of McIntyre's criticism is valid, however much you guys experience it as 'obstructionism'.

    It would do your discipline well to develop a proper methodology first, and open up all of your methods to external scrutiny by other scientists, before diving head first into global policy consulting.

    PS. Also, even if the temperature series contained a deterministic trend (which it doesn't), your 'interpretation' of the 95% confidence interval is imprecise and misleading, at best. I suggest you brush up on your statistics.

  4. Heiko Gerhauser Says:

    Hi Bart,

    I think that the error bands for temperature need to be quite large. I also think that we have a very poor understanding of how aerosol forcings have changed over time, and therefore of how total forcings have changed over time. In addition, there is quite a range of model outputs even for a given forcing history, and we also have a poor understanding of how total forcing will change.

    Or in other words, it might just be that between now and 2030 extra aerosol forcing will mask quite a bit of warming, or that it won’t and we’ll shoot up by 2C over that period.

    The last ten years of data don’t do a great deal to resolve the degree of masking experienced to date. 1C up or 0.5C down would have.

    Or in other words, they are not evidence against a climate sensitivity of 3C, but neither do they add to the evidence for a climate sensitivity of 3C.

    As for VS’s lengthy comment, I just don’t think it’s that helpful to torture the data with statistics. That won’t tell you whether it confirms or disproves the models, or what temperature will be in 2050.

    If you merely do curve fitting, a sine wave with a 30-year period and an amplitude of 0.2C, plus a constant underlying trend of 0.5C per century and a bit of yearly noise (normal distribution, standard deviation of 0.04C), will do nicely, but of course has no predictive power for 2050 or 2100.
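    Something like the following sketch, say (Python rather than Excel; the phase of the sine and the period covered are arbitrary choices on my part):

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1900, 2010)
        t = years - years[0]

        # 30-year sine (0.2C amplitude) + 0.5C/century trend + noise (sd 0.04C)
        curve = 0.2 * np.sin(2 * np.pi * t / 30) + 0.005 * t + rng.normal(0, 0.04, t.size)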

    James Annan and I had a discussion on global change a while back about tipping points. James largely doesn’t buy the idea. I am not so sure myself.

  5. Bart Says:

    Andrew,

    You’re right that of the excess energy in the earth system only a small part goes into warming the atmosphere, whereas the bulk goes into warming the oceans (which expand as a result, an important contributor to sea level rise). The deep ocean has continued accumulating heat, sea level rise has continued, Arctic sea ice has disappeared faster than models have predicted: Global change is abundantly clear in the aggregate of observed changes.

  6. Bart Says:

    VS,

    The observed changes in temperature over the past 130 years are no random walk.

    The Augmented Dickey-Fuller test checks for the presence of a unit root in an autoregressive process, and my guess is that it’s not automatically applicable to the estimation of a trend in temperature series. Your conclusion of a ‘random walk’ is at odds with the observed changes in climate, both recent (past century) and in the deeper past, and also with what is known about the physics of climate:

    One difficulty with the notion that the global mean temperature behaves like a random walk is that it then would imply a more unstable system with similar hikes as we now observe throughout our history. However, the indications are that the historical climate has been fairly stable.

    And OTOH, based on large shifts in climate in the past, there are indications that the climate can be pushed in a certain direction if pushed hard enough. A driving force is needed to provide that push; it doesn’t happen randomly. Plate tectonics, solar output, CO2: they can all be shown to be important players in climate changes that occurred in the past, though on different timescales.

    The observed changes fall outside of the bounds of natural variability (i.e. stochastic changes); there must be some contribution of climate drivers (i.e. deterministic changes).

    It is remarkable statistically that the 13 (now 14) warmest years in the modern record have all occurred since 1990. The fact that the 13 warmest years since 1880 could have occurred by accident after 1990 corresponds to a likelihood of no more than 1:10 000.

    How would you estimate that chance?

    RC continues:

    An even more serious problem with (…) the random walk notion is that a hike in the global surface temperature would have physical implications – be it energetic (Stefan-Boltzmann, heat budget) or dynamic (vertical stability, circulation). In fact, one may wonder if an underlying assumption of stochastic behaviour is representative, since after all, the laws of physics seem to rule our universe.

    See also my reply to Andrew about changes in other parts of the climate system.

  7. Bart Says:

    Heiko,

    You’re right, aerosols are the big unknown in the 20th century changes in forcings, and the spread in individual model runs is large indeed (by eye even larger than the spread in observed temperatures). For those reasons, the observed warming over the 20th century doesn’t provide a very strong constraint on climate sensitivity.

    From emission inventories we do know that aerosol precursor emissions rose sharply in the middle of the 20th century, and their cooling effect, canceling the increasing GHG forcing, is likely responsible for the 30-year stable period in global average temperature between the 1940s and 1970s.

    There are some educated guesses about what aerosol emissions will do in the near future (I will write about that another time; briefly, over the US and EU they are already decreasing, whereas over Asia they will probably increase before decreasing later this century).

  8. VS Says:

    “As for VS’s lengthy comment, I just don’t think it’s that helpful to torture the data with statistics. That won’t tell you whether it confirms or disproves the models, or what temperature will be in 2050.”

    Hi Heiko,

    I wouldn’t classify the test results I posted above as ‘torture of the data’; coming from my field, that judgement would be far more applicable to what Mann et al are doing with their endless and statistically unjustified ‘adjustments’ to proxy/instrumental records.

    Posted above is a standard procedure (the first step even) in time series analysis. Given the test results, the calculations resulting in those confidence intervals (that you believe should be wider) are simply meaningless. They are conditional on:

    (1) the series containing a deterministic trend
    (2) that trend being determined by time only

    Obviously, as (1) is clearly rejected, both assertions are false. Ergo, those error bands make no sense.

    As for ‘statistics’ not being able to disprove a model: that’s a novelty for me. The scientific method, as I was taught, involves testing your hypothesis with observations. Statistics is the formal method of assessing what you actually observe.

    Given the hypercomplex and utterly chaotic nature of the Earth’s climate, and the non-experimental nature of observations it generates, I don’t see any other way of verifying/testing a model trying to describe or explain it.

    Here’s an interesting read: an article published in 1944 in Econometrica, dealing with the probabilistic approach to hypothesis testing of economic theory (i.e. also a discipline attempting to model a hypercomplex chaotic system generating non-experimental observations).

    It is written by Trygve Haavelmo, who later received a Nobel Prize in Economics, in part also for this paper.

    Click to access the_probability_approach_in_econometrics.pdf

    You will note that many of the assertions made about the then-standard approach to hypothesis testing in economics are in fact applicable to present day ‘climate science’ :)

  9. Heiko Gerhauser Says:

    In IPCC lingo, “likely” is fine. It’s hard to say more than that if even the present day forcing of aerosols covers such a wide range. While we may have a fair idea of how much coal was burnt in 1960, I think our knowledge of how much sulphur was in that coal, and what kind of aerosols (size, black or reflecting, residence time in the atmosphere) that led to, must surely be much poorer than for more recent years; and as said, it’s not exactly a narrow range even for the present.

  10. VS Says:

    Hi Bart (just saw your reply),

    The ‘random walk’ concept is a bit tricky methodologically, and the fellows at RealClimate seem to be taking it too ‘literally’, so allow me to make an attempt to clarify it.

    I agree with you that temperatures are not ‘in essence’ a random walk, just like many (if not all) economic variables observed as random walks are in fact not random walks. That’s furthermore quite clear when we look at Ice-core data (up to 500,000 BC); at the very least, we observe a cyclical pattern, with an average cycle of ~100,000 years.

    However, we are looking at a very, very small subset of those observations, namely the past 150 years or so. In this subsample, our record is clearly observed as a random walk. For the purpose of statistical inference, it has to be treated as such for any analysis to actually make sense mathematically. Again, simply calculating trends via OLS is meaningless (and, as I noted above, those confidence intervals are invalid).

    Statistics is ‘blind’, as it should be, when treating observations. Remember, we are trying to make objective inference.

    Remember that GHG forcings are also observed as a random walk. Now, statistically speaking, not all is lost, and the cointegration approach is well suited to relate these random walk series (it’s beautiful, if you think about it, hence the two Nobel prizes awarded for it so far :).

    Put differently, the series can contain a common stochastic trend, which in turn would imply an error correction mechanism, where the two series never wander ‘too far’ from each other. Finding such a link between greenhouse gas forcings and temperatures would be strong evidence for the hypothesized CO2/temperature relationship (the link to the Nobel lecture by Granger is a good first reading to get you started on cointegration).

    An identified cointegration relationship would allow for proper confidence interval estimation, and any type of ‘forcing’ you describe above. Note also that a correlation (in this case embodied by the cointegrating relationship) is a necessary, but not sufficient, condition for causation.

    However, when we try to relate them employing proper statistical/econometric methods, any long term relationship is rejected. This amounts to a huge red flag for the validity of any phenomenological model.

    PS. You cited the following from RC:

    “It is remarkable statistically that the 13 (now 14) warmest years in the modern record have all occurred since 1990. The fact that the 13 warmest years since 1880 could have occurred by accident after 1990 corresponds to a likelihood of no more than 1:10 000.”

    This is clear and utter nonsense. That likelihood might (might!!) be correct if the ‘random walk’ somehow referred to levels, and not changes (i.e. first differences). But it doesn’t.

    In time series analysis, one series is treated as a single sample realization from a given data generating process, so conditional on a given DGP, that probability up there is completely meaningless. Conditional on temperatures reaching their 1990 level, their observed 2000 level is very likely, assuming a random walk DGP.

    It is a bit like throwing a die 100 times (generating a series of die tosses), adding the outcomes up sequentially, where the realizations [1 2 3 4 5 6] are mapped to [-3 -2 -1 1 2 3], and then claiming that the sum at the end has some kind of deviant likelihood… in fact, assuming a random walk DGP, any realization is equally likely (note that the problem with those confidence intervals stems from exactly this).
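    (You can check this with a simulation; every individual path below is exactly as likely as any other, yet the endpoints fan out as the walk gets longer:)

        import numpy as np

        rng = np.random.default_rng(0)
        steps = np.array([-3, -2, -1, 1, 2, 3])   # die faces mapped as above

        # 10,000 independent walks of 100 tosses each, summed sequentially.
        walks = rng.choice(steps, size=(10_000, 100)).cumsum(axis=1)
        endpoints = walks[:, -1]
        print(endpoints.mean(), endpoints.std())  # mean near 0, spread near 21.6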

    If anything (if! ;), that quote from RC actually confirms the random walk nature of temperature series.

  11. Scott Mandia Says:

    VS,

    I will admit my statistics background is essentially Stats I for science majors so I cannot question your analysis. I do have a few questions for you though:

    1) You appear to be hung-up on Mann, Kaufmann, and others whose reconstruction shows a hockey-stick type curve. Do you think the majority of the proxy curves are incorrectly analyzed? If so, I would assume that you would publish the analysis in a well-respected journal. You would also be paid handsomely by quite a few groups who are gunning for Mann and any hockey stick researcher so funding should not be an issue.

    2) How do the statistics explain the following?

    http://www.skepticalscience.com/Senator-Inhofe-attempt-to-distract-from-scientific-realities-of-global-warming.html

    Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). Curiously enough, CO2 concentrations and increasing rates are essentially “off the charts” historically. No causation?

    Again, this would be a landmark paper for you to publish.

    I am very impressed with your stats discussion but I would be a fan for life if you could publish answers to those two questions above.

  12. Bart Says:

    VS,

    “Remember that GHG forcings are also observed as a random walk.”

    ? How do I square that statement with the very strong increase in CO2 concentrations over the past 150 years? Just from recollection, I think it’s 3 million years ago that the CO2 concentration was last as high as or higher than it is today. That sure is a counterintuitive definition of random walk, and has nothing to do with what non-statisticians (like me) would call “random”.

    “Conditional on temperatures reaching their 1990 level, their observed 2000 level is very likely, assuming a random walk DGP.”

    ? That sounds a bit like newspeak, “conditional on”. Sure, conditional on the temperatures reaching previous year’s level, there’s nothing strange with this year’s level, and that statement could be repeated for each year. But the long term trend is up, and in the physical world, such trends towards increasing (or decreasing) temperatures over climatologically relevant timescales do not happen without a reason.

    The 1 in 10,000 chance statement comes from a GRL paper, discussed in the PhysOrg link I provided (not RealClimate). They explain the following:

    “This likelihood (1 in 10,000) can be illustrated by using the game of chance “heads or tails”: the likelihood is the same as 14 heads in a row.”

    In your analogy of throwing a die 100 times, the sum total of all realizations can be graphically depicted as a probability density function, and values around the center (0 in your mapping, or 350 if the nominal value of the die is taken) are more likely than those deviating far from it (even though every single realization has the same chance of occurrence). The probability density function of the total will look like a bell shaped curve with 350 as the mean (nominal values of the die taken), and a progressively smaller chance for outliers to either side. A one in 10,000 chance is perhaps reached at a total larger than, say, 500 or thereabouts (just guessing). It is by all means a very unlikely event.
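    (Checking that guess with a quick normal approximation puts the 1 in 10,000 level nearer 414 than 500, but the point stands:)

        import numpy as np
        from scipy import stats

        mean, var = 100 * 3.5, 100 * 35 / 12  # sum of 100 fair dice, nominal values
        z = stats.norm.isf(1e-4)              # one-sided 1-in-10,000 z-score, ~3.72
        print(mean + z * np.sqrt(var))        # ~413.5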

    But perhaps this comes to the root of the misunderstanding: In climate change, we’re interested in the change over time (the sum total of realizations in the previous analogy), not in any particular yearly value (any particular realization in the previous analogy), which indeed has a very strong ‘random’ component to it if you will (natural weather related variability).

  13. VS Says:

    Hi Mandia,

    You raise interesting issues; let me start with the first one: the reconstructions. Note that I only referred to Mann’s proxy reconstructions in passing. What I posted above related to the analysis of the instrumental record, as performed by e.g. Kaufmann. However, given that I grew up ‘academically’ analyzing real data, I can make a few comments.

    I find the ‘sticking together’ of low variance proxy series and the high variance instrumental record questionable, to say the least. We can study the general variance of the temperature series from temperature reconstructions based on various ice-core samples, such as the Vostok one. The variance structure is clearly different in the hockey stick graph; in particular, there is a clear variance break in the series when the instrumental record kicks in. No such ‘break’ is observed in the continuous ice-core sample. The fundamental difference between the series is furthermore clearly shown by the divergence problem (and to be honest, I find the linear ‘divergence’ corrections performed by Briffa et al to be highly suspect from a statistical and methodological point of view).

    One might assume that because the instrumental record is more precise and reliable, adding it to the less reliable proxy record ‘improves’ the total record.

    However, since econometrics/statistical modeling deals with explaining ‘variance’ (rather than levels, a common misconception), any statistical inference based on two series, with structurally different variances, is in fact invalid. Also, comparing the current instrumental record with the proxy record for the purpose of determining the ‘unprecedentedness’ of the current warming, is invalid.

    Now, I’m not saying that proxy series are useless (that would be stupid; data is data, and imperfect data is better than no data), but they simply cannot be used together with the instrumental record, because the two methods basically measure two different things (i.e., putting it very imprecisely: they have different ‘measurement’ biases).

    In broad lines, I have to side with the 2006 Wegman report on this one.

    As for the second issue, I have to note that statistics in general doesn’t ‘explain’ anything. Statistics is simply a method to formally deal with limited observations when testing hypotheses. In that sense, I cannot ‘explain’ those findings. I can however test the hypothesis, on the basis of what we observe.

    The link you posted stated the following:

    1) CO2 is rising
    2) Most of the rise is anthropogenic
    3) We see an increase in the amount of radiation held by the atmosphere

    Fine, I would say that there is a basis for a hypothesis of warming through CO2 emissions (i.e. a phenomenological model). Ergo, we should then be able to detect the effect of changes in such emissions on changes in temperatures. We have 150 years of proper observations, so something has got to give, right?

    But using the best tools available, we don’t find any proper correlation. In fact, such a relationship is rejected by the data. Now this is a problem for any hypothesis, and if this were an economic (phenomenological) model, it would have suffered a fatal blow by such test results (indeed, many ‘nice’ hypotheses in economics died at the hands of econometricians/statisticians).

    I hope this helps.

    As for the ‘landmark paper’, thanks for the confidence ;)… I’m considering writing something on exactly these topics, but I think I will have to do that in my spare time (I’m in a different field)..

    ——————————-

    Hi Bart,

    “This likelihood (1 in 10,000) can be illustrated by using the game of chance “heads or tails”: the likelihood is the same as 14 heads in a row.”

    This statement is simply wrong, and the fact that it comes from a peer-reviewed study published in GRL says more about the quality of the peer review than it does about the statistical properties of temperature data. Allow me to elaborate:

    The ‘random walk’ component we are talking about (the one we test for) is the change, not the level. Take temperature at time t to be equal to Y(t). Now, if Y(t) follows the simplest version of a random walk, the specification is:

    Y(t)=Y(t-1)+error(t)

    Where the error is independently distributed (independent of itself! not per se other variables/errors, so there is room for relating variables)

    This series is integrated of the first order, or I(1). We can then take the first differences, and obtain a stationary series, D_Y(t)=Y(t)-Y(t-1)=error(t). Now this is the random part where you can apply your bell curve analysis.

    In the context of ‘changes’, tossing (H,H,H,H,H,H,H,H,H,H,H,H,H,H) is just as likely as tossing (H,T,H,T,T,T,H,H,T,T,H,H,T,T) or any other sequence of realizations, the probability of that particular realization being 0.5^14. You are correct to say that, if these tosses represent the changes (map (H,T)->(-1,1)), the expected value of total change from t=1, to t=14 would be, 0. However, the confidence interval of that total change would expand.

    ==================

    Simple illustration:

    The total change is equal to the following: Sum(e_t, t=1:14), where e_t is i.i.d,. with mean 0, and variance sigma(e).

    The expected value is equal to E(Sum of changes)=E(Sum(e_t, t=1:14))=Sum(E(e_t), t=1:14)=Sum(0)=0

    The variance however is equal to Var(Sum of changes)=E((Sum(e_t, t=1:14)-E(Sum(e_t, t=1:14)))^2), where the second term is the expectation of the sum, which equals 0 (as per above). Eliminating it gives Var(Sum of changes)=E((Sum(e_t, t=1:14))^2).

    Now note that because e_t is i.i.d., E[e_i*e_j]=0 for i unequal to j. So the expression can be simplified to: E(Sum(e_t^2,t=1:14))=Sum(E(e_t^2),t=1:14)=Sum(sigma(e),t=1:14)=14*sigma(e).

    Now substitute n for 14 in that expression, take the limit of n->Inf, and you see what happens with your confidence interval. Asymptotically (n->Inf) the expected variance of the sum is infinite (hence, we are dealing with a nonstationary series).
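    (A quick Monte Carlo check of this algebra, if you don’t want to take my word for it; the numbers are arbitrary:)

        import numpy as np

        rng = np.random.default_rng(0)
        n, sigma = 14, 1.0

        # 100,000 realizations of the sum of n i.i.d. changes.
        sums = rng.normal(0, sigma, size=(100_000, n)).sum(axis=1)
        print(sums.var(), n * sigma**2)  # both close to 14: Var grows linearly in n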

    To make a very long story short: seeing a very high temperature level in 2000, having started in 1850, is not at all ‘unlikely’ or inconsistent with a ‘random walk’. I hope you also see now why that GRL comment is nonsense.

    ==================

    As for greenhouse gases following a random walk: they are I(2), so (again, a simple representation for the sake of exposition) they look something like this:

    G(t)=G(t-1)+eta(t)
    eta(t)=eta(t-1)+error(t)

    Where the error is independently distributed (same note as above). Note how we have to difference the series twice in order to obtain a stationary series on which we can perform valid statistical analysis.
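    (Simulating such a series makes the point visible; the placeholder errors here are standard normal:)

        import numpy as np

        rng = np.random.default_rng(0)
        error = rng.normal(size=200)
        eta = np.cumsum(error)  # I(1): a random walk
        G = np.cumsum(eta)      # I(2): a walk whose increments are themselves a walk

        d1 = np.diff(G)         # still nonstationary (this is eta, up to its start)
        d2 = np.diff(G, n=2)    # stationary: this recovers the original errors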

    You nicely depict my point here:

    “In climate change, we’re interested in the change over time (the sum total of realizations in the previous analogy), not in any particular yearly value (any particular realization in the previous analogy), which indeed has a very strong ‘random’ component to it if you will (natural weather related variability).”

    Indeed, we are interested in the sum of all realizations of changes! (take a look at the I(1)/I(2) series definitions above) However, what the GRL quote implies is that you guys take actual temperature levels (instead of temperature changes) as realizations coming from a bell shaped curve. In that case, observing a series of oddly high values would indeed be very ‘unlikely’. So, like I noted above, if these were ‘levels’, the author of the quote might (might!) have a point.

    Note also, that the Augmented Dickey Fuller test is designed to test exactly what you posted above, namely, whether it is likely that the underlying DGP is deterministic (trend) or if the series contains a stochastic trend. In doing so, the entire series is taken into account (the guys didn’t get two Nobel prizes for sloppy work :).

    In any case, the conclusions of these tests are unambiguous.

    PS. If we could simply ‘eyeball’ time series, and come to proper conclusions using our ‘intuition’, what would then be the purpose of statistical testing? :)

    PPS. Hmm, there goes my lunch break…

  14. Scott Mandia Says:

    VS,

    I need to read and re-read that Beenstock paper.

    Physics tells us that CO2 forces climate.
    We have increasing CO2 that is being measured.
    The concentrations today are the highest in the past 650,000 years and likely to be higher than at any time in the past 15 million years.
    The planet is warming.
    Models that use the best physics today can replicate this warming only by using increased GHG forcings. In fact, without GHG forcing we would be cooling.

    Although I admit I do not understand your statistics, it would appear to me that they must be wrong. You will need to explain away the above points if you believe that there is not a significant correlation between T and CO2. Can you do so in English?

    I ask this not to be snide, but it is the only way non-experts such as myself can be convinced that science has now been stood on its head. Sorry for my stats ignorance.

  15. VS Says:

    Hi Scott

    Let me try to explain this, indeed, in plain English. Don’t worry, skepticism is fine, and your interest is welcomed (and I’m running some numerical analysis here in the background anyway, so I have some time to spare :)

    First of all, we need to go to properties of the climate models themselves. In particular, we have to agree on one thing: the models are phenomenological, and not fundamental. Putting it differently, they are derived from lab-based experimental (and arguably fundamental) results, such as the ones you named above, as well as phenomena observed (e.g. warming, higher CO2 concentrations etc). They are however not derived from fundamental equations directly.

    These models, therefore, are rough approximations of the hypothesized mechanisms driving global temperatures. For starters, while we do have a general idea of the direction of the influences, since these models are not fundamental, the magnitude of all effects must be estimated. Furthermore, because these are not fundamental models, all model specifications are, in broad lines, opinions of researchers (i.e. they are guided by fundamental results, but are not fundamental themselves).

    The thing with phenomenological models is that they still have to be validated with observations. A necessary condition hereby is a proper correlation. If, after exhausting all of the methods we can think of, we still cannot find this correlation, we should really start questioning the model itself.

    In particular, if we cannot detect any significant warming as a direct result of increased CO2 concentrations, perhaps this effect is negated by other latent forces we are not accounting for in our models. In this case, predictions of catastrophic man-made warming are quite premature, and certainly not solid enough to base global policy on.

    Enter empirical testing through econometric methods (i.e. statistical modeling). I’ll try to explain, in as plain English as possible, what Beenstock and Reingewertz did.

    Let me try to explain the cointegration method first, very shortly, so that you understand what it is the authors are trying to do. Assume that you have two I(1) series (i.e. first differences are stationary, see above). The implication is that at time t, you have no idea which way each series will move at t+1. However, these two random walks can have what is called a common stochastic trend. In other words, the two series might behave randomly to ‘our eyes’, but they do so together, and will not stray from each other in the long run. Speaking in statistical terms: while the two series are non-stationary, a linear combination of them (the cointegrating relationship) is itself stationary. The beauty here is that we do not have to understand the entire (arguably hypercomplex) data generating process in order to establish a relationship between the two series. You can also think of it as a very elaborate, yet correct, method of establishing correlations.
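    (A toy illustration with synthetic data, using the Engle-Granger test from statsmodels; the two series below are built to share one stochastic trend, which is exactly what the test picks up:)

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(0)
        trend = rng.normal(size=500).cumsum()  # the common stochastic trend

        x = trend + rng.normal(0, 0.5, 500)    # two I(1) series that share it
        y = 2.0 * trend + rng.normal(0, 0.5, 500)

        t_stat, pvalue, crit = coint(y, x)     # Engle-Granger two-step test
        print(pvalue)                          # small: 'no cointegration' rejected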

    An established cointegrating relationship between, say, CO2 forcings and temperatures, would, in my eyes, be the first step in validating the man-made global warming hypothesis. However, the data seem to disagree.

    In our sample, we observe temperatures and solar irradiance as I(1). This implies that, as far as we can ‘see’, these series are behaving as random walks, and their first differences are stationary. Greenhouse gas forcings, on the other hand, are I(2), so only their second differences are stationary (first differences are also a random walk, observationally speaking). The issue here is that this results in these two series being asymptotically independent, i.e. they can never be (linearly) cointegrated.

    Kaufmann et al (2006), for example, simply ‘ignores’ this issue (although he does note it, strangely enough), claims that the test must be wrong and not the model, and then goes on to cointegrate them anyway. This is wrong.

    What Beenstock and Reingewertz do is much more sophisticated. They allow for higher order (non-linear if you will, but I’m being a bit sloppy here) cointegration, also called polynomial cointegration, between the I(2) greenhouse forcings and I(1) temperatures. However, once all the test procedures are properly applied, they clearly reject any long term relationship between these series. They furthermore find that solar irradiance is by far the biggest determinant of temperatures (very clear cointegrating relationship) while a permanent CO2 increase only has a temporary effect.

    To be honest, considering possible negative feedback mechanisms which could hypothetically deal with CO2 warming, I don’t think the result is that outlandish.

    For me, the lack of (long-term) correlation is truly a huge red flag. If we cannot match the hypothesized model with what we observe, then what does it stand on? Again, we are dealing with a phenomenological model, not a fundamental one. The model is therefore not set in stone.

    Now, I understand that the ‘reflex’ in this case is not to trust the statistics. My question then is: what should I trust?

    In economic theory there are (were) plenty of models that seemed logical, coherent and broadly in line with both observations and established micro-results (this echoes the ‘but the effect is physical and the Earth is warming!’…), but that simply failed the empirical tests. Note that many of these models performed well in simulations (i.e. they were able to ‘generate’ real world results), however, when put to a proper and rigorous empirical test, they were rejected.

    I combed through the latest IPCC report, and all I saw were simulations, simulations and more simulations (i.e. Chapter 8), and no proper empirical testing. That these simulations are able to ‘mimic’ real world processes is then taken as proof, which again, I find very awkward (i.e. even managing to correlate, proves nothing of the underlying causal relationship, it is a necessary, rather than sufficient condition, for validation).

    You also state that “Models that use the best physics today can replicate this warming only by using increased GHG forcings. In fact, without GHG forcing we would be cooling.” With this too, I’m skeptical. We have a rising trend in temperatures, and we have a rising trend in GHG forcings. Naturally, if our model is unable, due to its own defects, to account for the recent warming, we could ‘plug’ this hole by inserting the rising trend in GHG forcings. Without being snide myself here, I have to say that failure to model global temperatures properly doesn’t impress me, and certainly doesn’t constitute proof.

    I think that, judging by what I posted here, you can understand why I’m skeptical. I simply have seen no proper verification of the hypothesis. I also find it very troubling that nobody in the climate science community is truly addressing this question, and that whenever it is brought up, the results are dismissed immediately with “Hey, the effect is physical, so it MUST be there, the statistics are wrong..”

    At the same time, in statistical papers, like for example Kaufmann’s work, time and time again, the hypothesis trumps the statistical test (i.e. the test is rejected rather than the hypothesis).

    To me, this is the world upside down.

    NB. I might sound a bit harsh on Kaufmann here, but that’s just technical disagreement speak. I don’t think he’s a bad statistician, quite the contrary (even though that error in Kaufmann et al (2006), exposed by Beenstock and Reingewertz, was a bit lame, and should have been picked up by a reviewer).

    However, I do think that his ‘belief’ in the model he’s testing is obscuring his objectivity, and making him too tolerant to rejection.

  16. Bart Says:

    VS,

    You wrote: “In the context of ‘changes’, tossing (H,H,H,H,H,H,H,H,H,H,H,H,H,H) is just as likely as tossing (H,T,H,T,T,T,H,H,T,T,H,H,T,T) or any other sequence of realizations”

    That’s exactly the kind of argument that I responded to in my previous reply: It’s about comparing the sum total. If you replace the H and T by 0 and 1, I’m sure you’ll agree that a sum total of 14 is much less likely than a sum total of 7 (because the latter has many more individual realizations leading to its value of the total). You do seem to be applying the wrong kind of statistical test for the problem at hand.

    Moreover, and this point was also made in the RealClimate link I provided above, temperature is a physical variable which is bounded (by the laws of physics), while a random walk is necessarily unbounded.

    If you’re so sure that there is no meaningful trend, and that the observed variations are just random (notwithstanding the fact that all yearly temperatures of the past 30 years are higher than any of the yearly average temperatures between 1880 and 1910), perhaps you wouldn’t mind betting on future temperature change?

    Based on your random walk argument, I assume you’d accept 2:1 odds on the globe continuing to warm (i.e. you win 2x if you win (no change or cooling); I win x if I win (i.e. warming)). Perhaps you’d even take this bet for the next 5 years’ temperature average compared to the 5-year average from, say, 1901-1905, since it’s all a random walk anyway?

    The paper commented on on the PhysOrg site is here:

    Click to access zorita08grl.pdf

    “How unusual is the recent series of warm years?”

    I think you’re wrong in your description of climate models; to a large degree they are based on fundamental physics, such as radiative transfer. Check the model FAQs on RealClimate for a start.

  17. VS Says:

    Hi Bart,

    I went over this ‘random walk’ thing in detail above, please do read (I spent a lot of time typing, if only for that :). I’m also most definitely not applying the ‘wrong kind of test’. Take a look at the references in my first post and check the methodology, this is the test for the job.

    Also, excuse the authority fallacy (and ensuing ridicule ;), but I’ll trust two Economics Nobel Prizes with my statistics over some quote coming from a journal whose editor is so sloppy with statistics that he writes things like these in interviews with the BBC:

    “BBC – Do you agree that from 1995 to the present there has been no statistically-significant global warming?

    Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level.”

    Not significant at a 95% significance level? Wow, that’s really not significant… it’s significantly insignificant even ;) (leave the ‘warming’, leave the discussion we had above, I’m simply showing how sloppy he is with statistics)

    As for temperature being bounded, sure, but in our sample of observations, it is classified as a random walk via statistical testing, hence, it should be treated as such (again, statistics works with what we observe, not what we think we ‘should’ observe). Any inference ignoring these test results is spurious.

    As for climate models, here’s a definition from wikipedia of the word phenomenological as related to science:

    “The term phenomenology in science is used to describe a body of knowledge which relates empirical observations of phenomena to each other, in a way which is consistent with fundamental theory, but is not directly derived from theory.”

    So the climate models are phenomenological.

    As far as I know, even the greenhouse effect conjecture is not derived directly from fundamental theory…

    ..but I’ll have to leave that discussion to my physicist friends, as my theoretical physics is not… eh… good enough to debate the laws of thermodynamics :)

    PS. As for that bet, with those odds, I might even take you up on it… :D

  18. VS Says:

    PPS. I’ll look at this paper you posted over the weekend. It doesn’t look good though:

    “Different statistical tests of the stationarity of the global mean temperature have yielded conflicting results [Stern and Kauffman, 2000].”

    Error in citation… it’s not conflicting at all (most authors conclude I(1)). Also, see the tests/references I posted above.

    They then assume that it is I(0) with a high persistence… It looks very fishy, and way too short for what they are trying to do.

    I’m also not impressed by the bibliography, which includes two references to statistics papers (where they simply apply the method blindly) and for the rest only climatological stuff (while what they are trying to do is some kind of econometric analysis… again, ignoring an enormous body of literature).

    Anyhow, I’ll get back to you on this.

  19. Scott Mandia Says:

    VS:

    Thank you for the “English” explanation of the stats. Now I can see the reasons behind your skepticism, although I am still skeptical.

    It has been my understanding that solar variability has been relatively small since the late 1800s, with the IPCC estimating 0.1C of the 0.8C warming due to the sun.

    Have you seen the following:

    Feulner, G., and Rahmstorf, S. (2010). On the effect of a new grand minimum of solar activity on the future climate on earth, Geophysical Research Letters, in press.

    As discussed at this post over at Skeptical Science, the authors conclude:

    For both the A1B and A2 emission scenarios, the effect of a Maunder Minimum on global temperature is minimal. The TSI reconstruction with lesser variation shows a decrease in global temperature of around 0.09°C, while the stronger variation in solar forcing shows a difference of around 0.3°C. Compare this to global warming between 3.7°C (A1B scenario) and 4.5°C (A2 scenario). Considering the less variable solar reconstruction shows such strong agreement with past temperature, the authors conclude the most likely impact of a Maunder Minimum by 2100 would be a decrease in global temperature of 0.1°C. With all uncertainties taken into account, the estimated maximum decrease in global temperature is 0.3°C.

    How are the oceans getting their increased heat? I am simplifying but it is either due to a greater source of incoming solar radiation or a decrease in outgoing LW radiation. The sun does not appear to be responsible especially in the past decade which was observed to have a very low TSI but with a record warm climate. Of course, I do know that ocean heat release takes many years so there is a lag, but the heat content is increasing in the oceans so it cannot be blamed on lag.

    What about stratospheric cooling? Even accounting for ozone loss, there should not have been as much cooling if the sun were causing the warming.

    So I see multiple lines of evidence for GHG forcing and no alternative explanation. I wonder if, as Bart suggests, there are some underlying incorrect assumptions built into your analysis. Unfortunately, I am not equipped to figure this out, so I will defer to authority.

    I do appreciate your time and now I will seek help. Isn’t that what we all should do when there is a question asked that one cannot answer? :)

  20. Tim Curtin Says:

    Scott: We’ve had brilliant stuff from VS here, plus his cites to Beenstock & co-author 2010. Kaufmann et al are dangerous. Your own ref. to Feulner & Rahmstorf 2010 gives the game away. ‘Global’ temperatures are a compilation from often exiguous surface records at many specific locations. While atmospheric CO2 is “well-mixed”, and so for all practical purposes the same worldwide, the same is not true of TSI (total solar irradiance), which is measured at the TOA (top of the atmosphere) but is far from being the same at surface level everywhere; otherwise Khartoum would now have the same temperature as Stockholm (minus 5°C max yesterday), or vice versa! Kaufmann et al, who include David Stern at “my” own ANU, have never recognized the difference between TSI and surface solar radiation (SSR). That is a further reason (beyond those advanced by VS) why their papers are unsound. My own regressions of dT/dt = f(RF, dSSR/dt) for locations across the USA show zero stat. sig. coefficients for the RF of GHG, and highly sig. coefficients for SSR and other variables like RH (relative humidity).

    One major problem with both F&R as well as K. et al is that they fail to adjust the IPCC’s emissions scenarios for the associated variability of uptakes of atmospheric CO2 by the world’s oceanic and terrestrial Biota; as I have shown (Curtin 2009 at my website), along with Knorr (2009), the more the emissions, the greater the biotic uptake, pace IPCC.

    Knorr and I both show how the biotic uptakes have absorbed 57% of total anthropogenic emissions since 1958. As a result of that, the average rate of growth of the atmospheric concentration of CO2 (aka [CO2]) has been 0.41% p.a. since 1958. But that has not stopped the IPCC (Solomon et al 2007), Solomon et al again in PNAS 2009, and Kaufmann et al (2006) from assuming that [CO2] will grow at 1% p.a. from now so that doubling will occur by 2080, exciting, instead of 2161 or later, very boring.

    Email me at tcurtin@bigblue.net.au for my results for Point Barrow (Alaska) or any other major centre across the USA, from Mauna Loa to San Juan.

  21. Alan Says:

    So using statistics to try to isolate a trend indicating AGW is fundamentally flawed, because the data over 150 years cannot be considered to represent anything other than a stochastic process.

    Is that the argument between VS and Bart et al? I’m guessing from my stats-challenged existence.

    Maybe the problem is that the AGW discussion and search for ‘proof’ has gone not just into the individual trees of the forest, but into the leaves of an individual branch on the individual tree.

    Call me old-fashioned, but the AGW argument is not about statistics – it’s about physics, isn’t it?

    If I was sitting on an individual atom in a sealed glass beaker, observing the movements of all the other atoms (and mine), I would go “wow this is so stochastic!”. And if my task was to calculate what my atom’s energy state would be in the future I just couldn’t say … I can’t observe any trends.

    If I was sitting on a chair and this glass beaker was on the bench and I was shining a heat source at it … I reckon I could use basic physics to predict the average temperature in the beaker after 5/10/etc minutes. Individual atoms don’t matter. Simple physics can be applied. Stochastic micro-processes don’t matter.

    Let’s translate to the globe …

    If I was sitting on a chair in a park reading this blog on my laptop, and observing my micro-climate, I would go “wow this is so stochastic” … the wind gusts, the clouds move across the sun, etc. If my task was to calculate what my micro-climate would be in the future I just couldn’t say … I can’t observe any trends.

    If I was sitting on the moon looking down on the Earth, I would observe a ball with observable properties, fossil fuel burning behaviour, an atmosphere and a sun … I reckon the forecasting of Earth average temperatures boils down to physics. To be more accurate in the shorter term (decades) I would throw in probabilistic events like volcanoes etc.

    On the global scale, the apparent stochastic behaviour of trillions of micro-climate volumes over tiny time periods (decades) doesn’t hide the physics.

    But approaching the question of discernible temperature anomalies and trends and correlations with human behaviour with curve fitting … and then bogging down in arguments about whether it is statistically valid to do so … does take the eye off the physics arguments and is just sooo missing the point.

    Why is the AGW discussion getting drawn into this at all?

    I don’t get it.

  22. Bart Says:

    Alan,

    I think your analogies are spot on.

    VS,

    Climate models incorporate a lot of fundamental physics. Read e.g. here for a start. If you want to understand climate models, I recommend asking a climate modeler (e.g. at RC) rather than a physicist from another branch.

    Your specific issues with the paper by Zorita, Von Storch and Stocker could best be taken up at the blog of the first two.

    And as Scott pointed out as well, there are many changes in all parts of the climate system (air temperature, ocean heat, Arctic ice, ice sheets, glaciers, ecosystems). On top of that, measurements corroborate that less IR radiation is leaving the earth system now than a few decades ago: the enhanced greenhouse effect at work, and a clear sign of a forcing acting on the climate (i.e. it’s not purely stochastic, but we know that from physics already). This is not all merely a coincidence or a ‘random walk’. You’d probably claim that the change from an ice age to an interglacial was a random walk too.

    Check out this video for example, or these posts about the many lines of evidence for anthropogenic forcing of the climate.

  23. Alan Says:

    Thanks Bart!

    Those analogies were inspired by a mosquito … it was buzzing around my head and, try as I might, I couldn’t swat it … its movements were way too stochastic.

    So I shut the door of the office and unloaded the insect spray around the room … the sucker is dead!

    A benefit of graduating as an engineer … our motto is “if at first you don’t succeed, hit it with a bloody great hammer!”

    A

  24. Eli Rabett Says:

    Beenstock and Reingewertz are using a too short forcing record which biases their test for a hinged forcing (two straight lines). If you look at the NOAA AGGI, which is the best record of forcings since 1979, it is clearly linear, and the IPCC discussion shows historical forcings are pretty clearly hinged.

    This is what happens when someone who knows neither the data nor the theory butts in. They make fools of themselves.

  25. Heiko Gerhauser Says:

    Hi Bart and VS,

    I think I’ve found an easy way to illustrate the statistics issue, so that lay people can get it. Say you look at nails coming off a nail-making machine, and daily readings for a drinking water reservoir. The nails are independent of each other; the reservoir readings are not.

    So, in Excel you could model the nails by a series that adds a random figure to a fixed length. For the reservoir, you might wish to add a random figure not to a fixed quantity of water in the reservoir, but rather to the previous day’s value.

    Excel has a nice function for this: RAND(). It returns a random number between 0 and 1.

    Now, eyeballing the yearly temperatures, they tend to vary by a few hundredths of a degree per year.

    So, I put in a hundred random values between -0.03 and +0.03, that is RAND()*0.06-0.03

    This series has no underlying trend up or down. Yet it produces graphs that meander up or down quite a bit (I’ve put an example on my blog). In fact the way the graphs meander up or down looks quite a bit like the actual temperature data for the last century.
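    For anyone without Excel, here is the same experiment in Python (the constants are arbitrary):

        import numpy as np

        rng = np.random.default_rng(0)
        noise = rng.uniform(-0.03, 0.03, 100)

        nails = 5.0 + noise                # independent draws around a fixed value
        reservoir = 10.0 + noise.cumsum()  # each reading builds on the previous one
        # nails stays flat; reservoir meanders, despite zero underlying trend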

    —————–

    As said elsewhere in this thread, while the statistics are interesting, they also don’t say a great deal on their own. You can’t just analyse the numbers not knowing whether it’s nails or reservoir levels or global temperatures. You need to understand the underlying mechanisms.

    In addition, I must say that I think the temperature range given by the IPCC (0.7 +/-0.2C) is too narrow. I am dubious that we know the difference between 1995 and 2007 to better than +/- 0.1C and the difference between 2000 and 1850 to better than +/- 0.5C.

  26. Heiko Gerhauser Says:

    Hi Bart,

    if temperatures were up by 5C over the last 100 years and had not varied by more than +/- 1C over the last 1000 and not by more than +/- 2C over the last 10000, that would be pretty good evidence on its own that something to do with industrialisation might be the explanation for the rise.

    The actual picture as I see it is that temperature is up by 0.7C +/- 0.5C over the last 150 years and we don’t know how flat temperatures were in the last 1000 years, but anything between 1.5C colder and slightly warmer than at present looks consistent with the evidence we’ve got.

    Also, on the forcing and response side things aren’t that clear cut, thanks to aerosols and uncertainty about feedbacks. It boils down to: net forcing should produce something like 0.1 to 1.5C. And even that presumes that the linear climate sensitivity relationship is correct, and that clouds don’t act as a thermostat with tipping points from one stable state to the next.

    So, I would say that anthropogenic signal is just about barely visible in the temperature data, for experts with a great deal of understanding how the system should work. It is far from the type of hockey stick graph lay people could or should look at and come to a good conclusion independent of a good understanding of the climate system.

    Let me add this on the random walk thing. I do think that it is not appropriate to assume years are independent from each other. Clearly, the oceans store heat, so if due to random variations in cloud cover, one year is warm, then that’ll have an impact on the next year too. I am not sure how best to translate that into a statistical analysis that does make sense. But I am pretty sure that the confidence intervals and trend calculations of yours, which do in fact assume complete independence of years from each other with the random component for each year having nothing to do with the random component of the previous year, are not the way to go.

  27. VS Says:

    Eli Rabett,

    How interesting to see you here.

    You write:

    “Beenstock and Reingewertz are using a too short forcing record which biases their test for a hinged forcing (two straight lines). If you look at the NOAA AGGI, which is the best record of forcings since 1979, it is clearly linear, and the IPCC discussion shows historical forcings are pretty clearly hinged.

    This is what happens when someone who knows neither the data or the theory butts in. They make fools of themselves.”

    Oh really?

    On your ‘science’ blog, to which you link, you write:

    “Straightforwardly this is a claim that forcing has been increasing as a second order function, while temperature has only been increasing linearly. Given the noise in the temperature record, that is a reach as an absolute, but Eli is a nice Rabett.”

    Wrong. That’s not at all what they ‘straightforwardly claim’. An I(1) series doesn’t ‘increase linearly’ and an I(2) series doesn’t ‘increase as a second order function’, and I have no idea where you managed to get that from.

    Read the paper, read up on the methods (e.g. order of integration, cointegration, again, the Nobel prize website has a few nice Nobel lectures on it), and then “butt in”.

    It’s all explained a few posts back: the order of integration refers to the number of times you have to difference a series to obtain stationarity (and it has nothing to do with the order of the ‘polynomial’ shaping the curve). The order of integration of these series is furthermore determined via statistical testing (not ‘assumption’), and the BR findings are confirmed across all papers I read on the topic (see references in my first post).

    You propose then, that there is a structural break in there somewhere (by ‘eyeballing’ the data), and write:

    “The bunnies tossed back a few beers, took out the ruler and said, hey, that total forcing looks a lot more like two straight lines with a hinge than a second order curve, and indeed, to be fair, the same thought had occurred to B&R”

    ”BUT, the period they looked at was 1880 – 2000. Zeroth order dicking around says that any such test between a second order dependence and two hinged lines is going to be affected strongly by the length of the record. Any bunnies wanna bet what happens if you use a longer record???”

    Oh why don’t you try me, Mr. Rabett. Start by telling me what exactly happens to the distribution of the Augmented Dickey-Fuller test statistic once the number of observations is expanded. Be sure to use formal notation in your reply, and not ‘bunny wabbit has a hunch’.

    Also, I emailed Beenstock about the data he used when I first read the paper (outside of climate science, that’s quite normal), and he wrote me that all the data they use come from GISS-NASA. Are you suggesting NASA supplied the wrong data? Wasn’t this the data you recommended they use?

    So you say: “This is what happens when someone who knows neither the data or the theory butts in. They make fools of themselves”

    Indeed, that somebody makes an enormous fool out of themselves.

    Actually, I have to admit I also did some background research on you, Eli, and it turns out that this is not the first time you wade into a topic you have no clue about and smear / insult the scientists in question because you disagree with their conclusions. Just like here, where you pretend to understand something about statistics in order to ‘debunk’ BR, you pretended to be an expert on fundamental physics in order to ‘debunk’ the following publication on your anti-scientific smear blog:

    “Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics
    Authors: Gerhard Gerlich, Ralf D. Tscheuschner”

    http://arxiv.org/abs/0707.1161

    And Gerlich and Tscheuschner had the following to say about you on this New York Times blog (comments section).

    “First, let us start with discussing the identity of Eli Rabett. We have been informed that Eli Rabett is the pseudonym of Josh Halpern, a chemistry professor at Howard University. He is a laser spectroscopist with no formal training in climatology and theoretical physics.

    On 2007-11-14 one of us (RDT) sent Josh Halpern the following E-Mail:

    QUOTE:

    Josh Halpern alias Eli Rabbett –
    [If you are not Josh Halpern, then forgive me and delete this message immediately.]

    Apparently, believing to be protected by anonymity you (and others) want to establish a quality of a scientific discussion that is based on offenses and arrogance rather than on critical rationalism and exchange of arguments. Scientist cannot tolerate and endorse what is becoming a quality in weblogs and what is pioneered by IPCC-conformal virtual climate bloggers.

    I must urge you to reconsider.

    My questions to you:

    1. What is the most general formulation of the second law of thermodynamics?

    2. What is your favorite exact definition of the atmospheric greenhouse effect within the frame of physics?

    3. Could you provide me a literature reference of a rigorous derivation of this effect?

    4. How do you compute the supposed atmospheric greenhouse effect (the supposed warming effect, not simply the absorption) from given reflection, absorption, emission spectra of a gas mixture, well-formulated magnetohydrodynamics, and unknown dynamical interface and other boundary conditions?

    5. Do you really believe, that you can transform an unphysical myth into a physical truth on such a low level of argumentation?

    END-OF-QUOTE

    We did not get any response.”

    The whole answer by GT can be read here, comment 974. It’s worth the read.

    http://dotearth.blogs.nytimes.com/2008/01/24/earth-scientists-express-rising-concern-over-warming/?apage=39#comments

    This type of garbage uttered by individuals like you is exactly why this debate is so poisoned. I mean, seriously, this is part of the ‘denialosphere’ and BR are engaged in a ‘circle jerk‘?

    You, Dr. Halpern, are a disgrace to your institution and a disgrace to science. I am seriously considering compiling all this material I have on you and submitting it to some ethics commission at Howard University.

    PS Alan, Haiko and Bart, I’ll get back to the issues you posed (I’m very busy right now, but I couldn’t let Eli’s gibberish just sit there, unchallenged). Also Tim, that’s very interesting, I will certainly read your paper, but beware, I might email you about some of the data ;)

  28. VS Says:

    PPS. I meant ‘Heiko’, of course, my apologies :)

  29. Scott Mandia Says:

    VS,

    Are you endorsing the Gerlich & Tscheuschner paper?

  30. VS Says:

    Hi Scott,

    I don’t have the theoretical knowledge to either endorse or dispute that paper (I mentioned it in passing in an earlier post, but I also stated that I’m unqualified to pass judgement).

    Some of my theoretical physicist friends though, whose nuanced judgement on these matters I sincerely trust, have endorsed it. The most critical one of them still argued that, at the very least, they raise extremely interesting points.

    Whether GT are right or not, however, is beside the point here. The problem is the anti-scientific attitude, based on insults and baseless claims, encouraged by agitators like Halpern (aka Eli Rabett).

    Now that’s something I don’t endorse.

  31. Marco Says:

    Good grief, VS, you now repeat Gerlich’s nonsense about Josh Halpern?

    Gerlich apparently thinks that he, as a theoretical physicist (*), knows better than LOADS of physicists (which includes Josh Halpern) about thermodynamics. It is more than likely that Gerlich (and Tscheuschner) never ever work with thermodynamics in their field. And yet they scold the likes of Arthur Smith (notably a theoretical physicist) or Ray Pierrehumbert, who have tried to set them straight on multiple occasions.

    (*) A theoretical physicist with REALLY low impact, it must be said. He might want to put more effort into actually publishing something worthwhile.

    You should also read their paper. It includes loads of odd references, and open attacks on various scientists and scientific bodies (Hans von Storch they don’t like too much, either). Enjoy yourself just looking at the references and their polemic. Then come back to us about how trustworthy G&T are *as scientists*. Forget all about the topic of the paper, the writing explains it all.

  32. VS Says:

    Hi Marco,

    As you mention Smith, this might be of interest to you

    Click to access 0904.2767.pdf

    Anyhow, I don’t want to get dragged into a GT discussion because, as I stated, I don’t have the qualifications for it. Judging by the complexity of the matter, I doubt that anybody participating in this discussion here has that knowledge either.

    So I’ll leave it there.

    The nonsense Halpern just posted here (and on his own blog) however, where he couldn’t even get the definition of an integrated series right, I am qualified to judge.

  33. Heiko Gerhauser Says:

    Hi VS and Bart,

    I am no statistician though I like to dabble with the Excel RAND() function. If VS is saying what I think he is, namely that it’s not possible to see a clear trend in the data, if the random yearly addition/subtraction is cumulative, that makes eminent sense to me. And whatever Eli is on about in his post, it’s not about that.

    However, while I think the statistics are entertaining and interesting, and maybe my choice of the word “torturing” wasn’t quite right, I do wonder how the statistics relate to statements like “In this case, predictions of catastrophic man-made warming are quite premature, and certainly not solid enough to base global policy on.”

    “Catastrophic” is poorly defined. Say it means 5C by 2100; then we need a clear break upwards in the trend anyway. And that’s also what the modellers are saying. Much of the warming is masked by aerosols, and we presume that they won’t keep on increasing. But “catastrophic” could also mean 0C increase, and India turning into a desert with storm damage in the US tripling. How much, and more importantly how directly, do the statistics calcs really matter in that context? I think you are right on the statistics question; I am rather dubious whether this rightness means we should do less or should do more about climate change.

  34. Bart Says:

    From the comments section at http://tamino.wordpress.com/2010/03/05/message-to-anthony-watts/:

    “The farther away the actual temperature gets from the equilibrium temperature, the faster the system will attempt to regain equilibrium.”

    This is bounded by physics: temperatures continuing to wander off in one direction without a change in forcing would cause an energy imbalance, which would force the temperatures back to where they came from: equilibration. In general, long-term changes in global avg temp are the consequence of a non-zero radiative forcing, whereas temps jiggle up and down without a clear trend if there is no radiative forcing acting upon the system.

    The random walk argument “is the same mistake Pat Frank made in his ridiculous Skeptic magazine article purporting to show that the uncertainty in future temperatures grows without bound if you propagate uncertainty over time, leading to the absurd conclusion that the surface temperature of the Earth in 2100 is uncertain within hundreds of degrees. (A little basic common sense should have told him that there are basic physical reasons to expect that the climate is not going to be hundreds of degrees hotter or colder within a century’s time, and thus there might be something wrong with the way he was propagating uncertainty.)”

    The variation around the linear trend in global avg temp exhibits autocorrelation, which makes the estimation of the trend and especially the errors of the trend more tricky, but it doesn’t make an OLS trend useless in order to visualize what’s happening.

    See these posts that explain more about trend analyses of temp data and the nature of the ‘noise’:
    http://tamino.wordpress.com/2009/12/15/how-long/

    http://tamino.wordpress.com/2008/08/04/to-ar1-or-not-to-ar1/

    “Most regular readers here are familiar with autocorrelation in time series data. It’s the property that for the random part of the data (the noise rather than the signal), the values are not independent. Instead, nearby (in time) values tend to be correlated with each other. In almost all cases the autocorrelation at small lags (very nearby times) is positive, so if a given random term is positive (negative), especially if it’s large, then the next term is likely to be positive (negative) as well. For global average temperature, the random part of the fluctuations definitely shows autocorrelation. This makes estimates of trend rates from linear regression (or any other method, for that matter) less precise; the probable error from such an analysis is larger than it would be if the random parts of the data were uncorrelated. In fact, a great many time series in geophysics exhibit autocorrelation, which makes the results of trend analysis less precise, sometimes greatly so.”
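    To illustrate that last point numerically, here is a small sketch (Python, with made-up numbers rather than any actual temperature series): fit a linear trend to synthetic data with white noise and with AR(1) noise, and compare the spread of the estimated slopes.

        import numpy as np

        rng = np.random.default_rng(1)
        T, trials, true_trend = 100, 2000, 0.017
        t = np.arange(T)

        for rho in (0.0, 0.6):  # white noise vs. positively autocorrelated noise
            slopes = []
            for _ in range(trials):
                noise = np.zeros(T)
                for i in range(1, T):
                    noise[i] = rho * noise[i - 1] + rng.normal(0, 0.1)
                y = true_trend * t + noise
                slopes.append(np.polyfit(t, y, 1)[0])
            # the spread of trend estimates grows with the autocorrelation
            print(rho, np.std(slopes))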

  35. VS Says:

    Bart, I really don’t see your point. I don’t think you understand what an integrated series is.

    Let: Y(t)=rho*Y(t-1)+error(t)

    This is the standard expression for an AR(1), or first order autoregressive, time series process.

    An autocorrelated process is one with a rho unequal to zero. The higher the persistence, the harder inference becomes. And when the persistence is perfect, i.e. rho=1, we are talking about a first order integrated series, or I(1), which is no longer stationary (so no standard inference). You have to difference it to obtain a stationary series, namely (if rho=1):

    D_Y(t)=error(t)

    The series is also said to contain a unit root. Calculating linear, or quadratic, or whatever trends in this context is spurious. Your confidence intervals are furthermore meaningless.

    Cointegration, with other integrated variables, is a (the) method for multivariate statistical inference when dealing with series containing unit roots. Hence the Nobel prizes.
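    A short simulation (a Python sketch; the error sd of 0.1 is arbitrary) makes the distinction concrete:

        import numpy as np

        rng = np.random.default_rng(2)
        T = 150
        errors = rng.normal(0, 0.1, size=T)

        def ar1(rho):
            y = np.zeros(T)
            for t in range(1, T):
                y[t] = rho * y[t - 1] + errors[t]
            return y

        stationary = ar1(0.5)  # mean-reverting: wanders but returns
        unit_root = ar1(1.0)   # I(1): no mean reversion, variance grows with t

        # Differencing the I(1) series recovers the stationary errors: D_Y(t)=error(t)
        print(np.allclose(np.diff(unit_root), errors[1:]))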

  36. Heiko Gerhauser Says:

    Hi Bart,

    a system that actually is in equilibrium won’t move away from it. A closed vessel at constant temperature everywhere will not spontaneously develop temperature differences, not at the 1 mm, 0.001C, 1 s type level. One key problem I have with the argument is therefore that I don’t understand why the Earth should get warmer/colder even over a few months, unless there is a “forcing”, ie something that does in fact force the system away from equilibrium.

    Tamino doesn’t go into what causes the noise, which after all should not be there for a macroscopic system in equilibrium. Now there is an obvious reason we don’t expect the noise to random walk us to +20 or to -20C over a hundred years, namely that sort of temperature history is quite inconsistent with proxy indicators of past behaviour. It’s rather less clear that whatever causes the annual variations in temperature cannot random walk us 0.5C over a century.

    I am not a statistician, unlike apparently VS. I don’t know the exact meaning of the term autocorrelation. But I am pretty sure that unless we get away from the statistics to the underlying mechanisms, it’s bound to lead back to the fact that you can’t distinguish readily between a stackable random component and a trend of 0.5C over a century.

  37. Marco Says:

    VS:
    Ah yes, Gerhard Kramm. As bad as Gerlich, desperately trying not to get into the 2nd law of thermodynamics kerfuffle, and making several mistakes himself. Once again Ray Pierrehumbert (an atmospheric physicist himself) tried to educate these people, but failed due to the inability of Kramm (in particular) to learn.

  38. Bart Says:

    Heiko,

    I wholeheartedly agree that understanding the underlying physical mechanisms is key, and this seems to be missed by VS. I think the physics of the problem bounds the temperature to such a degree that it is not properly characterized as a stackable random component, though I’m not a statistician by any means.

  39. VS Says:

    Heiko you wrote:

    “I do wonder how the statistics relates to statements like “In this case, predictions of catastrophic man-made warming are quite premature, and certainly not solid enough to base global policy on.””

    You also raised some interesting points, such as the difficulty of detecting a man-made global warming signal in the data, and I’ll try to get back to you a bit later on that.

    Alan you wrote:

    “Call me old-fashioned, but the AGW argument is not about statistics – it’s about physics, isn’t it?” and “Why is the AGW discussion getting drawn into this at all? I don’t get it.”

    Bart, you agreed with what Alan wrote.

    Alright, allow me to elaborate on why statistics is relevant in this case. Let me start by stating that every, and with that I mean every, validated physical model conforms to observations. This is the basic tenet of positivist science. However, usually within the natural sciences, you can experiment and therefore have access to experimental data. The statistics you then need to use are of the high-school level (i.e. trivial), because you have access to a control group/observations (i.e. it boils down to t-testing the difference in means, for example).

    In climate science, you are dealing with non-experimental observations, namely the realization of our temperature/forcing/irradiance record. In this case, the demand that the model/hypothesis conforms with the observations doesn’t simply disappear (if it is to be considered scientific). It is made quite complicated though, because you need to use sophisticated statistical methods in order to establish your correlation.

    So correlation is, and always will be, a necessary condition for validation (i.e. establishing causality) within the natural sciences. If you don’t agree with me here, I kindly ask you to point me to only one widely used physical model, for which no correlation using data, be that experimental or non-experimental, has been established. Do take care to understand the word ‘correlation’ in the most general manner.

    Now, I’ve tried to elaborate this need in my previous posts, but I fear that we might be methodologically too far apart for this to be clear, so allow me to try to turn the question around.

    Let’s say that you have just developed a new hypothesis on the workings of our atmosphere. You read up on the fundamental results regarding all the greenhouse gasses, and the effects of solar irradiance on them. You also took great care to incorporate the role of oceans and ice-sheets etc into your hypothesis (etc. etc. i.e. you did a good job).

    Put shortly, you developed a hypothesis about the workings of (or causal relations within) a very complex and chaotic system, on the basis of fundamental physical results.

    Now, guys, tell me how you think this hypothesis should be validated? Surely it is not correct ‘automatically’, simply because you used fundamental results to come up with that hypothesis? There must be some checking up with observation, no?

  40. VS Says:

    Bart you wrote:

    “I wholeheartedly agree that understanding the underlying physical mechanisms is key, and this seems to be missed by VS. I think the physics of the problem bounds the temperature to such a degree that it is not properly characterized as a stackable random component, though I’m not a statistician by any means.”

    Understanding the underlying physical mechanism is indeed the key, I never disputed that. What I’m talking about here is the validation process (see previous post).

    I’ll try to elaborate on this tonight. My point in short is, that in our subsample of observations, on which we perform our statistical analysis (i.e. the past 150 years or so), temperature is in fact unbounded and can (‘must’ even) comfortably be described as an I(1) process.

    Again, I’ll try to find some time tonight to type that out. Can you in the meantime take a careful look at the definition of integrated series, cointegration and related statistical tests? :) You might not be a statistician, but this stuff is not that complicated to understand, and I feel you ought to be interested… ;)

  41. Tim Curtin Says:

    VS – once again I am most impressed by your contributions to this above-average thread. What is amazing is that even now Judith Lean can write a whole paper funded by NASA (Jan-Feb 2010, wires.wiley.com/climatechange) that ignores the whole issue of cointegration in reaching its conclusion that “most (90%) of industrial warming (sic) is due to “anthropogenic effects rather than to the Sun”.

    Here are my latest regression results, for July values of stated variables at Point Barrow from 1960 to 2006.
    Variable Coefficients Standard Error t Stat p

    RF 0.000785569 0.045665476 0.017202679 0.986358368
    dAVGLO 0.000693091 0.000444636 1.558782024 0.126734481
    dH2O 7.216078519 1.267334653 5.693901371 1.17822E-06
    dRH -0.16626401 0.072127426 -2.305142712 0.026294587
    dAVWS -0.419315814 0.331092703 -1.266460449 0.212496345
    Only dAVGLO, H2O, and RH are stat. sig. (adj. R2=0.44). What happened to Lean’s 90% for RF? Changing the RF values from absolutes to first differences actually turns the coefficient on RF negative! I confess these are probably naive results, but it will take a lot to get the RF up to significant. These results are little different from those I have for January 1960-2006 at Pt Barrow, although dAVGLO fades (and dAVWS becomes stat. sig.), because it is virtually total darkness there at all times in December and January.
    Notes: RF is radiative forcing of CO2; the others are first differences of AVGLO (= total horizontal surface solar radiation), H2O = precipitable water in cm, RH = relative humidity, and AVWS = average windspeed.

    So back to school for Judith Lean, VS are you available to coach her?

  42. Bart Says:

    VS,
    “There must be some checking up with observation, no?”

    Of course. See e.g. these graphs comparing model (i.e. hypothesis) results with observations: http://www.ipcc.ch/graphics/ar4-wg1/jpg/fig-9-5.jpg
    Top panel is with including all radiative forcings (natural and man made); bottom panel is with including natural forcings only (notably solar and volcanic on this timescale).

    In both cases it’s obvious that global avg temp is bounded; in the absence of a net forcing, it doesn’t keep wandering off in one direction.

    You’re very generous in giving advice, but seem less keen on taking advice from others. Have you read up on what climate models actually do (I provided some links in previous comments)?

  43. Bart Says:

    Tim Curtin,

    It appears that you only tested for the response to CO2. However, there are more forcings than just CO2. Climate responds to the net forcing.

  44. Tim Curtin Says:

    Apologies, I forgot to say that those regression results for July at Point Barrow were for changes year on year in max temps. The results for dTmin are very similar, with RF irrelevant to the actual record from 1960 to 2006.

    Bart: I have just checked AR4 Fig. 9.5. The “natural forcings” are, it appears, mainly TSI, which is irrelevant to global mean temperatures as derived from measurements at various surface locations with different latitudes and surface SR. Does this Chapter 9 of WG1 AR4 ever mention cointegration? I think not. Does it ever display any regression analysis results? NEVER. Moreover, the text (p. 684) states that Fig. 9.5 was actually derived from model simulations, and NOT observations as claimed in the caption and by you.

  45. Tim Curtin Says:

    Reply to Bart saying: “Tim Curtin, It appears that you only tested for the response to CO2. However, there are more forcings than just CO2. Climate responds to the net forcing”.

    Bart, my regression results clearly included both radiative forcing (RF) from increasing atmospheric CO2 and from direct “horizontal” solar radiation, “AVGLO”, in addition to “H2O” which represents any feedback from water vapour and from RH, relative humidity. What have I left out?
    The RF is always irrelevant – and often negative!

  46. Heiko Gerhauser Says:

    Hi VS and Bart,

    as Bart knows I am not particularly impressed by this comparison between models with anthropogenic forcings and without anthropogenic forcings. The fact that the forcings are rather poorly known kind of gets neglected there. I am also dubious about the claim that people just cannot construct models that reliably hindcast the last 100 years with a bit more variability. I can think of many things that could cause this variability, say aerosol formation over the Atlantic connected to dust storms, sea ice changes due to changing wind patterns. I’ve got the strong suspicion that failing to come up with a model that comes up with the right temperature path ex anthropogenic forcings is largely due to a lack of trying and the limited number of people qualified to write GCM’s. And anyway, what would be the point?

    Let’s postulate here that the data are poor and not yet suitable to do much sensible validation of the models. Don’t you think, VS, that we still need to make some stab at predicting the future? What do you propose given the data aren’t able to tightly constrain the models?

  47. Bart Says:

    Tim Curtin,

    In Fig 9.5 from AR4 (http://www.ipcc.ch/graphics/ar4-wg1/jpg/fig-9-5.jpg) the black lines denote observations, whereas the colored lines are the model results (with the ensemble mean as a thick colored line).

    Other forcing you left out: Notably aerosols, but also the non-CO2 greenhouse gases and volcanoes.

  48. Arthur Smith Says:

    VS – re your reference to Kramm et al’s silly attack article – I wrote here on Kramm’s incapability of understanding very simple explanations:

    http://arthur.shumwaysmith.com/life/content/why_are_some_people_so_easily_confused

    and apparently nothing has changed. You can choose to believe Kramm, or you could actually spend a little bit of time reading my article and thinking about it a bit… Up to you.

  49. Scott A. Mandia Says:

    I have to thank all of you for this excellent discussion. I feel I am learning much. It certainly appears that the human signature in the recent T record is not as clear-cut as I had thought. Having said this, I must quote Nobel Laureate Sherwood Rowland (referring then to ozone depletion):

    “What’s the use of having developed a science well enough to make predictions if, in the end, all we’re willing to do is stand around and wait for them to come true?”

    I fear that if we wait long enough for the statistical proof it will be too late to reverse the crash-course I and others are convinced we are on. Here is what I believe are the key points (some of which I have already made):

    1) We know that increasing CO2 forces climate change (warming).
    2) We have a pretty good idea that a doubling of CO2 will directly produce about 1K of warming (before feedbacks).
    3) There is reasonable probability that the resulting feedbacks will produce at least 2K additional warming (lower bound) with 3K more likely.
    4) We are also measuring CO2 increases of about 2 ppm/year and rising (except for 2008 due to decreased industrialization from the global recession).
    5) These increases are primarily from humans.
    6) About half of the increase from pre-IR times has occurred in the past 35 years.
    7) It is likely that today’s CO2 level is unique in the past 15 million years – certainly it is in the last 650,000 years.
    8) The last time that Antarctica and Greenland had no ice (approx. 50 million years ago), CO2 levels were 425 ppm +/- 75 ppm. Today’s values are already within that range.
    9) Sea levels were about 120m higher than today at that time.

    We are increasing CO2 rapidly and I think it is quite unwise to take a wait and see approach, especially in light of the fact that there appears to be no viable alternative explanation for the recent warming.

  50. VS Says:

    Hi Arthur,

    Thanks for responding.

    I actually ‘read’ all three of your pieces (so GT, you, and K et al), but I simply don’t have the knowledge to properly evaluate them on my own. Hence, I’m keeping an open mind with regards to this until I obtain more observations (talk about professional deformation ;).

    Perhaps I’m making a fallacy of false compromise here, but I cannot imagine GT, or you, or, in their turn, K et al, being completely ‘wrong’. Also, given the tone of the debate, as evident on various blogs (and escalated by individuals such as Halpern), it’s hard for a non-physicist to tell who’s closer to the truth. In particular, the more people resort to insults and ad hominems, the more skeptical I am about anything that comes out of a discussion.

    In any case, don’t you think that more (theoretical) physicists should involve themselves in the discussion? It seems like quite a significant debate, but it apparently ‘lives’ only in the blogosphere, and it’s been three years already since GT posted their (then) working paper online. I find that strange.

    One final question, and forgive me if I’m sounding ‘smart’ (please assume good faith), but I was wondering about this since the first time I saw your comment (i.e. 2008 comment, not the one here).

    GT claim that there is no rigorous (fundamental) derivation of the atmospheric greenhouse effect present in the literature. Your comment, as far as I gathered, is an answer to that.

    If what they say is correct, shouldn’t you submit it for publication? Again, if they are correct, then your comment should be quite a significant contribution to the literature.

  51. VS Says:

    Hi Bart,

    First of all, I am most definitely reading your links. As a matter of fact, some of them were very informative, and I thank you for the time you took to dig them up. They have however not refuted my main statements.

    Allow me to start with the autocorrelation link you posted here:

    http://tamino.wordpress.com/2008/08/04/to-ar1-or-not-to-ar1/

    This blog entry is clearly written by a non-statistician. In fact, I responded to this already here:

    Global average temperature increase GISS HadCRU and NCDC compared

    Contrary to what the author of that blog post claims in the comments, here:

    “[Response: The ADF test is really for the presence of a unit root in an autoregressive process, which is rather a different critter. A trend could easily fail significance without having a unit root.

    Econometrics is good stuff, but non-economic statistics is advanced as well. In science, trends are clues to the secrets of the universe — and in my opinion that’s better than money.]”

    the ADF test and autocorrelation are very closely related, and it is not ‘a different critter’. As a matter of fact, if you look closely at the equations I posted above, you can see that perfect autocorrelation, which we detect using the ADF test (i.e. rho=1), is an indication of the series containing a unit root. In particular, the H0 (null-hypothesis) of the ADF test is that the series contains a unit root, while the Ha (alternative hypothesis) is that it contains a deterministic trend.

    The test actually distinguishes between the two, that’s why it’s so important.
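    For those who want to check this themselves, here is a minimal sketch (Python, using statsmodels’ adfuller; the simulated series and their sd are made up purely for illustration) of how the test separates the two cases:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(3)
        t = np.arange(150)
        trend_plus_noise = 0.005 * t + rng.normal(0, 0.1, 150)  # trend-stationary
        random_walk = np.cumsum(rng.normal(0, 0.1, 150))        # unit root

        for name, series in (("trend+noise", trend_plus_noise),
                             ("random walk", random_walk)):
            stat, pvalue = adfuller(series, regression="ct", autolag="AIC")[:2]
            # small p-value: reject the unit root H0; large: cannot reject it
            print(name, round(stat, 2), round(pvalue, 3))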

    Now, Heiko and Bart,

    Heiko, regardless of the fact that you are not a statistician, you are displaying a good deal of proper intuition. In that sense, we are slowly but surely indeed arriving to the crux of the matter. Just like Tim posted above, simply ‘eyeballing’ the two different simulation results doesn’t really prove anything, and definitely doesn’t constitute a formal empirical verification of a hypothesis.

    If you want to use these model outputs for verification, there are some formal demands. We need to see a rigorously derived statistical test comparing the model outputs with the data. This derivation has, at the very least (!), to include the following components:

    1) Distribution of the test statistic under the H0 that the output corresponds to the underlying data generating process (DGP)
    2) The distribution of the test statistic under the Ha, where this alternative hypothesis is that the output doesn’t correspond to the DGP
    3) The derivation of these distributions has to account for the endogeneity of simulation results, namely the effects of using selected empirical (physical) outputs as inputs in the simulation: the issue here being ‘overfitting’. You would be surprised how much you can ‘fit’ without having any clue about the DGP.

    Without these elements, how can we (formally) distinguish between the validity of GCM outputs and any other simulation generated?

    Note that simply comparing the variance fitted (i.e. the equivalent of a R2 statistic) is a big no-no, and will result in spurious inference. You need rigorous testing.

    For example, regressing two, completely unrelated, I(1) series on each other results in an expected value of the R2 statistic of around 0.5. Tim, as you properly pointed out, this is the equivalent of what apparently happened to Judith Lean’s ‘regressions’ (even though I still have to read that paper, I presume you are not just making stuff up when you say she ignores unit roots :)
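    To see how dangerous this is in practice, a small Monte Carlo sketch (Python; the sample length and number of trials are arbitrary) regressing pairs of completely independent random walks on each other:

        import numpy as np

        rng = np.random.default_rng(4)

        def r_squared(y, x):
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)
            return 1 - resid.var() / y.var()

        # pairs of unrelated I(1) series: R2 is routinely far from zero
        r2 = [r_squared(np.cumsum(rng.normal(size=150)),
                        np.cumsum(rng.normal(size=150))) for _ in range(1000)]
        print(np.mean(r2), np.percentile(r2, 90))

    Whatever the exact average turns out to be, the point stands: with unit roots in the data, a large R2 on its own proves nothing.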

  52. VS Says:

    Hi Tim,

    Thank you again for your confidence, but there are many (many!), much more skilled, time series specialists that should ‘coach’ Lean :) Bart, the Dutch are quite good at it, perhaps somebody should get, say, prof. dr. de Gooijer, or prof. dr. Boswijk, or any Tinbergen fellow specializing in TSA for that matter, to do it :)

    While I have certainly had formal training in time series analysis, I’m in a different branch.

    What’s so strange about the whole debate, however, is that these tenets (which I’m elaborating on here) of modern statistical testing are not at all so ‘arcane’. Cointegration and unit root testing is widely taught, and should be a standard part of the toolkit of anybody wading into the analysis of time series.

    Clearly evident is the fact that this entire field is completely ignored in the debate. A few individuals such as Kaufmann and you are an exception, and whatever the differences in opinion and approach, I think both of you should be lauded for trying to draw attention to it. If there were no publications by Kaufmann, econometricians like B&R wouldn’t have been drawn into the fray. Now, this is progress in science. Mistakes are OK, as long as they can be weeded out, and the debate remains open and civilized.

    I think also that you are making an extremely valid and important point with the distinction between solar irradiance (TSI) and the radiation actually reaching the surface (SSR). Intuitively speaking the two series represent something completely different, and as far as I gathered, a basic condition for the greenhouse effect is that the sunlight actually reaches the ground (so this is the series you are actually interested in, it also helps in bypassing a part of the ‘cloud’ problem).

    By taking the satellite measurements, you in fact are ignoring all the variance displayed on the surface! As I stated earlier, statistical testing deals with explanation of variance. In this sense, you cannot first ‘artificially eliminate’ the variance from a series (so ‘averaging out’ is questionable), and then claim that the variance explains nothing (as many climate scientists do, awkwardly enough).

    As for the regression results you posted, I have a bit of a hard time interpreting them as they are stated. Could you also post your full specification? I presume you also found GHG forcings to be I(2) and temperatures to be I(1). How about the SSR series, also I(1), just as the TSI?

  53. VS Says:

    Correction.

    I wrote: “In particular, the H0 (null-hypothesis) of the ADF test is that the series contains a unit root, while the Ha (alternative hypothesis) is that it contains a deterministic trend.”

    That was sloppy. The Ha of the ADF test, put generally, is that the series is in fact stationary (and in this case ‘could’ contain a deterministic trend).

  54. VS Says:

    Another correction:

    I just saw that I wrote in my first post that the lag selection in my ADF tests was based on the Schwarz Information Criterion, or SIC. In fact, it was based on a related measure, the Akaike Information Criterion, or AIC.

    Using the SIC, which leads to no ‘lags’ being used, results in remaining autocorrelation in the errors of the test equation. That’s dangerous for inference.

    In the context of these temperature series, the AIC leads to 3 lags being employed, and successfully eliminates all remaining autocorrelation in the errors of the test equation (which has a deterministic trend as alternative hypothesis).

    Small issue, but I’d rather set it straight now, before somebody brings it up.
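    In software terms the difference is just the lag-selection setting. A sketch (Python/statsmodels, with a simulated stand-in series since I cannot attach the actual data here; note BIC is statsmodels’ name for the SIC):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(5)
        series = np.cumsum(rng.normal(0, 0.1, 130))  # stand-in for a temperature record

        for criterion in ("AIC", "BIC"):
            stat, pvalue, usedlag = adfuller(series, regression="ct",
                                             autolag=criterion)[:3]
            print(criterion, "lags used:", usedlag, "p-value:", round(pvalue, 3))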

  55. Tim Curtin Says:

    Bart: I must first apologise for misreading the caption to Fig.9.5, but it remains inadequate, as it does not specify how much of the models’ simulations of the observed global temperature record since 1900 incorporates the observed natural and anthropogenic forcings. For it is known that the models’ retrospective simulations include “tuning” to get them closer to observed climate. This feature of the models explains their inability to produce accurate projections when bereft of current parameter values.
    Secondly, reverting to your graphs at the beginning of this thread, you say “Temperatures jiggle up and down, but the overall trend is up: The globe is warming”. But the visual impression that that is the case depends heavily on the long period of negative and zero temperature anomalies for the period 1880-1930, when the instrumental coverage of the world’s surface areas was far from comprehensive. Africa was completely absent until 1910, Central America and SE Asia were little better until well after 1900 (see CDIAC’s or NOAA’s maps of % coverage by decade), and not much better until the 1940s. Your graph implies a range for the anomaly of no less than 1°C from low to high, when from 1940 to after 2000 it is only 0.3°C (and within error range). I note that the anomaly is based on the 1901-2000 record as baseline, but when the first thirty years of that period were notable for sparse global instrumental coverage, your comments and the graph are very misleading.

    Thirdly, in regard to my own regressions for data at Point Barrow, you say: “Other forcing you left out: Notably aerosols, but also the non-CO2 greenhouse gases and volcanoes.” Actually the aerosols come in through NOAA’s variables “TOT, OPQ, H2O, TAU – Average TOTal and OPaQue sky cover (tenths), precipitable water (cm), and aerosol optical depth (unitless)”. Apart from “H2O”, none of these proved to be stat. sig. at Pt Barrow except, marginally, TAU, and then, like OPQ, it was negative: (Adj R2 0.54; Dependent Variable Tmin July 1960-2006)
    #1 Intercept set at 0; 1st differences of RF and all other variables
    Tmin Coefficients t Stat P-value
    Adj R2 0.54
    dAVGLO -0.00052699 -0.700504939 0.487879215
    dH2O 4.51171828 7.428155148 6.52755E-09
    dRH -0.048537845 -1.38896361 0.172930893
    dAVWS 0.461368278 2.786369886 0.008271971
    dRF -18.49913419 -0.651741607 0.518490571
    dTOT 0.534587743 1.28946864 0.205028785
    dOPQ -0.823960411 -1.424652266 0.162419238
    dTAU -6.473953906 -1.53685917 0.13261355

    #2 Absolute RF, 1st differences for all others.
    RAdjR2=.55 Coefficients t Stat P-value
    Intercept -1.773208398 -0.13500869 0.893336799
    dAVGLO -0.000488581 -0.634314798 0.529778026
    dH2O 4.520517936 7.295976214 1.14733E-08
    dRH -0.048497097 -1.359099478 0.182342206
    dAVWS 0.45725584 2.712208284 0.010080598
    dTOT 0.501954286 1.193550476 0.240250731
    dOPQ -0.782868662 -1.32850411 0.192147753
    dTAU -6.150860845 -1.438031873 0.158829908
    R.F. 0.317643048 0.134203849 0.89396874

    Note: I use absolute RF (= 5.35*ln([CO2]t/280)) to give CO2 its best chance, especially as it is the total concentration that matters, not differenced changes therein, even though clearly it is not then a stationary variable; first or second differences reduce its significance even further.
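    For anyone wanting to reproduce the RF column, a sketch of the computation (Python; the CO2 values shown are illustrative, not my Barrow series):

        import numpy as np

        def rf_co2(c_ppm, c0_ppm=280.0):
            # simplified CO2 radiative forcing (W/m2): RF = 5.35 * ln(C/C0)
            return 5.35 * np.log(c_ppm / c0_ppm)

        co2 = np.array([317.0, 340.0, 370.0, 385.0])  # illustrative ppm values
        print(rf_co2(co2))            # absolute RF, as used above
        print(np.diff(rf_co2(co2)))   # first differences, as in dRF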

    Evidently neither surface solar radiation nor RF play well at Pt Barrow, but then the sun hardly does much there even in July – the max T reached 10°C only once between 1990 and 2006, and the rising RF from increasing [CO2] clearly did nothing to warm Pt Barrow in that period. As the other GHGs Bart complains about are collectively less than half the [CO2] component of RF (at less than 1 W/sq.m in 2005, of a total RF of 2.63), it seems hardly worth bringing them in.

    Similar analysis at Hilo (near Mauna Loa) shows the RF remains nugatory, and SSR (“AVGLO”) only becomes significant with annual data. Curiously the trends for both Max and Min T at Hilo appear to be down not up.

    Here are the results for Mean Min Temps July 1960-2006 at Hilo (which is the smallish coastal town at the foot of Mauna Loa volcano where the atmospheric level of CO2 has been measured since 1958). Not surprisingly, Relative Humidity proves to be significant at Hilo, unlike at Barrow.
    Adj R2=.50 Coefficients t Stat P-value
    Intercept -6.134288504 -0.687474786 0.495958458
    RF 1.112332724 0.6916983 0.493331595
    dAVGLO -0.000142251 -0.419898285 0.67692449
    dH2 1.949565126 4.832036925 2.24049E-05
    dTOT -0.959141223 -2.887802382 0.006369984
    dOPQ 1.148820878 3.331364883 0.0019328
    dRH -0.079159467 -2.396252796 0.02159141
    dTAU 3.025541608 0.750188316 0.457761293

    It is true, Bart, that I left out volcanoes, but they hardly belong in a time series analysis, it is 17 years since the last of any consequence, but perhaps Mauna Loa will be the next, it is far from dormant!

    More generally, Bart, it seems to me (and VS) that you climatologists make the mistake of working only with aggregates like TSI and Global Mean Temperature, without even distinguishing between maximum and minimum, and NEVER do the monthly analysis BY location that I do. And again I ask you to point me to any regression analysis in that Chapter 9, misleadingly entitled “Understanding and Attributing Climate Change” (AR4, WG1). Had its myriad authors (led by Hegerl with her links to CRU) and David Karoly done some regressions, they would not have been able to reach their conclusion of 90% likelihood “that humans have exerted a substantial warming influence on climate…”. Well, where’s the evidence for that at Barrow and Hilo where their atmospheric CO2 concentration is actually measured?

  56. Bart Says:

    VS wrote:

    “it’s hard for a non-physicist to tell who’s closer to the truth”

    That’s the crux of the matter as I see it. I addressed exactly this question in an older post. I think the common sense ‘hints’ that I assembled there can go a long way to separating the wheat from the chaff in the popular debate. For health issues, or any complex scientific subject with societal relevance, it is very similar.

    Btw, I wrote a new post outlining my thought on the ‘random walk’ hypothesis, argued mostly from a physical perspective.

  57. VS Says:

    TIm:

    “Had its myriad authors (led by Hegerl with her links to CRU) and David Karoly done some regressions, they would not have been able to reach their conclusion of 90% likelihood “that humans have exerted a substantial warming influence on climate…”.”

    I was already wondering where on Earth they got that probability of 90% from.

    A hypothesis is never true with a ‘probability’; this is the first thing you learn in statistics… it’s either true, or false… the only ‘probabilities’ in statistics are (if you did everything correctly) the probabilities with which you make Type I / Type II errors. A big conceptual difference.

    Do you, or anybody else here, have an idea how they arrived at this ‘probability’?

    Bart, you are quoting me out of context here, ts ts ts. The full quote is:

    “Also, given the tone of the debate, as evident on various blogs (and escalated by individuals such as Halpern), it’s hard for a non-physicist to tell who’s closer to the truth.”

    ..and it related to discussions of the greenhouse effect conjecture and fundamental physics.

    How about you respond to the validation issue we spent a couple of days discussing, before running off to another thread.

    In particular this post right here:

    Global average temperature increase GISS HadCRU and NCDC compared

  58. Tim Curtin Says:

    VS Many thanks for your kind comments. It is indeed weird that the IPCC’s AR4 conclusions use statistical terminology without ever using any modern methodology. As you said, “cointegration and unit root testing is widely taught, and should be a standard part of the toolkit of anybody wading into the analysis of time series”, but these are nowhere to be found in mainstream AGW literature.

    As for my regression results, as I explained in my long post responding to Bart, I have taken the liberty of not differencing the RF variable, to give the true believers their best shot, as it only gets worse if you do! You asked could I post my full specification? That will be in my paper, if I ever get to write up my results! I just used others’ claim that temperatures are I(1). For the SSR series, I would go to I(1), but as there appears to be no evidence of multi-collinearity in my differenced data regressions (unlike in the absolute values sets), I have just done as above, pending further tests.

    I look forward to your response to Bart on random walks.

  59. Heiko Gerhauser Says:

    Hi Tim,

    the conclusion (at least 90% probability that at least 50% of warming over the last 150 years is net anthropogenic) is fine. It’s based primarily on points 2 and 3 of Scott’s list (doubling CO2 gives 1K, water vapour feedback adds at least another 1K) and a tally of the forcings. In principle, it’s of course possible that negative feedbacks we poorly understand act thermostat-like. But I think it’s quite reasonable to demand good evidence for these purported feedbacks and, in the absence of that evidence, assume the simple physics of radiation and humidity dependence at constant relative humidity hold.

    I actually agree with you on the station issue, and disagree with the IPCC here. Their error band is in my opinion too small at just +/- 0.2C. But I don’t see that affecting the above statement of likelihood. Maybe temperature is only up 0.3C compared to 150 years ago, but then I’d think 100% or more than 100% of that is due to net anthropogenic forcings in all likelihood.

  60. Heiko Gerhauser Says:

    Hi VS,

    I think you are too focused on the need to validate the theory (points 2 and 3 of Scott’s list) with statistical methods against the temperature data. Let’s presume for the moment that the data are not good enough for that. I actually think points 2 and 3 of Scott’s list are very strong indeed, and the theory that needs validation is the one about negative feedbacks, and not just that these negative feedbacks are there now, but also that they’ll persist in the face of stronger forcings.

    Thermostats are often explained with central heating systems, but when it gets too cold, the heating will first no longer maintain a constant temperature and eventually it may break down all together due to frozen pipes.

  61. Bart Says:

    See here the IPCC guidelines on assessing and communicating the uncertainties.

    VS, Tamino is a professional time series analyst; he sure knows what he’s talking about. I refrain from commenting on the statistical details, because I lack the background. Instead, my reasoning is based more on physics.

    For lack of time, I’ll just mention these two links that seem relevant to the testing of models:
    http://www.realclimate.org/index.php/archives/2008/01/uncertainty-noise-and-the-art-of-model-data-comparison/
    http://www.thebulletin.org/web-edition/roundtables/the-uncertainty-climate-modeling

  62. VS Says:

    Hi Tim

    Yeah, the non-differencing was the first thing that caught my attention:

    “Note: I use absolute RF (= 5.35*((lnCO2t)/(ln(280)) to give CO2 its best chance, especially as it is the total concentration that matters, not differenced changes therein, even though clearly it is not then a stationary variable; first or second differences reduce its significance even further.”

    You ought to difference it though, as, like you state, it is not stationary. I think BR do a very good job at arriving at their specification :) Try applying their method to your local data. If I manage to reserve some time, I’ll email you about it; perhaps we can take a look at it together.

    As for Bart’s new post, I really have to collect some energy to dive into it again. I also thought I made it quite clear here:

    Global average temperature increase GISS HadCRU and NCDC compared

    “I agree with you that temperatures are not ‘in essence’ a random walk, just like many (if not all) economic variables observed as random walks are in fact not random walks. That’s furthermore quite clear when we look at Ice-core data (up to 500,000 BC); at the very least, we observe a cyclical pattern, with an average cycle of ~100,000 years.

    However, we are looking at a very, very small subset of those observations, namely the past 150 years or so. In this subsample, our record is clearly observed as a random walk. For the purpose of statistical inference, it has to be treated as such for any analysis to actually make sense mathematically.”

    I also don’t understand Bart’s problem with ‘unboundedness’. The whole point being that the variance of the error in the random walk process is limited, hence temperatures are de facto bounded on the very (very) small interval we are looking at (i.e. the bounded, glacial-interglacial, cycle is 100,000 years, our sample is a bit over 100 years… jeez, how complicated is this to understand?)
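    To put a rough number on ‘de facto bounded’: for a driftless random walk the standard deviation after t steps is sigma*sqrt(t), so with a purely illustrative yearly sigma of 0.1 degrees:

        import numpy as np

        sigma, t = 0.1, 150        # illustrative yearly sd (deg C), sample length
        print(sigma * np.sqrt(t))  # ~1.22: sd of the walk after 150 steps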

    Heiko,

    I understand where you are coming from, but the whole point is that the burden of proof is on THEM, not me or anybody disputing it. We have established, so far:

    1) There is no rigorous empirical proof that CO2 is (significantly) influencing temperatures

    2) GCM model outputs have not been formally tested for their ‘fit’, so those graphs make no sense (see my previous reply to you and Bart). Furthermore, they seem to perform rather badly in prediction, another red flag (I can’t find the reference right now, but some Greeks did a GCM prediction evaluation, and the outcome is that GCMs are pretty bad at it)

    3) Man-made global warming is a phenomenological, rather than fundamental, model, so given 1 and 2, it simply hasn’t been validated, ergo any conclusions need to be treated as hypothesis rather than fact.

    Allow me to elaborate here on 3. As far as I’m familiar with results from chaos theory, they imply that while we may understand all the individual components of a process (already a long shot in the case of climate, but OK), the aggregated effect of these components can still result in unpredictable behavior. This is furthermore a mathematical/physical result.

    So, just knowing the physical basis of a system, doesn’t mean that we can simply aggregate and extrapolate (not to mention aggregate and extrapolate AND leave out over half of relevant factors, e.g. clouds)

    Putting these three together, we simply cannot claim any certainty, and it is up to those making this extraordinary claim (i.e. that a trace gas will devastate the stability of our climate) to come up with some extraordinary evidence.

    So far, they have failed.

  63. VS Says:

    Bart,

    “VS, Tamino is a professional time series analyst; he sure know what he’s talking about.”

    That’s an authority fallacy.

    Please invite Tamino to come over here and clarify himself. I wrote down quite clearly why his comment is simply wrong in light of ADF test results.

    You might also want to compare what Tamino wrote with what is written here:

    http://en.wikipedia.org/wiki/Unit_root

  64. Bart Says:

    VS,

    Now you’re making some very dubious claims.

    Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). These findings provide ”direct experimental evidence for a significant increase in the Earth’s greenhouse effect that is consistent with concerns over radiative forcing of climate“. See also Scott’s points in his most recent comment.

    There is a lot of fundamental physics involved, and parameterized in climate models. Do as you claim, and refrain from stating a strong opinion (like claiming that a whole scientific field is wrong; note that such an extraordinary claim needs extraordinary evidence) about the physical nature of climate. Some humility would suit you.

    Perhaps read up on the history of climate science first before making such strong and unfounded pronouncements. http://www.aip.org/history/climate/index.html

  65. VS Says:

    Bart, what claim are you referring to exactly when you write:

    “Now you’re making some very dubious claims.” ?

    As for:

    “There is a lot of fundamental physics involved, and parameterized in climate models. ”

    Of course they are, as they should be. The point is that these results are not derived directly from fundamental theory, hence the models are phenomenological.

    See my comment here:

    Global average temperature increase GISS HadCRU and NCDC compared

    and here:

    Global average temperature increase GISS HadCRU and NCDC compared

    Question: If the GCM models were fundamental, how on Earth could you have differently parametrized GCM models describing the same system?

  66. Heiko Gerhauser Says:

    Hi Tim,

    there’s a reason to use the world average and not local temperatures. As mentioned in an earlier comment, local weather depends quite a bit on the direction of the wind, so that the difference between the winter averages of two years can easily be 6C. That just makes it that much harder to see any signal at all, unless local temperatures go up by like 5C and you have decades of data.

  67. VS Says:

    OK Bart.

    Let me state “So far, they have failed.” as an opinion then. Apologies.

    So far they have failed, in my eyes. My arguments are listed above.

    Now you stop calling cointegration and unit root analysis a ‘funky statistical method’. That’s Rabbett-speak, and I actually like the civilized tone of the discussion we are having here :)

  68. Heiko Gerhauser Says:

    Hi VS,

    what you are saying basically boils down to us not having certainty that thermostat like feedbacks negate the anthropogenic warming. Ok, so we don’t. But neither do we have much evidence for these presumed negative feedbacks, and it’s also clear that at some stage they’d be overwhelmed.

  69. Bart Says:

    Your claims 1, 2 and 3 are dubious, though there’s room for interpretation in what you exactly mean. E.g. there will hardly ever be 100% mathematical proof for anything in nature.

    The Greeks’ work is discussed here: http://www.realclimate.org/index.php/archives/2008/08/hypothesis-testing-and-long-term-memory/

    I’m starting to wonder, is your search for climate related information best characterized as a ‘random walk’, or are you specifically searching out research that comes to a particular conclusion?

  70. VS Says:

    Hi Heiko,

    Yep, the evidence is flimsy on all sides. However, this is not the picture painted by the IPCC.

    Also, purely out of interest, how are you so certain that:

    a) these feedback mechanisms will be ‘overwhelmed’
    b) and if (a) is true, that we’re very close to this happening

    Bart,

    My search for information is not a random walk, and I have argued all three claims in this 40-something-page discussion. Instead of simply ‘stating’ that I’m wrong, why don’t you tell me, with regard to, respectively:

    1) Where is the empirical proof (i.e. regression analysis)

    2) Where is the formal comparison of outputs with the data, as per my demands here: https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1284

    3) Are we still discussing the phenomenological nature of the GCMs? How about you answer the question I just posted above, namely: “If the GCM models were fundamental, how on Earth could you have differently parametrized GCM models describing the same system?”

    As a side note: you keep linking to realclimate, and to be honest, I don’t find Michael Mann’s personal blog the most reliable source on the internet. Especially as they are known to censor the comments section heavily.

  71. VS Says:

    Bart dammit! (excuse the agitation :)

    The post you link to on realclimate is taking Tamino’s blog (we discussed above) as a serious reference. How on Earth can I then take it seriously? Did you compare what Tamino wrote with what is written on unit roots (the definition is given in the wiki link above)?

    His claims are simply wrong. ‘Long term memory’? No, it’s in fact ‘perfect memory’ on our subsample, as the series contains a unit root.

    I feel I am now writing this down for the 10th time: Calculating a deterministic trend on a process containing a unit root is misspecification. Hence it is meaningless. That discussion at RealClimate is simply flawed in its postulates.
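    To see this for yourself, here is a minimal simulation sketch (Python with numpy/statsmodels is my assumption of tooling, and the data are simulated, not the actual temperature record): fit an OLS trend to a pure random walk, and the t-test calls the trend ‘significant’ far more often than the nominal 5%.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        rejections = 0
        for _ in range(1000):
            # A pure random walk: unit root, no deterministic trend at all
            y = np.cumsum(rng.normal(size=150))
            X = sm.add_constant(np.arange(len(y)))  # constant + time trend
            fit = sm.OLS(y, X).fit()
            rejections += fit.pvalues[1] < 0.05  # trend called 'significant'?

        # Far above the nominal 5%: OLS trend inference is invalid under a unit root
        print(rejections / 1000)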

    Besides, I don’t appreciate the ad hominems lodged by the author at the reviewers of the Greek paper I was referring to, namely (thanks for the reference ;):

    Click to access 2008HSJClimPredictions.pdf

  72. Heiko Gerhauser Says:

    Hi Bart,

    a while back I had a discussion with James Annan about the heat balance. I asked whether direct measurements of radiation coming in and radiation going out were good enough to come up with the balance of 0.85 W/m2 by which the globe ought to be warming according to Hansen.

    His answer was that the data were not good enough.

    Now you can also look at ocean heat content, because that’s where virtually all of the 0.85 W/m2 should be going.

    But on Roger Pielke’s blog it’s basically argued that these data are also too poor.

    Guest Weblog By Leonard Ornstein On Ocean Heat Content

  73. Bart Says:

    VS,

    The large changes in climate in the past can only be explained by a climate sensitivity of around 3 (+/-1) degrees per doubling, which includes positive feedbacks. Many of these positive feedbacks are also clear from modern observations (eg of water vapor) and theoretical modeling compared with measurements (eg of the carbon cycle). Sure, there is uncertainty, but don’t confuse that with knowing nothing. Certain values of the climate sensitivity have much stronger evidence behind them than others.

    For changes in past climates, see eg this very good presentation: http://www.agu.org/meetings/fm09/lectures/lecture_videos/A23A.shtml
    (towards the end he talks about climate sensitivity)

    Other evidence for climate sensitivity not being much smaller or greater than three: http://julesandjames.blogspot.com/2006/03/climate-sensitivity-is-3c.html

    RC is a blog of a group of climate scientists; it’s not Mann’s personal blog. Besides, you’re dismissing Mann and Tamino very lightly, which smells like an ad hom too. Re RC’s review of Koutsoyiannis, I think they took issue with them relying very strongly on long discredited arguments from deep inside the “skeptical” corner. If you take issue with such statements, then the best course is to follow the trail back in time, backed up with a solid grasp of the scientific knowledge.

  74. Bart Says:

    Heiko,

    There are different estimates of ocean heat content, and while some go down to 700 m depth, others go down to 2000 m. The former seems to have flattened since 2004, whereas the latter has continued increasing.

    See resp
    http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/
    and
    http://www.skepticalscience.com/empirical-evidence-for-global-warming.htm (last fig)

    Both exhibit short term variability of course, so I wouldn’t conclude too much from apparent plateaus (esp if they were preceded by a strong increase).

    I came across this link as well, which discusses the random walk concept:
    http://www.skepticalscience.com/The-chaos-of-confusing-the-concepts.html

  75. Heiko Gerhauser Says:

    Hi VS,

    I am not sure we are “close” to the point, but then neither do I see much evidence for strong, resilient negative feedbacks in the first place. Lindzen has been trying to come up with something, but I don’t find it that convincing.

    Look at the temperatures of the other planets in the solar system. Jupiter may have a very chaotic atmosphere with the potential for thermostat like feedbacks just like Venus or Earth, but it’s rather colder. You can also do some back of the envelopes with assumptions about how reflective clouds can get or how low relative moisture might get. This does leave some room.

    On the other hand, there’s also room for a runaway towards 100C plus once the ice sheets have melted.

    On realclimate and Tamino, I can understand why you may feel annoyed by them, but consider how you come across to Bart too. Having talked to him in person yesterday I positively know that you could improve on that score.

  76. VS Says:

    Hi Bart,

    I’ll check the links you posted later, but let me respond in short:

    My issue with Mann has to do with his dubious reconstructions. As I stated I also side with the Wegman report on it.

    See my post here:

    Global average temperature increase GISS HadCRU and NCDC compared

    My issue with Tamino is elaborated here (and a bit further, again):

    Global average temperature increase GISS HadCRU and NCDC compared

    These are not ad hominems, just strong disagreements, based on arguments:

    Posted under that RealClimate post, however, are unfounded attacks like this:

    “…touching all the recent contrarian talking points (global cooling, Douglass et al, Karl Popper etc.) but is not worth dealing with in detail (the reviewers of the paper include Willie Soon, Pat Frank and Larry Gould (of Monckton/APS fame) – so no guessing needed for where they get their misconceptions)”

    I really, really, detest this type of ‘discussion’. It is based on insults, insinuations, authority fallacies (e.g. we’re the ‘scientists’, you’re a ‘layman’, so shut up), and many other unproductive statements.

    This MUST stop. The debate is being poisoned by individuals. I understand the pressures, and the fact that many of these individuals strongly believe that they are ‘saving humanity’ and must ‘take action NOW’, but this tone isn’t helping at all.

    Surely you must agree with me here.

    Also, as a side note: I think that the word ‘discredited’ is used too loosely within the climate science community. Apparently, as soon as a paper disputing a specific result appears in a climate science journal, that result is ‘discredited’.

    That’s not how science works. The opinions of three/four reviewers, who often know each other and the author(s), are not sufficient to ‘discredit’ something, especially if many scientists disagree with them.

    Sorry.

  77. VS Says:

    Hi Heiko,

    “On realclimate and Tamino, I can understand why you may feel annoyed by them, but consider how you come across to Bart too. Having talked to him in person yesterday I positively know that you could improve on that score.”

    I can imagine, but trust me, I’m trying to keep it ‘cool’. Try to assume good faith :) The debate is heated, but I think we are keeping it quite decent.

    Keep in mind that while Bart feels his discipline is under attack, my discipline is, in my eyes, being completely abused here by various individuals.

  78. Heiko Gerhauser Says:

    Hi Bart,

    in the guest post on Pielke’s blog it’s basically argued that there’s also a possibility that ocean heat content below 2000 m might change and that that is really poorly sampled. I don’t really understand this particular issue that well, my feeling is that ocean heat content measurements are supportive of warming over the last 30 years and mildly supportive of a continued radiative imbalance, but that the uncertainties are large.

  79. VS Says:

    Hi guys,

    I might be ‘contradicting’ myself here, but I just bumped into this new paper. So in the spirit of open and fair discussion, here goes:

    Click to access wip04.pdf

    It basically says the opposite of BR.

    They actually use a dynamic panel setup (so that’s both a cross-section and a time element) to estimate the warming etc. effects. They don’t use averaged temperature measurements, but individual weather stations (hence the panel dimension), and they include aerosols in their analysis. I haven’t read it carefully yet, but I don’t think they use SSR in this case.

    Tim, this could be a very interesting addition.

    However, they, for some strange reason, don’t even mention unit roots, which, just as in regular time series, are a severe problem in panel datasets. This is especially strange, because Jan Magnus is a good econometrician (!).

    Once I find the time to read the paper carefully, I will email him to ask him about it, and I’ll keep you posted.

  80. Bart Says:

    VS,

    Does telepathy exist after all? I was just going to post a link to that same paper by econometricians from Tilburg University. I’ve only skimmed it so far. They find a higher climate sensitivity than most, the reason for which is not clear to me yet. I’m curious about your opinion on this analysis indeed!

    Re the debate being poisoned: From where I’m sitting, it is poisoned by the likes of Soon and Monckton. It’s not about who is a layman or not; it’s about cherrypicking and twisted logic to arrive at unfounded claims. There ARE a lot of empty talking points going around, and they keep resurging irrespective of their flimsy nature. That is what is poisoning the popular debate.

    A lot of these talking points resemble the argument along the lines of “I see a bird flying in the air. Therefore the theory of gravity is wrong“.

    Now apparently few people see gravity as a threat to their way of life. For AGW, that appears to be different.

  81. VS Says:

    Hi Bart,

    I couldn’t resist, and I read it (perhaps I should switch to climate econometrics, it seems it’s eating up most of my free and some of my not-so-free time recently ;)

    I’ll run my comments past some fellow (and especially senior ;) econometricians as well (a few of them specializing in exactly this kind of analysis, namely dynamic panel models).

    If they endorse my concerns, I will email Magnus personally.

    In particular, the first thing that caught my eye is that they find a dangerously high autocorrelation coefficient (the ‘persistence’ we were talking about earlier), namely 0.91 (this is the autoregressive coefficient beta1, listed in equation 11 of the paper).

    In light of the unit roots (corresponding to perfect persistence) found in temperature series, this should raise some concerns. Keep in mind that if a series contains a unit root, regular inference is invalid (hence the ADF tests), so spuriously high persistence estimates are in fact exactly what you will find in such series.

    Allow me to illustrate.

    The ADF test on, say, the CRUTEM3 data doesn’t reject I(1) under any of the alternative specifications (i.e. no intercept; intercept; intercept and trend; again with AIC lag selection and MacKinnon (1996) one-sided p-values). I(2) however, is clearly rejected. This is the basis of the conclusion drawn by most authors (references in first post) that temperature is in fact I(1).

    However, if we simply choose to ‘ignore’ these test results, and go ahead and estimate the temperature series as an AR(1) stationary process, we get the following estimation results:

    Variable    Coefficient    Std. Error    t-Statistic    Prob.
    C           -0.048767      0.112268      -0.434377     0.6648
    AR(1)        0.864812      0.046451      18.61792      0.0000

    R2 = 0.73

    If we repeat this exercise with a simple deterministic trend, we get:

    Variable    Coefficient    Std. Error    t-Statistic    Prob.
    C           -0.788446      0.082953      -9.504749     0.0000
    @TREND       0.007374      0.000812       9.086511     0.0000
    AR(1)        0.547613      0.074577       7.342909     0.0000

    R2 = 0.78

    Note how, when ignoring the established I(1) property of the series, our (spurious!) estimate leads us to conclude that the persistence term is in fact equal to 0.86 (and significantly different from 1!). If we include a deterministic trend, it even drops to 0.55.
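    For those who want to replicate the exercise, here is a rough sketch (Python/statsmodels; shown on a simulated I(1) series since I can’t attach the data here — plug in the actual CRUTEM3 anomalies to reproduce the numbers above):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller
        from statsmodels.tsa.arima.model import ARIMA

        # Simulated I(1) stand-in for the anomaly series
        rng = np.random.default_rng(0)
        y = np.cumsum(rng.normal(scale=0.1, size=150))

        # ADF: H0 = unit root. On the levels H0 is typically not rejected...
        print('levels   p =', adfuller(y, autolag='AIC')[1])
        # ...while on the first differences it clearly is (so not I(2))
        print('1st diff p =', adfuller(np.diff(y), autolag='AIC')[1])

        # Ignoring the unit root and estimating a stationary AR(1) anyway
        # yields an AR coefficient biased below 1, like the 0.86 above
        print(ARIMA(y, order=(1, 0, 0)).fit().params)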

    This simple analysis, however, doesn’t constitute either a rigorous proof, or a ‘refutation’ of Magnus et al, or even strong verification of my own ‘hypothesis’.

    In my eyes, it does however raise somewhat of a red flag.

    Hence, I’ll investigate further, and I’ll get back to you guys here.

    PS. I also found it curious that they basically ignored the I(2) property of GHGs that is established in the literature… but that’s a different story altogether.

  82. Tim Curtin Says:

    Thanks VS for link to Magnus.

    After a quick scan my first reaction is that the paper has great interest but some basic misconceptions. Magnus et al say early on: “When we observe an increase in temperature, we observe only the sum of the [warming] greenhouse effect and the [cooling] radiation effect, but not the two effects separately.”

    Perhaps this explains the apparent negative effect on temperature of “Global” solar surface radiation in July at Hilo and Barrow, but it seems odd not to allow for any warming effects from changes in SSR other than from more or less dimming from aerosols, unless and until the local “dimming” effects are documented in full.

    Magnus et al go on “Our purpose is to try and identify the two effects. This is important because policy makers are successful in reducing aerosols (which has a local benefit) but less successful in reducing CO2 (which has a global, but almost no local benefit). Reducing aerosols will cause cleaner air, but also more radiation (‘global brightening’), thereby reinforcing the greenhouse effect.”

    However, they ignore the very large local and global benefits from rising atmospheric CO2 in terms of the well-attested growing NPP associated with it (as I have documented in my peer-reviewed paper “Climate Change and Food Production”, 2009, at my website). This effect stems from the increased partial pressure of atmospheric CO2 resulting from the higher atmospheric concentration (Lloyd and Farquhar, passim). Reducing that concentration from today’s 389 ppm to 350 ppm as proposed by Hansen and – in effect – CoP15 must, ceteris paribus, have a negative impact on the growth of NPP associated with the average annual 57% biotic uptake of CO2 emissions since 1958, now at around 6 GtC pa from emissions of over 10 GtC pa. Reducing that incremental uptake to less than 2 GtC pa (as implied by a 60% reduction in emissions from 2000 levels) will hardly have a positive effect on NPP and world food production.

    More generally, I see that Magnus et al at no point test for auto-correlation (Durbin-Watson) or unit roots etc. Another random walk anyone?

  83. Bart Says:

    VS,

    I notified Tamino of your “invitation”. Even though I’m happy to host this interesting discussion here (the statistical details of which go over my head), a more efficient way of communicating with him may be to go over to http://tamino.wordpress.com/ yourself and bring it up with him directly (the subject has already been brought up in the latest thread).

  84. VS Says:

    Hi Bart,

    I left a short reply there, it’s ‘stuck’ in moderation. I’ll repost it here, together with Tamino’s ‘reply’:

    “VS // March 9, 2010 at 2:31 pm | Reply

    Hi Tamino,

    I find it interesting that you claim that ‘I’ personally failed my ‘ADF’ test. You might dispute my test results (posted on Bart’s blog), but are you claiming the same for all these studies as well?

    ** Woodward and Grey (1995)
    – confirm I(1), don’t test for I(2)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Kaufmann et al (2006)
    – confirm I(1), (they state however that temperatures are ‘in essence’ I(0), but their variable GLOBL is confirmed to be I(1))
    ** Beenstock and Reingewertz (2009)
    – confirm I(1)

    …I’m sure there are others.

    Temperature may be ‘bounded’ over its long 100,000 year cycle (as observed over the past 500,000 or so years); however, on the subsample of 150 years or so on which we are formally studying it, it can easily be classified as a random walk.

    Keep in mind that the limited variance of the first difference errors de facto keeps it bounded over this period.

    You are however welcome to hop over to Bart’s blog and respond.

    And:

    “There are so many bonkers theories from so many bonkers commenters, we’ll just have to take ‘em one at a time.”

    Let’s try to keep it civilized, OK?

    [Response: There’s nothing uncivilized in calling your claims bonkers, because they are. Frankly, the label is better than you deserve. As for failing your ADF test, you just plain got it wrong.

    But as I said before, you are not important enough to deserve a distraction from my present efforts. I’ll get around to you, but in the meantime you can wait.]”

    If this is another Halpern/Rabbett-like character, I seriously have no interest in engaging in a discussion.

    PS It’s very interesting how he claims I, together with all these other authors ‘got the ADF test wrong’, without naming a single argument, because he’s ‘too busy’.

    Incredible, the level of the ‘debate’ on these blogs… I’m actually starting to wonder if all of this is worth MY time.

  85. VS Says:

    Note:

    My comment just got out of moderation, and he added this to his ‘reply’:

    “It’s either a complete failure of understanding on your part, or dishonesty, that causes you to misrepresent the work of Kaufmann & Stern. As for Beenstock & Reingewertz, their claims are loony.”

    Wow.

    Actually, I know enough already.

  86. VS Says:

    PS. Bart, tell me please, WHAT ON EARTH is wrong with this entire clique?

    How come NOBODY can discuss NORMALLY with somebody who they disagree with?

    Is this a ‘scare tactic’ or something? Are they (or perhaps I should say ‘you guys’, because you do tend to endorse these individuals) trying to keep the reasonable people out of the debate?

  87. Marco Says:

    @VS: what is ‘wrong’ with these people is that some math, which may or may not be accurate or even relevant for the situation at hand, supposedly trumps observations and well established physics.

    Stuff like a cooling stratosphere with a warming troposphere simply does not fit with global temperatures being a mere random walk (and most certainly not with solar influence being the driver). The enhanced greenhouse effect *does* explain both observations (cooling stratosphere, warming troposphere).

    When somebody then comes along and claims “you are wrong, they are right, just look at the math”, they wonder how somebody can just make these claims and thereby neglect observations that do not fit said claim. It should make a statistician or mathematician a bit more humble if their math makes a claim that essentially contradicts an observation.

    And Tamino *is* busy. He’s writing a paper that should put Anthony Watts to shame. Watts is getting credibility from so many people that his false claims require immediate attention.

  88. VS Says:

    Marco,

    You write:

    “It should make a statistician or mathematician a bit more humble if their math makes a claim that essentially contradicts an observation.”

    Actually, statistics is the discipline that formally deals with observations. I also gave an explanation about the ‘random walk’ interpretation in numerous posts in this thread.

    Finally, when I said ‘ what on Earth is wrong’, I was talking about the tone. Are you endorsing that tone? Judging by your earlier posts in this thread, I think you might be.

  89. jfr117 Says:

    VS you are not alone in your shock at how shrill these blogs become when you ask a question. it’s really quite sad for science. tamino is the worst one and an example of elitism that makes his message impossible to swallow for many people.

  90. Marco Says:

    @VS:
    What tone do you expect when you walk into a room and say “Oi guys, you’re all wrong, I’m right!” A cheering reception?

    Do remember that it is not the first time someone runs in and makes large claims. Somewhere the kindness stops. Endorsing the tone is not the right word, I *understand* the tone. It’s a result of many, many, many claims by people of “the ultimate proof” that AGW is a hoax, wrong, fraud, whatever, only to be proven wrong (but not admitting as such).

    Let me, in the case of the Israelis, repeat two problems with their analysis:
    1. The observations fit the physics: cooling stratosphere, warming troposphere. The analysis based on the observations by the Israelis does *not* fit the physics. I’d do some major thinking before essentially claiming “the physics are wrong”.

    2. Based on their analysis, the same magnitude of forcing (in W/m2) for solar and CO2, gives a *different* warming (ca. 1.5 vs 0.5 degrees). Another result that contradicts basic physics. Throw away basic physics? Ah, in this case rather problematic, because several aspects of basic physics (the data in particular) were the *input* for the analysis. A circular argument follows: data input=analysis contradicts basic physics=basic physics is wrong=data cannot be used=analysis cannot be performed!

    Two aspects where the analysis results in direct contradiction to known physics. Do you really doubt the physics? Or do you perhaps take a really good look at the math you chose (using ADF *is* a choice) and maybe check whether it really is suited for the type of data you are analysing?

  91. MartinM Says:

    A hypothesis is never true with a ‘probability’, this is the first thing you learn in statistics.. it’s either true, or false… the only ‘probabilities’ in statistics are (if you did everything correct) the probabilities with which you make a Type I / Type II errors. A big conceptual difference.

    If you ignore the entire field of Bayesian statistics, sure.

  92. VS Says:

    Marco,

    1) How does it not fit ‘the physics’? Because they reject runaway warming due to CO2? Or are you referring to something else? Explain please.

    2) Errors in functional relationships can also be the cause. The models are phenomenological rather than fundamentally derived, so this is a realistic possibility. Are you aware of some fundamental physical model that violates statistical findings in such a way?

    Now here’s a novel thought, could it perhaps be the case that we do not completely understand how our climate functions?

    Whatever the case, BR have made a valid addition to the debate, especially as they have employed the most proper statistical analysis I have so far seen in this context.

    The poisonous tone is still not justified.

    MartinM

    Are you suggesting that the “90% probability that modern warming is caused by man-made emissions” is derived via Bayesian statistics? Over there I asked for a reference for that calculation, perhaps you can provide it. I’m honestly interested.

  93. VS Says:

    jfr117

    I have no idea where his ‘elitism’ comes from. Perhaps the fact that his ‘fans’ know less about statistics than he does got to his head.

    I studied under some extraordinary statisticians/econometricians (some of whose estimators you can find in popular software packages), and am familiar with both their work and their modus operandi. None of them would ever display this kind of behavior when challenged on a technical matter.

    Statistics is a very counter-intuitive discipline, and I have been taught that when somebody with less formal training in statistics doesn’t seem to understand what you are saying (and you are convinced that you are right), you do your very best to explain it.

    That’s at least what I tried to do in this thread, in spite of the warnings of a few of my friends not to get my hands dirty on this stuff. It seems that they might have been correct.

    You’re right, this has nothing to do with science anymore.

  94. Timo van Druten Says:

    VS / Tim Curtin/Bart,

    I think you can forget the paper from Jan Magnus et al.

    http://climategate.nl/2010/03/09/four-degrees-warming-in-2050-oops-you-used-the-wrong-dataset/

  95. jfr117 Says:

    tamino is a caricature of all that is bad about climate science. he may be smart, but that becomes moot by turning off everybody but your followers. what’s the point of preaching to the choir?

    if tamino is your prophet, i ain’t buying your religion.

    anyways, give blogs such as this one and Scott Mandia’s credit for engaging people with questions in a reasonable tone. there should always be questions since we do NOT understand the climate system very well. it is full of non-linear feedbacks that we cannot quantify or understand based on a 30-year warm period!

  96. Scott Mandia Says:

    jfr117:

    I appreciate your comments because I do recall I drifted into a poor tone with you on one comment at my blog and I felt bad.

    I try to be civil. :)

    Regarding Tamino: When I first ventured into the fray about a year ago, I also thought Tamino’s tone was mean-spirited. However, after seeing the countless false claims repeated over and over again, I understand why he has no patience for it anymore. I am new and not too cynical yet.

    I am still a huge fan of his work which speaks for itself, IMO.

  97. Marco Says:

    @VS:
    What you apparently fail to understand is that B&R claim that, based on their results, the same magnitude of forcing will result in *different* warming depending on whether the forcing is solar or CO2. Please explain to us the physical reason for that difference. It’s like saying that putting a 50 kilo box of feathers on your stomach would be less heavy than putting a 50 kilo box of lead there.

    Moreover, their results also state that the reason for warming is solar, not CO2. More of a problem there, since a solar influence on warming should not yield a cooling stratosphere. Yet, the observations show it *is* cooling.

    Two violations of known physics. I’d say there’s something really, really fishy with the math of B&R if their results contradict observations and physics.

  98. jfr117 Says:

    if tamino can’t handle questions (VS’ question wasn’t even questioning tamino’s work per se) then he shouldn’t blog. what is the point of his blog if not to educate and provide a place for discourse?

    if tamino’s work satisfies himself and others, then swell. but just because tamino is satisfied with it does not mean everybody is satisfied with it.

    this higher-level statistics has a place in this debate, but it is not the be-all, end-all. we are still talking about a physical system. while tamino throws out the past decade of plateaued temps as noise, i see it as a possibility to learn something new about the climate system.

  99. Bart Says:

    VS,

    Even though your question “what on earth is wrong with this entire clique” sounds rather like a rhetorical question, I’ll answer anyway.

    I may have a different way of communicating than many others on either side of the popular debate. But as to the contents, I am firmly on the side of mainstream science, because there is a coherent framework of understanding based on looking at all the evidence in its totality. From my PoV, the most damage to public understanding is done by those who spread misinformation (with Morano and Watts as prime exponents); much more so than by supporters of science who let their frustration shine through in their language (a frustration which I, like Marco, understand all too well, even though I try to remain civil, see my views on communication eg here and here).

    Let me ask you the same question that I posed to Tom Fuller: Imagine the hypothetical situation that a brand of science is being strongly criticized / attacked, but that the criticism by and large doesn’t make a lot of sense. And in those instances where the critics do have a point, it’s not relevant for the bigger scientific picture: It stands rock-solid. Again, I ask you to imagine the hypothetical. Think of e.g. evolutionary biology being criticized by creationists; epidemiology being criticized by tobacco apologists; vaccine researchers being criticized by antivax-ers, etc. The arguments of the critics are completely bogus, but packaged such that it’s difficult for the layperson to discern who is talking real science and who is merely setting up a plausible sounding bogus story (whether intentional or born out of confusion). Evolution has been “refuted” countless times by creationists (not). How would you advise the scientists (and their supporters) to respond?

    On a post about false claims at falsification of AGW, Robert Grumbine commented that “Immanuel Velikovsky thought that clouds had their own anti-gravity system, or at least proved that gravity didn’t work as Newton or Einstein said.” On his Wikipedia page it says that Velikovsky gained “enthusiastic support in lay circles, often fuelled by claims of unfair treatment for Velikovsky by orthodox academia”.

    Hmm, that rings a bell, doesn’t it…?

  100. sidd Says:

    “(i.e. the GHG forcings are I(2) and temperatures are I(1) so they cannot be cointegrated, as this makes them asymptotically independent.)”

    Let me see. I take a diode, biased in the exponential region, and put a varying current through it. I measure both the current and the voltage drop across the diode and discover that I is exponential and V is linear, therefore they are “asymptotically independent”

  101. Scott Mandia Says:

    jfr117:

    If you go back and look at the posts from VS you will see that he essentially called Tamino “clueless” but with nicer terminology. Tamino fired back with strong language, but he did not tap the bees’ nest first; VS did.

  102. MartinM Says:

    Are you suggesting that the “90% probability that modern warming is caused by man-made emissions” is derived via Bayesian statistics?

    My one and only suggestion is that if you wish to talk with authority on a given subject, it’s probably a good idea to avoid talking complete and utter bollocks.

  103. MartinM Says:

    Two violations of known physics. I’d say there’s something really, really fishy with the math of B&R if their results contradict observations and physics.

    Add to those two the negative coefficient their model assigns to the first difference of methane forcing, which is patent nonsense. It’s not hard to see why they get that result; the growth rate of atmospheric methane concentration has been dropping off for the past few decades, and was increasing prior to that, while temperatures were declining slightly. But that should have been a huge red flag; it should have been clear that they were getting unphysical results because they were missing an explanatory variable necessary to produce a good fit to the observed temperature trends; namely, aerosols, without which it’s difficult to account for the decline. All they’ve really demonstrated is that GHG forcings and TSI alone cannot account for temperature changes over the past century or so. Well, duh. We already knew that. So, they’ve falsified a sucky model nobody was actually proposing anyway. Brilliant!
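    To make the omitted-variable point concrete, here is a toy OLS sketch (Python/statsmodels, with made-up numbers — not B&R’s actual data or specification): leave a correlated regressor out of the model and the sign of the remaining coefficient can flip.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 200
        aerosols = rng.normal(size=n)
        # 'methane' correlated with the omitted 'aerosols' variable
        methane = 0.8 * aerosols + rng.normal(scale=0.5, size=n)
        temp = 0.3 * methane - 1.0 * aerosols + rng.normal(scale=0.2, size=n)

        # Full model: recovers the true, positive methane coefficient (~0.3)
        X_full = sm.add_constant(np.column_stack([methane, aerosols]))
        print(sm.OLS(temp, X_full).fit().params)

        # Omitting aerosols: the methane coefficient absorbs their cooling
        # effect and comes out negative (~ -0.6), an unphysical sign flip
        print(sm.OLS(temp, sm.add_constant(methane)).fit().params)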

  104. MartinM Says:

    Hmph. I notice my prior two comments are a little on the hostile side, for which I apologise. I’m ill, and consequently a bit grumpy today.

  105. Arthur Smith Says:

    VS – you say “I simply don’t have the knowledge to properly evaluate them on my own” – perhaps you should try to gather that knowledge, before opining further on the subject. :)

    I may publish my little contribution, however I hardly think it’s very original given the many textbooks on the subject; in the meantime a comment responding to G&T at the journal they published in is more urgent, and in the works.

    On this whole “random-walk” issue – I think it’s very interesting because it suggests something very profound (and worrisome) for Earth’s climate response function. Considering Earth’s average surface temperature as a reasonable metric (something more along the lines of total surface heat content is probably better, but average T is not a bad proxy for that), the standard systems theory analysis from the physical constraints implies that that average T is determined and constrained through a feedback process. Increasing surface temperature strongly increases outgoing radiation and thus creates a strong negative feedback to bring temperatures back down again. There are known positive feedbacks in the system (associated with water vapor, clouds, and ice cover) but the central assumption in all of climate science is that, for Earth, climate is essentially stable, and the negative feedbacks dominate. The Earth has a particular set-point average surface temperature (or more correctly, average surface heat content), with slight variations caused by things like the solar cycle, El Nino-like internal redistributions of energy, and other small changes in the responsible physical parameters.

    But the analysis VS is promoting suggests something very different – that temperature is not constrained at all, but randomly walks up and down all on its own. That can only happen if the climate system is neither stable nor unstable (since we don’t have a Venus-like runaway either) but right on the cusp of stability, with positive feedbacks exactly cancelling negative feedbacks, at least on the time scale being discussed (decades to centuries?)

    But that means the equilibrium response of such a metastable climate system to a forcing would not be 3 C as the IPCC estimates. It would be infinite. VS’s argument here is for an arbitrarily large climate sensitivity! Not a good thing at all!!!!
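    In the usual linearized feedback bookkeeping (generic notation, not tied to any particular paper), the equilibrium response to a forcing F is

        delta_T = lambda_0 * F / (1 - f)

    where lambda_0 is the no-feedback sensitivity and f the total feedback fraction; this diverges as f approaches 1. A genuinely random-walking temperature corresponds to sitting exactly on that f = 1 knife-edge.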

  106. Greenhoof » Blog Archive » Lorne Gunther: Denial (and dumb analogies) are us Says:

    […] I invite you all to have a quick read of Bart Verheggen’s great post on this issue. In addition to having pulled together clearer images of the graph at left, he has […]

  107. Heiko Gerhauser Says:

    Hi Bart,

    I may not have a very different view from you on the basic, physical science, but I certainly have a very different view on the framing of the discussion as science being under attack.

    I think it can be very polarising to immediately classify every question about climate change into one of two categories

    A) It attacks all of established climate science, and must therefore be wrong

    B) If it doesn’t overturn all of established climate science, well, it must be a nit pick and unimportant.

    It’s something I see in this thread too, I even engaged in it myself to a point. Once you are at point B, and the other person suggests they do think the issue deserves discussing, this immediately gets taken as a suggestion for A.

    Yet, climate science is a complex issue with many facets and improving it involves critiquing that detail in my opinion. Watts may publish a whole lot of rubbish on his site, and he does, but I don’t think this is an attack on science. In those instances where it’s complete rubbish, it contributes little; but in those instances where he presents valid points, it goes to improve the science.

    I think an attack on science would be book burning, ie actual destruction of information or the like. When you talk about an attack on science, you are primarily talking about the public’s understanding of the climate issue, while I rather think of science as the body of information that is there so that experts with the time to study the issue in depth can get a better understanding of the physical workings of climate.

    Where the understanding of the public is concerned, I think some interesting work reported by Ron Bailey recently helps to illustrate my concerns. He showed statistics that (US) climate scientists self-classified as much more “liberal” and much less “conservative” than the US public at large. He also presented evidence that the messenger mattered, and so did the presentation of the message. In short, if Dick Cheney says climate change is “real” and uses it to justify lower taxes and the war on Iraq, people who like Dick Cheney and the policy measures he advocates are more likely to believe climate change is real, than if Al Gore says “climate change is real” and puts the spin on it that this justifies higher taxes and more power for the United Nations.

  108. Heiko Gerhauser Says:

    Hi Marco,

    I haven’t read the paper, so can’t comment on that. But, the two apparently simple physics points you raise I can’t resist. Why shouldn’t an equal solar and greenhouse gas (globally averaged) forcing have an unequal feedback response? As I understand it, the ice ages were precipitated not so much by less solar radiation, but rather by a different seasonal distribution of that radiation, ie by less radiation being available to melt snow in the summer at high latitude. The same sort of thing could in principle apply to solar and greenhouse gas forcing; for example greenhouse gas forcing might cause more precipitation in winter (and therefore more snow) while solar forcing might be more concentrated in summer (causing relatively more snow melt). And as for the stratospheric cooling: If the solar change is amplified strongly by an enhanced greenhouse effect from water vapour, then well, you’d also expect to see stratospheric cooling, wouldn’t you?

  109. Scott Mandia Says:

    Heiko,

    Take a look at this study:

    The Second National Risk and Culture Study: Making Sense of – and Making Progress In – The American Culture War of Fact

    “Individuals’ expectations about the policy solution to global warming strongly influences their willingness to credit information about climate change. When told the solution to global warming is increased antipollution measures, persons of individualistic and hierarchic worldviews become less willing to credit information suggesting that global warming exists, is caused by humans, and poses significant societal dangers. Persons with such outlooks are more willing to credit the same information when told the solution to global warming is increased reliance on nuclear power generation.”

    Simply put, if the solution is palatable then the message is correct. If the solution is not palatable, then the message must not be correct. Classic shoot-the-messenger.

    This study really helped me to understand why the climate change issue is so politically polarizing.

  110. Heiko Gerhauser Says:

    Hi Scott,

    that’s precisely the study Ron Bailey picked up on. What he added to it boils down to: climate scientists are liberal (in the US political sense), present themselves as liberal and connect the science to solutions liberals like. Consequence: They are no longer trusted by half the population.

  111. Tim Curtin Says:

    Bart (at 5th March) you said “notwithstanding the fact that all yearly temperatures of the past 30 years are higher than any of the yearly average temperatures between 1880 and 1910”. I have previously pointed out here that this is a false claim, as the met. station coverage of the land surface area from 1880-1910 was at best 10-20% (I have the NOAA map for 1885 showing no stations in Africa between Cape Town and Cairo, with not many more by 1910). All that huge area has now, and conceivably always has had, mean temperatures higher than anywhere else on the planet, so using as baseline data, as you do, average temperatures between 1880 and 1910 for a globe that excludes Africa is invalid and seriously misleading. Moreover, as GISS etc. now report far fewer stations at the high-latitude top end of the NH since 1990, “global” average temperature is likely to be overstated. So Bart, how do you justify your opening claims as quoted above?

  112. Tim Curtin Says:

    Scott, back on 5th March you cited Feulner, G., and Rahmstorf, S. (2010), On the effect of a new grand minimum of solar activity on the future climate on earth, Geophysical Research Letters, in press, with their conclusion: “For both the A1B and A2 emission scenario, the effect of a Maunder Minimum on global temperature is minimal. The TSI reconstruction with lesser variation shows a decrease in global temperature of around 0.09 °C while the stronger variation in solar forcing shows a difference of around 0.3 °C.”
    Please note that all the IPCC emissions scenarios are irrelevant because they assume away the negative feedback through uptakes of CO2 emissions by global biology (photosynthesis), which since 1958 have accounted for 57% of emissions, but which in the IPCC’s main model (MAGICC) are explicitly assumed to be nil, through its author Tom Wigley’s assumption (Tellus 1993) that uptakes follow a rectangular hyperbolic (Michaelis-Menten) function, i.e. reach a plateau c. 2000 and then cease to rise ever again. There is of course no evidence for this, as shown by myself (E&E October 2009) and by Wolfgang Knorr (GRL, November 2009). The MAGICC assumption has the Madoffian benefit of allowing projections of the growth of atmospheric CO2 to double from the actual c. 0.41-0.46% p.a. since 1958 to 1% p.a. from 2000 – and thereby activate politicians via the resulting exaggerated projection of GMT to 2100.
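    For reference, the rectangular hyperbolic (Michaelis-Menten) form in question is, in generic notation (not necessarily Wigley’s exact parameterisation):

        uptake(C) = U_max * C / (K + C)

    which rises with the CO2 concentration C but saturates towards U_max, so that beyond some point uptake barely increases however much C does.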

  113. Marco Says:

    @Heiko:
    If you agree with enhanced greenhouse forcing by water vapour (through solar influence), then assigning hardly any greenhouse forcing to CO2 is…ehm…rather contradictory.

  114. Marco Says:

    Ah, look, Tim Curtin repeats Watts’ false claims. Tim, removing high altitude and high latitude stations doesn’t do much to the trend. If anything, it introduces a *cooling* bias.

  115. Bart Says:

    Hi Heiko,

    I agree with you that the discussion has polarized to an unhealthy extent. Both sides of the ‘debate’ have had a part in this. However, if there wouldn’t have been a string of extra-scientific attacks on the science, scientists (and their supporters) wouldn’t have gotten as defensive as they have. Ie the downward spiral of polarization has been set in motion (in my view) by what I call the attacks on science.

    You are right that a knee-jerk, defensive reaction along the lines of ‘any criticism is by definition invalid and must be scorned’ is unhelpful, to put it very mildly, and even damaging to the (understanding of the) science. The “must” in your examples A and B is wrong. However, being faced with many bogus statements of having refuted AGW, the “must” can reliably be replaced by “very likely to be”. And more often than not, the person having made the claim of refutation is not open to counter-arguments, or to admitting logical flaws in their reasoning, or to a myriad of observations and understandings that are in direct conflict with their statement. If someone claims to have refuted AGW, just as if someone claims to have refuted evolution (and links to the discovery institute as proof), some loud alarm bells go off in my head.

    I wrote more about my views on how to communicate here and here.

  116. VS Says:

    Hi Timo,

    So what it basically boils down to is that they used a dataset with a known, or even deliberate, ‘bias’. Hmm, that doesn’t look good. I’m trying to look into the I(1) matter I posted above. If somebody has a direct reference to a panelized version of the Magnus et al dataset (i.e. the CRU 2.1 temperature set), please do link. I started downloading the data they used, but it will take me hours and hours of data-construction to get it into the proper shape, and I simply have no time for that.

    Hi Jfr117 and Scott,

    About Tamino, he wrote down things that were plain wrong, and I still firmly stand by that. I think I even used some formal notation somewhere up there to show/explain it. I.e. he made a ‘clear’ distinction between an AR(1) process and an I(1) process, while in fact an I(1) process is simply a specific, non-stationary, realization of an AR(1) process. Put differently, I didn’t simply say that he was ‘clueless’; I gave arguments why his position was flawed. There is a difference between the two.

    What he lodged against me however, was a baseless insult. My reflex in this case is to interpret it as incompetence, rather than malice.

    Hi sidd,

    The order of integration doesn’t have anything to do with linearity/non-linearity of the series, but the underlying stochastic process. Yes, I(1) and I(2) processes are asymptotically independent. Take a look at some of the references in my first post: it’s discussed there. As a matter of fact, it is the whole reason that BR had to resort to polynomial cointegration.
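    If you want to check orders of integration yourself, the standard recipe is to difference the series until the ADF test rejects the unit root; a quick sketch (Python/statsmodels, on simulated series — any I(1) and I(2) processes will do):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        def order_of_integration(x, alpha=0.05, max_d=3):
            """Difference x until the ADF test rejects a unit root."""
            for d in range(max_d + 1):
                if adfuller(x, autolag='AIC')[1] < alpha:
                    return d  # x is I(d)
                x = np.diff(x)
            return max_d  # no rejection within max_d differences

        rng = np.random.default_rng(2)
        i1 = np.cumsum(rng.normal(size=300))  # random walk -> I(1)
        i2 = np.cumsum(i1)                    # cumulated random walk -> I(2)
        print(order_of_integration(i1), order_of_integration(i2))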

    Hi MartinM,

    Utter bollocks? How about you come up with that reference first, instead of (de facto) reiterating the position that the hypothesis can never be rejected because it’s ‘physical’.

    Hi Arthur,

    I’m sincerely looking forward to your contribution in the GT debate. I firmly believe that the debate should be expanded, and more physicists should be involved. In that context, I believe that the efforts of GT, you and Kramm et al, ought to be lauded. Civilized dispute is progress, ‘consensus’ is non-science.

    I however doubt that my lack of physics knowledge means that all my knowledge of statistics can be thrown out the window.

    As for the random walk/equilibrium comment, you’re not completely on the mark there.

    First off, I outlined the ‘random walk’ issue above, and why an I(1) process is de facto bounded on the interval we are looking at. Be careful to make a distinction between how we observe something, and what the actual (unknown) underlying process is. That, too, I dwelled upon in earlier posts.

    Besides, cointegration is actually a tool to establish equilibrium relationships between series containing stochastic trends. Note that cointegration implies an error correction mechanism, whereby two series never wander off too far from each other. It therefore allows for stable/related systems which we nevertheless observe as ‘random walks’. The term ‘random walk’ is a bit misleading here, so it is better to say that the series contain a stochastic trend.

    Take a look at the matter, it’s quite interesting. I keep saying it: hence the Nobel prize!
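    As a quick illustration of what cointegration means in practice, a sketch with simulated data (Python; the Engle-Granger test as implemented in statsmodels):

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(3)
        x = np.cumsum(rng.normal(size=500))  # I(1): a stochastic trend
        y = 2.0 * x + rng.normal(size=500)   # shares x's trend: cointegrated

        # Engle-Granger: H0 = no cointegration; a small p-value means the
        # two 'random walks' are tied together, i.e. y - 2x is stationary
        # and an error correction mechanism keeps them from drifting apart
        print('p =', coint(y, x)[1])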

    PS. Note that the whole reason I involved the GT publication here was to ‘dismiss’ Halpern (it had to do with his role in it). I sincerely despise the role individuals like him play in these discussions, and I think I clearly outlined that above. Besides, considering the gibberish he wrote on his blog, I think his efforts amounted to the ‘campaign of misinformation’ that Bart keeps referring to.

    Hi Heiko and Bart,

    Yes, that’s exactly how I see it: your A/B distinction (nicely put, I’ll ‘save’ that ;). Coming from a discipline that is constantly ‘under attack’, I understand the sentiments. However, this level of ‘dismissal’ of anything coming from any other discipline is simply non-scientific. I don’t buy the ‘traumatized scientific field’ argument. We are scientists; our predecessors managed to keep calm in the face of guillotines, exiles, excommunications and burning stakes… that’s our tradition, that’s our pride. In ratio we trust.

    Let’s live up to it.

    Another good example of a field ‘under siege’ is evolutionary biology, some 5-10 years ago. Note how the biologists ‘won’: namely, by repeating their arguments and delivering their actual proof (!) over and over and over again, until it was clear. Add to that they were arguably coming from a discipline that is much older and well established, and has delivered *much* more proof than climate science.

    It’s a tough business, sure, but nobody promised that science would be easy.

    I think this is also the reason I cannot communicate with individuals like Marco. Sorry Marco for getting a tad personal here, but you have not responded to a single methodological concern I have raised. You simply keep repeating:

    “The physics is right, the hypothesis is right, what do you want me to do? Reject my hypothesis? No! [enter here your favorite “you people don’t know s***” one-liner]”

    Yes, I want you to first admit the possibility that you’ve missed something somewhere. As long as you maintain that that can never be the case, we have nothing to talk about, as I really don’t have the time and energy for a virtual spit-fight. Good day.

    Finally Bart,

    I truly understand that coming from your methodological point of view (i.e. as a climate scientist) you believe that you guys have delivered sufficient proof. However, this is not the case when viewed through the methodological lens of other disciplines, such as mine.

    We need some mutual understanding in this case, instead of instant dismissal.

  117. Bart Says:

    VS,

    Indeed, in science “in ratio we trust”. The problem is that much of the criticism of the kind “see here is proof that AGW is bunk” is not very rational, or at the very least leaves out the bigger context of all the evidence in favour of AGW (as if that suddenly magically disappeared). A prime example is that with a very small climate sensitivity, you’d have a terribly difficult time explaining how the great climate shifts in the past could have happened. A bit of modesty in making far-reaching claims of refutation of a field which is more mature than many of its “critics” claim would be appropriate.

    In my new post I explained why I think it is evident on physical grounds that the increase in global avg temp is not merely random: A random increase would cause a negative energy imbalance (as Arthur also noted in his latest comment) or the extra energy would have to come from another segment of the climate system (eg the ocean, cryosphere, etc). Neither is the case: There is actually a positive energy imbalance and the other reservoirs are actually also accumulating energy. Moreover, there is a known positive radiative forcing.

    I think it is fair to conclude that the observed increase therefore has not been random.

    A different question would be whether these data, without any physical constraints on them, could mathematically be described as you do (purely stochastic; random walk). Perhaps they could; I refrain from an opinion on that. I do note though, that in light of the physical system that these data are a part of, this is a purely academic mathematics question. The physics of it all tells me that it hasn’t in fact been random, since it is inconsistent with other observations.

  118. Alan Says:

    VS said to Bart …

    “I truly understand that coming from your methodological point of view (i.e. as a climate scientist) you believe that you guys have delivered sufficient proof. However, this is not the case when viewed through the methodological lens of other disciplines, such as mine.”

    This comment is similar to many others that I have observed in the vast anti-AGW blogosphere … and I wonder at what it really means.

    It seems to me that people cast ‘climate scientists’ as those that have a specific technical expertise … other than the issue at hand eg “climate scientists don’t appreciate feedback mechanisms”; “climate scientists don’t appreciate ‘measurement error'” etc etc

    Here it is “climate scientists don’t appreciate time series analysis”.

    The assumption is invariably that climate scientists aren’t expert in a specific field and should ‘listen to the professionals’ because they are making goofy mistakes and their conclusions are suspect (and maybe dangerous). Do climate scientists all have the same, specific technical expertise?

    I don’t get it. I work in business – not science. And on major projects we pull together a team with experts in specific disciplines … no one subject matter expert is sufficient.

    It must be the same with climate science research, surely!?

    Research and analysis projects would involve a team. I imagine there is consultation with ‘design of experiments’ experts; instrumentation experts; modelling experts; model coders … and, dare I say it, time series analysis experts.

    Bart, is this the way the climate science research community mobilises?

    If so, it is probably pointless discussing the nuances of time series analysis here unless there are at least two recognisable subject matter experts around, including one who works directly in climate research/modelling/data analysis.

    Are there any here? Bart, you do not claim such expertise, do you?

    So if the pressing issue that VS brings here is that he has real concerns about the application of TSA to the climate science research effort, then perhaps it would be more fruitful to take the discussion to the TSA folks who actually are working now in climate science research.

    VS, why not go and check the hypothesis that TSA is being applied in the way you suspect – directly with the other TSA folks. If it is, and you believe it to be inappropriate, engage in a TSA methodology discussion with them.

    For some time now I have wondered about what you (VS) hope to achieve here.

    All the best, Alan

  119. Heiko Gerhauser Says:

    Hi Marco,

    the term “violations of known physics” is far too strong. What you (ought to) mean is that it seems unlikely that solar has a (net) effect on world global average temperature and CO2 does not, while there is stratospheric cooling. In principle you could have stratospheric cooling from CO2 and water vapour and still no net warming impact on the troposphere, because the CO2 in some fashion (direct chemistry say, or due to issues of where the warming occurs) enhances say cloud reflectivity to exactly compensate (or does something else, makes tree albedo a little higher, say).

  120. Heiko Gerhauser Says:

    Hi Marco,

    and yes my previous attempt was a bit confused/confusing, sorry. I haven’t yet come across someone claiming that water vapour does cause a forcing, and CO2 causes zero forcing.

  121. Tim Curtin Says:

    Marco said:
    March 10, 2010 at 07:22
    “Ah, look, Tim Curtin repeats Watts’ false claims. Tim, removing high altitude and high latitude stations doesn’t do much to the trend. If anything, it introduces a *cooling* bias.”

    Marco, do think a bit. How was the global mean temperature calculated in 1880? And in 2000? Am I wrong that the known temperatures at various locations as of 1880-1910 were aggregated to get Bart’s baseline, and at rather more locations in 1970-2000 ditto? Am I also wrong that in 1880 New York was not a good proxy for temperatures in what is now Kinshasa (formerly Leopoldville), which had no met. station then, as HM Stanley unaccountably failed to establish one when he was there? Am I wrong that to establish a trend between 1880 and 2000 you have to have actual temperatures aggregated to global series for the SAME LOCATIONS in both years? Or do you belong to CRU? Then of course you claim to know that the temperature trend at say Khartoum from 1880 to 1910 was the same as that at San Juan in Puerto Rico? According to Jim Hansen they are indeed perfect matches, as shown below (?). So using San Juan as proxy for Khartoum (they are nearly the same latitude) from 1880 to 1910 is totally kosher? And using San Juan’s annual mean as proxy for Khartoum does nothing to lower GMT in 1880-1910, and using Khartoum actuals for 1970-2000 does nothing to raise GMT then vis-à-vis 1880-1910 as claimed by Bart (who, as ever, seemingly like Tamino, keeps quiet when inconvenient facts emerge).

    Bart, I really would like your comments, you were quick to point out my error earlier, your turn, nie waar nie? (pardon my Afrikaans).

    Month   San Juan Min   Khartoum Min   San Juan Max   Khartoum Max
    Jan     21.6           15             28.4           32
    Feb     21.4           16             28.7           34
    Mar     22             19             29.1           38
    Apr     22.7           22             29.9           41
    May     23.6           25             30.7           42
    Jun     24.5           26             31.4           41
    Jul     24.9           25             31.4           38
    Aug     24.8           24             31.5           37
    Sep     24.6           25             31.6           39
    Oct     24.2           24             31.3           40
    Nov     23.3           20             29.9           36
    Dec     22.4           17             28.8           33
    Ave     23.3           21.5           30.2           37.6

    Annual mean: 26.7 at San Juan and 29.5 at Khartoum.

  122. Tim Curtin Says:

    Heiko Gerhauser Said:

    March 10, 2010 at 11:52
    “…I haven’t yet come across someone claiming that water vapour does cause a forcing, and CO2 causes zero forcing”.

    Dear Heiko, do check my regression data above for Pt Barrow (Alaska) and Hilo (Hawaii) showing zero stat. sig. for radiative forcing from CO2 and very strong stat. sig. coefficients for water vapour. Inconvenient, yes, but incontrovertible, given the NOAA data I used.

  123. Heiko Gerhauser Says:

    Hi Tim,

    ok, I was talking about the radiative absorption properties of water vapour and CO2, which are measured in the laboratory. And so far I have come across no-one claiming those are wrong. Do you?

    I don’t see how you can find much from looking for correlations of CO2 or water vapour with local temperatures. Maybe I am wrong, but it seems a bit like a pointless wild goose chase to me.

    On the stations issue, however, I think you’ve got a point. Though even there, while poor station coverage and quality mean the potential error is large, that does not mean 1880-1910 is proven, or even likely, to have been much warmer than indicated in Bart’s graphs above. It might have been quite cold in Africa or Siberia and we missed it.

  124. Marco Says:

    Tim:
    You can make all the objections you want, but it doesn’t change the fact that your claim is based on absolutely no evidence. There are *several* people who used *different* procedures to check for the effect of the stations that were supposedly ‘removed’ (which was lie one: there was retrospective reporting in the early 1990s). Those supposedly ‘removed’ stations actually had a *higher* warming trend. In other words, removing those stations introduces a *cooling* bias.

  125. Marco Says:

    @Heiko (and VS):
    I was maybe getting a bit too short: a cooling stratosphere with a warming troposphere fits with an enhanced greenhouse effect, not with an increase in solar radiation (in which case both should be warming).

    What I cannot see is that a 1 W/m2 forcing would result in a different feedback depending on solar or CO2 influence. If 1 W/m2 of sunlight introduces x warming, and subsequently y feedback warming through water vapor, why would 1 W/m2 CO2 introduce x warming, but much less than y feedback warming? The water feedback, after all, is a result of the initial warming.

    However, it *is* possible that B&R do not look at (forget) the concomitant aerosol emissions that go along with fossil fuel burning. Those would introduce a cooling, which does result in a “CO2-associated” lower feedback per W/m2 of CO2 forcing, but then they just describe the whole issue very very poorly. In that sense Jan Magnus did a better analysis, but he was stupid enough to take the wrong temperature dataset (what is it with these econometricians?).

    @VS: If a fancy type of mathematical analysis shows the temperature of the universe to be -2 Kelvin, the math may well have been very solid, but the outcome is rather questionable. This should suggest a major rethinking of the input to the equations (and perhaps ultimately the equations themselves). One may be right, but you’d better come with some very good explanations before we start throwing away basic physical knowledge.

  126. Bart Says:

    Tim Curtin,

    The consequence of what you’re saying is basically that the global average temperature was known with less accuracy in 1900 than in 2000. True enough, and known (see eg the green error bands on this figure: http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.pdf)

    That doesn’t invalidate my earlier statement, though it means that the probability that there actually was a yearly temp anomaly between 1880 and 1930 higher than the lowest anomaly during 1980-2009 is not zero, but some (very) small number. Okay, I’ll give you that.

    But no, you don’t need the exact same stations to compare yearly anomalies. You seem to have bought Anthony Watts’ line of reasoning here, which is totally off the mark. Temperature anomalies are highly correlated in space, and anomalies are calculated such that they are relatively insensitive to changes in which stations are used (as opposed to the claims of Watts and d’Aleo).

    See eg this and the preceding post: http://www.chron.com/commons/readerblogs/atmosphere.html?plckController=Blog&plckBlogPage=BlogViewPost&newspaperUserId=54e0b21f-aaba-475d-87ab-1df5075ce621&plckPostId=Blog%3a54e0b21f-aaba-475d-87ab-1df5075ce621Post%3a316fd156-fbba-46b0-b3ec-a6748f70d579&plckScript=blogScript&plckElementId=blogDest
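
    A minimal sketch of that insensitivity, with made-up numbers (Python/numpy; the stations, baselines and trend are illustrative assumptions, not real data):

        # Synthetic demo: anomalies barely move when a station drops out,
        # even though absolute temperatures move by several degrees.
        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1880, 2010)
        regional_anomaly = 0.007 * (years - years[0]) + rng.normal(0, 0.1, years.size)

        baselines = np.array([25.0, 10.0, -5.0])          # tropical, temperate, polar
        stations = baselines[:, None] + regional_anomaly  # absolute temperatures

        base = (years >= 1901) & (years <= 2000)          # 1901-2000 reference period
        anoms = stations - stations[:, base].mean(axis=1, keepdims=True)

        # Dropping the coldest station leaves the mean anomaly unchanged...
        print(np.abs(anoms.mean(axis=0) - anoms[:2].mean(axis=0)).max())
        # ...but shifts the mean *absolute* temperature by 7.5 degrees.
        print(stations.mean(axis=0).mean() - stations[:2].mean(axis=0).mean())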

  127. Marco Says:

    @Bart:
    Before going into lengthy discussions with Tim Curtin, please take note of the following:
    http://scienceblogs.com/deltoid/2009/03/tim_curtin_thread.php
    You may very well end up with a very long thread going completely nowhere.

  128. Heiko Gerhauser Says:

    Hi Marco,

    the spatial and temporal distribution of the forcing is different, and that might (emphasis might) make a significant difference. I gave you the example of how ice ages are believed to get started, and there it’s also about two forcings where the global average is the same, but in summer at high latitudes there was a big difference.

    I’ve got no idea how significant this really is, but you are making a pretty strong claim, when saying something is plain physically impossible.

  129. Bart Verheggen Says:

    I think the term “efficacy” describes how a radiative forcing relates to a certain temperature change. It varies slightly for different forcings, but not very strongly. A factor of 3 difference seems too large. AR4 provides some estimates and discussion of efficacies.

    See also http://pubs.giss.nasa.gov/abstracts/2005/Hansen_etal_2.html

  130. Marco Says:

    @Heiko:
    solar input and CO2 greenhouse effect are rather strongly connected, I’d say. Aerosols are rather different; they have a very uneven distribution over the globe.

  131. Tim Curtin Says:

    Heiko Gerhauser Said (March 10, 2010 at 12:24)
    “ok, I was talking about the radiative absorption properties of water vapour and CO2, which are measured in the laboratory. And so far I have come across no-one claiming those are wrong. Do you?” No, I am referring only to the NOAA data on water vapour (“H2O” = precipitable water, in cm) and on thge atmospheric concentration of CO2, [CO2] at some hundreds of locations across the USA, vis a vis mean min, max, and Average Daytime Temperatures.

    You then said “I don’t see how you can find much from looking for correlations between CO2 or water vapour using local temperatures. Maybe I am wrong, but it seems a bit like pointless wild goose chase to me.” Why? Surely all science is about measurements. The NOAA provides ALL the data I regressed and reported on above. Do contact me via tcurtin@bigblue.net.au for more info on the data and my regressions.

    Then you kindly added “On the stations issue, however, I think you’ve got a point. Though even there, poor station coverage and quality means the potential error is large, it does not mean 1880-1910 is proven or even likely to be much warmer than indicated in Bart’s graphs above. It might have been quite cold in Africa or Siberia and we missed it.” Not likely, again if you contact me I can send you a paper addressing your point. Briefly, Hansen & Lebedeff (1987) show how adept they are at fictionalising temperature data where there is none. They even admit (Fig.4) that less than 40% of the SH had any data in 1900, yet Bart produces graphs with “global” temperatures from 1880 to 1910.

    Then Marco said March 10, 2010 at 13:39
    “Tim: You can make all the objections you want, but it doesn’t change the fact that your claim is based on absolutely no evidence [not true]. There are *several* people who used *different* procedures to check for the effect of the stations that were supposedly ‘removed’ (which was lie one: there was retrospective reporting in the early 1990s).” They were not removed; those in Siberia mostly just ceased reporting. “Those supposedly ‘removed’ stations actually had a *higher* warming trend. In other words, removing those stations introduces a *cooling* bias.” Marco, maybe, or not. Where is your data? Don’t be so shy.

  132. Tim Curtin Says:

    Bart Said March 10, 2010 at 14:09
    “Tim Curtin, The consequence of what you’re saying is basically that the global average temperature was known with less accuracy in 1900 than in 2000. True enough, and known (see eg the green error bands on this figure: http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.pdf)” [I could not get there but believe you]

    “That doesn’t invalidate my earlier statement, though it means that the probability that there actually was a yearly temp anomaly between 1880 and 1930 higher than the lowest anomaly during 1980-2009 is not zero, but some (very) small number [sic]. Okay, I’ll give you that.” Thanks, but what is your evidence for the smallness? The variability in just Alaska is huge.

    “… you don’t need the exact same stations to compare yearly anomalies. You seem to have bought Anthony Watts’ line of reasoning here, which is totally off the mark [I have not]. Temperature anomalies are highly correlated in space [please provide your evidence], and anomalies are calculated such that they are relatively insensitive to changes in which stations are used (as opposed to the claims of Watts and d’Aleo).” That is simply untrue.

    Bart, as even Tamino admits (without understanding it), anomalies are no different from actuals, because for each anomaly just divide by 100 and add 14 to get the absolute (see GISS). Thus you are absolutely wrong to imply that anomalies somehow abstract from, or are independent of, actual station data.

  133. MartinM Says:

    Utter bollocks? How about you come up with that reference first, instead of (de facto) reiterating the position that the hypothesis can never be rejected because it’s ‘physical’.

    I’m a little confused as to why you (once again) ask me to reference a claim I haven’t made, or even referred to at any point. I also haven’t made the claim that AGW shouldn’t be rejected because it’s physical; quite the opposite, I’ve argued that the model B&R derive is patently unphysical.

  134. MartinM Says:

    Speaking of utter bollocks…

    …anomalies are no different from actuals, because for each anomaly just divide by 100 and add 14 to get the absolute (see GISS). Thus you are absolutely wrong to imply that anomalies somehow abstract from, or are independent of, actual station data.

    Oddly enough, given monthly station anomalies, the procedure you recommend will reproduce absolute values only for those stations whose baseline value is 14. Since temperatures tend to change throughout the year, the number of stations which fit that description for every month would be…oh, right. Zero.
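
    A worked toy example of that point (numbers invented for illustration; GISS publishes global anomalies in hundredths of a degree, and ~14 degC is an estimate of the global absolute mean, not any station’s baseline):

        # The "divide by 100 and add 14" recipe only recovers an absolute
        # value for a series whose baseline actually is 14 degC.
        anomaly_hundredths = 43                             # published anomaly: +0.43 degC
        global_estimate = anomaly_hundredths / 100 + 14.0   # 14.43 degC: fine globally

        station_january_baseline = 25.1     # a tropical station's own January mean
        station_anomaly = 0.43
        true_value = station_january_baseline + station_anomaly   # 25.53 degC
        wrong_value = station_anomaly + 14.0                      # 14.43 degC
        print(true_value, wrong_value)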

  135. Marco Says:

    @Tim:
    http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/
    http://rankexploits.com/musings/2010/a-simple-model-for-spatially-weighted-temp-analysis/

    Global Update

    There you go, false claims proven false.

  136. Bryson Brown Says:

    VS is welcome to correct me if I’m wrong here– I’m a philosopher of science and not trained in statistics to his level. But it seems the procedure he describes discounts trends by allowing purely statistical models that include substantial variation in temperatures on arbitrary scales and frequencies.

    The trouble with the method is that a random walk (including year-to-year variation and longer-term variations, all equally likely to move up as down) superimposed on a long-term trend would not be distinguishable from a pure random walk whose low-frequency components (‘trends’, in effect) run over longer periods alongside ‘mere’ year-to-year variation. In the latter case, we could indeed run into a period characterized by a remarkable collection of record highs without there being any ‘real’ trend or driving force.

    This only shows that pure statistics allows for models of the temperature changes we’ve observed that don’t include any underlying trend or driving force. This isn’t surprising– in fact, it’s trivial. Pure statistics is strictly mathematical; it can model any sequence of data points you like in any way that formally fits the points. So a random walk with both short and longer-term variation of the right scales can do the job.

    What that leaves out, as some points above indicate, is the known physical dynamics of the climate system – and I’m not relying on climate models here, just basic physics. For example, other things being equal, when temperatures are higher the earth should tend to cool down, since higher temperatures produce more outgoing long-wave radiation. If that doesn’t happen, we should seek a causal explanation of it – in principle this could include changes in solar input, GHG effects, high-altitude clouds or some other cause that manages to increase the retention of heat energy in the earth to counteract the higher IR emissions from the warmer surface and atmosphere. Climatologists’ work aimed at evaluating these factors is motivated by known physics, not just by statistics.
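
    For what it’s worth, the first point is easy to see in simulation (a sketch with arbitrary parameters, not a claim about the actual series):

        # 1000 pure random walks: nothing drives them, yet OLS fits many
        # of them with a sizeable "trend" of either sign.
        import numpy as np

        rng = np.random.default_rng(42)
        walks = rng.normal(0, 0.1, (1000, 130)).cumsum(axis=1)   # 130 "years"

        t = np.arange(130)
        slopes = np.polyfit(t, walks.T, 1)[0]   # one OLS slope per walk
        print("spread of fitted 'trends':", slopes.std())
        print("largest |trend|:", np.abs(slopes).max())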

  137. Bart Says:

    Bryson Brown,

    Very thoughtful comment, and it encapsulates much of what I was trying to get at.

  138. Speaks for itself Says:

    Not a Random Walk

  139. Bart Says:

    VS, I urge you to take a look at Tamino’s detailed reply to your assertions.

    He agrees (and shows) that a random walk could show a spurious trend, and that the ADF test could distinguish it from a real deterministic trend, but warns that there are choices to be made in using the statistic that could influence the result in some cases: notably whether to allow for drift or for an underlying trend.

    If one allows for the presence of a trend, the null hypothesis of a random walk (as determined from the presence of a unit root) is more often rejected than when one doesn’t. This makes sense. Even then, a real random walk (also in the presence of a spurious trend which is significant according to OLS regression) can be distinguished (see the example of his second figure). However, for the GISS temperature series the hypothesis of a random walk is clearly rejected if one allows for the presence of a trend. How did you come to the opposite conclusion of a random walk? By omitting the potential presence of a trend? If so, why?

    In Tamino’s words:

    “How did VS fail to reject it? I suspect he excluded a trend from his ADF test. He may also have played around with the number of lags allowed, until he got a result he liked.”
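
    The distinction being discussed can be illustrated with statsmodels’ ADF test (a sketch on synthetic series; regression='c' allows a constant only, 'ct' also allows a deterministic trend):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(1)
        n = 130
        random_walk = rng.normal(0, 0.1, n).cumsum()
        trend_noise = 0.007 * np.arange(n) + rng.normal(0, 0.1, n)

        for name, series in [("random walk", random_walk),
                             ("trend + noise", trend_noise)]:
            for reg in ("c", "ct"):
                pval = adfuller(series, regression=reg)[1]
                print(f"{name:13s} regression={reg:2s}: p = {pval:.3f}")
        # Typically the unit-root null survives (large p) for the random
        # walk under both settings, and is rejected for trend + noise
        # once the trend is allowed for ('ct').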

  140. Tim Curtin Says:

    Marco: thanks for links, and to “Speaks for itself” for his/her link to Tamino’s latest on random walks, which I have to admit is rather good. But Tamino and all others who dwell on “global” temperatures have yet to address the inherent folly (see McKitrick & Essex) of saying anything about “global” temperature. What is the operational significance of that concept? Exactly nil to any engineer or farmer or housewife/husband anywhere at any point in time. Wow – yesterday it was -8 oC overnight in Moscow, and 10 oC here in Canberra, so do I put on an extra blanket or not? Or turn that into anomalies from our respective 30-year norms for 11th March, quite likely -1 oC for Moscow and -1 oC for Canberra. So, Bart – we have a new ice age, nicht wahr?

    Even more looney is aggregating trends for min/max temps at various locations.

    BTW, Excel 2007 is incapable of correctly graphing series showing projections of mean temperatures for any 2 locations from now to 2100 from actuals (1960 to 2007) – for Excel, the average in 2007 of 21.323 (Hilo) and -61.3139 (Pt Barrow) is not -20.0418 but -61.3139!!! With Hansen in charge at NASA, the likelihood of any new moonshot ever reaching the moon is nil!

  141. VS Says:

    Answer to Tamino’s ‘amazing’ post listed below.

    I really have no time for amateurs like these. Seriously.

    VS // March 12, 2010 at 12:04 pm | Reply

    Hahaha,

    I love it how you copy-pasted the first ten pages of an undergraduate textbook in Time Series Analysis, and ‘impressed’ everybody here with ‘astounding’ mathematical statistics.

    OK, let’s get into the matter.

    First off, I didn’t do any ‘cherry picking’.

    In fact, YOU were the one cherry picking by using the BIC for lag selection in your ADF test equation. Any other information criterion (and I bet you tried them all) in fact arrives at a larger lag value and subsequently fails to reject the null hypothesis. The reason I didn’t use the BIC is because it arrives at 0 (yes, zero) lags in the test equation. I actually noted this in the comments later on, on Bart’s blog.

    Look it up, down the thread (I used the AIC, there was a typo in the first post, that I corrected later on).

    What kind of an effect does using 0 lags have? Well, residual autocorrelation in the test equation that messes up the ADF test statistic. Higher lag selections successfully eliminate any residual autocorrelation. Remember why the ADF test is ‘AUGMENTED’? Exactly because of the autocorrelation problem.

    Also, using ANY other information criterion to arrive at your lag specification fails to reject the null in the level series, under ANY alternative hypothesis (check that as well, master statistician) – i.e. no intercept, intercept, and intercept with trend.

    Try that again, and report it, will you?
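
    (For anyone who wants to check this themselves, a sketch of how the lag-selection choice enters in statsmodels – the input file is a placeholder for whichever annual series you use, and statsmodels offers only a subset of the criteria discussed here:)

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        temp = np.loadtxt("gistemp_annual_anomaly.txt")   # hypothetical file

        for criterion in ("AIC", "BIC", "t-stat"):
            stat, pval, usedlag = adfuller(temp, regression="ct",
                                           autolag=criterion)[:3]
            print(f"{criterion:6s}: lags = {usedlag}, unit-root p = {pval:.4f}")
        # The dispute: BIC is the most parsimonious and can settle on 0
        # lags, leaving autocorrelation in the test-equation residuals;
        # richer criteria pick more lags, and the verdict can flip.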

    Finally, you selectively quoted me there, I said ‘de facto bounded’ because the VARIANCE of the error term governing the random walk is FINITE. How hard is that to understand for somebody pretending to be a statistician? Simply calculating trends, in light of these test results is spurious, and you should know that (unless you were ’self taught’ or something similar).

    Look at the temperature series over the past couple of thousand of years. Where do you see a trend? There is a cyclical movement, but a deterministic trend? Nope…

    I seriously have no time for this kind of amateur nonsense, as well as your lashing out at economics. Economists are at least conscious of their unconsciousness. Less can be said about the likes of you.

    I’m not posting here anymore. If you want to have a chat, go to Bart’s thread, and I’ll consider educating you (but given your unfounded arrogance, the chances are slim).

    Good day.

    VS // March 12, 2010 at 12:09 pm | Reply

    over the past couple hundred of thousand of years.. not thousand of years…

  142. Tim Curtin Says:

    Hi Bart (& VS)

    I think we need to get back to basics. Random walks are to some extent a red herring. We have claims of monotonic “global” temperature rises since 1850 (CRU) or 1880 (GISS), and evidence of monotonic increases in the atmospheric concentration of CO2 (hereafter [CO2]) since 1958. If we take the period over which rising temperatures and rising [CO2] can be compared, which is only since 1958, there is zero statistically significant correlation between them anywhere on the planet unless one uses absolute values of both variables, in which case the elementary Durbin-Watson test shows such correlations to be spurious. If we use the IPCC’s formulation of “radiative forcing”, which asserts that it is the CUMULATIVE quantum of GHGs (which are overwhelmingly CO2) that matters, then there is no evidence anywhere that first-differenced changes in temperature (which is what the IPCC asserts is the relevant dependent variable) are in any way correlated with the IPCC’s definition and measurement of radiative forcing. Random walks are irrelevant – there is NO correlation between I(0) [CO2] and I(1) temperature.
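
    The Durbin-Watson check invoked here looks like this in practice (a sketch with two independent simulated random walks, the textbook spurious-regression case – not the CO2 and temperature data themselves):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson

        rng = np.random.default_rng(7)
        x = rng.normal(size=200).cumsum()   # two independent random walks
        y = rng.normal(size=200).cumsum()

        res = sm.OLS(y, sm.add_constant(x)).fit()
        print("R^2 =", round(res.rsquared, 2),
              " DW =", round(durbin_watson(res.resid), 2))
        # Independent walks routinely yield an apparently "significant"
        # levels regression with DW far below 2 -- the warning sign that
        # the correlation is spurious.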

  143. VS Says:

    Guys, I really have some pressing matters to attend to, otherwise I would love to chat on the topic.

    Take a look at a nice discussion of cointegration as related to the B&R paper, here:

    http://landshape.org/enm/polynomial-cointegration-rebuts-agw/

    and especially here

    http://landshape.org/enm/cointegration-summary/

    I think it ought to be interesting to anybody with an actual ‘Open Mind’.

  144. VS Says:

    (Tim, I will try to get back to your newest post ASAP, I really need to run now :)

  145. Bart Verheggen Says:

    Tim Curtin,

    Nobody expects a perfect correlation of global avg temp with CO2, due to e.g. weather-related variability and the fact that CO2 is not the only climate forcing. That said, the correlation coefficient between the two variables (taking ln(CO2)) is 0.87 (0.77 if autocorrelation of the residuals is taken into account). With any solar index the correlation would be much lower. And as I stated before, physically the trend must be deterministic, otherwise it is inconsistent with other observations and/or conservation of energy.

    CO2 absorbs IR and reemits it in all directions; this is basic physics, corroborated by measurements. The effect is that the planet holds more energy and warms. Or do you have an explanation as to why the earth would not warm in response to more GHG in its atmosphere? There is some basic physical understanding of the system, despite uncertainties which will always be there with geophysical science.
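
    For concreteness, a correlation of this kind can be computed as follows (a sketch: the file names, units and column layout are assumptions, and the AR(1) correction behind the 0.77 figure is only indicated, not implemented):

        import numpy as np

        temp = np.loadtxt("giss_annual_anomaly.txt")   # annual anomalies, degC
        co2 = np.loadtxt("co2_annual_ppm.txt")         # annual CO2, ppm, same years

        r = np.corrcoef(np.log(co2), temp)[0, 1]
        print("Pearson r, ln(CO2) vs temperature:", round(r, 2))
        # ln(CO2) is used because CO2 forcing grows roughly with the log
        # of concentration; allowing for autocorrelated residuals (e.g.
        # via an AR(1) error model) reduces the effective correlation.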

  146. Marco Says:

    @VS: Tamino has a rebuttal for you. You can enjoy yourself rebutting the Phillips-Perron test.

    The whole discussion between you two is really sounding like the question a statistician once asked me: “Do you want it to be significant or not? We can choose between two methods, and one of them will make it a significant difference”.

  147. Ron Broberg Says:

    VS: Claims of the type you made here are typical of ‘climate science’. You guys apparently believe that you need not pay attention to any already established scientific field (here, statistics).

    VS: It would do your discipline well to develop a proper methodology first,…

    Nothing condescending in your opening remarks. :eye roll:

    VS: Also, I emailed Beenstock about the data he used when I first read the paper (outside of climate science, that’s quite normal), and he wrote me that all the data they use come from GISS-NASA.

    Let’s see … you take an insulting shot at climate science not providing data and in the very same breath point out that all the data used came from a climate science center. Genius.

    VS: Some of my theoretical physicist friends though, who’s nuanced judgement on these matters I sincerely trust, have endorsed it.

    In other words … “I have a friend of a friend …” Sounds like a ‘baseless claim.’

    VS: The problem is the anti-scientific attitude, based on insults and baseless claims, encouraged by agitators like Halpern (aka Eli Rabbett).

    VS: Now that’s something I don’t endorse.

    It’s not something you endorse.
    It’s something you engage in.
    I’m actually interested in the statistics you bring forth here.
    But your assumed air of superiority and snide, sniping remarks are rather poisonous.

    VS: This MUST stop. The debate is being poisoned by individuals. I understand the pressures, and the fact that many of these individuals strongly believe that they are ’saving humanity’ and must ‘take action NOW’, but this tone isn’t helping at all.

    Surely you must agree with me here.

    More than you can believe.
    I hope this mirror has been helpful.

  148. VS Says:

    Marco,

    You people (yes, you Marco, Tamino and other Halpern-like types) are starting to become quite comical with your Realclimatesque ‘rebuttals’. If you guys knew anything about econometric/statistical inference methodology, you would know how to draw your conclusions…

    …and I would bother to respond to the PP test if doing so would actually serve a purpose. But it doesn’t. And like I stated earlier, I don’t have the time/energy for a spit-fight, especially with somebody, like Tamino, who doesn’t even understand how autocorrelation pollutes the ADF test (see also the subsequent post by Alex, who used several different entropy-based measures to determine lag length, and whose issues also went unaddressed).

    As for that BIC, I clearly rectified that here:

    Global average temperature increase GISS HadCRU and NCDC compared

    Just for the record, since some of the posters on Tamino’s blog seem to be confused. The BIC is also called the SIC. See here:

    http://en.wikipedia.org/wiki/Bayesian_information_criterion

    Tamino’s arrogance is startling. This ‘blogosphere’ apparently got to his head.

    The I(1) property has been firmly established in the literature, by both proponents (Kaufmann etc) and skeptics (Beenstock etc.). I’m not going to explain that to a C-grade statistician acting like he’s Heckman/Granger.

    You, however, are free to choose your priest.

    Hi Tim,

    You’re absolutely right that the random walk thing is a red herring. Apparently however, the audience knows so little about statistics that it actually works pretty well as a distraction.

    I would still be very careful about regressing different order integrated processes just like that. Just like any found relationship would be invalid, so is any rejection :) Furthermore, by incorrectly applying the techniques, Kaufmann et al (2006) actually ‘confirm’ AGWH.

    Still, I maintain that Kaufmann has the marks of a good statistician, and any honest mistake is just that, an honest mistake. I’m pretty sure he would admit that too, and he seems to know what he’s talking about.

    People like Tamino however, who don’t even want to understand what they are doing wrong are not worth my time.

    Hi Ron,

    Read Tamino’s style, and read my posts in this 40-page discussion here. You will note a difference in tone. Besides, I argued every single position in detail.

    Take the time, if you will.

  149. VS Says:

    PS.

    Ron, you really quoted me out of context there.

    (1) I stated that I am not passing judgement on the GT paper, but I also clearly stated where I got my ‘leaning’ from. Take a closer look at the posts you are quoting from.

    (2) Methodological disagreement is not an anti-scientific attitude. Calling people ‘bonkers’ and ‘idiots’ out of the blue, and stating that they are engaged in a ‘circle jerk’ and such things, is.

    (3) My post replying to Halpern was pretty harsh, but I believe anybody reading his blog entry, and understanding the lack of understanding of statistics he has, would conclude that he clearly asked for it.

    If my tone was insulting, I’m sorry, but the heat of the debate sometimes takes over. In general, I think I was being quite forthcoming.

    As for my remarks towards Tamino and Halpern: Again, I’m pretty sure both of them asked for it.

  150. Ron Broberg Says:

    I appreciate that.
    Thank you.

    I have no real qualm with people engaging in ‘string defense.’ But own up to it.

    It’s the “what’s wrong with you people, I’d never act like that” proposition that I object to. Yes, you would and yes you do.

    And I think I ‘get it.’ I think you come from a clique where things like making put-downs about data availability get a chuckle. And that’s OK as long as you don’t then turn around and try to play the ‘IZ A VICTUM’ card.

    Meanwhile, I learned more about the method being used from Tamino’s post than I learned here. That’s because Tamino was aiming his post at an audience with less statistical knowledge than you possess. But I appreciate you bringing this method forward and hope that you two can continue the discussion – even if by proxy. (I don’t hold it against you that you are cautious about continuing the discussion on his blog.)

    Do you have any comments on his use of Phillips-Perron?

  151. Ron Broberg Says:

    string=strong

  152. VS Says:

    Hi Ron,

    Actually that ‘IZ A VICTUM’ comment also evoked a chuckle over here ;) You have somewhat of a point there.. at the moment however, I was completely shocked by how rudely dismissive Tamino was… (see quote)

    I’ll try to get back to the Phillips-Perron test soon, but I actually had no time today (or in the coming few days) for even the postings I’m making now.

    In short, however, it’s a fringe test, with a higher rejection tendency. There are other tests as well that still confirm I(1). I’ll argue this in detail later, though.

    In the meantime, do note that Kaufmann et al (2006) would ‘love’ to have been able to conclude I(0) as that clearly fits their hypothesis (I(2) would have been great too for them, hehe, it’s the I(1) that’s really bugging :).

    But they didn’t, because, even though their analysis is somewhat ideologically tainted, they are still proper statisticians (and they have my respect for that).

    However, I’ll try to get on it ASAP. Honest and good natured interest is always welcome, and deserves to be addressed :)

  153. dhogaza Says:

    Reality:

    1. Satellite measurements of infrared emissions from the top of the atmosphere are consistent with physics-based expectations of CO2 absorption of long-wave radiation.

    2. The primary expected positive feedback is water vapor. Model predictions have been confirmed with detailed observations using the AIRS sensor on NASA’s Aqua satellite. The water vapor in the troposphere is responding as expected to changes in temperature.

    3. Downwelling infrared radiation has increased in a way consistent with the underlying understanding within physics.

    So, one of two things is true:

    1. A whole bunch of physics, backed up by observations, is wrong.

    2. A non-specialist who has conveniently found that a particular choice of statistical test suggested that observed temp trends are simply a random walk with no established trend perhaps made a boo-boo.

    Occam’s razor makes it easy to choose between the two.

  154. VS Says:

    Hi dhogaza

    Thank you for your contribution. However, you would be well advised to read up on the contents of this thread.

    First of all, the AGWH is not a fundamental model, it is a phenomenological one. The points (e.g. 1, 2, 3) you call ‘reality’ imply some correlation, but definitely do not constitute sufficient proof (in the eyes of a lot of people) – also in the natural sciences. Consequently, disproving the AGWH does not disprove physics. Your claim is too strong.

    Second, you would also be well advised to look at the source of those ‘convenient’ statistical techniques, before branding them as such.

    Any discussion on the topic must start with the acknowledgement of the possibility that the hypothesis in question is rejectable (i.e. might not be true), as well as the opposite. Holding your current position prohibits debate.

  155. Marco Says:

    VS:
    Gee, I point to the PP test, and you say it doesn’t serve a purpose to comment on that, Ron Broberg does the same and you say “I’ll come back to that”.

    With regards to arrogance: you come barging in, claim B&R refute AGW, and then do all kinds of grandstanding and call others arrogant when they point to various issues with the analysis. Talking about arrogance!

  156. Adrian Burd Says:

    VS,

    Just a small point, but I think you are being your own worst enemy here. Your posts are far less intelligible than those of your bête noire, at least to physicists such as myself.

    Now, this is no real excuse, but like most (all?) of us, I have limited time to spend learning about these things. I read Tamino’s post and can immediately see what he is saying. His language is clear and straight forward, his terms are defined, and he spends time to give clear examples of what he is talking about.

    On the other hand, your posts are opaque, hard to follow and jargon-filled. This is not to say that they are wrong, just very difficult to get to grips with. You dismiss others with phrases such as “so-and-so does not understand simple autocorrelation” and expect your readers to know what you’re thinking. I, for one, do not.

    So, with limited time on the part of the reader, guess who wins out?

    As a physicist who works in an interdisciplinary field, I come across the value of good communication on a daily basis. Not only do methodologies differ between subject areas, but frequently similar terms and concepts mean subtly (sometimes grossly) different things. Determining what those are and translating what you mean into terms that can be easily understood by others is a key to having one’s ideas successfully accepted in another field.

    I would contend that Tamino is successfully able to do this. So far, I do not see that you have, at least for me.

    I for one would dearly love to understand the subtleties of the different discussions being presented here and on Tamino’s blog. However, given my limited time and your opaque posts (as well as my dim brain), it will have to wait till the next lengthy plane ride – if ever.

    In passing, I will note that your very first post in this thread contained some quite harsh and dismissive language towards Bart in particular, and climate scientists in general. Such a tactic almost always will get up the hackles of those being so summarily dismissed.

    As for Bart, like Gavin over on RealClimate, I think he has been the paragon of politeness – kudos to him.

    Just my 2d worth.

    Adrian

  157. jfr117 Says:

    continue to ‘barge in’ VS, please. i enjoy seeing actual debate, rather than the typical zombie parroting that appears on these blogs. i have learned a lot seeing you go head to head with tamino. kudos for pushing the comfort level for everybody! although it’s like swallowing bad medicine, i vote for further interactions with tamino. he has built his own empire and it is good to see the self-imposed king challenged!

  158. dhogaza Says:

    Any discussion on the topic must start with the acknowledgement of the possibility that the hypothesis in question is reject-able (i.e. might not be true) as well as the opposite. Holding your current position, prohibits debate.

    If it’s not physically plausible, it ain’t going to work. Reminds me a bit of the fable of the mathematical proof that a bumblebee can’t fly.

    You’re an economist. The odds of your overturning a large body of well-established physics is statistically indistinguishable from nil.

    It’s DK all the way down, boys and girls.

  159. Scott Mandia Says:

    LOL, and I thought this wonderful thread had petered out!

    “As for Bart, like Gavin over on RealClimate, I think he has been the paragon of politeness – kudos to him.” Agreed!

    I have learned much from this thread and the subsequent one at OpenMind. I also agree that Tamino makes the stats easier to understand than VS.

    VS, what would you need to “see” in order to change your position about the random walk? Play your own devil’s advocate.

    I am with Bart, Marco, Arthur Smith, dhogaza and others who view this in the physical sense. Sorry for the simplicity but I see this as follows:

    The established GHG physics (Arthur Smith’s paper and not G-T’s paper) tells us that we should be warming. We are observing warming with multiple lines of evidence pointing toward increases in GHGs.

    Models are started back in time and then are run forward. These models produce the correct warming only when using Arthur Smith’s physics. Without GHG forcing, we cannot get this warming – in fact we should be cooling. These same models show that we are headed for climate change that will be faster than we can adapt. Uh oh.

    You come along and tell us that you think the paleo record is very suspicious due to the stats used to create them. BTW, are borehole T stats different? They show a “hockey stick”.

    Then you claim that the models are probably not very good. Because we cannot create another Earth to test GHG forcing, these models are the only method we have for “experimentation”. Throw them out?

    So we throw out physics, observations, and also the only way to test (models) because you have a stats method that shows little to no correlation between CO2 and T?

    Again, VS, what would you need to “see” in order to change your position about the random walk? Play your own devil’s advocate. I am truly curious what would change your mind.

    Here is what would change my mind but, alas, I cannot wait for the unlikely: the next 30 years show a decreasing trend in global temperature. If that happens, I will jump ship, so to speak.

    Sorry if this is a ramble, I am tired and hungry. :)

  160. Jim Eager Says:

    Actual Debate?

    Over at Tamino’s place VS wrote: “Look at the temperature series over the past couple [hundred] of thousand of years. Where do you see a trend? There is a cyclical movement, but a deterministic trend? Nope…”

    The man appears to be utterly ignorant of the Milankovitch cycles, the very real physical process that drives the trend that he cannot see.

    Talk about the arrogance of ignorance.

  161. Tim Curtin Says:

    Re dhogaza (12 March 18.17), you may be right in your (1), but in your (2) how do YOU know that the water vapor in the troposphere is responding to changes in temperature? Regression analysis of the climate data at Pt Barrow for July from 1960 to 2006 shows that it is water vapor that best explains the changes in mean minimum temperature there over 46 years (t=7.26, p=6.98E-09), whilst first-differenced changes in RF via [CO2] have only a negative but statistically insignificant effect, and although the cumulative growth of radiative forcing from [CO2] (IPCC definition) has a very slight positive impact, it is utterly insignificant (t=0.09, p=0.9). Remember what Einstein said about hypothesis testing? Then reread the contributions by VS here. The argument is perhaps not so much about the physics, but mainly about the significance and direction of claimed effects.
    However there is always the possibility of reverse causality, so perhaps it is as you claim, that changes in temperature change the level of tropospheric water vapor. The news is not good, as it appears that changes in mean minimum, not mean maximum, temperatures explain changes in water vapor. What is your physics’ explanation for that? Changes in mean maximum and average daylight temperature have no significant impacts on changes in water vapor, while as often radiative forcing has only a negative, but insignificant, effect.
    Moreover the significance levels are much higher for the water vapor effect on mean minimum temperature than vice versa. Any questions?
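
    The kind of regression being described can be sketched generically as follows (placeholder synthetic data, not the NOAA series; the point is only where the quoted t and p values come from):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        water_vapor = rng.normal(2.0, 0.3, 47)             # cm precipitable water
        tmin = 1.5 * water_vapor + rng.normal(0, 0.4, 47)  # synthetic response

        res = sm.OLS(tmin, sm.add_constant(water_vapor)).fit()
        print(res.tvalues, res.pvalues)   # the t and p statistics quoted in such claims
        # Caveat: with trending or autocorrelated series these statistics
        # are unreliable -- which is precisely the unit-root issue debated
        # throughout this thread.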

  162. dhogaza Says:

    how do YOU know that the water vapor in the troposphere is responding to changes in temperature?

    And I give you Tim Curtin … chuckles all around, boys and girls.

    Why do clouds form?

  163. Jim Eager Says:

    Why, cosmic rays, don’t ya know, dhog.

    The stupid, it burns.

  164. dhogaza Says:

    That’s a funny response, which I thoroughly enjoyed, Jim :)

    (if you’re going to abbreviate my handle, though, it’s “dho”, not TCO’s “dhog”, it’s a type of raptor trap invented by Arabs 1,000 or so years ago, a “dho gaza”, I do raptor banding field work).

  165. Rattus Norvegicus Says:

    I’ve been doing some not terribly deep thinking about the implications of the “random walk” theory of climate change.

    Now, if, as VS asserts, changes in past global temperature and the current change in global temperature are due to random walks, then given a suitable length of time shouldn’t some rather improbable realizations of this random walk have happened? Given 4.5 billion years, which is a very long period of time, shouldn’t the Earth have had a realization of the random walk which leads to Venus-like conditions? Now granted, my knowledge of statistics is related only to the evaluation of the odds of filling a poker hand vs. the odds being offered by the pot, but I was pretty good at that, and my gut can smell a bad hand when I see it.
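
    (A quick toy simulation of that intuition: a random walk’s typical excursion grows like the square root of the number of steps, so with nothing to bound it, long histories wander arbitrarily far.)

        import numpy as np

        rng = np.random.default_rng(11)
        for n in (100, 10_000, 1_000_000):
            finals = np.array([rng.normal(0, 0.1, n).sum() for _ in range(200)])
            print(n, "steps: typical |excursion| ~", round(np.abs(finals).mean(), 1))
        # Roughly 0.1 * sqrt(n): about 0.8, 8 and 80 "degrees" here.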

    Another thing that bothers me about VS’s arguments is his claim that climate models are “phenomenologically based”. I take this to mean that they are statistical models. He based this “argument” on the parametrizations used for subgrid-scale processes. Now, in the last paper I read on GISS Model E, this was around 6 parameters, all of which were based on experimental or observational evidence. The vast majority of the model is based on physics, and this holds true for all of the climate models in use. Some are better than others, but the fact remains that they are based on the physics of climate processes and not statistical relationships.

    So what do you say, BS, oops, VS?

  166. Al Tekhasski Says:

    Alan wrote on March 10, 2010 at 11:42

    “The assumption is invariably that climate scientists aren’t expert in a specific field and should ‘listen to the professionals’ because they are making goofy mistakes and their conclusions are suspect (and maybe dangerous). Do climate scientists all have the same, specific technical expertise?

    I don’t get it. I work in business – not science. And on major projects we pull together a team with experts in specific disciplines … no one subject matter expert is sufficient. It must be the same with climate science research, surely!?

    Research and analysis projects would involve a team. I imagine there is consultation with ‘design of experiments’ experts; instrumentation experts; modelling experts; model coders … and, dare I say it, time series analysis experts.”

    You are absolutely correct: climate change is not a science, it is a project, an applied project. But you are imagining too much. As you said, being a major complex project, it is a conglomerate of various disciplines. Unfortunately, the physical conditions of this project usually reside outside the established margins of the respective precise sciences. This creates quite a few “inconveniences” for true experts in the corresponding disciplines.

    For example, all data fields in climatology are undersampled by a factor of 100 at least, so honest experts in data analysis would not touch them with a 30-foot pole, nor attest to the accuracy of conclusions. If we take the computational aspects of the project, it requires a million-fold increase in computing capability, so experts in Computational Fluid Dynamics would simply walk away. If we look at attempted parameterization of turbulent processes in the atmosphere, the predictive power of primitive theoretical models is nil, while experimental validation of parameterizations would require about a quarter-million years to match the quality of parameterizations in industrial CFD. If we look at attempted modeling with ensembles of climate trajectories – same thing, they have no clue about the topological properties of the system and the fractal-like complexity of its state space. If we look at the carbon cycle and ocean surface-air exchange, it is again apparent that information about the highly variable instantaneous fields of the relevant physical quantities (concentration gradient, gas exchange coefficient, wind) is completely missing, which prevents any estimation of CO2 fluxes from being meaningful. Etc, etc.

    In short, the climate change problem is a conglomerate of disciplines where each one must be applied outside its conventional, experimentally verified range, where conclusions become highly speculative and uncertain. Only ignorant or intellectually dishonest individuals are embarking on this global problem, where they have to “cut corners” and accept many things as “good enough”, or even “hide the decline”. So the problem is not approached as a business project; it is run mostly by dilettantes, not experts. Maybe because there is no problem at all, just smoke and mirrors from some ambitious individuals who were historically unfortunate enough not to own oil fields.

    [Reply: Not only do you seem to come from an entirely implausible conspiracy theory angle, you’re trying to paint a whole scientific field as incompetent or even dishonest. No more of that. BV]

  167. Tim Curtin Says:

    d…hog, for that is what you are, with your belief (at Tamino’s) that temperature anomalies are not related to actual temperatures. What I asked for was your regressions, especially the recursive ones that show all the feedback processes, such as from temperature to water vapor to temperature, and how CO2 initiates this circularity.

    And of course anomalies at any given place depend on the average of the actual temperatures for that place’s reference period, but then Hansen (1987) averages anomalies over and over again, so it is questionable whether GISS means anything at all. The main purpose of the anomalies when multiplied up by 100 as by GISS is to make trivial temperature changes, like 0.7oC since 1900, seem incredibly ominous.

    As Al implies, GISS is run mostly by dilettantes, like the Hog – but I would like to see him in action with his raptors, sounds great.

  168. VS Says:

    Hi guys,

    For everybody doubting the ‘physical reality’ of the random walk model, and for all those who are conveniently ‘ignoring’ my boundedness argument, here you have it from a proper physicist (he just responded to Tamino’s ‘response’ to my comments). I hope he is a bit better at communicating it than I am:

    http://motls.blogspot.com/2010/03/tamino-vs-random-walk.html

    And since everybody is so big on ‘authority’ in these spheres (however irrelevant that is, but OK), here’s his profile:

    http://en.wikipedia.org/wiki/Lubo%C5%A1_Motl

    I encourage everybody who’s planning on writing something highly intellectual like ‘You suck! Economics sucks! Physics rocks! Climate is not RANDOM! Oh, and VS, if you didn’t hear it the first time around, you SUCK!’ to read this blog entry before posting.

    ———————————

    Also, any critical reader will have already spotted this comment by Alex on Tamino’s blog. It is definitely worth a look (Tamino ‘dances’ around the matter in his follow-up, instead of replying properly), as Alex (who should really quit posting there and come over here :) clearly showed that Tamino engaged in some extraordinary cherry picking when presenting his results.

    In particular, this part:

    “I downloaded the data myself (including the mean for 2009) and performed several DF-tests with a drift and a linear trend. The results really depend on the selection criteria one uses. The results I get are:

    Lag selection     H0: unit root (p-value)   Trend variable   p-value trend variable
    AIC               0.4301                    0.148415         0.0111
    BIC               0.0001                    0.239066         0.0000
    Hannan-Quinn      0.4301                    0.148415         0.0111
    Modified Akaike   0.9246                    0.119943         0.0928
    Mod. Schwartz     0.8237                    0.124735         0.0559
    Modified H-Q      0.9246                    0.119943         0.0928

    As one can see, only when the BIC selection criterion is applied is the null hypothesis of a unit root rejected. However, looking at the residuals of this test equation, there clearly is autocorrelation present, which makes this test invalid (as explained in the above article). When all the other selection criteria are used, no autocorrelation seems to be present in the residuals, and the null hypothesis is not rejected. Now, I don’t believe that someone can write such a good article on unit root testing and subsequently fail to look at different selection criteria. Seems like the author was cherrypicking himself! :)”

    I think that quote speaks for itself.
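
    Alex’s residual diagnostic can be reproduced along these lines (a sketch: the ADF test equation is fitted by hand so its residuals can be Ljung-Box tested; the input file is a placeholder, and recent statsmodels returns the Ljung-Box result as a DataFrame):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import acorr_ljungbox

        y = np.loadtxt("gistemp_annual_anomaly.txt")   # hypothetical file
        dy = np.diff(y)

        def adf_residuals(y, dy, k):
            # dy_t = a + b*t + rho*y_{t-1} + sum_j c_j dy_{t-j} + e_t
            cols = [np.ones(dy.size - k), np.arange(k + 1, y.size), y[k:-1]]
            cols += [dy[k - j: dy.size - j] for j in range(1, k + 1)]
            return sm.OLS(dy[k:], np.column_stack(cols)).fit().resid

        for k in (0, 3):   # 0 lags (what BIC picked) vs. a richer lag set
            p = acorr_ljungbox(adf_residuals(y, dy, k), lags=[10])["lb_pvalue"].iloc[0]
            print(f"{k} lags: Ljung-Box p = {p:.3f}")
        # A small p at 0 lags signals leftover autocorrelation, i.e. the
        # test equation is misspecified there; added lags should raise p.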

    ———————————

    As for that PP test, use some common sense here, please. For starters, take a look at the literature on the topic (in my first post):

    ** Woodward and Grey (1995)
    – reject I(0), don’t test for I(1)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Beenstock and Reingewertz (2009)
    – confirm I(1)

    Indeed, Kaufmann and Stern (2000), two AGWH proponents also find, using the ADF and KPSS tests, an I(1) process, and using the PP and SP tests, an I(0) process. However, in Kaufmann et al (2006) they treat the variable GLOBL (global mean temperature) as an I(1) process.

    Guess why they came to that conclusion, in light of all the tests?
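
    The two-sided testing strategy in that list can be mimicked with statsmodels (a sketch: ADF and KPSS have opposite nulls, which is what makes the combination informative; the input file is a placeholder):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller, kpss

        y = np.loadtxt("annual_anomaly.txt")   # hypothetical series

        adf_p = adfuller(y, regression="ct")[1]    # H0: unit root, i.e. I(1)
        kpss_p = kpss(y, regression="ct")[1]       # H0: trend-stationary, i.e. I(0)
        print(f"ADF p = {adf_p:.3f} (H0: I(1)); KPSS p = {kpss_p:.3f} (H0: I(0))")
        # ADF failing to reject while KPSS rejects is the pattern read as
        # I(1); when tests disagree the other way (as PP and SP do above),
        # the verdict is ambiguous -- which is the crux of this dispute.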

    ———————————

    Finally, I saw some comments on Tamino’s blog about me ‘writing off AGWH’ in a ‘couple of paragraphs’, while Tamino delivered ‘2000 words’ with I don’t know how many formulas (which truly resemble an undergraduate TSA textbook; if you want to see TSA as you are supposed to see it, take a look at a standard graduate textbook like James D. Hamilton’s Time Series Analysis, Princeton University Press).

    In any case, I just did a word count on how much I posted here in this thread: over 13,500 words in total.

    Now I ‘understand’ that various paladins, who just got hotlinked to this page through Tamino’s blog, want to start fresh ‘fights’. However, do take the time to read the entire argument first before barging in.

  169. Tim Curtin Says:

    Well said, VS!

    Meantime, here’s my comment on Barts’ post “Nobody [sic] expects a perfect correlation of global avg temp with CO2, due to eg weather related variability and the fact that CO2 is not the only climate forcing. That said, the correlation coefficient between the two variables (taking ln(CO2)) is 0.87 (0.77 if autocorrelation of the residuals is taken into account). With any solar index the correlation would be much lower [!!!!]. And as I stated before, physically the trend must be deterministic, otherwise it is inconsistent with other observations and/or conservation of energy”.

    But (1) the IPCC, hardly “nobody”, claims exactly that, “a perfect [better than 90%] correlation of global avg temp with CO2”.

    And (2) Bart claims “the correlation coefficient between the two variables (taking ln(CO2)) is 0.87 (0.77 if autocorrelation of the residuals is taken into account)”. This shows that Bart has no clue about the I(0), I(1), and I(2) factors. His correlations are all bogus, in terms of just the basic Durbin-Watson statistic. What Bart has to show us is his correlations between ln(CO2) and temperatures anywhere on the planet. Take care: I now have a large database showing that not to be the case anywhere in the USA, Australia, or the UK.

  170. Alan Says:

    Al Tekhasski at https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1442

    I did not say, and did not mean to imply, that “climate change is not science”. I most certainly think it is. I don’t think it’s a single discipline … it’s more like a field (or an atmosphere!). Of course, research ‘projects’ are the modus operandi, but I wouldn’t class climate change as just an ‘applied project’.

    Other aspects of your post are a puzzle to me … I am not sure what you’re driving at. My guess is that you are saying that, if we don’t have the computer power to run models that simulate eg Navier-Stokes at the micro level, then it is just plain wrong to run models at the (relatively) macro level (like climate models) and to rely on their output.

    Is that what you are really saying?

    If so, then would you drive across a bridge which was designed before finite element computer analysis became a usable practice for engineers?

  171. Alan Says:
  172. Alan Says:

    There we go … don’t put a comment between “”!

    What I posted was a plea for a “Help” section to figure out how to format my comments better! And please don’t tell me I have to learn html.

    Apologies for causing post alerts to be sent to you all.

  173. Marco Says:

    Good grief, VS, now you are referring to Lubos Motl?

    Lubos is a theoretical physicist who is known for one reason, and one reason only: he’s a contrarian. A really foul-mouthed contrarian at that. You complained about the language previously, and now you point to Lubos as a credible source. Quite the contradiction, VS!

  174. VS Says:

    Marco,

    I’ve been looking over your comments in this thread, and I can conclude but one thing: you are not adding anything to the discussion, exemplified by your use of terms like ‘contrarian’.

    He laid out a physical argument – the very thing YOU OF ALL PEOPLE were asking for.

    Join this discussion on an adult level, or bugger off. Seriously, you are starting to annoy.

  175. VS Says:

    PS. As far as I gathered, Lubos is better known for having been an assistant professor in theoretical physics at Harvard.

  176. Scott Mandia Says:

    Lubos Motl summarizes:

    the AGW cultists want to deny any history of the climate before the year 1850 or so.

    Seriously? He destroys all his credibility with this statement alone.

  177. dhogaza Says:

    PS. As far as I gathered Lubos is more well known for having been an assistant professor in theoretical physics at Harvard.

    Actually, he’s even better known for being an *ex-*assistant professor at Harvard.

  178. dhogaza Says:

    Now I ‘understand’ that various paladins, who just got hotlinked to this page through Tamino’s blog, want to start fresh ‘fights’. However, do take the time to read the entire argument first before barging in.

    The argument is unphysical. Either much of physics is wrong, or one economist is right that his statistical test proves that CO2 doesn’t absorb LW IR (observed), that higher air temps over the ocean don’t lead to increased evaporation, that water vapor isn’t a GHG, etc etc.

    Tamino’s published a fair amount; he’s a professional statistician who does time series analysis for a living. So here we have one professional and one self-claimed expert disagreeing, but one’s findings are consistent with known physics, the other’s not.

    Again, easy choices.

    Motl’s a string theorist, not a trained statistician, BTW.

  179. VS Says:

    Oh Scott, please!

    Grant Foster (aka Tamino) doesn’t lose any credibility when he is obviously:

    -cherrypicking information criteria to fit his hypothesis (1 out of 8! And diagnostics show it isn’t valid in that one case!)
    -ignoring all the established findings in the literature (e.g. Kaufmann)
    -misstating the relationship between unit roots and AR processes on his previous blog entry (to AR(1) or not to…)
    -vomiting over an entire field of science (econometrics/economics)

    Josh Halpern (aka Eli Rabbett) doesn’t lose any credibility when he:

    -fails to even get the basic definition of an integrated series right (he is still maintaining it refers to how they ‘increase’, which means he didn’t even bother to consult Wikipedia)
    -he proceeds to foulmouth both Beenstock and Reigenwertz personally, as well as their discipline, for being clueless

    …but when Motl uses harsh language in an otherwise technical blog entry – a type of post that you yourself demanded – you simply ignore his concerns and get ‘offended’ by one unrelated sentence.

    I was actually looking forward to your comment, because I didn’t classify you as a ‘soldier’, like e.g. Marco.

  180. VS Says:

    dhogaza,

    So a qualified statistician can’t say anything about it because he’s no physicist? And a physicist can’t say anything about it because he’s no statistician?

    How about you respond to arguments, or not respond at all?

    Thank you for ‘polluting’ the discussion.

  181. VS Says:

    Ahem, re post to Scott

    ‘1 out of 6 information criteria’ :)

  182. dhogaza Says:

    It is rude to post people’s real names when they prefer to post anonymously, even if it’s easy to find out who they are. That puts you on my shit list (along with Motl).

    Motl makes no real point in his post. The fact that the climate system doesn’t exhibit white noise is a given. As far as his summing up goes:

    More generally, I want to emphasize that a working predictive theory of the climate can never “neglect” the “natural variability”.

    He implies that mainstream climate science does, which he knows is not true.

    Strawman.

    Even if there were an important man-made trend, it’s clear that the climate variability is important, too. It’s damn important to know how large it is and how it depends on the timescale (and distances), so that we also know the typical timescale where the “man-made linear trend” could start to beat the natural variability.

    True enough. The rule-of-thumb from WMO has been, for many decades, a 30 year time frame for climate changes on a non-geological timescale.

    We surely know that the timescale from the previous sentence is longer than 15 years because it’s pretty easy for the natural factors to beat the man-made effects for 15 years because there’s been no statistically significant warming since 1995.

    Which is totally consistent with the mainstream view of looking over 30 years, rather than 15.

    But it may be much longer.

    Motl knows this is a false statement, because he recently said that the reason to ask Jones whether or not there’s significant warming from 1995 to present was because there *is* significant warming from 1994 to present.

    So even if the man-made trend existed and were large, it’s completely self-evident that most of the research concerning “climate change” would have to focus on the variations which are obviously of natural origin

    Motl knows that much of climate science *is* focused on understanding natural variations. It’s another strawman.

    BTW, VS, strawman arguments are a form of dishonesty, something I’m sure you don’t support.

    You can learn exactly nothing if you deny all of this and you only focus on some hypothetical, politically motivated term that only existed for 100-150 years. The Earth’s long history doesn’t deserve to be denied in this way.

    Again, strawman. He’s lying out his ass here by implying that climate science denies or doesn’t study the earth’s long history.

    The last two paragraphs, which I won’t post here, are equally disconnected from reality.

    I’m sure, though, that VS is convinced that climate science ignores natural variability, the noise structure found in climate, forcings other than CO2, paleoclimate, etc.

    Because Motl has said so.

    As noted above, VS seems unaware of Milankovitch cycles. My guess is he knows next to nothing about the physical aspects of climate, and therefore doesn’t have the background to understand that Motl has built an army of strawmen to demolish.

    Motl, of course, depends on such ignorance. He hopes that, by implying that climate science ignores climate changes that happened over geological timescales, people who are unaware of how comprehensive climate science is will fall for his lie.

    As VS has apparently done …

  183. dhogaza Says:

    So a qualified statistician can’t say anything about it because he’s no physicist?

    You have nothing to say because you know nothing of the physical science.

  184. Adrian Burd Says:

    VS,

    I’m sorry, but you still continue to do yourself no favors here (or elsewhere). Yes, maybe I should make the time to read the papers you refer to in detail, but I don’t have it (I have my own research to deal with, as well as administration, teaching, etc.).

    So I ask you once again, please explain your points simply.

    As for the physics articles you have referred to, well, the first two were by people whose arguments were well demolished. As for Lubos, to my knowledge he has little to no standing amongst the physics community, having gone off the deep end long ago. If you had searched beyond the first two or three links that Google turned up, you would have discovered that for yourself. You would also have discovered that he “resigned” his position as an assistant professor at Harvard some time ago, and did so under something of a cloud.

    So, since you are unable or unwilling to present your arguments in a way that is readily understandable to someone like me (call me stupid if you like), I will side with the physics community on this one.

    Adrian

  185. Enough Already Says:

    I would urge the moderator of this blog to cut off the troll and return this normally good blog to the reality-based universe.

  186. Scott A. Mandia Says:

    If this is a war, then I would call myself a soldier and I am pretty confident that my side not only has superior numbers of troops but much better weapons. :)

    Of course, one must still know thine enemy.

    Anyway, my comment about Motl is that it is absurd to think that climate scientists (and all others in related fields) do not consider data before 1850. If that were true, then why all the hockey stick fuss?

    Is it true that the main disagreement between the random-walk hypothesis and the physical-data arguments comes down to the length of time considered?

    If so, then we have a problem, because it would be like comparing deaths from being hit by horses in the past 2,000 years vs. deaths from being hit by cars in the past 2,000 years. No?

  187. Paul Tonita Says:

    So, to sum it all up, Thermodynamics Schmermodynamics. It’s all random! Maybe Toyota can use this random walk bit to explain their faulty accelerators. They’ll be very pleased to hear this!

  188. Marco Says:

    @VS:
    One physicist is not the same as another. That is, a theoretical physicist does not automatically understand thermodynamics. In fact, we have several examples of some who simply don’t (Gerlich and Tscheuschner to start out with). Motl has made a career out of being a contrarian, not just on climate change. Just check his site and look at his comments about string theory. And yes, he once was a promising scientist. Look at what kind of person this once promising scientist is here:
    http://backreaction.blogspot.com/2007/08/lubo-motl.html
    Be sure to read the Word document with some examples of Motl’s way of arguing. And then you complain about me.

  189. jfr117 Says:

    from motl’s blog: “When we say that a function, “f(x)”, resembles “white noise”, it means that its values at different values of “x” are random and independent from each other. Such functions are inevitably completely discontinuous. If we use them as a model of temperatures, the temperature in the next year has nothing to do with the temperature of the previous year. It can suddenly jump to the temperatures seen in 1650.”

    i had never thought about the assumptions inherent in the variability. but this assumption (trend plus white noise) does not make physical sense for temps. what are the similar assumptions for red and pink noise?

  190. dhogaza Says:

    i had never thought about the assumptions inherent in the variability. but this assumption (trend plus white noise) does not make physical sense for temps.

    No, there’s nothing controversial about that part of Motl’s post, either.

    what are the similar assumptions for red and pink noise?

    Tamino's posted a bunch of stuff over time regarding noise and temperature.

    Here's one that touches on it, but there are more detailed ones over there.

    http://tamino.wordpress.com/2007/09/21/cheaper-by-the-decade/
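
    To make the distinction concrete, here is a minimal sketch (in Python, not from any of the linked posts) contrasting white noise with AR(1) “red” noise: in red noise each value remembers the previous one, so the series cannot jump arbitrarily between years in the way the quoted passage describes. The parameter phi is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n, phi = 130, 0.6                      # series length; AR(1) memory (illustrative)

        white = rng.normal(size=n)             # independent draws: no year-to-year memory
        red = np.zeros(n)
        for t in range(1, n):                  # AR(1): x_t = phi * x_{t-1} + noise
            red[t] = phi * red[t - 1] + rng.normal()

        # Lag-1 autocorrelation: ~0 for white noise, ~phi for red noise
        for name, x in [("white", white), ("red", red)]:
            r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
            print(f"{name:5s} lag-1 autocorrelation: {r1:+.2f}")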

  191. jfr117 Says:

    @ Scott

    i am guessing but i think motl was referring to the smoothed nature of recent historical temp. anomalies that have made events such as the mwp and lia go away…and the recent warming look very large in comparison.

  192. Bart Says:

    VS,

    I have yet to see your reply to my new post, where I outline that it is evident on physical grounds that the increase in global avg temp is not merely random: A random increase would cause a negative energy imbalance or the extra energy would have to come from another segment of the climate system (eg the ocean, cryosphere, etc). Neither is the case: There is actually a positive energy imbalance and the other reservoirs are actually also accumulating energy.

    How do you reconcile this with the hypothesis of a random walk?

    Moreover, there is a known positive radiative forcing. You’d have to explain how it’s possible that an enhanced concentration of GHG in the atmosphere would *not* lead to warming; it contradicts what we think we know about the physics.

  193. Al Tekhasski Says:

    All energy comes from the Sun; it just passes through the climate system. Therefore, an energy imbalance could be anything, and air parameters can and will walk up or down with changes in “effective atmospheric thermal resistance” and interplays between local (read: oceans, soils, ice) heat “capacitors”/reservoirs. For example, the temperature in a pot on a slow stove fluctuates like turbulent hell while remaining reasonably bounded, all by the same physics.

    The importance of a “positive” energy imbalance due to CO2 “radiative forcing” is highly questionable because, for example, the Earthshine experiment has detected a long-term drift in albedo, from 0.319 in 1995 to 0.297 in 1999, a change of 2.2 percentage points. This would be the equivalent of a CO2 quadrupling. And their reconstruction of albedo from the ISCCP cloud database shows a staggering 10% anomaly (or ~3% global) from 1986 to 1998.

    The “known positive” radiative forcing is the result of theoretical estimates based on tropical (and near-tropical) abstract models of the atmosphere with dubious cloud parametrization, a result contrarians cannot reproduce without the full original information. Even then, all this alleged forcing is just 1/20th of the energy imbalance from the recorded drastic albedo changes, which have no explanation from climatology and are ignored for simplicity. So nothing contradicts anything in this dirty application with no apparent small parameters, such that it is not possible to cleanly apply the classic methods of physics.
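
    A back-of-envelope check of the “quadrupling” equivalence claimed above (a sketch assuming a solar constant of ~1361 W/m^2 and the canonical ~3.7 W/m^2 per CO2 doubling; those inputs are assumptions, not figures from the comment):

        S = 1361.0                       # assumed solar constant, W/m^2
        d_albedo = 0.319 - 0.297         # reported earthshine albedo change
        dF_albedo = (S / 4) * d_albedo   # change in globally averaged absorbed solar, W/m^2
        dF_2xCO2 = 3.7                   # canonical forcing per CO2 doubling, W/m^2

        print(f"albedo forcing ~ {dF_albedo:.1f} W/m^2 "
              f"~ {dF_albedo / dF_2xCO2:.1f} CO2 doublings")
        # ~7.5 W/m^2, i.e. roughly two doublings (a quadrupling), as claimed.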

  194. Adrian Burd Says:

    Al,

    Please go and read

    http://www.skepticalscience.com/earth-albedo-effect.htm

    To summarize what is there:

    The changes in albedo inferred from earthshine do not entirely agree with those from satellite measurements. The latter are whole-planet measurements whereas the earthshine measurements are only in the 0.4-0.7 micron wavelength band. Satellites show little to no trend in albedo from the year 2000 onwards, whereas earthshine shows an increase between 1999-2003, and little to no trend since then.

    Lastly, do you think for a minute that climate scientists are sufficiently stupid and ignorant (or duplicitous) to not include albedo? Changes in land use, atmospheric aerosols, clouds etc are taken into account in calculating the radiative forcings. This is all abundantly clear in Chapter 2 of the WG-1 AR-4 IPCC report.

    I sometimes think that otherwise intelligent people go searching for the slightest thing that might bolster their claims, come across some site such as WUWT, and repeat what they’ve seen. Instead, they should spend time reading the literature.

    I have changed fields (from theoretical physics to marine science) and I know that it takes a long time, lots of effort and hard work to become knowledgeable and be able to contribute significantly to a second field. People seem to forget this, even though they have presumably put in the time and work to become expert in their own discipline.

    Now, I for one would love to learn more about cointegration, unit root tests etc. so that I can assess the arguments being presented here, as well as perhaps make use of them in my own work. So again, VS, perhaps you can enlighten us all as to how precisely these things work and why you think there is a problem.

    Adrian

  195. Pat Cassen Says:

    In addition to Adrian Burd’s recommendation, Al should read the comprehensive review by Wild: “Global dimming and brightening: A review”
    http://www.leif.org/EOS/2008JD011470.pdf
    “Recent brightening cannot supersede the greenhouse effect as the main cause of global warming, since land surface temperatures overall increased by 0.8°C from 1960 to 2000, even though solar brightening did not fully outweigh prior dimming within this period…”
    The story is nowhere near as simplistic as Al would have it.

  196. Al Tekhasski Says:

    Adrian Burd wrote: “The changes in albedo inferred from earthshine do not entirely agree with those from satellite measurements. The latter are whole-planet measurements whereas the earthshine measurements are only in the 0.4-0.7 micron wavelength band. Satellites show little to no trend in albedo from the year 2000 onwards, whereas earthshine shows an increase between 1999-2003, and little to no trend since then.”

    I think you are confused. It is earthshine that (a) covers the entire Earth at once and (b) uses the right light range. Satellites, in contrast, require a lot of effort in the interpretation of received brightness. They use either swaths of limited view fields, or need orbit correction, and calibration target correction, and satellite body temperature correction, and diurnal correction, and inverse scattering correction (weighting function), and who knows which other correction. The results have to be reconstructed from many pieces of incoherent and noisy data. It is no better than surface garbage, and is subject to wild wishful interpretations.

    Why do I think that AGW climatologists are negligent about albedo? Because (a) they (at least RC advocates) frequently state that albedo is constant and “well known”, and (b) the albedo effect is nearly two orders of magnitude bigger than the entire alleged doubling in CO2, yet no historical data are available from the distant past, and cloud cover is frequently a fudge parameter that allows models to fit known surface data. Yet they believe that they can calibrate their models without the most important data about albedo.

    Adrian also wrote: “I have changed fields (from theoretical physics to marine science) and I know that it takes a long time, lots of effort and hard work to become knowledgeable and be able to contribute significantly to a second field.”

    Sorry to ask, but why would someone “change” theoretical physics to marine stuff? Do they pay more, or something else?

  197. Al Tekhasski Says:

    Pat Cassen wrote: “Al should read the comprehensive review”

    I appreciate this pointer to real (typical) climatology. It is an impressive amount of effort; one can only guess how much it cost taxpayers.

    First, let me remark that global SSR is not the same as global albedo. Second, I certainly agree that the story is not that simplistic. Unfortunately, I can easily point out one obvious source of discrepancies and incoherency. The review mentions that “To date more than 30 anchor sites in different climate regimes provide data at high temporal resolution (minute data).”

    Now consider this. Weather patterns have a characteristic spatial variability of the order of 50 km. Therefore, in accordance with the Nyquist sampling theorem, one needs a spatial sampling grid of 25 x 25 km to capture representative statistical properties of the climate field. The Earth’s surface is 5*10^8 km2, so the global climate data acquisition grid should have about 800,000 sensors equally spaced around the globe. The SSR network has 30. Give me a break.
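
    The sampling arithmetic above, spelled out as a sketch (it takes the claimed ~50 km weather decorrelation scale as given, without endorsing it):

        earth_surface_km2 = 5e8          # ~5 x 10^8 km^2, as above
        grid_spacing_km = 25             # half the claimed ~50 km weather scale (Nyquist)
        sensors_needed = earth_surface_km2 / grid_spacing_km**2
        print(f"{sensors_needed:,.0f} sensors")   # ~800,000, vs the ~30 anchor sites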

  198. Tim Curtin Says:

    dhogaza: Tamino outed himself as Grant Foster at RC when, as “guest poster (sic)” on 16 September 2007, he proceeded to plagiarise (if he was not one of the authors) the paper by GF, Annan, Schmidt and Mann which had been submitted to JGR on the 10th; the paper attacked Stephen Schwartz’s paper in JGR before that had even appeared. Tamino’s graphs required direct access to the data in GF et al., and it would certainly be very odd for Gavin Schmidt to commission the guest posting if not from his co-author, who at one point uses the term “we”, confirming that “Tamino” was the lead author. There is no harm in any of this, but you are wrong to accuse VS of outing GF (or Halpern, long known to be most likely the Rabbett). What is reprehensible is the way “Tamino” hides who he is from most of his readers while maligning others who do use their real names (like Anthony Watts, to name just one). One suspects the real reason for GF to continue modestly using his Tamino soubriquet is that he has very little to be modest about.

  199. Tim Curtin Says:

    It is surprising to find Bart (March 12 at 14.11) citing against me the claim by a certain Barton Paul Levenson that the correlation coefficient between global mean temperature and CO2 is an amazing 0.87. Alas, BPL’s frequent contributions to the Deltoid blog all too often betray a lack of statistical training, and his “results” as cited by Bart have never been published, let alone peer reviewed. The same appears to be true of Bart’s own training, as BPL’s use of ln(CO2) instead of CO2 itself, whilst approved by Bart, is a nonsense, and does nothing to improve the true outcome, even if the use of ln is apparently what the IPCC does when deriving its “radiative forcing”.
    Anyway, the adj. R2 is actually 0.64 for your source’s 1880-1998 series, and 0.76 on his data from 1880 to 2007, but that is without setting the constant = 0, as one should. Doing that, the adj. R2 vanishes to MINUS 0.05 (for 1880-2007) and the coefficient ceases to be statistically significant, both for actual CO2 and for ln(CO2).
    In short, to cite BPL’s absurd “result” when it is in flagrant disregard of all the high-powered tests for spurious correlation cited elsewhere on this blog is astonishing; his “result” does not even pass the Durbin-Watson test.
    All the same, Bart, your blog has otherwise been rewarding for many of us.

  200. Bart Verheggen Says:

    Tim Curtin,

    The temperature effect of CO2 is approximately logarithmic (hence the sensitivity is defined per doubling of CO2 rather than per ppm). The same relation holds over a certain interval of concentrations, but not all the way down to zero, where the relation becomes close to linear, I believe. Thus doing a correlation while forcing the intercept to zero would be wrong, and the way BPL did it is correct as a first approximation.

    Also, we’re talking about *global* warming, so correlations for individual locations don’t interest me much.
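
    A sketch of the approximately logarithmic relation described above, using the standard simplified forcing expression dF = 5.35 ln(C/C0) W/m^2 (Myhre et al. 1998); the sensitivity parameter lam below is illustrative, not a fitted value:

        import math

        def forcing(c_ppm, c0_ppm=280.0):
            """Radiative forcing (W/m^2) relative to a 280 ppm baseline."""
            return 5.35 * math.log(c_ppm / c0_ppm)

        lam = 0.8  # illustrative sensitivity parameter, K per (W/m^2), an assumption
        for c in (280, 390, 560, 1120):
            print(f"{c:5d} ppm: dF = {forcing(c):4.1f} W/m^2, dT ~ {lam * forcing(c):3.1f} K")
        # Each doubling adds the same ~3.7 W/m^2, hence “per doubling, not per ppm”,
        # and hence forcing a zero intercept on a T-vs-CO2 regression is mis-specified.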

    Al Tekhasski,

    The Earth’s climate remains constant if incoming and outgoing radiation equal each other, and it changes when there’s an imbalance, which is currently the case, in line with what would be expected from an enhanced greenhouse effect (i.e. more infrared being radiated back to the surface and less escaping to space).

    Please refrain from setting up strawmen arguments and making broadbrush accusations of scientists; I’m not interested.

  201. dhogaza Says:

    Tamino outed himself as Grant Foster at RC when as “guest poster (sic)” on 16 September 2007 he proceeded to plagiarise (if he was not one of the authors) the paper by GF, Annan, Schmidt and Mann

    Schmidt invited him as a guest to discuss one of Schmidt’s papers, and you’re accusing him of *plagiarism*?

    Fortunately, everyone with a three-digit IQ knows that Tim Curtin is

    1. a liar

    2. ignorant

    3. a fool

    so no harm is done. I’d be wary of libeling people in the UK if I were you, though.

  202. VS Says:

    And if you had that ‘3 digit IQ’ yourself, you might actually be able to read:

    ” to plagiarise (if he was not one of the authors) ”

    Now bugger off, troll.

  203. dhogaza Says:

    Here’s the post Tim Curtin’s referring to.

    It is posted by Tamino, and nowhere does it reveal Tamino’s real identity.

    Therefore, this:

    Tamino outed himself as Grant Foster at RC when as “guest poster (sic)” on 16 September 2007

    would appear to be a false statement.

  204. VS Says:

    Which part of ‘bugger off troll’ do you fail to understand? The ‘bugger off’ or ‘troll’?

  205. dhogaza Says:

    VS lives! He doesn’t answer any of the questions put to him, or rebut any of the posts showing that he’s ignorant of climate science, but he lives!

    Definition of plagiarism:

    v. tr.
    To use and pass off (the ideas or writings of another) as one’s own.

    To appropriate for use as one’s own passages or ideas from (another).

    Note that “plagiarism is whatever Tim Curtin decides it is” is not one of the dictionary definitions.

    Got any proof that when Schmidt invited Tamino to guest post that Tamino plagiarized Schmidt?

    I didn’t think so.

    The fact that you’re impressed with serial liars like Motl and Curtin says a lot, VS.

  206. dhogaza Says:

    Which part of ‘bugger off troll’ do you fail to understand? The ‘bugger off’ or ‘troll’?

    Which part of “this is not your blog” do you fail to understand?

    Bart, looks like VS is backed into a corner and is fighting like a rabid bat …

  207. dhogaza Says:

    And by the way, VS, you are more than welcome to chase the link I provided and to prove that Tim’s telling the truth when he says that Tamino “outed himself” in that post, or the thread that follows.

  208. jfr117 Says:

    “He doesn’t answer any of the questions put to him, or rebut any of the posts showing that he’s ignorant of climate science, but he lives!”

    dhogaza…are you serious? vs has engaged every comment that i have read in some way, shape or form. i don’t know how he has the time to do it. it took me four hours to read this entire thread yesterday.

    to his credit he has tried to remain on point, as much as you can in what is the wild west. to pretend he has not brought forth pertinent points is either dishonest or means you haven’t read the whole thread.

    vs has raised an interesting statistical question. since tamino has spent years considering the temperature series via his statistics – to suddenly say that statistics don’t matter when they raise a potentially different conclusion raises questions for ‘skeptics’, since that moves the goalposts when the answer changes.

    i have no idea who tamino is, but wouldn’t the real grant foster have an issue with being labeled as tamino if he was, in fact, not tamino? i assume the real grant foster is in the ‘climate science’ field (whatever that means) and is aware of tamino’s blog.

  209. VS Says:

    Bart,

    Indeed, please bring this under control. I thought we were having a normal conversation, and then suddenly a whole horde of these types comes along (it had to do with the Tamino link, I believe; before that everything was going more or less fine).

    They add absolutely zero to the discussion and just keep insulting everyone, on the assumption that they have you to back them (see also above). I cannot imagine that you agree with that. We may not agree with each other, but we could certainly still share a beer on a terrace. I can’t say the same about this guy.

    Just take a good look at his ‘contributions’, for example.

    Incidentally, you could also take a look at my latest reply to Tamino’s post, here on your blog. The man has made a number of fatal errors in his analysis. A second-year statistics student could fish them out (and they have been fished out: entropy-measure cherry-picking deluxe.. misstating what an I(1) vs an AR(1) process is..).

    For the sake of a healthy debate, I would really appreciate it if you could respond to that, on substance. Everyone keeps ignoring it.

    I will certainly still respond to your posts, but all my time is going to people like this dhogazza here. And nobody seems to mind.

    After all, he thinks I should engage with his comments while he opens by announcing that I am on his ‘shit list’. What am I really supposed to do with that?

  210. Bart Verheggen Says:

    Behave yourselves, please. No namecalling. May I also remind you that only the host (i.e. me in this case) can tell someone to leave.

  211. dhogaza Says:

    dhogaza…are you serious? vs has engaged every comment that i have read in some way, shape or form.

    Where has he commented on my post regarding Motl’s post that VS references?

    I’ve missed it.

    Thanks in advance for the link …

    vs has raised an interesting statistical question. since tamino has spent years considering the temperature series via his statistics – to suddenly say that statistics don’t matter

    If one has a large body of observed physical evidence, and someone comes along and says “I’ve statistically proven that this observed physical evidence can’t be true”, it’s reasonable to say “your statistical analysis is most likely wrong” (not “don’t matter” – WRONG).

    Because the alternative is that our observations aren’t real, which is silly.

  212. dhogaza Says:

    i have no idea who tamino is, but wouldn’t the real grant foster have an issue being labeled as tamino if he was in fact, not tamino? i assume the real grant foster is in the ‘climate science’ field (whatever that means) and is aware of tamino’s blog.

    He’s a professional statistician. The identification is correct; however, the point is that “outing” the real name of someone who chooses, for whatever reason, to post anonymously is rude. Or worse.

    It’s also a favorite trick of certain people in the denialsphere.

    For instance, in my case, over at dotearth, someone posted not only my real name BUT PART OF MY CLIENT LIST.

    You can get that off the net easily enough – I have nothing to hide – but people do this as a form of intimidation. You know … “if you post here at this blog, I’ll reveal your name, some of your clients, etc, potentially exposing your clients to the type of abuse and harassment that characterizes denialist tactics”.

    It’s just wrong.

  213. dhogaza Says:

    Dhogaza’s shorter version of VS’s post above:

    Please make these people go away so I don’t have to answer as to why I think my statistical treatment trumps physics, or why my statistical argument is more valid than Tamino’s.

    As far as insults go, VS, all along you’ve dismissed Tamino as not understanding first-year stats, when of course we know that Tamino makes his living doing this stuff. Time series analysis is all that he does.

  214. jfr117 Says:

    i had somehow missed a lot of the interaction from yesterday. a bunch of posts that i see now, i didn’t see yesterday. guess i have to read again. my bad.

    but if the statistics are right (tbd) then the theory needs to be reworked. that’s how science works. consensus and all.

  215. dhogaza Says:

    At this point, perhaps the best thing would be for VS to work his analysis up into publishable form, and find a suitable venue. My guess is he won’t get anywhere in any journal related to the physical sciences (since his results are unphysical), but I imagine he’ll have no problem getting it published in an economics journal.

  216. dhogaza Says:

    i had somehow missed a lot of the interaction from yesterday. a bunch of posts that i see now, i didn’t see yesterday. guess i have to read again. my bad.

    No problem, none at all …

    but if the statistics are right (tbd) then the theory needs to be reworked.

    I don’t think you fully understand. If the statistics are right and there’s no actual trend in the observed temperature data, then we need to explain why the MSU and later AMSU data is all screwed up. Why the surface temp record is all screwed up. Why ecosystems are moving north (albeit their parts aren’t moving north in synchronized fashion, which is a real problem). Why measurements of IR radiation at the top of the atmosphere, taken by satellites, matches theory. Why water vapor response, measured by satellite, matches model results. Why downwelling IR measurements match theory.

    We’ll have to explain why all the extra energy being retained in the climate/earth/ocean system is … magically disappearing. There’s nothing in physics that allows it.

    You have to really believe that VS has made a very large part of physics *and* physical observations of related phenomena disappear in one big POOF!

    I say that’s unlikely …

  217. Bart Verheggen Says:

    VS,

    You choose whether you want to engage someone or not, even if (s)he “requests an answer” (in reply to: “After all, he thinks I should engage with his comments while he opens by announcing that I am on his ‘shit list’. What am I really supposed to do with that?”)

    All,

    I don’t like the insults going back and forth, but there are more things I don’t like. Al T’s conspiracist comment rubbed me the wrong way; veiled accusations ditto (perhaps even more than clear ones out in the open). Unveiling the identity of people who wish to remain anonymous is rude.

    Everybody just try to be a little nicer than you really want to be (after mt). Count to ten and all that. Engage with substance, not with namecalling (or don’t engage). If this ends in a foodfight I’ll close the comments.

  218. VS Says:

    The results we’re debating now have already been published.

    Read the thread.

    ** Woodward and Grey (1995)
    – reject I(0), don’t test for I(1)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Kaufmann et al (2006)
    – use I(1) for GLOBL (Temp var.)

    You’re a troll.

  219. VS Says:

    Didn’t see your comment Bart.

  220. jfr117 Says:

    @dhogaza – i believe vs’ original thesis was to rebut bart’s assertion of statistically significant warming. not that it isn’t warming, or hasn’t warmed; just that the warming might be insignificant if treated differently on a statistical basis. if that is true, and motl’s description does make more sense than treating the temperature series as a linear trend plus white noise, then to me the implication is this: yes we have warmed, but it is not significant within the modern record. that, combined with the NAS acknowledgment that mann’s reconstruction is valid back only 400 years, should help us conclude that: yes it’s warm, but it may still be well within the bounds of natural variability.

    therefore, there is no attack on physics. just reframing what we are looking at and putting it into another perspective. the data is right, the co2 theory may even be right but natural variability is large and something we need to understand better.

    to make vs have to explain ‘your’ theory because of his conclusion just doesn’t make sense to me.

    @ vs
    i would recommend you stick to what you started with – statistics. i enjoyed your earlier posts but recently that message has been diluted. who cares who anybody is. if they wanted us to know, then they would tell us. please respond to actual scientific discourse though and ignore the insults. i recognize your contribution and would ask you to please continue. although i understand it must be difficult fighting off everybody.

  221. Bart Says:

    jfr117,

    Thanks for a thoughtful comment and bringing the discussion back to contents.

    VS postulated that by purely inspecting the numerical values (i.e. without physical meaning attached to them), their increase is indistinguishable from a random walk. This seems to depend on choices made in the statistics, but even if we accept it as true, the question remains: was the increase in fact unforced? Showing that numerically it could have been doesn’t mean that it was.

    In my newer post I argue on physical grounds that it wasn’t random, but was in fact forced. Namely, the hypothesis of unforced variability is inconsistent with the observations of a positive radiative balance at the top of the atmosphere, and with the observation of increased heat content/signs of global warming in other metrics (ocean heat content, Arctic sea ice, ice sheets, glaciers, ecosystems, etc).

    The point I was making at the end of this post was that statistically there is no reason to assume that the long term trend of increasing temperatures has stopped or reversed since 1998. It would be slightly ironic if people who have trumpeted the “global warming stopped since 1998” canard (Lubos perhaps? Haven’t checked recently) would now claim that it’s all a random walk anyway. I would expect that VS would agree that if the 130-year record is merely a random walk, then the latest 12 years are by far not enough to draw any conclusions from. Perhaps VS will join us in fighting strongly against the erroneous “1998” claim.

  222. Alex Says:

    Over at Tamino’s blog I posted my results of the augmented Dickey-Fuller test, conducted with several selection criteria, to illustrate that it is not so obvious whether the null hypothesis of a unit root is rejected or not. Tamino pointed out that the Phillips-Perron test does reject the null hypothesis, and when I checked this I got the same result. This, the way I see it, just supports what I was trying to point out: simply on the basis of statistical testing we can neither accept nor reject the hypothesis of a unit root. This is not very satisfying, but we should of course not reject (or accept) the presence of a unit root because of this dissatisfaction.

    I have seen several people, both at this blog and at Tamino’s, talking about random walks and unit roots as if they are the same thing. This is not the case. A random walk (as explained at Tamino’s page) is a simple model, which has a unit root. However, there are many (more complicated) models which have a unit root, but are not a random walk. So a random walk is just a model with this property, but not every model with this property is a random walk. The reason why it is so important to check whether the series contains a unit root is that if it does, many of the ‘standard’ statistical techniques are invalid, which might lead to false conclusions. I hope this clarifies why some of us put so much emphasis on the possible presence of a unit root, but that this is not the same as saying that temperature is a random walk.
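
    To see why a unit root matters so much for inference, here is a minimal illustration (a sketch, not Alex’s code) of the classic Granger-Newbold spurious-regression problem: regress two independent random walks, the simplest unit-root processes, on each other, and the naive OLS t-test reports a “significant” slope far more often than the nominal 5%.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n, trials, spurious = 129, 1000, 0     # n matches the 1880-2008 series length

        for _ in range(trials):
            x = np.cumsum(rng.normal(size=n))  # two *independent* I(1) series
            y = np.cumsum(rng.normal(size=n))
            fit = sm.OLS(y, sm.add_constant(x)).fit()
            if fit.pvalues[1] < 0.05:          # naive t-test on the slope
                spurious += 1

        print(f"'significant' in {spurious / trials:.0%} of trials (nominal: 5%)")
        # Typically well over half of all trials: standard inference breaks down.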

  223. dhogaza Says:

    if that is true, and motl’s description does make more sense than treating temperature series as a linear trend plus white noise

    Again, it’s a strawman, no one claims it’s white noise …

    Read this post entitled “How long?” by Tamino, for instance.

    Search Real Climate for “autocorrelation” and you’ll find a ton of references.

    This should make clear the strawman nature of Motl’s implication that climate science treats climate data as trend+white noise.

    In Motl’s case, he knows this. I will let you draw your own conclusion as to why he writes this way despite knowing this.

    In VS’s case, I suspect he believes the implication made by Motl to be true, since it was he who linked to Motl in the first place. I don’t think VS understood that Motl’s piece builds a small army of strawmen to shoot down.

    At least I hope he didn’t …

  224. jfr117 Says:

    @bart

    this posting has had many good contributions and, except for the last day or two, a real advancement of information. i think (emphasized) that the term ‘random walk’ is confusing the issue. i don’t think that is how we should view temperature data and i don’t think that is what has been proposed – but the statistics that may be applicable to this kind of data are most commonly associated with random data. thus we have been associating temperature = a random walk. but in fact we are only using statistics designed to handle this kind of data. in other words: the use of random-walk statistical assumptions for temperature does not necessarily mean that temperature is physically described as a random walk. is this correct?

    if this is true, then again, no attack on physics. just another way to look at the data statistically. temperature is what it is – statistics help us to view it through different lenses.

    alex and vs, you seem to be the statistical gurus here. is this correct?

    obviously the temperature has increased due to a forcing – that is true. the question vs

  225. Bart Says:

    Thanks Alex, that was helpful.

    You wrote:

    “if it does (contain a unit root), many of the ’standard’ statistical techniques are invalid, which might lead to false conclusions.”

    So *if* the temp series contains a unit root, how would that influence the OLS trend I calculated? My guess (corroborated by Tamino) is that the actual trend estimate wouldn’t be much different, but the error of that estimate would be larger. Calling such a trend nonsense and misleading (as per VS’ first post) seems too strong a pronouncement.
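
    A rough Monte Carlo sketch of that question (an illustration under assumed pure unit-root data, not Tamino’s calculation): fit an OLS trend to driftless random walks and compare the true spread of the slope estimates with the standard error OLS reports.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n, trials = 129, 1000
        X = sm.add_constant(np.arange(n, dtype=float))  # intercept + linear trend
        slopes, reported_se = [], []

        for _ in range(trials):
            y = np.cumsum(rng.normal(size=n))           # driftless random walk
            fit = sm.OLS(y, X).fit()
            slopes.append(fit.params[1])
            reported_se.append(fit.bse[1])

        print(f"true spread of slope estimates: {np.std(slopes):.4f}")
        print(f"mean OLS-reported std. error  : {np.mean(reported_se):.4f}")
        # The reported s.e. comes out several times too small: the estimate itself
        # is not biased here, but naive error bars on it are badly overconfident.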

  226. dhogaza Says:

    VS, a question … over a Tamino’s you said …

    Look at the temperature series over the past couple of [hundred] thousand years. Where do you see a trend? There is a cyclical movement, but a deterministic trend? Nope…

    I’ve added in the word “hundred” because later you said that’s what you meant.

    Why do you think this is significant to analyzing what’s happening on century timescales?

  227. Adrian Burd Says:

    Alex,

    Many thanks for a very useful and clear explanation. Am I correct in assuming that the Dickey-Fuller test and the Phillips-Perron test have different assumptions behind them? Are they indeed testing the same thing with the same background assumptions? If so, can one argue that the fact that one gets different results from the two tests says something more about the tests than about the data?

    I wonder if you could elaborate on your statement to the effect that the presence of a unit root invalidates many of the standard statistical tests. Which types of statistical test are invalidated and how are they invalidated?

    Many thanks,

    Adrian

  228. Tim Curtin Says:

    1. Dhogaza: I see that you have called me a liar and worse, on March 14, 2010 at 15:48 “Fortunately, everyone with a three-digit IQ knows that Tim Curtin is
    1. a liar
    2. ignorant
    3. a fool
    so no harm is done. I’d be wary of libeling people in the UK if I were you, though”.
    You beauty, dhogaza, taking care to avoid libel laws here in Australia with your anonymity. For the record, I have never libelled anyone in the UK, the USA or here, as that is not my style.
    But Bart, despite your penchant for censuring VS et al. for using such language, while as ever dhogaza here and everywhere always gets away with it, you as publisher remain liable, as Dow Jones found not long ago in a case brought against them in Melbourne over an article in Barron’s published in the USA but circulated here. IF I were to be that litigious, watch out. But to be on the safe side, I suggest you either get dhogaza to retract and apologise or else ban him, just as I am banned from Deltoid for using straight talk against him and his ilk.
    [edit. No more talk of anonymity issues or other “whodunnit” stuff. BV]
    However jfr117 being anonymous does not attract dhogaza’s vitriol!
    Dhogaza’s other accusations against me are also defamatory, and Bart + his service provider need to be more careful, and at least warn d. to mind his language – or out himself, so that his targets can reply in kind.
    [Reply: Keep your threats to yourself. BV]

  229. Tim Curtin Says:

    Bart (at March 14, 2010 at 15:02) Again you astonish me! You said: “The temperature effect of CO2 is approximately logarithmic (hence the sensitivity is defined per doubling of CO2 rather than per ppm (sic)).” First, your parenthesis is I fear wrong (enough so for dhogaza to be very rude about you if he understood it himself, but he does not!). The doubling of atmospheric CO2 is ALWAYS stated in terms of parts per million by volume (ppmv), namely from 280 ppmv in 1750 or 1900 (the end-year of “pre-industrial” seems to be variable), so doubling means 560 ppmv for just [CO2], and more if other GHGs are included, but still in ppmv (the Stern and Garnaut reports say we had reached 450 ppmv of CO2-equivalent by 2005, so doubling for them from 2005 means 900 ppmv CO2-e by 2060 (Garnaut Fig. 4.4)).
    Yes, Arrhenius stated the effect of atmospheric CO2 would be about logarithmic, and showed lesser extra warming for a 100% increase in [CO2] from the 1900 level than for a 50% increase. The scientists favoured by the IPCC do not accept this, as in AR4 WG1, which infamously claims that doubling will be achieved by 2047 (A2 scenario) with warming of 3°C (+/-1.5) by then (WG1, Fig. 10.20 (a)), when the actual c. 40% rise in [CO2] since 1900 has been associated with a rise of only 0.7°C in GISS global mean temperature from 1900 to 2000. The IPCC’s teams defend this inconsistency by claiming that aerosols counteracted the effects of the 40% rise in [CO2] between 1900 and 2000, and that these aerosols are no longer present, hence the 3°C for the extra 60% of [CO2] from the 1900 baseline. Clearly almost no one has noticed the brown haze that spreads all the way from Shanghai and Beijing to Kabul and further west. Those that have argue for increasing aerosols and their benevolent haze (see Nature 1077-1078, 30 April 2009).
    BTW, Bart, NONE of the IPCC’s projections are based on the kind of econometric techniques discussed by VS, Al T, and others here. Have you ever seen any econometric work at Foster/Tamino’s? His time series are purely arithmetic.

    As for setting the constant at zero, Levenson’s correlations where that is not the case fail the Durbin-Watson test (if they did not he would be on the front cover of IPCC AR5), and when set at zero, there is no correlation, not even spurious. It is for you to explain why the constant should not be zero, and why Levenson did not test for unit roots. My own guru on these matters states “The unwanted consequence of allowing Excel to compute a non-zero intercept is to introduce an additional ‘linear trend estimator’ along with the other specific regressions of Temperature on CO2”. This “stranger at the feast” helps to explain the gross exaggeration of Levenson’s finding.
    The problem is of course easily fixed as I have done, by re-running the regressions on the first differences of the data exercising the Excel option to force a = 0. End of your and Levenson’s nice story.

  230. VS Says:

    CORRECTION REGARDING THE KPSS SECTION:

    There is a glaring mistake in the post (I typed it quickly and in one go, while ‘flipping around’ null hypotheses.. I guess it helps to proof-read).

    It regards the KPSS test statistics.

    The stationarity (i.e. hypothesis of NO unit root) of the GISS-all series is in fact rejected in most cases at the 5% and 10% significance level, but not at the 1% significance level.

    The KPSS test results should read like this:

    ========================

    Critical values:

    1% level, 0.216000
    5% level, 0.146000
    10% level, 0.119000

    So once the Lagrange Multiplier (LM) test statistic is ABOVE one of these values, STATIONARITY is rejected at that significance level.

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.165696
    Conclusion: stationarity is not rejected at the 1% significance level; rejected at the 5% and 10% significance levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.154875
    Conclusion: stationarity is not rejected at the 1% significance level; rejected at the 5% and 10% significance levels.

    PARZEN KERNEL:

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.147904
    Conclusion: stationarity is not rejected at the 1% significance level; rejected at the 5% and 10% significance levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.130705
    Conclusion: stationarity is not rejected at the 1% and 5% significance levels; rejected at the 10% significance level.

    ========================

    The discussion in fact more or less remains the same, and I still must note the small sample properties of the KPSS test statistic, which is asymptotic.

    My sincere apologies for any confusion.

    The test outcomes are furthermore confirmed by Kaufmann (see references first post).

  231. VS Says:

    The conclusion should then read:

    ADF: Clear presence of a unit root
    KPSS: Presence of unit root detected at 5% and 10% sig, not at 1% sig.
    PP: No presence of unit root, but only when using (3) as the alternative hypothesis (this is a robustness issue)
    DF-GLS: Clear presence of a unit root

  232. VS Says:

    So, finishing up the KPSS section, because it’s flawed as it is written now (again, apologies for the confusion, it seems I confused myself there at one point :)

    CORRECTED VERSION

    Let’s now try to interpret the results of the KPSS test.

    We see that the null hypothesis of NO unit root is rejected at 10% for all methods used, and at 5% in most cases. At a 1% significance level, however, it is not rejected.

    Two things to note:

    (1) The test is asymptotic, so the critical values are only exact in very large samples

    (2) The null hypothesis in this case is stationarity, and the small-sample distortion severely reduces the power of the test (the power being one minus the probability of a Type II error). In other words, the test is biased towards NOT rejecting the null hypothesis in small samples.

    However, in spite of this small-sample bias, we nevertheless manage to reject the null hypothesis of stationarity in all cases, at a 10% significance level and in all but one case using a 5% significance level. I conclude that there is strong evidence, when testing from ‘the other side’, and minding the small sample induced power reduction of the test (i.e. the fact that it is biased towards not rejecting stationarity in small samples), that the level series is NOT stationary.

    I(0) is therefore rejected.

  233. VS Says:

    CORRECT VERSION OF POST

    ============================

    Bart, it would be great if you could delete the previous couple of posts, because they might be confusing. I was doing three things at the same time, and somewhere in between I lost focus when dealing with the KPSS.
    [Done. BV]

    The official version of my statistical argument is given HERE:

    ============================

    Hi everybody,

    The debate has certainly become heated, and I apologize for my contribution to making it so.

    I tried, in this thread, to be as forthcoming as possible, but the plethora of insults finally got to me. Some of the people posting here have been posting very nasty stuff on other blogs when referring to me. The final straw however, was the lambasting of participants in this discussion whose contributions I am actively enjoying (for example, the completely reprehensible bashing of Tim above).

    However, this is a spit fight I promised myself I wouldn’t engage in, and I’m sorry for any offence. I hope we can keep things scientific and well-argued from here on.

    Let’s try to bring this discussion back on track.

    I also want to thank Alex for clearing up the unit root/random walk difference. I have mentioned it in one of my many posts, but it got lost in the debate, and I should have stressed it more. Alex is 200% correct in stressing this distinction. I will however allow him to elaborate on that further, if he sees fit.

    Allow me to show the line of my statistical argument now (warning, it’s around 2500 words).

    ————————–

    I will show all the steps taken in the process of establishing the I(1) property of temperature series. I will list all test results, motivations, and decisions. This way Alex, or anybody else for that matter, will be able to inspect them.

    I will use the GISS-NASA combined surface and sea temperature record that I downloaded from their website. I will resort to this series, because everybody seems to be using it in this discussion. However, I have to stress that more or less the same results are established using HADCRUT or CRUTEM3 (or the GISS-NASA land only) temperature records.

    ————————–

    TESTING THE I(1) PROPERTY

    ————————–

    We start by examining the GISS-NASA temperature series 1880-2008 (GISS-all). We want to see whether the series contains a unit root. As mentioned here, and in various other places, the presence of a unit root in a time series invalidates regular statistical inference (including OLS with AR terms) because the series is no longer stationary (stationarity being a necessary condition for such inference).

    Definition stationarity (from wiki):

    http://en.wikipedia.org/wiki/Stationary_process

    “In the mathematical sciences, a stationary process (or strict(ly) stationary process or strong(ly) stationary process) is a stochastic process whose joint probability distribution does not change when shifted in time or space. As a result, parameters such as the mean and variance, if they exist, also do not change over time or position.”

    ————————–

    AUGMENTED DICKEY FULLER TESTING

    ————————–

    I start with applying the Augmented Dickey Fuller test. The definition (and purpose) of the ADF is given here, again on wikipedia:

    http://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test

    I stress this part of the definition:

    “By including lags of the order p the ADF formulation allows for higher-order autoregressive processes. This means that the lag length p has to be determined when applying the test. One possible approach is to test down from high orders and examine the t-values on coefficients. An alternative approach is to examine information criteria such as the Akaike information criterion, Bayesian information criterion or the Hannan-Quinn information criterion.”

    The ADF can be applied in different forms, depending on what you want your alternative hypothesis to look like. The null hypothesis is the presence of a unit root. The alternative hypothesis (determining the specification of the test equation) can be:

    (1) no intercept
    (2) intercept
    (3) intercept and trend

    I will focus on (3) here, because this is the most ‘restrictive’ case and because I have been accused of ‘ignoring’ this alternative hypothesis when arriving at my test results. It also corresponds to what has been posted here and elsewhere as the probable alternative hypothesis. Do note, however, that the results given below are *much* more conclusive in cases (1) and (2).

    I will furthermore use all the information criteria (IC) available to me to arrive at the required lag length (‘p’ in the quote above; I will refer to it as ‘LL’ below) in the ADF test equation.
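
    For anyone wanting to reproduce this kind of test, here is a minimal sketch using statsmodels’ adfuller (not the software used for the results below); “ct” is case (3), intercept and trend, with the lag length chosen by AIC. The GISS-all series is not bundled here, so a placeholder random walk stands in:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        giss = np.cumsum(rng.normal(scale=0.1, size=129))  # placeholder: substitute the real 1880-2008 anomalies

        stat, pvalue, usedlag, nobs, crit, icbest = adfuller(
            giss, regression="ct", autolag="AIC")
        print(f"ADF stat = {stat:.3f}, p = {pvalue:.4f}, lags used = {usedlag}")
        # A large p-value means the unit-root null is not rejected against
        # trend stationarity, the pattern reported below for most lag selections.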

    Hypothesis specification:

    H0: GISS-all contains a unit root
    Ha: GISS-all is trend stationary (testing against case 3)

    NOTE: All residuals of the test equations have been tested for normality via the Jarque-Bera test (the p-value is reported as JB below), and in all cases the null hypothesis of normality is not rejected. The ADF test, under the assumption of normality of the residuals, is then exact. For a definition of this normality test, see here:

    http://en.wikipedia.org/wiki/Jarque_bera

    ADF test results:

    IC: Akaike Info Criterion (AIC)
    LL: 3
    p-value: 0.3971
    Conclusion: presence of unit root not rejected
    JB: 0.393560

    IC: Schwarz / Bayesian Info Criterion (BIC, used by a critic of mine)
    LL: 0
    p-value: 0.0000
    Conclusion: presence of unit root rejected (I will get to this below, bear with me)
    JB: 0.202869

    IC: Hannan-Quinn Info Criterion (HQ)
    LL: 3
    p-value: 0.3971
    Conclusion: presence of unit root not rejected
    JB: 0.393560

    IC: Modified Akaike
    LL: 6
    p-value: 0.8619
    Conclusion: presence of unit root not rejected
    JB: 0.370261

    IC: Modified Schwarz
    LL: 6
    p-value: 0.8619
    Conclusion: presence of unit root not rejected
    JB: 0.370261

    IC: Modified HQ
    LL: 6
    p-value: 0.8619
    Conclusion: presence of unit root not rejected
    JB: 0.370261

    Now, we see that using the ‘BIC’ one arrives at a deviant number of lags (namely 0). This warrants further inspection. Note that the purpose of the lag length is to eliminate all residual autocorrelation, so that the ADF tests can function properly.

    In order to inspect this issue, we compare the residuals of the test equations with 0, 3 and 6 lags respectively. Here I report the p-values of the Q statistics for the first 10 lags of the residual series. The Q statistic is used to determine the presence of residual autocorrelation. A more detailed explanation is given here:

    http://en.wikipedia.org/wiki/Ljung%E2%80%93Box_test

    I quote, for those with no time to ‘click’ ;), the following:

    “The Ljung–Box test is a type of statistical test of whether any of a group of autocorrelations of a time series are different from zero. Instead of testing randomness at each distinct lag, it tests the “overall” randomness based on a number of lags, and is therefore a portmanteau test.”

    0 Lags in test equation:

    0.447
    0.683
    0.858
    0.102
    0.161
    0.159
    0.215
    0.168
    0.178
    0.081

    3 Lags in test equation:

    0.862
    0.885
    0.953
    0.983
    0.912
    0.950
    0.938
    0.837
    0.854
    0.731

    6 Lags in test equation:

    0.939
    0.997
    1.000
    0.999
    0.998
    1.000
    1.000
    0.999
    0.999
    0.989

    So, once we use the BIC to determine the lag length, our residuals are very messy (i.e. borderline significances etc.; see the first sequence of Ljung-Box Q-statistics). Higher numbers of lags, however, especially 6, successfully eliminate all traces of residual autocorrelation. Note also that the condition that the residuals of the test equation are normal is least solid when using the BIC for lag selection.

    Both conditions are necessary for the ADF to function properly.

    By using statistical diagnostic measures, we can therefore safely disregard the deviant lag length arrived at via the BIC, and use one of the other measures (so either AIC or HQ, or the modified versions of all three, so basically any IC except the BIC/SIC).
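
    The residual-whiteness check itself is easy to reproduce; here is a sketch with statsmodels’ acorr_ljungbox (the residuals of an actual fitted ADF test equation would replace the placeholder below):

        import numpy as np
        from statsmodels.stats.diagnostic import acorr_ljungbox

        rng = np.random.default_rng(0)
        resid = rng.normal(size=123)   # placeholder: white residuals, which pass the test

        lb = acorr_ljungbox(resid, lags=10, return_df=True)
        print(lb["lb_pvalue"].round(3))
        # Small p-values at any lag signal leftover autocorrelation, i.e. the chosen
        # lag length (e.g. the BIC’s 0 lags) has not whitened the residuals.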

    Our ADF-based inference is coming to a close. We now need to proceed to test the I(1) versus I(2) property of the GISS-all series, in order to make sure temperature is not I(2). Again, we perform the tests, now on the first difference of GISS-all, or D(GISS-all).

    For the sake of readability (and because we still have a bunch of other tests to do) I will only report the p-values of the test using the remaining 5 ‘untainted’ ICs. The IC-implied lag length will again be reported as ‘LL’.

    VERY IMPORTANT NOTE: The alternative hypothesis for the first-difference series will now be an intercept (or drift) rather than intercept and trend; this is case (2). The reason is that an intercept in the first differences immediately implies a trend in the level series. Again, as above, I am giving the ‘deterministic trend hypothesis’ the benefit of the doubt (contrary to what has been claimed elsewhere).

    ADF test results, for D(GISS-all):

    IC: Akaike Info Criterion (AIC)
    p-value: 0.0000
    LL: 4
    Conclusion: presence of unit root rejected

    IC: Hannan-Quinn Info Criterion (HQ)
    p-value: 0.0000
    LL: 2
    Conclusion: presence of unit root rejected

    IC: Modified Akaike
    p-value: 0.0000
    LL: 0
    Conclusion: presence of unit root rejected

    IC: Modified Schwarz
    p-value: 0.0000
    LL: 0
    Conclusion: presence of unit root rejected

    IC: Modified HQ
    p-value: 0.0000
    LL: 0
    Conclusion: presence of unit root rejected

    So, using the ADF, we do not reject the presence of a unit root in the level series. However, once we difference the series, the unit root is rejected in all instances. We therefore conclude that the ADF test implies that GISS-all is in fact I(1).
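
    A companion sketch for this I(1)-versus-I(2) step (same caveats and placeholder series as the earlier snippet): rerun the ADF on first differences with intercept only, per the note above.

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        giss = np.cumsum(rng.normal(scale=0.1, size=129))  # placeholder series

        stat, pvalue, *rest = adfuller(np.diff(giss), regression="c", autolag="AIC")
        print(f"ADF on D(GISS-all): p = {pvalue:.4f}")
        # p ~ 0: the unit root is rejected in the differences, so the level series is I(1).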

    Now, let’s turn to other tests.

    ————————–

    KWIATKOWSKI-PHILLIPS-SCHMIDT-SHIN TESTING

    ————————–

    The careful reader has probably noticed that the null hypothesis of the ADF test is that the series actually contains a unit root. One might argue that, due to the low number of observations in the series, or simply bad luck, this test fails to reject an untrue null hypothesis, namely that of a unit root in the level series. In other words, there is the possibility that we are making a so-called Type II error.

    We can however test for the presence of a unit root by assuming under the null hypothesis that the series is actually stationary. The presence of a unit root is then the alternative hypothesis. In this case we 'flip' our Type I and Type II errors (I'm being very informal here; the analogy serves to help you guys 'visualize' what we are doing).

    To do that, we use a non-parametric test, the KPSS, which does exactly this: it takes stationarity around the trend as the null hypothesis, and the presence of a unit root as the alternative.

    See also: http://en.wikipedia.org/wiki/KPSS_tests

    “In statistics, KPSS tests (Kwiatkowski-Phillips-Schmidt-Shin tests) are used for testing a null hypothesis that an observable time series is stationary around a deterministic trend.”

    IMPORTANT NOTE: The KPSS test statistic's critical values are asymptotic. Put differently, the test is exact only when the number of observations goes to infinity. The ADF, on the other hand, is exact in small samples under normality of errors (which we tested for above using the JB test statistic).

    KPSS test results, for two different bandwidth selection methods and two spectral estimators (the Bartlett and Parzen kernels).

    The asymptotic (!) critical values of this test statistic are:

    1% level, 0.216000
    5% level, 0.146000
    10% level, 0.119000

    So when the Lagrange Multiplier (LM) test statistic is ABOVE one of these values, STATIONARITY is rejected at that significance level.

    BARTLETT KERNEL:

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.165696
    Conclusion: stationarity is not rejected at the 1% significance level; rejected at the 5% and 10% levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.154875
    Conclusion: stationarity is not rejected at the 1% significance level; rejected at the 5% and 10% levels.

    PARZEN KERNEL:

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.147904
    Conclusion: stationarity is not rejected at the 1% significance level; rejected at the 5% and 10% levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.130705
    Conclusion: stationarity is not rejected at the 1% and 5% significance levels; rejected at the 10% level.

    Let’s now try to interpret the results of the KPSS test.

    We see that the null hypothesis of NO unit root (i.e. stationarity) is rejected at the 10% level for all methods used, and at the 5% level in most cases. At the 1% significance level, however, it is not rejected.

    Two things to note:

    (1) The test is asymptotic, so the critical values are exact only in very large samples.

    (2) The null hypothesis is in this case stationarity, and the small-sample distortion severely reduces the power of the test (power being one minus the probability of a Type II error). In other words, the test is biased towards NOT rejecting the null hypothesis in small samples.

    However, in spite of this small-sample bias, we nevertheless manage to reject the null hypothesis of stationarity in all cases at the 10% significance level, and in all but one case at the 5% significance level. I conclude that there is strong evidence, when testing from 'the other side', and minding the small-sample-induced power reduction of the test (i.e. the fact that it is biased towards not rejecting stationarity in small samples), that the level series is NOT stationary.

    I(0) is therefore rejected.
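
    (An R analogue of the KPSS runs, for those following along; note this is only a sketch: urca's ur.kpss uses a Bartlett window with its own automatic bandwidth choices, so the statistics will not match the values above exactly.)

    library(urca)

    # Null hypothesis: stationarity around a deterministic trend ("tau")
    summary(ur.kpss(giss, type = "tau", lags = "long"))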

    ————————–

    PHILLIPS-PERRON TESTING

    ————————–

    Unlike the ADF, the Phillips-Perron test doesn't deal with autocorrelation parametrically. Instead, the test statistic itself is modified to account for it robustly. This modification also makes the test robust to heteroskedasticity (non-constant variance). However, as always with robust tests, these modifications reduce efficiency if the 'robustness corrections' are in fact not needed. This is however a very lengthy discussion and I'll leave it there for now.

    Let's take a look at those PP test results then, shall we? We begin with case (3) again, so our test equation contains both an intercept and a trend. The test results reject the presence of a unit root:

    Phillips-Perron test on GISS-all, Bartlett kernel, Newey-West bandwidth:

    Ha: Trend and intercept (case (3))

    TEST STATISTIC -5.744931

    1% level, -4.031899
    5% level, -3.445590
    10% level, -3.147710

    Conclusion: the presence of a unit root is rejected

    Now, let’s, just for the sake of sensitivity analysis, test with using just an intercept (and no trend) in the test equation.

    Ha: Intercept (case (2))

    TEST STATISTIC: -1.555403 (p-value 0.5024)

    1% level, -3.482453
    5% level, -2.884291
    10% level, -2.578981

    Conclusion: the presence of a unit root is NOT rejected

    Just as was claimed elsewhere, and confirmed by Kaufmann and Stern (2000), the PP test results lead us to conclude that the series is I(0) when the alternative hypothesis includes a trend. With simply an intercept in the alternative, the test in fact fails to reject the presence of a unit root.
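
    (A hedged R equivalent of the two PP runs, with urca's defaults standing in for the exact kernel and bandwidth settings used above:)

    library(urca)

    # Case (3): test regression with intercept and trend
    summary(ur.pp(giss, type = "Z-tau", model = "trend", lags = "short"))

    # Case (2): intercept only
    summary(ur.pp(giss, type = "Z-tau", model = "constant", lags = "short"))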

    ————————–

    DICKEY FULLER GENERALIZED LEAST SQUARES TESTING

    ————————–

    Our final set of tests concerns the DF-GLS test, which is similar to, but not the same as, the ADF test. Again, we will use (3) as the alternative hypothesis, and we will use all available ICs to derive the required lag length.

    DF-GLS test results:

    The critical values of the relevant Elliott-Rothenberg-Stock DF-GLS test statistic are given below:

    1% level, -3.551200
    5% level, -3.006000
    10% level, -2.716000

    IC: Akaike Info Criterion (AIC)
    LL: 3
    TEST STATISTIC: -1.759718
    Conclusion: presence of unit root not rejected

    IC: Schwartz / Bayesian Info Criterion
    LL: 3
    TEST STATISTIC: -1.759718
    Conclusion: presence of unit root not rejected

    IC: Hannan-Quinn Info Criterion (HQ)
    LL: 3
    TEST STATISTIC: -1.759718
    Conclusion: presence of unit root not rejected

    IC: Modified Akaike
    LL: 6
    TEST STATISTIC: -1.065158
    Conclusion: presence of unit root not rejected

    IC: Modified Schwartz
    LL: 5
    TEST STATISTIC: -1.305844
    Conclusion: presence of unit root not rejected

    IC: Modified HQ
    LL: 6
    TEST STATISTIC: -1.065158
    Conclusion: presence of unit root not rejected

    Again, just as in the case of the ADF tests, we do not reject the presence of a unit root when using (3), i.e. linear trend and intercept, as our alternative hypothesis, Ha. In this case even the SIC/BIC points to the use of 3 lags, in line with both the HQ and AIC.

    If we move on to the first difference series, the presence of a unit root is clearly rejected (I won’t bore you again with a series of tests, since this isn’t what we’re debating).

    So on the basis of the DF-GLS test series, using all information criteria, we again conclude that the GISS-all series is I(1).
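
    (The DF-GLS test is also available in R via urca's ur.ers; a minimal sketch, with the maximum lag fixed at 6 rather than selected by each IC, so the output is only indicative:)

    library(urca)

    # Elliott-Rothenberg-Stock DF-GLS test with a trend, as in case (3)
    summary(ur.ers(giss, type = "DF-GLS", model = "trend", lag.max = 6))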

    ————————–

    SUMMARY AND CONCLUSIONS

    ————————–

    We have now applied a myriad of different methods to check for the presence of unit roots. As you can see, and as Alex pointedly noted, you do actually have to interpret the results.

    ADF: Clear presence of a unit root
    KPSS: Stationarity (no unit root) rejected at 5% and 10% sig, not at 1% sig.
    PP: No presence of unit root, but only when using (3) as an alternative hypothesis (this is a robustness issue)
    DF-GLS: Clear presence of a unit root

    For me personally, adding all these together (and minding the small-sample validity of the ADF when the autocorrelation is properly dealt with and the errors are normal) leads me to conclude that the GISS-all series is in fact I(1).

    I do have to ***stress*** here that I'm not the only one who, looking at these results, draws this conclusion. These tests have been extensively reported in the literature (see the references in my first post), by both AGWH proponents and AGWH skeptics, and all conclude I(1).

    A very conservative econometrician or statistician *might* conclude that the evidence is 'mixed', although it leans towards the presence of a unit root. However, if one is THAT conservative, it is truly impossible to conclude, in light of all this evidence, that the series does NOT have a unit root.

    That was my whole point, and this was my statistical argument.

    VS

  234. Pat Cassen Says:

    So, VS, not knowing a Kwiatkowski test from a Karyotype test, I surmise that you have concluded that there is a ‘good chance’ that it is at least possible that GISS-all is a random walk?

    Time to get back to physics?

  235. Alex Says:

    No Pat Cassen, not a random walk, but there is a ‘good chance’ that a unit root is present.

  236. Pat Cassen Says:

    Alex – right, not a random walk – but I thought the point was that ‘no unit root’ = ‘random walk excluded’, and VS’s analysis above allows a unit root so he/she would say ‘random walk possible’?

    (I would say that the physics excludes pure random walk, but that is another matter.)

  237. Nir Says:

    I know next to nothing about stats, but reading what VS wrote, am I correct in understanding that his tests show that there may be a unit root found in temperature records? If so (from wikipedia) that means that the process is non-stationary… But doesn’t that just mean that there’s a trend there, which is exactly what one would expect if warming is taking place?

    Or am I grasping not only the wrong end of the stick, but something that isn’t even a stick?

  238. stereo Says:

    That was my whole point, and this was my statistical argument.

    VS

    Chicken.

  239. Pat Cassen Says:

    Hey VS – ‘way up top you said:

    “In other words, global temperature contains a stochastic rather than deterministic trend, and is statistically speaking, a random walk.”

    Doesn’t seem to follow from your analysis down here (nor from what we learned from Alex). So what’s your conclusion re: random walk (as in “Is the increase in global average temperature just a ‘random walk’?”)

  240. S. Geiger Says:

    thanks everyone for the nice dialog (well, minus a few digressions). Seems to me that VS has established that the temperature data reasonably meet the ‘unit root’ criterion. The unit root criterion is a necessary (but not sufficient) quality for a time series to be a random walk. Is that where we are at?

  241. stereo Says:

    VS has chickened out from taking on Tamino.

  242. dhogaza Says:

    Tim Curtin … I didn’t bother to read his entire post but …

    Bart (at March 14, 2010 at 15:02) Again you astonish me! You said: “The temperature effect of CO2 is approximately logarithmic (hence the sensitivity is defined per doubling of CO2 rather than per ppm (sic)).”

    First, your parenthesis is I fear wrong (enough so for dhogaza to be very rude about you if he understood it himself, but he does not!). The doubling of atmospheric CO2 is ALWAYS stated in terms of parts per million by volume (ppmv)

    I imagine everyone else here understands what Bart meant when he said “per doubling of CO2 rather than per ppm”, but for Curtin’s benefit – he means per doubling rather than linear in relation to ppm.

  243. dhogaza Says:

    Schwartz / Bayesian Info Criterion (BIC, used by a critic of mine)
    LL: 0
    p-value: 0.0000
    Conclusion: presence of unit root rejected (I will get to this below, bear with me)

    In fairness, VS should point out that Tamino used BIC because of a typo in an early VS post. It’s not exactly Tamino’s fault that VS made a typo …

  244. dhogaza Says:

    I tried, in this thread, to be as forthcoming as possible, but the plethora of insults finally got to me

    You’ve complained several times of other people making insulting posts, somehow painting yourself as an angelic, non-insulting victim of … bullies?

    Yet in your post you say this:

    You will note that many of the assertions made about the then-standard approach to hypothesis testing in economics are in fact applicable to present day ‘climate science’

    Which is nothing other than an insult to an entire body of science, since the use of quotes indicates a belief that it’s not really science, or at least, is bad science. It’s been pointed out elsewhere that you began your attack on Tamino by essentially accusing him of not understanding first-semester statistics. You’ve made a variety of other insults, as well.

    So, chill with the victim pleading, OK?

  245. dhogaza Says:

    Also, VS analyses the data for 1880-2009. No one claims a linear trend for that period of time, but rather for the last few decades, and indeed Bart, in his post, shows an OLS fit from 1975-present.

    It would seem to me that to show that an OLS fit for that time period is invalid you’d need to analyze the years for which the OLS is actually computed, rather than the data set as a whole.

    Using your previous example of the cyclic nature of climate over time frames of hundreds of years (driven by Milankovich cycles, though several of us have had the impression you’re unaware of it), we’re not interested in time scales for which we have no *physical* basis for expecting a CO2-forced trend.

    We don’t, for instance, expect the CO2-forced trend due to anthropogenic sources to end the cyclic nature of climate over the timescale of Milankovich cycles, but only to affect the amplitude (everything else being equal). This does not mean that there can be no CO2-forced trend causing rising temperatures at a time when the phase of the current Milankovich cycle would lead us to expect a zero, or slightly negative, trend as we ease down the 25,000-50,000 year path towards the next ice age.

    Nor do we expect the trend from 1880-present to be linear – a key question in climate science back in the 1970s was “when will the additional forcing due to exponential increases in anthropogenic sources overwhelm the noise in the system and lead to a distinguishable trend?”.

    So what do you get when you run your analysis for the relevant period for which a linear trend is claimed, as opposed to the longer 1880-present timeframe, for which we already knew there was no linear trend and for which physics would not predict CO2 to dominate natural variation?

  246. dhogaza Says:

    It would seem to me that to show that an OLS fit for that time period is invalid you’d need to analyze the years for which the OLS is actually computed, rather than the data set as a whole.

    In case it’s not clear, I mean it seems to me that you need to show that the series for 1975-present is quite likely non-stationary.

  247. dhogaza Says:

    Also …

    Also, excuse the authority fallacy (and ensuing ridicule ;), but I’ll trust two Economics Nobel Prizes with my statistics, over some quote coming from a journal whose editor is so sloppy with statistics that he writes things like these in interviews with the BBC:

    “BBC – Do you agree that from 1995 to the present there has been no statistically-significant global warming

    Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level.”

    Not significant at a 95% significance level? Wow, that’s really not significant… it’s significantly insignificant even ;) (leave the ‘warming’, leave the discussion we had above, I’m simply showing how sloppy he is with statistics)

    Surely VS is aware that the choice of 95% is more a rule-of-thumb than anything, and is certainly not a result of statistical theory. And that some fields are dropping that arbitrary choice and just reporting p values.

    Fisher himself was known to consider a smaller value as indicating significance in some cases.

  248. dhogaza Says:

    A bit on the history of “statistical significance”:

    http://www.jerrydallal.com/LHSP/p05.htm

  249. dhogaza Says:

    The Wabbett had something to say about this:

    The bunnies tossed back a few beers, took out the ruler and said, hey, that total forcing looks a lot more like two straight lines with a hinge than a second order curve, and indeed, to be fair, the same thought had occurred to B&R


    We also check whether rfCO2 is I(1) subject to a structural break. A break in the stochastic trend of rfCO2 might create the impression that d = 2 when in fact its true value is 1. We apply the test suggested by Clemente, Montañés and Reyes (1998) (CMR). The CMR statistic (which is the ADF statistic allowing for a break) for the first difference of rfCO2 is -3.877. The break occurs in 1964, but since the critical value of the CMR statistic is -4.27 we can safely reject the hypothesis that rfCO2 is I(1) with a break in its stochastic trend.

    BUT, the period they looked at was 1880 – 2000. Zeroth order dicking around says that any such test between a second order dependence and two hinged lines is going to be affected strongly by the length of the record. Any bunnies wanna bet what happens if you use a longer record???

    Such as one of the proxy reconstructions going back centuries (not one constructed by Mann, just to make VS happy).

    Physics tells us there is a structural break, and if a statistical test on a subset of the available temperature record tells us there’s not, well, try a longer record.

    At which point we’re going to hear a whole lot of hand-waving about the fact that none of the dozen or so existent proxy reconstructions is valid.

    He’s already set up the foundation for this argument in his second post:

    I wouldn’t classify the test results I posted above as ‘torture of the data’; coming from my field, that judgement would be far more applicable to what Mann et al are doing with their endless and statistically unjustified ‘adjustments’ to proxy/instrumental records.

    Not that there’s any truth to the statement …

  250. Alex Says:

    To explain why many statistical tests are invalid when a unit root is present, I will have to introduce some notation commonly used in econometrics. This can be found in any undergraduate textbook and should not really cause any difficulties. In econometrics it is common to write the regression model in matrix notation, where the matrix X is an n-by-k matrix: n is the number of observations you have and k is the number of regressors in your regression model. Many statistical tests (e.g. t-test, F-test, Wald test) rely on the assumption that, as your sample size goes to infinity, the matrix Q = (1/n)*X’X converges to a finite nonsingular matrix. However, if one or more of the regressors in X contain a unit root, then the matrix Q will become infinitely large (a consequence of the nonstationarity), so this assumption is violated. This is also the reason why the Dickey-Fuller test uses different t-statistics, instead of the ‘standard’ t-statistics.

    Another well known problem can occur when a variable with a unit root is regressed on another, completely independent variable with a unit root. This may lead to a phenomenon known as spurious regression: you get the impression that you have fitted a perfect model, where the regressor has a lot of explanatory power, but in fact the two variables are completely independent of each other. The way to test whether two variables with a unit root are related is by means of cointegration.
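
    (A quick way to see spurious regression in action is to simulate it; this sketch is mine, not Alex's. Two independent random walks are generated, and OLS typically reports a highly 'significant' slope and a sizable R^2 even though the series are unrelated.)

    set.seed(42)
    n <- 130                  # roughly the length of the annual record
    x <- cumsum(rnorm(n))     # random walk 1
    y <- cumsum(rnorm(n))     # random walk 2, independent of x
    summary(lm(y ~ x))        # the t-statistic on x is usually spuriously large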

    @Bart

    You asked what the consequence is for the trend you calculated *if* the temp series contain a unit root. The answer depends on what you wanna do. If you just wanted to show the average annual increase between 1975 and 2009, then there is nothing wrong, but if you wanted to show that the underlying trend hasn’t changed, your way of analyzing the trend is not very suitable. The reason is that OLS requires that you correctly model the underlying ‘data generating process’. This is a term statisticians use when they refer to the mechanism (or set of equations) which generated the data. If you don’t model this underlying mechanism correctly your estimates will likely be biased (known as omitted variable bias) and the test statistics will not have the right distributions under the null hypothesis. Now the model you estimate is:

    temp(t) = constant + beta*t + E

    where E is an error term. I think most of us will agree that this model is too simple to describe the underlying mechanism that determines temperature (and yes, I also think the random walk model is too simple). Moreover, the model you estimate does not contain a unit root, so *if* temp has a unit root, that is just another reason why your model is misspecified. Now I know that ‘modeling the underlying mechanism correctly’ is easier said than done; I actually find this the hardest part of statistical analysis. The model should be based on theory (physics, climate science, etc); statistics has no role in this part. Once a model for temperature has been proposed, statistics can be used to test this model. I would find it very interesting if you, or maybe someone else on this blog, could write down what determines global mean temperature in a set of equations. That way I could try to work out a statistical test to see if it fits the data.
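
    (For concreteness, here is that model fitted in R; my sketch, assuming `giss` holds the anomalies. The confidence interval on beta is trustworthy only if the error term E is well behaved, which is exactly what is in dispute.)

    t.idx <- seq_along(giss)    # time index t
    ols <- lm(giss ~ t.idx)     # temp(t) = constant + beta*t + E
    confint(ols)                # intervals for the intercept and beta; valid only under trend-stationarity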

    @Adrian Burd

    VS already discussed the Phillips-Perron test in his post, but maybe I can still clarify it a little bit more. Both the Dickey-Fuller test and the Phillips-Perron test require that the error term of the test equation is white noise. If the error term contains autocorrelation, it can’t be white noise. The two tests have different ways to deal with this problem. The DF test adds lags of the dependent variable to the test equation until there is no more autocorrelation; the different selection criteria are different ways to determine the required number of lags. The ‘standard’ formula for the variance of the estimator will not work if there is autocorrelation in the error term. The Phillips-Perron test uses another formula to calculate this variance; however, this formula is only approximately correct, and the larger the sample size, the better the approximation. Whether a sample of 128 observations is sufficient is hard to tell.

    @Pat Cassen

    You’re right that if the data contains a unit root, then it could be that the data follows a random walk. Fortunately we can test this with fairly simple statistical methods in several ways. I’ll only do one of them. Let’s first assume that the data is indeed following a random walk, which means we can write it as:

    y(t) = y(t-1) + E

    where E is white noise. Now simply estimate the following equation:

    y(t) = b*y(t-1) + e

    using OLS on the GISS temperature series ranging from 1880 to 2009. The estimate for b is ~0.92, which lies close to 1, with an R^2 of ~0.82. On the basis of this someone might think that the model fits the data quite well and conclude that the data is indeed following a random walk. But if you look at the residuals and test for the presence of autocorrelation you’ll get very strong evidence that the error term is autocorrelated. This clearly contradicts the random walk model, which assumes that the error term is white noise. So, on the basis of statistical testing, I would conclude that temperature is *not* a random walk. But remember, this does not mean that temperature doesn’t have a unit root!
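
    (Alex's check is easy to replicate; a sketch under the same `giss` assumption:)

    y.t <- giss[-1]               # y(t)
    y.l <- giss[-length(giss)]    # y(t-1)
    rw <- lm(y.t ~ 0 + y.l)       # pure random-walk null: no intercept
    coef(rw)                      # b comes out close to 1
    Box.test(residuals(rw), lag = 10, type = "Ljung-Box")  # residuals clearly autocorrelated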

    VS has written thousands of words here. I don’t think that anybody who would write that much would be 100% consistent all the time. I read his lengthy post as well and I would like to add that I do indeed get the same results and, given the statistical evidence, I would indeed sooner conclude that temp has a unit root than not.

  251. VS Says:

    Bart, thanks for the moderating, much appreciated :)

    Hi Pat,

    The whole ‘random walk’ thing is somewhat of a red herring. First of all, I said ‘statistically speaking’ a ‘random walk’, implying that we can model it as such. On March the 5th I stated:

    “I agree with you that temperatures are not ‘in essence’ a random walk, just like many (if not all) economic variables observed as random walks are in fact not random walks.”

    I also wasn’t being accurate (D(GISS-all), for example, is much better described as an AR(3) process, but we’ll get to that later, I promise). My first post actually relates to what Alex posted just above. Once the series contains a unit root, regular inference, including OLS trends with confidence intervals, is invalid.

    What Tamino apparently told Bart, namely that a unit root ‘doesn’t really matter’, is simply false. I don’t know if Tamino is simply unaware of this or if he is misleading people on purpose because most of his blog entries rely on the assumption that unit roots ‘don’t really matter’.

    As for the trend: you can indeed calculate an average increase (although a simple arithmetic mean would suffice). Calculating confidence intervals (via OLS) however implies that the underlying data generating process (DGP) is in fact trend-stationary. I think I have shown this not to be the case. Hence, the intervals are meaningless, and any implications about how the ‘trend is changing’ are spurious.

    Bart, you stated on March 14, 19:24:

    “I would expect that VS would agree that if the 130 year record is merely a random walk, that the latest 12 years are by far not enough to draw any conclusions from. Perhaps VS will join us in fighting strongly against the erroneous “1998″ claim.”

    While I’m not claiming that temperatures are a simple ‘random walk’ (again, thanks Alex for clearing that up, it got lost in the debate), I am claiming that the series contains a unit root.

    In that sense, I will definitely join you in fighting the erroneous ‘cooling trend’ claim. However, you guys have to quit blindly calculating trends too, for the very same reasons :)

    Anyway, Patrick, the purpose of the lengthy post is two-fold.

    (1) We are basically, slowly but surely, reproducing the results in the literature leading up to the BR publication discussed above. Once we have established temperatures to contain a unit root, and greenhouse gas forcings to contain two unit roots (also widely reported in the literature), we are set to evaluate the cointegration analysis proposed by BR.

    One thing at a time though.

    (2) The post is also a lenghty reply to Tamino’s post here:

    Not a Random Walk

    There Tamino claimed that the GISS series does not contain a unit root. He furthermore claimed that I had not included a ‘trend’ term in my test equations when arriving at my results (untrue, see the lengthy post above; almost all test equations include a trend term).

    He then proceeded to use the BIC/SIC, even though I claimed well before Tamino posted his ‘refutation’ (my post is dated March 8, Tamino’s blog entry March 11), that it isn’t appropriate in this case:

    Global average temperature increase GISS HadCRU and NCDC compared

    I clearly stated:

    “I just saw that I wrote in my first post that the lag selection in my ADF tests was based on Schwartz Information Criterion, or SIC. In fact, it was based on a related measure, the Akaike Information Criterion, or AIC.

    Using the SIC, which leads to no ‘lags’ being used, results in remaining autocorrelation in the errors of the test equation. That’s dangerous for inference.

    In the context of these temperature series, the AIC leads to 3 lags being employed, and successfully eliminates all remaining autocorrelation in the errors of the test equation (which has a deterministic trend as alternative hypothesis).

    Small issue, but I rather set it straight now, before somebody brings it up.”

    In his defense Tamino claimed that he had not read all my posts because I’m not ‘important’. This excuse doesn’t impress me much. If Tamino is the time series expert people apparently hold him to be, he would have checked all of that himself without being ‘called out’ (as I checked when I first took the data; and I’m not a TSA expert, I’ve just had formal training in it).

    Apparently he’s been staring at the same 120-something observations for the past few years without properly testing for a unit root. In the meantime, he has written a whole series of blog entries ‘explaining’ to people how to calculate trends with AR terms.

    This is inexcusable.

    It also clearly points to a lack of formal training in TSA. The very first thing you are taught to do is to thoroughly test for the presence of unit roots, for the very reasons outlined by Alex above.

    He then proceeds to show his readers the ONLY TWO instances in which the unit root is rejected, namely

    a) PP test, with a trend and intercept (or (3), using notation in my previous post)
    b) ADF test, with a trend and intercept, using the BIC/SIC to derive lag length

    He fails to mention that most (if not all) other test setups point to the presence of a unit root. To add insult to injury, he then proceeds to accuse ME of ‘cherry picking’.

    Finally, when a reader (JvdLaan) referred to my latest post (with all the test results), he replies:

    “[Response: Seriously, folks, VS and his theories don’t deserve the attention.]”

    May I point out that this comment was posted on a blog entry devoted entirely to ‘my theories’. As a side note, I would like to thank Tamino here for the confidence, but I honestly cannot claim to have invented unit root analysis.

    PS. Alex, your contribution is very much appreciated. I also suspected (and hoped) you would check my stats, since you were the first one to actually post results using different IC’s, on Tamino’s blog :) I’ll try to get back to your posts a bit later, I needed to set this straight first.

    PPS. I also clearly outlined, on March the 5th, why I don’t believe that Mann et al. proxy reconstructions can be used for the purpose of econometric inference. Here:

    Global average temperature increase GISS HadCRU and NCDC compared

  252. Bart Verheggen Says:

    Dhogaza, Alex, VS,
    Thanks all for your latest construtive comments.

    VS, Tamino didn’t tell me that a unit root ‘doesn’t really matter’, but he did say that it affects the error of the trend more than the trend estimate itself. Which makes sense to me.

    Stereo,
    Leave the animals at home.

  253. VS Says:

    Hi dhogaza,

    Perhaps we can continue normally from here. You should take me off that ‘list’ of yours, though :P

    First off, that Jones quote was more of a ‘statistician’s joke’, and I really couldn’t help myself (I even apologized in advance :). The point is that Jones actually meant ‘significant at the 5% significance level’. A significance level of 95% is simply ridiculous. It implies that your ‘p-value’, for the null hypothesis that the trend coefficient is equal to zero, is lower than 95%.

    I think Jones actually meant: “0 is an element of the 95% confidence interval for the calculated trend coefficient”. This would correspond to a significance level of 5% being employed.

    But this is besides the point, it was a comic interlude :D

    You also asked me to test for a unit root over the past 30 years (i.e. the sample Bart used). I don’t think that’s relevant, because we have access to a longer series, so why throw away observations? Also, 30-something observations is really too few for the tests to function properly (small-sample distortions really start creeping up on you with fewer than 50 obs). I guess that 100 or so is the minimum in this case, but that’s just an educated guess.

    As for that quote, about ‘climate science’, I don’t think I insulted a field. I have pointed to a very influential paper on non-experimental hypothesis testing.

    Click to access the_probability_approach_in_econometrics.pdf

    Judging by what has been published w.r.t. statistical testing / regression analysis in the mainstream climate science literature, I firmly believe that that paper needs to circulate (no offense). I kindly invite you to read it, it was an eye-opener for me when I was an undergrad (perhaps one of the most influential in my decision to pursue a PhD), and Haavelmo even managed to fetch a Nobel prize (in part) for it.

    As for Tamino, my previous post explains my position in this situation.

    And finally, in the case of Rabbett, he should first get the definition of an integrated process right before butting into this discussion. Also, his post might have been ‘interesting’ if he had actually posted some test results, instead of ‘bunny hunches’. However, given that he clearly demonstrated that he’s unaware of what exactly we’re testing for (i.e. he defines an I(1)/I(2) process as ‘increasing as a first/second order function’), I wouldn’t hold my breath.

  254. VS Says:

    “It implies that your ‘p-value’, of the null hypothesis that the trend coefficient is equal to zero, is lower than 95%.”

    Actually, in the context of the Jones quote, it implies that the relevant p-value is HIGHER than 95%.. :)

  255. Scott Mandia Says:

    I am glad to see the discussions are back on track.

    Alex: your comments have helped me greatly to understand the issue at hand.

    VS also told us to look at several Wiki sites. I found the image to the right on this link helpful:

    http://en.wikipedia.org/wiki/Unit_root#Unit_root_hypothesis

  256. RedLogix Says:

    Scott,

    For a control systems person that graphic you’ve linked to is very interesting.

    The ‘green line’ case is commonly encountered in what we call ‘integrating processes’, for instance, filling a vessel. A constant inflow into a tank (with no outflow changes) results in a steadily rising level. If the inflow is interrupted briefly, i.e. turned off for a short period and then turned back on, the level rise will cease (or the level will drop if there is outflow)… and then resume at its original rate when the inflow is restored. But the level will always be lower than if the interruption had not taken place.

    The ‘blue line’ case is even more common, for instance the position of a valve that is controlling the flow in a pipe. If the valve is closed briefly the flow reduces, but when it is opened back to the original position the flow returns to its original value.

    The difference between these cases is fundamental and comes about for purely physical reasons: in the first case the vessel is acting as a physical integrator, while in the second there is no such mechanism; only ‘return to equilibrium’ forces are at work.

    It occurs to me (at risk of seeming hopelessly naive) that the planet’s climate has elements of BOTH mechanisms in play, ie the ‘return to equilibrium’ behaviour that radiative transfer and energy balance demand, AND the presence of a huge thermal integrator, ie the oceans… acting on different time scales.

    Does this suggest a decent physical reason why this pesky ‘unit root’ can be extracted from the temperature record?

  257. VS Says:

    Hi Scott, RedLogix,

    I’m really happy you guys are diving into the matter. However, you ought to be careful when interpreting the implications of that figure! :)

    While a process containing a unit root indeed displays the depicted property (i.e. the permanent effect of a ‘shock’), this doesn’t imply that these shocks just come and go ‘randomly’. That would indeed be in stark contrast with what we know about our climate.

    I replied to Arthur Smith a bit earlier (on March 10th) with this:

    “Besides, cointegration is actually a tool to establish equilibrium relationships with series containing stochastic trends. Note that cointegration implies an error correction mechanism, where two series never wander off too far from each other. It therefore allows for stable/related systems which we nevertheless observe as ‘random walks’. The term ‘random walk’ is a bit misleading here, so it is better to say that the series contain a stochastic trend.

    Take a look at the matter, it’s quite interesting. I keep saying it: hence the Nobel prize!”

    The main point is that once we have established that our series contain unit roots, we might nevertheless be able to ‘cointegrate’ them (note that multiple series can furthermore be cointegrated, e.g. vector cointegration).

    If we succeed, we have established an ‘equilibrium’ relationship between the variables in question. Note that these statistical modeling techniques allow for runaway warming due to GHG forcings, as well as negative feedbacks (so only ‘temporary effects’ due to forcings), or whatever else we find in our data.

    I think we’ll be able to get into that matter once we have established the presence of unit roots in various series we are investigating (e.g. solar irradiance, GHG forcings).

  258. Adrian Burd Says:

    Alex,

    Many thanks for your clear posting on unit roots and the tests for them.

    So here’s a question: what type of physical model would yield these results? I ask because it’s unclear to me whether the way to interpret these statistical results is to re-examine the physical theory, or whether they are just saying something about the noise in the data, i.e. that, statistically, we don’t have sufficient data to get a good signal over the noise.

    To put it another way, are these statistical tests telling us something profound about the physics of the system? Or are they telling us something about the signal to noise ratio in the data?

    My impression from reading what you and VS have posted is that it’s the latter. However, some of VS’s comments make me think it might be the former.

    Adrian

  259. Rob Traficante Says:

    Hi Alex and VS

    Thanks for the discussion. I’ve no real idea about time series apart from what you might get in the first or second year of a maths degree, so all this is very helpful. Just to examine some of the points Dhog raised, I decided to run the tests on truncated series, starting with a series that spanned 1880 to 1910 and, adding a year at a time, running through to a series that spanned 1880 to 2009. I know I’m bound to get significant results just by chance, but I was looking to see if there was some way to measure – I don’t know the phrase I’m looking for, but I’ll choose ‘consistency of effect’ – with the ADF tests. Here’s what I used in R (I’m not terribly familiar with R, so please be kind):

    # load the CADFtest package (implements the ADF test with IC-based lag selection)
    library(CADFtest)

    # set up the end points: ADF test will use data from 1880 to startseries
    startseries <- 1910
    endseries <- 2009
    lengthseries <- length(startseries:endseries)

    # set up a null list to write the results to
    results.AIC <- list(endyr = rep(NA, lengthseries),
                        pvalue = rep(NA, lengthseries),
                        lags = rep(NA, lengthseries))

    # loop through, truncating the series at the specified year
    for (endyear in startseries:endseries)
    {
        temp.temp <- V2[V1 <= endyear]
        # first pass: let the AIC choose the lag length (up to 10)
        test1 <- CADFtest(temp.temp, type = "trend", max.lag.y = 10,
                          criterion = c("AIC"))
        lagnum <- test1$max.lag.y
        # second pass: rerun the test with that lag length fixed
        test2 <- CADFtest(temp.temp, type = "trend", max.lag.y = lagnum,
                          criterion = c("none"))
        results.AIC$pvalue[endyear - startseries + 1] <- test2$p.value
        results.AIC$lags[endyear - startseries + 1] <- test2$max.lag.y
        results.AIC$endyr[endyear - startseries + 1] <- endyear
    }

    That’s my version of ‘brute force programming’! In the syntax, assume V1 is the year in the GISS data and V2 is the temperature. I ran a similar loop for the BIC, HQC and MAIC criteria (which is all the R add-on has). BIC rejects the null consistently after, say, the 1880-1920 truncated series. HQC rejects the null for most truncated series except when the end year is 2005, 2006, 2007, 2008 or 2009. MAIC almost always fails to reject the null. AIC is more difficult to describe: it rejects the null for most truncated series up to 1960, but then mainly fails to reject, except for some particular series (those where I truncate at 1985, 1986, 1992, 1993, 1994, 1999 and 2000). If I’ve done this correctly (by no means a given), what would be your interpretation, if any, of the ‘consistency’ of these tests, or am I just data-mining to death?

    Interestingly, with the truncated temp series 1880-2000 (which is the period BR use for their equation (2)), it’s possible we would reject the null using the ADF test.

    I could use a similar approach truncating ‘from the left’, as it were, e.g. dropping 1880, then 1880-1881, then 1880-1882, and seeing how the test results look. Any point in doing so…?

    Regards
    Robert

  260. dhogaza Says:

    First off, that Jones quote was more of a ‘statistician’s joke’, and I really couldn’t help myself (I even apologized in advance :). The point is that Jones actually meant ‘significant at the 5% significance level’. A significance level of 95% is simply ridiculous. It implies that your ‘p-value’, for the null hypothesis that the trend coefficient is equal to zero, is lower than 95%.

    It is common in science to state that p = 0.05 is equal to a 95% confidence level.

    In this case, 1995-present gives p = 0.076, at least according to one person who computed it. Or a 92.4% confidence level, as it is often put.

    1994-present gives p < 0.05 using HadCRUT, which is

    1. why the skeptic-provided question used 1995 as the start point (if you poke around you'll see folks like Lindzen and Motl suggesting the use of 1995 because it's the *latest date* for which the trend is not significant)

    2. why it's cherry-picking and dishonest, because in a field where 30-year trends are considered minimal for meaningful discussion, it's a bit surprising that even 1994-present yields a statistically meaningful trend.

    and of course we'll probably both agree that Jones dropped the ball in his answer, leaving himself open to blatant quote-mining.

    He should've responded by not answering the question and simply stating "it's significant from 1994-present and all earlier dates going back to 1970". He should've recognized that the question was dishonest, and a set-up for future misrepresentation, and refused to play the game their way.

  261. dhogaza Says:

    You also asked me to test for a unit root in the past 30 years (i.e. the sample Bart used). I don’t think it’s relevant, because we have access to a longer series, so why throw away observations?

    Given that we have temperature reconstructions going back 400 years which are, according to the NAS, very reliable, which have narrow error bars compared to the reconstructions in the interval -400 to -2000 years, by choosing 1880 to present you’re already throwing away data.

    As to why start at the mid 70s, again, it comes from knowledge of the physics and from observations. We know that from the end of the LIA forward, temps have been roughly flat until the 1970s. We have a physical explanation – indeed, we *had* a physical *prediction* beforehand, which current observations support – for the same. Why treat it as a uniform system? Why not look at the time period of interest, the time period in which physics informs us that CO2 forcing should be strong enough for a trend to emerge from the noise?

  262. dhogaza Says:

    “Besides, cointegration is actually a tool to establish equilibrium relationships with series containing stochastic trends. Note that cointegration implies an error correction mechanism, where two series never wander off too far from each other. It therefore allows for stable/related systems which we nevertheless observe as ‘random walks’. The term ‘random walk’ is a bit misleading here, so it is better to say that the series contain a stochastic trend.

    Take a look at the matter, it’s quite interesting. I keep saying it: hence the Nobel prize!”

    It ain’t going to lead to a nobel prize in physics, trust me.

    As hard as it might be to believe … physical systems *do* exist.

    The myth of the mathematician “proving” a bumblebee can’t fly does apply here, on a couple of different levels.

  263. dhogaza Says:

    To put it another way, are these statistical tests telling us something profound about the physics of the system? Or are they telling us something about the signal to noise ratio in the data?

    My impression from reading what you and VS have posted is that it’s the latter. However, some of VS’s comments make me think it might be former.

    Or we can ask what the physics tells us about expectations over the last 130 years. We can ignore Milankovich cycles over such a short time frame so …

    We have TSI fluctuating in a way that’s not exactly random, but not cyclical to the point where we can predict solar output in advance over such a time frame.

    We have a significant negative forcing in the form of large volcanic events that spew stuff up into the stratosphere in an unpredictable way.

    We have periodic redistributions of ocean heat that affect temperature, again, not exactly random but the timing and magnitude are currently unpredictable except in the very near future (ENSO).

    Am I missing anything significant?

    So lots of noise, no long-term trend on this timescale, relatively short-term perturbations when TSI changes significantly for an extended time, or when volcanic activity is running a bunch of “heads” or “tails” in a row, etc. Meteorologists picked 30 years as a rule-of-thumb for climate analysis because such perturbations typically don’t persist for such long periods.

    I don’t think the statistical argument is terribly surprising in this situation.

    But if you think of the drunk walking from the lamp post starting in 1880 … when he began his walk, there was a light breeze from the right. By 1975 that light breeze had become noticeable, a stiff breeze, always from the right. She is now experiencing a gale force wind, and by 2100 will be experiencing Cat 3 hurricane wind, always from the right.

    A drunk walking in such conditions is going to veer left in response to the wind, and as it increases, further and further to the left.

    If a statistical analysis fails to capture the physical change in the system, it don’t mean the wind ain’t blowin’ harder and harder.

    It simply means that statistics is misleading us because the tests being used are too coarse to capture the change in physical system.

    It certainly doesn’t mean, as two economists claim, that “AGW is disproved”.

  264. dhogaza Says:

    Hmmm, looks like my drunken walker went through a sex change some time in the 20th century… :)

  265. Ron Broberg Says:

    VS, I wasn’t very happy with your summary. Despite having no knowledge of the statistics in question, I could see some dodginess there. Here is my summary of your post.

    ADF: Presence of unit root not rejected in 5 cases, rejected in 1
    KPSS: Stationarity (no unit root) rejected at 5% and 10% sig, not at 1% sig.
    PP: No presence of unit root
    DFG: Presence of unit root not rejected

    All of the above are consistent with “no presence of unit root”

    ADF-DIFF: Clear presence of a unit root
    DFG-DIFF: Clear presence of a unit root

    The difference methods, OTOH, indicate “clear presence of unit root”

    So what is happening when we move from the data to the differences? Why do we see evidence rejecting unit roots in the first set and clear presence of unit roots in the second set? Is this an indication of weakness in the conclusions? In the methods selected? Of the noisiness of the data? Of my ignorance of the appropriate tests?

    I’m also interested in seeing a set of such tests running up the decades:
    (this is not a request – more like homework for me one day)
    1880-2010
    1890-2010

    1960-2010
    1970-2010
    1980-2010

    CO2 has not increased linearly over the 130 years in question. How do non-linear trends affect the tests used?

    Climate scientists do not believe that CO2 is the only forcing in effect. Do the tests in this post tacitly assume that it is?

    WUWT has posted a Scafetta piece claiming a strong periodic function in global temperature. Can these tests be used against periodic “trends”?

    VS, I am intrigued. But glaringly aware of my ignorance.

  266. KenM Says:

    As to why start at the mid 70s, again, it comes from knowledge of the physics and from observations. We know that from the end of the LIA forward, temps have been roughly flat until the 1970s.

    We’ve just skewered Motl for cherry-picking his start time of 1995.
    We’ve just lambasted Goddard for cherry-picking the 1980-something start date for his “snow analysis.”

    VS does his analysis over the entire GISS data set, but you think ignoring the first 90+ years of that dataset is OK because a model created *after* 1975 predicted that things would change in the 70s?

    I find it hard to swallow that temperature is a random walk too. The random walk conclusion says more to me about the weakness of the statistical methods than it does about nature. But seriously – how can you suggest doing the same thing as Motl and Goddard with a straight face?

  267. dhogaza Says:

    VS does his analysis over the entire GISS data set, but you think ignoring the first 90+ years of that dataset is OK because a model created *after* 1975 predicted that things would change in the 70s?

    Yes, I do think it’s OK, because it might help us understand what’s going on with the dataset as a whole.

    The two economists who “disproved AGW” noticed the break in the data a few decades ago, too, and looked to see if such a break could be teased out statistically. I think they failed to do so because the data’s inconclusive in that timeframe, assuming they did their stats correctly. Thus my wondering what happens if you go back 400 years using the proxy reconstructions which, for that time frame, are not controversial and which have relatively tight error bounds.

    Looking at the last 30-40 years might tell us that even VS and Alex would accept that an OLS fit is valid over that time frame, even if they claim it’s possibly not valid over the entire data set.

    Take a look at Bart’s first post – he did an OLS fit 1975-on, did you complain then? VS is saying “an OLS fit 1975-now is possibly invalid because I show a unit root in the data 1880-present”. That’s not necessarily true. I asked “hey, how about the 1975-now data alone”. VS says he suspects there aren’t enough data points to properly test. OK, perhaps that’s true.

    The difference between Steve Goddard, for instance, and what I’m suggesting is that I’m saying “look harder at the data to better understand it”, not “do this, not that, ignore the rest, and poof! Global warming is disproved” which is an accurate description of Goddard’s hand-waving.

    And remember that underlying this is a difference of opinion between Tamino, a PhD in statistics whose entire professional career involves time series analysis, as I understand it, and a PhD in economics who uses statistics as a tool, but perhaps hasn’t as strong a theoretical background in statistics as Tamino does.

    Remember, VS is making an extraordinary claim – OLS fits to climate data can not be shown to be valid – and as with all such claims, requires extraordinary evidence since I’ve never seen statisticians outside economics make that claim.

  268. VS Says:

    Hi KenM,

    You mean this?

    Still Not

    I think I agree.

    Now he’s performing unit root tests on less than 34 observations. Those observations that fit his hypothesis.

    His ‘objective’ sample: 1975-2008.

    Here guys, plot it.

    -0.04
    -0.16
    0.13
    0.01
    0.09
    0.18
    0.26
    0.05
    0.26
    0.09
    0.05
    0.13
    0.26
    0.31
    0.2
    0.38
    0.35
    0.13
    0.14
    0.23
    0.38
    0.29
    0.4
    0.56
    0.32
    0.33
    0.48
    0.56
    0.55
    0.49
    0.63
    0.54
    0.57
    0.43

    Note that when we employ p lags in the test equation, we are using the last (n-p) of these observations, because the first p drop out (with a minimum of 1 dropping out).

    Note also that we need to estimate 3+p coefficients, and the variance of the regression, so 4+p parameters.

    So

    0 lags in test equation: we are estimating 4 parameters with 33 observations
    4 lags in test equation: we are estimating 8 parameters with 30 observations

    He then proceeds to flush unit roots and the Nobel prize for economics down the toilet:

    “The whole “unit root” idea is nothing but a “throw some complicated-looking math at the wall and see what sticks” attempt to refute global warming. Funny thing is, it doesn’t stick. In fact, it’s an embarrassment to those who continue to cling to it.

    But hey — that’s what they do.”

    This Grant Foster guy is amazing.

  269. dhogaza Says:

    No, VS, he’s throwing *your use of it* down the toilet.

    Again, the similarity to the myth of the mathematical proof that a bumble bee can’t fly is remarkable and informative.

    Now he’s performing unit root tests on less than 34 observations. Those observations that fit his hypothesis.

    It’s not “his hypothesis”, it’s standard climate science: recent decades fit a (nearly) linear model. The entire GISS series doesn’t. What is so hard to understand about that? There are physical reasons for the emergence of the CO2 forcing signal about that time: TSI variability went almost flat (simplifies things, don’t have to account for it), negative forcing due to industrial aerosol emissions dropped steeply in the industrialized west during the 1970s (another simplification), and of course most importantly CO2 emissions were rising exponentially leading to a linear increase in forcing.

    So the physical *prediction* is that CO2 forcing is increasing in linear fashion, its magnitude has become greater than fluctuations in TSI and would’ve outstripped negative forcing from industrial aerosols at some point but did so more quickly due to clean-air regulations in the west, etc.

    Therefore, we expect the climate response to this linear forcing to be linear as that forcing grows in magnitude to a sufficient level to overwhelm the variability of other natural forcings.

    In the 1970s scientists were arguing if we’d already reached that point, and if not, when we would reach it.

    They certainly weren’t arguing that climate was responding linearly to a much lesser CO2 forcing in the face of abnormally high TSI levels in, say the early 1900s.

    So you’re imposing an assumption of linearity over the entire timeseries that a) doesn’t represent the view of scientists in the field and b) has no physical basis.

    He claims that using an ADF test over a non-linear time series isn’t a good idea. I did a little googling and came up with this buried in a summary of statistical testing techniques:

    Disadvantage of the ADF test: lack power when the model specification under the alternative hypothesis is nonlinear (see Nelson and Plosser, 1982; Taylor et al., 2001; and Rose, 1988)

    Seems like Tamino’s telling the truth here.

    Well, the “alternative hypothesis” in this case is that the series from 1880-present is non-linear. If you think you’re breaking big news here, you’re sadly mistaken.

    Why have you adopted the hypothesis that the series 1880-present is closely matched by a linear model? What is your *physical* basis for doing so?

    You’re also ignoring what he says is actually most important: not the ADF test over the linear part of the recent climate record, but testing the data 1880-present taking CO2 forcing into account.

  270. VS Says:

    http://chart.apis.google.com/chart?chs=500x400&chf=bg,s,ffffff&cht=ls&chd=t:15.18,0.00,36.70,21.51,31.64,43.03,53.16,26.58,53.16,31.64,26.58,36.70,53.16,59.49,45.56,68.35,64.55,36.70,37.97,49.36,68.35,56.96,70.88,91.13,60.75,62.02,81.01,91.13,89.87,82.27,100.00,88.60,92.40,74.68&chco=0066ff

    Google chart: Tamino’s sample.

  271. dhogaza Says:

    He then proceeds to flush unit roots and the Nobel prize for economics down the toilet

    May I suggest you read his post again – this time for comprehension rather than to mine for bits to hang insults on?

  272. dhogaza Says:

    Google chart: Tamino’s sample.

    What about it? Looks pretty similar to the chunk Bart fit his OLS to in the first place.

  273. dhogaza Says:

    Just to be very clear for those who aren’t playing along at home (by reading Tamino’s post):

    He then proceeds to flush unit roots and the Nobel prize for economics down the toilet:

    “The whole “unit root” idea is nothing but a “throw some complicated-looking math at the wall and see what sticks” attempt to refute global warming. Funny thing is, it doesn’t stick. In fact, it’s an embarrassment to those who continue to cling to it.

    But hey — that’s what they do.”

    The “they” in this case is clearly “those who claim they’ve proven the physics behind AGW wrong via statistical analysis of the GISTEMP data”.

    Not:

    1. economics as a whole

    2. most specifically not the Nobel prize winner(s) alluded to by VS.

    VS: stop the BS, please. You’re smart. You know who Tamino was talking about when he made that statement.

  274. jfr117 Says:

    so what does this show – that we have a 40 year dataset that shows linear growth in temperature? that this 40 year period is warmer than….what?

    if we can’t compare the most recent data to pre-1975 due to nonlinearity, what can we conclude?

  275. dhogaza Says:

    so what does this show – that we have a 40 year dataset that shows linear growth in temperature? that this 40 year period is warmer than….what?

    Well, that starts getting us into the realm of hockey sticks, etc. I don’t see the point of that in this discussion. But let me just say that one can study hockey sticks and ignore Mann and short-centered PCA or “regular” PCA. There are enough hockey stick reconstructions using different statistical analysis techniques to outfit an NHL team.

    if we can’t compare the most recent data to pre-1975 due to nonlinearity, what can we conclude?

    We can compare it, we simply can’t say “since VS assumes linearity, climate scientists must assume linearity, and once they do, we can say that AGW is disproved” :)

    Tamino does look at the entire dataset, but by incorporating CO2 forcing in his analysis – I suggest you read his latest post and, if you have questions, ask him directly.

  276. MP Says:

    @VS,

    If the test requires more data points, why not have a go at monthly data then? If global T is a random walk, should it not be a random walk on every time scale?

  277. VS Says:

    Hi jfr117, welcome back.

    The only thing shown here is that Grant Foster doesn’t know how to test his own hypotheses. Here’s a proper test for what Tamino and dhogaza are proposing (if those are indeed two different individuals, which I’m starting to doubt).

    The Zivot-Andrews unit root test: http://www.jstor.org/pss/1391541

    With a Stata module here: http://ideas.repec.org/c/boc/bocode/s437301.html

    The idea is the following. We allow the series to display a structural break in the test equation. However, instead of ‘cherry picking’ we allow the estimation algorithm to determine the most probable location of this ‘structural break’, assuming it exists.

    The test equation then allows for this break, when testing for unit roots. That way we don’t have to depend on Grant Foster picking out our break and throwing away the data he doesn’t like.
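    For reference, the same test exists outside Stata; a minimal sketch in Python, assuming statsmodels >= 0.10 and an array gistemp holding the annual GISS anomalies (loading the data is left to the reader):

        from statsmodels.tsa.stattools import zivot_andrews

        # regression='c': break in intercept; 't': break in trend; 'ct': both
        for reg in ("c", "t", "ct"):
            stat, pval, crit, lags, bpidx = zivot_andrews(gistemp, regression=reg,
                                                          autolag="AIC")
            print(reg, round(stat, 3), "break at obs", bpidx, "5% cv", crit["5%"])

        # As in the Stata output below, the unit root null is rejected only if the
        # minimum t-statistic is more negative than the critical value.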

    ———————–

    Zivot-Andrews unit root test for gisstemp_all

    ———————–

    We allow for all three possible alternative hypotheses. So we can allow for a structural break in:

    (1) Intercept
    (2) Trend
    (3) Both.

    Zivot-Andrews unit root test for gisstemp_all
    Allowing for break in intercept
    Lag selection via AIC: lags of D.gisstemp_all included = 3
    Minimum t-statistic -3.473 at 1987 (obs 138)
    Critical values: 1%: -5.43 5%: -4.80

    Conclusion: unit root NOT rejected.

    Zivot-Andrews unit root test for gisstemp_all
    Allowing for break in trend
    Lag selection via AIC: lags of D.gisstemp_all included = 3
    Minimum t-statistic -3.765 at 1977 (obs 128)
    Critical values: 1%: -4.93 5%: -4.42

    Conclusion: unit root NOT rejected.

    Zivot-Andrews unit root test for gisstemp_all
    Allowing for break in both intercept and trend
    Lag selection via AIC: lags of D.gisstemp_all included = 3
    Minimum t-statistic -4.410 at 1964 (obs 115)
    Critical values: 1%: -5.57 5%: -5.08

    Conclusion: unit root NOT rejected.

    Overview of endogenously determined structural breaks, with test equation containing:

    (1) Intercept: 1987
    (2) Trend: 1977
    (3) Intercept and Trend: 1964

    ———————–

    CONCLUSIONS

    ———————–

    By allowing for a structural break in the intercept and the trend separately, or in both together, the null hypothesis of unit root presence is not rejected in any instance.

    Conclusion: GISS_all contains a unit root.

    The results furthermore fall in line with my elaborate unit root testing results posted here:

    Global average temperature increase GISS HadCRU and NCDC compared

    Here’s my view on Tamino’s previous ‘cherry picking’ of IC’s and test results:

    Global average temperature increase GISS HadCRU and NCDC compared

    Grant Foster, stop pretending you are a statistician.

  278. dhogaza Says:

    I’m not tamino, and I don’t pretend to be a statistician.

    VS:

    Grant Foster, stop pretending you are a statistician.

    Back to insults, I see, you who would pose as being the victim of a bunch of insulting bullies …

    Fact is, he *is* a statistician, while you’re an economist who uses statistics as a tool. Whether or not “economist” should be considered a term of insult is an exercise left to the reader …

    What I see here is VS skating past objections without answering them, then throwing more mathy-looking stuff at the wall, hoping it will stick.

    You haven’t addressed Tamino’s using CO2 as a covariate.

    You still haven’t addressed the physical evidence. The bumble flies …

  279. dhogaza Says:

    So we have a statistical test saying we can’t reject the null hypothesis that there is no structural break.

    And we have physics arguing – nay, *predicting* – said break.

    And our economics dude says the inability to reject the null hypothesis based on limited data allows one to state that AGW is disproven.

    Does that about sum up the absurdity?

  280. dhogaza Says:

    No, I read too quickly … your test allows for a structural break, but fails to reject a unit root.

  281. MartinM Says:

    Zivot-Andrews unit root test for gisstemp_all
    Allowing for break in intercept
    Lag selection via AIC: lags of D.gisstemp_all included = 3
    Minimum t-statistic -3.473 at 1987 (obs 138)
    Critical values: 1%: -5.43 5%: -4.80

    Since when did GISTEMP go back to 1850?

  282. VS Says:

    Hi MartinM,

    My dataset also contains HADCRUT and CRUTEM3, and a bunch of GHG forcings, which all go back to 1850.

    My observations are therefore ID’d from 1850 onward. Stata then returns the exact observation ID. This (obviously) has no impact on the estimation procedures, since GISS data are NA before 1880.

  283. dhogaza Says:

    What is the physical basis for assuming there’s a *single* structural break in the series, rather than, say, two?

  284. Bart Says:

    Keep it nice guys.

    Re-reading the very informative 2009 Ramanathan and Feng article I came across this relatively simple explanation of the earth’s radiation balance:

    So the process of the net incoming (downward solar energy minus the reflected) solar energy warming the system and the outgoing heat radiation from the warmer planet escaping to space goes on, until the two components of the energy are in balance. On an average sense, it is this radiation energy balance that provides a powerful constraint for the global average temperature of the planet.

    I.e. The global average temperature only changes over climatic timescales (multiple decades or longer) if there is an imbalance in the radiation budget. As is now indeed the case. Climate is to a certain extent deterministic.
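    A back-of-envelope version of that balance (a sketch with standard textbook values, not numbers from the article):

        S = 1366.0       # total solar irradiance, W/m^2
        albedo = 0.30    # planetary albedo
        sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

        absorbed = S * (1 - albedo) / 4     # ~239 W/m^2, averaged over the sphere
        T_eff = (absorbed / sigma) ** 0.25  # ~255 K effective radiating temperature
        print(absorbed, T_eff)              # the ~33 K gap to the ~288 K surface
                                            # is the greenhouse effect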

  285. dhogaza Says:

    This guy, for instance, explores the data and suggests there are two.

    And there’s a known physical explanation for the structural break in the early 1900s …

    He’s a civil engineer with a masters, so I’ll concede that our economist probably has a stronger background in stats than him, just as VS should concede that Tamino has a stronger background in stats than VS.

    But it’s interesting.

    And unlike VS, he did not simply *assume* any particular number of break points; he’s attempted analysis, at least.

  286. Bart Says:

    VS,

    The tests that Tamino did in his latest post seem to be the most relevant for the issue at hand:
    1) For the time period for which the trend is approximately linear
    2) Using the estimated radiative forcing instead of a linear trend
    Where 2) is obviously superior.

    I may have missed it, but using these criteria (esp. the second one, though as dhogaza explained, there are good reasons to pick the timeframe from 1975 onwards, give or take a few years), what would you get as test results?

  287. KenM Says:

    Aren’t GHG forcings a function of the observed temperature? I mean, if the temp goes down for 10 years, scientists look at that and say, “hmmm – why’d that happen?” Eventually they decide on a cause, e.g. aerosols, and *then* they assign a forcing that explains the deviation of the temperature from an expected trend that’s also plausible with the proposed cause.
    If I’ve got that right, then isn’t using GHG forcings as a covariate to the CADF test kind of circular logic? If they got the forcings wrong because they assumed a trend that isn’t necessarily there, then isn’t using those incorrect forcings in the test wrong too?
    Not saying they got the forcings wrong, just saying using them as a covariate is wrong when the notion of a trend is being challenged.

  288. dhogaza Says:

    2) Using the estimated radiative forcing instead of a linear trend

    He corrected my misunderstanding, he’s using the net forcing from all sources (not just GHGs) that are used as forcing inputs to GISS Model E, as described here.

    Eyeballing, it looks like a bunch of the others might somewhat balance out, so I’m not sure it makes a huge difference vs. just using CO2 forcing.

  289. dhogaza Says:

    Aren’t GHG forcing’s a function of the observed temperature? I mean, if the temp goes down for 10 years, scientists look at that and say, “hmmm – why’d that happen?”

    Uh … no. CO2 forcing comes from radiative transfer physics, which I can pronounce but not do :)

    The temp drop from Pinatubo was modeled quite accurately *in advance*, i.e. after physical observations of the amount of stuff ejected into the stratosphere were available, but before the (roughly) three-year cooling period was over.

    Eventually they decide on a cause, e.g. aerosols, and *then* they assign a forcing that explains the deviation of the temperature from an expected trend that’s also plausible with the proposed cause.

    In the case of aerosol cooling from the 1940s or so until clean air laws made a change, you are *partially* right. They didn’t just “assign a forcing” to fit observations, they rather tried to calculate forcing and then compared to observations.

    If I’ve got that right, then isn’t using GHG forcings as a covariate to the CADF test kind of circular logic?

    The kind of circularity you mention would be useless, and unscientific, and why would one waste one’s time doing something useless and unscientific???

  290. S. Geiger Says:

    Quick question out of curiosity. With our varied temperature measurement networks we can get an idea of the ‘thermal content’ of the global system, which would seem to require both water and atmosphere (but mostly water). If we could very accurately measure this quantity, would it be a smooth continuous function of time or would it bounce around (I guess due to the random element of ‘weather’)? Are the bounces we see actual changes in the earth’s energy balance or just noise in our measurement network?

    BTW, I’ve greatly enjoyed this discussion…at some points it lives up to the potential for such blogs to provide a real forum for honest discussion….also kind of interesting how hard it is for some not to stray back into ad hom land…very emotional topic I guess.

  291. dhogaza Says:

    Here, for instance, you can see where the figures for various GHG historical concentrations come from, along with the ranges they assign for the future (which of course can’t be precisely predicted because we don’t know what kind of response to the problem we’ll take).

    This data is transformed into forcing figures using *physics*.

    So you can see that for GHG forcing, your assumption of circularity is false.

    There are other pages which describe the provenance of each and every forcing …

  292. KenM Says:

    Hi Dhogaza – I see the quoted text, where’s that from?

  293. KenM Says:

    doh! didn’t realize the whole thing was a link! I’ll check it out.

  294. dhogaza Says:

    If we could very accurately measure this quantity would it be a smooth continuous function of time or would it bounce around (I guess due to the random element of ‘weather’).

    Bounces …

    Are the bounces we see actual changes in the earths energy balance or just noise in our measurment network?

    There’s considerable uncertainty with the sea temp stuff, and the measurements are for surface and (relatively) near surface, there’s no time series of measurements for the ocean as a whole.

    BTW you’ve come perilously close to what Trenberth was talking about in the infamous “we can’t account for the missing warming”, as he was talking about not being able to account for excess energy in the system (determined IIRC by satellite measurements) because of inadequate observations of the entire atmosphere/ocean system.

    Let’s not go OT with that, but thought you might find it interesting…

  295. KenM Says:

    Didn’t take long to reject, I’m afraid – those are predictions for models. The forcings (like the one Tamino used) are not predictions but ‘observations’. I put ‘observations’ in quotes because I understand that they are really best-guess approximations – i.e. there were no aerosol-measuring instruments collecting data in 1940.

  296. dhogaza Says:

    also kind of interesting how hard it is for some not to stray back into ad hom land…very emotional topic I guess.

    Well … VS is essentially repeating arguments made by a couple of economists in a paper in which they bluntly stated: “AGW is disproved”.

    You can see why that might annoy some people…

  297. KenM Says:

    In the case of aerosol cooling from the 1940s or so until clean air laws made a change, you are *partially* right. They didn’t just “assign a forcing” to fit observations, they rather tried to calculate forcing and then compared to observations.

    Agreed, but this does not mean they are correct. Plausible, certainly, but that’s not a good enough defense when someone comes along and argues that temperature in the last 100 years or so resembles a random walk. You can’t say their plausible theory is wrong because someone else’s plausible theory says they are wrong.

    Put it another way, let’s say for argument’s sake that the 40s aerosol levels were estimated incorrectly. Would you agree this is possible?

    If so, then you must also agree that using those forcings as a covariate in the CADF test *to prove* that the temps are not following a random walk is circular.

  298. VS Says:

    Hi KenM and Bart

    That regression (‘climate forcings act as trend’) is completely flawed statistically.

    GHG forcings are found to be I(2) by everyone in the literature, i.e. the series contain two unit roots. Temperature has been found to be I(1), here above yet again, as well as throughout the literature.

    Simply regressing temperatures on GHG forcings (or simple CO2 forcings) leads to spurious results. Tamino either knows this, and he’s misinforming people on purpose, or he doesn’t, and he’s incompetent.

    Hence the need for polynomial cointegration, employed by Kaufmann et al (2006) (but tested for incorrectly) and employed by Beenstock and Reingewertz (tested for correctly).

    I quote, Kaufmann and Stern (2000):

    “The univariate tests indicate that the temperature data are I(1) while the trace gases are I(2). That is, the gases contain stochastic slope components that are not present in the temperature series. This result implies that there cannot be a linear long-run relation between gases and temperature.”

    And I quote Beenstock and Reingewertz:

    “The order of non-stationarity refers to the number of times a variable must be differenced (d) to render it stationary, in which case the variable is integrated of order d, or I(d). We confirm previous findings [refs] that the radiative forcings of greenhouse gases (CO2, CH4 and N2O) are stationary in second differences (i.e. I(2)) while global temperature and solar irradiance are stationary in first differences (i.e. I(1)).

    Normally, this difference would be sufficient to reject the hypothesis that global temperature is related to the radiative forcing of greenhouse gases, since I(1) and I(2) variables are asymptotically independent [ref]. An exception, however, arises when greenhouse gases, global temperature and solar radiation turn out to be polynomially cointegrated [ref]. In polynomial cointegration the greenhouse gases that are stationary in second differences must share a common stochastic trend, henceforth the “greenhouse trend”, that is stationary in first differences. If this “greenhouse trend” exists and if it is cointegrated with global temperature and solar irradiance, we may conclude that greenhouse gases are polynomially cointegrated with global temperature and solar irradiance.”
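    The “spurious regression” risk invoked here is easy to demonstrate numerically. A sketch of the classic Granger–Newbold experiment in Python (two independent I(1) walks for simplicity, rather than the I(1)-on-I(2) case at issue):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        false_positives = 0
        for _ in range(1000):
            x = np.cumsum(rng.standard_normal(150))  # independent random walk
            y = np.cumsum(rng.standard_normal(150))  # independent random walk
            fit = sm.OLS(y, sm.add_constant(x)).fit()
            false_positives += fit.pvalues[1] < 0.05
        print(false_positives / 1000)  # far above the nominal 5%: OLS "finds"
                                       # relationships between unrelated series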

    What Tamino wrote down is complete nonsense, statistically speaking, apart from the issue illustrated above. He furthermore

    1) didn’t test his hypothesis properly,
    2) used a way too short and specifically selected sample
    3) again disregarded the presence of TWO unit roots in GHG forcings series (all of them, I’ll post test results)

    ————————

    Again, because the posts keep getting spammed away by certain individuals who feel they need to share each and every thought and ‘hunch’ with us as soon as it pops up in their heads.

    **Unit root analysis, including all test results and motivations. Conclusion, GISS-all is I(1):

    Global average temperature increase GISS HadCRU and NCDC compared

    **My view on Tamino’s cherry picking of test results, in his ‘Not a Random Walk’ (strawman) blog entry:

    Global average temperature increase GISS HadCRU and NCDC compared

    **Graph of sample employed in Tamino’s latest ‘analysis’

    Link to sample:
    http://chart.apis.google.com/chart?chs=500x400&chf=bg,s,ffffff&cht=ls&chd=t:15.18,0.00,36.70,21.51,31.64,43.03,53.16,26.58,53.16,31.64,26.58,36.70,53.16,59.49,45.56,68.35,64.55,36.70,37.97,49.36,68.35,56.96,70.88,91.13,60.75,62.02,81.01,91.13,89.87,82.27,100.00,88.60,92.40,74.68&chco=0066ff

    [Note that the alternative hypothesis set by Tamino, which is set against the unit root null hypothesis, is that the series has a straight linear trend. Convenient sample choice, no?]

  299. VS Says:

    Here are the test results for GHG forcings:

    I also reproduced the findings of Kaufmann (various papers) and Beenstock and Reingewertz concerning the I(2) property of CO2 forcings. I downloaded the data from GISS-NASA, here:

    http://data.giss.nasa.gov/modelforce/ghgases/GHGs.1850-2000.txt

    I furthermore transformed the ppm series into forcing, following Kaufmann, Beenstock AND wikipedia.

    F_CO2=5.35*Ln(CO2_ppm/285.2)
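    In Python the transformation is one line (a sketch; co2_ppm stands for the concentration column of the GISS file above, whose parsing is left to the reader):

        import numpy as np

        f_co2 = 5.35 * np.log(co2_ppm / 285.2)  # radiative forcing, W/m^2
        print(5.35 * np.log(2.0))               # sanity check: a doubling of CO2
                                                # gives ~3.7 W/m^2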

    As before, we can proceed to test for unit roots. I will test against the alternative hypothesis of a trend and intercept. Note however that the results hold (arguably more firmly) when testing against alternative hypotheses (1) and (2) (listed in my lengthy post above).

    ————————–

    AUGMENTED DICKEY FULLER TESTING

    ————————–

    As before, we first test the level series. In contrast to the temperature series, the IC’s all deliver the same results. I will employ the three standard ones in this case, namely the AIC, BIC/SIC and HQ.

    Level series, F_CO2, ADF testing

    IC: Akaike Info Criterion (AIC)
    LL: 12
    p-value: 1
    Conclusion: presence of unit root not rejected
    JB: 0.000002 (!! the errors have a mad kurtosis, over 5)

    IC: Schwartz / Bayesian Info Criterion (BIC, used by a critic of mine)
    LL: 7
    p-value: 0.9987
    Conclusion: presence of unit root not rejected
    JB: 0.000003 (same as above)

    IC: Hannan-Quinn Info Criterion (HQ)
    LL: 7
    p-value: 0.9987
    Conclusion: presence of unit root not rejected
    JB: 0.000003 (same as above)

    First difference series, D(F_CO2), ADF testing

    IC: Akaike Info Criterion (AIC)
    LL: 7
    p-value: 0.6764
    Conclusion: presence of unit root not rejected
    JB: 0.000000 (same as above)

    IC: Schwartz / Bayesian Info Criterion (BIC, used by a critic of mine)
    LL: 6
    p-value: 0.7871
    Conclusion: presence of unit root not rejected
    JB: 0.000002 (same as above)

    IC: Hannan-Quinn Info Criterion (HQ)
    LL: 6
    p-value: 0.7871
    Conclusion: presence of unit root not rejected
    JB: 0.000002 (same as above)

    Second difference series, D(F_CO2, 2), ADF testing

    IC: Akaike Info Criterion (AIC)
    LL: 5
    p-value: 0.0000
    Conclusion: presence of unit root rejected
    JB: 0.000002 (same as above)

    IC: Schwartz / Bayesian Info Criterion (BIC, used by a critic of mine)
    LL: 5
    p-value: 0.0000
    Conclusion: presence of unit root rejected
    JB: 0.000002 (same as above)

    IC: Hannan-Quinn Info Criterion (HQ)
    LL: 5
    p-value: 0.0000
    Conclusion: presence of unit root rejected
    JB: 0.000002 (same as above)

    So, if these test results are to be trusted, we cannot reject the presence of a unit root in the level series (so no I(0)), likewise, we cannot reject the presence of a unit root in the first difference series (so no I(1)). However, the presence of a unit root in the second difference series, is clearly rejected, so we conclude that CO2 GHG forcings, are I(2). In other words, they need to be differenced twice in order to obtain stationarity.

    I have to note here that normality of the errors of the test equation is rejected in all instances. This implies that the ADF test is not exact, which might be a bit problematic for inference.
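    For those who want to replicate the level / first-difference / second-difference sequence above, a sketch with statsmodels’ adfuller, reusing the f_co2 array from the sketch above:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        for d in range(3):  # 0, 1 and 2 differences
            stat, pval, lags, nobs, crit, _ = adfuller(np.diff(f_co2, n=d),
                                                       regression="ct", autolag="AIC")
            print(f"d={d}: lags={lags}, p-value={pval:.4f}")

        # Per the results above, the unit root is rejected only at d=2, i.e. I(2).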

    So let’s consult other tests.

    ————————–

    KWIATKOWSKI-PHILLIPS-SCHMIDT-SHIN TESTING

    ————————–

    Just as before, we now take stationarity to be the null hypothesis, and employ the KPSS test. The asymptotic critical values of the test statistic are, again:

    1% level, 0.216000
    5% level, 0.146000
    10% level, 0.119000

    As before, once the value of the test statistic exceeds one of the above values, stationarity is rejected at that significance level. I will only report the Bartlett kernel method results, because I read yesterday (while doing the KPSS tests) that this approach is most stable in small samples. The results, however, also hold for the Parzen kernel (in fact, they are even more solid).

    Level series, F_CO2, KPSS testing

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.291981
    Conclusion, stationarity is rejected at all significance levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.173808
    Conclusion, stationarity is not rejected at 1% significance level. Rejected at 5% and 10% significance levels.

    First difference series, D(F_CO2), KPSS testing

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.253348
    Conclusion, stationarity is rejected at all significance levels.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.244655
    Conclusion, stationarity is rejected at all significance levels.

    Second difference series, D(F_CO2, 2), KPSS testing

    Newey-West bandwidth selection:
    TEST STATISTIC: 0.021613
    Conclusion, stationarity is NOT rejected at any significance level.

    Andrews bandwidth selection:
    TEST STATISTIC: 0.031438
    Conclusion, stationarity is NOT rejected at any significance level.

    Applying the KPSS test, we again confirm that CO2 GHG forcings are I(2).
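    The mirrored test (stationarity as the null) is scriptable the same way; a sketch with statsmodels’ kpss, whose nlags="auto" bandwidth is a data-driven rule rather than the Newey-West/Andrews selections reported above:

        import numpy as np
        from statsmodels.tsa.stattools import kpss

        for d in range(3):
            stat, pval, lags, crit = kpss(np.diff(f_co2, n=d), regression="ct",
                                          nlags="auto")
            print(f"d={d}: KPSS stat {stat:.3f}, 5% cv {crit['5%']}")

        # Stationarity is rejected when the statistic exceeds the critical value;
        # per the results above, that happens for d=0 and d=1 but not for d=2.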

    ————————–

    PHILLIPS-PERRON TESTING

    ————————–

    The PP test, again, is the odd one out. When testing the level series, we cannot reject the presence of a unit root.

    Phillips-Perron test on Level series, F_CO2
    Bartlett kernel, Newey-West bandwidth:

    Ha: Trend and intercept (case (3))

    TEST STATISTIC 3.218009 (p-value: 1)

    1% level, -4.020396
    5% level, -3.440059
    10% level, -3.144465

    Conclusion, presence of unit root is not rejected

    Phillips-Perron test on First difference series, D(F_CO2)
    Bartlett kernel, Newey-West bandwidth:

    Ha: Trend and intercept (case (3))

    TEST STATISTIC -6.398659 (p-value: 0.000)

    1% level, -4.020822
    5% level, -3.440263
    10% level, -3.144585

    Conclusion, presence of unit root is rejected

    So following the PP test, we find that the F_CO2 series is in fact I(1). This test AGAIN deviates from all the other tests. However, for what follows, it still arrives at CO2 forcings being I(d+1) relative to the I(d) temperature series, which in turn corresponds to the relationship determined by the other tests.

    ————————–

    DICKEY FULLER GENERALIZED LEAST SQUARES TESTING

    ————————–

    In order not to bore you with yet more test results: the DF-GLS test clearly indicates that the F_CO2 variable is in fact I(2).
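    Neither PP nor DF-GLS ships in statsmodels’ stattools; a sketch using the third-party arch package (an assumption – any unit root library would do), again on the f_co2 array:

        import numpy as np
        from arch.unitroot import DFGLS, PhillipsPerron

        print(PhillipsPerron(f_co2, trend="ct"))  # PP test, trend and intercept
        print(DFGLS(np.diff(f_co2), trend="ct"))  # DF-GLS on the first differences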

    ————————–

    SUMMARY AND CONCLUSIONS

    ————————–

    We find, in line with what both Kaufmann and Beenstock have found, that there is strong evidence to suggest the presence of two unit roots in the F_CO2 series. Rabbett posted something on his blog about a structural break in the series (although he completely misspecified the nature of that hypothesized ‘break’).

    In this case, allow me to cite the results of BR.

    They tested whether the F_CO2 series is in fact an I(1) series with a structural break in 1964. I quote:

    “We also check whether rfCO2 is I(1) subject to a structural break. A break in the stochastic trend of rfCO2 might create the impression that d = 2 when in fact its true value is 1. We apply the test suggested by Clemente, Montanas and Reyes (1998) (CMR). The CMR statistic (which is the ADF statistic allowing for a break) for the first difference of rfCO2 is -3.877. The break occurs in 1964, but since the critical value of the CMR statistic is -4.27 we can safely reject the hypothesis that rfCO2 is I(1) with a break in its stochastic trend.”

    I have to note they ‘express’ themselves slightly incorrectly here. What they are saying is that the null hypothesis, of there being NO break, is not rejected (i.e. the test statistic does not exceed the critical value).

    They furthermore proceed to test all GHG forcings for their I(d) properties, and report:

    “We have applied these test procedures to the variables in Table 2. It turns out that the radiative forcings of all three greenhouse gases are I(2).”

    These are their findings, in Table 2 (for the record, I used the 1850-2000 data):

    rfCO2, I(2), 1850-2006
    rfCH4, I(2), 1850-2006
    rfN2O, I(2), 1850-2006

    I think we can trust these results (although, if someone is particularly sceptical, we can run them as well).

  300. dhogaza Says:

    Didn’t take long to reject, I’m afraid – those are predictions for models. The forcings (like the one Tamino used) are not predictions but ‘observations’.

    Look more closely, the historical data is blue in the graphs, the projections for various scenarios (i.e. how we control emissions) are the yellow part.

    Which values get used? Depends on what time frame the model is being used to explore. They want to run improved versions of the model against past forcing data to see if they do a reasonably good job of matching historical climate trends.

    I say ‘observations’ in quotes because I understand that they are really best-guess approximations – i.e. there were no aerosol-measuring instruments collecting data in 1940.

    That’s a real problem for *some* forcings, not for all. There’s good proxy data for some, not all, good historical measurements for some things, not others.

    Aerosols are a problematic one, AFAIK you are right about that.

    However, before scrubbers etc. were introduced, there is good economic data, and a lot of dirty industries didn’t change particularly much from, say, the 1920s(?) until air quality measurements were being done regularly. So I would assume that if you know that a particular technology for producing X by burning coal produced Y lbs of CO2 for each lb of X in 1950, you can probably work backwards to how much CO2 was produced by that particular production technology in the 1920s or 30s… etc.

  301. dhogaza Says:

    Agreed, but this does not mean they are correct. Plausible, certainly, but that’s not a good enough defense when someone comes along and argues that temperature in the last 100 years or so resembles a random walk. You can’t say their plausible theory is wrong because someone else’s plausible theory says they are wrong.

    No, we say their theory is implausible because it’s *unphysical*, and can then try to figure out where they’ve made their mistake.

    Like the myth of the mathematical proof that a bumble bee can’t fly.

  302. dhogaza Says:

    Put it another way, let’s say for argument’s sake that the 40s aerosol levels were estimated incorrectly. Would you agree this is possible?

    If so, then you must also agree that using those forcings as a covariate in the CADF test *to prove* that the temps are not following a random walk is circular.

    The net forcing is dominated in recent decades by CO2 and we have good figures for that going back to the 1950s, before the period of concern, and proxy info before that.

    Really, the CO2 data isn’t controversial.

    And outside a handful of economists who think they’re going to trump a whole lotta physicists, the fact that recent decades of warming closely fit a linear model is wholly uncontroversial.

  303. dhogaza Says:

    GHG forcings are found to be I(2) by everyone in the literature, i.e. the series contain two unit roots. Temperature has been found to be I(1), here above yet again, as well as throughout the literature.

    Simply regressing temperatures on GHG forcings (or simple CO2 forcings) leads to spurious results. Tamino either knows this, and he’s misinforming people on purpose, or he doesn’t, and he’s incompetent.

    Or he didn’t use GHG forcings, which happens to be true (my bad for saying so).

  304. VS Says:

    Bart,

    He is really spamming HEAVILY. Please do something about it.

    How many posts has he written today? And what exactly has he claimed in them? Evidence? References? Formal proofs?

    This isn’t dhogaza’s climate-Twitter page, is it?

    The whole discussion is getting polluted by it. My results are being puked over without any arguments.

    What is the point of this, really?

    [If you have a message for me alone, you can email me via the link on the right. Otherwise, comment in English on an English thread. Dhogaza has brought many good arguments to the table, and his total wordcount is still way below yours here. BV]

  305. S. Geiger Says:

    Moderator/Bart – any chance you could broker a deal to have VS and Tamino ‘debate’ this issue (sans all other posters, save perhaps Alex, who stays on point and without any personal attacks) in a new thread? You seem to get along with Tamino, and you allow VS to post on your site, so maybe this is possible. Maybe you could set up a few ground rules that they could both agree to beforehand(?).

    Thanks

  306. dhogaza Says:

    Yet … yet … the bumble bee flies …

    Really, it’s the ultimate in hubris to think that statistical tests by economists disprove physical theory.

    CO2 absorbs LW IR, we know that, there’s tons of observational data backing up AGW theory, etc etc.

    B&R are essentially demanding that a large percentage of known physics be thrown in the toilet. Not only is AGW not real, but there’s a very good likelihood that they’ve just proven that airplanes don’t fly …

  307. dhogaza Says:

    And, VS, if you’re going to insult me while victim-pleading about other people insulting you … knock off the secret decoder-ring messages to Bart and post in english:

    He is really spamming HEAVILY. Please do something about it.

    How many posts has he written today? And what exactly has he claimed in them? Evidence? References? Formal proofs?

    This isn’t dhogaza’s climate-Twitter page, is it?

    The whole discussion is getting polluted by it. My results are being puked over without any arguments.

    What is the point of this, really?

  308. dhogaza Says:

    Moderator/Bart – any chance you could broker a deal to have VS and Tamino

    Tamino doesn’t seem to think VS is worth the effort, and the paper that VS is essentially using to build his case hasn’t gotten traction in the real world, so I can’t say I blame him.

  309. MartinM Says:

    Here are the test results for GHG forcings

    What about the net forcings, as per http://data.giss.nasa.gov/modelforce/NetF.txt ? A quick look in R seems to rule out a unit root in that series.
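    For anyone replicating that quick check without R, a sketch in Python, with net_forcing standing for the series in NetF.txt (loading and column choice left to the reader):

        from statsmodels.tsa.stattools import adfuller

        stat, pval, *rest = adfuller(net_forcing, regression="ct", autolag="AIC")
        print(stat, pval)  # a small p-value rejects the unit root, as reported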

  310. dhogaza Says:

    B&R has some stuff in it that’s very interesting …

    If instead of a permanent increase in its level, the change in rfCO2 were to increase permanently by 1 w/m^2, global temperature would eventually increase by 0.54 C.

    If the level of solar irradiance were to rise permanently by 1 w/m^2, global temperature would increase by 1.47 C.

    This should give an idea as to HOW MUCH PHYSICS MUST BE THROWN OUT if B&R’s analysis is correct.

    Physics doesn’t differentiate between different sources of energy. Add 1w/m^2 of energy, and the climate will respond the same way regardless of the source. B&R have just invented “smart energy” …

    I don’t have the statistical background to refute them, but … I know that quoted statement is false.

    Again, the bumble bee flies …

  311. dhogaza Says:

    What about the net forcings, as per http://data.giss.nasa.gov/modelforce/NetF.txt ? A quick look in R seems to rule out a unit root in that series.

    It’s net forcing that matters, despite B&R’s belief that 1 w/m^2 is different depending on the source of that 1 w.

    VS: how can you place faith in a paper that trumpets such an obviously unphysical conclusion?

  312. dhogaza Says:

    VS, I must thank you because I hadn’t bothered to look at B&R before and I’ve read it through.

    There are other insanely non-physical conclusions they draw from their analysis, for instance that a doubling of CO2 won’t lead to a permanent increase in temperature.

    Apparently the CO2 forcing must “wear out” somehow: as time goes on, it ceases to absorb LW IR.

    And somehow, everything we know as to why the earth’s not a frozen ball roughly 33C colder than it is today is false.

    You’d be better off spending your time figuring out where B&R have gone wrong, rather than continue your argument based on their results.

    They really are arguing that, in essence, bumble bees don’t fly.

  313. Tim Curtin Says:

    dhogaza (aka bumblebee) contests the B&R statement that “If instead of a permanent increase in its level, the change in rfCO2 were to increase permanently by 1 w/m2, global temperature would eventually increase by 0.54 C. If the level of solar irradiance were to rise permanently by 1 w/m2, global temperature would increase by 1.47 C” by saying “Physics doesn’t differentiate between different sources of energy. Add 1w/m^2 of energy, and the climate will respond the same way regardless of the source”. Granted, the B&R statement is poorly worded – they can hardly mean that the change in rfCO2 would increase by 1 W/m2 p.a., or that IR would or could rise by 1 W/sq.m p.a. – but it still behoves us all to recognise that changes in IR as measured at the top of the atmosphere would have the same effect at ground/surface level as changes in [CO2] and other GHGs in the troposphere, also at ground/surface level. What are the surface-level forcings of both CO2 and IR? That is what one supposes the physics is about.

  314. dhogaza Says:

    Someone want to translate Tim’s post into plain english?

    I think B&R’s statement stands on its own, and see nothing that indicates that they’re using radiative forcing in a non-standard way (which is the only way their statement could be true).

  315. Ray Says:

    VS, Bart, Tim et al. Great dialog. Thanks.

  316. Tim Curtin Says:

    Dear Bumblebee (aka Dhogaza), apologies, let me try again. TSI actually measures the sun’s output, but what reaches, say, Alaska or Hawaii in the form of solar surface radiation is quite different, whatever Hansen & Sato or you believe. For example, at Barrow in July 2006 SSR was 57.22 W/sq. m, and at Hilo it was 95.48, while TSI was about 1365.5 W/sq.m (Hansen Sato Ruedy Lo 2009, Fig.4). This may have had something to do with the different mean daily temperatures in Barrow & Hilo in July 2006, 3.86oC and 24.53oC respectively, despite identical RF from [CO2].

    This suggests to me that cointegrating the GISStemp global means for July with TSI and the atmospheric CO2 level (which is the same at Barrow & Hilo) may be missing something. The same applies to all other months of record at Barrow & Hilo, and at some 1200 locations for which SSR data are available for 1990 to 2006 (and 1960-1990 for some 250 of those) from NOAA. What stops Hansen’s GISS from gridding local SSR to get global mean SSR by month in W/sq.m? In my view the variations in TSI reported by Hansen (2009) from 1366.5 W/sq.m in 1980 to 1365.2 W/sq.m in 2008 are less likely to impact on the GISS Mean Temps for those years than the variations in SSR at say Barrow from July 1980 (88 W/sq.m) to July 2006 (95.48) and everywhere else in the GISS grids from which it derives its global mean (I realise the dates are not identical but no doubt bumble can link me to monthly TSI).

    I can translate into Dutch if that would help (if I have a 2nd language that’s it from my schooling in Afrikaans).

  317. PeterW Says:

    [edit. Keep it polite. BV]

  318. Tim Curtin Says:

    Apologies to all: I left out the word “not” in my last but one, which should have read “…it still behoves us all to recognise that changes in IR as measured at the top of the atmosphere would NOT have the same effect at ground/surface level as changes in [CO2] and other GHGs in the troposphere, also at ground/surface level”. This was, I trust, made clear from my next posting.

  319. Bart Verheggen Says:

    KenM,

    The climate forcings are calculated based on physics; they are not derived from the temperature trend.

    VS,

    Your reply of March 16, 23:00 doesn’t convince me. If the goal is to see whether the temperature is forced (deterministic) as opposed to random (stochastic), then the best trend estimate is the estimate of the net radiative forcing. Using a fit to only the CO2 forcing instead will lead to inferior results in comparison. I see that later (23:02) you used the actual CO2 forcing instead of a two line fit, which is an improvement of course. But it’s still inferior to using the net radiative forcing, for which esp. the aerosol component is very important.

    Dhogaza and myself both explained why it’s appropriate to use the period after 1975: That’s when the GHG forcing is dominant and the expected trend in temperature would be roughly linear (and linearity was an assumption in the simplest form of the ADF tests, as I came to understand). Plus, that’s what got this discussion started, as it is the period I used for a regression in the head post (last figure).

    You may be annoyed with certain people; others may be annoyed with you. As long as everyone retains some civility we can continue the discussion, but whether behaviour crosses a line or not is in the eye of the beholder. And on my blog, I decide where the line is.

  320. VS metrics Says:

    Bart,

    I responded to the post-1975 concern well before Tamino posted his results. I also performed a set of separate statistical tests to test this ‘hypothesis’, properly. Note that you three are in fact disputing a whole body of published literature here, while acting as if I’m the one making the ‘extraordinary’ claim.

    I understand Tamino and his sidekick are your virtual friends, and this is your blog.

    Therefore do as you like and moderate as you wish. The readers will judge you for how you’re doing that, and considering the thoughtfulness of many of the contributors here, I have no doubt they will draw the right conclusions.

    ————–

    For all those joining the discussion and can’t find my test results and arguments, [edit] here’s a quick overview of the current statistical discussion, especially relating to the posts made by [edit] Tamino.

    ** The whole discussion started on March 5th, with my comment here.

    My claim there, that you cannot simply calculate deterministic trends via OLS and report confidence intervals, rests on the I(1) property of temperature series (i.e. the series contains a unit root), which has been widely established in the literature. Tamino seems not to have read any of these papers, as he’s not disputing e.g. Kaufmann’s work explicitly, only implicitly.

    ** A whole debate then ensued, and at one point, Tamino made this post, misstating my position, and denying the presence of unit roots (i.e. stating the series is not I(1), but I(0)) in the GISS data, here.

    ** The definition of a unit root is given here:

    ** I then perform my unit root analysis, and find that in only two of the many different set ups, the unit root is actually rejected. I report all test set ups, motivations and test results here.

    ** I explain the exact nature of the cherry picking performed by Tamino here.

    Note that Tamino posted the only two test results which agree with his hypothesis, while ignoring the vast majority of indicators pointing to the presence of a unit root:

    ** Tamino then responded with this new post, where he picks 34 observations and starts performing statistically invalid analyses, here.

    ** I comment on the impact of his sample size/test procedure, here.

    ** I then propose (and perform) the appropriate unit root test (Zivot-Andrews unit root test), and test for the presence of a unit root, allowing for the hypothesized structural break in the series. This was proposed by some here, and is the ‘basis’ of Tamino’s post.

    This breakpoint is determined endogenously by the test/data, and doesn’t require us to throw away 3/4 of our observations and/or cherry-pick our breakpoint.

    This is in stark contrast to what Tamino does, namely: handpick 30-something obs, ‘show’ that unit root analysis then ‘doesn’t work’ (good morning, Columbus), and then proceed to perform a spurious regression (i.e. an I(1) var on an I(2) var, ignoring a total of 3 unit roots there) to ‘prove his point’.

    Note that, when properly tested via the Zivot-Andrews test, the presence of a unit root is again not rejected:

    ** Finally, I argue why the regression performed by Tamino in his latest post is invalid, referring to published literature (i.e. the I(1) vs I(2) properties of Temp and GHG’s respectively).

    Those interested in a textbook treatment of where Tamino messed up are referred to Davidson and MacKinnon (2004), page 609, section “Regressors with a Unit Root”. Bart, you stated you are not ‘convinced’ by my reply. Davidson and MacKinnon offer a formal proof of it on page 610.

    My post related to this issue is given here

    Here I’m reproducing the test results for unit root presence in the CO2 forcings, showing that the series are indeed I(2), or have two unit roots. References for other GHG’s are also listed.

    Also, for the record, here’s a plot of Tamino’s ‘cherry picked’ 35 observation sample. Note that he set his alternative hypothesis to be a straight linear trend. Very convenient if you want to reject the null hypothesis of a unit root (not that the tests are valid with so few observations, and so many parameters to estimate).

    ————–

    IMPORTANT:

    **I’m not ‘disproving’ AGWH here.
    **I’m not claiming that temperatures are a random walk.
    **I’m not ‘denying’ the laws of physics.

    *****These are all strawmen, posted by Tamino’s (admittedly statistically illiterate) ‘fan base’ here, in an effort to dilute my argument, and make my contributions unreadable.

    All that I am doing is establishing the presence of a unit root in the instrumental record. The presence of a unit root renders regular OLS inference invalid. Put differently, you cannot simply calculate confidence intervals assuming a trend-stationary process, because the temperature series is shown to be non-stationary (i.e. contains a unit root).
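    That failure is easy to illustrate numerically. A sketch in Python: fit an OLS trend line to pure driftless random walks and count how often the usual t-test declares the “trend” significant:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        X = sm.add_constant(np.arange(130.0))  # intercept + deterministic time trend
        spurious = 0
        for _ in range(1000):
            walk = np.cumsum(rng.standard_normal(130))  # unit-root series, no trend
            spurious += sm.OLS(walk, X).fit().pvalues[1] < 0.05
        print(spurious / 1000)  # vastly more than the nominal 5%, so the usual
                                # OLS confidence intervals are invalid here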

    Alex gives the technical reason why OLS inference is invalid in the presence of a unit root. This concerns the finiteness/non-singularity of the limit, as n -> Inf, of the matrix Qn = (1/n)*X’X (‘consistency’, or ‘raakheid’ in Dutch, of the t- and F-based tests demands that Qn be finite/non-singular in the limit). In case of unit root(s) somewhere in X, Qn diverges in the limit. This violates one of the assumptions underlying OLS-based testing.

    Here’s Alex’s post:

    These findings, i.e. the unit root in Temperature series, have also been reported numerous times in the published literature that I have surveyed:

    ** Woodward and Grey (1995)
    – reject I(0), don’t test for I(1)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Hey et al (2002)
    – Confirm presence of unit root in temp series, I(1)
    ** Kaufmann et al. (2006)
    – Treat the temperature variable as I(1)

    Unpublished

    ** Beenstock and Reingewertz (2009)
    – confirm I(1)

    There are more.

    All more or less confirm my results (and are in contrast to Tamino’s). Note that all authors that check, also confirm that all GHG’s are I(2).

    ————–

    I will respond to comments regarding the statistical analysis here. We can discuss the physical implications once we have established the validity of my test results/set ups (which are disputed by Tamino). I’m not avoiding that discussion though, and am very interested to engage in it. However, I first need to make my statistical point clear.

    I have seen a couple of interesting posts, that I would also like to continue on once the statistical results are dealt with, like for example that of Rob Traficante yesterday.

    ————–

    VS

    PS. Just for the record: statistics is a formal discipline. If you have a ‘theory’ on how a certain estimator is going to behave in a certain situation (‘nonlinearity’ of trend, or whatever else you collectively think up rather than test), come with either a formal derivation, Monte Carlo simulation results, or a reference.

    I guess the same holds for physics. I’ve seen a lot of ‘it contradicts the physics’ handwaving, but no proofs. I don’t understand how this doesn’t annoy the physicists reading this thread [edit]. Simply stating your opinion on how the physics is not in line with the statistics in half a paragraph is not sufficient to prove said point.

  321. Tim Curtin Says:

    Bart: do you have data for the aerosol components of net forcing at the actual locations where GISS temps are measured before being gridded and amalgamated? If not, how do you explain these latest regression results of mine for Pt Barrow in Alaska? The model uses absolute values, not 1st differences, but with the Wiki definition for RF.

    Mean temperatures July 1960-2006, Pt Barrow
    Model Summary
    Model | Adj. R2 | SEE | Durbin-Watson
    1 | .962 | .86175 | 2.029

    Coefficients (constant = zero)
    Variable | Coefficient | t Stat | P-value
    RF [CO2] | -0.903 | -3.1539 | 0.0029
    Solar SR | 0.00051 | 2.414 | 0.0199
    H2O | 4.896 | 7.025 | 1.0476E-08

    All are significant at better than 95%, but the RF is negative! The “H2O” (precipitable water in cm. according to the NOAA data source) seems to be decisive. The DW indicates absence of spurious correlation. Not much strong sun at Barrow even in July with ave T of around 3oC.
    BTW, the semi-log growth rate of mean temperature in July at Barrow from 1960 to 2006 was 0.071339% p.a. Projecting to 2100 at that rate, we get from 3.86oC in 2006 to 4.127oC, a rise of 0.2666oC, or 0.027oC per decade. Is that enough to wipe out all polar bears?

    [Reply: global vs local. BV]

  322. jfr117 Says:

    I have to say that I find it frustrating when the argument is used that VS is in economics and not a statistician, physicist, etc. What we all should be seeking is the truth – no matter who or what discipline it comes from. I’ll be honest here: part of what I find very frustrating is the real or perceived notion of ‘you are not one of us’, therefore you are wrong. A la: the Beenstock paper has not received much play, therefore it’s irrelevant. Climate science is not a mature discipline and involves multiple disciplines. So in order to advance – and we should all acknowledge that we still need to advance our knowledge – these cliques and groupthink must go away. The fact that Tamino doesn’t think VS is worth the time is fine – but it does not mean that Tamino is correct because, well, he says so. VS and Alex, again, thank you for pushing us all.

    [Reply: I understand your take on this situation, but from the climate science point of view the whole scientific foundation gets attacked multiple times daily, as if decades of science are suddenly proved wrong by a guy on a blog. It’s very unlikely, and because many of these claims are unfounded, scientists (and their supporters) tend to have gotten a little defensive. A claim that ‘AGW has been proven false’ is bound to get people’s defenses up. It’s almost as unlikely as claiming that smoking cannot cause cancer or that gravity doesn’t exist. Ok, not quite, but you get the point. BV]

  323. Marco Says:

    Tim, I don’t know where you get your data, but in my analysis of Barrow temperature trends I get a T-increase of 6 degrees Celsius per century using the annual temperatures (P<0.0001 against a slope of zero).

    (July, notably, is 4 degrees per century)

  324. Craig Goodrich Says:

    dhogaza writes:

    “CO2 absorbs LW IR, we know that, there’s tons of observational data backing up AGW theory, etc etc.

    “B&R are essentially demanding that a large percentage of known physics be thrown in the toilet. Not only is AGW not real, but there’s a very good likelihood that they’ve just proven that airplanes don’t fly …”

    No, actually, there is no observational data anywhere backing up AGW theory. There is SOME observational data suggesting the global average temperature has increased on the order of 1 deg C over the last 150 years, but the exact figure is highly uncertain due to instrumental error and the adjustment games CRU, GISS and the rest have been playing.

    As to actual evidence for CO2-driven AGW, twenty years ago the only argument for it was, “our models [all based on astronomer Hansen’s studies of Venus] can’t reproduce recent warming without a strong CO2 greenhouse effect.” Now, reading the only relevant chapter of the IPCC’s AR4, WG1 Ch 9, “Attribution”, after two decades and a hundred billion dollars, we find the only argument for CO2-driven AGW remains, “our models [all still based on astronomer Hansen’s studies of Venus] can’t reproduce recent warming without a strong CO2 greenhouse effect.”

    As to known physics, there is no question as to the behavior of the CO2 molecule subjected to radiation in the lab. There is substantial question as to the behavior of a minute amount of a trace gas in an atmospheric / hydrological system the basic effect of which is to move immense amounts of heat and moisture from point A to point B. For example, a 10% reduction in humidity in the upper troposphere is enough to offset the entire greenhouse effect of all the atmospheric CO2 above 160 ppm, which is the minimum level needed to sustain life.

    The whole theory is as ludicrous as the assertion that anthropogenic CO2 will measurably modify the pH of the oceans — given that the oceans already contain nearly two orders of magnitude more dissolved CO2 than the entire atmosphere.

    [Reply: Leave unfounded accusations at the door before entering please. Adjustments to raw data are needed and documented and they do not influence the warming trend. There’s a lot more known about climate change than you’re aware of apparently. E.g.
    Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). This heat build-up is manifesting itself across the globe.
    And since you like to bash models: why don’t you develop a physics-based model that can explain both the current and past climate changes at least as well as (preferably better than) current GCMs, but without a substantial sensitivity to adding extra greenhouse gases. Then I’ll take you seriously. BV]

  325. Marco Says:

    @jfr117:
    the issue isn’t so much that VS (or B&R) are not physicists, it is that they do an analysis, make large claims (CO2 hardly causes warming and it is transient), but fail to test that against the known physics.

    If you do loads of math that says bumblebees can’t fly (*), are you going to doubt your math, or are you going to doubt the observations?

    * this can be done: just take the math for fixed-wing aircraft and a bumblebee should not even lift off. Unfortunately, it is inappropriate math for the situation.

  326. MP Says:

    I would like to draw attention to several recent papers by Lean et al listed below, in which global temperature time series were analysed using multi-regression techniques. These studies strongly suggest that the time course and fluctuations of the temperature can be largely explained by a combination of ENSO, volcanic aerosols, solar radiation and, last but not least, anthropogenic forcing, which is composed of eight different components, including greenhouse gases, land-use and snow albedo changes, and tropospheric aerosols etc.

    How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006

    How will Earth’s surface temperature change in future decades?

    Cycles and trends in solar irradiance and climate

    There is also a paper by Mike Lockwood from 2008 with a similar analysis, however he uses an artificial linear trend in anthropogenic forcing and concludes that the sun’s contribution is quite small (compared to the findings by Lean et al).

    Recent changes in solar outputs and the global mean surface temperature. III. Analysis of contributions to global mean air surface temperature rise

    Another even older paper from 2002 by Douglass and Clader performs a multi-regression analysis on the UAH T2LT satellite data.

    Climate sensitivity of the earth to solar irradiance

    In 2004 they submitted an update of the analysis.

    Climate sensitivity of the earth to solar irradiance:update

    One of the major issues with the Douglass papers is the use the UAH dataset, which did not show any warming at the time. After the discovery and subsequent correction of an error in 2005 the UAH t2lt showed a similar warming compared with other datasets.

    The use of multi-regression techniques has been criticized by several climate scientists, who argue that these techniques -if not applied carefully- may lead to non-robust spurious results. One of the reasons is that the different covariates show collinearity. Another issue is the way response lags for the different covariates are introduced in the regression model (using no lag, or a discrete shift or a RC-filter type of lag). And finally there is still a debate going on concerning TSI reconstructions, which results in the use of different TSI reconstructions in the different papers. By using different reconstructions different results may be (are) obtained.

    A more conservative analysis was conducted by David Thompson in the following two papers:

    A large discontinuity in the mid-twentieth century in observed global-mean surface temperature

    Identifying signatures of natural climate variability in time series of global-mean surface temperature: Methodology and Insights

    In these papers Thompson removed the effects of ENSO, large volcanic eruptions and dynamically induced variability (on the basis of the cold-ocean–warm-land pattern, COWL) from various temperature series. The regressions of ENSO and volcanic eruptions were performed very carefully, by only considering stretches of data which showed low collinearity when removing one or the other covariate. Interestingly, he found that part of the variability observed in the global temperature during the mid-twentieth century is likely to be caused by instrumental biases (another source of variability). In the second paper he finds, after removing Tdyn (COWL), ENSO and volcanic eruptions, a clear monotonic global warming pattern since ~1950 (some of the data can be found here). Maybe interesting to see what ADF tests tell us about these residual temperature series. Thompson did not attempt to further remove the effect of varying solar activity or anthropogenic forcing.

    It is clear that the variability in global temperature can be largely explained by a combination of different, reasonably well characterized and measured sources. Understanding the statistical properties of the global temperature time series would therefore also require a better understanding of the statistical properties of the underlying sources of variability. Furthermore, models that do not include the known sources of variability are likely not robust.

  327. MP Says:

    my comment disappeared into the spamfilter… :P

  328. KenM Says:

    The climate forcings are calculated based on physics; they are not derived from the temperature trend.

    Yes, but the *measurements* that go into those calculations are, in at least one case (aerosols), guesstimates. I have no doubts that the physics is sound; it’s the “measurements” of something for which we do not actually have direct data that bother me. And those aerosols have been used to explain a lot of discrepancy in the temperature record. A discrepancy that can also be explained by (possibly) a random walk.

    [Reply: Yes, there’s a lot of uncertainty in the (historic) aerosol forcing, but don’t confuse uncertainty with knowing nothing at all. And no, that doesn’t make a random walk remotely more likely, because when observing the whole earth system, a random walk would violate conservation of energy. BV]

  329. Bart Says:

    If your comments gets flagged (for having lots of links for example), please don’t resubmit the same or a similar comment, but rather write a very short comment to that effect or send me an email (via the link on the side). Makes my life easier to “de-spam” your comment. Thanks.

  330. Bart Says:

    VS,

    In your first comment here you wrote:

    “In other words, global temperature contains a stochastic rather than deterministic trend, and is statistically speaking, a random walk. Simply calculating OLS trends and claiming that there is a ‘clear increase’ is non-sense (non-science). According to what we observe therefore, temperatures might either increase or decrease in the following year (so no ‘trend’).”

    Whether the ‘naked values’ in the absence of any physical meaning or context could theoretically be consistent with a random walk is a purely academic mathematics question, on which I haven’t opined very strongly. Though it seems that you backpedaled from ‘random walk’ to ‘contains a unit root’, which Alex helpfully explained in a comment is not necessarily the same.

    The physics of it all tells me that it hasn’t in fact been random/purely stochastic, since that would be inconsistent with other observations and our physical understanding of the climate system (incl. conservation of energy).

    Basically, a random walk towards warmer air temps would cause either a negative radiative imbalance at TOA, or the energy would have to come from other segments of the earth’s system (e.g. ocean, cryosphere). Neither is the case. It’s actually the opposite: a positive radiation imbalance, and other reservoirs also gaining more energy. Which makes sense, in the face of a radiative forcing.

    The statistics details go over my head at times, but on physical grounds it seems clear that the increase in global avg temp over the past 130 years has not been random, but to a certain extent deterministic. (See also my newer post.) It’s a consequence of the basic energy balance that the earth as a whole has to obey. Would you agree with that? If not, you would in fact be making an extraordinary claim that needs extraordinary evidence.

    Finally, different people have different sensitivities. I’m sure you would be quite defensive if I came into an econometrics forum and claimed that the whole foundation of your discipline is wrong. That is pretty much what the B&R paper claims, and your entry on this blog (see the selected quote above) raised suspicions that that was your line of thinking as well. I also note that you have accused/badmouthed a fair number of people a fair number of times, and called others out on their anonymity. There’s the pot and the kettle, you know.

  331. Scott Mandia Says:

    Craig,

    I suggest you go to the Start Here link on Real Climate and educate yourself. I also suggest checking out Skeptical Science.

    I hope you realize that if what you say is true then 100s maybe 1000s of scientific experts are wrong and very ignorant. What do you think the probability of that is?

  332. dhogaza Says:

    VS would be a lot easier to deal with if he were at least consistent…

    IMPORTANT:

    **I’m not claiming that temperatures are a random walk.

    From VS’s first post:

    In other words, global temperature contains a stochastic rather than deterministic trend, and is statistically speaking, a random walk

    It’s easy to see how people might believe that VS is claiming that temps are a random walk …

  333. Scott Mandia Says:

    jfr117:

    I know that you are concerned that we do not understand the internal variability enough to be so confident that AGW is significant and action is required now.

    What is your biggest concern regarding action? Is it a financial concern?

  334. whbabcock Says:

    The issues being addressed in this thread relate to a single question, “Does available real world data support the hypothesis that increased concentrations of atmospheric greenhouse gases increase global temperature permanently?”

    VS has clearly pointed out that, to properly test this hypothesis, one must use statistical techniques that are consistent with the underlying characteristics of the data. As noted in the B&R paper, “… the radiative forcings of greenhouse gases (CO2, CH4 and N2O) are stationary in second differences (i.e. I(2)) while global temperature and solar irradiance are stationary in first differences (i.e. I(1)).” B&R refer to five papers that have the same findings – i.e., that radiative forcings and global temperature are non-stationary to the same order.

    Ignoring the properties of the time series data used to test a theory (hypothesis) can easily lead to the “pitfall of spurious regression.” That is, you can’t look at the simple correlation between greenhouse gas concentrations and temperature (or simple transformations of these data) and accept the hypothesis that one is caused by the other. In the case before us (i.e., given the characteristics of the time series data being used), cointegration has been demonstrated to be the appropriate statistical technique. This has nothing to do with the logic or correctness of the underlying theory being tested. Rather, it has to do with the statistical properties of the time series being used to test the theory – two separate issues.
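    To see why this pitfall is taken so seriously, a minimal simulation (Python with numpy and statsmodels; illustrative only) of how often plain OLS “finds” a significant slope between two independent random walks:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        n, trials, rejections = 130, 1000, 0  # n ~ length of the instrumental record

        for _ in range(trials):
            x = np.cumsum(rng.normal(size=n))  # random walk (I(1))
            y = np.cumsum(rng.normal(size=n))  # an independent random walk
            fit = sm.OLS(y, sm.add_constant(x)).fit()
            if fit.pvalues[1] < 0.05:
                rejections += 1

        # a nominal 5% test typically rejects in well over half of all trials --
        # the classic Granger-Newbold spurious regression result
        print(f"spurious 'significant' slopes: {100 * rejections / trials:.0f}%")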

    The B&R paper finds that, when cointegration is applied to available data, “… greenhouse gas forcings do not polynomially cointegrate with global temperature and solar irradiance.” Hence, available data do not support the physics based hypothesis.

    This type of statistical result simply demonstrates the relationship (or lack thereof) in the available data. It is what it is!! This result stands (unless there are problems in execution – e.g., the analysis was implemented incorrectly, or the data are faulty, etc.). No appeal to theory or to alternative analyses of different types of data that support the hypothesis changes this single analytical result. Again, it is what it is! It is what the data are telling us. In this case the data are telling us that bumble bees can fly (i.e., real world data – observations – are inconsistent with the formulated, mathematically based hypothesis).

    What does all this mean? It could mean that the theory is incorrect. Or, it could mean that the data are not “accurate” enough to exhibit the “theoretical relationship.” It certainly “raises a red flag” as VS has noted several times. And, it does mean that one can’t simply point to highly correlated time series data showing rising CO2 concentrations and rising temperatures and claim the data support the theory.

  335. jfr117 Says:

    @ bart and marco: again, vs is not attacking the physics. he is objectively statistically testing the global temperature anomaly data. i am sorry that it offends you when people ‘attack’ the theory you like, but it should be ok. if your theory is correct – it will withstand all objective attacks. if the theory is falsified, it can be reworked to be made stronger. science is never settled.

    bart, if you want your blog to be a place for actual scientific discourse, then consider yourself lucky. if you just want cheerleaders to expound on how much we know, then please state that. i think vs has raised the game and is a valuable asset to the climate discussion.

  336. Scott Mandia Says:

    BTW, I just read a post by Marco on OM that the B&R paper is NOT published. According to B’s vitae, it is a working paper.

    I am curious to see if and when it gets published what the reaction will be.

    That does not change what VS is speaking of; it just means that the B&R paper should not be held to a high standard yet by anybody.

  337. jfr117 Says:

    @ scott – what is my biggest concern? i am concerned that ‘action’ is based on premature understanding of the climate system. and as a result, confidence in science (e.g., meteorology) will be seriously undermined in the future. i am concerned that if co2 is not the primary cause of the recent 40 yr warm period, then we will have wasted vast resources and public trust in building this case. i am concerned that a 40 yr period is insufficient to fully characterize a) the magnitude of the anomaly wrt historical temps and b) our full understanding of climate dynamics.

    i advocate more study (open to both ‘groups’) to build a true consensus of climate science. i think then we can build a more robust policy that will withstand a 12 year plateau in temps or an increase in hurricanes, so that politicians will not have to yell.

  338. S. Geiger Says:

    “is the case. It’s actually opposite: a positive radiation imbalance and other reservoirs also gaining more energy. Which makes sense, in the face of a radiative forcing.”

    – along these lines, can we explain the current ~10 yr blip in temps being relatively stable? What forcings have changed to account for this local plateau? Is there some amount of uncertainty about the climate forcings, or is this viewed more as a shortcoming in the available data? Or am I completely on the wrong track, and the earth has been relatively constant in gaining more energy but it’s not manifest in the atmospheric temp readings?… although my (limited) understanding is that ocean heat content has also been on somewhat of a plateau.

  339. Scott A. Mandia Says:

    Thanks, jfr117. It is always good to know the motivation behind one’s comments.

    You know my position is that I am convinced that waiting will have more dire consequences (including financial) than action now.

    Public confidence in science will be far worse if predictions come true and nothing was done about it.

  340. Scott A. Mandia Says:

    What plateau?

    Each of the last three decades has been warmer than the one before and each has set a record. The 2000s were the warmest despite the 2nd half of that decade experiencing a record low solar intensity.

    Ocean heat content is also increasing over time.

  341. MP Says:

    @whbabcock,

    If I use several covariates with mixed roots (e.g. I(0), I(1) and I(2)) to create a new time series, what would be the root of this new time series?

    Does this not depend on the relative amplitude of the first and second order differences in the underlying covariates?

    And why is it, from a statistical point of view, “correct” to ignore known sources of variability in global T (ENSO, volcanic eruptions, instrumental biases etc.), which can be expected to affect the first and second order differences? For example, ENSO (a stochastic cyclic phenomenon) comprises a significant portion of the variance in the temperature data.

    Why not remove the known sources of natural variability first and then check the order of the time series?
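    A quick numerical check of the first question (a minimal sketch, all series synthetic): the highest order of integration among the components tends to dominate the sum, although in finite samples the relative amplitudes do matter – which is exactly the second question.

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(1)
        n = 500
        i1 = np.cumsum(rng.normal(size=n))  # I(1): random walk
        i0 = rng.normal(size=n)             # I(0): stationary noise

        for name, series in [("I(1) alone", i1), ("I(1) + I(0)", i1 + i0)]:
            pval = adfuller(series)[1]  # H0: unit root
            print(f"{name}: ADF p-value = {pval:.3f}")  # typically fails to reject for both
        # with a much larger I(0) amplitude the finite-sample verdict can flip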

    also check the papers I linked above in MP March 17, 2010 at 14:40

  342. jfr117 Says:

    …the plateau of the past 12 years. it’s used as ammunition, as proof that the science is not settled. i think that if the scientific process had been open over the past decade…and not hijacked by politics…then every storm, heatwave, coldwave, tornado or hurricane would be used as proof or disproof. and we wouldn’t have to undergo the rhetoric from both sides.

  343. Marco Says:

    @jfr117:
    An “objective” analysis of the climate data is almost impossible, as you actually need to understand the data to apply the appropriate equations.

    Moreover, we need to take into account what types of errors and uncertainty there may be present in the math. MP has made some very relevant comments of ‘issues’ with the data.

    Take all this together, add the fact that the analysis yields some rather surprising results (same forcing = different heating, and one supposedly permanent while the other transient), and it is extremely arrogant to come in and claim your analysis shows the established physics wrong (and that *is* what B&R do). I’d do some major testing to check whether my results are robust, talk to climate scientists to see if I did everything right, and at the very least point out that my statistical methods are known to have some uncertainty. As Tamino has pointed out, you can use different tests and some simply give a different answer. Loads of arguments can ensue about the appropriateness of the various tests, which already indicates opinion gets into the matter.

  344. dhogaza Says:

    BTW, I just read a post by Marco on OM that the B&R paper is NOT published. According to B’s vitae, it is a working paper.

    I am curious to see if and when it gets published what the reaction will be.

    That does not change what VS is speaking of; it just means that the B&R paper should not be held to a high standard yet by anybody.

    Any paper claiming that a 1 w/m^2 forcing from different sources results in a different climate response won’t make it into any reasonable journal in the physical sciences.

    If it gets in anywhere, I imagine it will be some economics journal.

  345. MP Says:

    “Plateaus” also occurred in the 70’s, 80’s and the 90’s.

    This can be easily visualized when plotting the 10 year trends of the various datasets:
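    In lieu of the plot, a minimal sketch of the calculation (synthetic data with a steady underlying trend plus noise; all numbers illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        years = np.arange(1970, 2010)
        temps = 0.017 * (years - years[0]) + rng.normal(scale=0.1, size=years.size)

        for start in range(int(years[0]), int(years[-1]) - 8):
            m = (years >= start) & (years < start + 10)
            slope = np.polyfit(years[m], temps[m], 1)[0]  # degrees per year
            flag = "  <- 'plateau'" if abs(slope) < 0.005 else ""
            print(f"{start}-{start + 9}: {slope * 10:+.2f} C/decade{flag}")
        # even with a steady 0.17 C/decade underlying trend, several 10-year
        # windows routinely come out near-flat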

  346. Craig Goodrich Says:

    BV:

    “Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). This heat build-up is manifesting itself across the globe.”

    No, they find increasing outgoing longwave radiation, which is consistent with slight temperature increases, which we already knew. And no heat build-up is manifesting itself anywhere, not in ocean heat content nor in average tropospheric temperature. (Murphy 2009, replete with the casual sprinkling of magic aerosols, is pure armwaving. Note that his coverage stops right at the point where we start to actually have good data.) Heat can not hide, not from the ARGO buoys and satellite coverage. It ain’t there.

    In fact, not only is there NO evidence for CO2-driven AGW, but every one of the theory’s predictions has proven not merely wrong, but spectacularly so. In any other branch of science, the theory would have been discarded more than a decade ago, but it’s been kept alive, zombielike, by billions of politically-motivated dollars and Climategate-style manipulation.

    “And since you like to bash models: why don’t you develop a physics based model that can explain both current and past climate changes at least as well as (preferably better than) current GCMs, but without a substantial sensitivity to added greenhouse gases? Then I’ll take you seriously.”

    1) ANYTHING can reproduce any curve if you throw enough fudge factors into it. When the models all a) use exactly the same values for exactly the same parameters, and b) are fully available in source form on the Internet, I may take them seriously. To say a model that has a dozen arbitrary values tossed into it for the sake of curve fitting is “physics based” is, to put it mildly, using the term loosely.

    2) A very simple model incorporating the PDO plus a Svensmark-effect warming of around .5 deg C/century due to increasing solar activity throughout the 20th century reproduces the (supposed) surface values quite nicely without further ad-hoc aerosol jiggery-pokery and quite without any CO2 nonsense.
    ===================

    Scott: “I suggest you go to the Start Here link on Real Climate and educate yourself. I also suggest checking out Skeptical Science.”

    I have read ALL of the basic RC posts. The fundamental purpose of RC is to put out responses — however vacuous — to any accidental leaks of real science into the climate propaganda stream. I have yet to see any post there that does not consist of some combination of strawman, changing the subject, or occasionally simple obfuscation.

    “I hope you realize that if what you say is true then 100s maybe 1000s of scientific experts are wrong and very ignorant. What do you think the probability of that is?”

    Actually the relevant group — again, WG 1 Ch 9 — is much less than 50; probably closer to 20. All the rest are irrelevant; they may be right or wrong, but they are definitely hungry.

  347. dhogaza Says:

    Steve Geiger:

    along these lines can we explain the current ~ 10 yr blip in temps being relatively stable? What forcings have changed to account for this local plateau?

    Well, we have been in an extended solar minimum, that’s no secret since many in the denialsphere have been jumping up-and-down in excitement awaiting the 2nd coming of the little ice age. Hasn’t happened, of course, 2000-2009 was the warmest decade on record despite the solar minimum. But certainly lower TSI has been a slightly negative forcing.

    Beyond that you have natural variability, La Niña, the lack of a strong El Niño in the 2000-2009 time frame, etc.

  348. dhogaza Says:

    In this case the data are telling us that bumble bees can fly (i.e., real world data – observations — are inconsistent with the formulated, mathematically based hypothesis).

    What does all this mean? It could mean that the theory is incorrect. Or, it could mean that the data are not “accurate” enough to exhibit the “theoretical relationship.” It certainly “raises a red flag” as VS has noted several times.

    Well, when a paper like B&R executes a statistical analysis that supposedly throws out a huge amount of physics that’s unrelated to climate science (though the fall of climate science is one result they trumpet), you’re right.

    Red flags are raised.

    Either just about everything we know about energy is wrong – not just in climate, but everywhere, inside the cylinders of your SUV, energy that heats your house, energy released by earthquakes, you name it – or B&R screwed up somewhere.

    I’ll sell you a certified and calibrated Occam’s Razor (TM) if you need help figuring out which is likely to be true. They’re right, most of physics is wrong or … they screwed up.

  349. dhogaza Says:

    Or, it could mean that the data are not “accurate” enough to exhibit the “theoretical relationship.”

    To make clear, in case you’ve not read B&R, they reject the notion that there’s not enough or accurate enough data. They state absolutely that “AGW is disproved”.

    And that they’ve disproved radiative physics, as well …

  350. Geckko Says:

    VS is correct.

  351. A C Osborn Says:

    I have some questions for Bart, dhogaza and Scott.
    I am not a Scientist or Mathematician, but I am interested in learning the truth.
    So based on this CO2 statement, which has been accepted on here as a Fact, “The concentrations today are the highest in the past 650,000 years and likely to be higher than at any time in the past 15 million years.”

    Question 1, why are we using the Mauna Loa atmospheric level of CO2 from 1958 onwards; why not use whatever we were using to show the values before 1958?

    Question 2, If we were using Ice Core data prior to 1958 why?

    Question 3, why aren’t we using all the other Valid Scientific Measurements of CO2 prior to 1958?

    Question 4, why do we ignore Valid Scientific Measurements of CO2 for the 1940s which show around 375/380 ppm?

  352. Shub Niggurath Says:

    dhogaza
    “In this case, 1995-present gives p = 0.076, at least according to one person who computed it. Or a 92.4% confidence level, as it is often put.”

    The null hypothesis in this case is: “the rise in temperatures (GTA) is higher in 1995-present compared to ‘previous periods’ or the ’60-’90s base period”. If the p value for this turns out to be 0.076, the null hypothesis is rejected. What more is there to be said?

    The null hypothesis can of course be accepted at higher levels of uncertainty, but that is allowed only if levels of confidence are agreed upon – *prior* to experimentation. The ‘experiment’ in this case being the calculation of the temperature anomaly. Moreover, the gridded temperature anomaly calculations are derived from automated weather stations in thousands of places, making the sample size in question enormous. How then are we satisfied with a significance level of p<0.05? Shouldn’t the levels be set much lower? I know the data elements are anomaly calculations per year, and there are only 15 years on one side in the example we are talking about (the Jones 1995-present warming one), but aren’t the anomaly values each representative of large samples?

    Therefore I find the ‘confidence intervals’ understanding in the IPCC – which seems to run along the lines of Jones’ statement – and dhogaza’s explanation of it both very puzzling.

    I’ve raised this issue and also seen it discussed several times – and I’ve always seen someone throwing a link to the uncertainty estimates the IPCC uses. How is that good enough? That link only illustrates the uncertainty reporting scale the IPCC decided to use to convey results. What is the statistical foundation of such thinking in the first place? I’ve gone through this thread and I see that VS has also raised this question. I now figure – if a professional statistician cannot understand it, there must be some merit in asking this question.

    Thanks

  353. Rattus Norvegicus Says:

    Bees can’t fly debunked.

    More apropos to the situation here is the application of the wrong theory to the problem at hand.

  354. Ron Broberg Says:

    VS: By allowing for a structural break in both intercept and trend separately, or both together, the null hypothesis of unit root presence is not rejected, in any instance.

    Conclusion: unit root GISS_all

    Am I misinterpreting something or have you done this twice now?

    “unit root not rejected” IS NOT the same as “presence of unit root confirmed.”

    You have done this here and here

    VS, I appreciate your attempt at reporting complete results – but I have not been able to follow some of your jumps, where the level data analyze as “unit root not rejected” and you then summarize with the conclusion “unit root confirmed.”

  355. luc Says:

    Apparently when the math does not work we imply disrespect for the physics. However, I invite you to see this simple explanation by Freeman Dyson, who makes a clear case for spending money on data input instead of theoretical models.

  356. dhogaza Says:

    More apropos to the situation here is the application of the wrong theory to the problem at hand.

    That was my point in bringing up the bumble bee can’t fly myth …

    That the original calculation was for a fixed-wing aircraft, and that this calculation was mis-applied by an entomologist who apparently didn’t understand the model he claimed was meant to prove a bumble bee can’t fly.

    Somehow, somewhere, B&R are misapplying their tool(s).

  357. dhogaza Says:

    Question 1, why are we using the Mauna Loa atmospheric level of CO2 from 1958 onwards; why not use whatever we were using to show the values before 1958?

    Precision. Frequent sampling. Free from urban sources of CO2.

    Question 2, If we were using Ice Core data prior to 1958 why?

    Presumably because that’s what was available, if true. Do you have a source stating that this is the only source that’s used?

    Question 3, why aren’t we using all the other Valid Scientific Measurements of CO2 prior to 1958?

    Which are they? Again, please source your statements.

    Question 4, why do we ignore Valid Scientific Measurements of CO2 for the 1940s which show around 375/380 ppm?

    Urban areas have considerably elevated CO2 because it takes a while for CO2 from cars, trucks, industrial plants, etc. to disperse in the atmosphere, and as it is doing so, more CO2 is being emitted.

    Most of the supposed sources for claims that CO2 was higher in the past have been contaminated in this way, or by being measured indoors, etc.

    Which actually makes these *invalid* measurements if you’re interested in the CO2 content of the atmosphere at large. Perfectly valid if you want to know the CO2 content of the air next to a freeway, etc.

    So, which valid measurements do you think are being ignored?

    And who do you think is in charge of the conspiracy to ignore that data?

  358. dhogaza Says:

    The null hypothesis in this case is: “the rise in temperatures (GTA) is higher in 1995-present compared to ‘previous periods’ or the ’60-’90s base period”. If the p value for this turns out to be 0.076, the null hypothesis is rejected. What more is there to be said?

    If you want to say that the difference between (say) p = 0.076 and (say) p = 1.0 is less significant than the difference between (say) p = 0.076 and p = 0.05, go for it.

    I’ve posted a link on the history of the p=0.05 choice for significance. It’s a *rule of thumb*. It’s not something that falls out of theoretical statistics. There’s no “theorem of significance” that proves that this is the “right” choice.

    Are you one of those who claim that the fact that 1995-present isn’t statistically significant to p <= 0.05 while 1994-present is, allows one to say "CRU head says 'there's no warming'"?

    You know that's not what the test says, right? It says it's actually much, much more likely than not that it's been warming 1995-present, just not *quite* strong enough to meet the iconic yet ad hoc 95% level.
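    To make this concrete, a minimal sketch of the calculation (synthetic anomalies; this will not reproduce the real-data p = 0.076 for 1995-present quoted earlier, only the mechanics):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        years = np.arange(1990, 2010)
        anoms = 0.017 * (years - years[0]) + rng.normal(scale=0.15, size=years.size)

        for start in (1994, 1995):
            m = years >= start
            fit = sm.OLS(anoms[m], sm.add_constant(years[m])).fit()
            print(f"{start}-2009: {fit.params[1] * 10:+.2f} C/decade, "
                  f"p = {fit.pvalues[1]:.3f}")
        # whether p crosses the conventional 0.05 line can hinge on a single
        # extra year of data -- the cutoff is a rule of thumb, not a theorem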

  359. dhogaza Says:

    The null hypothesis in this case is: “the rise in temperatures (GTA) is higher in 1995-present compared to ‘previous periods’ or the ’60-’90s base period”.

    Lord. No, that’s not the null hypothesis. Not even close. Think about it.

  360. Bart Says:

    Whbabcock,

    No, that’s not what this thread is about. It’s about a few things: Whether the temperature data contain a unit root, and what the consequences would be for how to analyze the time series.
    You would be correct with your inference if AGW were only based on (perhaps spurious?) correlation, but it’s not. It’s based on physics and a myriad of observations.

    Jfr117,

    I have no problem with people rooting for a unit root. But far-reaching claims that the 130 year trend is purely random/stochastic and not deterministic are at serious odds with established physics. I am arguing against the perceived implications of the unit root/randomness hypothesis for our understanding of climate science. I think the implications are very limited, though I’m open to learning about more accurate ways to analyze time series.
    In what way is the existing *scientific* consensus not true enough for you?

    S. Geiger, Craig Goodrich

    As I wrote earlier:
    “I would expect that VS would agree that if the 130 year record is merely a random walk, that the latest 12 years are by far not enough to draw any conclusions from. Perhaps VS will join us in fighting strongly against the erroneous “1998″ claim.”
    He responded in the affirmative. VS, wanna join me?!

    Craig,

    I said *physics based* model. Not curve fitting.

  361. Shub Niggurath Says:

    dhogoza

    I am well aware of what a ‘p value’ signifies. I am only asking this:

    “If the p value is higher than 0.05, the probability of temperature trends being what they are purely due to chance is as high there being no trend at all.”

    Is someone justified, purely mathematically/statistically, in making this statement? I do not wish to argue about what anyone in the media made out of what Jones said in his interview.

    Of course, the caveats with all this are:
    1) This is a climatologically short period of time.
    2) The trend is still a rising one.
    3) Jones dropped the ball.

    You haven’t addressed my point that for larger samples of data, researchers usually seek higher significance levels before accepting causality/correlation.

    There are very good recent examples for this type of thinking – which similarly puzzles me. Look at the reaction to the Thailand HIV vaccine trial for example – some groups wouldn’t accept the study conclusions because p=0.039 wasn’t good enough (!).

    http://news.sciencemag.org/scienceinsider/2009/09/massive-aids-va.html

  362. Bart Says:

    Lucia makes the following comment about the random walk issue:

    There are good physical reasons to expect that global surface temperature is not a random walk. The first law of thermodynamics must apply. A warmer planet will re-radiate more heat to the universe. We might not know the precise constitutive relation, but if the planet warms, it’s unlikely to re-radiate less. (If it did, that would really be amazing!)
    So, here really is a physical law that will tend to cause the earth’s surface to hover around some typical value. If the temperature fell to that of Pluto, it would certainly warm up. If it rose to that of Mercury, it would certainly cool down.
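    Lucia’s argument can be put in code: a zero-dimensional energy balance, C dT/dt = F – λT + noise, gives a mean-reverting (AR(1)-like) temperature series rather than a random walk; the restoring λT term is exactly the “warmer planet re-radiates more” physics. A minimal sketch (all parameter values illustrative):

        import numpy as np

        rng = np.random.default_rng(3)
        n = 500
        C, lam, F = 8.0, 1.2, 0.0  # heat capacity, feedback (W/m^2/K), forcing

        T = np.zeros(n)
        for t in range(1, n):
            # discretized C*dT/dt = F - lam*T + noise, time step of one "year"
            T[t] = T[t - 1] + (F - lam * T[t - 1] + rng.normal(scale=0.5)) / C

        # effective AR(1) coefficient is 1 - lam/C = 0.85 < 1: excursions decay
        print(f"std of T: {T.std():.2f} K (bounded; with lam = 0 this is a random walk)")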

  363. Craig Goodrich Says:

    Bart,
    “Craig,
    I said *physics based* model. Not curve fitting.”

    Yup. And I said, to call the models currently in use “physics based”, when they are loaded with arbitrary parameters — which may be given any number of names, they still have no measurable physical basis in the data — is an amazing stretch. The “calibration” of these models is not an exercise in physics, it’s an exercise in curve fitting; for all the correspondence to actual measured real-world data you could call their parameters “pinkbunny1, pinkbunny2, pinkbunny3, …”

    If you actually believe that we know enough about the forces and energies involved in the chaotic, hugely complex climate system to actually construct a real physics-based model, I’m afraid you simply don’t understand the science (as the RC fanboys love to say).

    [Reply: Read up on what models actually do (e.g.) before making sweeping statements. BV]

  364. MikeN Says:

    >Add to those two the negative coefficient their model assigns to the first difference of methane forcing, which is patent nonsense… But that should have been a huge red flag;

    The same thing happens in Steig’s Antarctic warming paper. When unrolled, the calculation of temperature applies a negative weight to some stations’ temperature records.

    VS, do you have a personal stats blog of some sort?
    I’ve suspected Tamino engages in some cherry-picking, but he never answered enough questions for me to followup.
    He did try to quote Ian Joliffe as an authority before, and ended up having Ian tell him he is wrong.

  365. dhogaza Says:

    “If the p value is higher than 0.05, the probability of temperature trends being what they are purely due to chance is as high there being no trend at all.”

    Short answer, no. But scientists raise the bar far higher than that. Failure to reach the p=0.05 level of significance means just that and no more. p=0.076 means that, and it’s not the same as p=0.5, which seems to be what your statement says.

    You haven’t addressed my point that for larger samples of data, researchers usually seek higher significance levels before accepting causality/correlation.

    I think it depends an awful lot on the field …

    There are very good recent examples for this type of thinking – which similarly puzzles me. Look at the reaction to the Thailand HIV vaccine trial for example – some groups wouldn’t accept the study conclusions because p=0.039 wasn’t good enough

    Well, offhand I can think of reasons for wanting a very high level of significance (p=0.01 or whatever). I am totally unaware of this specific example, but in medicine you might run into cases where a drug has serious side effects, for instance. Perhaps in this case you want to have an extremely high level of confidence before moving from trials to general use. In some cases the cost might be very high, and you want an extremely high level of confidence that positive outcomes are much higher than cheaper alternatives.

    I’m sure you can think of a lot of other possibilities.

    But your example does help point out that the specific p=0.05 value for “statistical significance” is a rule of thumb informed by practice. It’s not a fundamental property of statistics that falls out of any theorem proof.

  366. Shub Niggurath Says:

    When I said:

    ““If the p value is higher than 0.05, the probability of temperature trends being what they are purely due to chance is as high there being no trend at all.”

    I was trying to say:
    “If the p value is higher than 0.05, the probability of temperature trends being what they are (a rising one) is for statistical purposes the same as there being no trend at all”

    Thanks

  367. dhogaza Says:

    He did try to quote Ian Joliffe as an authority before, and ended up having Ian tell him he is wrong

    Yes, that Tamino misunderstood a statement of Joliffe’s regarding Mann’s form of PCA, not that Tamino doesn’t understand statistics. Joliffe also said there was confusing jargon around PCA and it was clear that this was part of the problem. “decentered” vs. “uncentered” vs. “what did Mann actually do, actually? (joliffe said he wasn’t sure from reading the paper)”.

    Slightly different in importance from what your quote implies.

  368. dhogaza Says:

    Yup. And I said, to call the models currently in use “physics based”, when they are loaded with arbitrary parameters — which may be given any number of names, they still have no measurable physical basis in the data

    Ahem. This is not true for GISS Model E, at least. The parameters are physics-based, which ultimately rest on observations.

  369. Shub Niggurath Says:

    Oops, didn’t catch your reply! Thanks.

    Those who view p values on a continuum are usually the ones who want to prove their theory. Those who view them dichotomously are those who don’t agree with the said theory. Isn’t that true? :)

    It is also unlikely that a scientifically sound theory/body of knowledge will go for a long time without yielding statistically significant correlations or effects – of a fundamental nature – at some point. My opinion on this, from my brief study of climate science, is that the AGW hypothesis is still hanging in the air, gasping for its p value.

    My question then, as is on every reasonable skeptic’s lips, is: why shouldn’t we seek a greater level of certainty than what our data and research can afford us at the moment?

    Just as in medicine – the area that I am familiar with and work in – the stakes are high. A rearrangement of the world’s economy is sought. We should wait.

    Regards

    [Reply: You seem to strongly downplay the potential (or even likely) risks of business as usual. If a doctor tells you that you’d better quit smoking, otherwise your risks of severe lung illness will strongly increase, would you wait until you’re in the ICU before you stop smoking? BV]

  370. Kweenie Says:

    “joliffe said he wasn’t sure from reading the paper)”

    Joliffe also said (http://tamino.wordpress.com/2008/12/10/open-thread-9/#comment-25158):
    My view is that it was used inappropriately in MBH which was one error, and that the lack of transparency was certainly another error. Were they ‘huge’ errors? Everyone’s definition of ‘huge’ will differ (now there’s a huge topic for discussion!). In isolation I certainly wouldn’t deem them so. They became larger (in importance) than would otherwise have happened because the paper continued to be cited as having valid conclusions well after it became clear that some of its methodology was flawed. If it had quietly disappeared, but the errors noted and never repeated, leaving other less controversial papers to be cited when discussing past climate, the errors would not have attained such prominence. But I guess that would have been much less fun for the protagonists on both sides …

  371. Alex Heyworth Says:

    Re dhogaza Says:
    March 17, 2010 at 18:38

    Any paper claiming that a 1 w/m^2 forcing from different sources results in a different climate response won’t make it into any reasonable journal in the physical sciences.

    Two net 1 w/m^2 forcings from different sources would have the exact same effect if they were both evenly applied to an object which was all at the same temperature.

    What net effect 1 w/m^2 increases in CO2 forcing and solar forcing will have on the earth’s climate depends on how they are distributed and on what the temperature is of the parts the earth that get greater or lesser forcing.

  372. adriaan Says:

    This discussion is going nowhere… and it was such a nice thread when it started. I learned a lot. It was quite refreshing to learn from VS that you can apply a different form of analysis to the CO2/global temperature set. His exposé made a lot of sense to me. But I am not a climate scientist. I am a biologist. And we have been facing similar problems, but with an immensely lower impact on humanity in the short term. But I think we did a better job.

    First of all, it was soon recognized that metadata would become crucial to the exploitation of DNA sequence information. A common data format was rapidly established, data sharing mechanisms were set up, and version control was implemented. Next came the microarray data, and their concomitant statistical analysis. It was not long before a standard protocol was established for storing the raw data and metadata. No raw data, no publication. Everyone can now download numerous datasets from experiments performed worldwide, and reuse them for their particular purpose. Many analytical and statistical tools can be downloaded and modified according to one’s desire, since it is all open source.

    I would strongly suggest implementing similar approaches for climate related data and analytical tools. And stop harassing each other. The fact that your model, based partially on physics and partially on a large number of assumptions, does not agree with the statistical analysis of observations should teach you a lot: the model is not right. That’s why it is a model.

    Sorry for the OT excursion into genetic research.

    [Reply: This has nothing to do with models (which actually agree quite well with the observations). If the idea of a pure random walk goes against conservation of energy, then it’s not a random walk, even *if* the values, divorced from any physical meaning, are inconclusive as to their randomness. It is a physical system we’re talking about. BV]

  373. Tim Curtin Says:

    Alex H: exactly, as my data above on Point Barrow show very clearly, and especially as solar forcing at the surface is different everywhere, whether in the Arctic or at the equator, and quite different from TSI, which is invariant everywhere at any given date. Another oddity is that the earth is round, not flat, as implied by all who use TSI, like most here – so hours of daylight are in most places less than 24, and then there are angles, such as at Barrow, where horizontal direct and diffuse solar radiation forcings are quite different from those at places like Hilo in Hawaii.
    Bart: go figure – the global is made up of the local, ignored by most in this blog except for temperature. My micro Barrow data refute all the macro claims you have made.

  374. dhogaza Says:

    It is also unlikely that a scientifically sound theory/body of knowledge will go for a long time without yielding statistically significant correlations or effects – of a fundamental nature – at some point. My opinion on this, from my brief study of climate science, is that the AGW hypothesis is still hanging in the air, gasping for its p value

    Based on 1995-present not quite reaching statistical significance at p <= 0.05?

    What's special about that timeframe?

    What's wrong with 1994-present? It meets the de facto test of statistical significance.

    I think you're playing games here …

  375. DML Says:

    I note from your plots that there is an increase in temperature beginning in ~1910 of about the same rate and duration as the one that begins in ~1945 (I believe that Jones of UEA recently remarked on this). Furthermore, I understand that the significant increase in atmospheric CO2 did not begin until ~1945. Application of Mill’s method of difference results in the inference that CO2 is not a significant causal factor in either rise. Thoughts?

    [Reply: CO2 is not the only driver of climate (see e.g. this graph), but esp over the past 3 decades it has become the most important one. Also in climate changes in the earth’s past CO2 played a major role, see e.g. this excellent lecture. BV]

  376. dhogaza Says:

    I would strongly suggest implementing similar approaches for climate related data and analytical tools.

    Geez … let me guess …

    You’re probably unaware that the entire set of raw and adjusted data used to compute GISTEMP is online and in summary form available for free?

    (If you want DVDs of the scans of the individual pieces of paper forms on which weather data have been traditionally recorded, you’ll have to pay production costs, but they are available.)

    Care to guess when the GHCN data was first made available? Was it due to external pressure? Hardly. It first came online in 1992.

    18 years ago.

    This is just one example. There’s scads of available data.

    Like every other field of (non-proprietary) science, climate scientists have responded to the advent of cheap storage, cheap computers, and cheap internet connectivity by making data available. They publish in journals like Nature and Science just as people in your field do, and meet the same disclosure requirement for non-proprietary data.

    Why do you assume that it’s any different in climate science? I suspect it’s because you’ve gotten your information from denialist sites.

    The fact that your model, based partially on physics and partially on a large number of assumptions, does not agree with the statistical analysis of observations should teach you a lot: the model is not right. That’s why it is a model.

    Where did models enter into this? And, of course, there are several possibilities here …

    1. fundamental physics (not “a model”) has been overturned (if you believe B&R, and while VS claims not to be saying that, he’s treating B&R as authoritative)

    2. someone’s goofed up in their choice of tests

    3. need more data

    However, we don’t need more data to know that 1 w/m^2 radiative forcing due to increased CO2 is no different than 1 w/m^2 radiative forcing from the sun. Perhaps you don’t know how fundamentally wrong it is to say that. If you do, you must know that a line of statistical arguments that leads to that conclusion is fatally flawed.

    Anyway, best of luck overturning tons of physics. You’ll need luck, and a thick skin …

  377. Beaker Says:

    Adriaan, I find your reasoning regarding the different models quite strange. You seem to conveniently forget that statistical models have quite a lot of assumptions as well, and that those assumptions may not agree with the real world either. I can find all kinds of unlikely associations and relationships that have nothing to do with reality. VS started this thread by claiming temperature is a random walk, although he seems to have stepped away from that claim later on. But that is pretty unlikely from a basic physics viewpoint. It would tell me that perhaps the statistical model is wrong.

    In fact, as far as I can see, physical models constrain the values that certain parameters can take, and that is a good thing. It makes sure that your parameters aren’t going to do something that is physically unlikely, if not impossible. I have no clue why you would put your trust in a statistical model over a physical model. If the two do not agree, I’d opt for the physical model most of the time.

  378. adriaan Says:

    @Dhogaza,

    I am not unaware of the fact that I can download GISS, GHCN etc. You make the false assumption that I am unaware of the problems in your field of science. I am only complaining about the fact that GISS and GHCN do not bother to have a good version control system, and that they continuously change their datasets without proper notice. And that I cannot go back to a previously published dataset to repeat a given analysis exactly with the original data, because they are not archived.
    And I am not dumb enough to start discussing whether 1 W(sic)/m^2 of radiative forcing is different coming from the sun or any other source. What I do not accept is that a measured value coming from the sun is equivalent in importance to a computed value. That’s where the models come in.

    And I do not need to turn over tons of physics. If I manage to turn over one single rule in your model, the model will be flawed. And that is exactly what VS was doing, pointing out that the correlation between CO2 and global temperature depends upon whether the data are I(1) or I(2) class. And his conclusion was that your model was wrong.

    [Reply: His conclusion is that standard statistical procedures such as OLS are not valid, *if* indeed there is a unit root present. His conclusion is not (or at least shouldn’t be) that physics based climate models are wrong. It has no bearing on that. BV]

  379. adriaan Says:

    @Beaker,

    I know that statistical analysis needs information about the data in order to get the most out of the information in the data. But you can analyse the data without any a priori knowledge, and make statements about what the data reveal, or what they cannot reveal. A statistical model is only relevant for the data, and does not necessarily have to agree with a physical or biological model. Limiting the range of parameters on the basis of physical, biological or other knowledge can improve the performance of the analysis. But assuming that one knows the limits of the parameters also carries risks and can introduce unwanted bias. That you opt for the physical model is OK, but I am afraid that you do not have the full overview of the physical model with all its limited parameters. As has been stated in this thread, we do NOT know all the details of the processes that are being modelled. A major factor is that everything is expressed as being global, whereas most of the extremes are local, and compartmentalized by sea currents, mountains, whatever. In biology, compartmentalisation is one of the biggest challenges. No model of cellular activity has succeeded. And we can do thousands of experiments per week to test our hypotheses. So where does that leave you?

  380. Alex Heyworth Says:

    dhogaza

    However, we don’t need more data to know that 1 w/m^2 radiative forcing due to increased CO2 is no different than 1 w/m^2 radiative forcing from the sun.

    You seem to be fixed on this point. The additional data we need to determine that they have the same effect is that they are both evenly applied to an object with a uniform temperature. Since neither is the case when applied to the earth’s climate system, you do not have an argument (unless you could demonstrate empirically that the effects were the same).

    [Reply: Different forcings (of the same nominal value) can indeed have a slightly different temperature effect (as indicated by the “efficacy”), but they generally differ by a few tens of percent, not by a factor of 3. BV]

  381. nigguraths Says:

    dhogaza
    I said that the anthropogenic global warming theory is gasping for its first p value. Not the global warming theory. No tricks. :)

    VS says here that climate models are phenomenological, Richard Lindzen says as much. Hadi Daulatabadi is saying that the earth’s climate system may compensate for greenhouse forcings which may result in changes, without raising temperatures. The said phenomenological models examine temperatures as an end-result which are then postulated to *cause* changes.

    Now the climate community may grasp this cause/effect corruption nuance, but I don’t see them ever explaining this. The flag of temperature always flutters high. Why is that?

    Every non-temperature based effects-in-the-real-world WG2 argument flounders today. Many WG1 scientists are openly derisive of WG2 arguments. Yet the heat-trapped in the system which caused their precious temperature rises should have showed up in the WG2 arena – proving and supporting their hypothesis. But that is precisely where the wheels come off. Why is that?

    Regards

    [Reply: Climate models are physics based, notwithstanding sweeping statements to the contrary. Some wg1 scientists have the view that the wg2 report does not have quite as solid a scientific backing as the wg1 report. That’s partly due to the nature of the beast: it’s a ‘softer’ science, and the scientific literature base is thinner. This is now mentioned more often, because the alleged errors were mostly in illustrative examples in wg2, and have been (ab)used to discredit the whole of (wg1) science. BV]

  382. adriaan Says:

    @nigguraths,

    That is because the model derived results are openly discussed as being as important or even more important than actual observations. This point has also been raised by VS in one of his first posts on this thread.

    The observations are not in agreement with the physical model, so the observations must be wrong.

    What can I do?

    [Reply: Strawman argument. No scientist has made such a claim. The observations *are* in agreement with the physical models. Hansen states in pretty much any talk he gives that our understanding of climate change is based on three pillars: current observations, paleodata, and physics based modeling. He notes that the former two are the most important. BV]

  383. Scott Mandia Says:

    BTW, the global CO2 ppm as measured by the Atmospheric Infrared Sounder (AIRS) satellite agrees well with Mauna Loa measurements. See graphic below that I just generated using NASA Giovanni:

  384. PaulW Says:

    CO2 might be increasing, but the “CO2 forcing” is “Missing” according to Trenberth’s latest paper.

    The situation almost looks a little random to me.

  385. S. Geiger Says:

    PaulW – is there a link to the paper (or at least an abstract) and/or a discussion of Trenberth’s paper?

    Thanks

  386. Mike Says:

    VS,

    I am currently pursuing my PhD in statistics, and recently this thread was brought to the attention of the fellows in my cohort. I must tell you we have all thoroughly enjoyed your contributions to this thread and your repeated refutation of what appears to be widespread nonsense. I believe you have single-handedly converted at least two true believers!

    [Reply: The presence (or not) of a unit root does not negate basic physics (e.g. conservation of energy). BV]

  387. Eli Rabett Says:

    Since it has become a small issue here: as well as some satellite measurements, there were a fair number of sampling flights in the 1970s that measured CO2 concentrations above the boundary layer in the free atmosphere, and these agreed with the Mauna Loa measurements and those from other sites.

    The earlier measurements mostly suffered from bad siting (don’t measure in the middle of Paris), bad timing (measurements in agricultural areas have huge daily swings), bad calibration (in most cases, what calibration?) and bad chemical technique (the various titrations are tricky), although you can go through them and find occasional ones that are usable.

  388. stereo Says:

    PaulW Says:
    March 18, 2010 at 02:47

    “CO2 might be increasing, but the “CO2 forcing” is “Missing” according to Trenberth’s latest paper.

    The situation almost looks a little random to me.”

    You have misunderstood what Trenberth is saying. He is not saying that the forcing is missing, he is saying it is hard to track the energy in such a complex land/ocean system.

  389. Tim Curtin Says:

    [edit] dhogaza: “we don’t need more data to know that 1 w/m^2 radiative forcing due to increased CO2 is no different than 1 w/m^2 radiative forcing from the sun”: I assume that right now it is night where you are? What is the radiative forcing from the sun as you sleep where you are? And what is the RF from CO2, which according to the IPCC (and Piet Tans at Mauna Loa) is practically the same night or day at ALL parts of the globe on any given day? No doubt, you are right about RF and SSR having the same effect if both are 1 W/sq.m at the SAME location and at the SAME time, but 1 W/sq.m of SR at the top of the atmosphere is much less than that at the surface of the globe.
    Moreover, the so-called radiative forcing, which for CO2 was 1.66 W/sq.m in 2005 (WG1, AR4, p.141), is supposed to be radiation prevented from leaving the earth’s atmosphere and is therefore additive to the incoming radiation from the sun of 1,365 W/sq.m, for a total of 1,366.66 in 2005, when the TSI is incoming, but presumably the RF is busy on its own at night!
    The data I use are for average daily “global” (=total) direct+diffuse solar radiation expressed as Wh/sq.m, i.e. Watt hours per square metre per day, stated as averages for the month in question, so less in January in NH than in July. Now the IPCC simply states Radiative Forcing in W/sq.metre, and implies that it is invariant at any given location to either day or night or season (loc.cit.).
    Finally, from AR4 WG1 p.141 we can deduce that there are 0.01469 ppmv of atmospheric CO2 to 1 W/sq. m. of radiative forcing, so, given the Wiki climate sensitivity of 0.8K/W/sq.m:
                    1880     2005        2100
    CO2 (ppm)        280      379         560
    RF (W/sq.m)      n/a     1.66    2.658938
    GISTemp (°C)   13.87    14.65    15.44915

    Given the supposed logarithmic relationship between radiative forcing and climate sensitivity, even 15.45 oC for GISStemp when CO2 has doubled seems an over-estimate. Whence the claimed 3oC for doubling of CO2?
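    For reference, the arithmetic behind the 3 °C figure, using the standard simplified forcing expression ΔF = 5.35 ln(C/C0) W/sq.m (Myhre et al. 1998) together with the 0.8 K per W/sq.m sensitivity quoted above – a minimal sketch:

        import math

        dF = 5.35 * math.log(560.0 / 280.0)  # forcing for a CO2 doubling: ~3.7 W/m^2
        dT = 0.8 * dF                        # ~3.0 K at 0.8 K per W/m^2
        print(f"dF = {dF:.2f} W/m^2, dT = {dT:.2f} K")
        # the logarithmic form means the forcing per extra ppmv shrinks as the
        # concentration rises, so a fixed ppmv-per-W/m^2 ratio cannot be
        # extrapolated linearly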

  390. kim Says:

    As I walk along
    Random thoughts come in my head.
    Wander and wonder.
    ===========

  391. Eli Rabett Says:

    One significant point lost in the statistics is that most of the pre-1960 or so forcings DO NOT HAVE ANNUAL RESOLUTION. Some of them are pretty thinly spaced, like a decade or more, and have simply been interpolated. You have to dig down about two levels in GISS and then link out to the actual source to see this. For example, Etheridge et al. on CH4.

    Except for the strat aerosols, everything has been really heavily averaged, which means, when you look at it, that the “noise” (or random variation in the forcings) is essentially zero, and yes, VS, in that case, first or second differences ARE equivalent to differentiation. The strat aerosols have their own issues.

    AFAECS, this knocks a lot of the statistical flim-flam into a cocked hat.

    And oh yeah, Eli has played this game before with econometricians. About 100 more comments needed to beat that one.
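
    The effect described here is easy to demonstrate with a toy series (invented for illustration, not the actual forcing data): sample a trend-stationary series decadally, interpolate, and the year-to-year noise vanishes, after which an ADF-type test typically finds it much harder to reject a unit root in the interpolated copy.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
t = np.arange(130)  # a ~130-year "annual" record

# Trend-stationary toy forcing: linear trend plus white noise.
annual = 0.01 * t + rng.normal(0.0, 0.1, t.size)

# Keep every 10th value and linearly interpolate, mimicking a
# decadally resolved series that has "simply been interpolated".
interpolated = np.interp(t, t[::10], annual[::10])

for name, series in (("annual", annual), ("interpolated", interpolated)):
    p = adfuller(series, regression="ct")[1]
    print(f"{name:12s} ADF p-value: {p:.3f}")
```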

  392. Alex Heyworth Says:

    BV, thanks for your reply to my comment above. I gather from looking at the tables referenced in the link you gave that efficacy is basically set so that current efficacy of CO2 forcing is = 1. Solar forcing efficacy at current levels of TSI is fairly close, in the range 0.91 to 0.97. (I am reading from the Ea column on the tables.) Does this sound right, or am I misreading them?

    [Reply: Sounds about right. BV]

  393. Alex Heyworth Says:

    PS, the tables are at http://data.giss.nasa.gov/efficacy/table1.pdf and http://data.giss.nasa.gov/efficacy/tables3n4.pdf

  394. VS Says:

    Hi Ron Broberg,

    My apologies for making you wait. I wanted to answer your question earlier, but ‘stuff’ got in the way.

    The question you pose has more to do with the methodology of statistical testing than with anything else. Every statistical (hypothesis) test is basically constructed as follows. Do allow for some ‘informality’ here, for the sake of exposition:

    (1) we set a null hypothesis (H0)

    (2) we derive the distribution of the test statistic under the H0, mostly analytically but in some cases via simulation. (Sometimes we also derive distributions under various alternative hypotheses, but this is very technical stuff, so I’ll leave it there for the moment)

    (3) we set the maximum ‘deviation’ from the null hypothesis we will tolerate, before rejecting the H0 (the so called critical value of the test statistic, corresponding to our pre-chosen significance level)

    (4) we calculate the test statistic for our sample realization, and draw our conclusion by comparing it with the critical value set in (3)

    We can therefore never ‘accept’ a null hypothesis. We can only reject it, or fail to find sufficient evidence to reject it.

    Now, in the case of unit roots, the various tests have different null hypotheses:

    ADF – H0: presence of unit root
    KPSS – H0: stationarity (no unit root)
    PP – H0: presence of unit root
    DF-GLS – H0: presence of unit root
    ZA – H0: presence of unit root

    So, in the cases of the ADF, DF-GLS and ZA, we concluded that there is insufficient evidence to reject the null hypothesis of a unit root.

    The KPSS, by contrast, has the opposite null hypothesis, namely the absence of a unit root. Applying the KPSS testing procedure, we do reject this null hypothesis of no unit root.

    This is a standard statistical procedure to assess the series in terms of unit roots. My ‘conclusion: unit root’ statement applies to the final inference we make considering all the test results. So while I understand how you come to your idea, and your logic is correct, it is not applicable in this instance.

    Allow me to connect this to some standard jargon used in science.

    Note that in a regression (say, OLS) we often refer to a coefficient as ‘statistically significant’. What we actually mean in such cases is that the H0 that the coefficient is in fact equal to 0 is rejected at the chosen significance level.

    In any case, I hope this helps with interpreting the test results.
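
    For anyone who wants to see the complementary nulls side by side, a minimal Python/statsmodels sketch (the file name is a placeholder for one of the annual series discussed in this thread):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

temp = np.loadtxt("annual_anomaly.txt")  # placeholder file name

# ADF: H0 = unit root. A large p-value means "failed to reject",
# not "accepted".
adf_p = adfuller(temp, regression="ct", autolag="AIC")[1]

# KPSS: H0 = (trend-)stationarity, i.e. the opposite null.
kpss_p = kpss(temp, regression="ct", nlags="auto")[1]

print(f"ADF  p = {adf_p:.3f}  (small p -> reject unit root)")
print(f"KPSS p = {kpss_p:.3f}  (small p -> reject stationarity)")
```

    The pattern described above – ADF failing to reject its null while KPSS rejects its own – is what, taken together, points to a unit root.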

    ——————-

    Rabett should first read up on some econometrics before ‘playing games with econometricians’. I responded to his claims elaborately enough, here.

    I think it ought to be clear why I’m not debating anything with individuals of this ‘caliber’.

    PS. NO G&T DISCUSSION PLEASE. I elaborated my position on the G&T issue below the linked post. If you really need to comment on (my position relating to) G&T, do read the relevant posts first, please.

    ——————-

    Hi Mike,

    thanks, I appreciate the comment :) The discussion was in part meant for ‘my people’.

    And Bart, you replied there (although I don’t think that’s the ‘nonsense’ Mike was referring to):

    [Reply: The presence (or not) of a unit root does not negate basic physics (e.g. conservation of energy). BV]

    Thank you, can you now please proceed to explain that to all the ‘contributors’ here claiming otherwise?

    ——————-

    I would like to draw the readers’ attention to this comment by whbabcock.

    Very important methodological issues that I’ve (obviously) failed to address as eloquently as whbabcock.

    ——————-

    Finally, Bart, it seems our ‘little’ discussion here has received quite some exposure ;)

    [Reply: I’ve been trying to make that point repeatedly now: The presence (or not) of a unit root has no bearing on the basic physics. The presence of a random walk however is inconsistent with basic physics. There is now a whole chorus here (from the exposure you indicate, I guess) desperate to claim that climate models and AGW as a whole are now suddenly junk because there may be a unit root in the temp series. That is of course utter nonsense. Do you agree?
    Yet you surprise me again in ‘recommending’ whbabcock’s comment. I replied to him here. He makes exactly the incorrect inference I described just above, as if AGW were falsified by the presence of a unit root. BV
    ]

  395. Josh Says:

    Completely on topic and amazingly well timed – did anyone else catch this?

    http://www.sciencenews.org/view/feature/id/57091/title/Odds_Are,_Its_Wrong

    “For better or for worse, science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science’s fidelity to fact and conferred a timeless reliability to its findings….” + another few thousand words.

  396. Dave McK Says:

    VS – please accept my thanks for upholding AND defending a standard.
    ‘what would you expect when you walk into a church and say there’s nothing supernatural’? lol – you really brought out the juice.
    It is malice, in case you doubt. They live off the benefit of doubt. That is the nature of their malice.

    Besides, chem 101 students can prove that water vapor in any unit of atmosphere carries 50,000 times the energy of the CO2 component (at the very least), so the fetishists are never going to find the driver in the residuals.

    [Reply: Quit the namecalling.
    Nitrogen is still way more abundant in the atmosphere than any other species and it does nothing. The effect depends on physics. Water vapor is a greenhouse gas, but acts purely as a feedback, not a forcing. BV
    ]

  397. Dave McK Says:

    That is 50,000 times when there’s 1% water vapor and 500 ppm CO2, incidentally, to identify the context of the calculation.

  398. DML Says:

    Your repeated reply to VS and those who express agreement with him basically consists of the following argument:

    If VS is correct, basics physics is wrong.
    Basic physics is not wrong.
    Therefore, VS is wrong.

    But this is not a sound argument because the first premise is false. Both basic physics and VS can be correct because VS’s analysis concerns a climate model, and climate models contain more than basic physics – they include hypotheses in the form of assumptions and simplifications, and some of those hypotheses can be wrong even if the basic physics in the model is accurate.

    It is still possible, of course, that VS’s analysis is wrong. But since your argument is a non sequitur, it doesn’t challenge his analysis.

    [Reply: This isn’t about climate models at all. Where does everyone get that claim from? You so badly want to find fault with them, is that it?
    The conclusion of a random walk is inconsistent with physics. The fact that it isn’t straightforward to reject the presence of a unit root has no bearing on the physics, much as many people want to make it appear as such. That is what I’m arguing against. BV
    ]

  399. Ian Says:

    VS, could you summarize the implications of your analysis? Like some other readers, I thought in your initial comments that you wanted to turn a blind eye (at least for this analysis) to physical understanding of the temperature record and simply apply ADF-style analyses to the dataset. As did some other readers, I inferred when you said “no trend” that you concluded global temp was, in the main, a random walk. In later comments I think you were arguing that you didn’t think this, but I’m not clear what you meant.

    Or, to restate my question, what implications does your analysis have for our physical understanding of climate? If it’s something simple (say, a restatement of the fact that short time scales are not useful in relating CO2 forcing to temperature, or that greenhouse emissions and concentrations in the atmosphere are not related linearly), then fine – but it would be nice to have a clear statement of what you think. If it’s something more novel, then a concise summary would be appreciated.

    [Reply: Seconded. BV]

  400. A C Osborn Says:

    dhogaza Says:
    March 17, 2010 at 20:41
    “Presumably because that’s what was available, if true. Do you have a source stating that this is the only source that’s used?”

    I was hoping for an answer from Scott as they were his statements, but as you have responded –

    As a very knowledgeable person on climate research, are you actually saying that you don’t know where the CO2 data that is so important to the research actually comes from prior to 1958?

    The reason that I ask is that it is a virtually straight line for the 10,000 years before 1958; with wars, volcanoes and greatly varying temperatures, shouldn’t there have been some changes in the CO2 level?

  401. A C Osborn Says:

    dhogaza Says:
    March 17, 2010 at 20:41
    And who do you think is in charge of the conspiracy to ignore that data?

    Conspiracy? Wow, do you think there is a conspiracy then?
    I was trying to establish the reasoning/decisions for using what is currently used. After all, the whole basis of greenhouse gases goes back a long way; are you saying that they couldn’t accurately measure atmospheric CO2 when that was first proposed?

  402. Jimmy Haigh Says:

    [edit. Comment on the substance, not on the person. BV]

  403. Marco Says:

    @A C Osborn:
    Volcanoes contribute, on an annual basis, about 1% of the total anthropogenic emissions of today. That already is negligible. More important, however, is that there is no evidence of periods with markedly increased or decreased volcanic activity over the last many centuries which would result in a significant reduction or increase in CO2 emissions.

    There have been some variations in the past few centuries (see e.g.
    http://zipcodezoo.com/Trends/Trends%20in%20Atmospheric%20Carbon%20Dioxide_2.gif ), but don’t expect wars or volcanoes to have been a major factor. The latter may have been one source of decrease, though, through temperature changes. But the effect is limited: the six-degree temperature increases during interglacials show a 100 ppm increase in CO2. We’re already at the same increase with a ‘mere’ 1 degree of warming.

  404. Bart Says:

    ALL: Please put comments that are not on topic in the open thread. The topic here is (statistical properties of) the temperature record (and its implications).

    Assertions that climate science is bogus because of x, y or z belong in the open thread (as do all other topics besides those mentioned above). Before making such assertions, please check what the science has to say about it e.g. here and take that into account in your comment.

  405. A C Osborn Says:

    Bart Says:
    March 18, 2010 at 15:09
    So Scott posts data and I can’t ask where he gets it or if he understands it. OK Bye.

  406. IanH Says:

    Ian @ 13:56 Says
    VS, could you summarize the implications of your analysis? Like some other readers…

    As I understand it, VS has looked at the temperature record (note: not the models, not the physics) and determined that the record demonstrates temperature is I(1) and the GHGs are I(2), yet the models, and nearly all climate researchers, are trying to fit a linear regression – oops, won’t work, can’t work. He’s not, as I understand it, arguing with how the model is constructed, or how the earth’s physics works, just that you can’t wire the known physics together as per the GCMs. I’ve never seen him say that because the temp is I(1) the physical processes themselves are wrong; he’s not gone there because, as he says, it’s not his field. It is now the job for the modellers to rethink their GCMs.

    [Reply: No. His analysis has no bearing on GCM’s or on our understanding of radiation physics. It may have bearing on the uncertainty estimate of a linear trend. Something entirely different. BV]

  407. nigguraths Says:

    BV
    This isn’t about climate models at all. Where does everyone get that claim from? You so badly want to find fault with them, is that it?

    This is related to the models. Because if VS is right, it is up to those who argue that the models are representative of climate reality to demonstrate that their models incorporate this nature of temperatures.

    All parts of the AGW hypothesis have to work for it to be accepted.

    [Reply: This is related to the validity of OLS; *not* to the validity of physics-based climate models. GCM’s don’t incorporate the temperatures, they try to simulate them.
    For you to claim that AGW is wrong you’d have to replace it with a theory that works even better at explaining all the data. I’ll be waiting (not). BV
    ]

  408. mpaul Says:

    “VS, could you summarize the implications of your analysis?”

    I would strongly advise VS to not answer that question. Let’s just stick to the narrow topic at hand. The implications should be left to others. VS has simply proven that GISSTEMP and CRUTEM3 are I(1). QED.

  409. VS Says:

    Hi Bart,

    Thanks for your reply. You are absolutely correct in stating that the presence of a unit root in the temperature series doesn’t automatically ‘disprove’ AGWH.

    I never made that claim.

    Again, for the record, I was (too) loose with my wording when I stated that temperatures are a ‘random walk’. I also said, ‘statistically speaking, a random walk’. I even stated on March 5th:

    “I agree with you that temperatures are not ‘in essence’ a random walk, just like many (if not all) economic variables observed as random walks are in fact not random walks. That’s furthermore quite clear when we look at Ice-core data (up to 500,000 BC); at the very least, we observe a cyclical pattern, with an average cycle of ~100,000 years.”

    If you read my posts carefully, you will see that I (formally) invoked the random walk model in order to explain the idea behind a unit root, the simplest of all processes containing a unit root. Had this been a ‘normal’ debate, this would have been sorted out in three back-and-forth posts.

    Anyway, what the presence of a unit root does do however, together with the two unit roots found in various GHG forcings, is indicate which statistical method we need to apply in order to analyze the time series. As I stated earlier, we first establish the I(1) (here and here) and I(2) properties (here) of the series (temp and GHG’s, respectively), and then we proceed with (polynomial) cointegration in order to establish (or reject) any (non-spurious) correlation in the time series records.

    I, and others, have elaborated extensively on why ‘regular’ (multivariate or not) OLS regression analysis, including the calculation of confidence intervals of deterministic trends, is invalid in the presence of a unit root. This is important. Note that cointegration is the method for analyzing series containing unit roots. This too, is important.
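
    As a rough illustration of that two-step logic (orders of integration first, then a residual-based cointegration test), consider the Python sketch below. The file names are placeholders, and the Engle-Granger test is only a simple stand-in for the polynomial cointegration applied by B&R:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint

temp = np.loadtxt("temp_anomaly.txt")  # placeholder: the I(1) series
ghg = np.loadtxt("ghg_forcing.txt")    # placeholder: the I(2) series

# Step 1: orders of integration, by testing levels and differences.
for name, x in (("temp", temp), ("d(temp)", np.diff(temp)),
                ("ghg", ghg), ("d2(ghg)", np.diff(ghg, n=2))):
    print(f"{name:8s} ADF p = {adfuller(x, regression='c')[1]:.3f}")

# Step 2: residual-based (Engle-Granger) cointegration test between
# temp and the first difference of the forcing, both I(1) per step 1.
stat, pval, _ = coint(temp[1:], np.diff(ghg))
print(f"Engle-Granger cointegration p = {pval:.3f}")
```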

    Now, had we not been distracted by various ‘refutations’ posted on various ‘science’ blogs, we would have arrived at cointegration (as applied by both K et al, and BR) much earlier.

    I’m eager to continue on this topic. For all those that ‘can’t wait’, here’s a post on cointegration analysis which I found very illustrative in this context. I admit I haven’t double checked everything posted there – this discussion here is already taking up way too much time – but David Stockwell (the author of these posts) looks like he knows what he’s doing.

    A nice test (in-sample forecast) of BR is then performed here. That last one should be very interesting for those already familiar with cointegration.

    I recommend the actual posts to all truly interested in the theory behind cointegration, as well as the actual (not ‘straw-manned’) contents of the BR paper.

    As for that post by whbabcock. What I liked there is the methodological frame he erects. I completely agree with his assertions in that sense (i.e. ‘analytical fact’, save errors in execution/data).

    The rest, we’ll reproduce and evaluate, like real positivist scientists ;)

    [Reply: I updated my newest post to include your clarification re ‘random walk’, where I explain that I have no particular beef with unit roots, but that pure randomness is unphysical. Many other commenters however seem to hold up your thesis here as a smoking gun that slams GCM’s and disproves AGW. Why don’t you join me in setting them straight? BV]

  410. Bart Says:

    ALL: In a new post I explain my view on the relevance of the unit roots. It does *not* mean that GCM’s are now invalidated or that AGW is now on its knees.

  411. Scott A. Mandia Says:

    I did not reply because dhogaza beat me to it but he shares the same view. Eli also responded today with the same arguments.

    Question 1, why are we using the Mauna Loa atmospheric level of CO2 from 1958 onwards, why not use whatever we were using to show the values before 1958?

    Direct measurements, if done correctly such as Mauna Loa data, are always better than interpreting values from proxies.

    Question 2, If we were using Ice Core data prior to 1958 why?

    Most of these locations are remote so there is less chance of siting issues. Furthermore, humans were not taking measurements for most of the 650,000 year ice record.

    Question 3, why aren’t we using all the other Valid Scientific Measurements of CO2 prior to 1958?

    and

    Question 4, why do we ignore Valid Scientific Measurements of CO2 for the 1940s which show around 375/380 ppm?

    Because it is extremely unlikely that they are a valid representation of global averages. We have a very good approximation of how much carbon is being emitted today by human sources, and nature hasn’t changed much since the 1940s regarding natural source/sink issues. There is no explanation for a 1940s 375/380 ppm value other than that those values are tainted somehow.

    BTW, I show data regarding CO2 from volcanoes and from nature vs. humans at the two links below:

    http://www2.sunysuffolk.edu/mandias/global_warming/global_warming_misinformation_volcanoes.html

    http://www2.sunysuffolk.edu/mandias/global_warming/global_warming_misinformation_nature_emits_more_co2.html

    The links below show locations of ice cores:

  412. Shub Niggurath Says:

    “For you to claim that AGW is wrong you’d have to substitute it for a theory that works even better at explaining all the data.”

    No. Nothing like that needs to be done.

    The IPCC scenarios (and thence climate models) propose a range of sensitivities and feedback strengths. Since there is no one study that we can discuss, if we take the AR4 to be representative of the sum of the physics of AGW- we are already in the realm of the unfalsifiable.

    Present-day temperature rises are stuck on the shore of Jones’ p-values and the millennial record is stuck on the shore of the hockey stick.

    Forgive my colorful language, but I can substantiate my claims. As a side question, are there any physical processes that climate scientists consider cannot be modelled? I know you guys model mosquitos and microbes like algae.

    [Reply: Of course you’d have to replace it by something better. Why would you throw it away otherwise? Do you throw a medical diagnosis to the wind because there’s inherent uncertainty associated with it? And replace it with, well, with what exactly? Not with “nothing” I may hope, but rather with something that you deem offers a better diagnosis. BV]

  413. A C Osborn Says:

    Scott A. Mandia Says:
    March 18, 2010 at 17:05
    Direct measurements, if done correctly such as Mauna Loa data, are always better than interpreting values from proxies.
    So you are saying that scientists between the time that they identified the “Greenhouse” gases and 1958 were not capable of taking measurements that were better than the very poor approximation provided by Ice Core Samples?

  414. Pat Cassen Says:

    A C Osborn – You can look this stuff up. “THE PRE-INDUSTRIAL CARBON DIOXIDE LEVEL”, T. M. L. Wigley, Climatic Change, 5, 315 (1983)

  415. A C Osborn Says:

    Pat Cassen Says:
    March 18, 2010 at 18:11
    OK, I read it. Are you still saying that an ice core simulation (which is known NOT to replicate modern CO2 levels) is better than the measurements of the scientists who worked from 1880 to 1958?
    I am not talking about prior to 1880 or even 1900.

    [Reply: Here’s a graph of temp and CO2 (Law Dome and Mauna Loa) starting at 1880. They line up very well. The problem with CO2 measurements is that you need a site that is not affected by emissions, but preferably not by vegetation fluxes either, plus you need long-term continuous measurements. That is why Charles Keeling’s work was so groundbreaking. BV]

  416. ScP Says:

    Josh, a great article and as worth reading as this exceptional thread – this is the best post and discussion I have read this year.

    Thank you VS for the clarity with which you explain and thanks Bart for hosting the post!

    By the way, Josh, are you any relation to the other ‘climate’ Josh? – see http://www.cartoonsbyjosh.com

  417. Marco Says:

    @A C Osborn:
    What ‘simulation’ are you talking about? Those are direct measurements. And the Law Dome data fit quite well with the Mauna Loa data.

    What we know (yes, know) from the pre-1950s data is that local ‘contamination’ of the measurements is very very likely. We can still show that, just make a trip around the countryside with a CO2 analyser. You’ll get wildly varying numbers depending on time-of-day, wind force and direction, location, height, etc.

  418. Pat Cassen Says:

    A C Osborn – Not sure what your problem is. From Wigley, 1983:

    “There are 19th century data from the southern hemisphere … These data are of high quality (comparable with the best measurements made prior to the 1950s) and may well be the only 19th century data available which are unequivocally free from local or regional pollution effects. Many of the measurements are significantly less than the commonly assumed ‘pre-industrial’ value of around 290 ppmv first suggested by Callendar.”

    And see Marco, above.

  419. HAS Says:

    A couple of observations, then a question of clarification or two and a comment.

    First in terms of the initial discussion about confidence limits on forecasts, there isn’t just a problem with the specification of the model, there is as far as I can see little regard given to the systemic variability in the underlying measures (temp and GHG).

    Second the passage of time causes nothing, so time is basically a surrogate variable. The interesting issue when testing models etc is what is it a surrogate for. As I understand it time series analysis is helping to identify the characteristics of the systems (i.e. models) that generated the series.

    To my questions of clarification.

    My understanding then is that given that temp (as measured) is I(1) and GHG (also as measured) is I(2) the problem goes beyond simply the issue of what statistical tests should be applied.

    First, it says that any simple linear relationship between them will be an invalid model, because under these circumstances temp being I(1) would imply GHG is also I(1), which it is not. Is this correct?

    Going further, any GCM that fails to generate temp as I(1) and GHG as I(2) has to be rejected as a valid model (taking into account the fact that some of the GHG is exogenous). Is this correct?

    Finally, for those who think the rule of physics reigns supreme through all this, I suspect you are ignoring the fact that GCMs are complex constructs using various physical sciences as inputs, but dealing with significant uncertainty. When a complex engineered system fails we don’t blame the laws of physics, we blame the accuracy of the modelling.

    In fact the issue of the bumble bee (referred to a few times) is instructive, but not for the reason raised. Model builders using the laws of physics claimed the bee couldn’t fly; empirical observation shows it can, so the models need to be revised.

    By analogy model builders using the laws of physics conclude temp and GHG have certain attributes, but observations of them show this not to be the case. The models need to be revised.

    [Reply: The issues discussed here have no bearing on GCM’s. Why do you bring them up? BV]

  420. Kweenie Says:

    “Do you throw a medical diagnosis to the wind because there’s inherent uncertainty associated with it? And replace it with, well, with what exactly? Not with “nothing” I may hope, but rather with something that you deem offers a better diagnosis.”

    Continuing your metaphor, in that case I would ask for a second opinion. Which I believe in medicine is not unusual and not considered to be a bad (skeptic?) thing.

    [Reply: So would I. For climate, see e.g. here or here if you were to *randomly* search for a second opinion. BV]

  421. Kweenie Says:

    “[Reply: So would I. For climate, see e.g. here or here if you were to *randomly* search for a second opinion. BV]”

    Using quotation marks for “randomly” is appropriate, looking at the links. One might prefer to go to a different “hospital” (http://www.co2science.org/data/mwp/mwpp.php)?

    [Reply: That “hospital” (or “doctor” I might say) was most definitely not randomly picked. It’s like the situation where you dislike your MD’s diagnosis, and you specifically seek out the one doctor within 100 km who you know thinks smoking is not bad for your health. BV]

  422. Shub Niggurath Says:

    Of course you’d have to replace it by something better. Why would you thow it away otherwise? Do you throw a medical diagnosis in the wind because there’s inherent uncertainty associated with it? And replace it with, well, with what exactly? Not with “nothing” I may hope, but rather with something that you deem offers a better diagnosis.

    You don’t have to replace the theory of anthropogenic global warming with anything. We think our measurements tell us that the globe is warming – stating that is enough. I always differentiate AGW from GW. GW is just observation – we are all fine with that.

    No diagnosis has ‘inherent uncertainty’. The uncertainty is in the face of less evolved disease, un-investigated physical findings and limits of knowledge. Doctors should strive to reach a more accurate diagnosis but say “we don’t know” when they don’t. Many patients die without a diagnosis. Many other patients get stuck with some label and they die anyway.

    Patients are more angry at dishonest doctors than doctors who cannot diagnose what they have.

    [Reply: So I guess you’re fine with a medical diagnosis stopping at the point where the thermometer says you have a 40 degree fever which is slowly getting worse? You wouldn’t want to know why, and to then be able to do something about it? BV]

  423. dhogaza Says:

    VS: you agree with at least part of B&R’s analysis, which in its entirety leads to conclusions that are physically impossible.

    I suggested some time ago that your time could be more profitably spent, perhaps, by figuring out where they’ve gone wrong, and why.

    Because unless you do so, there’s really no reason for us to place much credence in what you’ve done. If your recreation of part of what B&R have done includes one or more of the horrific blunders they’ve made, well … the implications are obvious.

  424. dhogaza Says:

    A nice test (in-sample forecast) of BR is then performed here. That last one should be very interesting for those already familiar with cointegration.

    I recommend the actual posts to all truly interested in the theory behind cointegration, as well as the actual (not ’straw-manned’) contents of the BR paper.

    The fact is that B&R’s full analysis leads to horrifically impossible conclusions.

    You claim this:

    1. I’m not ‘disproving’ AGWH here.
    2. I’m not claiming that temperatures are a random walk.
    3. I’m not ‘denying’ the laws of physics.”

    If you agree with Stockwell’s lead paragraph at the site you linked:

    Beenstock’s radical theory needs to be tested. As discussed here, he proposed that CHANGE in greenhouse gases (delta GHGs or dGHGs) not absolute values produces global warming.

    Then you are declaring that *some* laws of physics have been overturned, and as their application lies at the heart of CO2-forced warming (regardless of the source of CO2), you are necessarily rejecting current AGW theory as well.

    That would make your statements #1 and #2 false, though presumably it’s due to your not understanding the consequences of Stockwell’s restatement of one of B&R’s key conclusions rather than dishonesty.

    If you disagree with the unphysical conclusion made by B&R and agreed to by Stockwell, then do us all a favor:

    Where did B&R go wrong? You’re claiming to be the stats expert here. You’re the one claiming that Tamino (who has a new paper in GRL just out) doesn’t understand 1st-semester statistics.

    Back up your claims to statistics guru status by identifying where B&R went wrong.

    Or admit that perhaps you *are* claiming that a whole bunch of physics has been proven wrong by B&R …

  425. dhogaza Says:

    And I see that VS is back to his insulting ways again …

    On your ‘science’ blog, to which you link

    Eli Rabett is a PhD chemist and professor. Chemistry is science. You can guess my opinion of economics, an opinion you’re strengthening with almost every post.

  426. dhogaza Says:

    Eli says:

    One significant point lost in the statistics is that most of the pre-1960 forcings DO NOT HAVE ANNUAL RESOLUTION. Some of them are pretty thinly spaced, like a decade or more, and have simply been interpolated. You have to dig down about two levels in GISS and then link out to the actual source to see this. For example Etheridge et al. on CH4.

    Except for the strat aerosols, everything has been really heavily averaged, which means, when you look at it, that the “noise” (or random variation in the forcings) is essentially zero, and yes, VS, in that case, first or second differences ARE equivalent to differentiation. The strat aerosols have their own issues.

    AFAECS, this knocks a lot of the statistical flim-flam into a cocked hat.

    VS claims that his earlier post shows that Eli is wrong.

    VS: provide a reference to a paper showing that he’s wrong about the effect of averaging.

    Also, you’ve ignored the fact that the earlier forcing data is of relatively poor quality in many cases. I was unaware of the heavy averaging and instances of interpolation, but had been pondering just when to ask you how you can statistically treat all of this data uniformly when measurement error increases greatly as you go further back for some of the data, and that the increases aren’t uniform across the different things being measured.

  427. dhogaza Says:

    Well, I don’t have a lot of time to poke around, but I am finding references claiming that smoothing does indeed make it more difficult to reject a unit root. So Eli’s claim that …

    everything has been really heavily averaged, which means, when you look at it, that the “noise” (or random variation in the forcings) is essentially zero

    … can cause difficulty does seem pertinent.

    Perhaps this professor isn’t as dumb as VS claims. Perhaps Tamino isn’t, either.

    And perhaps B&R are as wrong as those who understand the physical implications are trying to point out …

  428. dhogaza Says:

    VS:

    I think it ought to be clear why I’m not debating anything with individuals of this ‘caliber’.

    Since his identity has been exposed more than once on this thread, this is who VS won’t debate.

    I’d be careful of debating him, too. He clearly knows his stuff.

    VS – where’s your CV and what’s your real name, since you went out of your way to “out” the wascally wabbit and tamino early in this thread?

    Me, I just have a humble BS in CS, though since I took my senior sequence and did my senior project in my freshman year, I did take pretty much every graduate level course remotely related to computer science my school offered.

    Oh, well, time to see if VS will take the time to identify where B&R has gone wrong. Thus far, he’s only made claims about them being right …

  429. Al Tekhasski Says:

    Bart wrote:
    “The earth climate remains constant if in- and outgong radiation equal each other”

    No, it is not necessarily true. Climate (== the spatially-distributed surface-atmosphere system) may perfectly well fluctuate even if the average energy flux across the system remains unchanged. First because the air is coupled with massive (but liquid) reservoirs with big thermal inertia, and second, various spatial distributions of surface temperatures may have different global temperatures but the same average emission.

    [Reply: I include the oceans in my notion of climate. E.g. ENSO shuffles energy around and thereby influences atmospheric temps without an energy imbalance at TOA. It does not however change the total heat content of the earth system. A radiative forcing such as from GHG, aerosols or solar does. BV]

  430. David Adamson Says:

    “VS – where’s your CV and what’s your real name, since you went out of dho please get back to trapping and ringing raptors, this advanced statistics is obviously way too much for a simple BS. ”

    Et tu Brutus!

  431. dhogaza Says:

    [Edit. Calm down.]

  432. dhogaza Says:

    David Adamson:

    So what, are you proud that you can type “dhogaza” into google? Does it make you feel superior that you did a Comical Tony Watts style “partial outing” rather than a full VS-style outing?

    [edit]

    Christ, anyone who can type “dhogaza” into Google will find my personal information. I use the handle because …

    1. I like it

    2. It’s unique, people who want to learn who I am can find out (though in the future I may reconsider this, because of assholes like you)

    3. Who is VS? Inquiring minds want to know. Why does he hide?

  433. dhogaza Says:

    More complete disclosure, [edit]

    email: dhogaza@pacifier.com
    website: donb.photo.net
    professional website: openacs.org
    ethnicity: 75% German, 25% Dutch (as best I can determine)
    religion: fallen methodist
    HWP?: possibly, given my age

    what else do you want to know, [edit]?

  434. dhogaza Says:

    And, oh yes, when I was in my late 30s and early 40s, I was one of the top raptor trapper/banders in the world.

    this advanced statistics is obviously way too much for a simple BS

    And, ignoring your ignoring of my explanation of my university degree … apparently it’s too much for VS.

    His endorsement of a statistical analysis by B&R that essentially says much of modern physics is wrong is simply stupid.

    And your creative way of trying to defend it … is beyond stupid.

  435. HAS Says:

    Re my comment at March 18, 2010 at 21:07

    [Reply: The issues discussed here have no bearing on GCM’s. Why do you bring them up? BV]

    Because if the answer to the two questions I ask is in the affirmative, the issues discussed here constrain the classes of models that acceptably describe the world as observed. (There is also an alternative explanation, namely that the data series used to derive the results are inaccurate.)

    I’m sure you’re not saying here that climate models are deterministically created from the laws of physical science, and therefore that to question the results of those models is to question those laws. Although I’m not completely sure, given your comment on the post you link to: “just as gravity is not falsified by observing a bird in the sky”.

    For me, I take the view that while the physical sciences can describe parts of the processes leading to climate taken in isolation, taken as a whole one is dealing with a system of very great complexity and uncertainty. There are many ways to combine the parts to give the whole, and this is where the fun (in the sense that science is fun) begins.

    Perhaps do me the courtesy of trying to understand the implications of my questions, and if you have a view on the answers, share it.

    [Reply: “models that acceptably describe the world as observed.” Upper panel: human and natural climate forcings. Lower panel: natural forcings only. BV]

  436. dhogaza Says:

    Bart, I apologize for my wrath, but for god’s sake [edit]

    Anyone can google for me. My handle is my brand, and I’ve been on the net forever, so actually “dhogaza” is likely to return more information for many queries than “Don Baccus” (one reason I keep it. plus … I like it).

    Meanwhile, there are serious questions on the table, which Admonson shall we say … tried to fiddle while making a fool of himself.

    1. who is vs? Asked because he’s “outed” two people who post with semi-transparent pseudonyms, while staying hidden himself – totally vile, IMO.

    2. what are his credentials? (note, I related my academic experience before asking the second time – I haven’t revealed my professional ones, but given that I’ve just entered 2.5 months of contract for $25,000 let’s just say I don’t give a crap what people like VS or Admonson care)

    3. Asked for answers for some specific questions, which VS has a tendency to ignore (other than to insult other people regarding their credentials, without revealing his own).

    Blah blah.

    Get down to it, VS. Admonson [edit. No namecalling, swearing etc. BV]

  437. dhogaza Says:

    HAS:

    I’m sure you’re not saying here that climate models are deterministically created from the laws of physical science, and therefore that to question the results of those models is to question those laws

    Actually, to quite a large degree, they are, no matter how much you want to believe otherwise.

    The Monte Carlo aspect of individual runs has to do with the setting of initial conditions (which can never be known totally precisely), along with random perturbations ranging over things which can’t be precisely determined (even for an atomic weapon, which was the domain for which John von Neumann invented the methodology – note: the Hiroshima and Nagasaki bombs *did* explode).

    This is the source of the non-deterministic aspects you’re talking about, yet we know it’s not a problem if proper (non-B&R, non-VS) statistics are applied (I’m sure VS could provide us a statistical proof that Fat Man didn’t explode over Nagasaki, for instance).

    Anyway, models of this sort are soundly based on physics. If you want to reject climate models on this basis, you must reject those used to engineer and design nuclear weapons. They’re built on common ideas.

    And as we know from the history of WWII, and after-war tests … they do blow up.

    However much you think such models are dumb, stupid, etc … a wide variety of fission and fusion weapons, when tested, *have* blown up.

  438. dhogaza Says:

    Bart wrote:
    “The earth climate remains constant if in- and outgong radiation equal each other”

    No, it is not necessarily true. Climate (== spatially-distributed surface-atmosphere system) may perfectly fluctuate even if average energy flux across the system remains unchanged. First because the air is coupled with massive (but liquid) reservoirs with big thermal inertia, and second, various spatial distributions of surface temperatures may have different global temperature but the same average emission.

    If Al doesn’t understand that he and Bart are saying the same thing, lord help us.

    Other than the fact that Bart’s talking about climate in equilibrium, and Al is muddying by “First because the air is coupled with massive (but liquid) reservoirs with big thermal inertia” talking about climate out of equilibrium.

    And further that Al is pretending that Bart’s statement is assuming “equilibrium will end weather”, which is silly …

  439. HAS Says:

    dhogaza at March 19, 2010 at 05:37

    I trust you understand the irony of saying on a thread where we are debating the underlying statistical processes of a couple of key variables that all is solved by running Monte Carlo simulations to get the initial conditions right.

    I should add that I don’t think that “such models are dumb, stupid, etc …” -your projection – and also that I think it would be relatively trivial to show that compared with climate modelling nuclear fusion or fission are strongly bounded problems.

    If dealing with the world’s climate over the last few thousand years is this easy for the physical sciences, why not go and sort out a few of those intractable problems in the social sciences to show what “hard data men” can really do. I understand I should now say “sarcasm off”.

    My point is: we should have some humility about the ability of the physical sciences to explain extremely complex real world phenomena.

    [Reply: Indeed. But no one ever said it’s easy or that we know it all. The same humility should reasonably also be expected from people who attempt to criticize a whole scientific field as being either ignorant or fraudulent in a sudden and systemic manner. BV]

  440. Al Tekhasski Says:

    dhogaza writes: “If Al doesn’t understand that he and Bart are saying the same thing, lord help us.”

    No, we are not saying the same thing. In climatology speak, “earth climate” is a synonym for “global temperature [index]”. Therefore saying that “climate remains constant” is equivalent to saying “constant global temperature”. I am saying that an infinite number of climates (with different global temp indices) may have the same OLR. And zonal climates can “walk” while having the same total OLR, in balance with total insolation, while the global index varies. Do I need to spell out more, like “sigma*T^4”?

    [edit]
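
    The distribution point is just the convexity of sigma*T^4; a two-box toy example (temperatures invented for illustration):

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

uniform = np.array([255.0, 255.0])  # two boxes at the same temperature

# Pick a cold box and solve for the warm one so mean emission matches.
cold = 235.0
warm = (2 * uniform.mean() ** 4 - cold ** 4) ** 0.25
split = np.array([cold, warm])

for name, T in (("uniform", uniform), ("split", split)):
    print(f"{name:8s} mean T = {T.mean():6.2f} K, "
          f"mean OLR = {SIGMA * (T ** 4).mean():6.2f} W/m^2")
```

    Both configurations emit the same average flux, yet their mean temperatures differ: the same OLR is compatible with different global temperature indices.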

  441. David Admonson Says:

    Dho,

    See
    “# dhogaza Says:
    March 13, 2010 at 06:18

    That’s a funny response, which I thoroughly enjoyed, Jim :)

    (if you’re going to abbreviate my handle, though, it’s “dho”, not TCO’s “dhog”, it’s a type of raptor trap invented by Arabs 1,000 or so years ago, a “dho gaza”, I do raptor banding field work”

    you said so yourself ! Remember?

    BTW I thought I had deleted my remarks before the latin, Unfortunately I was diverted and hit the submit, I apologize for my sarcasm.
    I look forward to yours.
    Note my name is Adamson not Admonson

  442. Tim Curtin Says:

    Bumblebee asked: “Who is VS? Inquiring minds want to know. Why does he hide?” Perhaps for the same reasons as you, until you were outed at the NYT.
    But back to your physics. That is not in doubt, but what is uncertain is the practical significance of what increasingly appears to be no more than a trivial theoretical curiosum.
    For example, Jeffrey Kiehl of NCAR (GRL 28 November 2007) concludes after running n models that “the range of uncertainty in anthropogenic forcing of the past century [by a factor of two] is as large as the uncertainty in climate sensitivity…” and earlier that “the total forcing is inversely related to climate sensitivity”. Ye gods!

    I had gathered from you et al that the science is settled without a smidgeon of uncertainty – yet Kiehl here admits he hasn’t got a clue. Typically, despite being at NCAR, his paper contains not a single piece of evidence on any issue and least of all to show that the physics stacks up.

    Back to your raptors, they have more sense.

    [Reply: Don’t set up strawmen for knocking down. No one claimed that “the science is settled without a smidgeon of uncertainty”. I would argue that the main tenets are reasonably clear, and there is a lot of uncertainty in details. What I strongly argue against is the implicit claim of many that uncertainty is the same as knowing nothing. It’s not. Besides, more uncertainty means higher risk. BV]

  443. VS Says:

    Hi guys,

    Under Bart’s new entry there are some very interesting references posted. I would really appreciate it if people would post them here.

    This reference to Kaufmann et al (2009) also looks like a proper statistical set up (I still have to read it properly!). Nice one Alex Heyworth :)

    There was also a question in the other thread, posed by Alan, asking if I have proposed anything ‘new’. The answer to this question is no. Not really, at least.

    I just mentioned a single implication of this body of literature, namely, that given the (established) presence of a unit root, simple OLS based (multivariate or not) inference is invalid. This includes OLS based trend estimation and calculation of the relevant confidence intervals. Note that this is not a matter of opinion, but rather a formal result.
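
    The textbook demonstration of that formal result is the Granger-Newbold spurious regression: two independent random walks, which by construction have nothing to do with each other, produce ‘significant’ OLS slopes far more often than the nominal 5%. A quick simulation sketch:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
hits, trials = 0, 1000

for _ in range(trials):
    x = np.cumsum(rng.normal(size=130))  # independent random walks,
    y = np.cumsum(rng.normal(size=130))  # each containing a unit root
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    if fit.pvalues[1] < 0.05:  # slope "significant" at the 5% level
        hits += 1

print(f"spuriously 'significant' slopes: {hits / trials:.0%}")
```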

    Now, I haven’t seen anybody making that case clearly, and considering this, I believe somebody should.

    Also, I performed a Zivot-Andrews test in order to compare the null hypothesis of a unit root with the alternative hypothesis of a trend stationary process with a structural break in the sixties. I haven’t seen that one yet in the literature, although there is a good chance I simply missed it.

    Note that the endogenous breakpoint method indeed finds the hypothesized break in 1964, so no inconsistency there with what people have ‘eyeballed’.

    Again, we infer that the series contains a unit root.
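
    The test is available off the shelf; a minimal sketch (the input file is a placeholder for an annual series starting in 1880):

```python
import numpy as np
from statsmodels.tsa.stattools import zivot_andrews

temp = np.loadtxt("temp_anomaly.txt")  # placeholder annual series, 1880 on

# H0: unit root (with drift); H1: trend-stationary around one
# endogenously located structural break.
stat, pval, crit, baselag, bpidx = zivot_andrews(temp, regression="ct")
print(f"ZA stat = {stat:.2f}, p = {pval:.3f}, break year = {1880 + bpidx}")
```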

    No grand innovations here, just statistical consistency, and a good deal of objective testing.

    Cheers!

  444. Kweenie Says:

    [edit]

  445. Alan Says:

    VS remarked:

    Now, I haven’t seen anybody making that case clearly, and considering this, I believe somebody should.

    The link is to Google Scholar with “OLS trend temperature climate” as the search term and returning 531,000 hits.

    VS, I wonder where you are going with this … I really do.

    Having observed this thread, I reckon that you are implying that there is a huge body of research, by a vast number of research teams, that could be compromised because, hey, they aren’t sharp enough to get their data analysis methodology right. And, in particular, this failing appears to be the case in climate science.

    You may truly believe this and you say someone should “make that case”.

    You have road-tested your argument here. Where to from here?

    Seriously, there is little point that I can see continuing here … if you feel strongly go and put your case into a robust proposition and submit it to a peer-reviewed journal. A blog isn’t the place – despite its attractions.

    Are you sufficiently convinced that you have found a critical flaw in climate science and committed enough to “make the case”?

    If you are, that’s great … I will await the outcome keenly.

    If not, then this will become pointless flummery.

  446. Geoff Cruickshank Says:

    Thanks VS
    I learned some interesting things from your explanations.

  447. Jimmy Haigh Says:

    [edit] Dhogaza says:

    (if you’re going to abbreviate my handle, though, it’s “dho”, not TCO’s “dhog”, it’s a type of raptor trap invented by Arabs 1,000 or so years ago, a “dho gaza”, I do raptor banding field work”

    Now we use windmills.

  448. Bart Says:

    ALL: Play the ball, not the person. And say something substantial, or don’t say it at all.

  449. VS Says:

    Guys,

    Just for the record. I firmly believe in open and transparent science.

    Save a few incidents, I have thoroughly enjoyed this thread, and I think that the contributions of (most of the) unit root ‘skeptics’ are really raising the bar.

    Consider this full fledged peer review. There is nothing as effective as a skeptic looking over your shoulder while you’re doing your analysis.

    Given that, I truly don’t understand why people are calling for me to ‘stop’ doing it here. We have access to a huge online community of experts. I welcome their opinions.

    Now, I have indeed claimed that a lot of the OLS trend analyses performed in climate science are incorrect, from a statistical point of view. I believe I have both the formal science and the test results on my side.

    Note also that when cointegration entered the game in the 80s, it buried a whole body of previously published macro-economic articles. You can imagine that the authors in question were not amused, but they conceded.

    That’s how science works.

  450. Tim Curtin Says:

    Hi BV: you replied to my last: “Don’t set up strawmen for knocking down. No one claimed that ‘the science is settled without a smidgeon of uncertainty’.” Really? I could name a lot of names, not least at the ANU here, e.g. Frank Jotzo, co-author of the Garnaut report, and Garnaut himself passim, even before his Report came out.

    BV added: “I would argue that the main tenets are reasonably clear, and there is a lot of uncertainty in details. What I strongly argue against is the implicit claim of many that uncertainty is the same as knowing nothing. It’s not. Besides, more uncertainty means higher risk”.

    I suggest you read Skidelsky on Keynes (2009); risk and uncertainty are not the same thing at all. Insurers against risk (e.g. death) generally fare better than banks in financial crises, because their risks are actuarially based, at least until they move into banking like AIG, which took on all kinds of ‘risks’ that were actually uncertainties (like political risk, as in Greece) – “the use of ‘risk’ to cover uninsurable contingencies [like climate change] conveys a spurious precision” – like the spurious correlations that VS has demonstrated.

    Kiehl’s paper remains rubbish, at least until you mount a better defence using evidence rather than his joke models.

    [Reply: Another strawman. I didn’t say uncertainty and risk are the same. I said that more uncertainty in the case of climate science (eg in climate sensitivity) means a higher risk (because the chance of catastrophic effects increases). BV]

  451. Paul_K Says:

    VS,
    Thank you for some fascinating, high quality input. I admire your energy, if not your patience. I would suggest that you take your own advice, stick to your subject matter, ignore the [edit] individuals who go straight to ad hom in the absence of anything useful to say.

    I would also caution you against trying to defend the G&T paper. I recognise the context in which you made your comments. At the same time, I would say that one of the primary conclusions of the G&T paper – breach of the 2nd law of TD by AGW theory – is, to put it mildly, highly questionable. Your detractors will attach your name to a support of the paper, which is by no means well supported by scientists on either side of the AGW debate, and forget the context in which you made your comments. I believe that that would be a great pity because your stats comments have been invaluable in my view, and should not be diminished in such a way.

    I fully support your quest for rigorous statistical methodology.
    Let us hope that over time, climate scientists themselves will see the essential need to respond to the challenge that they must upgrade their understanding and application of statistical theory.

    In my view, this is long overdue, not just in the methodologies for testing correlations in time-series, and very obviously in paleoclimatology, but perhaps more importantly in the “cause and attribution” studies founded on tuned GCMs and summarised in IPCC AR4.

    It genuinely surprises me that several posters here, who lay claim to some sort of scientific background do not grasp the critical importance of statistical tools for the expression of confidence in the validation of ANY mathematical model or hypothesis against empirical or experimental data. The central argument against you (apart from poor understanding of TSA) seems to be along the lines that we don’t need statistics because the answer is already there in the physics.

    Without some such attempt at rigorous quantification of uncertainty between and within models, and using the best tools available, the results will always remain a matter of faith in the absolute rightness of the underlying governing equations, as well as faith in the translation of such equations into a phenomenological FD form, with all of the potential sources of error implied by such a translation process.
    This of course leads to a dangerous circular argument:- we know that the models are right because the physics are right, and the physics must be right because the models say that there is no other explanation; there is no other explanation because we know that the physics are right.

    Well, it may even be true, but the logic supporting it is fallacious. I know of no way to break into this fallacious logic other than (a) a willingness to consider and test other models which might explain the observations better (to demonstrate that they really don’t!) and (b) the rigorous application of statistical tools to sort the wheat from the chaff.
    Before anyone screams at me that I must be a denier of basic physics, I would pose a serious question from my own personal list of uncertainties in the physics:-
    I can apply Beer-Lambert to a 100% CO2 phase and find Einstein A and B coefficients without too much difficulty. CO2 lasers have been around for quite a while. Now can anybody point me to experimental data that tells me how to calculate the Einstein B coefficient for CO2 (or directly estimate the degree of kinetic thermalisation on a >10m scale) to be expected in a known mixture of gases which include other dipolar and diatomic molecules? The only experimental data I have seen (Heinz Hugg in 2000 or 2002?) suggested a variation of scattering with composition – something which, as far as I can tell, is not accounted for in any atmospheric radiative model. Does anyone have a reference to any updated experimental data?
    Are we so sure of the physics that we can abandon statistical discrimination between models?

    [Reply: This whole discussion has no bearing on climate models (GCM’s). It has bearing on OLS regression. BV]

  452. Adrian Burd Says:

    VS,

    Please correct me if I’m wrong, but your argument seems to be that the statistical methods used to estimate temperature trends and confidence estimates of those same trends in climate data are invalid (because of the presence of a unit root). This seems to be an important point – physics aside for the moment.

    So, how do we correctly estimate the trend, or is this impossible to do, given the data at our disposal? Is it impossible to say, from the present data alone, that global average temperatures have been increasing? Is it also impossible to say, from the data alone, that one of the major factors influencing any trend in global temperature is the rise in atmospheric CO2? I think these are really important points if this discussion is be carried forward in any meaningful way.

    I appreciate that the following will get statisticians all a-twitter, but most scientists approach a problem from (at least) two directions – the data and theory. This seems to be particularly true in the environmental sciences. Hopefully these meet in the middle.

    Adrian

  453. dhogaza Says:

    It genuinely surprises me that several posters here, who lay claim to some sort of scientific background do not grasp the critical importance of statistical tools for the expression of confidence in the validation of ANY mathematical model or hypothesis against empirical or experimental data. The central argument against you (apart from poor understanding of TSA) seems to be along the lines that we don’t need statistics because the answer is already there in the physics.

    No, it’s the fact that the conclusions arising from this particular analysis can be shown to be non-physical. Therefore, there’s an error somewhere in the analysis. It’s not that statistical analysis is unwanted, it’s that *erroneous* statistical analysis is unwanted.

    Let the self-proclaimed statistics expert figure out where B&R went wrong. He’s mysteriously silent about it.

    So, again, VS: where did B&R go astray? They’re wrong. Their results are physically impossible, and the impossibility has nothing to do with climate science specifically.

    Show us where they’re wrong and perhaps you’ll earn some respect.

  454. KenM Says:

    VS,
    Please allow me to add to the chorus thanking you for your contribution here. I’ve learned a lot.
    I keep going back to Tamino’s latest analysis, where he applied climate forcings as a covariate to the ADF test – the ‘CADF’ test.

    This struck me as a somewhat nonsensical proof of an absence of a unit root, since the climate forcing numbers are derived to explain the fluctuations in the temperature record. How could he *not* get the answer he wanted?(!)

    You touched on it briefly, mentioning that the climate forcings themselves contain a unit root, but I’m wondering if you might care to expand on why “climate forcings” make for bad covariates?

    [Reply: ? Of course it makes the most sense to use our estimate of climate forcings as the underlying forced trend. Actually, then you’d still have to account for phenomena such as ENSO, which are strictly speaking not a climate forcing but work to redistribute energy across the system (notably ocean-atmosphere), and as such do influence atmospheric temperatures. I’d be curious to see the test results in those cases (as indeed Tamino did with the net forcing). Climate model output could perhaps also be used, since it accounts for how the forcings actually influence the temperatures. BV]

  455. KenM Says:

    dho, you mentioned tamino has a new paper out in GRL. I can’t find it. Which issue?

  456. Scott Mandia Says:

    VS,

    I wish you to keep posting here and I thank you for your time. There is nothing wrong with a dissenting view if you can back it up.

    As I stated much earlier in this thread, I still think we need to use the precautionary principle with regard to emissions. We have nothing really to lose by limiting GHG emissions and almost everything to lose if we do not. Of course, “we” meaning the average person and not those in the fossil fuel and related industries.

  457. Shub Niggurath Says:

    Mr Scott Mandia
    Using the ‘precautionary principle’ is a step down from the earlier position of the anthropogenic camp. It goes from “we are the ones causing the warming” to “let’s shut down emissions anyway”. Right now, a lot of the scientific intelligentsia (re: Lindzen’s entire east and west coasts remark) think this way. Which raises the question: what do you need the theory of AGW for?

    Scientists using the ‘precautionary principle’ – which is nothing but nonsense masquerading as reasonable science – raises the spectre of a lot of scientists having arrived at their AGW promotion via their environmentalism.

    If actions due to the theory of anthropogenic warming flow from its scientific conclusions, one should NOT do anything about ’emissions’ if the theory does not hold up.

    The theory of anthropogenic warming should not be used to derive support or funding for alternative energy sources.

    Regards

    [Reply: ? Nobody “needs” AGW theory. Scientists just try to understand the climate system. Bart]

  458. KenM Says:

    Of course it makes the most sense to use our estimate of climate forcings as the underlying forced trend. Actually then you’d still have to account for phenomena such as ENSO which are strictly speaking not a climate forcing but work to redistribute energy across ths system (notably ocean-atmosphere), and as such do influence the atmospheric temperatures. I’d be curious to the test results in those case (as indeed Tamino did with the net forcing). Climate model output could perhaps also be used, since it accounts for how the forcings actually influence the temperatures. BV

    Actually, it makes no sense at all to use this forcing as the covariate (underlying trend), since essentially that is what is being challenged.

    I say it’s a random walk (I don’t really, just for argument’s sake).

    You say it’s not, and the random appearance is explained by these forcings “X”.

    You say CADF proves it when (and only when) you use those forcings as the covariate.

    The forcings were created to explain the fluctuations in the temperature record. Some are better than others. Some, like aerosols, are suspect: e.g. I can create my own aerosol estimates, modify the climate forcings Tamino used accordingly, and voila – the unit root is back.

    [Reply: You’re trying to create an image of circular reasoning which is not there. The climate forcings are not being challenged here at all. And they are not fitted to the temp record either. They are based on a combination of observations and radiative and other physics. Deal with it. BV]

  459. dhogaza Says:

    dho, you mentioned tamino has a new paper out in GRL. I don’t find it. Which issue?

    Hmmm, it’s been accepted, perhaps it hasn’t appeared yet. GF et al 2010.

  460. Ian Says:

    KenM, are you assuming that the forcings are derived from the temp trend itself? If that were true, I think you’d have a point – but the forcings aren’t derived by trying to find a fit with the temp data.

  461. Rattus Norvegicus Says:

    The paper is actually going to be in JGR.

  462. dhogaza Says:

    RN: oops, my bad, thanks for the correction.

  463. KenM Says:

    KenM, are you assuming that the forcings are derived from the temp trend itself? If that were true, I think you’d have a point – but the forcings aren’t derived by trying to find a fit with the temp data.

    If they did not, then how did they measure the mass and mixing ratios of atmospheric aerosols from 1945? It was my assumption that they created models with adjustable parameters including the mass and mixing ratio of aerosols. They adjusted the masses and ratios, applied the proper physics, and then confirmed the theory by noting the effect it should have had on temperature.
    Obviously if the expected effect did not match the observed change in temperature, they would have to:
    a) come up with a supplemental theory (something other than aerosols)
    b) change the mass and or ratios of aerosols in the model
    c) something else I can’t think of

    My cursory review of the literature suggests (b).

    [Reply: Knowledge of industrial output and the typical emissions of said industries; knowledge of emissions of SO2 and other aerosol precursors and their relation with aerosol properties. There are observations and physics behind the whole story, despite your assertions to the contrary. BV]

  464. Paul_K Says:

    Dhogaza,
    You wrote:
    “No, it’s the fact that the conclusions arising from this particular analysis can be shown to be non-physical. Therefore, there’s an error somewhere in the analysis. It’s not that statistical analysis is unwanted, it’s that *erroneous* statistical analysis is unwanted.”

    It would help me to understand your perspective if you could be specific about what conclusions you believe to be non-physical. Thanks

    [Reply: My take on that question is here and here. Basically, temps being a random walk is inconsistent with energy balance considerations (a.o. conservation of energy). BV]

  465. Scott Mandia Says:

    Shub,

    You missed my point.

    In the extremely unlikely event that somehow AGW is wrong, there is still nothing to lose by reducing carbon emissions and becoming more energy efficient.

    In the extremely likely case that AGW is correct, then doing nothing about reducing emissions will be a great tragedy.

    You should take a few hours/days to watch the Manpollo videos:

    http://manpollo.org/education/videos/videos.html

    A few quotes that drive the point home:

    “What’s the use of having developed a science well enough to make predictions if, in the end, all we’re willing to do is stand around and wait for them to come true?” – Nobel Laureate Sherwood Rowland (referring then to ozone depletion)

    “Scientific knowledge is the intellectual and social consensus of affiliated experts based on the weight of available empirical evidence, and evaluated according to accepted methodologies. If we feel that a policy question deserves to be informed by scientific knowledge, then we have no choice but to ask, what is the consensus of experts on this matter.” — Historian of science, Naomi Oreskes of UC San Diego

    “We built an entire foreign policy based on responding to even the most remote threats. Shouldn’t we apply the same thinking to a threat that is a virtual certainty?” — Daniel Kurtzman, political satirist

  466. Paul_K Says:

    Dhogaza,
    You also wrote:-
    “So, again, VS: where did B&R go astray? They’re wrong. Their results are physically impossible, and the impossibility has nothing to do with climate science specifically.”

    Again, can I ask you to be specific about what results are “physically impossible”? Your previous questions on the subject appear to have been asked and answered. What is still outstanding for you? Thanks

  467. MikeN Says:

    Wasn’t that the logic behind nuclear winter? I’m not going to say the science is wrong, because who wants to support nuclear war? So I’ll endorse the idea that a nuclear explosion will lower the planet’s temperature by as much as 35C.

  468. dhogaza Says:

    Again, can I ask you to be specific about what results are “physically impossible”? Your previous questions on the subject appear to have been asked and answered. What is still outstanding for you? Thanks

    Answered by whom? I ignore/don’t read Tim Curtin; you should too.

    The claim that 1 w/m^2 forcing from CO2 will lead to only 1/3 as much warming as 1 w/m^2 from solar insolation is *absurd* and *unphysical*.

    The claim that a CO2 molecule’s ability to absorb IR “fades” quickly is *absurd* and *unphysical* (if it were true, it would require an ever-increasing amount of CO2 just to maintain the planet at its current temperature, all things being equal).

    You can look this stuff up … or go ask some physicists on a physics forum.

  469. David Admonson Says:

    dhogaza Says:
    March 19, 2010 at 05:11
    Dho,
    I did read your [edit. Pot, kettle] posting BEFORE it was edited, for which I am waiting for an apology.

    VS, thank you for your contribution and patience; you are a gentleman.
    I think KenM has asked an important question: “you touched on it briefly, mentioning that the climate forcings themselves contain a unit root, but I’m wondering if you might care to expand on why ‘climate forcings’ make for bad covariates?”
    Would you care to comment?

  470. adriaan Says:

    adriaan Says:
    March 18, 2010 at 01:33

    @nigguraths,

    That is because the model-derived results are openly discussed as being as important as, or even more important than, actual observations. This point has also been raised by VS in one of his first posts on this thread.

    @Bart,

    The observations are not in agreement with the physical model, so the observations must be wrong.

    What can I do?

    [Reply: Strawman argument. No scientist has made such a claim. The observations *are* in agreement with the physical models. Hansen states in pretty much any talk he gives that our understanding of climate change is based on three pillars: current observations, paleodata, and physics-based modeling. He notes that the former two are the most important. BV]

    The essential message of what VS was telling us is that even if the models are in agreement with the observations, that does not prove that the models are right. And that Hansen tells so every time is for me reason to look further (personal motive).

    What VS is telling us is that by treating the problem in a different way, as a random walk problem, the apparent correlation between the increase of the CO2 level and the temperature rise becomes non-significant. And even if the physics is firm, stable, solved, whatever, this is not true, and will not become true. You cannot end a discussion by stating that the science is settled. I am a biologist; we are rewriting our science by the day. What was true yesterday is proven false today. The science cannot be settled. If one receives signals that a given interpretation of data can also be interpreted by different means and give different conclusions, then you ought to revise your theory. And this is no strawman argument. I can think for myself. Which is something a lot of people apparently cannot do, or simply refuse to do.

    [Reply: VS has later clarified that he did not mean that temps are a random walk. Only that they contain a unit root, which has consequences for OLS regression. Not for climate models, not for AGW. See here.

    VS, will you do me favor and set all these people straight who want to walk away with your thesis here and claim all kinds of things that are unsupported?

    BV]

  471. Ian Says:

    adriaan – surely the whole of biology isn’t rewritten by the day? All sciences have problems and areas that see a lot of activity and progress from time to time, without having to overturn the discipline.

    A few people have mentioned the notion of resolving a conflict between observations and models. I’m not suggesting that observations in general should be underweighted, but it’s interesting to note that GCMs in several cases were discrepant with observations, and the discrepancies were resolved in favor of the models once better data were available. For instance, CLIMAP ocean temps were revised down, close to model results, and MSU data suggesting cooling (or less warming) were corrected, bringing them in line with models.

  472. adriaan Says:

    @Ian,

    What I am objecting to is the fact that something like GCMs are complicated sets of rules based on physics, but the ensemble of rules is treated as being physics. It is not. In biology, the entire set is not rewritten every day, but every day we find new interactions between already known components. How can you be sure that new interactions within your models (and they are models, nothing more) would not reveal new findings? No GCM is able to deal with compartmentalisation of energy. This is one of the major challenges in biology. And I can do thousands of experiments per week. All you are doing is tweaking the parameters of your models to agree with observations. But this is not synonymous with understanding what is actually happening in the climate. A model is a primitive abstraction of reality. And it works as long as reality allows it to.

  473. Shub Niggurath Says:

    “In the extemely unlikely event that somehow AGW is wrong, there is still nothing to lose by reducing carbon emissions and becoming more energy efficient.”

    Internal combustion of crude oil/gas derivatives is among the most energy-efficient modes of power production ever invented and improved upon. Fossil fuel consumption is the foundation of Western civilization, especially in the Northern hemisphere.

    Compare that with ‘green’ wind and solar power, for example. Abysmal output, requiring monstrous government subsidies derived from taxation of human productivity which is itself based on fossil-fuel burning, and most importantly – no input control whatsoever – that’s what these things are. Very energy efficient indeed! :)

    Yes – there is nothing wrong in becoming energy -efficient. You do not need a theory of anthropogenic warming for that. That was my point to begin with.

    Your Daniel Kurtzman and Naomi Oreskes quotes could just as well be turned on their head.

    For example, the Oreskes quote implies I should ‘trust’ the experts. Given that trust in climate science is at its nadir right now, for well-founded reasons, and that many of these experts seem, from their own words, to be philosophical lightweights, I think I’ll do fine on my own, thank you. I am speaking from personal experience here; just examine the tenor of the posting from the AGW camp regulars at RealClimate etc. – I can name names, and VS, a newcomer to the climate blogs, noticed the same thing. Do you think they inspire trust? None of them sound like experts – more like street-thugs. I’ll say this once more: the Latin root of the word ‘doctor’ is docere, to teach.

    Regards

    [Reply: And what’s the word for ‘learning’? And ‘listening’? And ‘humility’? BV]

  474. ianl8888 Says:

    Thanks for this thread – it was extremely interesting on a large number of levels, and I hope we see many more examples on other aspects of AGW

    I have kept a copy of the whbabcock post (March 17) as the most accurate summary of the various elements at play here

  475. adriaan Says:

    Dear Bart,

    You try to hide the things that VS has shown. And I think VS was more right than wrong, without knowing anything of your nice models. Let me explain. In the IPCC report, WG1, chapter 2, page 213, note a: this formula models the atmospheric CO2 concentration (right?). Can anyone explain the physical basis of this formula? Dhogaza?

    [Reply: Arrhenius? Tyndall? Or the Rabett. Bart]

  476. MP Says:

    @VS, Bart

    The statistical analysis performed by econometricians leads to two major statements. First, several tests suggest that the global surface temperature time series has integration order I(1). Secondly, several anthropogenic radiative forcings (ARFs) have integration order I(2). I think that these findings do not necessarily contradict the current understanding of AGW.

    1. I(1) for global surface temperature
    This finding does not mean that global T is a pure unbounded random walk; a series dominated by a deterministic linear trend, for example, can also test as I(1). In fact this would actually fit with a linearly increasing forcing, e.g. log(CO2) over the last 50 years. Because the global T dataset is relatively short and is dominated by an increasing trend, it is more likely that a unit root is found; if a longer time series were used, the integration order would be I(0). Temperature over longer time scales is bounded and therefore stationary (temperature cannot run away, because of energy conservation).

    2. The integration order of global T (I(0) or I(1)) is not the same as the integration order of the ARFs (I(2)); therefore global T cannot be determined by ARFs.

    At first sight this statement seems valid; however, the variability in global T is not determined by anthropogenic forcings alone, but also by natural variability like ENSO, volcanic eruptions, solar variation etc. Only if the variability in global T (first order difference) were purely determined by ARFs should the integration orders be the same.

    To investigate the above statement I obtained and normalized the time series for ENSO and ARFs
    ENSO:
    ftp://www.coaps.fsu.edu/pub/JMA_SST_Index/jmasst1868-today.filter-5
    sum of anthropogenic forcings :
    http://data.giss.nasa.gov/modelforce/RadF.txt

    Using these two datasets I created artificial temperature series T = (1-f)*E + f*F, where I varied the relative contributions of ENSO and ARFs from pure ENSO (f=0) to pure ARF (f=1) and checked the integration order of each T-series using the matlab ADF test, allowing up to 2 lags. I also obtained and normalized the GISS global T dataset and plotted it together with the artificial T-series for comparison. Using 2 lags I also obtain integration order I(1) for the GISS time series (with no lags I obtained I(0)).

    The results are plotted in the figure linked below:

    The results show that for f=0-0.5 the integration order is I(0), for f=0.6-0.9 it is I(1), and only for f=1.0 (pure ARF) is the integration order I(2). This clearly demonstrates that mixed time series will give a mixed integration order. Moreover, adding only a little bit of noise to the ARF series already lowers the integration order from I(2) to I(1) (note the remark by Eli Rabett). Hence the conclusions by B&R are premature and are not supported by a more detailed analysis of the different sources of variability.

    Furthermore, I’d like to note that on the time scale of decades CO2 should show at least integration order I(1), because humans are adding CO2 to the atmosphere incrementally. This notion is consistent with global T being close to I(1); and again, finding I(1) is no reason at all to assume that global T is a pure non-deterministic random walk.
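
    For readers who want to poke at this themselves, here is a minimal sketch of MP’s mixing experiment in Python (MP used matlab). The linked JMA and GISS series are replaced by synthetic stand-ins – a stationary AR(1) process for ENSO and a stochastic I(2) ramp for the summed forcings – so the snippet is self-contained; all parameter values are illustrative assumptions, not MP’s actual code or data.

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        n = 130  # roughly the length of the instrumental record, in years

        # Stationary AR(1) stand-in for the ENSO index
        e = np.zeros(n)
        for t in range(1, n):
            e[t] = 0.6 * e[t - 1] + rng.normal()

        # Stochastic I(2)-like stand-in for the summed anthropogenic forcings
        f_arf = np.cumsum(np.cumsum(rng.normal(0.01, 0.05, size=n)))

        def zscore(x):
            return (x - x.mean()) / x.std()

        def integration_order(x, max_d=3, alpha=0.05):
            """Difference x until the ADF test (2 lags, as MP used) rejects a unit root."""
            for d in range(max_d):
                if adfuller(x, maxlag=2, autolag=None)[1] < alpha:
                    return d
                x = np.diff(x)
            return max_d

        for f in np.linspace(0.0, 1.0, 6):
            T = (1 - f) * zscore(e) + f * zscore(f_arf)
            print(f"f = {f:.1f}: apparent order I({integration_order(T)})")

    The exact orders found will vary with the random draw, the sample length and the lag choice – which is, of course, part of MP’s point.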

  477. adriaan Says:

    With regard to radiative forcings and positive feedbacks, I would like to draw your attention to
    http://www.nature.com/nature/journal/v463/n7280/edsumm/e100128-07.html

    Which, in my humble opinion, shows that your positive feedback has been severely exaggerated.

    [Reply: Read again. BV]

  478. dougie Says:

    VS, fresh air at last.
    get over to

    Anomaly Aversion

  479. adriaan Says:

    Is this all physics? Or what is it? Please explain.

  480. Rattus Norvegicus Says:

    Except that the additional CO2 released as a result of increased warming is not the principal feedback. It is the increase in absolute humidity.

    The paper you pointed to is interesting; there is a discussion of it here. Bottom line: Frank’s estimates for the historical period studied — 1050AD to 1800AD — come in at the low end of the previously estimated range and don’t affect the fast-feedback estimate of climate sensitivity, which is currently what the IPCC quotes: 2.5C – 4.0C with a best estimate of 3C.

  481. tgv Says:

    If radiation in must equal radiation out to satisfy this rather odd version of the 1st law of TD that is promoted here, then how do you account for inductive energy transfer and tidal energy transfer (wobble)?

    I would suggest that it is the height of arrogance for climate scientists to divine what is ‘physical’ and what is ‘not physical’. Much is not known about the energy balance of the earth.

    [Reply: I would suggest that it’s the height of arrogance to claim (unhampered by evidence or understanding, apparently) that a whole scientific field has it radically wrong. Take it elsewhere. BV]

  482. adriaan Says:

    @Rat,

    Whatever, but what they showed is that the magnitude of the positive feedback is factors lower than IPCC has estimated. Or not?

  483. adriaan Says:

    @Rat,

    And they did use the flat hockeystick proxies in their calibrations, which means the factual feedback should be much lower if we take into consideration that there was a MWP followed by the LIA?

  484. Tim Curtin Says:

    Adriaan said “You (Bart) try to hide the things that VS has shown. And I think VS was more right than wrong, without knowing anything of your nice models. Let me explain. In the IPCC report, WG1, chapter 2, page 213, note a. This formula is modelling atmospheric CO2 concentration (right?).” No, wrong.

    Adriaan added: “Can anyone explain the physical basis of this formula? Dhogaza?” Not the latter, nor anyone, there is no basis of any kind for it.

    The formula states clearly that the IPCC (i.e. the authors of WG1, ch. 2) has simply chosen to define “the decay (sic) of a pulse (sic) of CO2 with time t” by the formula.
    Atmospheric CO2 does not decay, although as much as 57% of the “pulses” since 1958 (running at c. 10.5 GtC p.a.), and at least 15% p.a. of the basic stock (c. 760 GtC in 2000, Houghton J., 2004, p.30), is taken up by the global biota in the process of photosynthesis, without which not even Bumble would be around. The formula, typical of WG1, does not refer to the circulation of CO2. I would not want any of the authors of WG1 in charge of the stockrooms at Kmart or anywhere else, as what we have with CO2 is a need for inventory analysis; there is a huge turnover, and no “decay”, as CO2 does not wear out but recycles from the atmosphere to living matter by photosynthesis and back via respiration. The individual molecules never die; as I recall from my primary schooling, “matter can be neither created nor destroyed”, but they can transmogrify!

    The Bern carbon cycle model referred to in the footnote cited by Adriaan basically assumes that the photosynthesis process has already terminated or soon will (in the MAGICC version used by WG1), or at least will reach a ceiling. In that case mass starvation a la Ehrlich and Holdren will surely eventuate, as devoutly wished by all at CoP 15 in Copenhagen with their determined efforts to extinguish the “pulses” (emissions) without which we would all die a lingering death.

  485. adriaan Says:

    @Tim,
    You expressed my feelings a bit more harshly than I would have done. But you seem to agree that there is no basis in the IPCC carbon model, based on the Bern carbon cycle? Is this physics? Or is this not physics?

  486. adriaan Says:

    @Tim,

    If you are right, what are we talking about on this blog?

    I like the approach taken by VS; he (or she) has taught me a lot about how to look at these data. I think I will be drinking a beer with him (or her) at a warm, sunny outdoor table somewhere in the Netherlands.

  487. Rattus Norvegicus Says:

    Adriaan, the best evidence we have right now (and it is skimpy, which is why Jones said the jury is still out) looks something like this. Compare this with the second chart in this post which shows the spatial extent of today’s warming against the same base period.

  488. VS Says:

    Bart,

    I take it that by net CO2 forcings you mean the first differences of the CO2 forcings series? (If not, my apologies, and do refer me to the right data/transformation.)

    We established here that the CO2 forcings series in fact contains two unit roots. This means that the series needs to be differenced twice in order to obtain stationarity. In other words, after taking first differences, the series still contains a unit root.

    As for that Covariate Augmented Dickey-Fuller test, as proposed by Hansen (1995) and used by Tamino: that one, too, assumes stationarity of the regressor (or ‘covariate’). In the case of the CO2 forcings (used by Tamino), this assumption is clearly violated.

    I cannot stress enough how important this is. Here are the textbook treatments of spurious regression (i.e. the consequences of ‘ignoring’ unit roots in the context of OLS inference) that I found on my bookshelf:

    – Davidson and MacKinnon (2004), pp. 609-610, Regressors with a Unit Root
    – Hamilton (1994), pp. 557-561, Spurious Regressions (very formal treatment)
    – Greene (2003), pp. 632-636, Random Walks, Trends, and Spurious Regressions
    – Verbeek (2004), p. 313, Models with Non-stationary Variables – Spurious Regressions (undergrad treatment)

    Spurious regressions are furthermore characterized by, quoting Verbeek (2004): “..a fairly high R2 statistic, highly autocorrelated residuals, and a significant value for beta”. Note that in this case it refers to regressing two unrelated RW variables on each other. The case extends to more complex specifications containing regressors with unit roots.

    Now, while we’re at it, once again for the record: a random walk implies the presence of a unit root, but the presence of a unit root (what we have established) does not, in its turn, imply a random walk process (i.e. broccoli is a vegetable, but not all vegetables are broccoli :).

    Also, for those interested in the landmark paper that brought all this about, and fetched a Nobel prize in the process: it’s Granger and Newbold (1974), entitled ‘Spurious regressions in econometrics’, published in the Journal of Econometrics.
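
    The Granger–Newbold effect is easy to reproduce for yourself. A minimal Python sketch (two independent random walks, nothing to do with climate data; the seed and sample size are arbitrary assumptions):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson

        rng = np.random.default_rng(42)
        n = 500
        x = np.cumsum(rng.normal(size=n))  # random walk 1
        y = np.cumsum(rng.normal(size=n))  # random walk 2, independent of x

        res = sm.OLS(y, sm.add_constant(x)).fit()
        print(f"slope t-statistic: {res.tvalues[1]:.1f}")  # typically |t| >> 2
        print(f"R-squared:         {res.rsquared:.2f}")    # often 'fairly high'
        print(f"Durbin-Watson:     {durbin_watson(res.resid):.2f}")  # near 0: autocorrelated residuals

    For most seeds this reports a wildly ‘significant’ slope between two series that are unrelated by construction – exactly the Verbeek symptoms quoted above.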

    At the risk of kicking a man while he’s down, I sincerely urge people to stop referring to Tamino’s analyses in the context of this discussion. In all of his blog entries he implicitly assumes temperatures (and e.g. forcings) to be a trend-stationary process. I think we have, by now, shown this not to be the case.

    Finally, I would like to reiterate that the TSA body of literature is not trivial. The unit roots we have been discussing here over the past two weeks are covered in the first chapter of Hamilton (1994), in some 10-15 pages (those pages in fact contain much more). The book itself is almost 800 pages thick, and consists for the most part (70%+) of pure formal notation (i.e. mathematical statistics packed in matrices).

    ————-

    Hi Adrian Burd,

    The ‘answer’ in this case is cointegration analysis. This allows for forecasts, for proper confidence interval estimation, and for relating the various variables of interest. The presence of a unit root doesn’t mean we cannot perform any statistical analysis. It does, however, dictate which statistical method we need to employ (i.e. cointegration analysis).

    I’ll try to find time to make a more elaborate post on cointegration in the near future. However, to give you a spoiler, here’s how the BR model specification predicts when it is estimated on the first half of the observations and projected over the second half: http://landshape.org/enm/testing-beenstock/

    Not bad, huh?

    N.B. David Stockwell referred to the specification as ‘Beenstock’s theory’. I would prefer to call it what it is: a model specification.

    ————-

    Hi MP,

    Your point (2) is slightly misleading. The whole idea behind polynomial cointegration, as proposed by BR, is that it allows for I(1) and I(2) series to be related. The different orders of integration only imply that the series cannot be cointegrated linearly, not that they cannot be cointegrated at all.

    A couple of questions w.r.t. your analysis:

    (1) In your ADF test you allowed for a maximum of 2 lags. However, when analyzing the temperature data we found 3 lags to be an absolute minimum. Why did you choose such a low ‘maximum’ level? Standard econometric software packages often set the maximum lag length higher than 10. Note that the matlab function ‘pushes’ the number of lags up to your ‘maximum’. This implies that it would ‘like’ to choose a higher number of lags, and that you have in fact obtained a ‘corner solution’ in terms of IC optimization.

    (2) What does your (test equation) autocorrelation structure look like in terms of Q-statistics?

    (3) What do the Jarque-Bera tests for normality of disturbances indicate?

    (4) What do you get if you allow AIC or HQ based lag selection, with a max lag length of, say, 10? (See the sketch after this list.)

    (5) Did you try applying the KPSS testing procedure? This would allow for testing ‘from the other side’, and might give us some indication of the robustness of your results.

    (6) And last but definitely not least: would you mind posting all your data somewhere, together with the exact transformations you employed (in terms of the columns of those two matrices you link to), so that we can replicate your findings? I found it hard to infer from your post exactly what you did.
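
    For concreteness, question (4) amounts to something like the following sketch (Python rather than matlab; the series x is a hypothetical stand-in for the normalized GISS data, which you would substitute):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(3)
        x = np.cumsum(rng.normal(0.01, 0.1, size=130))  # placeholder for the GISS series

        # Let AIC pick the lag length freely, up to a maximum of 10
        stat, pval, usedlag, nobs, crit, icbest = adfuller(x, maxlag=10, autolag="AIC")
        print(f"ADF stat = {stat:.2f}, p = {pval:.3f}, lags chosen by AIC = {usedlag}")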

    Please don’t mind the inquisitiveness, it’s ‘professional deformation’. The effort is appreciated!

    PS. It would be interesting to formulate a cointegration model, a la BR, that allows the ‘f’ parameter in your analysis to be fitted/estimated rather than assumed.

    ————-

    adriaan, finally terrace weather! Facebook went berserk today.. ;)

    VS

    [Reply: I don’t mean the net CO2 forcings, but the net ‘all’ forcings (i.e. also including aerosols, non-CO2 GHG, solar, volcanoes, etc). See e.g. here for estimates of the forcings as used in the GISS model. Before the chorus starts bashing this as being model-derived and therefore not worthy of attention, please spend some moments reading how they’re actually estimated. Hint: observations and physics. As I wrote in an earlier in-line reply, there are other factors that influence temp which are not considered a forcing, e.g. ENSO, which redistributes heat and thereby also influences the atmospheric temp without affecting the radiative balance at TOA. Then there’s the fact that the forcings don’t translate 1 to 1 to temps: due to all kinds of other physical relations that are incorporated in the models, their temp effect is e.g. ‘smeared out’. If you’d want to do a serious physics-based analysis, these kinds of things would have to be taken into account. That would be a very interesting exercise indeed. Let me know if you’d want to pursue this. BV]

  489. Rattus Norvegicus Says:

    Tim, I would like a cite to a real paper, not some accusation in the press or on a blog, that the CCCC used by MAGICC “assumes that the photosynthesis process has already or soon will terminate”. A quick check around the UCAR site yielded no clues.

  490. Rattus Norvegicus Says:

    VS, it is not net CO2 forcings, it is net FORCINGS, the sum of all forcings both positive and negative.

  491. VS Says:

    Hi Rattus,

    GHG forcings are I(2) as well. Furthermore, they cointegrate into an I(1) series. Solar irradiance is I(1).

    See BR or Kaufmann et al (2006).

  492. MP Says:

    @VS,

    I just wanted to point out that the variability in global T is rather complex and that the different components affect the observed integration order.

    If I find some time I will try to pass you the data and code.

    Regarding f, there are several multi-regression papers that analyse the different natural and anthropogenic contributions to global T. See my comment above
    MP Says:
    March 17, 2010 at 14:40

    I chose 2 lags because I found I(1) with that for GISS. If I choose increasingly more than 2 lags, the orders increase progressively… however, this weakens the ADF test. Still get a mix…

  493. VS Says:

    Hi MP,

    Information criteria actually capture the ‘trade-off’ you sketched (these are so-called ‘entropy’ measures). Try performing your analysis with the max lag length set to 10, while letting the ICs pick the lag length freely.

    I’m looking forward to your results (as well as your data)!

    Cheers, VS

  494. Rattus Norvegicus Says:

    VS, I suggest you look at CO2 forcing since 1958 using the Mauna Loa data. I would like to see your results, because that data has both a clear trend and little interannual variation.

  495. Rattus Norvegicus Says:

    Umm that should have been concentrations, forcing is log(CO2).

  496. Tim Curtin Says:

    Rattus said: Tim, I would like a cite to a real paper, not some accusation in the press or on a blog, that the CCCC used by MAGICC “assumes that the photosynthesis process has already or soon will terminate”.

    I have documented this before, in a published peer-reviewed paper of my own (Climate Change & Food Production, 2009:1101, available at http://www.timcurtin.com). It shows how Tom Wigley (formerly Director of CRU at UEA) adopted (Tellus, 1993) the Michaelis-Menten formulation of a hyperbolic relationship whereby rising [CO2] has an initial beneficial impact, cet. par., on biotic uptake by net primary production (NPP), an impact that tapers off rapidly and then hits a ceiling whereby further rises in [CO2] have zero impact on yield and permit no more NPP, which thereby no longer absorbs emissions of CO2, so that gross emissions henceforth equal net emissions. Actually, net emissions have been just 43% of gross since 1958.

    Wigley and Enting (CSIRO, 1993) formalised this assumption, and it is enshrined in Wigley’s MAGICC model, which forms the basis of the WG1 projections of [CO2] from 2000 to 2100 and beyond (WG1, chapter 8, and especially the Supplementary Material), because it limits the growth of future absorption of emissions to nil and thereby validates the Madoffian assumption (e.g. Solomon et al 2007, and in PNAS 2009) that although [CO2] grew by just 0.41% p.a. from 1958 to now, from 2000 to 2100 and doomsday it grows at at least 1% p.a., if not more.

    Now, because Wigley is not an economist, and so-called economists like Stern & Garnaut never question THE science, nobody asks whether the Michaelis-Menten assumption, which is perfectly valid for that tomato now, is valid for all tomatoes at all times. It is not, as varieties improve every year, and there is no law (yet) stopping me from starting to plant tomatoes in that unused area of my veggie patch, despite the MAGICC claim that they can never grow and absorb CO2.

    Apologies, Bart, for the length, but a serious question deserved a full answer, and it is not OT, because the rate of growth of [CO2] determines the growth of radiative forcing, and when that is exaggerated, it is not surprising that forward projections of RF from 2000 have already failed to yield the predicted rise in GMT.

    So Rattus, BV, Adriaan, and VS, close down your greenhouses NOW, for whatever you do you will never, as per Wigley, be able to increase their uptake of CO2! But some day I hope to join you all in a beer (full of CO2 as it is).

  497. Rattus Norvegicus Says:

    I hate to say this Tim, but E&E? And what you claim in your post is a far cry from “assumes that the photosynthesis process has already or soon will terminate”.

  498. HAS Says:

    The problem with blogs is that you can very quickly get diverted into areas outside the current issue under debate. The issue here is about the empirical properties of two time series central to climate change, and what the implications might be for climate change models.

    Now, rather than a straightforward discussion, I found myself tripping over some sensitivities about climate models being derived from physics, so that the statistical issues could supposedly have no implications for the models and their results.

    My instinct was that this had to be just nonsense and surprising from those active in the field (I somehow didn’t believe that neither the physical sciences were sufficiently advanced nor computing power sufficient to develop a deterministic model from first principles sufficiently rich to describe and predict climate. I should add that I did know that statistical issues abound in the estimation of the two time series mentioned, and that there was room for improvement.

    So I thought in all fairness I should check at the IPCC AR4.

    And of course the use of statistical parameter estimation abounds in these models, particularly in those areas most germane to the impact of GHGs. In addition to parameter estimation, these models are tuned to improve their performance, which also involves statistical analysis comparing model results to actual data (and no doubt the use of the very time series that are the subject of this thread).

    So dhogaza and BV, these kinds of statistics have real implications for your science (as a number of other commentators have ably, but less directly, pointed out).

    VS: just as an aside, since temperature at a location is strongly correlated with temperature close by, I assume that the estimation of errors in estimates of grid temperatures could potentially suffer from issues similar to those raised for these time series?

  499. VS Says:

    Very good point, HAS! We’re not there yet (patience, we’re going through the matter at a snail’s pace), but again, that’s a very good point!

  500. Tim Curtin Says:

    Rattus: that is very glib. Name the journal that would publish anything pointing out the impact of cutting atmospheric CO2 on food production. Nature? I tried – see my Note (also on my website), which Nature declined to publish, pointing out that Meinshausen et al, in two papers in Nature of 30 April 2009, explicitly assumed zero uptakes or worse.

    Here is what Nature’s leader endorsing Meinshausen et al and Allen et al (both in Nature, 30 April 2009) had to say: “The 500 billion tonnes of carbon that humans have added to the atmosphere lie heavily on the world, and the burden swells by at least 9 billion tonnes a year (sic)” (p.1077), even though the actual increase in the atmospheric concentration of CO2 (i.e. [CO2]) recorded at Mauna Loa between May 2008 and May 2009 was only 1.68 parts per million by volume (ppm), equivalent to 3.56 billion tonnes of carbon (GtC) – implying that it is TOTAL cumulative or annual emissions that determine climate change, not the atmospheric concentration that emerges after taking into account net uptakes of carbon emissions.

    Allen et al in Nature 30 April 2009, SI, stated explicitly (SI, p.6): “the terrestrial carbon cycle model has both vegetation and soil component stores. The vegetation carbon content is a balance between global average net primary productivity (NPP) *(parameterized as a function of atmospheric carbon dioxide, which asymptotes to a maximum value multiplied by a quadratic function of temperature rise in order to represent the effect of climate change)* and vegetation carbon turnover” (my italics in asterisks). Thus the Allen paper explicitly assumes that net carbon uptakes become first zero and then negative as allegedly “climate change” reduces NPP.

    So if I cite that in E&E you infer it is not what they said?

  501. Tim Curtin Says:

    Rattus, further to my last, the source of that assumption in Allen et al 2009 is, as I said, Wigley 1993, who rejects the logarithmic form for projecting uptakes of CO2 by the biosphere:

    NPP = N0 * (1 + beta * ln(C/C0))    …A1

    in favour of

    NPP = [N0 * (C - Cb) * (1 + b*(C0 - Cb))] / [(C0 - Cb) * (1 + b*(C - Cb))]    …A2

    Wigley’s A1 “allows NPP to increase without limit as C increases” (which has always been the case so far; see Curtin 2009), so he says it should be replaced by A2, whose hyperbolic form ensures that NPP reaches a ceiling with respect to increases in [CO2] – around 2000 according to WG1 – for which there is no evidence (see Knorr W., GRL, 2009, if you don’t believe me). Allen’s contribution is to make it quadratic, so we should already be seeing declines in total world NPP. Are there any?
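
    The difference between the two forms is easy to see numerically. A small Python sketch with purely illustrative parameter values (N0, C0, Cb, beta and b below are assumptions chosen for display, not Wigley’s calibrated numbers):

        import numpy as np

        N0, C0, Cb = 60.0, 280.0, 31.0  # baseline NPP (GtC/yr), reference and floor [CO2] (ppm)
        beta, b = 0.6, 0.02             # illustrative response parameters

        C = np.linspace(280, 1200, 5)
        npp_a1 = N0 * (1 + beta * np.log(C / C0))  # A1: grows without limit
        npp_a2 = (N0 * (C - Cb) * (1 + b * (C0 - Cb))
                  / ((C0 - Cb) * (1 + b * (C - Cb))))  # A2: saturates to a ceiling

        for c, a1, a2 in zip(C, npp_a1, npp_a2):
            print(f"[CO2] = {c:6.0f} ppm: A1 = {a1:5.1f}, A2 = {a2:5.1f} GtC/yr")

    Both forms agree at C = C0 by construction; A2 then flattens toward a fixed ceiling while A1 keeps rising, which is the whole dispute here in two lines of arithmetic.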

    It is A2 and its built-in ceiling on increases in NPP that determines the projections in MAGICC, which was developed by T.M.L. Wigley, S. Raper and M. Hulme (all of CRU/UEA) and is available at http://www.cgd.ucar.edu/cas/wigley/magicc/index.html.

    WG1 describes its use of MAGICC in 8.8.2. I have MAGICC, and it has no module for overruling A2; it thus has its limitations as a computer game, which is all MAGICC is, and a very bad one at that.

  502. HAS Says:

    I can’t believe that I actually wrote:

    “I somehow didn’t believe that neither the physical sciences were sufficiently advanced nor computing power sufficient to develop a deterministic model from first principles sufficiently rich to describe and predict climate.”

    I have tried to parse this but have totally failed!

    What I don’t believe is that science can produce the model, nor that computers could model it.

    VS: when you get round to it, my searching here started with “Uncertainty estimates in regional and global observed temperature changes: a new dataset from 1850” (2005, P. Brohan, J. J. Kennedy, I. Harris, S. F. B. Tett & P. D. Jones) and went back from there to “Estimating Sampling Errors in Large-Scale Temperature Averages” (1997, P. D. Jones, T. J. Osborn, and K. R. Briffa). This makes adjustments for intercorrelations, but I’d be interested to know how it stacks up against more recent developments in the field. Also, if I understand it right, the SE equation used by Jones is derived from precipitation models and data, and depends upon estimates of inter-site correlations derived from empirical relations, without carrying the variances in those estimates through to the estimates of the SE.
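
    HAS’s concern can be illustrated with the textbook equicorrelated case: if station anomalies share a pairwise correlation rho, the variance of their average is sigma^2 * (1 + (n-1)*rho) / n, which stops shrinking as n grows. A toy Python sketch (sigma and rho are assumed round numbers, not Jones’s estimates):

        import numpy as np

        sigma, rho = 0.5, 0.7  # per-station anomaly s.d. and pairwise correlation (assumed)
        for n in (1, 5, 20, 100):
            correlated_se = np.sqrt(sigma**2 * (1 + (n - 1) * rho) / n)
            naive_se = sigma / np.sqrt(n)
            print(f"n = {n:3d}: correlated SE = {correlated_se:.3f}, naive SE = {naive_se:.3f}")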

  503. IanH Says:

    Rattus Norvegicus @ 05:30
    “I hate to say this Tim, but E&E?”

    Please Rattus, don’t go there; don’t try and pretend we’ve not read the Climategate & NASA emails. A claim to authority, or the lack of it, in the peer-reviewed journals suggests you’ve not read them, or understood their implications.

    [Reply: Referring to E&E is a bit like referring to the Journal of Creation Science as a refutation of evolutionary biology. It’s not even listed in the ISI, the editor said that she’s following her own political agenda, and indeed it’s basically an outlet for anything, no matter how absurd, as long as the message is “AGW is wrong”. BV]

  504. Alex Says:

    It seems that most readers of this blog by now have acknowledged the importance of testing for the presence of a unit root, since the presence of one will have serious consequences for OLS. From time to time though, the unit root issue is still mixed up with the random walk hypothesis. As I stressed earlier, a random walk model contains a unit root, but the presence of a unit root doesn’t mean that the temperature series is a random walk. In my previous post I actually tested the random walk hypothesis and concluded that, on the basis of statistical testing, temperature is not a random walk. I repeat it here, because it is important not to mix these two things up.

    Some have also been asking what the implications are of a unit root for physical or climate theories, since the focus here has been mainly on the implications for statistical tests. Whether a unit root has any implication depends on what the theories say (implicitly) about the presence of a unit root. There are three possible situations.

    Situation 1:
    Theory is indifferent to the presence of a unit root. In this case it would not matter whether there is a unit root, and so tests for a unit root cannot be used to test the theory itself. However, from a statistical point of view it will still be relevant.

    Situation 2:
    Theory excludes the presence of a unit root. In this situation the different unit root tests (if appropriate under the given circumstances) can be used to test the theory. If a test fails to reject a ‘unit root null hypothesis’ or rejects a ‘no unit root null hypothesis’, then this can be taken as evidence against the theory, since it is in clear contradiction with one of its predictions.

    Situation 3:
    Theory requires the presence of a unit root. In this situation rejecting a ‘unit root null hypothesis’ or not rejecting a ‘no unit root null hypothesis’ can be taken as evidence against the theory.

    So no matter which situation we are in, from a statistical point of view we should always test for unit roots. However, unit root tests are only relevant for physical theories in the second and third situation.

    Now, in principle it is possible to derive analytically whether a certain theory implies a unit root, though sometimes this can be quite tricky. The way to do this is by specifying the theory as a set of equations. From these equations one can derive the statistical model, from which one can derive whether or not a unit root is present, or whether there could be a unit root but it doesn’t really matter (in econometrics vernacular this is called ‘nested’).

    Several people raised the question whether a unit root is related to the amount of ‘noise’ or ‘randomness’. Maybe it is related terms like ‘stochastic trend’ or ‘random walk’ that make people think this, but a unit root has nothing to do with the random part of the model. It’s possible to have a model with an R^2 of 99% (which means that only 1% is random) that has a unit root. This is because the unit root is in the deterministic part of the equation and not in the random part. I hope this clarifies why arguments for or against the presence of a unit root on the basis of the amount of randomness are wrong. The only way to establish whether a theory predicts the presence of a unit root is via analytical derivation.

    A last point of concern I would like to raise is when people compare temperature graphs to graphs of a series with a unit root that they found on the internet. Most (if not all) graphs I have seen on the internet showing a process with a unit root show a random walk. This is understandable, since a random walk is probably the simplest model with a unit root there is. However, a random walk takes on a very distinctive pattern, which can look quite different from other models with a unit root. Just to illustrate the difference, plot the following two processes (you can do this in Excel):

    Y(t) = Y(t-2) + E(t)
    Y(t) = -Y(t-1) + Y(t-2) + Y(t-3) + E(t)

    where E(t) is white noise. These two models both contain a unit root, but neither is a random walk.
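
    For those without Excel to hand, the same two processes in Python (the seed and sample length are arbitrary assumptions):

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(1)
        n = 300
        e1, e2 = rng.normal(size=n), rng.normal(size=n)

        y1 = np.zeros(n)  # Y(t) = Y(t-2) + E(t): unit root, but not a random walk
        for t in range(2, n):
            y1[t] = y1[t - 2] + e1[t]

        y2 = np.zeros(n)  # Y(t) = -Y(t-1) + Y(t-2) + Y(t-3) + E(t): roots at 1, -1, -1
        for t in range(3, n):
            y2[t] = -y2[t - 1] + y2[t - 2] + y2[t - 3] + e2[t]

        fig, axes = plt.subplots(2, 1, sharex=True)
        axes[0].plot(y1); axes[0].set_title("Y(t) = Y(t-2) + E(t)")
        axes[1].plot(y2); axes[1].set_title("Y(t) = -Y(t-1) + Y(t-2) + Y(t-3) + E(t)")
        plt.tight_layout(); plt.show()

    Neither plot looks like the archetypal drifting random walk, which is Alex’s point.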

    @Bart

    In several comments you stated that a unit root would only have consequences for statistical methods and that it would not affect any theory on global warming, i.e. situation 1. Was this an impression you got from the debate here or did you formally derive this? And if the latter, could you show us how you did that?

    @Adrian Burd

    I agree with you about looking at a problem from two sides. Personally, I think the best way is to start with theory and use statistical analysis to test it. To start with the statistical testing and subsequently build a theory feels a bit like cheating to me, though this is definitely not a view shared by everyone within the field of statistics.

    Most trends I have seen so far in statistical analyses of AGW are of a type called deterministic trends. However, if the data contain a unit root, then they have a stochastic trend, which is of a different type than a deterministic trend. Now, if the ‘underlying mechanism’ has a stochastic trend but our model of that mechanism does not, we are essentially estimating a misspecified model. This will cause biased estimates, whether or not we include a deterministic trend. Moreover, as I explained earlier, the presence of a unit root will make many standard statistical tests invalid.

    The way to solve this problem is via a method known as cointegration. It’s a little bit technical to discuss exactly how it works, but most intermediate textbooks on econometrics will cover at least the basics. Cointegration takes into account that the two series have a unit root, so the analysis is done with a Vector Error Correction Model (VECM). What’s maybe more interesting is that it hypothesizes a relation between the two variables, which can be tested. This way it is still possible to test whether there is a relation between two variables while both of them have a unit root. So if both temperature and CO2 have one or more unit roots, then cointegration is the way to test whether there is a relation between the two. This is exactly what VS has been advocating and, from what I read here, what has been done by B&R, though I must admit that I haven’t had the time yet to read that paper, so I don’t know whether their analysis is correct.
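
    For a flavour of what such a test looks like in practice, here is a minimal Engle–Granger style sketch in Python (synthetic data: two I(1) series built from a shared stochastic trend; this is the simple linear case, not B&R’s polynomial variant):

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(7)
        n = 500
        common = np.cumsum(rng.normal(size=n))            # shared I(1) stochastic trend
        x = common + rng.normal(scale=0.5, size=n)        # I(1)
        y = 2.0 * common + rng.normal(scale=0.5, size=n)  # I(1), cointegrated with x

        t_stat, p_value, _ = coint(y, x)
        print(f"Engle-Granger t = {t_stat:.2f}, p = {p_value:.4f}")  # small p: cointegrated

    Each series on its own would defeat OLS trend inference, yet the cointegrating relation between them is recoverable and testable.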

    Alex

  505. Alan Says:

    Allow me to be the first to ask a dumb question.

    If the temperature and CO2 data have one or more unit roots, and if temperature is a function of CO2 and insolation and sulphates and feedbacks etc., will a cointegration test between (just) the temperature data and the CO2 data reliably reveal whether there is a relation between the two?
    [Reply: Statistics can say something about correlation. Physics can say something about a causal relation. BV]

  506. dhogaza Says:

    This is exactly what VS has been advocating and, as for what I read here, has been done by B&R, though I must admit that I haven’t had the time yet to read that paper, so I don’t know whether their analysis is correct.

    Well, you should. Though you can say their analysis is incorrect with certainty, from physics (with considering climate change at all), so the exercise should be … find out where they went wrong.

  507. dhogaza Says:

    without considering climate change at all …

  508. A C Osborn Says:

    Alex, HAS & VS, would the fact that the Global Temperature series is totally un-natural, i.e. massaged to death, make any difference to whether or not it has a unit root, or is I(0), I(1) or I(2) etc.?
    Has anyone tried the same test on an unadulterated temperature series form one thermometer?
    [Reply: “One thermometer” cannot measure global avg temp. Keep baseless accusations (adulterated; massaged to death) at the door before entering. Thanks! BV]

  509. A C Osborn Says:

    Should have said
    from one thermometer?

  510. Shub Niggurath Says:

    Mr BV
    You presume only climate scientists can ‘understand’ the climate. And that the rest of us unwashed and cloth-eared masses should just ‘learn’ and ‘listen’ and show humility?

    Any question against the AGW theory from within the community is shouted down. Any questions from outside the community are dismissively shooed away.

    Those who venture to discuss climate science are on a learning path – meaning they have learnt something. They are not all mindless ignoramuses. I would suggest you stop treating your audience as such. They are probably ahead of the curve on the AGW theory in its other facets.

    “Physics can say causation, statistics correlation” is an oversimplification, especially in the context of what has been discussed in this thread up to this point, and especially in the context of climate science.

    Regards

    [Reply: I don’t presume that “only climate scientists can ‘understand’ the climate”. But I do note that most who claim that AGW is all bunk do so from a logically and physically incoherent argument. If pointing that out makes me unpopular with those who love such claims, so be it. BV]

  511. mikep Says:

    For those not prepared to read 500 pages of Hamilton, there is a nice informal introduction to co-integration (using random walks as an example) via the case of a drunk and her dog, here:

    Click to access Murray93DrunkAndDog.pdf

  512. mikep Says:

    And there is a slightly less fun extension to the multivariable case here

    Click to access amstat.pdf

  513. docmartyn Says:

    “Arthur Smith Says:
    “Considering Earth’s average surface temperature as a reasonable metric (something more along the lines of total surface heat content is probably better, but average T is not a bad proxy for that)

    But the analysis VS is promoting suggests something very different – that temperature is not constrained at all, but randomly walks up and down all on its own. That can only happen if the climate system is neither stable nor unstable (since we don’t have a Venus-like runaway either) but right on the cusp of stability, with positive feedbacks exactly cancelling negative feedbacks, at least on the time scale being discussed (decades to centuries?)”

    One could have a system with a stable total surface heat content and yet have a highly variable atmospheric temperature, pressure and water content/phase.
    Cycles of wet weather or drought lasting two or more decades are the norm, not odd events.
    VS is being very well behaved; unlike many responders.

    [Reply: No, that’s not what VS is claiming (anymore). Read his newer posts and also the quick rundown here. Whether behavior correlates with being right is an open question, btw. I wouldn’t be surprised to see some randomness in that relation. BV]

  514. VS Says:

    Hi docmartyn,

    For the record, I’m not claiming that temperatures are a ‘random walk’. Watch out for that one, it’s a strawman! I’m claiming the instrumental temperature record contains a unit root, making regular OLS-based inference (including trend confidence interval calculations) invalid.

    I think this post by whbabcock here and the subsequent post by Alex just above are a good indication of my methodological take on the issue.

    I encourage you to read the whole thread though :)

    Hi mikep

    Do you happen to be the author of this book? :)

  515. docmartyn Says:

    VS, I am a neuro/biochemist and would never put words into someone else’s mouth. I just love the way equilibrium thermodynamics has been applied to a steady-state system. For instance, the black body temperature of the Earth is 5.5 °C, and as the Earth reflects about 28% of incoming sunlight, in the absence of the greenhouse effect the planet’s mean temperature would be about -18 °C.
    Hence, CO2 and water vapor must, in equilibrium, produce about 33 °C of warming. However, at the top of Everest the temperature in the high summer climbing season is about -16 °C, and in winter it falls to about -37 °C, yet the CO2 pressure is only about a third of that at sea level.

    http://www.mounteverest.net/story/ExWebseries-WinterclimbingTheBADchart,part2Dec172004.shtml

    VS, have you ever done any steady-state analysis?

    We have a good estimate of the amounts of CO2 humans generate per year, the Keeling CO2 data, the 14CO2 residency curves from the H-bomb tests (t1/2 = 12-15 years) and the pre-industrial steady state [CO2].
    It is rather trivial to work out the CO2 influx and outflux into the atmosphere.
    Sadly, people like Rabett only like box-equilibrium models, equilibrium thermodynamics and statistics with one-dimensional Gaussian variances.
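
    To show what I mean by “rather trivial”, here is a back-of-the-envelope sketch with round illustrative numbers only; whether a single well-mixed box is an adequate model is precisely what others here dispute:

    ```python
    # Single-box steady-state bookkeeping, with round illustrative numbers.
    import numpy as np

    GTC_PER_PPM = 2.13        # GtC of atmospheric carbon per ppm CO2
    c0 = 280.0                # assumed pre-industrial concentration, ppm
    t_half = 13.5             # mid-range of the 12-15 yr 14C bomb-curve half-life
    tau = t_half / np.log(2)  # corresponding e-folding residence time, yr

    m0 = c0 * GTC_PER_PPM     # pre-industrial atmospheric carbon stock, GtC
    flux = m0 / tau           # steady state: gross outflux = gross influx
    print(f"tau ~ {tau:.1f} yr; steady-state gross flux ~ {flux:.0f} GtC/yr")
    ```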

  516. tgv Says:

    “I would suggest that it’s the height of arrogance to claim (without being hampered by evidence or understanding, apparently) that a whole scientific field has it radically wrong. Take it elsewhere. BV”

    Nowhere did I claim such a thing. My claim is that *you* have it radically wrong (actually, I think you were just being imprecise with your language :) ).

    My point is that the earth is never in equilibrium (or maybe better said, is only ever instantaneously in equilibrium). Rather, it is always seeking equilibrium. There’s a stochastic component to ‘energy in’ due to variation in solar output, wobble, orbital asymmetry, albedo and a whole host of other factors (even small things like the kinetic energy transferred from meteorites and space dust constantly hitting the earth). This, in turn, leads to a stochastic element of ‘energy out’ whose phase shift is also a stochastic function due to the complexities of ocean heat content and other things. Therefore, mean surface temperature (whatever that means) is measuring a complex interrelationship of stochastic processes. There is nothing ‘unphysical’ about mean surface temperature having a random component (while still being bounded). This is different from saying that temperature is a random walk.

    To say that ‘radiation in’ must equal ‘radiation out’ is overly simplistic because it ignores the dimension of time and the irregular nature of the associated temporal distortion.

    [Reply: Weather still happens indeed. BV]

  517. Al Tekhasski Says:

    It is quite audacious to argue that physics should prevent the “global temperature” from walking around. Sometimes it is tricky to apply proper physics to complex systems far from equilibrium. I already responded that ‘radiation in’ equal to ‘radiation out’ does not imply a steady global average of surface temps, but my remark apparently was not appreciated (or understood), and was ignored. Let me try again, with a simple example (for non-physicists and others).

    Let a planet have only two climate zones, 50% equatorial with flat temperature T1, and 50% polar, with T2. Then the following example combinations will give the same OLR of 240 W/m2:
    (A) T1=295K, T2=172.8K
    (B) T1=280K, T2=219.4K
    (C) T1=270K, T2=236.9K
    (D) T1=260K, T2=249.8K
    Yet the “global average temperature” will vary from 234 K (case A) to 255 K (case D), a swing of 21 K. That’s a lot of potential for warming, all without ANY change in radiative balance. And a lot of room to walk chaotically, knowing that the atmosphere is a highly volatile turbulent system which, being quasi-2D, should theoretically exhibit Kraichnan’s inverse cascade, so low-frequency large-area fluctuations are expected, all in accord with physics.
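
    For anyone who wants to check the arithmetic, here is a small sketch; only the Stefan-Boltzmann constant and the four temperature pairs above go in:

    ```python
    # Verify that each (T1, T2) pair gives a mean OLR of ~240 W/m2
    # under a 50/50 area weighting, while the mean temperature varies.
    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    cases = [("A", 295.0, 172.8), ("B", 280.0, 219.4),
             ("C", 270.0, 236.9), ("D", 260.0, 249.8)]
    for label, t1, t2 in cases:
        olr = SIGMA * (t1**4 + t2**4) / 2
        t_mean = (t1 + t2) / 2
        print(f"{label}: OLR = {olr:6.1f} W/m2, mean T = {t_mean:5.1f} K")
    ```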

    The above example is another illustration why the “global temperature” is unphysical, and therefore application of basic physics to this “index” may give a misleading impression.

  518. Willis Eschenbach Says:

    First, my thanks to most everyone for a fascinating discussion. My conclusion is that VS (and his citations) have shown that temperature series are I(1) and CO2 is I(2). What that means is still unclear to me.

    Next, I object to the argument that ‘if X is true then much of modern physics is untrue’. For example:

    His endorsement of a statistical analysis by B&R that essentially says much of modern physics is wrong, is simply stupid.

    First, I find nothing in B&R that says “much of modern physics is wrong”. What they are saying is that you can’t use OLS etc. to relate CO2 and temperature. How does that negate modern physics? Second, there is a big difference between “modern physics” on the one hand, and the (possibly mistaken) application of some part of modern physics to a particular problem on the other hand. Overthrowing misapplications of physics is quite common.

    Next, dhogaza Says:
    March 17, 2010 at 18:38

    Any paper claiming that a 1 W/m^2 forcing from different sources results in a different climate response won’t make it into any reasonable journal in the physical sciences.

    If it gets in anywhere, I imagine it will be some economics journal.

    Say what? Since different forcings have different frequencies, why would they not have a different response? Consider a 1W/m2 change in solar vs GHG forcing on the ocean. Solar penetrates the ocean to a depth of tens of metres. Longwave is absorbed in the first mm of the oceanic skin surface. Which will cause a greater rise in the skin temperature? Which will cause a greater rise in evaporation? How will those possibly have the same climate response?

    Or you might take a look at “Efficacy of Climate Forcings”, JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 110, by Hansen et al., which says:

    We find a substantial range in the ‘‘efficacy’’ of different forcings, where the efficacy is the global temperature response per unit forcing relative to the response to CO2 forcing.

    Economics journal? … Not.

    Your certainty that your opinion is right is misplaced, which makes your snide comments painful to read. You would do well to follow Oliver Cromwell’s advice: “I beseech you, in the bowels of Christ, think it possible you may be mistaken.”

  519. Scott A Mandia Says:

    Shub wrote:

    Internal combustion with crude oil/gas derivatives is among the most energy efficient modes of power production ever invented and improved upon. Fossil fuel consumption is the foundation of Western civilization, especially in the Northern hemisphere.

    Compare that with ‘green’ wind and solar power, for example. Abysmal output, requiring monstrous government subsidies derived from taxation of human productivity which is itself based on fossil-fuel burning, and most importantly – no input control whatsoever – that’s what these things are. Very energy efficient indeed! :)

    You are making the same mistake that many make by not factoring in the true cost of carbon. For example, the US spends about $80 billion per year for the Navy to monitor the Gulf region, about $80 billion per year in subsidies to fossil fuel companies (far greater than green subsidies, BTW), and then the cost of climate change as a result of this carbon is not factored in, etc., etc., etc. So it is always unfair for carbon with its many hidden costs to be compared to renewables, which essentially have transparent costs.

    BTW, the geopolitical impacts of climate change are also typically ignored by many (but not by top US military experts) and these costs are frighteningly large.

    See my page that describes some of these implications:

    http://www2.sunysuffolk.edu/mandias/global_warming/talk_conservative_climate_change.html

    In the “business as usual” solution where emissions of GHGs continue to rise unabated, the following consequences are realistic:

    China and India pass the US as economic superpowers
    Increased immigration
    Higher food costs
    Greater government subsidies (higher taxes)
    Higher insurance rates
    Increased authoritarian governments
    Increased terrorism
    Nuclear proliferation
    Regional and global wars between countries with nuclear weapons

  520. Scott A Mandia Says:

    Sorry for the typos. 3 hours of sleep last night and my 5 and 2 year old boys are tugging at my sleeve! :)

  521. Alan Says:

    Alex wrote:

    This way it is still possible to test whether there is a correlation between two variables, while both of them have a unit root. So if both temperature and CO2 would have one or more unit roots, then cointegration is the way to test whether there is a relation between the two. This is exactly what VS has been advocating and, as far as I read here, has been done by B&R

    I asked:

    If the temperature and CO2 data have one or more unit roots and if temperature is a function of CO2 and insolation and sulphates and feedbacks etc, will a cointegration test between (just) temperature data and CO2 data reliably reveal whether there is a relation between the two?

    I’d prefer a reply from Alex and VS, if you don’t mind, Bart.

  522. J. Bob Says:

    Just jumped over here from WUWT’s discussion on playing with stats. Sounds like it’s time to dust off the “How to Lie with Statistics” book.

    Just a note. It seems very little is mentioned about the real long term data sets such as Central England, De Bilt, Uppsala, Berlin. While they may not be up to today’s specs, the accuracy even in the 50’s wasn’t that fantastic. Back then, I had a chance to earn 50 cents (good money back then) a week recording hi/lo temperatures for a neighbor who sent in the results to the government. This was on an old hi/lo Taylor thermometer with mechanical positionable arms that recorded the hi/lo and would have to be manually reset. At best one could estimate to 1 deg.

  523. POUNCER Says:

    http://ideas.repec.org/p/anu/wpieep/9702.html

    Time series properties of global climate variables: detection and attribution of climate change

    Paper provided by Australian National University, Centre for Resource and Environmental Studies, Ecological Economics Program in its series Working Papers in Ecological Economics with number 9702.

    Date of creation: Mar 1997
    Handle: RePEc:anu:wpieep:9702

    The test results indicate that the radiative forcing due to changes in the atmospheric concentrations of CO2, CH4, CFCs, and N2O, emissions of SOX, CO2, CH4, and CFCs and solar irradiance contain a unit root while most tests indicate that temperature does not. The concentration of stratospheric sulfate aerosols emitted by volcanoes is stationary. The radiative forcing variables cannot be aggregated into a deterministic trend which might explain the changes in temperature. Taken at face value our statistical tests would indicate that climate change has taken place over the last 140 years but that this is not due to anthropogenic forcing. However, the noisiness of the temperature series makes it difficult for the univariate tests we use to detect the presence of a stochastic trend. We demonstrate that multivariate cointegration analysis can attribute the observed climate change directly to natural and anthropogenic forcing factors in a statistically significant manner between 1860 and 1994.

  524. Eli Rabett Says:

    VS’s argument that “the instrumental temperature record contains a unit root, making regular OLS-based inference (including trend confidence interval calculations) invalid” fails, because the proposition he is arguing against is not based on an OLS-based inference about the global temperature series. His argument is rather an incoherent separation of the surface temperature record from everything it is connected to.

    As has been pointed out here, and here and here, the argument is that at all levels of modeling, from relatively simple one dimensional radiative models, to large three dimensional GCMs, increasing greenhouse gas concentrations has multiple observed effects. These include increased global surface temperature, decreased stratospheric temperature from 20-50 km, significantly increased Arctic temperatures and much more (if Eli left out your favorite, please feel free to add it). Moreover these predictions are validated by observations over the short (response to Pinatubo), medium (the satellite era), century (from 1850 or so, when we have instrumental records), millennial (proxy reconstructions) and eonical (ok, made that word up, but ice cores, isotope tracers, etc) time scales.

    Denialists keep trying to knock these observations down, but outside of blogs and newspapers, it is the denialists that keep getting knocked down, for example the latest nonsense about station location.

    As has been pointed out, the global surface temperature record does not exist in isolation, however VS’s current argument decouples it from everything else and in doing so contributes nothing.

  525. adriaan Says:

    @Bart,

    Dear Bart,

    You try to hide the things that VS has shown. And I think VS was more right than wrong, without knowing anything of your nice models. Let me explain. In the IPCC report, WG1, chapter 2, page 213, note a, there is a formula modelling atmospheric CO2 concentration (right?). Can anyone explain the physical basis of this formula? Dhogaza?

    [Reply: Arrhenius? Tyndall? Or the Rabett. Bart]

    Dear Bart,
    Did you take the trouble to read the ref? What do Arrhenius, Tyndall or Rabett have to do with the explanation of the cited formula? Your answer is completely O/T. Joos et al 2001 (Glob Bio Cyc) would have been a better reply. And I reread Frank et al 2010 and looked at fig 1 and 2 and the supporting data. But maybe you prefer Patterson et al 2010 (PNAS) and their supporting data. The big advantage of the method developed by Patterson et al is that it allows one to get almost diurnal temperature readings by one of the most accurate proxies available. Their mass spectrometry based methods in combination with thin slicing of shells are brilliant. But for this discussion Li et al 2009 (Tellus B) would be more on topic. The ref to Mann 2009, which was also brought up by another commenter, is laughable.

  526. adriaan Says:

    @VS,

    I would like to have a meeting with you. I think we have a lot to discuss on a completely different, but closely related area. Suggestion on how to arrange this?

  527. adriaan Says:

    @VS

    The first beer is on my tab!

  528. Dave McK Says:

    I do believe that what was referred to as ‘statistically similar to a random walk’ in this context can be translated as ‘weather’.

    Is somebody saying there is no such a thing as weather?

    [Reply: Note that VS is not claiming (anymore) that global avg temps over 130 years are a random walk. See also here. BV]

  529. Alex Heyworth Says:

    There is one thing that puzzles me about this thread and the corresponding couple at Tamino’s. That is that AGW defenders seem to think that the idea that air temperature over the period of the instrumental record could be hard to distinguish from a random walk is in some way a threat to their theory.

    Given (1) the vastly greater heat capacity of the oceans, (2) our extremely limited present understanding of ocean dynamics and its drivers, and (3) the apparent strong links between variations in ocean surface temperatures and average air temperature levels, it would surely be expected that there would be a large amount of apparently random variation in the average air temperature, even if AGW theory is correct.

    The true confirmation of AGW theory is going to come via measurements of ocean heat content. When we have fifty plus years of high quality OHC measurements, the truth or otherwise of current theory will be apparent. By then, I imagine we will also have a far better idea of why it is correct (or why not, if it is false).

  530. Alex Heyworth Says:

    PS, Bart, I note that in a reply to a comment above, you said

    [Reply: My take on that question is here and here. Basically, temps being a random walk is inconsistent with energy balance considerations (a.o. conservation of energy). BV]

    I would take issue with that, simply on the basis that significant variations in air temperature could take place because of heat transfer between oceans and atmosphere. If one were to take your statement as applying to the heat content of the whole earth (ie everything from the center of the core to the top of the atmosphere) then it would be true. However, average global air temperature could vary randomly within that context without violating any physics.

    [Reply: Did you read my newer posts? I explicitly mention that energy transfer from different parts of the climate system could otherwise have contributed, but that is excluded based on them also gaining energy, rather than losing it. BV]

  531. Frank Says:

    BV, VS et al,

    This is a fascinating discussion and well moderated! To summarize: those skeptical of AGW (myself included, for purposes of disclosure) have taken to heart statistical analyses showing that the historical records for temperature and greenhouse gas forcings have unit roots and are of different orders, thereby precluding any AGW-supportive inferences of trend and/or correlation from these records. Conversely, those supportive of AGW dismiss these findings of stochastic trends and spurious correlation outright, since they “know” from the paleo-climate record (e.g. ice cores) that climate is “deterministic”, “bounded”, etc.

    OK then. Let’s consider a Nimitz-class aircraft carrier – something very deterministic and bounded in form and function. Further to the analogy, I’m going to provide a series of 50 lb. samples of the aircraft carrier to people who have no inkling of what an aircraft carrier is or does. How many samples will it take before the people to whom I provide these samples will be able to form an accurate assessment of what an aircraft carrier is and does? Quite a number no doubt. And for those scoffing at the analogy of sampling an aircraft carrier in 50 lb. increments, the surface thermometer record referenced at the top of the thread scales similarly to the paleo-climate record.

    So, AGW supporters can ignore the statistical findings because they’ve already seen the aircraft carrier, so to speak. But here’s the rub – in invoking the paleo-climate record, what becomes relevant is how unusual the current 130-year temperature record is compared to centennial-scale changes in that record (answer: not very). And how about carbon dioxide’s well documented, consistent lagging of temperature in the ice cores, or inconsistencies between carbon dioxide levels and ice-/hot-house conditions throughout the Phanerozoic?

    In short, AGW supporters can’t have it both ways. They can either accept that the current data doesn’t statistically support their case, or in invoking the paleo-record to prove that the statistics don’t matter, provide evidence that current climatic conditions are unusual in comparison to that record.

    [Reply: The “case of AGW” is not weakened by the presence of unit roots. How about you try to explain the large temp changes in the past (eg the Phanerozoic) without a substantial effect of CO2? BV]

  532. Alex Heyworth Says:

    PPS, further to my puzzlement two comments back, lest people suggest I should be equally puzzled as to why AGW doubters cast the “apparent randomness” of the temperature record as a refutation of AGW, my take on that is that it is a reflection of their lack of statistical and scientific knowledge. I expect better from AGW proponents, particularly those who are scientists.

    I’d also note that the emphasis on air temperature is understandable in the past, given that air temperatures were recorded for other purposes and were available to analyze. However, maybe it is time to think about moving on. For the reasons I’ve outlined above, average global air temperature is not a very good way of measuring what is happening to the climate system, even though air temperature is what we most immediately notice in terms of the environment’s impact on our comfort.

  533. dhogaza Says:

    There is one thing that puzzles me about this thread and the corresponding couple at Tamino’s. That is that AGW defenders seem to think that the idea that air temperature over the period of the instrumental record could be hard to distinguish from a random walk is in some way a threat to their theory.

    Trust me, no one does. The basic argument is whether or not analysis such as B&R’s (which VS endorses and says is correct) can overturn much of modern physics totally unrelated to AGW.

    Because these are the implications.

    B&R are doing nothing less than suggesting that much of modern (i.e. century old and younger) physics needs to be flushed down the toilet.

    VS claims he’s not supporting this, yet refuses to tell us where B&R goes wrong (my opinion of VS is that he has just enough understanding to run a bunch of scripts in R, make juicy quotes based on various papers, and to yell “tamino may be a PhD in statistics but he’s an idiot, as are all those who suggest we’re wrong!”). So I accuse VS of supporting B&R’s claims which refute so much of physics.

    Ignore “AGW defenders”, just concentrate on what B&R conclude about the physics of CO2 and LW IR absorption. Ask yourself why laboratory measurements don’t support this. Etc etc.

    Be a skeptic but at least be a smart one, OK?

    I’d also note that the emphasis on air temperature is understandable in the past, given that air temperatures were recorded for other purposes and were available to analyze. However, maybe it is time to think about moving on. For the reasons I’ve outlined above, average global air temperature is not a very good way of measuring what is happening to the climate system, even though air temperature is what we most immediately notice in terms of the environment’s impact on our comfort.

    You’re behind the times, but despite denialist hopes, sea temps seem to be rising, too.

  534. Rattus Norvegicus Says:

    Adriaan, I have to call BS on your cite. Note a in the online version is merely a list of the various groups doing modeling. It is a note to a table and singularly uninformative re: your question. Please provide a link to an online version of the report. here is what I found in WGI, Chapter 2.

  535. David Stockwell Says:

    What an active thread! There seems to be a lot of concern about the implications of temperature testing I(1), but most of the GCM temperature outputs test I(1) too.

    So being I(1) doesn’t block a conventional origin for the behavior. What it does do is change the critical value for significance tests.

    For that matter, one wouldn’t really be sure whether climate models also show the same general behavior found by B&R unless one actually tested them, as B&R claim the cointegration of delta rfCO2 and temperature is the emergent behavior of the system. If the models are any good, they will match the integrative behavior seen in nature.

  536. Alan Wilkinson Says:

    I am amazed, not at the debate since much of this statistical ground was covered in an earlier thread on B&R at WUWT, but at the strange defensive posture of climate scientists (or at least their advocates) when faced with an analytical tool they were previously unaware of.

    I would have expected excitement to discover what new insights this tool could bring but instead we see rage that existing beliefs might be challenged.

    To those, and particularly BV, who say that AGW theory cannot be wrong I say of course it is wrong. The only question is how much and in what ways. That is why it is still an active field of research instead of a dead one.

    Congratulations to all those who have contributed objectively so much to this discussion, in particular of course, VS, Alex and indirectly David Stockwell.

    [Reply: I have not claimed that “AGW cannot be wrong”. I have claimed that claims that it is radically wrong are entirely without base. BV]

  537. Marco Says:

    @Alan:
    Bart most certainly does not claim AGW theory cannot be wrong. And ADF has been used on various occasions, too, so it’s not like climate scientists are totally unaware of it.

    The issue is quite nicely summarised in dhogaza’s answer to Alex Heyworth, which I will make even shorter:
    B&R claim that AGW is mostly wrong based on their analysis, but make ‘predictions’ that do not make physical sense (the same forcing giving a vastly different change in temp, and permanent vs temporary effects) and that go against observations and analyses thereof. While it certainly is possible that AGW is wrong, the B&R analysis actually negates a much broader area of physics. I’d expect a bit more humility from scientists when their analysis contradicts loads of basic physics. It may just as well be the methodology that has a problem with the data.

  538. Alex Heyworth Says:

    The basic argument is whether or not analysis such as B&R’s (which VR endorses and says is correct) can overturn much of modern physics totally unrelated to AGW.

    Because these are the implications.

    B&R are doing nothing less that suggesting that much of modern (i.e. century old and younger) physics needs to be flushed down the toilet.

    These statements amount to nothing more than an admission that you can’t think of a way to interpret B&R’s findings that is compatible with physics. This is indicative of a lack of imagination. Could I suggest, for example, that the climate system has mechanisms that respond to increases in GHG forcings by the reduction of other forcings? While this is purely speculative, and I propose no actual mechanism, it is both possible and not against the laws of physics :) No doubt “real” climate scientists could do a lot better than me in suggesting mechanisms, if they were willing to put their minds to it.

    Ignore “AGW defenders”, just concentrate on what B&R conclude about the physics of CO2 and LW IR absorption.

    ie nothing?

    You’re behind the times, but despite denialist hopes, sea temps seem to be rising, too.

    If I’m behind the times in observing that obsessing about air temperature is not sensible, then how come so much effort is devoted to convincing the public that “x is the hottest ….whatever”? Just look at the press releases by NASA, the NOAA and Hadley. Are they even further behind the times?

    Sea temps seem to be rising, too? As confirmed by the AQUA data?

  539. JvdLaan Says:

    @Willis Eschenbach
    THE Willis Eschenbach? http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php – talking about painful.
    And in the meantime a lot of the WUWT-crowd is now coming in… quite sad given the nice discussion that was taking place here.

  540. David Stockwell Says:

    The main issue in my mind is whether B&R are right or not. So far I have done two tests.

    1. Since they only used 3 GHG series for forcing, I thought that with more forcings there might be a different result. So I replicated their analysis with all the AGW forcings in the RadF file from GISS. The result was the same as B&R (http://landshape.org/enm/cointegration/).

    2. I wanted to test their result in a completely different way, without unit roots or anything. So I developed a linear model of temperature with natural sources of variation, CO2 and delta CO2. Whichever regressor was more significant tests the result. It turned out that delta CO2 was more explanatory than CO2 – again consistent with their claim (http://landshape.org/enm/testing-beenstock/).

    It’s a bit like the saying that there are no proven theories, only those that haven’t been disproved yet. So far I haven’t seen any convincing disproof.
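
    For those who want to try the second test themselves, the skeleton is just an OLS comparison. This is a sketch: the file and column names are invented, and of course the thread’s whole warning applies – if the series contain unit roots, the usual OLS p-values may themselves be spurious.

    ```python
    # Toy skeleton of the CO2-level vs delta-CO2 comparison. The file
    # "forcings_and_temp.csv" and its columns are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("forcings_and_temp.csv")   # annual temp and rfCO2 series
    df["d_rf_co2"] = df["rf_co2"].diff()        # first difference of the forcing
    df = df.dropna()

    X = sm.add_constant(df[["rf_co2", "d_rf_co2"]])
    fit = sm.OLS(df["temp"], X).fit()
    print(fit.summary())  # compare the t-statistics on rf_co2 vs d_rf_co2
    ```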

  541. HAS Says:

    My feeling increasingly is that a lot of the heat here is caused by the failure of science educators to teach some basics of the philosophy of science.

    Going back in time a bit
    RV in reply to Alan on March 20, 2010 at 13:18 said:

    “Statistics can say something about correlation. Physics can say something about a causal relation.”

    RV, this duality you are espousing will stop you from being a great scientist. Physics is an empirical science. Statistics is about empiricism; it is your friend and is the tool whereby you can make the move from association to causality. But as you are finding out, it is a hard task-master (as it should be, because causality isn’t cheap and easy).

    Then Eli Rabett Said on March 21, 2010 at 01:37

    “As has been pointed out here, and here and here, the argument is that at all levels of modeling, from relatively simple one dimensional radiative models, to large three dimensional GCMs, increasing greenhouse gas concentrations has multiple observed effects.”

    I’m sure this is a mis-speak, particularly in light of the rest of the post. Models produce predictions; the real world is observed. I wouldn’t normally bother to draw attention to this, but it does reflect a recurrent undercurrent of sloppy thinking that somehow says that if a model has produced it, those results are real.

    Models are abstractions that are useful for their explanatory power. While they work they are great, when they don’t it’s time to get a better one.

    dhogaza on March 21, 2010 at 05:16 demonstrates this lack of understanding of the distinction between models and reality when he says:

    “B&R are doing nothing less that suggesting that much of modern (i.e. century old and younger) physics needs to be flushed down the toilet.”

    B&R are saying that the observations as reported by NASA GISS have characteristics that mean that the particular models implied by the various papers by Kaufmann et al are wrong. Given that it takes a careful read to see that the conclusions only relate to these papers, and that their claims of an impulse effect are somewhat more forcefully stated than might be appropriate (not to mention that this is a controversial area and it probably pays to be somewhat more circumspect), B&R are open to criticism.

    But not because the critic doesn’t understand that rejecting one particular complex model doesn’t mean the end of the world. It happens every day in real science as we stagger on in this grand endeavour to understand the physical world.

    Marco on March 21, 2010 at 09:23: you too should go back and read B&R more carefully and less defensively. The challenge for you is to understand the implications and incorporate them into your next iteration so you produce a more robust model of the climate.

    [Reply: Perhaps those calling themselves “skeptics” should also take the scientific methods into account, and apply their scepticism in all directions. This book chapter gives an excellent overview of the scientific methods, and how climate science stacks up against it. Slideshow is here (start at slide nr 30 to jump to the philosophy of science part). BV]

  542. David Stockwell Says:

    “their claims of an impulse effect are somewhat more forcefully stated than might be appropriate”

    I would agree with that, and also add that the results might only be saying that the earth/climate system absorbs more energy via impulses than via slowly changing forcings, which is a fairly normal property of a complex system if you think about it.

  543. Paul_K Says:

    Alex Heyworth: Since you raised the issue of our use of near surface temperatures, let me share my thoughts on this, and incidentally partially answer the question posed by Alex in two previous posts. Alex basically posed the challenge:
    Is it possible to put together a suite of governing equations which could be used to predict, and draw inferences about the statistical properties from, the temperature series? AND
    Can one say from theory whether the temperature series should have or should not have a unit root?
    I believe one can say with reasonable certainty that this challenge cannot be met with any confidence for the near surface temperature record, at least as it currently stands. The reason is that any such attempt comes up against a mathematical problem which falls into a class known as “knapsack” problems.
    We can assert that ANY such formulation of physics-derived governing equations must start with an attempt to estimate NET radiative transfer gain to or loss from the Earth’s system. The integration of the resulting power terms in theory allows one to say whether the system has gained or lost heat over a period of time and hence one can attempt to predict temperature at that time (with a myriad of different assumptions). I will ignore at this stage the highly non-trivial issue of estimating how the energy is partitioned within the system at any point in time, since my comments apply even to the simplest models of the Earth’s system.

    Now here is the insuperable problem: thermal emission from the Earth’s surface varies in accordance with T^4 (temperature to the fourth power). So how do we average the Earth’s temperature such that the application of a single temperature term (or a small number of distinct temperature elements, if we partition the system into latitudes and sea vs terrestrial) works correctly to estimate the aggregate emission? We expect a valid temperature weighting within each partitioned element (valid in the sense of being consistent with its applicability in the emission term) to look something like the fourth root of an areally weighted average of T^4. However, the current surface temperature record is an areally weighted average of T.
    There is a well-known mathematical inequality that relates the two methods of averaging, but, and this is the main point, there is no way to invert the existing areal average into the appropriate average for use in the emission equation. QED: it is not possible to derive the equations sought by Alex, because the difference terms in the two temperature series will be unpredictably different even if the two series derived from the different averaging methods are (as they must be) strongly correlated.
    This does not preclude the possibility of some energetic individual re-averaging all of the raw temperature series with process-dependent averaging and THEN looking for the statistical characteristics of this newly averaged temperature series, but the conclusion I present here is that one cannot draw inferences about the EXISTING surface temperature dataset(s) directly.
    Incidentally, for those who are fully following this argument, you will also note that in the time domain the difference terms in a “T^4 average” are non-trivially different from those found in an areally averaged T. This should not affect the validity of applying appropriate statistical tests to GCM outputs against surface temperature data (since the average temperatures in both datasets are computed in the same areally weighted way), and that methodology does continue to show that the GCMs have little predictive skill. However, it puts a question mark in my mind over the validity of statistical inferences drawn offline, as it were, when people seek to test the correlation of any radiative effect with an average surface temperature.
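
    A toy numerical illustration of the inequality I have in mind (the temperature field is invented; only the convexity argument matters):

    ```python
    # Areal-average T vs the "emission-consistent" average (the fourth root
    # of the areal mean of T^4). By Jensen's inequality the latter is always
    # at least as large, and the gap grows with the spread of the field.
    import numpy as np

    rng = np.random.default_rng(0)
    T = rng.normal(288.0, 15.0, size=10_000)   # toy temperature field, K

    areal_mean = T.mean()
    emission_mean = np.mean(T**4) ** 0.25

    print(f"areal mean:    {areal_mean:.2f} K")
    print(f"emission mean: {emission_mean:.2f} K")
    ```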

  544. Kweenie Says:

    “I wouldn’t normally bother to draw attention to this, but it does reflect a recurrent undercurrent of sloppy thinking that somehow says if a model has produced it, those results are real. ”

    Method Wrong + Answer Correct = Bad Science

  545. Alex Heyworth Says:

    David Stockwell

    The main issue in my mind is whether B&R are right or not.

    Even if they are not entirely right, their paper and the following discussions have raised important issues about the application of statistical methods to climate science.

    David Stockwell (later)

    “their claims of an impulse effect are somewhat more forcefully stated than might be appropriate”

    I would agree with that, and also add that the results might only be saying that the earth/climate system absorbs more energy via impulses than via slowly changing forcings, which is a fairly normal property of a complex system if you think about it.

    Indeed, B&R are a bit over the top in the phrasing of their conclusions. (it is only a draft paper, remember!) It will be interesting to see what they say if/when they finally publish.

    Paul K: interesting comments.

  546. Aprendiendo de las discusiones ajenas. « PlazaMoyua.org Says:

    […] https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-a… […]

  547. Frank Says:

    The “case of AGW” is not weakened by the presence of unit roots. How about you try to explain the large temp changes in the past (eg the Phanerozoic) without a substantial effect of CO2? – BV

    Correct! You can neither strengthen nor weaken a hypothesis that lacks evidence by further failing to provide evidence. Let’s review what we have here – surface thermometer records that GISS and CRU have poked and prodded (particularly with respect to UHI effects) in ways that demonstrably enhance the visual appearance of modern era warming. However, an impartial statistical analysis of these series says there’s nothing to see here.

    Re. explaining large temperature changes in the past without a substantial effect of CO2? Not my job, but please feel free to provide evidence that CO2 does explain large temperature changes in the past. The ice core data of the most recent 450 kyrs certainly does not do this, as CO2 lags temperature by about 800 yrs on average. And while I don’t pretend to know what causes persistent periods of widespread glaciations that have now been going on for about 3 myrs, I’m not aware of anyone who has plausibly suggested CO2 as a causal agent.

    As you are aware, proponents of AGW require that we make dramatic (and expensive!) changes in our lives. Good science says that they therefore need to provide us with dramatic evidence. So far, we have been given nothing.

    [Reply: You’re providing a total caricature of the science. Where do you get your information from? Certainly not from a random walk through scientific sources. Your claim that the temperature series have been manipulated is false (see e.g. here about the effects of adjustments). Evidence for AGW, see eg here.

    Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently, our planet is experiencing a build-up of heat (Murphy 2009). This heat build-up is manifesting itself across the globe. Stratospheric cooling; nighttime temps having warmed more than daytime temps: signatures of greenhouse forcing.

    CO2 has been an important factor in pretty much all large temperature changes in the earth’s past, see eg this excellent presentation. That includes the ice age cycles, where CO2 was not the initial cause, but a strong amplifying feedback. Without a substantial effect of CO2 you could not explain the amplitude of temp change over the ice age cycles. If you can, you’ll be instantly famous.

    A claim that a whole field of science is radically wrong is an extraordinary claim, which needs extraordinary evidence. You have supplied none. BV]

  548. Paul_K Says:

    David Stockwell: I read your analysis concluding a higher correlative significance in first differences of CO2 forcing than in the absolute CO2 forcing itself. I would be grateful if you were to take a minute to read my rather long (sorry!) comment above and let me have any thoughts. I believe that your results do go some way to demonstrating why B&R reached their conclusions, but I am concerned that your analysis (and theirs) may be subject to the problem I was attempting to expose in how one averages surface temperatures.

  549. Shub Niggurath Says:

    Stockwell

    “There seems to be a lot of concern about the implications of temperature testing I(1), but most of the GCM temperature outputs test I(1) too.

    So being I(1) doesn’t block a conventional origin for the behavior. What it does do is change the critical value for significance tests.”

    This partially answers the question I raised earlier. Thanks

  550. tgv Says:

    “Weather still happens indeed. BV”

    I think we now agree precisely on where we disagree. The issue is all about timescales. That’s why VS’s analysis is interesting when done on a 150 year timescale. It suggests that the random components of the stochastic processes are still largely in play at a resolution of 150 years. (Again, this is different from saying that temperature is a random walk).

    It does not say that CO2 is having no contribution. But it does raise important questions for those who say that there is high certainty that CO2 is the predominant driver of net warming over a 150 year timescale. A more supportable statement would be that “CO2 is contributing to net warming (and therefore should be addressed through prudent public policy), but the degree of its contribution is still uncertain”.

  551. Eli Rabett Says:

    To continue beating my drum. You cannot draw meaningful conclusions from the statistical behavior of a single parameter in a coupled system.

    Even in 1988, Hansen’s argument was more sophisticated, viz: using physical constraints the outputs of the theory follow observation of a number of parameters including global temperature. Since we are confident of the theoretical inputs, and reasonably confident of the drivers, the forcings, and the observables over multiple time scales, the three legs of the theory support each other.

  552. Eli Rabett Says:

    HAS misunderstands Eli’s point about models, so let us add some emphasis
    ——————————
    ““the argument is that at ALL LEVELS of modeling, from relatively simple one dimensional radiative models, to large three dimensional GCMs, increasing greenhouse gas concentrations has MULTIPLE observed effects.”

    I’m sure this is a mis-speak, particularly in light of the rest of the post. Models produce predictions, the real world is observed. I wouldn’t normally bother to draw attention to this, but it does reflect a recurrent undercurrent of sloppy thinking that somehow says if a model has produced it, those results are real.
    ——————————

    Eli clearly separated the models from the observations; the observations are meaningless for prediction and understanding without the models, but one ALWAYS has to be careful that there may be some wrong element in any ONE model that leads it to match the observations, especially if ONLY considering a SINGLE outcome. The FACT that all levels of modeling, over 100 years, AGREE on the basic outcomes of INCREASES in greenhouse gases provides confidence in the models for understanding and prediction. Increasing model complexity principally increases the resolution of the models and brings out emerging properties of the system.

  553. Eli Rabett Says:

    Alex says:

    Could I suggest, for example, that the climate system has mechanisms that respond to increases in GHG forcings by the reduction of other forcings? While this is purely speculative, and I propose no actual mechanism, it is both possible and not against the laws of physics :)
    ————————————-
    eg: Here occurs a miracle. Very similar to a paper by Ferenc Miskolczi, which was described by Nick Stokes as: “The greenhouse gas theory that has been used for the last century is TOTALLY WRONG! The proof is left as an exercise for the reader. Seriously, if you are making a claim like this, you need a good argument, put with some clarity. You would usually write down a model with some unknowns, state some physical principles with their resulting equations, and derive relations which characterise the unknowns.”

    You cannot just wave your hands.

  554. Eli Rabett Says:

    Adriaan, a useful place to start is David Archer’s piece on the multiple time scales for absorption of a pulse of CO2, and oh yes, RTFRs. You could also try to get an idea from the Joos report referenced in the footnote you cite.

  555. A C Osborn Says:

    Well nobody bothered to answer my question, so I will ask it again.
    We all know that the Global Temperature Anomaly series is “Corrected”, “Celled”, “Averaged” and “Homogenised”.

    Has anyone looked at a Raw Temperature Series to see if it exhibits the same Statistical characteristics?

    [Reply: One example here. BV]

  556. Marco Says:

    @tgv: to claim that CO2 is driving the temperature increase of the last 130 years (1880 and onward, not 1860) would be wrong, and something that is not claimed by climate scientists. In fact, the IPCC argues that the increase in the early 20th century may be, at least in part, explained by increase in solar input, but this explanation goes away when looking at the temperature increase after 1970. If anything, from 1970 onward there is a *decrease* in TSI.

  557. tgv Says:

    @Marco: “In fact, the IPCC argues that the increase in the early 20th century may be, at least in part, explained by increase in solar input, but this explanation goes away when looking at the temperature increase after 1970.”

    The IPCC is confusing weather with climate.

  558. Al Tekhasski Says:

    [BV replies : Perhaps those calling themselves “skeptics” should also take the scientific methods into account, and apply their scepticism in all directions. This book chapter gives an excellent overview of the scientific methods, and how climate science stacks up against it.]

    Perhaps before expressing such a definitive opinion on the Oreskes “overview”, one should become familiar with the details of the original material that she uses to make her case. In this chapter, after citing the work from the “climateprediction.net” modeling effort and the picture of ensemble trajectories (Figure 4.2), she writes:

    “What does an ensemble like this show? For one thing, no matter how many times you run the model, you almost always get the same qualitative result: the earth will warm.”

    She said that the “Figure [was] prepared by Ben Sanderson with help from the project team”. What she is not aware of is that in actuality about 43% of the trajectories were EXCLUDED from that picture under one excuse or another, because the climate trajectory would show “unphysical cooling”, “substantial drift in control phase” or another blowup of their model.

    Click to access nature_first_results.pdf

    As we see, the data were subjectively selected by the “project team”.
    Therefore she was using (knowingly or not) severely incomplete and biased information, and all her musings can be safely dismissed.

  559. Marco Says:

    @tgv: you are confusing fact and fiction.

  560. Willis Eschenbach Says:

    JvdLaan Says:

    March 21, 2010 at 10:09
    @Willis Eschenbach
    THE Willis Eschenbach? http://scienceblogs.com/deltoid/2009/12/willis_eschenbach_caught_lying.php – talking about painful.
    And in the meantime a lot of the WUWT-crowd is now coming in… quite sad given the nice discussion that was taking place here.

    I see, you don’t have the nerve to call me a liar yourself, so you’ll do it second-hand? What is this, “It must be true, I read it on the Intawebs”?

    I was not “caught lying”. I was accused of lying, by a man whose motives and honesty are suspect.

    However, I’m used to these ad-hominem attacks by now. You can’t think of anything to counter my science, so you call me a liar. Real classy … can we get back to the science?

  561. Willis Eschenbach Says:

    Eli Rabett Says:

    March 21, 2010 at 17:41

    Alex says:

    Could I suggest, for example, that the climate system has mechanisms that respond to increases in GHG forcings by the reduction of other forcings? While this is purely speculative, and I propose no actual mechanism, it is both possible and not against the laws of physics :)

    ————————————-
    eg: Here occurs a miracle. Very similar to a paper by Ferenc Miskolczi which was described by Nick Stokes as “The greenhouse gas theory that has been used for the last century is TOTALLY WRONG! The proof is left as an exercise for the reader.” Seriously, if you are making a claim like this, you need a good argument, put with some clarity. You would usually write down a model with some unknowns, state some physical principles with their resulting equations, and derive relations which characterise the unknowns.”

    You cannot just wave your hands.

    I propose just such a mechanism here. Basically as tropical temperatures increase, clouds and thunderstorms increase, driving the temperature below the starting point. This does exactly what Alex suggests above.

    Eli, just as you cannot just wave your hands and say there is an answer, you cannot wave your hands and say there is no answer …

  562. Eli Rabett Says:

    Yeah, and so did Lindzen, and people went looking for it and it was not there, although, of course, Lindzen still sees it in his dreams

  563. HAS Says:

    “HAS misunderstands Eli’s point about models”

    Yes, I see now: “observed effects” was referring to what was observed in the output of the models, not that what the models produced were observations.

    It is still worthwhile making the point that if what is predicted from the model differs from reality, this is more of a problem for the modeller than for Nature.

    I note in passing that “fit to observations” when those observations have been used for model development is a very weak form of model verification. The model is just telling you what you already told it.

    I would also add that the ability of multiple models to produce the same outcome is obviously a necessary but not sufficient basis for gaining confidence in the models. In particular it is quite possible that all models incorporate a common erroneous assumption that is driving the results in question.

    End of platitudes, but they do seem to need to be said to take some of the heat out of the debate.

  564. Scott Mandia Says:

    I summarize the scientific consensus regarding “model accuracy” on the page below:

    http://www2.sunysuffolk.edu/mandias/global_warming/climate_models_accuracy.html

  565. Alan Wilkinson Says:

    Marcos: “While it certainly is possible that AGW is wrong, the B&R analysis actually negates a much broader area of physics.”

    As I see it, that is factually incorrect. None of climate science that consists of accurate collection and measurement of data is negated by B&R. None of physics that consists of individual interactions of matter and energy is negated by B&R. What may be negated are simplistic interpretations of the behaviour of complex systems incorrectly analysed. (Paul’s comments on the inappropriateness of averaged global temperatures being one such factor.)

  566. DirkH Says:

    Hi. I’m looking at this from a signal processing view.

    If GHG forcings are I(2) and temperature is I(1), then the GHG forcing level cannot directly cause (granger-cause, as I learned) the temperature, but the first derivative (or the first differences in a discrete time series) can.

    Some people now argue that that’s unphysical… I disagree. It’s an indicator that something is amiss in the hypothesized physical mechanism of the atmosphere, and that a refined physical explanation would have to be thought up that is in line with the statistical results.

    This physical mechanism would have to incorporate a negative feedback to match the statistical properties. I won’t offer a candidate explanation here; I’m not a physicist. I’m just saying that a negative feedback can resolve the dilemma.
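
    A toy illustration of the orders of integration I mean (nothing climate-specific, just cumulative sums of noise):

    ```python
    # Cumulatively summing white noise once gives I(1), twice gives I(2);
    # differencing the I(2) series once brings it back to I(1).
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(42)
    e = rng.normal(size=500)   # I(0)
    i1 = np.cumsum(e)          # I(1)
    i2 = np.cumsum(i1)         # I(2)

    for name, x in [("I(0)", e), ("I(1)", i1),
                    ("I(2)", i2), ("diff of I(2)", np.diff(i2))]:
        print(f"{name:13s} ADF p-value: {adfuller(x)[1]:.3f}")
    ```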

    I’m one of the WUWT crowd BTW so you know what to do…

  567. Alan Wilkinson Says:

    Apologies, for Marcos read Marco.

  568. HAS Says:

    Scott Mandia Says on March 21, 2010 at 21:44

    “I summarize the scientific consensus regarding ‘model accuracy’ on … ”

    I had previously read your summary, but would suggest doing the hard yards and reading the IPCC 2007 WGI report Chapter 8 et seq.

  569. Scott Mandia Says:

    HAS,

    Much of what appears on my page IS from Chapter 8 which I link to and reference. Perhaps I misunderstand your comment?

  570. HAS Says:

    Scott Mandia

    I just think for someone coming to this anew that some of the complexity and uncertainty gets lost in the summary.

  571. Alex Heyworth Says:

    # Eli Rabett Says:

    Alex says:

    Could I suggest, for example, that the climate system has mechanisms that respond to increases in GHG forcings by the reduction of other forcings? While this is purely speculative, and I propose no actual mechanism, it is both possible and not against the laws of physics :)
    ————————————-
    eg: Here occurs a miracle. Very similar to a paper by Ferenc Miskolczi which was described by Nick Stokes as “The greenhouse gas theory that has been used for the last century is TOTALLY WRONG! The proof is left as an exercise for the reader.” Seriously, if you are making a claim like this, you need a good argument, put with some clarity. You would usually write down a model with some unknowns, state some physical principles with their resulting equations, and derive relations which characterise the unknowns.”

    You cannot just wave your hands.

    Eli, my point is that when data and theory disagree there are only three possible solutions: better data, better analysis or better theory.

    Those who say “B&R must be wrong because their result doesn’t agree with the prevailing theory” are in effect refusing to look at any of these options. Who did you say was doing the hand waving?

    At least Tamino recognized this and attempted to critique their analysis methods.

  572. Scott A Mandia Says:

    HAS, that is fair enough!

  573. Willis Eschenbach Says:

    Eli Rabett Says:

    March 21, 2010 at 21:01
    Yeah, and so did Lindzen, and people went looking for it and it was not there, although, of course, Lindzen still sees it in his dreams

    I love how, whenever there is a dispute or disagreement about some piece of evidence y’all don’t like, AGW supporters declare it “debunked” or “disproved” or the like, and declare the game over.

    In fact, NASA says:

    Reconciling the Differences

    Currently, both Lindzen and Lin stand by their findings and there is ongoing debate between the two teams. At present, the Iris Hypothesis remains an intriguing hypothesis—neither proven nor disproven. The challenge facing scientists is to more closely examine the assumptions that both teams made about tropical clouds in conducting their research because therein lies the uncertainty.

    In other words, NASA says your claim is nonsense. They have a good three part article about it here.

    My point is that your bravado and certainty are misplaced. My rule of thumb?

    When the Wabbit says the discussion is over and the science is settled … he’s bunny-hopping as fast as he can away from something which is not settled at all.

  574. adriaan Says:

    # Eli Rabett Says:
    March 21, 2010 at 17:55

    Adriaan, a useful place to start is David Archer’s piece on the multiple time scales for absorption of a pulse of CO2, and oh yes, RTFRs. You could also try to get an idea from the Joos report referenced in the footnote you cite.

    As you could have seen, I have read Joos et al, and many more. But maybe you can help me translate this into meaningful physics? That is why I referred to the Li 2009 paper. Give me the physical explanation for the model underlying the exponential decay postulated in the Bern CO2 cycle, and also for why CO2 appears to have no effective half-life. Why does 21% of a pulse of CO2 remain forever in the atmosphere? What physics is this? Arrhenius? In my biological models, this would be equivalent to BS. And physics is presumed to be a purer science than biology, isn’t it?
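
    For reference, here is the formula I am asking about, as I read it from that table note: a sum of exponentials fitted to the Bern carbon cycle model’s pulse response. This is a sketch, with the coefficients as I transcribed them, so treat the exact numbers with caution:

    ```python
    # Sketch of the footnote formula: the fraction of a CO2 pulse
    # remaining after t years, a(t) = a0 + sum_i a_i * exp(-t / tau_i).
    # Coefficients as I read them from the table note; treat with caution.
    import numpy as np

    a0 = 0.217                   # the constant term: the "21% forever"
    a = [0.259, 0.338, 0.186]
    tau = [172.9, 18.51, 1.186]  # years

    def airborne_fraction(t):
        """Fraction of a CO2 pulse still airborne after t years."""
        return a0 + sum(ai * np.exp(-t / ti) for ai, ti in zip(a, tau))

    for t in [0, 10, 100, 1000]:
        print(f"t = {t:4d} yr: fraction remaining = {airborne_fraction(t):.3f}")
    ```

    If I understand Joos et al correctly, the constant term is a fit artifact of truncating removal processes (deep ocean mixing, sediment dissolution) far slower than the fitted time scales, rather than a claim that 21% literally stays airborne forever – but that is exactly the kind of physical justification I am asking to see spelled out.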

  575. eduardo Says:

    VS said

    ‘To make a very long story short, seeing a very high temperature level in 2000, starting in 1850, is not at all ‘unlikely’ or inconsistent with a ‘random walk’. I hope you also see now why that GRL comment is nonsense.’

    With all respect, to keep this comment short, it seems that you did not read that GRL paper, and perhaps none of the commenters here did. That GRL paper was not about ‘random walk’ or ‘unit root’ whatsoever. We tested the null hypothesis (H0) that the global annual mean temperature could be represented by a non-deterministic, fractionally-differenced process (not a random walk), and thus stationary.

    Before writing the word ‘nonsense’, shouldn’t one read the original text first?

  576. Marty Says:

    I am coming in late in this discussion, but there seem to be some points worth making.

    First, unit root tests are famously low in power. Thus, the failure to reject the null hypothesis of a unit root is not quite as convincing as for the more powerful tests most applied researchers are familiar with. And co-integration fishes for relationships in the murky waters of unit root hypotheses. Let me quote here from quite a while ago:
    “It is shown analytically, using local to unity asymptotic approximations (Bobkoski (1983), Cavanagh (1985), Phillips (1987)), that whilst point estimates of cointegrating vectors remain consistent, commonly applied hypothesis tests no longer have the usual distribution when roots are near but not one. The size of the effects can be extremely large for even very small deviations from a unit root; indeed it will be shown that rejection rates can be close to one. Hypothesis tests on general restrictions on the cointegrating vector are only affected if the restriction includes coefficients on variables which do not have an exact unit root, and are unaffected asymptotically by the presence of near unit root variables not included in the restriction.” [Elliott, G., “On the Robustness of Cointegration Methods When Regressors Almost Have Unit Roots,” Econometrica, Vol. 66, No. 1, p. 149]
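
    To see how low that power can be, here is a small simulation sketch: a stationary AR(1) with rho = 0.95, about the length of the instrumental record, so no unit root by construction.

    ```python
    # Power of the ADF test against a near-unit-root alternative.
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(1)
    n, rho, trials = 130, 0.95, 500
    rejections = 0
    for _ in range(trials):
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + rng.normal()   # stationary AR(1)
        if adfuller(x, regression="c")[1] < 0.05:
            rejections += 1

    # With rho this close to one, the test rejects the (false) unit root
    # hypothesis only a minority of the time at this sample size.
    print(f"rejection rate: {rejections / trials:.2f}")
    ```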

    Second, while the CO2 forcing series is convincingly I(1) over a range of tests and periods for trend-plus-intercept specifications, it is not so convincingly I(2) for an intercept specification (or intercept plus trend, which I don’t think applies for the differences). The I(2) test should hold for an intercept specification but doesn’t, at least in the post-1970 period. So I just don’t think of the CO2 forcing as I(2). I am not convinced by either the rejections or the failures.

    Third, temperature series are even more problematic. I have worked with over 100 daily temperature series at US stations, and I have never found a unit root. Of course, we could have a data generating process that doesn’t show the unit root at the daily frequency but does at the annual. Still, it would be nice to understand why. Also, the tests of unit roots for temperature levels are nowhere near as convincing as those for CO2 forcing levels. For instance, for the 1900-2008 period the Hadley, NOAA, and NASA series reject for an intercept-plus-trend specification, but the CRU series doesn’t. All fail to reject for an intercept or no-exogenous specification.

    Now suppose we look at the 1979-2008 period so as to also incorporate the satellite data. With the ADF we again have failures to reject for the no-intercept, no-trend specification for the surface data, but we have strong rejections for the satellite data. Interesting. So how about a constant term? Same thing, except the surface data get closer to rejection while the satellite data show weaker rejections. More interesting. Now with the intercept-plus-trend specification of the ADF, all series reject the unit root hypothesis. Aha.

    But let us not stop here. The ADF is, as has been pointed out, not the only test. Suppose we use the ERS (Elliott-Rothenberg-Stock) test with the same intercept-plus-trend specification. With the ERS test there is a failure to reject for all series (two surface series rejected at the 10% level). It seems that aggregate temperature series can’t make their minds up as to what they really are, at least statistically.

    So does an aggregate, annual (or longer period) temperature variable have a unit root? Maybe.

    Fourth, and here is a point for those readers inclined to dismiss Messrs. Beenstock and Reingewertz: unit roots are a knife-edge criterion that is not well suited to testing against close alternative hypotheses, AND series close to integrated status may work sufficiently well in a co-integration specification. For those of you who are a little foggy on co-integration, the following non-technical paper (“A Drunk and Her Dog”) offers a nice description:

    Click to access Murray93DrunkAndDog.pdf
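
    The drunk-and-dog metaphor is easy to reproduce in code. A minimal sketch (Python with statsmodels; the simulated series are hypothetical, chosen only to mimic the metaphor):

        # Two individually non-stationary random walks that are cointegrated:
        # the 'dog' never strays far from the 'drunk', so their difference is
        # stationary even though each path wanders without bound.
        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(0)
        n = 500
        drunk = np.cumsum(rng.standard_normal(n))  # pure random walk
        dog = drunk + rng.standard_normal(n)       # stays a 'leash length' away

        t_stat, p_value, crit = coint(drunk, dog)
        print(f"Engle-Granger cointegration test p-value: {p_value:.4f}")
        # A small p-value rejects 'no cointegration': the two I(1) series
        # share a common stochastic trend.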

    Random walks have variances that grow without bound; highly autocorrelated processes have very large ones. Large variances make statistical significance, in the large sense of the term, difficult to achieve. One can dismiss Beenstock and Reingewertz because they are wading into an area of statistical uncertainty, but any of us who are making inferences about temperature trends should realize we are all wading in the same waters. Macroeconomists have been wading in those waters since the early 1970s, when Box and Jenkins (statisticians, not economists) told the profession to wake up: the time series properties are as important as the structure. It was a message that was much resisted, but guys like Granger and Engle and a whole bunch of others gave enough guidance and the profession — well, at least the econometrics — changed for the better. Maybe — and I truly don’t know — Messrs. Beenstock and Reingewertz and others of that time series inclination are simply doing for climatology what Box and Jenkins did for macroeconomics.

  577. adriaan Says:

    @Eduardo,

    Ref to the GRL paper please, GRL has been mentioned so often.

    [Reply: How unusual is the recent series of warm years? BV]

  578. eduardo Says:

    VS wrote
    ‘There is more. Take a look at Beenstock and Reingewertz (2009). They apply proper econometric techniques (as opposed to e.g. Kaufmann, who performs mathematically/statistically incorrect analyses) for the analysis of such series together with greenhouse forcings, solar irradiance and the like (i.e. the GHG forcings are I(2) and temperatures are I(1) so they cannot be cointegrated, as this makes them asymptotically independent. They, therefore have to be related via more general methods such as polynomial cointegration).’

    I would like to pose this question for my understanding. Assume we have a discrete process C which is found to be I(2), and which is the result of discrete sampling of a continuous function c(tau). Now consider t(tau), the time derivative of c, and its discrete sampling T. In other words, T would be the time-difference of C. Would T be I(1)? I would say so.
    You would conclude, according to the above paragraph, that C and T are independent, and yet they are physically bound to each other, being just the samplings of c and t.

    Am I wrong? Thank you

  579. eduardo Says:

    @ adriaan

    http://www.agu.org/pubs/crossref/2008/2008GL036228.shtml

  580. Shub Niggurath Says:

    Scott:

    From above

    “…about $80 billion per year in subsidies to fossil fuel companies (far greater than green subsidies, BTW), and then the cost of climate change as a result of this carbon is not factored in, etc.”

    If the only rationale for mass deployment of new modes of energy production is to avert climate change, then the framework for our analysis is

    1) The newer modes (solar, wind) should be as energy ‘efficient’ as the previous ones
    2) Being so, they should not contribute to CO2

    Having agreed to this (I am sure you will), if one examines the various parameters that contribute to an energy-efficient source, it is clear that the alternative sources fail on several of those parameters, especially energy density and control of output rates.

    The so-called monstrous error therefore lies not in the subsidies themselves, but in the fact that one has to subsidize these inefficient modes of production.

    Subsidies for fossil fuels are meaningless in the context of our present discussion because they are subsumed in the cost of achieving present-day human productivity – and are therefore our baseline.

    Back to the unit-root. :)

    Regards

  581. Willis Eschenbach Says:

    eduardo Says:
    March 22, 2010 at 00:51
    @ adriaan

    http://www.agu.org/pubs/crossref/2008/2008GL036228.shtml

    Eduardo, would I be correct in saying that the conclusion from your paper is that the temperature series is not stationary?

    And given that (as far as we know) global temperatures have been rising since the Little Ice Age, would I be correct in saying that this is not a surprising finding?

    Thanks,

    w.

    [Reply: The LIA ended when solar activity picked up again. It has long since stabilized (i.e. no trend in solar indexes since the 1950s). The warming since then has nothing to do with the LIA having ended almost 100 years earlier. See also here. BV]

  582. adriaan Says:

    @Eduardo,

    Thanks for the link, I downloaded the paper and the additional info. I will be back on it.

  583. adriaan Says:

    @Eduardo,

    It looks as if the conclusion of your paper is only valid for averaged data. On a single-station basis, your study shows that the observed warming does not lie outside the natural variation. Does this mean that warming is dependent upon averaging? I have my doubts about the use of averaging and gridding of temperature data. As for the memory effect, have a look at the Bern CO2 model (which you are familiar with). It implicitly implies a memory effect, without which the buildup of anthropogenic CO2 would not be possible.

    [Reply: It shouldn’t come as any surprise that the variability in a single location is much larger than that in the global average. The consequence is that statistical significance is not reached as quickly for single locations (or short timescales). BV]

  584. Alan Wilkinson Says:

    Marty, for my clarification there was considerable earlier discussion about the necessary length of data for useful application of unit root tests. VS pointed out the inadequacy of short runs such as you describe (eg since 1979).

    What makes you think TSA of such data subsets has any power?

  585. eduardo Says:

    @ Willis,

    Dear Willis,

    the conclusion is that the recent clustering of record annual temperatures is very unlikely in a long-term-persistence process (fractional differencing process), and therefore points to either a non-stationary or deterministic trend in the period analyzed.

    This statistical analysis cannot discriminate between causes, as only temperature data were analyzed, not forcing data. You say, however, that temperatures have been increasing since the LIA. That’s true; the question is to identify and quantify the driver. It is not sufficient, I think, to just say ‘they have been increasing’.

    I think that, to judge a theory (CO2 or sun), one should require the same level of accuracy and/or explanatory power.

  586. eduardo Says:

    @ adriaan,

    The outcome of a test depends on the signal-to-noise ratio present in the data. In averaged data much of the local random variation is filtered out, so it may well be that the level of significance is lower for local than for averaged data, although the signal strength may be the same in both.

    Memory. A test cannot demonstrate a hypothesis, it can only reject one. In this case, the hypothesis was that the clustering of record years could be due just to a long-term persistence model.
    So temperature may well have a memory effect, quite likely.

  587. Willis Eschenbach Says:

    eduardo Says:

    March 22, 2010 at 01:57
    @ Willis,

    Dear Willis,

    the conclusion is that the recent clustering of record annual temperatures is very unlikely in a long-term-persistence process (fractional differencing process), and therefore points to either a non-stationary or deterministic trend in the period analyzed.

    This statistical analysis cannot discriminate between causes, as only temperature data were analyzed , not forcing data. You say, however, that temperatures have been increasing since the LIA. That’s true, the question is to identify and quantify the driver. It is not sufficient, I think, to just say ‘they have been increasing’.

    I think that, to judge a theory (CO2 or sun), one should require the same level of accuracy and/or explanatory power.

    Thanks, eduardo. I agree that there is a trend in the data (non-stationary or deterministic, as you point out).

    However, since the temperatures have been increasing since the LIA, it seems unlikely that CO2 is the driver … which does not, of course mean that the sun is the driver either.

    It simply means that we don’t know what the driver is.

    [Reply: non sequitur. BV]

  588. VS Says:

    Hi eduardo,

    Do you, by any chance, happen to be Eduardo Zorita, one of the authors?

    I stated up there that I haven’t read the paper carefully; my apologies if I missed something crucial (in my defense, at that point, I didn’t think this debate would evolve this far).

    However, I did notice that a stationary process is assumed (i.e. no unit root).

    What’s the basis for concluding that, in light of these test results and these auxiliary test results, as well as all the literature pointing to the presence of a unit root?

    I’m interested in your motivation. Now, I also took a closer look at your paper. I cite from page 2:

    “For the process to be stationary d must lie between 0 and 0.5.”

    Then you write:

    “The Whittle method gives values slightly larger than 0.5, even disregarding the period from 1960 onwards, for all three global records.”

    Your calculations imply nonstationarity, and are completely in line with my results posted above. But then you guys go on, and do this:

    “Considering these possible uncertainties, it will be assumed here that d is smaller, but very close, to 0.5”

    And simply assume an I(0) process with high persistence. I hope you can understand my initial reaction, although I do apologize for the tone (it was a ‘different’ debate up there :)
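
    For readers following along, the role of the d = 0.5 boundary can be made concrete with a minimal simulation sketch (Python with numpy/scipy; truncating the infinite moving-average expansion at the sample size is a simplifying assumption):

        # Simulate a fractionally integrated ARFIMA(0,d,0) series via the
        # truncated MA(infinity) expansion of (1-L)^(-d). The MA weights are
        # psi_k = Gamma(k+d) / (Gamma(d) * Gamma(k+1)), computed in log space.
        import numpy as np
        from scipy.special import gammaln

        def frac_integrated(d, n, rng):
            k = np.arange(n)
            psi = np.exp(gammaln(k + d) - gammaln(d) - gammaln(k + 1))
            e = rng.standard_normal(n)
            return np.convolve(e, psi)[:n]

        rng = np.random.default_rng(1)
        for d in (0.45, 0.55):            # just below / just above the boundary
            y = frac_integrated(d, 2000, rng)
            print(f"d={d}: var(first 500) = {np.var(y[:500]):.2f}, "
                  f"var(all 2000) = {np.var(y):.2f}")
        # For d < 0.5 the sample variance tends to settle down as n grows;
        # for d > 0.5 it keeps growing, the signature of non-stationarity.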

    So, allow me to summarize:

    – The Whittle method (i.e. your calculations) implies non-stationarity
    – The literature widely reports the presence of a unit root (i.e. non-stationarity)
    – Most test results (see links above) imply the presence of a unit root (i.e. non-stationarity)

    Why do you then assume stationarity (i.e. I(0) with high persistence) in your analysis? I’m very interested in your motivation in light of the above.

    Finally, wouldn’t you agree (analytically) that the presence of a unit root in the GISS series in fact invalidates the conclusions of Zorita, Stocker and von Storch (2008)?

    ***

    As for your second post. I have to point out that I was being sloppy there and made a ‘typo’ (just noticed). Polynomial cointegration, see also Engle and Yoo (1991), is a method for relating I(1) and I(2) series. The difference of order of integration implies that the series cannot be integrated linearly. An attempt to cointegrate them polynomially is then tested and rejected by BR.

    So, perhaps I misinterpreted your question (I think we are using different ‘jargon’ :), but what I am trying to say here is that the different orders of cointegration don’t make series completely independent (simply linearly independent). What has been tested by BR, is that ‘general’ dependence (i.e. via polynomial cointegration).

    I’m very interested in your input, especially since your findings are so widely cited (and used as an argument in a lot of on and offline ‘discussions’ :).

    PS. In light of your answer to adriaan, I do have to point out that the KPSS test in fact rejects stationarity. See linked test results.

    ————–

    Hi Marty,

    Again, see the links in the answer to eduardo for test results. On the basis of these test results, with this series (not the differently structured daily series you describe), we conclude that the series contains a unit root.

    Ergo, this series is non-stationary and a unit-root based approach is justified.

    You mention the low power of the ADF test (i.e. it has a low probability of rejecting a false H0, i.e. the chance of a type II error is high). However, we also tested the unit root hypothesis via the KPSS test (see test results), which takes stationarity (i.e. the absence of a unit root) as the H0. Here we reject stationarity (at 5% and 10% sig).
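
    For what it’s worth, the two tests are easy to run side by side; a minimal sketch (Python with statsmodels, on a simulated random walk rather than the actual GISS data):

        # ADF takes a unit root as H0; KPSS takes stationarity as H0. Running
        # both on the same series gives complementary evidence.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller, kpss

        rng = np.random.default_rng(7)
        y = np.cumsum(rng.standard_normal(150))   # a random-walk stand-in

        adf_p = adfuller(y, regression="c", autolag="AIC")[1]
        kpss_p = kpss(y, regression="c", nlags="auto")[1]

        print(f"ADF p-value:  {adf_p:.3f}  (large => cannot reject unit root)")
        print(f"KPSS p-value: {kpss_p:.3f}  (small => reject stationarity)")
        # When ADF fails to reject AND KPSS rejects, the two tests agree the
        # series behaves like an integrated process.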

    What’s your opinion on that?

    PS. Note that daily/monthly data need to be properly corrected for seasonality (using TSA methods). This greatly complicates the analysis.

    ————–

    Hi Willis Eschenbach,

    At WUWT you asked me about the Hurst coefficient. I snooped around quickly (by no means a conclusive query) and my first inference is that the calculations of the Hurst coefficient in fact assume stationarity.

    Can somebody please correct me if I’m wrong here (good chance of that being the case)?

    ————–

    NB1. Thanks everybody for the input, I like where this is going. I still owe some people a reply, especially Paul_K (in that particular case, I actually have a couple of questions. Very interesting post! :). I’ll try to get to them soon.

    NB2. David Stockwell, welcome! Could you please send an (empty or not :) email to vs dot metrics at googlemail dot com? Thanks!

  589. VS Says:

    typos:

    “The difference of order of integration implies that the series cannot be integrated linearly.”

    cointegrated

    “but what I am trying to say here is that the different orders of cointegration don’t make series completely independent”

    integration (without co :)

    doh, it’s late

  590. John Whitman Says:

    ”””’HAS Says: March 21, 2010 at 11:18 – But not because the critic doesn’t understand that rejecting one particular complex model doesn’t mean the end of the world. It happens every day in real science as we stagger on in this grand endeavour to understand the physical world.””””

    HAS,

    Well put, ‘this grand endeavor to understand the physical world’. Thanks BART for supplying a good venue at your blog. Keep at it.

    In the Western civilization ‘this grand endeavor to understand the physical world’ started its focus in the ancient Greek era and (with fits and starts) has continued to this very blog thread.

    VS, thanks for introducing the Augmented Dickey-Fuller tests and unit roots into the ‘grand endeavor’ on this blog threat. You put us on a steep understanding curve.

    John

  591. John Whitman Says:

    Apologies, spell check error in my ‘John Whitman Says: March 22, 2010 at 04:25 ‘ comment.

    The word ‘thread’ meant, not ‘threat’.

    Strange mistake, I agree. Sorry.

    John

  592. HAS Says:

    Re BV’s reply to my comment at March 21, 2010 at 11:18

    “Perhaps those calling themselves “skeptics” should also take the scientific methods into account, and apply their scepticism in all directions. This book chapter (http://www.lpl.arizona.edu/resources/globalwarming/documents/oreskes-chapter-4.pdf ) gives an excellent overview of the scientific methods, and how climate science stacks up against it. Slideshow is here (http://www.lpl.arizona.edu/resources/globalwarming/documents/oreskes-on-science-consenus.pdf start at slide nr 30 to jump to the philosophy of science part).”

    BV, what you have linked to is advocacy for the “scientific consensus on climate change”, not for the scientific method. These are different things, and (how do I put it gently) the point of my comment was that to have a productive debate we need to focus on the latter rather than the former (for or against, from either side).

  593. Tim Curtin Says:

    A C Osborn said on Global average temperature increase GISS HadCRU and NCDC compared

    Has anyone looked at a Raw Temperature Series to see if it exhibits the same Statistical characteristics?

    and again on March 20, 2010 at 15:59:

    Has anyone tried the [unit root tests] on an unadulterated Temperature series from one thermometer?
    [Reply: “One thermometer” cannot measure global avg temp. Keep baseless accusations (adulterated; massaged to death) at the door before entering. Thanks! BV]
    But AC Osborn raises an interesting issue. Why could the IPCC not offer in its AR5 the climate statistics from each one of all (c1200) stations in the current GISS and HadCRUT sets that have at least 50 years of unbroken records to date, classified by their respective max & min temperature and rainfall etc for each of the last 50 years, with the SSR and [CO2] at each? Let us do the trending, averaging, unit rooting, and gridding.

  594. Alex Heyworth Says:

    Any time I see someone mentioning the scientific method from now on, I am going to point them in the direction of an excellent book I have recently read: Henry Bauer’s “Scientific Literacy and the Myth of the Scientific Method”.

    A good quote from an Amazon review:

    A key point stressed by Prof. Bauer in different contexts is that the power of science is that it is agreed on by consensus, but that does not always mean that the consensus is right, again because humans are fallible, and because data is *always* interpreted according to a theory or some other bias. The author, as have many other philosophers of science, refutes the common belief that in science knowledge is gained exclusively by strict Baconian impartial induction. Examples are cited where scientists could not accept data obtained wholly by scientific methods because it didn’t fit their prejudices.

    The chapter called “The So-Called Scientific Method” is the best I’ve read on why the empirical scientific method, while a wonderful ideal to strive for, is nevertheless a myth. Prof. Bauer makes many important points, such as that some sciences (physics) are theory-driven, while other sciences are observation-driven (geology); some sciences can make precise theories through specific experiments (physics and chemistry), while other sciences (cosmology and paleoanthropology) cannot run experiments and are thus very “data deficient.”…

    Another chapter that is also outstanding is the following chapter, “How Science Really Works.” Prof. Bauer uses as the main theme the excellent analogy devised by Michael Polyani of scientific problem solving as a puzzle of different teams communicating with each other, getting at the truth, piece by piece, separately but in tandem nevertheless. Another theme that is very helpful in this chapter is the author’s cogent distinction between textbook science and frontier science. Textbook science is almost always reliable because it has passed the test of time through repeated verification. On the other hand, frontier science, which is unfortunately what is usually reported in the news precisely because it is “new” and exciting, often turns out to be dead wrong. The chapter also discusses those levels of science between these two “extremes.” After reading this chapter I feel that I now have a much clearer way to assess the truth of whatever science I might be reading about.

    An excellent read, I’d go so far as to say that if all followers of climate blogs read it, it could reduce the level of misunderstandings and falsehoods by perhaps three quarters. For a start, it would get rid of most of the rubbish about science not being about consensus and the Popperian falsification stuff.

    [Reply: Sounds interesting indeed. I wrote about the relevance of consensus here. BV]

  595. ScP Says:

    Tim, A C Osborn, wander over to the Chiefio, he loves that kind of thing.

    http://chiefio.wordpress.com/

  596. HAS Says:

    Alex Heyworth, Bauer’s a truly interesting character!

    http://thetruthbarrier.com/essays/46-john-strausbaugh/168-science-and-scientism

  597. Anonymous Says:

    @ VS

    Dear VS,

    yes, I am one of the authors. I was really not aware that the GRL paper has been or is being discussed. Let me summarize the paper by first saying what it doesn’t say. It doesn’t say whether the global mean temperature is an integrated process or not. This aspect is interesting in itself, also from the physical point of view, but the objective of the paper was different. I also think that the design of the tests on unit roots deviates from the original discussion (see last paragraph).

    I see the point of discussion as follows: some people claim that CO2 is causing warming, as seen in the temperature trends; other people claim that the trends are of a stochastic nature, caused by either a unit-root process or by a fractional-difference process. What should be tested, in my opinion, is: (1) can the global mean temperature be a unit-root or fractional-differencing process *in the absence of anthropogenic forcing*? and (2) assuming the global temperature shows stochastic trends in the absence of anthropogenic forcing, what is the likelihood of observing the 20th century temperature trends (or the clustering of record years, for that matter)? In other words, if natural variations can be described by a unit root or fractional differencing, could these natural variations give rise to the observed trends?
    The GRL paper simply explored part of the second question: if the natural variations of the global mean temperature are a fractional-difference process, what is the likelihood of observing the recent clustering of record years? This probability turns out to be very small.

    To the question of whether the observed global mean temperature is a unit-root process: it may be; some tests indicate it is, some tests indicate it isn’t. If I turn the heating in my room continuously higher and higher, the temperature will be a unit-root process. Does this demonstrate that the heating has no influence on temperature? Obviously not. No, actually it doesn’t demonstrate anything interesting, and it bears no relevance for the testing of observed temperature trends. It would be interesting *if* the temperature contained a unit root *in the absence of heating*. This reasoning also applies to the observed global annual temperature: a unit-root process is relevant for the significance of trends *if* the global temperature is a unit-root process under the null hypothesis, namely that CO2 is not driving the temperature. So to be informative, all the tests for a unit root should be conducted either for periods where the anthropogenic influence was not present, or in control simulations with climate models.
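
    The heated-room analogy can be checked in a few lines; a minimal sketch (Python with statsmodels; the forcing slope and noise level are arbitrary illustrative choices):

        # A series driven by a steadily increasing deterministic forcing can
        # look like a unit-root process to an ADF test that omits the trend.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(11)
        n = 150
        forcing = 0.01 * np.arange(n)                  # heater turned up each step
        temp = forcing + 0.3 * rng.standard_normal(n)  # temperature = forcing + noise

        p_const = adfuller(temp, regression="c")[1]    # no trend term allowed
        p_trend = adfuller(temp, regression="ct")[1]   # trend term allowed
        print(f"ADF p-value, constant only:  {p_const:.3f}")
        print(f"ADF p-value, constant+trend: {p_trend:.3f}")
        # The constant-only test typically fails to reject the unit root even
        # though the series is trend-stationary by construction.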

  598. VS Says:

    Hi Eduardo,

    Thank you for your reply! I do have a couple of comments / questions.

    —————-

    First you write:

    “Let me summarize the paper by first saying what it doesnt say. It doesnt say whether the global mean temperature is an integrated process or not.”

    I disagree here. You assume temperatures not to be integrated of the first order (but rather fractionally integrated). I think that also amounts to ‘saying’ it.

    —————-

    Furthermore, that assumption is contradicted by the test results (and more test results) for the series in question. Also, your own calculations indicate that the series is non-stationary (which is in line with those test results). I cite, again, from your paper:

    “For the process to be stationary d must lie between 0 and 0.5.” and “The Whittle method gives values slightly larger than 0.5, even disregarding the period from 1960 onwards, for all three global records.”

    In other words, what you have stated in your reply is your opinion on the structure of the temperature series, which is clearly contradicted by the observations (i.e. contradicted by analytical facts: both testing and your own calculations).

    Isn’t formal testing exactly that which turns opinion into science?

    Now, from your post I infer that you have the same opinion on unit roots in temperature series as Kaufmann et al (2006). May I remind you however that while Kaufmann indeed holds the same idea on the ‘essential’, stationary, nature of temperature series, he does respect the test results and in all of his analyses he treats the global average temperature series as an I(1) process.

    —————-

    “So to be informative all the tests for a unit-root should be conducted either for periods where the anthropogenic influence was not present, or in control simulations with climate models”

    Would you mind sharing the formal method (e.g. derivation, simulation) used to arrive at this conclusion?

    I find it very awkward also that you start by assuming that anthropogenic forcings are warming the planet, and therefore we cannot test the global mean temperature series for unit roots. At the same time, the first step we need to take in order to (empirically) assess whether anthropogenic forcings are indeed warming the planet is to test the global mean temperature series for unit roots.

    ??

    —————-

    Finally, you wrote:

    “The GRL paper simply explored part of the second question: if the natural variations of the global mean temperature is a fractional difference processes what is the likelihood of observing the recent clustering of record years ? This probability turns to be very small.” (bold added)

    This implicitly answers the question I posed here, namely:

    “Finally, wouldn’t you agree (analytically) that the presence of a unit root in the GISS series in fact invalidates the conclusions of Zorita, Stocker and von Storch (2008)?”

    So, assuming the series contains a unit root, the conclusions on the ‘probability’ of modern warming arrived at by Zorita, Stocker and von Storch (2008) are invalidated.

    Would you be so kind as to also confirm this explicitly?

  599. Anonymous Says:

    Dear VS,

    I think you are not understanding correctly the logic of the GRL paper. We were *not* assuming that the global annual temperature was not a unit-root process. We were exploring the consequences of the *natural* variations of the global mean temperature being a fractional-difference process. The fact that the observed record may show a unit root does not invalidate the paper. The conclusions of the paper would be invalidated if the *natural* variations of the temperature (i.e. without anthropogenic contribution) were shown to be able to cause a unit-root process.

    VS said
    ‘I find it very awkward also that you start by assuming that anthropogenic forcings are warming the planet, therefore we cannot test the global mean temperature series for unit roots. At the same time, the first step we need to take in order to (empirically) assess whether anthropogenic forcings are indeed warmig the planet, is test the global mean temperature series for unit roots.’

    I do not agree with your logic. The logic is not to assume that CO2 is warming the planet. The underlying logic here, and also in the basic statements of the IPCC, is to test the hypothesis that ‘*natural* variations are warming the planet’. Then you assume some models for the structure of those natural variations: white noise, red noise, fractional differencing and unit root, or natural variations based on control simulations with climate models, etc. Then you try to rule out these null hypotheses one by one. The GRL paper was focused on two of them, red noise and fractional differencing.
    Once one of these hypotheses cannot be ruled out, the story is not finished. One needs to explain what physical mechanism can cause it. If you find that a unit-root process can describe the observed temperature trend, ok, perfect. Now you need to suggest what *natural* mechanisms can cause a unit-root process. To say the trend occurs because temperature is a unit-root process does not say anything by itself. You could as well say that it is caused because Jupiter wishes it to happen.

    This is the eternal scientific logic. A hypothesis or theory can never be proven, it can only be falsified. The CO2 influence on temperature will never be proven in the logical sense. We can only falsify all other hypotheses known to us. I think this is nothing new.

  600. Alex Heyworth Says:

    HAS, thanks for the link to the Henry Bauer interview. I knew he had a broad background, but did not realize that he was quite so eclectic. What a character indeed! His book that I recommended gives little clue to any of this.

  601. MartinM Says:

    You assume temperatures not to be integrated of the first order (but rather fractionally integrated). I think that also amounts to ’saying’ it.

    Seriously? You don’t see the difference between assuming a particular model for the purposes of testing it, and asserting that all other models are incorrect?

    So, assuming the series contains a unit root, the conclusions on the ‘probability’ of modern warming arrived at by Zorita, Stocker and von Storch (2008) are invalidated.

    Since the probability arrived at by Zorita et al is explicitly conditioned on a particular class of model, of course it’s not invalidated by the existence of other models.

  602. VS Says:

    Hi Eduardo

    (1) the probability of seeing the ‘record years’ is conditional on the stationarity (and fractional integration ‘order’, also assumed) of the GISS series.

    (2) test results, and your own calculations, and the literature, point to non-stationarity

    (3) ergo, your probability equations are incorrect

    VS

  603. MartinM Says:

    No. P(A|B) doesn’t change just because B happens to be false. This is a remarkably simple premise, and if you don’t understand it, you really have no place discussing anything at all involving statistics or the scientific method. Zorita et al tested two particular models and found them wanting. Pointing out that those models are incorrect quite obviously doesn’t contradict their results; that’s precisely what they found.

  604. VS Says:

    Hi MartinM,

    I’m fine with the academic exploration of the probability: P(Y=y_observed|Y=stationary)

    I’m simply pointing out that our observations tell us that Y is in fact not stationary.

  605. MartinM Says:

    Yes, but that has absolutely nothing to do with Zorita et al. Are you now withdrawing your claim that their conclusions are invalid?

  606. VS Says:

    Ah, I see: “(3) ergo, your probability equations are incorrect”

    Oops, that’s a different discussion :)

    Make ‘incorrect’: ‘of no empirical relevance’.

    Sorry, my bad.

  607. Marty Says:

    Alan Wilkinson, Re: length of series. Yes, the sample size makes a difference, a big difference. From memory, let me offer a quick example: for an alternative hypothesis rho of something like 0.95, the power of the ADF (or maybe the Elliott-Rothenberg-Stock test) at 5% significance was around 10% for N=25 and over 80% for big N, probably over 250. So yes, I don’t think that a sample of 29 has much power (although I am a semi-believer in the nature of “highly insignificant” test results, at least where they exist, remembering that power is defined in conjunction with the size of the test). I was just offering the relative rejections or not over different periods, data sets, and specifications (of something that should be roughly the same in all) as an example of the problematic nature of dealing with unit roots. They are like the stock market: you know you should use it because it offers the best (long-term) returns, but that doesn’t mean you are always confident with what you get.

    VS, (first, a thank you for putting the unit root and co-integrated series fat, so to speak, into the climate fire, even though I suspect I am much more agnostic about the significance — by “significance” I do not mean statistical but generic, as in importance, such as passing the interocular test: when a result is really important it hits you between the eyes, which, of course and alas, does not always happen — of any particular co-integration result. This particular “fat” was needed and we will just have to see what comes.)

    Regarding the KPSS test, I don’t know enough about it (haven’t read the article or anything more than a manual) to argue its relative merits. I do worry about the spectral estimation and the bandwidth, because these make a difference (look at the Hadley data with an Andrews bandwidth), but figuring out what is going on there is out of my league these days (I’m an old guy). I also don’t know how non-stationary but non-unit-root conditions, say heteroskedasticity, factor into the critical region. But just for the heck of it I tried the test against an AR1 process, covariance stationary, with a randomly growing exogenous driver (basically a simple trend), and the test rejected almost every time. Presumably I violated the maintained hypothesis, but that’s the problem: so does the real world, be it economics or climate. (I am not dismissing the KPSS test, only noting that it may not be perfectly applicable to the issue here.)

    Regarding your more general belief in the unit roots of the temperature series, I am not saying you are wrong. What I am trying to do is tone down your certainty, and let me note that you seem to me to be doing some of that yourself. Your first comments here — and I confess I have not followed the thread all the way — conveyed the tone of certainty of, say, a Paul Samuelson in 1968 when he said that we have fiscal and monetary tools to fight unemployment and inflation simultaneously. And of course, then came the 70s. I am not trying, like some unfavorable referee a nasty editor pulled out to axe your article, to reject your results or views. Rather, I am saying that they are interesting (in the academic sense) even when there is a degree of skepticism about the maintained hypothesis. In the words of Edward Leamer (taken up by Peter Kennedy), I suspect a little “sinning in the basement”, but like both I am not terribly bothered by it. I just don’t want preaching from the econometric temple to be confused with the actual practice.

    We who believe in statistics should be careful. Since most of our statistics is based on measurable spaces, we may get preached to by the physicists like Georg Cantor was by Leopold Kronecker: “Die ganze Zahl schuf der liebe Gott, alles Übrige ist Menschenwerk.”

    God created the natural numbers, all else is the work of man.

    And as a physicist once told me, there are fewer than 10^100 elementary particles, so maybe old Leopold was right and there goes multivariate statistics. :-)

  608. A C Osborn Says:

    A C Osborn Says:
    March 21, 2010 at 17:59

    BV, your reply has absolutely nothing to do with the first or second version of my question. I am not trying to have one temperature series represent the globe.

    My point is that the Global Temperature series is not NATURAL.
    It is Adjusted, Gridded, Averaged by Grid and then averaged overall.
    By which time it bears no resemblance to a Natural Raw Temperature.

    So my question is, Does a Natural raw Temperature Series have the same Statistical characteristics as the Global temperature series?

  609. Josh Says:

    VS, you inspired me to do a cartoon of you

    http://www.cartoonsbyjosh.com

    I am probably wildly off the mark and I hope I don’t offend anyone [except Tamino ;-) ]

    Do drop me an email – I would very much like to get in touch.

  610. Kweenie Says:

    “Does a Global Temperature Exist?”

    Click to access GlobTemp.JNET.pdf

  611. Marco Says:

    @Kweenie:
    Ask McKittrick how he handled missing values for the two averaging methods. Then ask him why he used two different methods for handling missing values. Don’t be surprised if he repeats what he told Tim Lambert: “oh, that makes it four different methods”!

  612. MartinM Says:

    “Does a Global Temperature Exist?”

    How the hell did that get published?

  613. AndreasW Says:

    I totally agree with Bishop Hill:

    Statistics wasn’t supposed to be this much fun!

    VS

    Really interesting debate you started.

    The IPCC’s logical tactic has been:

    We can’t prove CO2 is warming the planet, but we can’t prove any other factor is warming the planet either, so it must be CO2 at a likelihood of 90%.
    Anonymous says he can’t prove a hypothesis, only disprove it. So he chooses to disprove some hypotheses about “natural warming” and finds them not true. Fine. But the flipside to that coin is that you can test the hypothesis about CO2, which I think is the point made by VS. If the test says the hypothesis about CO2 is false, it’s game over. You don’t need to understand why the planet is warming in order to disprove CO2.

    [Reply: AGW is not falsified by the presence of a unit root. BV]

  614. michel Says:

    How about you try to explain the large temp changes in the past (eg the Phanerozoic) without a substantial effect of CO2?
    Logical fallacy again. If CO2 is not causal, you cannot explain it with CO2. So you are begging the question you set out to prove.

    What you’re really saying is that we have a rise in CO2, and we have a rise in temp, and we have some explanation about how, despite the one following the other, the relation was causal. But this is not an argument for CO2 having the effect you need. If you use it like that, you’re going in a circle. What you need is some independent evidence, other than the previous warming you are trying to account for, that CO2 really can do that. And for that, the time lags are a real problem.

    [Reply: I guess chickens don’t come out of eggs in your world? BV]

  615. MartinM Says:

    But the flipside to that coin is that you can test the hypothesis about CO2 which i think is the point made by VS. If the test says the hypothesis about CO2 is false it’s game over.

    Not only do the tests VS mentions not falsify AGW, they couldn’t possibly do so. They’re just not suited to the task.

  616. MartinM Says:

    Logical fallacy again. If CO2 is not causal, you cannot explain it with CO2. So you are begging the question you set out to prove.

    …what? You appear to be suggesting that assuming a particular model to see if its output matches observation is logically fallacious. Well, there goes science, then. Fantastic.

  617. claw Says:

    Holy smokes! I finally finished the thread (for now). Very interesting reading and I never thought I’d say that about statistics. Great questions asked and answered. I think whbabcock has a very good summary of the thread even if it is in the middle.

    Sure would be nice to see this kind of (usually) civil debate at any other blogs (both pro and con). Thanks for hosting, Bart.

  618. GDY Says:

    Bart, VS and others – thank you for the constructive, educational dialogue. As a relative newcomer (and somewhat well-educated layperson) to the topic, I am trying to listen to the arguments of substance from both the ‘pro’ and ‘sceptic’ AGW camps. I applaud those of you who have attempted to further our understanding of the world we live in.

    VS – when would the stat tests show a trend in a meaningful ‘simulation’ of a future non-stationary climate process with some ‘trend’ added in? Is that possible to do? So generate some autoregressive non-stationary sequence, then adjust all numbers up by some amount (0.1 degree, 0.2 degree, etc). How long would it be before the tests rejected ‘no trend’ for any particular trend introduced (time/magnitude relationship)?
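
    GDY’s thought experiment is straightforward to sketch (Python with statsmodels; the trend size, AR(1) noise parameters and significance test are illustrative assumptions, not a claim about the real climate system):

        # How many years of data before an OLS trend test reliably detects a
        # deterministic trend buried in persistent AR(1) noise?
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)

        def detection_rate(trend_per_decade, n_years, rho=0.6, sd=0.1, trials=300):
            hits = 0
            for _ in range(trials):
                noise = np.zeros(n_years)
                e = sd * rng.standard_normal(n_years)
                for t in range(1, n_years):
                    noise[t] = rho * noise[t - 1] + e[t]
                years = np.arange(n_years)
                y = (trend_per_decade / 10.0) * years + noise
                X = sm.add_constant(years)
                fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
                hits += fit.pvalues[1] < 0.05          # is the slope significant?
            return hits / trials

        for n in (10, 20, 30, 40):
            print(f"{n} years: trend detected in {detection_rate(0.17, n):.0%} of runs")
        # Short windows rarely detect a 0.17 deg/decade trend; multi-decade
        # windows almost always do.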

    Bart – do you have any reservations about immediate public policy responses given the limited number of data points we have for the proposed/strongly suspected GHG forcing trend (including dramatic restructuring of our means of production and all the potential unintended consequences that could unleash)? And further, can we safely conclude that we understand the nature of the relationship between CO2 and temperature? Isn’t it possibly non-linear? Or, given what VS is saying with regard to the actual temperature record, possibly still no relationship?
    I do also find the idea of the “global energy budget” intellectually reasonable, as well as the specific evidence of more energy coming in than going out. Do we have reliable historical data on deep ocean temps, or is this something for which we have good data only post-ARGO network? Have we solved the Global Heat Anomaly (ie, the Trenberth/Pielke Sr vein…)?

    Thanks again, I apologize if I should have already known these things from other sources!

    [Reply: There is a lot we know (esp. the general picture) and there’s a lot we don’t know exactly (esp. specific details) about climate change. But we’ll have to decide on a course of action, whether that’s BAU or some emission reduction path. Given what we know, I think it’s prudent to reduce our emissions. Based on what the science says, the risks are real, and the time lags both in the energy system and the climate system add to that risk. (Just as quitting smoking when you’re being driven into the ICU is a bit on the late side; the effects are cumulative.) See also this post. BV]

  619. Kweenie Says:

    Marco and MartinM’s reactions are more predictable than climate. Actually I was wondering what the “other team” has to say about this paper.

  620. Marco Says:

    @Kweenie:
    Seriously, ask McKittrick what he has done. Then make up your own mind.

  621. AndreasW Says:

    MartinM

    I thought the tests showed that the correlation between CO2 and temperature is spurious. You say the tests “couldn’t possibly do so”. How do you mean? Is VS using the wrong tests, or is CO2 the “holy variable” that could never be tested against spurious correlation, or has the proper test not been invented yet?

  622. David S Says:

    I’m shocked at some of the logic here:
    “The underlying logic here, and also in the basic statements of the IPCC, is to test the hypothesis that ‘*natural* variations are warming the planet’”
    and then, if we cannot find a “natural” explanation, to assume that we have ruled out all possible alternatives and therefore it must be CO2. This is exactly the same reasoning that the ancients used to “prove” the existence of God and miracles. If we cannot find a natural explanation, it must be…..

  623. eduardo Says:

    @ David,

    well, you may be shocked, but this is the scientific method.
    Actually, I usually try to see the merits in all arguments from the so-called skeptics, but this time I am really disappointed. It seems to me, and I hope I am wrong, that the logical basis of modern science is not understood by some here.

    I wrote that any particular hypothesis or theory – including CO2 as driver of the present warming – can *never* be logically proven to be right. Theories or hypotheses can only be disproved, when their predictions contradict the observations. So the way science works is by disproving competing hypotheses until one, or none(!), remains. A theory is challenged continuously, and the CO2 theory (or the solar theory, or any other) should also be continuously challenged. The example presented by VS about CO2 being I(2) and temperature being I(1) could be a logical starting point (I do not dispute that), but the technical details are not clear, or at least not accepted by all, as we can see in this discussion thread.

    What I see as a tautology is to say ‘temperature is an integrated process and therefore the observed trend is ‘normal’’. By the same token I could say: temperature trends are caused by Jupiter, so the observed trend is ‘normal’. In other words, being an integrated process is no explanation from first principles at all; it is just a phenomenological, perhaps interesting, description, but nothing more.

  624. eduardo Says:

    One interesting thing that VS could do is to apply the I(2)-I(1) tests to the temperature simulated by the IPCC models for the 20th century, and see if the results of those tests are really different than for the observed temperature. I would be happy to provide the global temperature means from the models, around 20 of them.

  625. David Stockwell Says:

    eduardo: “temperature is an integrated process and therefore the observed trend is ‘normal’” I agree with you about the stats, but this is not the argument, at least of B&R. I see it as a system identification exercise, where the I(n) status hints at a system where temperature is related to the change in CO2 more than the level of CO2. IMHO nothing more, and the physics proceeds from that point.

    Further to testing the B&R idea I ran some independent regressions here http://landshape.org/enm/testing-beenstock/. One of the results was:

    TEMP ~ -0.49(***)+0.06*OO(***) + 0.72*GHG() -11.1*dGHG(***) + 4.0*V() -0.09*SS() R-squared: 0.8709

    (***) means highly significant and () not significant

    In this case the delta GHG (or change in GHG) was a much better explanation of temperature than the level of GHG. In other words, unless I screwed up somewhere, it is independently confirming B&R without recourse to the statistics of unit roots.
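
    For readers who want to try this at home, the quoted regression would look roughly like the sketch below (Python with statsmodels; the file name and the column names temp, oo, ghg, v, ss are hypothetical placeholders for Stockwell’s data, not his actual code):

        # Regress temperature on the level of GHG forcing and on its first
        # difference (dghg), plus the other forcing terms quoted above.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("forcings.csv")     # hypothetical input file
        df["dghg"] = df["ghg"].diff()        # change in GHG forcing

        model = smf.ols("temp ~ oo + ghg + dghg + v + ss",
                        data=df.dropna()).fit()
        print(model.summary())
        # If dghg is significant while ghg is not, the data prefer 'change in
        # forcing' over 'level of forcing', echoing B&R.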

  626. Alan Wilkinson Says:

    I am not sure why we should be so surprised if temperature is I(1). If surface temperature is a proxy for energy, and the net energy flow into the globe’s surface for a year is relatively random (moderated by chance configurations of clouds and convection patterns, for example), then that is exactly what we would expect.

    Now BV will say any increased surface temperature should lead to increased emission of radiation, increasing the likelihood of cooling, but that can be negated or at least reduced by various feedbacks, including increased water vapour and CO2 emitted from oceans, melting ice, etc. So curiously this turns the AGW argument somewhat on its head.

  627. Shub Niggurath Says:

    Eduardo:
    “I wrote that any particular hypothesis or theory – including CO2 as driver of the present warming- can *never* be logically proven to be right. ”

    The same statement, if made by a global warming skeptic, will never be accepted.

    “Theories or hypothesis can only be disproved when its predictions contradict the observations.”

    A theory/hypothesis is a complex sum of its many parts. If contradictions are observed in its parts – it may mean that the theory may not be a good fit on the whole.

    “So the way science work is by disproving competing hypothesis…”

    So you are saying essentially that, in order to disprove a theory, one should come up with competing hypotheses which should then be proven? Is that correct?

    Can science work by disproving existing hypotheses?

    Regards

  628. Willis Eschenbach Says:

    David S said:
    March 22, 2010 at 22:35

    I’m shocked at some of the logic here:
    “The underlying logic here, and also in the basic statements of the IPCC, is to test the hypothesis that ‘*natural* variations are warming the planet’”
    and then, if we cannot find a “natural” explanation, to assume that we have ruled out all possible alternatives and therefore it must be CO2.

    eduardo replied:

    March 23, 2010 at 00:08

    @ David,

    well, you may be shocked, but this is the scientific method.
    Actually, I usually try to see the merits in all arguments from the so called skeptics, but this time I am really disappointed. It seems to me, and I hope I am wrong, that logical basis of modern science is not understood by some here.

    I wrote that any particular hypothesis or theory – including CO2 as driver of the present warming – can *never* be logically proven to be right. Theories or hypotheses can only be disproved, when their predictions contradict the observations. So the way science works is by disproving competing hypotheses until one, or none(!), remains.

    Eduardo, perhaps you misunderstand David. What we see happening is that “scientists” are saying:

    We can’t explain it with our current models without CO2, therefore it must be CO2.

    This is absolutely the antithesis of the scientific method, and I’m shocked that you see it otherwise. You are speaking in support of the Fallacy of the Excluded Middle. Do you truly think that there are only two possibilities?????

    The scientific method is to say:

    We can’t explain it with our current models, therefore either:

    a) our models are not as complete as we think, or

    b) it’s CO2, or

    c) it is natural variation from an unknown forcing (cosmic rays, sulfur compounds from plankton, changes in the combination heliomagnetic/geomagnetic field, whatever), or

    d) the earth has a thermostat as required by the Constructal Law, or

    e) it’s some unknown factor, or

    f) some combination of the above.

    Concluding that the cause of temperature rise is CO2 simply because we can’t explain it is not scientific in the slightest. Neither is believing in CO2 because it cannot be falsified. As David said, it’s like the ancients saying

    We can’t understand lightning, so it must be Thor’s hammer striking fire, and you’ve never been able to falsify my Thor theory so we should believe it.

    Or as Shakespeare said:

    There are more causes of temperature changes in heaven and earth, Eduardo,

    Than are dreamt of in your philosophy.

    There is another, more subtle difficulty with your “last thesis standing”, which is that the CO2 hypothesis makes no testable predictions, so it cannot be falsified. What would it take for you, Eduardo, to give up your claim that CO2 inexorably must cause rising temperatures? Fifteen years without rising temperatures? We’ve already seen that …

    I look forward to your answer.

  629. GDY Says:

    Silly question – is it even appropriate to use surface temperatures as a proxy for global temperature? You know, given that the oceans are 70% of the earth’s surface and contain 1.37 billion km^3 of water and all. Shouldn’t we develop a global temperature index INCLUDING OCEAN TEMPS, then run the unit root tests, and ONLY THEN perform the appropriate statistical tests for significance of the relationship between CO2 and global temperature? (If we have such an index, I haven’t come across it in the increasingly large amount of time spent on this subject.) I think Bart may have been implying this way, way above in his ‘this is interesting in an academic way but says nothing about global warming’ comment. It makes sense to me as a layperson that the complicated relationship between ocean temperature and the atmospheric climate ‘data generating process’ could potentially obscure any signal from a GHG forcing, presuming there is one.
    Thanks again everybody!

    [Reply: Surface temps over the oceans are included in most global avg temp timeseries. But much more heat is stored in the ocean waters than in the air, and indeed, it’s best to look at the whole picture of global change (also including changes in the cryosphere (ice) and ecosystems) in hypothesizing what it’s caused by. BV]

  630. Marco Says:

    @GDY:
    You may be shocked to learn that there IS a global temperature index including ocean temperatures…

    Perhaps you may want to read this:
    http://data.giss.nasa.gov/gistemp/
    (look for LOTI).

  631. Scott Mandia Says:

    Willis said:

    Fifteen years without rising temperatures? We’ve already seen that …

    Ugh! Why do people persist in this fallacy?

    First off, it is NOT true. 20 of the warmest years on record have occurred in the past 25 years. The warmest year globally was 2005 with the years 2009, 2007, 2006, 2003, 2002, and 1998 all tied for 2nd within statistical certainty. (Hansen et al., 2010) The warmest decade has been the 2000s, and each of the past three decades has been warmer than the decade before and each set records at their end. 2010 is likely to establish a new record.

    Secondly, CO2 forcing is weak so it takes much time to rear its head above the shorter term variability. Why not look at temps since 1979 when satellites were included?

    Notice anything?

  632. Anonymous Says:

    @ Willis

    Willis wrote
    ‘Eduardo, perhaps you misunderstand David. What we see happening is that “scientists” are saying:

    We can’t explain it with our current models without CO2, therefore it must be CO2.

    This is absolutely the antithesis of the scientific method, and I’m shocked that you see it otherwise. You are speaking in support of the Fallacy of the Excluded Middle. Do you truly think that there are only two possibilities????? ‘

    Dear Willis,

    Sorry if I misunderstood David. But I think you misunderstood me this time. In my previous postings I wrote that no theory can be proven right, including AGW. So the assertion ‘it *must* be CO2’ cannot stem from me, and if it did it would contradict what I wrote. I do not think I said that, and I do not think that the IPCC said that either. The IPCC writes in terms of likelihood. If ‘scientists’ wrote that sentence, or even ‘the science is settled’, that is their problem. Every scientist knows that this can never be achieved.
    This being said, the situation now is that among the competing theories – you mentioned just one, cosmic rays – CO2 has so far the largest explanatory power. It is not without problems, however. For instance, the lack of hard, real, and testable predictions; but one has to consider that it is difficult to make experiments and it is difficult to extract signals from the noisy data sets. I welcome other theories being proposed and tested. CERN is running an experiment now to test the cosmic ray theory, and I am curious what comes out of that.

    My point is that the other ‘theories’ you mentioned (natural variations, unknown factors, and the like) are not theories. They are even less testable than CO2. To this category belongs also the ‘integrated process theory’. This is not a theory, it is just a description. To be a theory you would need a mechanism that explains why the global mean temperature could naturally be an integrated process, what the magnitude of the random natural variations is, by which mechanism these random variations can be translated into a century-long trend, etc, etc. In summary, we need to confront *all* theories, CO2 included, with the same standards of skepticism and see which one fares better.

  633. AndreasW Says:

    Eduardo

    So you mean that you did test the hypothesis of CO2 warming? May I ask what test you used and what the result was?

    GDY

    Well, if you want to take the path of the energy budget, and energy coming in and out, you are not interested in temperatures but in heat content. The question you should ask is: Is air temperature a good proxy for heat content? The answer is no.
    Another point: if you look at the earth with the south pole in the middle, it’s fair to say that for the vast majority of what you see you have no historic temperature record. That means discussing a global average temperature is meaningless. What is more interesting is discussing patterns where you have a decent record. Do you have a camel pattern or a hockey stick? The Nordic countries and the US clearly have camels. For starters you could count hockey sticks and camels and see which is dominant.

  634. JvdLaan Says:

    We can’t explain it with our current models without CO2, therefore it must be CO2.

    Eh what about Physics of CO2. Does that not counting anymore?

  635. JvdLaan Says:

    Aaarg, must read: Isn’t that counting anymore?
    starting to form dyslexia at my age ;-)

  636. HAS Says:

    Moving right along from the “he said, she said”: VS, just coming back to intercorrelation in spatial datasets, my initial interest was in the possibility that the confidence limits on the gridded temperature estimates were understated (as well as being biased in time). However, as a consequence of poking around in IPCC WGI and deciding to have a look under the bonnet of a climate model, I do wonder if these issues mightn’t spill over into the validation of the climate models.

    I’m not sure how well this particular model is regarded, but it was the first I found. “Bergen Earth system model (BCM-C): model description and regional climate-carbon cycle feedbacks assessment” (2009) J. F. Tjiputra1, K. Assmann, M. Bentsen, I. Bethke, O. H. Otter, C. Sturm, and C. Heinze. http://www.geosci-model-dev.net/3/123/2010/gmd-3-123-2010.pdf.

    (As an aside for those that are interested the description of the model gives you an insight into the complexity of these models and their assumptions).

    By way of validation they run a base model until it stabilises using pre-industrial atmospheric CO2 concentration to generate a number of parameters that they then compare with the Levitus and NCEP reanalysis of climate. They show the annually-averaged sea surface temperature, salinity, surface air temperature, precipitation and sea level pressure on a Taylor diagram to demonstrate the model performance as a function of normalized standard deviation, centered root-mean-square (RMS), and pattern correlation (see Fig. 1).

    (I should say as another aside that I want to give more to the adequacy of this validation process per se).

    Now if intercorrelated spatial data are problematic (as I think you, VS, were suggesting – and I did try to understand http://www.cemmap.ac.uk/wps/cwp107.pdf but I know my limits), then as I understand it at least some of the statistics being used to validate the model will potentially be overstated.

    Is this a correct assumption?

    If it is, and the bias is in some sense proportional to the intercorrelation between adjacent points, it did strike me that the quality of fit seems to be inversely proportional to the viscosity of the medium through which the measured phenomenon is passing, i.e. the gradient might be expected to be lower in those measures that showed better fit. If I’m right about the statistical theory, then this is a testable hypothesis around validation.

    Probably making a fool of myself in public, but it’s always worth asking.

  637. HAS Says:

    that was “more thought to the adequacy of this validation process per se”

  638. AndreasW Says:

    JvdLaan

    Eh, what about the physics of CO2? You need more than CO2 physics. You need the physics of the feedback system, and that is poorly understood.

    [Reply: This is a good starting point. And this a good elaboration of the net effect of feedbacks leading to a sensitivity close to 3 deg per doubling of CO2. BV]

  639. Tim Curtin Says:

    One or two commenters here have asked whether the Beenstock & Reingewertz finding that “global temperature and solar irradiance
    are stationary in 1st differences, whereas greenhouse gas forcings (CO2, CH4 and N2O) are stationary in 2nd differences” are valid at a localised level (to get away from the successive averagings and griddings of GMT per GISStemp et al). I have done the ADF tests for January average temperatures and [CO2] at Indianapolis (picked at random from NOAA-NREL data) from 1960 to 2006, and find that the B-R statement is confirmed. What conclusions are to be drawn from this may be another matter!
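    For anyone who wants to replicate this kind of check, here is a minimal sketch using the adftest function from MATLAB’s Econometrics Toolbox. The vector co2 (holding the annual station series) and the choice of 3 lags are placeholders, not necessarily Tim’s actual setup:

    %=====================================
    % SKETCH: ADF TESTS ON LEVELS AND DIFFERENCES
    %=====================================

    %H0 of adftest: the series contains a unit root.
    %'co2' is a column vector of annual observations (placeholder name).
    [h0,p0] = adftest(co2,        'model','ARD', 'lags',3); %levels
    [h1,p1] = adftest(diff(co2),  'model','ARD', 'lags',3); %1st differences
    [h2,p2] = adftest(diff(co2,2),'model','ARD', 'lags',3); %2nd differences

    %B&R's pattern for the GHG forcings: unit root not rejected in levels
    %or 1st differences (h0=h1=0), but rejected in 2nd differences (h2=1),
    %i.e. the series is I(2). For temperature the analogous I(1) pattern
    %is h=0 in levels and h=1 in 1st differences.
    fprintf('p-values: levels %.3f, 1st diff %.3f, 2nd diff %.3f\n',p0,p1,p2);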

  640. VS Says:

    ——————-

    PLAYING THE ARIMA GAME

    ——————-

    The point that I was trying to make in my previous couple of comments, is that the probability arrived at by Zorita, Stocker and von Storch (2008) is not very informative.

    Allow me to elaborate.

    Zorita et al (2008) assumed the temperatures to be a stationary process, an assumption which, as I mentioned here, is not supported by observations.

    How should we proceed then? Well, let’s construct a very simple and naive specification by ‘listening’ to the data.

    ——————-

    SPECIFYING THE NAIVE ARIMA MODEL

    ——————-

    Well, first of all, we found here and here, that the temperature series in fact contain a unit root. The calculations of Zorita et al (2008), when applying the Whittle method, in fact independently confirm this (observed) non-stationarity.

    We will therefore model the first difference series, which is stationary (again, see test results).

    Since the ADF test equation employed three autoregressive (AR) lags in first differences (see test results), we try out that specification. We simply model the (first difference) series as:

    D(GISS_all(t)) = constant + AR1*D(GISS_all(t-1)) + AR2*D(GISS_all(t-2)) + AR3*D(GISS_all(t-3)) + error(t)

    The estimation results are given here, coef (p-value):

    ————–

    Constant: 0.006186 (0.1302)
    AR1: -0.452591 (0.0000)
    AR2: -0.383512 (0.0000)
    AR3: -0.322789 (0.0003)

    N=124
    R2=0.23

    We furthermore test the errors for normality via the Jarque-Bera test:

    JB, p-value (H0: disturbances are normal): 0.403229
    Conclusion: normality of disturbances not rejected

    ————–

    Note that the constant term is statistically insignificant (the AR terms are significant at a 1% level). Again, we let our test results guide us, and ‘reject’ the presence of a constant term in the simulation equation. (Actually, we ‘fail to reject the non-presence’, I elaborated on statistical hypothesis testing here)

    We reestimate the model, now without constant:

    D(GISS_all(t)) = AR1*D(GISS_all(t-1)) + AR2*D(GISS_all(t-2)) + AR3*D(GISS_all(t-3)) + error(t)

    The estimation results are given here, coef (p-value):

    ————–

    AR1: -0.438867 (0.0000)
    AR2: -0.368938 (0.0001)
    AR3: -0.308871 (0.0006)

    N=124
    R2=0.22

    We again test the errors for normality via the Jarque-Bera test:

    JB, p-value (H0: disturbances are normal): 0.393751
    Conclusion: normality of disturbances not rejected

    ————–

    Note: adding a fourth AR term adds nothing to the model, in the sense that the coefficient estimate of the fourth term is equal to 0.018457 with a s.e. of 0.092670, which implies a p-value of 0.8425. We therefore choose not to include a fourth AR term.

    ————–

    We then inspect the disturbances of the error term for autocorrelation, and I give you the Breusch-Godfrey test p-values, for a given set of lags (minimum, 2), which takes ‘clean’ disturbances as the H0:

    Lags (p-value):

    2 (0.208611)
    3 (0.245690)
    4 (0.376448)
    5 (0.507945)

    Conclusion: no significant autocorrelation present in disturbances

    ————–

    Finally, we take a look at the estimated standard deviation of the error term (i.e. error(t)), and we find that it is equal to: 0.096399.

    So, what we have here is a very simple and naive model that captures the ‘variance’ displayed by the GISS series pretty well.

    IMPORTANT: This is not ‘The Model’ of ‘The Temperatures’. It is a simple, test-derived specification that accommodates the observed non-stationarity, autocorrelation structure and disturbance properties of the GISS series.

    ——————-

    SIMULATIONS

    ——————-

    Now, we are going to take our naive ARIMA specification, and ‘generate’ it 100,000 times. Note that, when employing this ARIMA specification, our data do not reject normality of disturbances. Furthermore, the BG test (see above) finds no significant residual autocorrelation in the errors.

    We therefore take the liberty of modelling the error(t) variable as normally distributed white noise, with a standard deviation of 0.096399.

    NOTE: The simulated probability here can also be determined exactly by maximum likelihood, as our data generating process is fully specified. However, I’m just too lazy for that right now :) Hence, we simulate. If anybody feels inspired, please do!

    Here’s the Matlab code, with comments:

    %=====================================
    % NAIVE ARIMA TEMPERATURE SIMULATION
    %=====================================

    %Set length of error term, which also implies the number of ‘years’ we want
    %to study. I set our period equal to our estimation sample, namely
    %1881-2008 (the first three values are initialized from observations)
    d=128;

    %Set number of ‘last years’ you want to compare
    yrs=14;

    %Input estimated coefficients of ARIMA(3,1,0) process, no constant
    a1=-0.438866631230771;
    a2=-0.368937963283039;
    a3=-0.308870699290478;

    %Set number of iterations for simulation
    B=100000;

    %Define vector to store simulation results
    results=zeros(B,1);

    %Initiate simulation
    for z=1:B

    %We generate a vector of normal disturbances, standard deviation set to
    %estimated value (i.e. sd(e)=0.096399)

    e=randn(d,1)*(0.096399);

    %We clear our ‘first difference’ vector, and enter the first three
    %disturbances as starting values
    x=zeros(d,1);
    x(1)=e(1);
    x(2)=e(2);
    x(3)=e(3);

    %Here we input the first three observed values of the GISS-temp data
    %in our level series, y
    y=zeros(d,1);
    y(1)=-0.2;
    y(2)=-0.22;
    y(3)=-0.24;

    %We generate the first difference series, x
    for i=4:d;
    x(i)=a1*x(i-1)+a2*x(i-2)+a3*x(i-3)+e(i);
    end

    %Here we generate the level series, y, from the first differences, x
    for k=4:d
    y(k)=y(k-1)+x(k);
    end

    %Evaluation code! This part here evaluates the property of the
    %generated series for each iteration. In this particular case, we are
    %comparing the average temperature over the years 1881 to (2008-yrs),
    %with the average temperature over the years (2008-yrs)+1 to 2008.

    threshold=mean(y(1:(d-yrs)));
    last_yrs=mean(y((d-yrs+1):d));

    if last_yrs>threshold
    results(z)=1;
    end
    end

    %Calculate and display simulated probability
    disp(mean(results));

    %=====================================
    % END
    %=====================================

    ——————-

    RESULTS

    ——————-

    Let’s now see what our simple simulations tell us. First we run the program, as given above. We are testing, conditional on the specified (naive, non-stationary) data generating process, what the probability is of observing a higher average temperature over 1995-2008 than over 1880-1994.

    Simulated probability: 0.4967

    Not very impressive. How about if we force a 0.2 degree higher average? The code changes appropriately to:

    if last_yrs>(threshold+0.2)
    results(z)=1;
    end

    Simulated probability: 0.2521

    Again, not very impressive. Let’s now measure the observed difference in temperature means over the two sample periods. This turns out to be a whopping (statistically significant) 0.546516291. So what happens when we run the following evaluation code:

    if last_yrs>(threshold+0.546516291)
    results(z)=1;
    end

    Simulated probability: 0.0332

    Now, let’s crank it up, and see what the probability is of observing all of the highest temperature values in the last 14 years of the sample. We modify the code again:

    threshold =max(y(1:(d-yrs)));
    last_yrs=min(y((d-yrs+1):d));

    Simulated probability: 0.0020

    Now, this should get us worried, right? Not really, since we were very (very) restrictive in our ‘demands’ here (i.e. the last 14 values all had to be strictly higher than all the values before 1995). Note that the higher the number of ‘restrictions’ you impose, the lower the estimated probability.

    Take the simulation code and, instead of ‘testing’, just save the value of the last observation (representing temperatures in 2008). This will generate a 100,000-observation-long vector, which we can then use to ‘estimate’ both the expected value and the standard deviation of the final realization of our variable y. That is, simply replace the whole ‘if’ statement in the evaluation code with:

    results(z)=y(128);

    Below are results from one of the runs. Note that this is a simulation of the distribution of the final value of the temp series, conditional on our DGP:

    Mean: -0.2412
    Std: 0.5189

    Using these values, we can calculate the 95% confidence interval for the final anomaly value in 2008, while starting with the 1881-1883 observations, and assuming an ARIMA(3,1,0) process: -0.2412 +/- 1.96*0.5189.

    This yields the following 95% confidence interval: (-1.258244, 0.775844)
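    For completeness, the same interval can be computed directly from the stored simulation vector (assuming results now holds one simulated 2008 anomaly per iteration, as described above):

    %'results' holds 100,000 simulated 2008 anomalies
    ci = mean(results) + [-1.96 1.96]*std(results);
    fprintf('95%% CI for the 2008 anomaly: (%.4f, %.4f)\n',ci(1),ci(2));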

    What is the last observed value in the GISS data? It’s equal to 0.43, which makes it an obedient inhabitant of our 95% confidence interval. In other words, if we just listen to the data (instead of making scenarios based on ‘theory’), our simulation results tell us that observing a temperature ‘anomaly’ of 0.43 in 2008 is not that exciting.

    NB. Note that here I disregard the whole discussion about the reliability of the recent temperature record. If these values are inflated, as some allege, the ‘testing difference’ would be significantly lower than 0.55, and the accompanying simulated probability much higher.

    ——————-

    CONCLUSIONS AND PURPOSE OF EXPOSITION

    ——————-

    First off, for the record, this was a simple exposition, not an academic article, so I hope we won’t engage in trivial nitpicking and fail to see the forest for the trees (e.g. we can debate endlessly about how to handle the 3 starting values). Also, I welcome all who have a better idea on how to do this, to do it and post it here. I’m very eager to see your results.

    Probabilities of events happening are always conditional on a certain data generating process (DGP). For these probabilities to have any empirical relevance, the assumptions governing the DGP must be rigorously tested. This is the main difference between my results here and those of Zorita et al (2008). While they disregard the implications of test results when constructing their simulations (sorry, but that’s what you guys de facto do), my simple naive specification adheres to them. In other words, I picked a simple specification which is ‘at peace’ with the observations (Please don’t confuse this with this being the specification! …seriously, it will launch an army of ‘strawmen’ from the usual suspects).

    Do note that without rigorous formal testing of DGP assumptions, any simulation result is simply an extrapolated opinion.

    Now, I hope as many people as possible will copy this Matlab code and play around with it. If you spot any errors, do let me know, I have to admit I wrote it down rather quickly :) Also, try simulating a couple of ARIMA(3,1,0) series, and plot the results. This will help you grasp the concept of an integrated (in this case I(1)) series and you will see why it has absolutely nothing to do with how the series ‘increases’ in terms of ‘polynomial order’. After a while you will (hopefully :) also notice that the generated series indeed resemble the ‘variance structure’ of the annual global mean temperature record.
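    To make that suggestion concrete, here is a minimal sketch, reusing the coefficient estimates from above (the plotting details and the choice of five paths are of course arbitrary):

    %=====================================
    % SKETCH: SIMULATE AND PLOT ARIMA(3,1,0) PATHS
    %=====================================

    a  = [-0.438867 -0.368938 -0.308871]; %AR coefficients (1st differences)
    sd = 0.096399;                        %estimated s.d. of the error term
    d  = 128;                             %series length, as in the simulation

    figure; hold on;
    for z=1:5
        e = randn(d,1)*sd;                %white-noise disturbances
        x = zeros(d,1); x(1:3) = e(1:3);  %first differences
        for i=4:d
            x(i) = a(1)*x(i-1) + a(2)*x(i-2) + a(3)*x(i-3) + e(i);
        end
        plot(cumsum(x));                  %integrate once to get the I(1) level
    end
    xlabel('year index'); ylabel('simulated anomaly');

    Five runs are enough to see that some paths drift up, some down, and some just wander – that is the unit root at work, not any deterministic ‘polynomial’ growth.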

    Finally, I hope this little exposition will induce a good deal of skepticism towards any (ludicrous) ‘probability’ statement such as:

    “The panel concluded that it was at least 90% certain that human emissions of greenhouse gases rather than natural variations are warming the planet’s surface.” Source: BBC News

    As a side note, I’m really curious to learn the identity of the individual who came up with this particular insult to science. In addition, if somebody could point me to the method used to derive this ‘probability’, even better!

    Cheers, VS

    PS. Eduardo, the reason I’m doing ‘this’ is because your hypothetical probabilities are cited a bit too often as ‘evidence’ of ‘unprecedented’ warming. If you don’t believe me, take a look around on the net (even in this thread). I simply strongly disagree with the idea that observations imply this particular probability. This post was a demonstration of a part of my arguments. I sincerely hope you take no offense.

    PPS. The careful reader will have noticed that my ARIMA estimation results in fact REJECT the random walk hypothesis (for the GISS series). For the GISS series to display the random walk property, the hypothesis AR1=AR2=AR3=0 must not be rejected. I calculate the appropriate Wald statistic for this test, and get the F-statistic, 11.48393, which corresponds to a p-value smaller than 0.0001. We can therefore safely reject the H0 that the GISS series follows a random walk. Note that Alex (answer to Pat Cassen) engaged in a similar exercise much earlier, here. His conclusions were the same: GISS temp is not a random walk.
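    For those who want to reproduce that Wald test, a minimal sketch via restricted vs. unrestricted regression on the first differences (the variable name dgiss for the first-difference series is mine; fcdf requires the Statistics Toolbox):

    %H0: AR1=AR2=AR3=0, i.e. a pure random walk
    %'dgiss' is the column vector of first differences (placeholder name)
    y  = dgiss(4:end);
    X  = [dgiss(3:end-1) dgiss(2:end-2) dgiss(1:end-3)]; %three AR lags
    b  = X\y;                          %unrestricted OLS, no constant
    rss_u = sum((y-X*b).^2);           %unrestricted residual sum of squares
    rss_r = sum(y.^2);                 %restricted: all AR coefficients zero
    q  = 3;                            %number of restrictions
    df = length(y)-3;                  %residual degrees of freedom
    F  = ((rss_r-rss_u)/q)/(rss_u/df);
    p  = 1-fcdf(F,q,df);
    fprintf('F = %.4f, p = %.4g\n',F,p);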

    PPPS. Before somebody brings it up. I also estimated the ARIMA specification for 1880-1994, these are the results:

    AR1: -0.439312 (sig at 1%)
    AR2: -0.368242 (sig at 1%)
    AR3: -0.332911 (sig at 1%)

    Furthermore, the s.e. of the regression (i.e. the estimated standard deviation of error(t)) is equal to 0.096167. So estimating the model without the last 14 years, and using those coefficients, doesn’t significantly change our simulation inputs (if anything, these results broaden our confidence interval for the anomaly in 2008).

  641. John Whitman Says:

    A Draft Summary Attempt – Rev 0

    So, I am trying to get my head around where the dialogue stands now, almost a week after whbabcock summarized the issues, and almost 20 days since the first VS comment.

    whbabcock Says (March 17, 2010 at 16:36): “The issues being addressed in this thread relate to a single question, ‘Does available real world data support the hypothesis that increased concentrations of atmospheric greenhouse gases increase global temperature permanently?’”

    whbabcock also says in the same comment:
    “What does all this mean? It could mean that the theory is incorrect. Or, it could mean that the data are not ‘accurate’ enough to exhibit the ‘theoretical relationship.’ It certainly ‘raises a red flag’ as VS has noted several times. And, it does mean that one can’t simply point to highly correlated time series data showing rising CO2 concentrations and rising temperatures and claim the data support the theory.”

    It appears to me that VS’s contention holds: the Beenstock and Reingewertz findings remain, at a minimum, a significant ‘red flag’ for AGW theory.

    It appears, to me anyway, that independent analysis by David Stockwell shows the red flag still waving.

    As to the explanation and the basis/restrictions of the statistical processes used by Beenstock and Reingewertz, there probably (no pun intended) needs to be significantly more dialogue, of both an educational and a critical nature.

    There seems to be insufficient explanation of the physical processes of the climate that could account for ‘Beenstock and Reingewertz’. That should stimulate more research into the physical climate processes. Good thing there.

    Some good focus was placed on the IPCC’s scientific method as applied to hypothesis testing of natural variation, as it relates to justifying the IPCC’s support of CO2 as the cause of AGW. Much more dialogue on this can be expected.

    I would appreciate more detailed summaries than mine.

    Bart, thanks again for your wonderful venue.

    John

  642. AndreasW Says:

    Tim

    Now we are talking. That’s the way to do it: study the parameters simultaneously. Now throw in cloud cover data, land use change and socioeconomic factors and see what you get.

  643. VS Says:

    Nice one, Tim!

    PS W.r.t. my previous post: For the hardcore skeptics, I also estimated the model for the period 1880-1964 (remember, 1964 was the alleged ‘structural break’, also identified via the Zivot-Andrews testing procedure.. you know, when everything changed!), and these are the results:

    AR1: -0.344765 (sig at 1%)
    AR2: -0.332371 (sig at 1%)
    AR3: -0.403615 (sig at 1%)
    Std: 0.088824

    Apart from the expected changes (as a result of using 3/4 of our series), again we see no radical difference in our estimates.

  644. Gary M Says:

    @ Anonymous/Eduardo

    “It is not without problems, however; for instance, the lack of hard, real, and testable predictions. But one has to consider that it is difficult to make experiments and difficult to extract signals from noisy data sets.”

    As I understand it, this is precisely why econometrics is the correct tool for statistical analysis.

    “This being said, the situation now is that among the competing theories … CO2 has so far the largest explanatory power.”

    However, according to the analysis by B&R, and by VS in this thread, the AGW hypothesis has no explanatory power statistically speaking – as I understand it, other statistical analyses (e.g. OLS) showing apparent correlation are spurious?

  645. A C Osborn Says:

    Re
    Tim Curtin Says:
    March 23, 2010 at 13:55

    One or two commenters here have asked whether the Beenstock & Reingewertz finding that “global temperature and solar irradiance
    are stationary in 1st differences, whereas greenhouse gas forcings (CO2, CH4 and N2O) are stationary in 2nd differences” are valid at a localised level (to get away from the successive averagings and griddings of GMT per GISStemp et al). I have done the ADF tests for January average temperatures and [CO2] at Indianapolis (picked at random from NOAA-NREL data) from 1960 to 2006, and find that the B-R statement is confirmed.

    Tim, thank you for answering my question.

  646. AndreasW Says:

    VS

    Would be interesting to see what happens if you throw Michaels and McKitrick’s (2007) paper into the unit root grinder.

  647. eduardo Says:

    @ VS,

    Dear VS,
    I take no offense, and I am learning from the technical implementation of your tests. I just think that your calculations are not explanatory. There is an error in logic.

    Basically what you have done is this:
    -design a statistical model that fits the observed temperature
    -confirm that your statistical model describes the observed temperature, e.g. through the probability of record years.

    What have you learned about the functioning of the system? Not much, I think.

    A real theory must be constructed so that you can logically convince those who don’t believe it.

    What I think you should have done is the following:
    - design a statistical model that describes the *natural* variations of temperature. This means that you can convince everyone (even me) that you don’t have the anthropogenic contribution in your model. This can be done by fitting the model in a previous period, before the putative anthropogenic influence kicked in.
    - then confirm that the observed 20th century variations can be described by this model as well.

    I think this is a pretty clear logic. Perhaps you can do it, and then I will congratulate you because that would be real progress.

    For clarification, in Zorita et al we did not assume that the observed temperature is stationary, but that the *natural* variations of temperature are stationary. I have explained this already several times (although you keep repeating the wrong assertion), so I will not repeat it again. The interested reader will be able to discriminate between my explanation and yours.

  648. eduardo Says:

    @ Andreas W

    Dear Andreas,

    Usually a theory is tested with experiments. In this case no experiments are possible, so the tests that are used for ‘anthropogenic global warming’ are based on climate simulations of the 20th century with climate models, which embody the physical processes of the theory.

  649. eduardo Says:

    @ David Stockwell

    I was not discussing B&R, but VS; perhaps that is another thread of discussion.

    I really welcome all these statistical analyses as an orientation to what the underlying physics could be. But I would suggest doing it carefully. For instance, it is very well known that the radiative forcing of CO2 is proportional to the logarithm of the concentration and not to the concentration itself. So what is the rationale for using the concentration of CO2 as a regressor for temperature?
    Further, apart from CO2 there are other greenhouse gases, for instance methane, which contributes about 1/3 of the total GHG forcing, and whose associated radiative forcing is proportional to the square root of its concentration.

    [Reply: That is a similar point that I have been trying to make a couple of times: Use the estimated net forcings as a regressor, and account for effects of internal variability as eg caused by ENSO. Buried deep in this thread is a comment from “MP” going that route, which is worth considering. Or using GCM output. Picking only one forcing (albeit the biggest one) is incomplete, as you correctly point out. BV]
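    To make the forcing point concrete, a minimal sketch using the simplified expressions of Myhre et al. (1998) (the concentration values below are purely illustrative, and the small CH4–N2O overlap term is omitted):

    %CO2 forcing is logarithmic in concentration; CH4 forcing goes
    %roughly with the square root of concentration (Myhre et al., 1998)
    c0 = 280;  c = 385;   %CO2 in ppm (illustrative pre-industrial vs. recent)
    m0 = 700;  m = 1775;  %CH4 in ppb (illustrative values)

    dF_co2 = 5.35*log(c/c0);            %W/m^2
    dF_ch4 = 0.036*(sqrt(m)-sqrt(m0));  %W/m^2, overlap term with N2O omitted

    fprintf('CO2: %.2f W/m^2, CH4: %.2f W/m^2\n',dF_co2,dF_ch4);

    The implication for the regressions discussed here is that the natural regressor is log(CO2), or better still the total net forcing, rather than the raw concentration.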

  650. VS Says:

    Hi Eduardo,

    Ok, fair enough, I see your point (on the same-sample issue). Let me try again then.

    Now, let’s say one argues that anthropogenic forcings after 1950 in part caused the warming that we then observe as an anomaly in 2008. In that case the anomaly value in 2008 ought to be deviant, considering the (autoco-)variance structure we observe before, say, 1950.

    These are the coefficients that describe the process (same methodology as above) over the period 1880-1950.

    a1=-0.292473426894202;
    a2=-0.361941227657014;
    a3=-0.240667782178227;

    Std=0.084845
    JB=0.397678

    We run the simulation in order to establish the confidence interval, just like in my previous post.

    (-1.2370, 0.7544) of which 0.42 is still a proper element.

    How about we estimate the coefficients on 1880-1935? Surely the forcings then couldn’t have ‘pushed’ around the process that much back then?

    a1=-0.368527004574269;
    a2=-0.392924333253741;
    a3=-0.342340792689471;

    Std=0.085588
    JB=0.684530

    Using those coefficients, we run the simulations, yet again, and we get:

    (-1.1496, 0.6688) of which 0.42 is still an element.

    So, we estimated the model on early data, and projected (using those estimates) towards the future. Again, even if we observe temperatures only betwen 1880 and 1935, and take the structure of the observations, and project them to 2008, there’s (still) nothing exciting going on.

    In plain terms: the temperature anomaly observed in 2008, is perfectly in line with the variance (structure) displayed by temperatures 1880-1935 (or 1880-1950, for that matter).

    Note, also, that the simulated confidence intervals don’t change significantly if you take an earlier (smaller) sample to derive your coefficient estimates (!).

    ***

    Coming from my field, where stationarity is a non-trivial assumption, I really feel you guys are going over it with too many words, and too few tests/proofs.

    So since we’ve been at it in this thread for some three weeks already :), could you please elaborate on how climate scientists ‘deal’ with non-stationarity of temperature data?

    I also have another question (if you can answer it): How many of your colleagues are aware of the fact that these series can only be cointegrated?

    Cheers, VS

    PS. AndreasW: with the references contained in this thread, you could grind them yourself ;)

  651. GDY Says:

    VS – love your rigor; the education continues. My broader point – my suspicion – is that the surface temperature record OVERSTATES the variance of a true ‘Global Temperature’ index. The problems with the construction of that index are stated above by others. So, if a trend exists in ‘global temperature’, using any variance statistic of the surface temperature record will obscure (reject) the trend for longer, right?? Please, statisticians and scientists, I would love to hear your thoughts on my undeveloped suspicion…
    maybe the data behind this study would be relevant?
    http://www.agu.org/pubs/crossref/2009/2009JD012105.shtml

    also
    http://www.agu.org/pubs/crossref/2009/2008JC005237.shtml

    Marco – GISS LOTI includes only Ocean Surface Temps, as far as I can tell. Not the temperature at any depth (recent studies have shown continued warming at greater depths).

    The bottom line is, given all the noise in the construction of proxies, combined with the underlying natural variability in climate, it may well be impossible on 30 years of data (the postulated trend time) to conclude statistically ONE WAY OR THE OTHER!
    Given that uncertainty, now what? It certainly cannot be “end of conversation”, right?
    To me, the next question is: what do the physics say??

    Again, I am just an interested layperson – if I am mistaken, please go easy on me…

  652. AndreasW Says:

    Eduardo

    Now wait a minute! You said yourself that a hypothesis can’t be proven, only disproven. So you have a bunch of hypotheses about the warming, you test them one by one, and if they fail the test you rule them out. Of course you can’t do experiments. That’s why you do statistical testing on the data.
    So you have temperature data and you have CO2 data and find a correlation. Now you do the statistical testing to see if the correlation is spurious or not.
    So, once more: did you do any statistical testing of the CO2/temperature correlation?

    About the natural variability: I actually do have a “natural” explanation for the recent warming. Atmospheric oscillation changes! And here comes the funny part: my source is the latest IPCC report. You surely won’t find it in the Summary for Policymakers. See if you can find it.

  653. Willis Eschenbach Says:

    Scott Mandia Says:
    March 23, 2010 at 11:03

    Willis said:

    Fifteen years without rising temperatures? We’ve already seen that …

    Ugh! Why do people persist in this fallacy?

    First off, it is NOT true. 20 of the warmest years on record have occurred in the past 25 years.

    Why would that be relevant? Among other things, that is also true for every single year from 1948 to 1963 (GISS data). See here, Update 13.

    But more to the point, people persist in what you deem (without a scrap of evidence) a “fallacy” because the lack of post-1995 warming is real. See here for the math.

    When even Science magazine is asking what happened to global warming, it’s probably worth thinking about …

    [Reply: Did you read this actual post? (With the understanding that the uncertainty of the trend is actually larger than estimated from OLS.) BV]

  654. VS Says:

    Hi Eduardo,

    I just saw your answer to AndreasW. I have to disagree with you (again ;)

    “Usually a theory is tested with experiments. In tis case no experiments are obviously possible so the tests that are used for the ‘anthropogenic global warming’ are based on climate simulations of the 20th century with climate models, which embody the physical process of the theory.”

    As far as I know (or perhaps, believe), theories are tested with facts, or, more precisely, with observations. I cannot imagine how you can put (calibrated!!!) theoretical extrapolations (i.e. computer simulations) above empirical relationships.

    On March 5th, I wrote to Heiko:

    “As for ’statistics’ not being able to disprove a model: that’s a novelty for me. The scientific method, as I was taught, involves testing your hypothesis with observations. Statistics is the formal method of assessing what you actually observe.

    Given the hypercomplex and utterly chaotic nature of the Earth’s climate, and the non-experimental nature of observations it generates, I don’t see any other way of verifying/testing a model trying to describe or explain it.

    Here’s an interesting read: an article published in 1944 in Econometrica, dealing with the probabilistic approach to hypothesis testing of economic theory (i.e. also a discipline attempting to model a hypercomplex chaotic system generating non-experimental observations).

    It is written by Trygve Haavelmo, who later received a Nobel Prize in Economics, in part also for this paper.

    Click to access the_probability_approach_in_econometrics.pdf

    You will note that many of the assertions made about the then-standard approach to hypothesis testing in economics are in fact applicable to present day ‘climate science’ :)”

    And to Bart and Alan, I wrote on March 8th:

    “Alright, allow me to elaborate on why statistics is relevant in this case. Let me start by stating that every, and with that I mean every, validated physical model conforms to observations. This is the basic tenet of positivist science. However, usually within the natural sciences, you can experiment and therefore have access to experimental data. The statistics you then need to use are of the high-school level (i.e. trivial), because you have access to a control group/observations (i.e. it boils down to t-testing the difference in means, for example).

    In climate science, you are dealing with non-experimental observations, namely the realization of our temperature/forcing/irradiance record. In this case, the demand that the model/hypothesis conform to the observations doesn’t simply disappear (if it is to be considered scientific). It is made quite complicated, though, because you need to use sophisticated statistical methods in order to establish your correlation.

    So correlation is, and always will be, a necessary condition for validation (i.e. establishing causality) within the natural sciences. If you don’t agree with me here, I kindly ask you to point me to only one widely used physical model, for which no correlation using data, be that experimental or non-experimental, has been established. Do take care to understand the word ‘correlation’ in the most general manner.

    Now, I’ve tried to elaborate this need in my previous posts, but I fear that we might be methodologically too far apart for this to be clear, so allow me to try to turn the question around.

    Let’s say that you have just developed a new hypothesis on the workings of our atmosphere. You read up on the fundamental results regarding all the greenhouse gasses, and the effects of solar irradiance on them. You also took great care to incorporate the role of oceans and ice-sheets etc into your hypothesis (etc. etc. i.e. you did a good job).

    Put shortly, you developed a hypothesis about the workings of (or causal relations within) a very complex and chaotic system on the basis of fundamental physical results.

    Now, guys, tell me how you think this hypothesis should be validated? Surely it is not correct ‘automatically’, simply because you used fundamental results to come up with that hypothesis? There must be some checking up with observation, no?”

    I’m very curious to hear what you and Paul_K, or anybody else for that matter, think of the above.

    VS

  655. Willis Eschenbach Says:

    Anonymous Says:
    March 23, 2010 at 11:06

    @ Willis

    Willis wrote

    ‘Eduardo, perhaps you misunderstand David. What we see happening is that “scientists” are saying:

    We can’t explain it with our current models without CO2, therefore it must be CO2.

    This is absolutely the antithesis of the scientific method, and I’m shocked that you see it otherwise. You are speaking in support of the Fallacy of the Excluded Middle. Do you truly think that there are only two possibilities????? ‘

    Dear Willis,

    Sorry if I misunderstood David. But I think you misunderstood me this time. In my previous postings I wrote that no theory can be proven right, including AGW. So the assertion ‘it *must* be CO2’ cannot stem from me, and if it did it would contradict what I wrote. I do not think I said that, and I do not think that the IPCC said that either. The IPCC writes in terms of likelihood. If ‘scientists’ wrote that sentence or even ‘the science is settled’ that is their problem. Every scientist knows that this can never be achieved.

    Eduardo, thank you for your reply. You are correct that the IPCC never said that. What they do say is that if you remove CO2 from a climate model that is tuned to replicate the past when CO2 is included, it no longer replicates the past. D’oh …

    They then offer this up as evidence that CO2 is the cause of post-1950 warming. See here for an example.

    The fact that this claim is used as “evidence” in what is supposed to be a scientific publication is a sad commentary on the state of climate science. Any tuned model will perform less well if one of the forcings is removed. This shows nothing.

    This being said, the situation now is that among the competing theories – you mentioned just one, cosmic rays – CO2 has so far the largest explanatory power.

    A citation to some evidence would be useful here. Remember that computer results are not evidence … if they were, I’d be a very rich man.

    It is not without problems, however; for instance, the lack of hard, real, and testable predictions. But one has to consider that it is difficult to make experiments and difficult to extract signals from noisy data sets.

    Surely you see the contradiction between that and your previous statement. If CO2 makes no testable predictions, the explanatory power must be zero …

    I welcome other theories being proposed and tested. CERN is running an experiment right now to test the cosmic-ray theory, and I am curious what comes out of that.

    My point is that the other ‘theories’ you mentioned (natural variations, unknown factors, and the like) are not theories. They are even less testable than CO2.

    Since you have already said that the CO2 hypothesis makes no testable predictions, how can a competing theory be “less testable”?

    I also fear that you have fallen into the idea that falsification requires an alternative explanation.

    Next, given that the Constructal Law says that a flow system far from equilibrium (like the climate) has preferred states, how is the idea that the earth has a thermostat “not a theory”? If you have some alternate explanation why the earth’s temperature has stayed within ±3% for half a billion years despite meteor strikes and millennia-long volcanic eruptions and wandering continents, what is that explanation?

    Finally, you did not answer the one question I asked, which was, what would it take for you to give up your belief that CO2 is the cause of the post-1950 warming? We have seen no change in the rate of sea level rise (in fact the rate of rise has slowed lately), we have seen no anomalous warming, we have seen no change in global droughts, we have seen no change in global sea ice (not Arctic, global), we have seen no change in global precipitation, temperatures have been flat for fifteen years … so why do you believe that CO2 is causing changes in the climate? See here for details.

  656. Willis Eschenbach Says:

    AndreasW Says:
    March 23, 2010 at 13:34

    JvdLaan

    Eh what about physics of co2. You need more than co2 physics. You need the physics of feedback system and that is poorly understood.

    [Reply: This is a good starting point. And this a good elaboration of the net effect of feedbacks leading to a sensitivity close to 3 deg per doubling of CO2. BV]

    Well, I looked, and neither post said a single word about the physics of the feedback systems. In fact, the second cite said nothing about feedbacks at all, physics or otherwise. Since cloud feedbacks are widely agreed to be the elephant in the room, this is a serious omission.

    [Reply: The net effects of the feedbacks is what matters in the end, and the last ref is very relevant to that. BV]

  657. JvdLaan Says:

    My question was raised because everyone at that certain point in the discussion seemed to ignore – or at least to underestimate – that CO2 is in fact a greenhouse gas (and yes, the plants in my fish tank grow much better with a CO2 fertilizer, but that is another physical attribute of it), feedback systems or not.
    Ignoring that physical fact is a very large elephant in a small room!

  658. Pofarmer Says:

    This can be done by fitting the model in a previous period, before the putative anthropogenic influence kicked in.

    Now you get into all the problems of measurement error, biases, etc., etc. We really don’t have enough information to be modeling what’s going on NOW (which seems to me to be what VS is pointing out). I certainly don’t see that we have enough ACCURATE information to model what was going on, what, 150 years ago?

  659. Scott A. Mandia Says:

    Willis, you are obviously on a different planet than I and most everybody else.

  660. Pofarmer Says:

    My question was raised because everyone at that certain point in the discussion seemed to ignore – or at least to underestimate – that CO2 is in fact a greenhouse gas (and yes, the plants in my fish tank grow much better with a CO2 fertilizer, but that is another physical attribute of it), feedback systems or not.
    Ignoring that physical fact is a very large elephant in a small room!

    The statisticians aren’t ignoring the physics; they are trying to PROVE or DISPROVE a hypothesis based on those physics. The AGW supporters are vigorously defending their positions and positing that they really don’t need any help from the statistics crowd, thank you very much. The earth’s atmosphere is not controlled by simple physics. If it were, we wouldn’t be having any of this discussion.

    To commenter HAS.

    Way up above you made a comment that got my attention. You seemed to be theorizing that the common way station adjustments are made using “nearby” stations may not be statistically valid. Is this a hunch, or is there more to it?

    [Reply: Strawman. Of course statistics is needed. But it’s a physical system we’re dealing with, not some bunch of bare, meaningless numbers. BV]

  661. Scott A. Mandia Says:

    Here is what is happening on my planet

  662. Willis Eschenbach Says:

    Scott A. Mandia Says:
    March 23, 2010 at 20:55

    Willis, you are obviously on a different planet than I and most everybody else.

    Scott, it would help if you quoted whatever it was that I said that you disagree with … it’s hard to discuss planetary locations without some details.

  663. Ian Says:

    VS,
    I understand you to say that you tested a single year (2008) against a range (e.g., 1880-1950) – is that correct? I’m asking because I understood the argument differently. I would have thought that recent temps are anomalous because of a string of consecutive high values. E.g., how likely were the last 10 years (or whatever period would be suggested by physical explanations) given a base period of (say) 1880-1950?

  664. Willis Eschenbach Says:

    Willis Eschenbach Says:
    March 23, 2010 at 20:06
    AndreasW Says:
    March 23, 2010 at 13:34

    Well, I looked, and neither post said a single word about the physics of the feedback systems. In fact, the second cite said nothing about feedbacks at all, physics or otherwise. Since cloud feedbacks are widely agreed to be the elephant in the room, this is a serious omission.

    [Reply: The net effects of the feedbacks is what matters in the end, and the last ref is very relevant to that. BV]

    BV, thanks for the reply. He asked for some references about the physics of the putative feedbacks. You gave him nothing. Now you say that the “net effects of the feedbacks” is all that counts … but since we have little direct evidence of the “net effects of the feedbacks”, why have physics suddenly become unimportant?

    In fact, both supporters and those dubious about the AGW hypothesis say that the lack of understanding of cloud feedbacks is a huge uncertainty. The models say that cloud feedbacks are positive … which seems very doubtful, since in the tropics clouds form in response to rising temperatures, cutting the energy received by the earth.

    In any case, the range of feedbacks in the models (-1 to 1.4 Wm-2 °C-1) indicates that the physics are not well understood.

    The modeled claims of a positive cloud feedback are also curious given that the ERBE data shows that globally clouds reflect 48.4 W/m2 (cooling), and contribute a longwave warming of 31.1 W/m2. This gives a global net cooling effect for clouds of 17.3 W/m2 … and since clouds increase with rising temperatures, this sure looks like a negative feedback to me. More heat = more evaporation = more clouds.

    Cloud negative feedback also agrees with our experience. Clouds warm us during the night and cool us during the day, but the cooling is much larger than the warming.

    In addition, the cloud effects include thunderstorms, which cool the earth in a host of ways. These effects are not included in the climate models, since they are way sub-grid.

    At the end of the day, scientists don’t even agree on the net sign of cloud/thunderstorm forcing, much less the amount.

  665. Willis Eschenbach Says:

    Ooops, looks like my link regarding tropical clouds didn’t make it. Here it is again.

  666. Willis Eschenbach Says:

    ERRATA:

    Above it should say

    In any case, the range of feedbacks in the models (-0.1 to 1.4 Wm-2 °C-1) indicates that the physics are not well understood.

  667. Scott A. Mandia Says:

    Willis:

    Finally, you did not answer the one question I asked, which was, what would it take for you to give up your belief that CO2 is the cause of the post-1950 warming? We have seen no change in the rate of sea level rise (in fact the rate of rise has slowed lately), we have seen no anomalous warming, we have seen no change in global droughts, we have seen no change in global sea ice (not Arctic, global), we have seen no change in global precipitation, temperatures have been flat for fifteen years … so why do you believe that CO2 is causing changes in the climate? See here for details.

    Not on my Earth.

  668. Willis Eschenbach Says:

    Scott A. Mandia Says:
    March 23, 2010 at 22:10

    Willis:

    Finally, you did not answer the one question I asked, which was, what would it take for you to give up your belief that CO2 is the cause of the post-1950 warming? We have seen no change in the rate of sea level rise (in fact the rate of rise has slowed lately), we have seen no anomalous warming, we have seen no change in global droughts, we have seen no change in global sea ice (not Arctic, global), we have seen no change in global precipitation, temperatures have been flat for fifteen years … so why do you believe that CO2 is causing changes in the climate? See here for details.

    Not on my Earth.

    Again, Scott, details, cites, quotes, and scientific arguments are important. I understand that you disagree with me. But until you quote some of my words regarding my exact claims and tell me precisely where I’m wrong, with some scientific citations or other evidence to support your claims, you are not adding anything to the discussion.

    Science moves forward by falsification … but “Not on my earth” is not a falsification.

  669. Suibhne Says:

    Was Bob Dylan a time traveller? From John Wesley Harding Album

    There was a wicked messenger
    From Eli he did come,
    With a mind that multiplied the smallest matter.
    When questioned who had sent for him,
    He answered with his thumb,………

    And the people that confronted him were many.
    And he was told but these few words,
    Which opened up his heart,
    “If ye cannot bring good news, then don’t bring any.”

  670. AndreasW Says:

    VS

    Who, me do the grinding? Sorry, that is out of my statistical league, so to speak.

    Are you familiar with Michaels and McKitrick (2007)? They found a strong correlation between surface temperatures and socioeconomic factors. Then they tested against lower-troposphere temperatures to rule out the correlation being a mere coincidence, and they found that correlation much weaker, although not zero.
    Would be very interested in your view of this study.

    Bart

    Great thread, this one. Actually one of the best I’ve seen in years in the climate blog world. It’s about the core issue of the debate and still pretty decent.

  671. mikep Says:

    Andreas, in the M&M study the dependent variable is the trend in the measured surface temperature over a given time period for suitably defined areas of land surface (i.e. one trend per area). This is not a time series, so the cointegration analysis is not appropriate. But what is very interesting is that the International Journal of Climatology published a supposed refutation by Gavin Schmidt in 2009 which appears to contain two fairly straightforward errors.

    First Schmidt claims that M&M ought to have allowed for spatial autocorrelation, but appears to have confused autocorrelation in the dependent variable, which is not per se a problem, with autocorrelation in the residuals of the regression, which would be a problem if it existed in M&M. But the M&M equation does not have autocorrelated errors.

    The second error is that Schmidt’s coefficients, when he estimates a supposedly better version of M&M, fall outside the confidence intervals he claimed they needed to fall within to validate his argument. So his equation undermines his own argument.

    Even more interestingly, one of the referees of Schmidt’s paper was Phil Jones; his response is in the Climategate emails, and he missed both these errors in a very cursory review.
    And to top everything, the IJOC has now turned down McKitrick and his co-author’s response, which I find devastating and unanswerable, on what look like extraordinarily flimsy grounds.

    See details under new items at http://sites.google.com/site/rossmckitrick/

  672. dougie Says:

    @Tim Curtin & A C Osborn

    re-

    “Has anyone looked at a Raw Temperature Series to see if it exhibits the same Statistical characteristics?
    A C Osborn Says:

    and again on March 20, 2010 at 15:59
    Has anyone tried the [unit root tests] on an unadulterated Temperature series from one thermometer?
    [Reply: “One thermometer” cannot measure global avg temp. Keep baseless accusations (adulterated; massaged to death) at the door before entering. Thanks! BV]
    But AC Osborn raises an interesting issue. Why could not the IPCC offer in its AR5 the climate statistics from each one of all (c1200) stations in current GISS, HadleyCRUT, that have at least 50 years of unbroken records to date, classified by their respective max & min temperature and rainfall etc for each of the last 50 years, with the SSR and [CO2] at each. Let us do the trending, averaging, unit rooting, and gridding.”

    not sure if this helps, but have you had a look at –

    MAX – MIN vs MEAN _ Well it is a sort of answer

  673. Shub Niggurath Says:

    I think Eduardo sort of ‘dropped the ball’ on how a scientific hypothesis is proven/disproven.

  674. Scott Mandia Says:

    Willis,

    Everything in that blurb of yours that I quoted is wrong. What do you not understand?

    Planet Earth

    Why am I engaging you?

    It is comments such as yours that lend credence to a moderation policy. You are denying the obvious warming of the planet. Denying AGW is one thing, but denying the obvious warming, please!

  675. Willis Eschenbach Says:

    Scott Mandia Says:
    March 24, 2010 at 00:55

    Willis,

    Everything in that blurb of yours that I quoted is wrong. What do you not understand?

    I don’t understand why you think that waving your hands and saying “everything is wrong” is a scientific argument. However, at this point, I can see that you don’t do science, so I will let it go.

    Science is where you say “your claim 3 in paragraph 4 is wrong because it is based on bad data, the real data is here” with a citation.

    Not science is saying “Not on my planet.” I’m sure you can see the difference.

    For example, you say “You are denying the obvious warming of the planet.” Where did I say that, and referring to what time period? I cited Science magazine saying no warming in the last ten years, is that what you are talking about? In the link I gave above, I agree that there have been warming periods, but according to Phil Jones, they are not unusual or anomalous. Is that what you are talking about? If so, come over to that link and tell me exactly where I’m wrong. That’s how science works.

    Until you give specifics, I can’t address your points. Since you are unwilling to do that, this conversation is meaningless.

    We now return you to your regularly scheduled programming …

  676. Frank Says:

    Scott Mandia Says:
    March 24, 2010 at 00:55

    Planet Earth

    Why am I engaging you?

    Scotty,

    Please note, the first graph in your reference is a fallacious use of endpoints – if you’ll take the blinders off for a moment you’ll notice that the “slopes” of the individual warming intervals are roughly equivalent. BTW, [edit], did you happen to notice that the genesis of this thread was that the historical temperature record doesn’t have a statistically significant trend? You might want to pass that on to the folks at SUNY.

    [Reply: Isn’t there some irony in Willis et al using a much shorter time interval still to arrive at unwarranted conclusions? See also VS’ reply later on. BV]

  677. HAS Says:

    Pofarmer on March 23, 2010 at 21:01

    I’m struggling along with everyone else! I should also add that I don’t think my questions raise anything that hasn’t been dealt with before; when it comes to the statistical side I’m just trying to speed up my learning by seeking wise guidance.

    For this reason I thought I’d try and write this down in reasonably plain language to allow others to straighten me up.

    My interest here arose from thinking about how well the variances that arise from the various adjustments to the temperature measurements were being reported. I had an instinct that the confidence limits on the global estimates seemed low, and if that was the case then that might have implications for the confidence with which you could treat statements about temperature trends, etc.

    What follows is my current take on this.

    First, to the particular question you ask (adjustments to accommodate moves in stations etc.): this is not what I was referring to earlier. However, I do have questions about these processes. These adjustments often seem to be applied deterministically, rather than recognising that the estimated part of the composite series carries errors in addition to those that simply arise from the instrument, the observer and the specific environment. They are uncertain because the adjustment model is uncertain. Because typically it is the current site that is used as the basis for the series, and confidence limits expand the further you get away from the basis of the adjustment model, the early temperature record will have significantly greater uncertainty than current observations.

    I also believe that many of these adjustments were made historically and appropriate meta-data not kept, so the extent of this uncertainty remains unknown and therefore unreported.

    The issue that is being discussed here (if I have it right) is the observation that temperature and CO2 concentration time series seem to exhibit autocorrelation; that is, what you get today depends to some extent on what happened in the past. When you stop and think about this in physical terms, it is not a particularly surprising observation.

    This is a testable hypothesis and for a couple of global datasets VS has been showing that you can’t reject the hypotheses that these series are autocorrelated (and further that they are specific types of autocorrelation). I should note that we don’t know if this applies to more localised measures, although an earlier commentator had tested one site and achieved a similar result. This is empirical work that would need to be done before going further down this track.

    Now the problem with autocorrelation is that the assumptions that allow common statistical testing to take place break down. By way of a simple example, you no longer know how many independent measures you have of what’s going on. Some of what you are measuring is already contained in the other measures. This is a problem for statistical inference, because the more independent measures you have, the more confident you can be about the inferences you make.

    I was interested in this because I had been thinking about the problems of building a composite temperature measure over an area when the measures are strongly intercorrelated. A simple example is three weather stations, two in the same city and one in the countryside. The problem, when you come to work out the variability in your estimate of (say) the average temperature in a region, or even to interpolate a temperature at another point, is: do you have two independent measures or three (or something in between)?

    Add possibly autocorrelated time series to this problem and you can see that estimating the variance of a grid temperature is a complex problem. You will see my earlier reference to some papers that try to deal with this – but in what I felt was an unsatisfactory way. So my interest here was the extent to which VS’s kind of analysis had been applied to this problem – so far I have not found any obvious references.

    Moving on from this point, and forgetting about time series as such, spatial autocorrelation is a similar problem (and I see it mentioned in a comment a few back). If the temperature at one grid point is correlated with that at the surrounding grid points, this can be a problem for statistical inference.

    So it then occurred to me that if you are comparing the output from a climate model with actual measurements at grid level, there is a risk that both datasets suffer from spatial autocorrelation, and this should be tested for before using standard statistics to validate the model output. I’m still not sure whether this is a problem theoretically (that was the question I asked in a more recent post), but I can’t immediately identify literature that shows this being tested for and rejected; so maybe it’s not a problem theoretically, or maybe it’s well established that it can be ignored in practice.
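
    For concreteness, the sort of check I have in mind would look something like this naive Moran’s I calculation (the gridded field and the neighbour weights are placeholders; a real test needs the actual grid geometry and a proper null distribution):

    %Naive Moran's I for spatial autocorrelation on a gridded field
    %(placeholder data and rook-neighbour weights, purely illustrative)
    g = randn(10,10);                  %stand-in for gridded anomalies (or model minus obs)
    [r,c] = ndgrid(1:10,1:10);         %grid coordinates, matching g(:) ordering
    z = g(:)-mean(g(:));
    n = numel(z);
    W = zeros(n);
    for i = 1:n
        for j = 1:n
            W(i,j) = (abs(r(i)-r(j))+abs(c(i)-c(j)) == 1);  %1 for adjacent cells
        end
    end
    I = (n/sum(W(:)))*(z'*W*z)/(z'*z); %Moran's I
    disp(I)                            %approx -1/(n-1) when there is no spatial autocorrelation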

    Any wise thoughts?

  678. VS Says:

    Hi HAS

    “This is a testable hypothesis and for a couple of global datasets VS has been showing that you can’t reject the hypothesis that these series are autocorrelated”

    In broad lines, you are correct; just a technical/definitional issue. The problem is not ‘autocorrelation’ but rather that the series are integrated, i.e. test results (all posted in this thread) show they contain a unit root. Note that while most tests indeed fail to reject the presence of a unit root, the KPSS test in fact rejects stationarity (i.e. the absence of a unit root). Note also that the calculations performed by Zorita et al (2008) also imply this non-stationarity (see earlier posts).

    In its turn, the presence of a unit root (i.e. non-stationarity) implies that any OLS specification assuming trend-stationarity (yes, every deterministic trend estimate!) is in fact misspecified.

    I think this is important!
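
    For intuition, here’s a quick Monte Carlo sketch of what that misspecification does (my illustration, separate from the formal test results in this thread): generate driftless random walks, fit an OLS trend on time, and count how often the conventional t-test ‘finds’ a significant trend.

    %Sketch: OLS trend tests applied to pure (driftless) random walks
    B = 2000; n = 128; hits = 0;
    for z = 1:B
        y = cumsum(randn(n,1));        %I(1) series with NO deterministic trend
        X = [ones(n,1) (1:n)'];        %constant and time
        b = X\y;                       %OLS estimates
        e = y-X*b;
        V = (e'*e)/(n-2)*inv(X'*X);    %conventional (iid-error) covariance
        t = b(2)/sqrt(V(2,2));         %t-statistic of the 'trend'
        hits = hits+(abs(t) > 1.96);   %nominal 5% two-sided test
    end
    disp(hits/B)                       %far above 0.05: spurious 'significant' trends

    The rejection frequency comes out at well over one half instead of the nominal 5%: that is what ‘misspecified’ means here.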

    Alex wrote here on the (broad) implications of a unit root for theory (see his cases (1), (2) and (3)).

    I think Bart still owes him a reply ;)

    ——————–

    Hi Eduardo,

    Let me just summarize where you still owe me a reply ;) First off, I corrected my simulation procedure to account for your comments; the new results are presented here. We find that the variance-covariance structure displayed by the GISS data over the period 1880-1935 is completely in line with the temperature anomaly observed in 2008. Nothing exceptional going on.

    I posed two questions in that particular post, and I’m very eager to hear your answers (if you can give them :).

    Finally, I raised some methodological issues regarding the relevance of testing scientific theories on observations, here. Again, eager to hear your opinion.

    ——————–

    Hi Willis Eschenbach and Scott Mandia,

    While I enjoy watching you guys go at each other ;) I do have to point out the irony of debating which OLS temperature trend estimate is correct, in a thread where we established (via formal testing) that any OLS trend specification (which implicitly assumes trend-stationarity) is in fact misspecified.

    Hehe ;)

    ——————–

    Hi Shub Niggurath,

    Indeed, I think this is a very important issue. I’m still waiting for a proper reply (3 weeks on :).

    ——————–

    Hi AndreasW,

    We’ll get to it, no worries. In the meantime, it would be fun if people would start collecting papers where temperatures are blindly regressed on CO2. All of these papers are invalid!

    I gave the references to prove this claim here.

    I’ll list them again, for your convenience:

    – Davidson and MacKinnon (2004), pp. 609-610, Regressors with a Unit Root
    – Hamilton (1996), pp. 557-561, Spurious Regressions (very formal treatment)
    – Greene (2003), pp. 632-636, Random Walks, Trends, and Spurious Regressions
    – Verbeek (2004), p. 313, Models with Non-stationary Variables – Spurious Regressions (undergrad treatment)
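
    For a hands-on version of the spurious regression those chapters describe, here is the same sort of sketch as before, now with one random walk regressed on another, independent one (again my illustration, not a formal result):

    %Sketch: regressing one random walk on another, INDEPENDENT one
    B = 2000; n = 128; hits = 0;
    for z = 1:B
        y = cumsum(randn(n,1));        %an integrated series
        x = cumsum(randn(n,1));        %a second, independent integrated series
        X = [ones(n,1) x];
        b = X\y;
        e = y-X*b;
        V = (e'*e)/(n-2)*inv(X'*X);    %conventional (iid-error) covariance
        t = b(2)/sqrt(V(2,2));
        hits = hits+(abs(t) > 1.96);   %nominal 5% test on the slope
    end
    disp(hits/B)                       %typically well above one half: 'significant'
                                       %links between completely unrelated series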

  679. Alan Wilkinson Says:

    “It would be fun if people would start collecting papers where temperatures are blindly regressed on CO2. All of these papers are invalid!”

    VS, what about those regressing temperatures against time?

  680. Willis Eschenbach Says:

    VS Says:
    March 24, 2010 at 08:06


    Hi Willis Eschenbach and Scott Mandia,

    While I enjoy watching you guys go at each other ;) I do have to point out the irony of debating which OLS temperature trend estimate is correct, in a thread where we established (via formal testing) that any OLS trend specification (which implicitly assumes trend-stationarity) is in fact misspecified.

    Hehe ;)

    Tru dat …

  681. VS Says:

    Hi Alan Wilkinson,

    See my response to Willis. Regressing temperatures on ‘time’ is trend estimation. This implicitly assumes trend-stationarity, which we have formally rejected.

  682. Kweenie Says:

    “In the meantime, it would be fun if people would start collecting papers where temperatures are blindly regressed on CO2. All of these papers are invalid!”

    Is this what they used to call a “Tipping Point”?

  683. Paul_K Says:

    VS: In your post of March 23 you asked (I think) what I thought of your comments regarding the need to apply statistically rigorous tests to validate any conceptual model against observations (my paraphrase). I support you 100% in this and believe I made the identical point in my post of March 19th in the second thread that Bart started: “Is the global temperature series just a random walk?”

    My belief however remains that one cannot justify theoretically a test of the validity or correlatability of radiative effects against the average surface temperature series. The fact that one or several parties on either side of the debate may have done so does not, in my view, offer carte blanche to the opposing side to repeat the same theoretical error but apply better statistical methodology in so doing (the “tu quoque fallacy”).

    I would go so far as to say that perhaps the ONLY occasion when one can legitimately compare observations and predictions in areal-weighted average temperature series is when making comparisons between GCM results and observations, since the averages under the null hypothesis are both derived in the same way with the same weighting, and one might reasonably infer that the underlying structure (statistical properties) of the datasets should be similar.

    Such tests suggest that the GCMs have a low predictive skill with respect to average surface temperature (and even poorer predictive properties with respect to intermediates like regional temperature variation and precipitation). This on its own does not and cannot falsify AGW; it tells us (only) that the models are not good predictors, but not WHY they are not good predictors. Given the number of approximations for numerical robustness which need to be made between the physics formulation and model results, one cannot say whether poor prediction performance implies incomplete physics, poor parameter estimation, poor conversion of the governing equations into manageable numerical form or other numerical problems in the modelling. (John F Pittman comments on just two examples of such approximations in the Open Thread on this site, but there are many more involving scaling of sub-grid-scale processes.)

    For “offline” tests of AGW (i.e. tests other than GCM-based), I believe that we need to discard surface average temperatures altogether, and consider instead the Earth’s energy budget. However, a full explanation of this would not be appropriate for this thread.

  684. VS Says:

    ————————-

    THE PHILLIPS-PERRON TEST REVISITED

    ————————-

    Hi guys,

    Given that the simulation algorithm I posted here is very well suited to perform a Monte-Carlo analysis, I decided to do just that, on the Phillips-Perron (PP) test.

    Those who have followed this thread carefully have noticed that the PP test is in fact the only test which rejects the presence of a unit root in the GISS series. Both Alex (in his answer to Adrian Burd) and I have stated that this rejection of the presence of a unit root might have something to do with the small sample properties of the PP test, rather than with the unit root itself.

    This was an opinion, not a formal result. However, since statistics is a formal discipline, this opinion is worthless unless put to a proper test.

    That’s what I’m going to do here.

    Observe:

    ————————-

    MONTE CARLO SIMULATION

    ————————-

    Before diving into the Monte Carlo simulation, I believe it would help to elaborate on the exact function of such a procedure. Those who don’t feel comfortable with their understanding of hypothesis testing within statistics are referred to my post here on that issue. In case you are one of these individuals :) I suggest you read that post first, before continuing here.

    In my previous simulation post I justified my use of an ARIMA(3,1,0) process to describe the general structure of our temperature record (the GISS combined record).

    In the following Monte Carlo simulation we will assume that the results generated by the other tests are correct (i.e. presence of a unit root), and that the ARIMA(3,1,0) structure describes the variance-covariance properties of the series well. Note that the specification was arrived at through formal testing, and as such doesn’t simply constitute an opinion.

    Assuming the above, we will generate 50000 ARIMA(3,1,0) realizations with n=128 (note: these contain a unit root!!!). Then, for each realization, we will perform the PP test, with the significance level set at 5% and 0 lags (as 0 lags were used in the tests both we and Tamino performed), and take a look at how often, in this particular set-up, the PP test in fact rejects a true null hypothesis of a unit root. Note that this probability is in fact an estimate of the true significance level corresponding with the relevant critical value.

    If the test is unbiased, it should be somewhere around 5%, because that’s where we set the critical value. However, if the test (given this data) is biased towards rejecting the true null hypothesis too often, the resulting estimated probability ought to be higher than what we set it to (i.e. higher than 5%).

    ————————-

    SIMULATION CODE (MATLAB)

    ————————-

    Again, in the spirit of open science, I share all my code and methods with you, so that you can replicate it and convince yourselves.

    Here’s the Matlab code employed:

    %=====================================
    % NAIVE ARIMA TEMPERATURE SIMULATION
    %=====================================

    %Set length of error term, which also implies the number of 'years' we want
    %to study. I set our period equal to our estimation sample, namely
    %1880-2008
    d=128;

    %Input estimated coefficients of ARIMA(3,1,0) process, no constant
    a1=-0.438866631230771;
    a2=-0.368937963283039;
    a3=-0.308870699290478;

    %Set number of iterations for simulation
    B=50000;

    %Define vector to store simulation results
    results=zeros(B,1);

    %Initiate simulation
    for z=1:B

    %We generate a vector of normal disturbances, standard deviation set to
    %estimated value (i.e. sd(e)=0.096399)
    e=randn(d,1)*(0.096399);

    %We clear our 'first difference' vector, and enter the first three
    %disturbances as starting values
    x=zeros(d,1);
    x(1)=e(1);
    x(2)=e(2);
    x(3)=e(3);

    %Here we input the first three observed values of the GISS-temp data
    %in our level series, y
    y=zeros(d,1);
    y(1)=-0.2;
    y(2)=-0.22;
    y(3)=-0.24;

    %We generate the first difference series, x
    for i=4:d
    x(i)=a1*x(i-1)+a2*x(i-2)+a3*x(i-3)+e(i);
    end

    %Here we generate the level series, y, from the first differences
    for k=4:d
    y(k)=y(k-1)+x(k);
    end

    %We perform the Phillips-Perron test, with 0 lags, 5% significance level
    %and the alternative hypothesis (Ha) set to 'trend-stationarity';
    %pptest returns 1 when the (true) null of a unit root is rejected
    results(z)=pptest(y,'model','TS');
    end

    %Calculate and display the simulated significance level, i.e. the
    %frequency with which the true null hypothesis is rejected
    disp(mean(results))

    %=====================================
    % END
    %=====================================

    ————————-

    MONTE CARLO RESULTS AND CONCLUSION

    ————————-

    We run the simulation and we find that the PP test, given the general structure of our data and employing a 5% significance level, in fact rejects the true null hypothesis of a unit root in (drumbeat!) a whopping (estimated) 85.46% of the cases!

    Again: using a nominal significance level of 5%, with our data, we are in fact operating at a significance level of around 85%!

    Run the simulations and convince yourself! Keep in mind that our (simulated) DGP in this case has a unit root, so we are testing how often the PP test (which ‘claims’ to reject the true H0 in only 5% of the cases) will reject this true H0.

    What does this mean? Well, if we use a 5% significance level, we expect to reject the true null hypothesis of a unit root in only 5% of the cases. However, given our sample size and the structure of our data, the simulations tell us that the PP test will in fact be heavily biased towards rejecting the true null hypothesis of unit root presence.

    We have thus formally established that we can disregard the test results arrived at via the PP test, as they are heavily biased towards rejecting unit root presence when a unit root is actually present. Now add to this the diagnostics I performed earlier on the particular case of an ADF test with a trend in the Ha, using the BIC/SIC for lag length, which disqualify that particular set-up from being employed.

    Note that these were the only two cases in which the unit root hypothesis has in fact been rejected.

    We can now safely conclude that ALL RELIABLE TEST RESULTS point to the presence of a unit root in the GISS temperature series.

    VS

    PS. Hi Paul_K, thank you for your answer, I will try to get back to you ASAP. Suffering from time constraints here :)

  685. Tim Curtin Says:

    Many thanks to those, including VS, Osborn, and Dougie, who responded to my last post on the data on temperatures and [CO2] at Indianapolis. First I must make a correction: the data I used were annual, not monthly; that comes next!

    VS back on 8th March asked for info on non-CO2 forcings. I can now report on solar surface radiation or SSR (“AVGLO” in NREL-speak: average total direct and diffuse horizontal radiation, in watt-hours per day). Prima facie, and this is very much a first stab, it seems that SSR at Indianapolis (annual, 1960-2006) has a unit root in both first and second differences, so is non-stationary, while first differences (annual changes) regressed on lagged (previous year) SSR do not have a unit root and so are “stationary” – but “explosive and subject to chaotic trends”, as the absolute value of the coefficient is >1 (Seddighi, Lawler, and Katos, Econometrics, 2000, p. 246). Sound familiar in real climate (not the Schmidt version)?

  686. Tim Curtin Says:

    Oops, correction: that should read average Wh per square metre per day.

  687. Scott Mandia Says:

    Willis,

    You wrote:

    We have seen no change in the rate of sea level rise (in fact the rate of rise has slowed lately),

    Wrong.

    We have seen no anomalous warming,

    Wrong.

    we have seen no change in global droughts,

    Wrong.

    we have seen no change in global sea ice (not Arctic, global),

    Wrong.

    we have seen no change in global precipitation,

    Wrong.

    temperatures have been flat for fifteen years

    Wrong.

    Each of these points is discussed on the link to my page I gave to you. Illustrations included.

    I assumed that you would read it. I will save you some time and summarize here:

    1) 20 of the warmest years on record have occurred in the past 25 years. The warmest year globally was 2005 with the years 2009, 2007, 2006, 2003, 2002, and 1998 all tied for 2nd within statistical certainty.
    2) The warmest decade has been the 2000s, and each of the past three decades has been warmer than the decade before and each set records at their end.
    3) Temperature data from 1850 to present shows that there has been an increasing trend and the rate of warming has increased rapidly in the past few decades.
    4) Surface temperatures north of latitude 60°N have been warming at an accelerated rate in the past few decades.
    5) The Arctic had been experiencing long-term cooling, driven by Milankovitch cycles, over the past 2000 years until very recently. That cooling trend was reversed during the 20th century, with four of the five warmest decades of the 2000-year-long reconstruction occurring between 1950 and 2000.
    6) Sea ice extent has been dramatically reduced since the 1950s.
    7) Since measurements began in 2004, there has been a dramatic decrease in sea ice thickness.
    8) Greenland is losing ice mass and the rate is accelerating.
    9) Antarctica is losing ice mass and the rate is accelerating.
    10) The average mass balance of the glaciers with available long-term observation series around the world continues to decrease.
    11) 90% of worldwide glaciers are retreating.
    12) Much of the heat that is delivered by the sun is stored in the Earth’s oceans, while only a fraction of this heat is stored in the atmosphere. Therefore, a change in the heat stored in the ocean is a better indicator of climate change than changes in atmospheric heat. The heat content of the oceans is increasing, and the oceans are taking in almost all of the excess heat since the 1970s, which underscores the point that ocean heat content is a better indicator of global warming than atmospheric temperatures.
    13) Much of this ocean heat will be vented to the atmosphere in the future thus accelerating global warming.
    14) The Palmer Drought Severity Index (PDSI) curve reveals widespread increasing African drought, especially in the Sahel.
    15) Global warming due to human activities is increasing the severity of drought in areas that already have drought and causing more rainfall in areas that are already wet.
    16) According to the US Climate Extremes Index (CEI), extremes in climate are on the increase since 1970.
    17) The concentration of CO2 has reached a record high relative to at least the past 500,000 years, and has done so at an exceptionally fast rate.
    18) Most of the warming in the past 50 years is attributable to human activities.
    19) Although large climate changes have occurred in the past, there is no evidence that they took place at a faster rate than the present warming.
    20) If projections of a 5 °C warming in this century are realized, Earth will have experienced the same amount of global warming as it did at the end of the last ice age.
    21) There is no evidence of a comparable rate of global temperature increase in the last 50 million years!
    22) Sea level gradually rose in the 20th century and is currently rising at an increased rate, after a period of little change between AD 0 and AD 1900.
    23) The trend is 50% greater than that reported by the IPCC in 2007.
    24) Sea level is predicted to rise at an even greater rate in this century, compared with 20th-century estimates of 1.7 mm per year.
    25) When climate warms, ice on land melts and flows back into the oceans raising sea levels.
    26) When the oceans warm, the water expands (thermal expansion) which raises sea levels.
    27) IPCC 1990 projected sea level increases were too conservative. The latest observations show that sea levels have risen faster than previous projections.
    28) Rising sea-levels will result in more damage from hurricanes even if hurricane strength remains unchanged.

  688. VS Says:

    Scott, really, please don’t take offence, but this is not the right thread for this.

  689. Scott Mandia Says:

    Frank,

    My endpoints are 0 AD and 2009 AD. How is that fallacious? 2000 years is certainly not a cherry-pick.

    Here is what you refer to from my page:

    20 of the warmest years on record have occurred in the past 25 years. The warmest year globally was 2005 with the years 2009, 2007, 2006, 2003, 2002, and 1998 all tied for 2nd within statistical certainty. (Hansen et al., 2010) The warmest decade has been the 2000s, and each of the past three decades has been warmer than the decade before and each set records at their end.

  690. Scott Mandia Says:

    VS,

    I agree in one sense but Willis did ask me to respond and he was the one who started the nonsense. I am sure I will never convince him but there are others who are reading that need to see errors pointed out.

    This thread actually IS about the temperature record and why there is an upward trend. The comments have evolved into a stats discussion and that is why Bart tried to move it to a new thread. You chose to stay here which is fine with me. I think this is one of the best threads on the Web now.

    As long as somebody posts nonsense here I will feel obligated to correct it.

  691. kim Says:

    [edit]

  692. A C Osborn Says:

    HAS Says:
    March 24, 2010 at 05:21

    That is the point that I was trying to make over the way that the “Global Temperature” dataset is constructed.
    Of special significance is the Urban Heat Island effect being amalgamated with rural data or even replacing it.
    If you want some idea of just how bad the datasets are, have a look at Chiefio’s analysis of the world dataset; it is truly dismaying.
    http://chiefio.wordpress.com/
    They can say what they like about their reasons for the adjustments, but they just do not look correct.

  693. VS Says:

    ——————–

    MONTE CARLO ANALYSIS EXTENDED

    ——————–

    In my previous post, I elaborated on the small sample issues regarding the PP test. I concluded that the PP test is heavily biased (we think 5%, in fact 85%) towards rejecting a true null hypothesis of unit root presence, conditional on the variance structure and sample size of our data.

    I decided to extend the analysis to the other two tests employed: the ADF (with 3 lags) and the KPSS test (with no lags).

    ——————–

    KPSS TEST EVALUATED

    ——————–

    We repeat our Monte Carlo analysis (see code above), but now for the KPSS test instead of the PP test. The KPSS test takes stationarity as the null hypothesis, so we are now simulating the power (i.e. the probability of rejecting a false null hypothesis) of the test, given the structure of our data.

    Simulated rejection power of the test: 0.9932

    Note that this implies, that given the structure of our data, the KPSS test will reject the untrue null hypothesis of stationarity in 99.32% of the cases.

    Note that this is in contrast to what I ‘assumed’ earlier, namely that the KPSS test is biased towards not rejecting stationarity. In our case, there is no reason to conclude that.

    However, it is good news in terms of our test results and the implied power.

    ——————–

    ADF TEST EVALUATED

    ——————–

    We take 3 lags, as determined by the Akaike Information Criterion, and (again) trend-stationarity as the alternative hypothesis.

    We choose 3 lags because with fewer than 3 lags our errors are autocorrelated, which messes with the ADF test. Note that we have not rejected normality of the disturbances, so the use of normal disturbances is justified.

    NOTE: The ADF, just like the PP test, takes unit root presence as the H0.

    Simulated significance level: 0.0515

    This is very good news indeed! Almost exact!

    It implies that the significance level of 5% is indeed valid with the ADF test (almost exactly!). It also justifies the diagnostics we have performed earlier, to justify the use of the ADF.

    ——————–

    I reiterate: there is, by now, way more than enough statistical evidence to conclude the presence of a unit root in the GISS temperature record.

    As far as I’m concerned, and unless somebody comes up with more simulation results / test statistics (i.e. some formal argument, no ‘hand-waving’ please), this issue is closed for now, at least in terms of statistical inference.

    This is what formal handling of our observations tells us.

    VS

    PS. To run the simulations yourself, change the PP test equation to:

    KPSS test:
    results(z)=kpsstest(y,'lags',0,'trend',true);

    ADF test:
    results(z)=adftest(y,'model','TS','lags',3);

    PPS. For the record, here are the unit root test results, and here are the unit root test results, controlling for a structural break (i.e. Zivot-Andrews unit root test).

  694. Tim Curtin Says:

    Scott: just to take one of your “rebuttals” of Willis E:

    Please provide here a tabulation of the percentage coverage of the globe by instrumental temperature measurements since 1850 (HadCRUT) and 1880 (GISS). It is available, I have it, but I want to see you produce it.

    Anecdotally, David Livingstone did NOT find the natives sending tom-tom daily readouts of the temperatures along the Zambezi in the 1860s (pace UEA-HadCRUT); nor Stanley along the Congo River; and amazingly enough Winston Churchill was not sent by his CO to get a weather report from the Khartoum Met Office before his cavalry charge at Omdurman in 1898. Yet Jones at CRU and Hansen at GISS would have us believe that they have data from all these places when they build their series from 1850/1880. Project today’s temperatures at Khartoum, Kinshasa, and Livingstone (I have been to all of them) back to 1850/80, adjusted by any trend you like, and “global temperatures” as measured since 1990 will cease to be the warmest since 1850/80. But if you are Hansen, then of course New York 1880 is a perfect proxy for Khartoum 1880.

  695. JvdLaan Says:

    If you want some idea of just how bad the datasets are, have a look at Chiefio’s analysis of the world dataset; it is truly dismaying.

    If you want any idea of how bad that analysis is, read the whole series about Watts, d’Aleo and the real author of that piece, Smith aka Chiefio:

    Message to Anthony Watts

    They can say what they like about their reasons for the adjustments, but they just do not look correct.
    Are you implying fraud? And are there still no apologies from Watts and co?

    No back OT please

  696. JvdLaan Says:

    Apologies: “No” should read “now”, and the closing italic did not work.
    Another typo from me; I promise to make an appointment with a neurologist, this is getting out of hand!

  697. Bart Says:

    Alex,

    In a previous comment you wrote:
    “This is because the unit root is in the deterministic part of the equation and not in the random part.”
    So a unit root means that the deterministic part of the timeseries has a dependency on past values (rather than that the random part/variability has such a dependency)?

    That sounds similar to the effect of a positive feedback:

    If the climate forcing (i.e. the driving factor for the deterministic part of the timeseries) goes up, the temperatures would go up, which, in the case of a dependency on past values would cause subsequent temperatures to also go up more than they would otherwise have. (Of course, this would work in both directions; ‘up’ could be replaced by ‘down’).

    And another stats clarification question: I assume that the ‘lag’ refers to the number of timesteps over which the dependency holds? (i.e. in the random walk equation you gave, Y(t) = Y(t-2) + E(t), the lag is 2.) Depending on the lag time, it could also cause cyclical behavior.
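
    (To check my own intuition I tried simulating that lag-2 equation; a quick sketch, assuming I have understood the notation:)

    %Quick sketch of Y(t) = Y(t-2) + E(t): a unit root at lag 2
    n = 200;
    e = randn(n,1);
    y = zeros(n,1);
    y(1:2) = e(1:2);             %starting values
    for t = 3:n
        y(t) = y(t-2)+e(t);
    end
    plot(y)                      %odd and even timesteps form two interleaved random
                                 %walks; as they drift apart the series zigzags
                                 %(period 2) around a wandering level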

    But such a mathematical positive feedback in the timeseries would have to have a physical basis: A physical positive feedback. Otherwise energy balance considerations would dictate that the forcing reverses as a result. So just thinking out loud here, the presence of a unit root seems consistent with positive feedbacks in the climate system. Though that really depends on whether the deterministic trend already captures those feedbacks: If it does, there shouldn’t be such a dependence (mathematical positive feedback) left in the deterministic part of the timeseries. So perhaps a unit root signifies that the deterministic trend used to test for its presence is missing some (positive feedback) mechanism/contribution.

    More in general, if the unit root is a characteristic of the deterministic part of the timeseries, it makes it all the more important to get that deterministic part right: use the net climate forcings and all known causes of internal variability (ENSO, etc). Since the forcings don’t translate into temperature directly, another option would be to use the modeled response to forcing instead. That may have other issues though, since models create their own weather related random variability (although I don’t quite understand how; perhaps Eduardo Zorita could shed light on that?)

    MP and Tamino have taken a stab at using net forcing (which still omits known sources of variability such as ENSO). VS has contested their results, but AFAIK (VS, correct me if I’m wrong) hasn’t used the net forcing or model output himself in his tests, whereas the choice of deterministic trend is clearly important for the test results. That, plus other stats choices to be made which are only marginally clear to me, means that there is more ambiguity about unit roots than VS’s latest comment appears to allow for.

  698. VS Says:

    [edit]

    I ran the Monte-Carlo simulation on the ADF test, the way Tamino ran it to ‘debunk’ me, namely with 0 lags in the test equation (arrived at via the BIC). What do we get?

    Simulated significance level: 0.8578

    So, once you ignore the autocorrelation in the errors of the test equation (like Tamino did), your significance level (otherwise exact, see the post above) shoots up to 85%! In other words, you are rejecting a true null hypothesis of a unit root in 85% of the cases, while you think you are doing so in only 5% of the cases.

    Now, that’s the bias induced by autocorrelation in your test equation that we were talking about earlier.

    Do note that when we apply the proper 3 lags in our test equation, our significance level is almost exact (simulated value: 0.0515), given the derived DGP.

    PS. To replicate this Monte Carlo simulation, simply input the following code:

    results(z)=adftest(y,'model','TS','Lags',0);

  699. Ian Says:

    VS, I think my question may have been lost upthread; here it is again.

    I understand you to say that you tested a single year (2008) against a range (e.g., 1880-1950) – is that correct? I’m asking because I understood the argument differently. I would have thought that recent temps are anomalous because of a string of consecutive high values. E.g., how likely was the period of the last 10 years (or whatever period would be suggested by physical explanations), given a base period of (say) 1880-1950?

  700. VS Says:

    Hi Bart.

    “MP and Tamino have taken a stab at using net forcing (which still omits known sources of variability such as ENSO). VS has contested their results, but AFAIK (VS, correct me if I’m wrong) hasn’t used the net forcing or model output himself in his tests, whereas the choice of deterministic trend is clearly important for the test results. That, plus other stats choices to be made which are only marginally clear to me, means that there is more ambiguity about unit roots than VS’s latest comment appears to allow for.”

    First, see my previous two posts.

    I think Tamino’s ‘analysis’ is more or less dead, by now.

    The CO2 forcings variable is I(2), and I’ve provided enough references for you guys to check that one cannot use it in the CADF test equation (which assumes stationarity of the covariate). Note that I tested the CO2 ppm variable for stationarity; again, we find two unit roots.

    As for MP’s comment, I replied here.

    I’m still waiting for both his data and his corrected test results. Especially given the significance of lag length choice for the ADF test determined here.

    [Reply: CO2 concentration is not the same as net climate forcing, which would still not be the same as the expected deterministic part of the temperature timeseries. BV]

  701. GDY Says:

    VS – thank you for performing the power analyses on the KPSS and ADF tests; that was the next logical question, given the analysis of the PP test. The completeness of your analysis is compelling, and I hope that the climate science community welcomes the input and recognizes the need for the level of rigor you have brought.
    I have a feeling a lot of PhDs could get minted from extending your analysis here alone –
    – does it hold for longer time series, using Vostok Ice Cores (not tree rings please!!)
    – what would a similar analysis conclude for Deep Ocean Temp Series, or for a ‘more complete’ Global Temperature Series (global heat budget, etc),
    – are the other trend projections (sea level rise) based on the OLS Temperature trend projection, etc, etc.

    The irony to me is that we in this thread appear to have near-unanimity on the inability to conclude anything (project with meaningful confidence) about the future from the GISS data alone.

    I look forward to the conversation continuing…

    [Reply: Projections for future climate are not based on OLS but on physics combined with projections of potential future paths society could embark on. Unit root or not, we can still make meaningful statements about the physics of climate, and thus about future projections (given a certain emissions path). BV]

  702. VS Says:

    Hi Bart,

    First of all, I linked in my post to CO2 forcings (the 5.35*log(CO2_ppm/CO2_1880) transformations). We find these are I(2).

    I just added the CO2 ppm results to exemplify that this property is in fact displayed by the ppm variable, and the conversion to forcings simply preserves the non-stationarity of the ppm variable.
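
    (If you want to check the order of integration yourself, the recipe is just repeated differencing plus a unit root test; a sketch, with a synthetic stand-in for the ppm series – substitute the real data to reproduce the result I reported:)

    %Sketch: checking the order of integration of the forcing transform
    %(ppm is a synthetic I(2)-style stand-in; use the actual CO2 series instead)
    n = 129;
    ppm = 290+cumsum(cumsum(0.002+0.01*randn(n,1)));
    F = 5.35*log(ppm/ppm(1));    %the forcing transform discussed above
    h0 = adftest(F);             %levels
    h1 = adftest(diff(F));       %first differences
    h2 = adftest(diff(F,2));     %second differences
    disp([h0 h1 h2])             %for the real series I report I(2): only the
                                 %second difference rejects the unit root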

    Now, again, I don’t know what you mean by ‘net’ forcings (but I doubt that this ‘transformation’ of the ppm variable will make it ‘stationary’; there are two unit roots in there!).

    Link to data please.

    ——————

    Hi Ian,

    I personally think that’s the wrong way to look at it. Any sample realization is ‘special’, and any such analysis is bound to find some extremely low ‘probability’. That probability however, is not very informative (but it is very good to sling at people in non-scientific debates).

    You should look at the ‘temperature level’ reached by 2008, and how anomalous that is. We let the anomaly value in 2008 exemplify that ‘temperature level’.

    What I did was to take the variance-covariance structure of the temperature series in 1880-1935, when anthropogenic forcings arguably had a minimal effect on the variance-covariance structure of the GISS combined record.

    If we then start in 1881 with this structure, and construct a confidence interval for the temperature level reached in 2008, on the basis of the variance-covariance structure displayed by the data over the period 1880-1935, we find that it fits perfectly in the picture.
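
    (Mechanically, the construction looks like this – a sketch that reuses the full-sample ARIMA(3,1,0) coefficients from my earlier code purely to show the recipe; my actual exercise used the 1880-1935 estimates:)

    %Sketch: simulated distribution of the final-year temperature level
    %(coefficients reused from the code above, for illustration only)
    B = 10000; d = 128;
    a1=-0.438866631230771; a2=-0.368937963283039; a3=-0.308870699290478;
    fin = zeros(B,1);
    for z = 1:B
        e = randn(d,1)*0.096399;
        x = e;                         %first three disturbances as starting values
        for i = 4:d
            x(i) = a1*x(i-1)+a2*x(i-2)+a3*x(i-3)+e(i);
        end
        y = zeros(d,1); y(1) = -0.2;   %1880 starting level
        for k = 2:d
            y(k) = y(k-1)+x(k);
        end
        fin(z) = y(d);                 %simulated final-year (2008) level
    end
    disp(prctile(fin,[2.5 97.5]))      %simulated 95% interval for the 2008 anomaly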

    So, I would conclude that there is nothing anomalous about the temperature level we’ve reached in 2008. In fact, temperature variations over 1880-1935 account for it fine.

    [Reply: Forcing data: http://data.giss.nasa.gov/modelforce/.
    What are you implying with your last statement? The absence of a deterministic trend? I thought you had walked away from your random walk? Please clarify. Do you think the chance for high and low temperatures in the next decade is approximately equal? In that case, there would be some people willing to offer you a bet on that. BV
    ]

  703. VS Says:

    Hi GDY,

    Thank you for the compliment :)

    The reason why I went through with all of this, when it actually got some attention, is because I was annoyed to tears by all the claims of ‘unprecedented warming’.

    There is absolutely nothing in the data to suggest that. Deterministic trends are nonsense in the presence of a unit root, as is any non-cointegration based analysis of covariates.

    In addition, I elaborated here on why I believe the tree ring reconstructions glued onto the instrumental record are invalid for concluding anything on the topic of ‘unprecedentedness’.

    Shortly put: clearly different variance structure.

    I’m still puzzled as to how Mann managed to publish his Hockey Stick in the first place (in an A journal of all places!). Anyhow, that’s a discussion I don’t want to get dragged into now :)

  704. AndreasW Says:

    VS

    So bottom line:
    Even if, and that’s a big if I would say, Mr Jones’s and Mr Hansen’s temperatures are accurate, the claim that most (whatever that means) of the warming of the last decades is anthropogenic with a likelihood of 90% is not even a wild guess. It’s completely wrong, because if CO2 were the driver of temperature this unit root/cointegration stuff would turn out a different result. The temperatures we see now are nothing extraordinary based on the structure of the observations, CO2 or not.

    Bottom line of bottom line:

    GAME OVER!!

    Final score: VS 1 – AL Gore 0

    Wow!

    Do you agree or did i jump too far?

  705. VS Says:

    Bart,

    The deterministic trend was rejected. I posted a couple of thousand of pages worth of simulations and test results.

    Also, read Alex’s post again on the difference between a ‘stochastic trend’ and pure randomness. Nobody is claiming pure randomness.

    And no, I’m not implying a random walk. Read my simulation procedure, and the response to Eduardo, again. I’m simply projecting the observed variance-covariance structure of 1880-1935 into the future, and find that the anomaly observed in 2008 is in fact not anomalous.

    Finally, I’m not ‘saying’ anything. The formal results speak for themselves (and I think I elaborated enough on the procedure).

    VS

    PS. While you’re at it, remove that link to the James and Jules blog that you (for some reason) added to my post. Keep the link to the CO2 forcings tests; that’s still there.

    PPS. AndreasW, hehe, I wouldn’t conclude that, just yet. However, what I posted here, I believe to be the result of a proper application of the scientific method.

    [Reply: From Alex’ post, to which I replied here, I understood that unit roots are a property of the deterministic part of the equation. We know that there is a radiative forcing (i.e. deterministic) acting upon the system, and atmospheric temp change is just one of many changes we are observing as a result, which, by and large, are consistent with our current physical understanding of the system. Perhaps we’re using different lingo here, but there surely is a deterministic effect of radiative forcing on climate in the physical sense. If not, you’d have a helluva lot of explaining to do, nevermind textbooks that need to be rewritten. BV]

  706. VS Says:

    Eh, I posted a couple of thousand words

  707. Scott Mandia Says:

    VS you really SHOULD publish that hockey stick paper. Why would you not? This is “big time stuff”. If so, publish it in a science journal and not a math/economics journal. BTW, do not put it in E&E. :)

  708. JvdLaan Says:

    VS, a piece of advice from an old(er) man: watch the sun, don’t get too close.
    http://nl.wikipedia.org/wiki/Icarus

  709. TINSTAAFL Says:

    VS, for me as a layman who has followed this compelling thread from the first to the last post, what you are saying is: yes, the Earth is warming, but in 2008 no differently than in the period 1880-1935, and the (anthropogenic) role of CO2 in the warming process is not (yet) established?

    (Hats off to your writing style; the way you describe it, this dry material reads like a thriller!)

  710. A C Osborn Says:

    JvdLaan Says:
    March 24, 2010 at 13:44

    You and Tamino can say what you like.
    Have you actually read E M Smith’s analysis?
    Doesn’t it evoke any concerns whatsoever?

  711. Pofarmer Says:

    But it’s a physical system we’re dealing with, not some bunch of bare, meaningless numbers.

    Of course it is.

    But.

    Do rote physics rule? Do physical processes rule? Do solar or magnetic processes rule? Is something else in play?

    The statistics gives you the tools to figure that out.

    [Reply: solar or magnetic processes? Take a look here for example. (Temp, CO2, cosmic rays, total solar irradiance, sunspots, resp.) BV]

  712. JvdLaan Says:

    You and Tamino can say what you like. And there is more to come…
    Have you actually read E M Smith’s analysis? Yep
    Doesn’t it evoke any concerns whatsoever? Yep, some of it is just wrong and, given the history of Watts and co with the truth, make an educated guess who ‘some’ refers to.

  713. AndreasW Says:

    VS

    Seems like King McIntyre has found his heir, now that he’s sort of retired. You should run your own blog. I guarantee it would be a success. Obviously it’s a big personal and professional risk entering this minefield that climate science is, especially if you end up on the other team.

    Where is Lucia and the folks at the Blackboard? I’d love to see her engage with her “toy planets”.

  714. A C Osborn Says:

    # JvdLaan Says:
    March 24, 2010 at 15:36

    You and Tamino can say what you like. And there is more to come…
    Have you actually read E M Smith’s analysis? Yep
    Doesn’t it evoke any concerns whatsoever? Yep, some of it is just wrong and, given the history of Watts and co with the truth, make an educated guess who ‘some’ refers to.

    So you have looked at the raw data, recreated the graphs and disagree with his results? Which temperature series was that?

  715. Frank Says:

    BV says: “Projections for future climate are not based on OLS but on physics combined with projections of potential future paths society could embark on. Unit root or not, we can still make meaningful statements about the physics of climate, and thus about future projections (given a certain emissions path)”.

    How so? Since different (GCM) models give widely different projections, they’re certainly not incorporating the “same” relevant physics, or “all” of the relevant physics, or even entirely “correct” specifications of the relevant physics. They are (admittedly) tuned using all available historical data up to the present, and therefore can neither be independently verified nor provide meaningful statements about the physics of climate.

    [Reply: Your conclusions are a non sequitur. They are parameterizing the physics in different ways. BV]

  716. Ian Says:

    VS, about testing single years rather than a period of consecutive years – I appreciate the need to avoid “cherry picking” a year or group of years, either purposely or unintentionally. I suppose you could apply that objection to 2008 as well, absent some justification for its use. That’s why I mentioned using a period of consecutive years that’s suggested by a physical explanation. I don’t know how the mechanics of such a test would be set up, but in principle it seems a better test of an “AGW signal” in the temp series.

  717. Ian Says:

    to clarify that last comment: I’m suggesting testing a period of whatever length is suggested by a physical explanation (20-30 yrs rings a bell, but I don’t have a source – does anyone else recall this more clearly?) against the base period.

  718. Pofarmer Says:

    [Reply: solar or magnetic processes? Take a look here for example. (Temp, CO2, cosmic rays, total solar irradiance, sunspots, resp.) BV

    Yes, you should really look at that a little more closely.

    Maybe check the results for those graphs statistically.

    You certainly are resisting having these hypotheses formally tested.

    [Reply: The only thing I’m resisting is jumping to conclusions. BV]

  719. Pofarmer Says:

    How about a graph like http://www.climate4you.com/images/TotalCloudCoverVersusGlobalSurfaceAirTemperature.gif?

    There are lots of graphs out there.

    What VS is trying to show you is that the stuff you think is correlated might not be at all. If you would quit being so defensive and THINK, you might see the opportunity being presented here.

  720. Paul_K Says:

    VG,
    Your results re the PP test are truly shocking.
    If the problem is truly arising just because a sample size of 128 is too small for statistical discrimination, then the test needs to carry a serious health warning.
    I understand that your specific aim here was to demonstrate that the specific test applied by Tamino had no power (and oh boy did you do that!).
    However, in so doing, I suspect that you may have unwittingly written off the PP test in the minds of many readers here, and perhaps unfairly.

    Did you consider redoing the test stats with increasing values of n (and testing for covariance lags from 0:3, say) in order to discover whether the discriminatory power is low because of the sample size or low because you generated ARIMA(3,1,0) data and applied inappropriate testing to it?
    I believe that your findings here are worthy of a short paper, given that PP is a commonly applied test for a unit root and has been widely used across the disciplines.

  721. Paul_K Says:

    Sorry for the typo; the above post should have been directed to VS, and not “VG”, but I think you deserve a VG for effort.

  722. VS Says:

    Hi Paul_K

    Very quickly (RL is breathing down my neck)

    I think it has more to do with the variance-covariance structure of the data than with the sample size. Given this structure, the asymptotic correction doesn’t work as well as it should.

    I increased the n, and the results stayed more or less the same.

    So, the test apparently doesn’t handle the ARIMA(3,1,0) structure well, which is indeed what I sought to show in this case :) I suspected as much, given that it gives such different results from every other case.

    Keep in mind that MC analysis is situation specific. I simply performed it for the case of the GISS temperature record. I don’t think it says anything about the PP test in general.

    Cheers, VS

    PS. I still owe you a reply to your latest post :)

  723. JvdLaan Says:

    @Osborn
    I went over there and saw this remark from him
    Gee… we plant the thermometers in grassy fields that are covered in snow much of the time. Then change the place to be black tarmac, jet wash, and snow ploughs … and “find warming”. Yeah, I could see that pretty easily… but we’ll see what the data say. -E.M.Smith
    And called it a day when reading so much … (could not find the proper words) Just more surfacestations.org crap.

    And from now on, back OT. It is interesting enough to read VS comments and you are only distracting with your OT comments, so I will not react on you anymore.

    @dhogaza
    PS: I was in Deadhorse in 1991 photographing Spectacled Eiders

  724. Frank Says:

    Frank says: “Since different (GCM) models give widely different projections, they’re certainly not incorporating the “same” relevant physics…”

    BV says: “Your conclusions are a non sequitur. They are parameterizing the physics in different ways.”

    I disagree. What’s the difference between not incorporating the same physics and parameterizing the physics in different ways? The parameterization process is a “plug” to make different models fit observed (past) conditions. As Fermi (and many others) said, the fact that models can be tuned has no bearing on their accuracy or skill unless they can be validated against out-of-sample data.

  725. Pofarmer Says:

    The only thing I’m resisting is jumping to conclusions.

    Unfortunately, you seem to have jumped to plenty of them.

  726. Willis Eschenbach Says:

    VS, a question for you. I understand that temperature is I(1) and CO2 is I(2), so we can’t directly compare them.

    My question is, what statistical tests can be used with I(1) datasets? Can OLS be used, or used with some kinds of adjustments? I have in the past used both the method of Quenouille and of Nychka to adjust for autocorrelation. Koutsoyiannis has also written on the subject. Are any of these methods usable?
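
    For instance, is the cointegration machinery the right tool here? A sketch of the sort of thing I imagine (Econometrics Toolbox, with made-up data standing in for the real series):

    %Sketch of an Engle-Granger style cointegration test on made-up data
    n = 128;
    x = cumsum(randn(n,1));      %an I(1) 'driver'
    y = 0.5*x+randn(n,1);        %related to x, with a stationary error
    [h,pValue] = egcitest([y x]);%H0: no cointegration
    disp([h pValue])             %h is typically 1 here: the y-x relation is not spurious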

    [edit. OT]

  727. ScP Says:

    [edit. OT]

  728. Bart Says:

    ALL: As of now, off topic comments are no longer allowed. Please take a look at the new comment policy for reference. Hopefully this way we can keep the discussion a bit more focussed than it has been.

    Thanks!

  729. ScP Says:

    [edit. OT]

  730. Alan Wilkinson Says:

    Scott Mandia, it is honest science to say that the last decade has been the warmest of the century but it is also honest science to say that the warming occurred in the previous decade rather than the last one.

    If you want any public credibility as a scientist your honesty must be comprehensive, not selective.

  731. ScP Says:

    Bart, that’s fine – can you direct us to the sea ice thread ;-)

    [Reply: Open thread. BV]

  732. Alan Wilkinson Says:

    In a similar vein to my last comment: while VS complains about claims of “unprecedented warming”, with proper qualification, such as “within the past 150 years”, they can be true.

    What he has shown is that although within that period the temperature has reached an “unprecedented level” it got there in a manner consistent with its statistical pattern over the full period for which we have directly measured temperature records. So there is no statistical evidence of recent “unprecedented warming” trends.

    This is the proper way to communicate science to the public.

  733. Scott A Mandia Says:

    Alan, I am unsure what your comment to me means. What did I say that was selective? What is different about “last” vs “previous”?

  734. dhogaza Says:

    This is the proper way to communicate science to the public.

    The proper way would be to state that he has made *claims* which contradict the work of many professionals, including professional statisticians (which he is not).

    And that he’ll be a very famous man if he publishes his work in respectable journals, it’s not every day one gets to shoot down a large body of work.

    Until then, it’s just VS BS.

  735. Alan Wilkinson Says:

    Scott, the warming occurred in the nineties; it plateaued in the noughties.

    dhogaza, VS has not shot down a large body of work. He (or rather Beenstock and Reingewertz, and others who have published similar cointegration studies) has just improved the analysis and understanding of a very complex system and time series.

  736. guthrie Says:

    Alan Wilkinson – you wrote:
    “What he has shown is that although within that period the temperature has reached an “unprecedented level” it got there in a manner consistent with its statistical pattern over the full period for which we have directly measured temperature records. So there is no statistical evidence of recent “unprecedented warming” trends.”

    But what was responsible for the warming in, say, the first half of the 20th century and the late 19th? People currently blame CO2, but how did the earlier warming come about?

  737. Scott Mandia Says:

    Alan writes:

    Scott, the warming occurred in the nineties, it plateaued in the naughties.

    Absolutely true…if your definition of plateaued means “rising temps”.

    Contrary to the widespread myth, global warming has not stopped, and even if the rate slowed a bit in the very short term, the temps are still going up – hardly a flat plateau.

  738. Shub Niggurath Says:

    Scott:
    I am sorry if I did not catch this above, but your claim of a widespread myth contradicts Richard Kerr’s article in Science in October 2009 (for which Willis provided a link). No one’s quoting WUWT directly here – it is a careful citation of a source you shouldn’t have problems accepting. In other words, unimpeachable for the purpose of the present discussion.

    And are you saying Richard Kerr is participating in propagating this so-called myth? As far as I know, Kerr is sympathetic to consensus climate change views. Even if we assume the Science/Nature guys say naughty things to keep the vat boiling, Phil Jones said the same thing, which ended up with us getting stuck discussing the philosophical meaning of the ‘p value’.

    I would ask you to resolve this for us. Your citation, refuting this myth, should be acceptable to the opposite party. I was under the impression that the official position was that there is a stationary trend in the current decade or even greater, only that it was too short a period to draw any long-term conclusion from.

    Regards

  739. Alan Wilkinson Says:

    Scott, you seem able to convince yourself of anything but I prefer simply to use my eyes:

    http://rankexploits.com/musings/2010/hadcrut-down-slightly-from-january/

    Your trust problem (entirely self-created) is that the public will do the same.

  740. Eric Says:

    Apologies if this is off topic. Just wanted to thank our host for allowing this discussion. I can’t think of many (any) other blogs where this would have been possible.

    Even further off topic. I took a number of econometrics courses from Dr Zivot at UW in the late 90s. He is a wonderful teacher and a very nice man. It pleases me greatly to see his work cited here.

    Thank you, Dr. Verheggen; you are doing us all a great service.

  741. Eric Says:

    Thanks also to the many others who have contributed much here.

  742. John Whitman Says:

    VS,

    Admittedly, I am on a steep learning curve with the statistical information you have kindly shown. I appreciate the large amount of energy you have given to explain the statistics!

    Given:

    1) that the CO2 forcing [& CO2 ppm] you found to be I(2) and that the GISS temp record [& CRUTEM3] you found to be I(1)

    2) that several others have found this independently of you, including some that were published in professional journals

    3) that there is an established basis in the industry (of Statistics) for the methods you have applied to the analysis of those time series (and associated Nobel Prizes)

    4) that the knowledge of 1) & 3) appears, at least to me, not to have been disseminated into the general climate science blog community

    Questions:

    a) What, in your professional opinion, are the weakest points in 1) & 2) that should be the next area of professional focus? In other words, are there commonly known ‘contentious’ areas in your statistical universe regarding these kinds of time series analyses?

    b) Further, what, in your professional opinion, would constitute, looking forward, further confirmation of those givens? I am trying to project from where we are here at this blog into the next areas of focus.

    c) What statistical publications/blogs/associations would you recommend for intermediate statistical enthusiasts to pursue the statistics you have been using?

    Bart, thank you once again for this wonderful venue where such a collection of lively statistical minds have congregated to the great benefit of people like me. : )

    John

  743. Alan Wilkinson Says:

    Guthrie: “what was responsible for the warming in, say the first half the 20th century and late 19th?”

    The short answer is that I don’t know. My null hypothesis was given above (March 23, 2010 at 01:53). That seems at least consistent with Roy Spencer’s work with cloud forcings, the correlations with PDO and the statistical patterns we’ve looked at here. Obviously there are many possible factors. As Spencer also points out, any and every forcing missing from the models results in CO2 forcing being over-estimated.

    For that matter, what caused the recent “mini ice age”? Possibly the reverse mechanism ended it? I think it will turn out that many factors affect climate and many other mechanisms moderate it. I doubt we understand half of it yet.

  744. Scott Mandia Says:

    Shub and Alan:

    Do yourselves a favor and go to:

    http://www.woodfortrees.org/plot/

    Plot 2000 – 2009 for each of the four major temperature data sets: GISS, HadCRU, RSS, UAH, along with the OLS trend fit.

    Then read this:

    http://www.realclimate.org/index.php/archives/2009/10/a-warming-pause/

    Then read the Science article. Yes, Kerr propagated a myth.

  745. Scott Mandia Says:

    Also, recall that 10 years does not determine a trend with significance. Each of the past three decades has been warmer than the one before and the next decade is very likely to be even warmer than the previous one. I wonder what you will be saying in 2020 when we set another record decade?

  746. Alex Heyworth Says:

    But Scott, if temperature is I(1), what is the point of plotting an OLS trend line?

    [Reply: If that means that the error of the trend is underestimated, but the trend estimate itself is not heavily influenced (not sure, but seems likely), then at least one can see to what extent the last 10 years deviate from such a trend line. They don’t deviate much at all. That says something about the last 10 years not having been really different from the 25 years prior (as in the last graph of my post). Of course, it would have been better to take the nature of the variability into account in the trend estimate, I realize that. Then there’s the 11 year running mean, showing no sign of slowing down. BV]

  747. Alan Wilkinson Says:

    Scott, if we set another record decade in 2020 I will say so, just as I am describing accurately what has happened in each of the last two decades.

    I didn’t claim a trend for the last decade but you did, and wrongly. I don’t need to plot the data. Lucia is a smart lady, neutral on AGW, and she has already plotted the source data as I linked.

    Neither do I read Real Climate. I read only sources that allow counter arguments freely debated. [edit]

  748. Steve Fitzpatrick Says:

    Bart,

    What a thread!

    VS,

    Very impressive work. You really should formally write it up and submit it for publication in a climate journal.

    Your results do not at all take away from climate science, and certainly do not conflict with long established physical science. They simply show that “X of the last Y years” being the warmest ever would not be terribly unusual in the absence of any forcing, based on the structure of the temperature data from before the time when GHG forcing was significant. It seems to me you have just shed some light on the true level of uncertainty in the temperature trend.

  749. Global Warming...Fact or Fiction? - Page 100 Says:

    […] […]

  750. John Whitman Says:

    ”””’Alex Heyworth Says: March 25, 2010 at 06:19 – But Scott, if temperature is I(1), what is the point of plotting an OLS trend line?”””’

    Alex,

    Is the problem with Scott’s suggestion [@Scott Mandia Says: March 25, 2010 at 05:57] for plots of temp that, with temp being I(1), the OLS he suggests leads to spurious regression? That it will show false positives?

    Appreciate help in trying to understand this.

    John

  751. Alan Says:

    mmm … I did laugh out loud when Alex posted the question. I am lost.

    My weight since 2000 is a virtual mirror of GISS but I take comfort that ‘statistically’ it is not valid to say that I have put on weight at the rate of x kgs/year.

    Now if only my pants could autocorrelate with my waist.

    Gotta laugh …

  752. Alex Heyworth Says:

    John, briefly the answer is yes. David Stockwell has given an excellent couple of expositions of orders of integration for the layman here: http://landshape.org/enm/orders-of-integration/#more-3994

    He also has an excellent series of articles explaining cointegration for the layman here: http://landshape.org/enm/cointegration-primer/

    I was actually rather tongue in cheek in my comment to Scott; old habits die hard, no doubt, plus he may not be convinced that temperature is in fact I(1).
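
    (And if you want to run the unit root test yourself rather than take it on faith, a rough Python/statsmodels sketch; the random walk below is only a stand-in for the real annual anomaly series:)

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(42)
        giss = np.cumsum(rng.normal(0.0, 0.1, 129))  # stand-in: a random walk behaves like an I(1) series

        adf_level = adfuller(giss, regression='ct')          # H0: unit root, vs. trend-stationary alternative
        adf_diff = adfuller(np.diff(giss), regression='c')   # same test on the first differences
        print('levels:      stat %.2f  p %.3f' % adf_level[:2])
        print('differences: stat %.2f  p %.3f' % adf_diff[:2])
        # failing to reject in levels while rejecting in differences is the I(1) signature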

  753. John Whitman Says:

    ”””’Alex Heyworth Says: – March 25, 2010 at 09:27 – John, briefly the answer is yes. David Stockwell has given an excellent couple of expositions of orders of integration for the layman here: http://landshape.org/enm/orders-of-integration/#more-3994 . . . . I was actually rather tongue in cheek in my comment to Scott; old habits die hard, no doubt, plus he may not be convinced that temperature is in fact I(1).”””’

    Alex,

    Yeah, saw Stockwell’s good stuff for laymen and did my homework there. But, I need to try out my understanding on live people. Thanks for being my sounding board.

    Regarding you being sorta tongue in cheek with Scott, understand. I try to stay away from that stuff unless I know someone really really well . . . and I am a newbie here.

    John

  754. VS Says:

    Hi Steve Fitzpatrick,

    I think you summarized the findings pretty well. Allow me to pick up where you left off and provide some more illustrations, because I still don’t have the feeling that people understand the fundamental difference between stochastic and deterministic trends. Hint: it’s in the forecasting confidence intervals!

    Remember that the day before yesterday I used simulation to calculate my forecasting confidence interval in 2008, considering only the information available in the period 1880-1935. Turns out that one of my statistical software packages actually does that for you (doh! shows once again that I’m not a TSA expert, haha).

    Now let’s take a look at the difference between accounting for a unit root in your ‘trend behavior’, and ignoring it, in terms of forecasting confidence.

    ——————-

    ESTIMATING THE DETERMINISTIC TREND
    BASED ON INFORMATION 1880-1935

    ——————-

    Let’s now assume that we know absolutely nothing about unit roots, and we blindly assume trend stationarity of the GISS record. We take the data 1880-1935 and estimate our (misspecified!) deterministic OLS trend equation.

    Note that, as far as I have seen, this is exactly what most climate scientists do when analyzing time series!

    OLS trend equation:

    GISS(t) = Constant + Beta*t + error(t)

    We get the following estimates (p-value):

    Constant: -0.305717 (0.0000)
    Beta: 0.002847 (0.0005)

    R2: 0.206546

    We test our disturbances for normality, and fail to reject it. So far so good.

    JB: 0.423915

    Now we use the Breusch-Godfrey test (2 lags) for residual autocorrelation, and we in fact reject the null of no autocorrelation. We therefore cannot assume our errors are uncorrelated.

    F-statistic: 6.886857
    p-value: 0.002251

    This means that we need to account for this autocorrelation in our specification. We add an AR(1) term in the equation and estimate the following (misspecified!) trend-stationary equation:

    GISS(t) = Constant + Beta*t + AR1*GISS(t-1) + error(t)

    We now get the following estimates (p-value):

    Constant: -0.322131 (0.0000)
    Beta: 0.003309 (0.0135)
    AR1: 0.452317 (0.0006)

    R2: 0.388423

    We again test our errors for normality, and again we fail to reject it.

    JB: 0.423915

    We then turn to the Breusch-Godfrey test (2 lags) for residual autocorrelation, and this time we fail to reject the null of no autocorrelation:

    F-statistic: 0.612306
    p-value: 0.546195

    Ah, great! So our disturbances have been ‘cleaned up’ by the AR(1) term, our R2 is close to 40%, and our errors are normal. Looks like a good fit, no? Yes, it’s great, were it not that we actually had a unit root in our series…

    Note that this is the reason why we start our analysis with unit root analysis. Ignoring this crucial step (which serves to establish/reject stationarity of our series), might lead us to conclude that we have in fact estimated a proper trend model, in terms of diagnostics!
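
    (For anyone replicating this step without EViews: a rough Python/statsmodels sketch of the AR(1)-augmented trend regression and its diagnostics. The simulated series below is only a stand-in for GISS 1880-1935, and this is an illustration of the procedure, not the code I actually ran:)

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import acorr_breusch_godfrey
        from statsmodels.stats.stattools import jarque_bera

        rng = np.random.default_rng(0)
        giss = np.cumsum(rng.normal(0.0, 0.1, 56))     # stand-in for the 1880-1935 anomalies

        # GISS(t) = Constant + Beta*t + AR1*GISS(t-1) + error(t)
        t = np.arange(len(giss))
        X = sm.add_constant(np.column_stack([t[1:], giss[:-1]]))
        fit = sm.OLS(giss[1:], X).fit()
        print(fit.params, fit.pvalues)

        print('JB p-value: %.3f' % jarque_bera(fit.resid)[1])                  # residual normality
        print('BG F p-value: %.3f' % acorr_breusch_godfrey(fit, nlags=2)[3])   # residual autocorrelation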

    Now, let’s move on to the stochastic trend estimation.

    ——————-

    ESTIMATING THE STOCHASTIC TREND
    BASED ON INFORMATION 1880-1935

    ——————-

    Now we are going to take a look at what happens if we actually account for that (damned :) unit root in the series. The stochastic trend equation we are going to use is:

    [edited as per VS’ later correction. BV]

    D(GISS(t)) = AR1*D(GISS(t-1)) + AR2*D(GISS(t-2)) + AR3*D(GISS(t-3)) + error(t)

    with the relevant starting values given in the GISS series. In the first part of the posted simulation, I described (under ‘SPECIFYING THE NAIVE ARIMA MODEL’) the exact procedure for arriving at this specification.

    Note that this specification also has normal disturbances, and doesn’t suffer from any autocorrelation (see link above).

    In any case, we get the following estimates (p-values):

    AR1: -0.368527 (0.0091)
    AR2: -0.392924 (0.0051)
    AR3: -0.342341 (0.0173)

    R2: 0.231259

    Note that this is the stochastic trend specification arrived at through formal testing. I.e. the data are ‘at peace’ with this specification, while they reject (in the unit-root testing stage) the deterministic OLS trend specification.
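
    (In statsmodels terms this specification is simply an ARIMA(3,1,0) with no drift; continuing the Python sketch above, which is an illustration only:)

        from statsmodels.tsa.arima.model import ARIMA

        # D(GISS) on its own first three lags, no constant: ARIMA(3,1,0)
        arima_fit = ARIMA(giss, order=(3, 1, 0), trend='n').fit()
        print(arima_fit.summary())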

    ——————-

    DETERMINISTIC VERSUS STOCHASTIC TREND
    FORECAST CONFIDENCE 1880-2008 BASED ON
    INFORMATION AVAILABLE 1880-1935

    ——————-

    Now we use both our specifications to determine our confidence intervals. We let EViews calculate the standard errors in each year for us, and we take the (projected) expected value of the series in every year and construct our forecasting confidence interval via the following equation: E_f[GISS(year)] +/- 1.96*STD_f[GISS(year)].
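
    (In the Python sketch, both E_f and STD_f come straight out of the fitted model’s forecast object:)

        # 95% forecasting confidence interval, 73 steps ahead (1936-2008)
        fc = arima_fit.get_forecast(steps=73)
        print(fc.predicted_mean)          # E_f[GISS(year)]
        print(fc.conf_int(alpha=0.05))    # E_f +/- 1.96*STD_f[GISS(year)]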

    I will now proceed to plot the actual GISS data, together with the 95% forecasting confidence intervals, using the two specifications.

    Here are the two figures.

    FIGURE 1 – (misspecified) Deterministic trend forecasting confidence intervals (based on 1880-1935 observations), together with GISS series

    FIGURE 2 – Stochastic trend forecasting confidence intervals (based on 1880-1935 observations), together with GISS series

    They say a picture is worth a 1000 words. I think these two are worth more than that, considering the length of this thread :)

    The fundamental difference here is that our misspecified deterministic trend model would lead us to conclude that the warming seen in the past 20-30 years is anomalous considering the trend observed over 1880-1935. Note how the actual realized GISS series is jumping in and out of our 95% forecasting confidence interval.

    Climate scientists looking at this seem to conclude that something fundamental has changed in the latter half of the last century (e.g. increased climate forcings). An econometrician however would refrain from jumping to conclusions, and investigate the series for non-stationarity.

    Now, notice how the forecasting confidence intervals of the stochastic trend estimate leads us to conclude that ‘the trend’ has not changed at all significantly (in statistical terms).

    In fact, every single realization of the temperature anomaly between 1935-2008 falls perfectly within our 95% forecasting confidence interval !!!

    In other words, when applying formal methods to arrive at our ‘trend estimate’ (i.e. find the unit root, and account for it), we find that the temperatures observed over the period 1935-2008 are perfectly in line with the ‘trend behavior’ exhibited by the temperature series in 1880-1935.

    Now everybody meditate on that for a minute or two… ;)

    Cheers, VS

    ——————-

    Hi JvdLaan, right, the maaiveld (the mowing field one shouldn’t stick out above), I had forgotten for a moment.. Ovidius too ;)

    ——————-

    Hi John Whitman, I’ll try to answer your questions later, was busy with this post :)

    ——————-

    Hi, Eric, you studied under Zivot? OK, now I’m officially jealous ;)

    ——————-

    MONTE CARLO UPDATE/CORRECTION

    ——————-

    I ran the PP test monte carlo simulations again. This time, instead of 0 lags, I employed more.

    NOTE: these are ‘different’ lags than those in the ADF equation (which is why I missed them the first time around ;)

    Entered lags – Simulated sig values.

    1 Lags – 0.7915
    2 Lags – 0.7580
    3 Lags – 0.7257
    4 Lags – 0.7557
    5 Lags – 0.7791

    So, with different lag lengths the results stay the same (but in all fairness, I really should have checked before posting :).

    I wrote earlier Tamino and I both used 0 lags. I double checked that, and it’s not the case. I think Tamino actually used 4, I indeed used ‘0’, but my method was slightly different (involving kernel corrections, it’s up there in the test results if you’re interested).

    Apologies.

    Thanks Paul_K, for reminding me to double-check :)

    [Reply: These figures are useful indeed. It aids the discussion much to actually *see* what you mean. But what are you attempting to show with these? Nobody expects the trend during 1935-2008 to merely be a continuation of the trend during 1880-1935. Rejecting that is rejecting what nobody is claiming in the first place. Your bottom figure seems to have hardly any predictive skill. Anything goes, as long as it doesn’t deviate too much from preceding values. I think that physics based climate models possess a lot more skill, see e.g. the top panel here (anthropogenic and natural forcings; yellow: individual model runs; red: ensemble mean of models; black: measurements). BV]

  755. VS Says:

    Oops,

    D(GISS(t)) = D(GISS(t-1)) + D(GISS(t-1)) + D(GISS(t-1)) + error(t)

    should read:

    D(GISS(t)) = AR1*D(GISS(t-1)) + AR2*D(GISS(t-2)) + AR3*D(GISS(t-3)) + error(t)

    [Changed in original. BV]

  756. Alex Heyworth Says:

    John, further to my comment above, the following statement by VS (a fair way upthread) summarises the situation well:

    … given the (established) presence of a unit root, simple OLS based (multivariate or not) inference is invalid. This includes OLS based trend estimation and calculation of the relevant confidence intervals. Note that this is not a matter of opinion, but rather a formal result.

  757. Tim Curtin Says:

    VS: Time to publish!

    But can I also refer VS et al to Terence C. Mills’s paper from not that long ago in the Journal of the Royal Statistical Society: ‘Time series modelling of two millennia of northern hemisphere temperatures: long memory or shifting trends?’, JRSS A, Volume 170, Part 1, pp. 83-94, 2007. This unit root stuff is really at least partly about “memory”. Mills is no dummy; he has a good textbook on all this, Time Series Analysis for Economists, CUP (Yes, CUP), 1998. At the least he deserves a citation!

    [edit]

  758. Steve Fitzpatrick Says:

    VS,

    So, assuming the limits shown on your two graphs are the 95% limits, the correct statistical treatment says that the rise in temperature post 1935 has not yet reached significance at 5%.

    Now it does look to me (if I understand your graphs correctly) like the rise would be significant at 10%. So the best we can say is that the null hypothesis remains a 5% to 10% possibility, in spite of the measured increase in temperatures since 1935.

  759. Alex Heyworth Says:

    Bart, when I commented to Scott

    But Scott, if temperature is I(1), what is the point of plotting an OLS trend line?

    I was being ironic. My point was that it is a little incongruous to be suggesting that someone look at a graph of the last ten years’ temperatures with an OLS trend line, on a 700+ comment thread which has been at least partly devoted to establishing that OLS analysis of temperature trends is statistically invalid.

    I suppose I shouldn’t expect anyone other than Aussies, Kiwis and Poms to “get” irony.

  760. VS Says:

    Hi Steve Fitzpatrick,

    Good point.

    When using a 90% forecasting confidence interval, we find that the following years fall outside of it:

    1998 and 2001-2007 (2008 is back in line)

    Now, let’s go back to the definition of the ‘90% forecasting confidence interval’. What does this actually mean? Well, it implies a ‘significance value’ of 10%. And what does this mean? Well, that under the true null hypothesis we would expect to observe an ‘anomaly’ in 1 of every 10 disturbance realizations.

    Also note that, because of the autocorrelation structure of our series, if one year ‘pops’ out of our confidence interval, the following couple of years will stay ‘above’ it. Now observe 2001-2007.

    Think about this too for a minute… it’s a simple definition, but it takes some time to wrap your head around it ;)

    Nevertheless, good point.

    As a side note, if the ‘evidence’ of ‘unprecedented warming’ is the difference between a 5% and 10% significance level, I would say that we really shouldn’t jump to any conclusions (and ‘carbon taxes’), just yet ;)

    VS

    PS. Note that this doesn’t constitute a ‘proof’ that there is no change in our stochastic trend. However, it is enough statistical evidence to suggest that if some change indeed took place, it’s not extreme.

    Now compare that to the results arrived at by Zorita et al (2008), i.e. 1/10,000 ‘probability’ of observing the recent warming record, considering ‘nothing changed’.

    Ehm.

    PPS. Alex Heyworth, what do you mean only Aussies, Kiwis and Poms ‘get’ irony? How about this comment of mine to Eschenbach and Mandia :P

  761. Scott A. Mandia Says:

    BTW, Alex, I did get the irony and I even laughed when I posted my comment knowing what was coming. :)

  762. VS Says:

    Hi Bart,

    Thanks for your reply.

    You misinterpreted my post. I was trying to explain the difference between a deterministic and stochastic trend.

    I surely wasn’t comparing my statistical specification, with its 4 parameters and 3 starting values, based on some 55 observations, with GCMs, which rely on (probably) hundreds of parameters, many many more observations and God knows how much CPU time.

    I mean, come on.

    We’ll get to that when we actually start reproducing Beenstock and Reingewertz :) That specification actually contains explanatory variables… fun!

    What I did here was illustrate what a stochastic trend is and why it’s so much more appropriate in this case than a deterministic OLS one (apart from the formal results). Now, you say it has no predictive power (in terms of expected value). Of course it doesn’t: it’s based on one variable (i.e. time), it has 4 parameters, and the series contain a stochastic trend!

    Besides, given the plainness of the specification and the fact that it was estimated on 55 observations, I think managing to project with 95% confidence where the temperature levels are going to be 73 years later, and succeeding, is pretty good for a 4 parameter model. The (misspecified!) trend-stationary OLS specification fails miserably.

    So once again everybody: quit calculating deterministic OLS trends on the temperature record!

    Now, somebody alerted me about your comment on Tamino’s blog, where you state that I set up a ‘strawman’. In fact, I think it was you who set up a strawman right there, Bart :P

    Cheers, VS

  763. Bart Says:

    VS,

    Perhaps I misunderstood you indeed. Though I’d hardly think that I’m the only one. There are people very eager to jump to conclusions, and they need very little in terms of a jumping board.

    You wrote: “The (misspecified!) trend-stationary OLS specification fails miserably.” Well, of course it does. Nobody expects it to have any value. So either it’s a strawman or it’s, well, what exactly? Stating the obvious?

  764. VS Says:

    Hi Bart,

    “Perhaps I misunderstood you indeed. Though I’d hardly think that I’m the only one. There are people very eager to jump to conclusions, and they need very little in terms of a jumping board.”

    Ok, fair enough. I guess that if that’s indeed the impression of a layman, then your reply is a valid addition to the debate (although the version at Tamino’s wasn’t, sorry).

    So again,

    This post was a demonstration of what the data ‘tell us’ about ‘trends’ when we take good care to listen to them properly.

    Also, it was an exposition of what they don’t tell us (in this case the direction of change, and the ‘confidence’ with which we ‘know’ this).

    As such, it served to show how deterministic OLS trend estimation is pointless for saying ‘which way’ temperatures will move, because the series are I(1) with no significant drift. The story doesn’t suddenly change when you estimate the deterministic trend on other time intervals (viz. your original blog entry) while making assumptions that are not supported by analytical facts (i.e. evidence of trend-stationarity).

    Put shortly, this was an answer to Adrian Burd’s question, to me: “So, how do we correctly estimate the trend, or is this impossible to do, given the data at our disposal?” (bold added)

    Cheers, VS

    PS. Tom, I appreciate this :)

  765. Shub Niggurath Says:

    Hi Scott
    So I am quoting Science magazine and you are quoting RealClimate?

    Let me elaborate a bit for the benefit of our readers:

    “The blogosphere has been having a field day with global warming’s apparent decade-long stagnation”

    “The pause in warming is real enough, but it’s just temporary, they[climate researchers] argue from their analyses”

    “Corrected for the natural temperature effects of El Niño and its sister climate event La Niña, the decade’s trend is a perfectly flat 0.00°C. ”

    “So contrarian bloggers are right: There’s been no increase in greenhouse warming lately.”

    Richard Kerr spoke of the stationary trend in his piece in context of the Hadley Centre’s attempts to see if its models recapitulate what is observed in nature – which they apparently do – they apparently also generate decade long pauses (17 times in 700 years of simulation).

    His source: J. KNIGHT ET AL., BULL. AMER. METEOR. SOC., 90 (SUPPL.), S22–S23 (AUGUST 2009). Google it – you can download a 64MB pdf file.

    The whole Kerr article as such only gives voice to the opinion that “no sort of natural variability can hold off greenhouse warming much longer” so we can take its contention that there has been a pause seriously. No harmful non-consensus views there.

    Could the pause be due to natural variability? I give you Stefan Rahmstorf:

    “Pinning the pause on natural variability makes sense to most researchers. “That goes without saying,” writes climate researcher Stefan Rahmstorf of Potsdam Institute for Climate Impact Research in Germany by e-mail. “We’ve made [that point] several times on RealClimate.”

    Look at the (long forgotten) original post in this thread. Bart graphs the trend for the last ten years – a stationary trend. I can use this point to counter yours precisely because I am quoting a source non-adversarial to you and your position.

    Last ten years stationary or not? That is all the question we are addressing now. What it implies is not my concern. We should be able to agree on that before considering further discussion.

    [Reply: The rate of increase in GISS temps over the last 10-12 years is similar to that of the past 25 years. See e.g. the Copenhagen Diagnosis. For HadCRU that is not the case (but that series omits the Arctic). Both your assertion and the sources I base my reply on are (AFAIK) based on OLS trends however (the irony hasn’t escaped me), and thus give a deflated sense of the error. RealClimate is run by climate scientists btw, who have published in Science and Nature themselves. BV]

  766. Bart Says:

    VS,

    I haven’t seen a clear (to me, at least) answer to the question “how do we correctly estimate the trend?”
    What is according to you the trend (in deg C per year) from 1975 to 2009?

    Are you planning to take up my suggestion to work with the net forcings and/or GCM output?

    Any thoughts about the physical constraints and forcings on the system, as I brought up in my two subsequent posts?

  767. The Blackboard » Questions for VS and Dave Says:

    […] Alex Heyworth, have been asking me to jump into the VS/Bart/Dave Stockwell/Tamino fray. It appears VS is making some sort of claims in comments at Bart’s blog. The claims and clarifications are spread out over 760 or so comments, with various interjections. I […]

  768. AndreasW Says:

    VS

    So with a 95% confidence interval there’s been no statistically significant warming since eh …. 1880?

    Or did i get it wrong?

  769. Pofarmer Says:

    What is according to you the trend (in deg C per year) from 1975 to 2009?

    My understanding is: none, since it’s within the expected variability of the data.

    Not answering you, Bart, just putting this out there to see if I understand it correctly.

  770. VS Says:

    No,

    Observations imply that, if the data are to be believed, the planet has warmed gisstemp(2008)-gisstemp(1881)=0.43-(-0.2)=0.63 degrees, in the period 1881-2008.

    We’re talking about trends.

    [Reply: At the risk of misunderstanding you again, are you implying that the best estimate of the (rate of) change in temperature is to just take the difference between two years? That’s hard to believe, since it’s very dependent on the exact choice of years (having quite a bit of yearly variability). Fitting a linear trend, even if imperfect, seems to provide a much better estimate (because it’s less dependent on the exact choice of end years, if the time period taken is long enough). 0.7 degrees is the number most often quoted for the amount of warming since the pre-industrial period. BV]

  771. GDY Says:

    Pofarmer, Andreas and Bart –
    the analysis VS performed shows that the GISS data are still consistent with “no trend”. The statistics used in some previous papers claiming “trend with statistical significance” (not sure anyone has compiled this list, in response to VS’s request to do so) were based on the wrong statistical analysis (OLS, which assumes stationarity in the time series).

    VS has gone even further and shows that based on the data just from 1880-1935 (the variance covariance structure), the GISS data are still within the 95% confidence interval for all years through 2008.

    VS has scrupulously avoided any discussion of the physics of climate science (to his credit, in my opinion).

    Frankly, to me, his result is not surprising, given that the Atmospheric Temperature record (not including Ocean Temps at any depth) is a poor [edit] proxy for “global warming”.

    What he has done is take away the ability of anyone to declare things like ‘we have certainty that co2 is creating unprecedented warming’. The statement of certainty is inappropriate given the GISS data structure.

    VS – sorry if I have misstated your analysis.

    [Reply: No scientist in their right mind has ever proclaimed mathematical “certainty”. BV]

  772. eduardo Says:

    @ VS

    VS wrote
    ‘Now, let’s say one argues that anthropogenic forcings after 1950 in part caused the warming that we then observe as an anomaly in 2008. In that case the anomaly value in 2008 ought to be deviant, considering the (autoco-)variance structure we observe before, say, 1950. ‘

    It seems that now we are understanding better what the other says. I cannot check your calculations in detail (I really do not have much time), but in principle I would not object to that. If you find that the ‘natural variations’ are cointegrated and the recent variations can be described with that model, it is fine for me.
    The origin of this discussion between us was the misunderstanding of the GRL paper.

    To the question of testing the theory with model simulations, perhaps again we are misunderstanding each other. I meant that models embody the theory, and the theory is then tested by performing simulations with different forgings (hypothesis) and checking with observations.

  773. eduardo Says:

    @ Willis,

    Willis wrote,
    ‘Finally, you did not answer the one question I asked, which was, what would it take for you to give up your belief that CO2 is the cause of the post-1950 warming?’

    I answered this question already in our blog, hidden in the comments probably http://klimazwiebel.blogspot.com/2010/01/ten-years-of-solitude.html

    Actually I would be happy if I were wrong, and CO2 caused no warming. I would ask you also: what would it take for you to believe that CO2 is the main cause of post-1950 warming? What would the rate of warming have to be in the next, say, 2 decades for you to acknowledge that perhaps there may be some truth in AGW?

    Willis wrote
    ‘ If you have some alternate explanation why the earth’s temperature has stayed within ±3% for half a billion years despite meteor strikes and millennia-long volcanic eruptions and wandering continents, what is that explanation? ‘

    Well, I do not agree with you that the Earth’s climate has been stable. 2% temperature change is more or less what models would predict for 2100. For illustration, a 2%-3% change in the global mean temperature means the following difference: being able to enjoy a summer afternoon outdoors in Stockholm drinking a beer (alcohol free for sure) as in the present interglacial, or being buried on the same spot by 3 kilometers of ice 20 thousand years ago. These differences were caused by very minor changes in the orbital parameters of the Earth. I don’t think any human capable of comparing these two situations would agree that the climate is stable.
    Probably this is OT here, but perhaps you can give it a critical thought

  774. Jeff Id Says:

    Bart,

    I left a reply at my blog; my comment was to Roman, who is a statistician and would be more appreciative of the stats discussion here. When you direct someone to an interesting point in a thread with 700+ comments, it’s important to narrow it down to the part that you think will interest them. It’s not a slight of your post.

    This has been an excellent thread and very much worth reading. I’ve read “your post” and am about halfway through the comments now.

    Oddly enough, you should take the time to read mine. It is also on the global trend, and it concerns a better method for combining surface station data. You will get a higher trend and a more accurate representation of the data. Whether it’s significant or not is up to different people.

  775. Shub Niggurath Says:

    I think Eduardo means “different forcings”, not “forgings. Is that correct?

  776. lucia Says:

    VS–
    I have a question that goes way back to comment March 23, 2010 at 14:13.

    Using

    D(GISS_all(t)) = constant + AR1*D(GISS_all(t-1)) + AR2*D(GISS_all(t-2)) + AR3*D(GISS_all(t-3)) + error(t)

    You found

    Constant: 0.006186 (0.1302)
    AR1: -0.452591 (0.0000)
    AR2: -0.383512 (0.0000)
    AR3: -0.322789 (0.0003)

    You deemed the constant not statistically significant, and if I am not mistaken, all further tests assume the constant is identically zero.

    Have you computed the statistical power of the test to reject a null trend using the alternative hypothesis that the trend really is 0.006 C/year? That is, if the data really do conform to having a trend of 0.006 C/year, and AR1-AR3 are as you specify above, and the error(t) has the innovation you found suitable, in what fraction of all possible realizations of this process would you have managed to reject m=0?

    I ask this because, when testing claims by proponents of AGW, I don’t really consider treating m=0 as a null entirely justified. I’m pretty leery of accepting as conclusive the results of tests that hinge on the assumption that m=0, if the basis for believing this is that m=0 failed to reject, using a test that had low power relative to the values of trends that proponents of AGW actually expected to occur during the period in which you applied your tests.

    (Also, so that I don’t have to trace back too far in the comments: what span of years did you use when obtaining those specific coefficients? 1900-2000? 1900-1950?)

    Thanks.

  777. Howard Says:

    Am I getting this right?

    If VS’ stochastic analysis is correct, the temperature increase over the latter 2/3 of the 20th century could be within the expected range from the same climate forcing mechanisms acting during the 1880-1935 period. Therefore, it may be possible that the warming is mostly natural.

    One could also conclude (if I read his result correctly) that the bottom curve is an equally likely potential 2008 temperature due to natural (or 1880-1935 forcings) variation. If this were the case, then the CO2 forcing/feedbacks could quite possibly be responsible for 1.8 deg C warming. Taking this one step further, if the 5-sigma curves (as per “hard” science *convention*…LOL) were plotted, the potential CO2 forcing/feedback warming could be even greater.

    This analysis provides red meat for all preconceived notions: 1) It’s all natural or 2) It’s much worse than we thought.

  778. Pofarmer Says:

    No scientist in their right mind has ever proclaimed mathematical “certainty”

    Then what’s the point of the whole “the science is settled” routine?

    [Reply: I don’t know; you tell me. It’s mostly used as a strawman argument, whereas scientists would hardly ever call their field of science “settled”. Why would they be scientists if it were? See also RC’s piece. BV]

  779. Pofarmer Says:

    Am I getting this right?

    No.

  780. VS Says:

    Hi lucia,

    Try modifying my code above and running the Monte Carlo yourself. You need to run it on 55 observations (the estimation sample), and the variable of interest is x(i). Make sure you add the constant (i.e. the drift parameter) in your data generating process (i.e. in the equation for x(i)).

    The standard error of the regression (i.e. the standard error of error(t)) is 0.097597. The coefficients are indeed:

    Constant: 0.006186 (0.1302)
    AR1: -0.452591 (0.0000)
    AR2: -0.383512 (0.0000)
    AR3: -0.322789 (0.0003)

    Run it like 50,000 times, and save the Boolean result of your hypothesis test (i.e. reject=1, fail to reject=0). The average over these 50,000 iterations is your estimated statistical power.

    Do share your code/results.
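
    To be concrete, a bare-bones numpy version of that Monte Carlo would look roughly like this (a sketch, not the EViews routine I posted; the coefficients are the ones quoted above, and the starting values are zeroed for simplicity):

        import numpy as np

        rng = np.random.default_rng(1)
        n, reps = 128, 50000
        c, ar, se = 0.006186, (-0.452591, -0.383512, -0.322789), 0.097597
        rejections = 0
        for _ in range(reps):
            d = np.zeros(n)
            eps = rng.normal(0.0, se, n)
            for t in range(3, n):   # DGP: drift + AR(3) in the first differences
                d[t] = c + ar[0]*d[t-1] + ar[1]*d[t-2] + ar[2]*d[t-3] + eps[t]
            # re-estimate drift + AR(3) by OLS and test H0: drift = 0 at (roughly) 5%
            Y = d[3:]
            X = np.column_stack([np.ones(n - 3), d[2:-1], d[1:-2], d[:-3]])
            beta = np.linalg.lstsq(X, Y, rcond=None)[0]
            resid = Y - X @ beta
            s2 = resid @ resid / (len(Y) - X.shape[1])
            se_c = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
            rejections += abs(beta[0] / se_c) > 1.96
        print('estimated power: %.3f' % (rejections / reps))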

    Also, we find a statistically insignificant drift coefficient (p-value>0.10). What makes you think it is in fact significant? It seems like a circular argument to me (i.e. climate scientists estimated a misspecified OLS trend that implies that there is ‘trended’ warming, therefore there must be a warming ‘trend’, therefore the proper test, which says that there isn’t any trend, is wrong.. or something like that).

    I’m going out on a limb here, but aren’t my results in line with climate science results that in fact predict we are currently in a long term cooling trend due to Milanković cycles (Imbrie (1980))?

    —————-

    Hi Bart,

    That’s indeed the best ‘estimate’ of realized warming, 1881-2008, I can think of given the data I have.

    If you know the standard-errors of the various data-point (i.e. global mean temperature) estimates, we can check if it’s statistically significant via a difference in means test, while accounting for correlation in estimator distributions (it should be, unless NASA can’t measure at all, which I sincerely doubt).

    In this case OLS trend calculations don’t do what you want them to do.

    Again, the basic assumption of trend-stationarity is violated, ergo that confidence interval doesn’t mean what you think it means (i.e. it isn’t the ‘confidence’ of the estimated rate of change).

    The very first Gauss-Markov assumption (often overlooked), which makes the OLS estimator the Best Linear Unbiased Estimator (BLUE), is that of a correct specification. Given that the deterministic OLS trend specification assumes trend-stationarity, which we have formally rejected, the model is misspecified, and this assumption is violated.

    —————-

    Hi Eduardo,

    Thank you for your reply. I do hope you will have time to look at those calculations in the future. I’m interested in your feedback

    VS

    [Reply: So you are indeed implying that the (rate of) change in temperature is better estimated by just taking the difference between two years? And that the best yet still simple way would be to calculate the mean of some series of points at the beginning and the end of the interval? Do I understand you correctly? Btw, Milankovitch forcing to go towards cooling takes a few tens of thousands of years. Hardly relevant for what we’re talking about here. BV]

  781. Søren Rosdahl Jensen Says:

    VS:
    In your upper figure (at this page: http://img146.imageshack.us/img146/6674/deterministicvsstochast.gif) you present the result of forecasting with the trend from 1880-1935. As far as I can tell you do NOT take autocorrelation into account when estimating the prediction intervals.
    Is that correct?

    I can reproduce that figure by calculating the OLS trend estimate and then calculating prediction intervals with the formula given here as formula (5):
    http://www.amstat.org/publications/jse/secure/v8n3/preston.cfm
    i.e. by not correcting for autocorrelation.

    Is it correct that you did not correct for autocorrelation in the first figure?

  782. VS Says:

    Hi Søren Rosdahl Jensen,

    “As far as I can tell you do NOT take autocorrelation into account when estimating the prediction intervals. Is that correct?”

    No, it’s not.

    The forecasting errors are calculated by EViews, and this is a plot. You can extract the data from the link itself.

    As you can see, these are not the forecasting errors of a trend-stationary process with no AR terms (which would simply be a flat line).

  783. VS Says:

    PS. The time axis of that chart is incorrect. I start in 1885, not 1881. Apologies.

  784. Søren Rosdahl Jensen Says:

    VS:
    Thanks, so I guess it is just a weird coincidence that my procedure of calculating the prediction intervals gives a figure similar to yours.

    I have no clue what EViews is; I have done my calculations in Scilab using the formulas described. It is rather straightforward to replicate using e.g. Matlab.

    Is forecasting error = observation-forecast from trendline?

    I am a bit confused over this forecasting stuff. I am used to calculating trendlines to test if data is significantly different from noise, not to using the trendline to forecast future changes.

  785. VS Says:

    Hi Søren Rosdahl Jensen

    The figure is probably almost the same, because as you can see, the forecasting error doesn’t deviate much (it varies between 0.09 and 0.12).

    If you take a closer look at the chart though, you will see it’s not a straight line.

    As for forecasting intervals (of such simple models), trust me, it’s hardly rocket science :) Any basic econometrics book should have at least a section on it.

    PS. People, I know for a fact that all the textbooks I referred to are ‘out there’, if you catch my drift.

  786. Søren Rosdahl Jensen Says:

    I have uploaded my graphs here.
    This is the one I think looks like yours:
    http://img710.imageshack.us/i/compareb.png/
    This is with an increased prediction interval to account for autocorrelation, might be too simplified though:

    Please be gentle, I am no statistician…

    Cheers

  787. Dave McK Says:

    [Reply: No scientist in their right mind has ever proclaimed mathematical “certainty”. BV]

    Heh. That is simply precious, Bart. Now you wish to quibble about whether they meant ‘mathematical’ or not, I suppose, because there’s no quibble about the question of the sanity.

  788. VS Says:

    “Please be gentle, I am no statistician…”

    :)

    I think your software package automatically performed the correction and now you did it once more. A trend stationary process, with no autocorrelation, looks like this:

    y(t)=a+b*t+e(t), with e(t) ~ White Noise

    Now, when you remove the trend, the remaining detrended series is detrend_y(t)=e(t). The error is white noise with constant variance over time (remember: the process is trend-stationary).

    Your forecasting errors are thus flat over time, because the deterministic trend contains all the time-related information. The error is simply uninformative white noise, broadening your confidence band as the ‘error of the regression’ increases.

    Therefore I think you performed the correction twice ;)
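
    (A quick way to see the contrast in numbers: for a trend-stationary process the h-step-ahead forecast error is just e(t+h), so its variance is sigma^2 at every horizon, while for a pure random walk the shocks accumulate and the variance is h*sigma^2. A two-line check, with an illustrative sigma:)

        import numpy as np

        sigma, h = 0.1, np.array([1, 10, 73])                        # innovation s.d. and horizons (illustrative)
        print('trend-stationary forecast s.e.:', np.full(3, sigma))  # flat in the horizon
        print('random-walk forecast s.e.:', sigma * np.sqrt(h))      # grows as sqrt(h)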

    I think repetition is starting to work:

    The assumption of trend-stationarity is violated in the GISS series due to the presence of a unit root.

    Unit root test results here and here allowing for a structural break in the Ha. Monte carlo diagnostics on PP test here, and on ADF here. Forecasting confidence intervals for trend stationary and stochastic trend specification over 1885-2008, with parameter estimates based on 1881-1935 sample, here, with accompanying figures here.

    References:

    ** Woodward and Gray (1995)
    – confirm I(1), don’t test for I(2)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Kaufmann et al (2006)
    – confirm I(1), (they state however that temperatures are ‘in essence’ I(0), but their variable GLOBL is confirmed to be I(1), and treated as such)
    ** Zorita et al (2008)
    – in line with calculations implying non-stationarity, for all global-mean series
    ** Beenstock and Reingewertz (2009)
    – confirm I(1)

  789. Søren Rosdahl Jensen Says:

    VS:
    My software package does nothing except what I want it to.
    The data I use is the GISS global annual mean anomalies.
    Then I calculate the OLS estimates by coding in the well known formulas. After that I calculate the prediction interval as specified by the link I gave.
    No built in functions are used, all formulas coded from scratch, 67 lines all in all
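
    (For reference, here is the same prediction interval calculation as a Python sketch; my actual script is in Scilab, this is just the textbook formula (5) from the link, coded up:)

        import numpy as np
        from scipy import stats

        def prediction_interval(x, y, x0, alpha=0.05):
            # classical OLS prediction interval at x0 (x, y: numpy arrays)
            n = len(x)
            b, a = np.polyfit(x, y, 1)              # slope, intercept
            resid = y - (a + b * x)
            s = np.sqrt(resid @ resid / (n - 2))    # residual standard error
            tcrit = stats.t.ppf(1 - alpha / 2, n - 2)
            half = tcrit * s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / ((x - x.mean())**2).sum())
            return a + b * x0 - half, a + b * x0 + half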

    No detrending or correction is involved, at least not in my script.
    Please define forecasting errors. If you refer to the coloured lines, they are the prediction intervals and not constant over time; you can see that either from the equation or from the graphs: they curve.

    And yes, I am aware of your objections to the OLS trend estimate for the GISS data. I am for the moment comparing my OLS method to yours.

    One question: Why did you choose to analyse the annual data rather than the monthly? To avoid annual cycles?
    If you use the monthly data you will have a lot more data points.

    BTW: something went wrong with your collection of links in the previous post.

    Thanks for your replies, I hope to learn from them,
    Cheers

  790. VS Says:

    Hi Bart,

    Allow me to snipe your reply inside my post here:

    “So you are indeed implying that the (rate of) change in temperature is better estimated by just taking the difference between two years, than by fitting a linear trend?”

    Yes, fitting a linear trend is misspecification.

    “And that the best yest still simple way would be to calculate the mean of some series of points at the beginning and the end of the interval? Do I understand you correctly?”

    Best way? Might be, I haven’t checked the literature, and I’m no measure theorist.

    It is however better than estimating a misspecified OLS regression, if you are interested in the rate of change. There are no ‘stochastics’ involved here; you are performing arithmetic on a realized temperature record. The question is: what was the average rate of increase?

    “Btw, Milankowitch forcing to go towards cooling takes a few 10s of thousnads of year. Hardly relevant for what we’re talking about here. BV]”

    What?

    It’s highly relevant. My estimation results show no significant warming trend. This is completely in line with a decrease almost equal to 0/year (because the cooling trend is spread out over some 29,000 years, or so).

    Again, we’re talking about a trend!

    Bart, with all due respect, unless it’s a short clarification, I would prefer it if you would post a separate comment (like you did in the ‘good old days’ some week and a half ago ;), and not half a paragraph at the end of my posts :P

    Cheers, VS

  791. Alex Heyworth Says:

    VS and Scott, I am delighted!

  792. philc Says:

    It would be really nice if climate scientists started using heat terms in their normal physics/engineering sense:
    sensible heat: heat represented by a change in temperature of a mass (cal/g per deg C)
    latent heat: heat contained in a phase transition, such as ice to water (cal/g)

    Heat is not a substance (caloric), but a flow of energy.

    When radiation strikes a surface, such as the ocean, two things happen: the water warms (sensible heat) and the temperature rise causes evaporation (latent heat). The overall balance between the two is, as the IPCC is wont to say, “not well understood or quantified”. For example, sunlight on a choppy, well-mixed sea is likely to primarily heat the water, whereas sunlight falling on a stagnant pond goes mostly into evaporation, since the water temperature is limited by the wet bulb temperature.

  793. Alan Wilkinson Says:

    VS, I question the reality of your 95% confidence lines of the projection given the assumption of I(1). Since you have known data to 1935 it makes no sense to me that the origin of the curves is symmetric back to 1880.

    It seems to me you are forecasting the probabilities from an 1880 starting point whereas the forecast range actually begins in 1935. So your spread in 2000 is incorrectly centred at the very least and probably incorrect in width as well.

  794. philc Says:

    I think most everyone here answering VS’ posts needs to take another look at what he has said. Wading through everything (and I have to thank you all, and Bart, that for the most part name calling and such has been fairly limited), VS’ basic comment is:

    The major tenet of the global warming hypothesis is that CO2 and other GHG have been increasing and, as shown by the obvious correlation between temperatures and CO2, they must be related. Some basic radiation absorption physics gives a basic model of the process, so CO2 causes warming.

    VS points out that in order to show causation using the existing data you must at least first show there is a correlation. If there is a real correlation, then you can go further.

    BUT, the appropriate statistical tests show that the two time series are not comparable, so all the ordinary linear regression comparisons that have been made are meaningless and do not support the physical model. Also, any “correlation” between them is meaningless because they are different statistically and can’t be correlated statistically; only an eyeball correlation works, but it is meaningless.

    So, VS’ comment that this raises a big red flag seems to be right on the money. There is no correlation between temperature and GHG, so some serious looks are needed at the theory to try and figure out what is actually going on.

    I think that in layman’s terms it means that when you look at the temperature data you can’t predict with any confidence whether or not the next data point will be larger or smaller than the current one, or by how much.

    An obvious next step would be to examine the predictions from the climate models and see if they generate statistically similar data- i.e. I(1) for temperature and I(2) for GHG forcings.

  795. lucia Says:

    VS Says:
    March 25, 2010 at 21:30

    Try modifying my code above, and running the monte carlo yourself.

    I don’t plan to modify your code, run it or share the code. Sharing code is useful in some circumstances, but I think communicating through code when words will do is supremely inefficient, to the point of being idiotic. I’m also not trying to verify the results you already reported; I’m trying to learn results you did not discuss in your comment. I’d prefer to communicate using words.

    Also, we find a statistically insignificant drift coefficient (p-value>0.10). What makes you think it is in fact significant?

    Why are you asking me what makes me think the drift is significant? I didn’t suggest it’s statistically significant.

    I’m asking you if you’ve tested the statistical power of your “fail to reject” of m=0, and if you did, I’d like to know what you found. As far as I can tell, none of the statistics you quoted back to me represent the statistical power.

    To verify this, I think this line

    Constant: 0.006186 (0.1302)

    Communicates:
    For the “constant”, the best estimate is 0.006186 (with mystery units) and
    13.02% = p value.

    Right?

    None of those represent the statistical power. Or have I misinterpreted something?

    If so, tell me which. Or, if you did not do any analysis to discover the statistical power, just say you didn’t and you don’t know whether, given some phenomenologically justified value of the constant, the power = (1- false negative rate) is 5% or 99.9999% or some value in between.

    (Or alternatively, the statistical power assuming that 0.006186 whatever units is actually true.)

    It seems like a circular argument to me (i.e. climate scientists estimated a misspecified OLS trend that implies that there is ‘trended’ warming, therefore there must be a warming ‘trend’, therefore the proper test, which says that there isn’t any trend, is wrong.. or something like that).

    What argument do you think I’m making? I’m not telling you your test is wrong. I’m asking you to tell me something about the statistical power of your rejection of the trend.

    I’m going out on a limb here, but aren’t my results in line with climate science results that in fact predict we are currently in a long term cooling trend due to Milanković cycles (Imbrie (1980))?

    I have no idea. Should I care? I don’t see what this has to do with my question about the statistical power of your test?

  796. sod Says:

    Observations imply that, if the data are to be believed, the planet has warmed gisstemp(2008)-gisstemp(1881)=0.43-(-0.2)=0.63 degrees, in the period 1881-2008.

    sorry, but this is [edit].

    VS is nothing but another guy who is trying to confuse the uneducated by lots of math.

    the idea that you should just compare the difference between two years instead of looking at a linear trend is [edit].

  797. hengav Says:

    sod
    [edit]
    Play nice, everyone else is…

    What is incorrect about the assertion of 0.0496 deg/decade of warming over the series provided? For me it is just another estimate, using a statistical method NOT used in any of the currently published climate works. Why do you fear it so much? It shows “warming”. I don’t dispute the finding, nor do I consider it the “end game”. It is a very good starting point for more analysis at the low end of current estimates of the rate of change in global temperature.

    [edit]
    Keep the thread alive.

  798. hengav Says:

    See note above: tell me why my comment was rejected whilst sod’s is allowed to stand. Please don’t stand on item #5 below. For those who know, my comment stands.

    “Accusations of other people or of a profession.

    Derogatory language.

    Endless repetitions. If you made your point, don’t keep repeating it without bringing extra information or arguments to the table. It’s okay to agree to disagree.

    Comments offending any of these rules will be deleted (or at least the offending part will be). You’re welcome to re-submit off-topic comments to an open thread. (Just a tip: Save your comment in a word processor to make this easier.) Repeat offenders will be put on moderation.

    Needless to say, I’m the sole arbiter of which comments are allowed and which are not.”

  799. John Whitman Says:

    ”””sod Says: March 26, 2010 at 06:47 ”””’

    sod,

    I will not repeat what you said.

    Perhaps it is not my place to say so, but please: we’ve had almost 10 days of civil discourse on this thread, with respect shown by the commenters to each other.

    John

  800. sod Says:

    ahm, i only posted an hour ago. i simply think that Bart hasn’t seen my comment yet.

    and he is of course free to erase my comment above (and/or this one).

    but i really can’t think of a better word to describe the VS claim about the comparison between two years. it is about as unscientific as things get. and is showing a really extreme lack of insight.

    so please let me repeat:

    the idea that you should just compare the difference between two years instead of looking at a linear trend is plain out stupid.

  801. hengav Says:

    sod is in violation of Item #3 of the moderator’s code: repetition.

    [edit]

  802. VS Says:

    sod,

    Take a look at Bart’s graphs.

    Now, if we were ‘estimating’ the realized ‘historical/empirical rate of change’, should we truly have confidence intervals that wide? Is there really so much uncertainty in figuring out how fast warming / cooling actually occurred in our realized temperature record?

    Regression analysis (i.e. the estimation of the following conditional probability: E[Y|X=x]) serves to uncover underlying data generating processes. Currently we are investigating whether there is a ‘trend’ in the data or not. Read the whole discussion. Then barge in with a ‘constructive’ comment like that.

    Everybody, discussing how to ‘estimate’ a realized rate of change is OT. If you want to know how to do that best, ask somebody specializing in that (i.e. not me).

    What we are discussing here is the testing of scientific hypotheses, in this case the presence of a trend in the GISS record, using formal methods on limited observations (i.e. econometrics/statistics).

    ——————

    Hi lucia,

    There is nothing in the data to suggest that there is in fact a positive trend in the GISS series, or that our stochastic trend equation is misspecified. All my diagnostics are up there.

    Furthermore, theory suggests that we shouldn’t find a positive warming trend (e.g. Imbrie (1980)).

    I have performed no analysis to determine the statistical power of that particular hypothesis test, because I don’t think it’s relevant. I might do it later, if it’s the point of contention, but right now, other things are taking up more time (really :).

    For the record: we performed a hypothesis test, using a BLUE estimator (i.e. our first difference series is stationary, our errors tested for both independence and normality, so the estimator is in good shape), and we reported a test result. Note that the regular deterministic OLS trend regression in fact fails its diagnostic tests (i.e. we find a unit root). That result is thus meaningless for establishing the presence of ‘trends’, because the estimator is biased (and that’s putting it mildly).

    Now, apparently you want to know what the statistical power of that particular hypothesis test is. So run the Monte-Carlo. I more or less explained how to modify the code I posted up there (although I have to correct myself, you need to run it on n=128, not n=55).

    I’m interested in your findings (and when you report them, share the code please).

    VS

    [Reply: discussing how to ‘estimate’ a realized rate of change is on topic, as it relates to the global avg temp series and its statistical interpretation. Otherwise your claim that there is no trend would be similarly off topic. BV]

  803. VS Says:

    correction: (i.e. the estimation of the following conditional expected value: E[Y|X=x])

  804. sod Says:

    I think that in layman’s terms it means that when you look at the temperature data you can’t predict with any confidence whether or not the next data point will be larger or smaller than the current one, or by how much.

    that is a triviality. it is called weather noise.

    i can tell you with 100% confidence, that 2010 will not be colder than 1880. neither will 2011.

  805. HAS Says:

    sod, I’m not sure if this is just bored graffiti on your part, but on the off chance that you genuinely are asking if putting a linear trend through the dataset is better than just comparing the start and end points, then what this thread is telling you is that this graph/data doesn’t have a linear trend in it, so it probably is. I also suspect that VS (even though he isn’t an Australian, Kiwi or Pom) might have been a touch ironic in suggesting that the difference between the start and finish point would be as good as anything.

    However I should say that if you were “uneducated” and were asked how much the temperature had increased over the period you’d probably go for VS’s answer rather than yours.

  806. HAS Says:

    whoops “probably isn’t”.

  807. Tim Curtin Says:

    Sod said (alas!) “i can tell you with 100% confidence, that 2010 will not be colder than 1880. neither will 2011”. So can I, as GISS 1880 excludes ALL of tropical Africa, and most of Central America and SE Asia, while its 2010 and 2011 INCLUDE those areas. But no, for Sod, leaving out hot places in 1880 and bringing them to account in 2010 does nothing for the average in either year (as Tamino keeps on proving).

  808. HAS Says:

    and sod, in respect of your last post, philc has somewhat oversimplified.

    In fact what is being said here is that on the data, and given the temperature in 1880, the temperature in 2010 is not particularly surprising, even if recent CO2 increases are put aside.

  809. VS Says:

    Thanks HAS,

    A slight correction (to be rigorous ;):

    Considering the stochastic trend (with no drift parameter, so no ‘tendency’) present in the series 1881-1935, the temperature levels reached in 2008 are not surprising (or ‘anomalous’).

    Figure 2 exemplifies that.

  810. HAS Says:

    I rather felt that under the circumstances “on the data” and “even if recent CO2 increases are put aside” were acceptable simplifications.

  811. VS Says:

    Hi HAS,

    In a normal discussion, I would say you are right, but since the discussion is so ‘tense’, I prefer to say exactly what the data show us :)

    Nevertheless, thanks ;)

    VS

  812. Michael Says:

    “At the risk of misunderstanding you again, are you implying that the best estimate of the (rate of) change in temperature is to just take the difference between two years? ” (BV)

    Sorry to intrude (I just read the whole thread in one go, btw): Yes, I think you misunderstand again (or still). All that can be said is that from 1881 to 2008 there has been, according to GISS, a 0.63 deg. increase in temperature. You can not, nay, must not “estimate the rate of change”. Why? Integration!

    How this was made clear to me back at school: Leaving Paris 8.00h in the morning, arriving in Berlin 18.00h in the evening, 1000km distance. ALL you can compute is average velocity. You can’t know anything else from that data alone. That traveller may have been going steadily at 100 km/h, or (accelerating, braking) with an average velocity of 100 km/h but at a speed of 0 km/h at times (taking a nap, gas station) and 180 km/h (Autobahn!) at other times. He may even have walked to the airport and then taken a plane. (Massive acceleration!) We just can’t know. Physics has nothing to do with it. All 3 scenarios are perfectly possible. The first one is implausible, of course. But that’s just common sense.

  813. TINSTAAFL Says:

    I wonder why Lucia is in such a defensive mode?

  814. AndreasW Says:

    VS

    There’s one thing not being discussed. All this is under the assumption that gisstemp will stay the same. 30 years ago there was cooling between the ’40s and the ’70s. That’s all gone by now. Watching giss is like watching a tree grow. So what would it take in terms of giss-adjustment to get the “desired trend”? What would that graph look like? Maybe it’s easier to adjust the variance structure so that the unit root would disappear.

    Do I love this thread or what? It’s like watching a heavyweight fight.

    Tinstaafl

    I agree about Lucia. She used to let the numbers talk, like she did with the spherical cows. “I don’t plan to modify your code, run it or share code”. Sounds more like Phil Jones to me.

  815. eduardo Says:

    VS,

    I have used your parameters estimated in 1880-1935 and I could reproduce your results in the figure stochastic-deterministic. However, I think that this model is not compatible with what is known about past natural temperature variations. In addition to simulating Monte Carlo series for just 55 years, I generated Monte Carlo series for 1000 and 10000 years. It is known that the range (maximum minus minimum) of temperature variations of, say, 30-year means has not likely been larger than 1 degree, and definitely not larger than 2 degrees, in the past 1000 years. For the last 10000 years (the Holocene) the range has been at most 2 degrees.

    I generated with your model 1000 series, 1000 years long each.
    I smoothed them with a 31-year running mean
    I calculated the range = maximum minus minimum

    For the 1000-year case, I get that 895 of the series (= 89.5%) display a range larger than 1 degree, and that 200 (= 20%) display a range larger than 2 degrees. This would mean that only 10% of the generated series would be compatible with observations, and your model would be at the edge of incompatibility.

    For the 10000-year case, I get that 1000 (all) series display a range larger than 2 degrees and that 500 (= 50%) display a range larger than 5 degrees. Note that 5 degrees for the global average temperature in the Holocene is impossible (that’s more or less the difference between glacial and interglacial). In the Holocene case you have to also consider that part of this observed range is externally forced (you mentioned the Imbrie theory), i.e. the possible stochastic range is lower than the observed 2 degrees at most. So for the Holocene case your model is very, very likely (p<.001) incompatible with observations.

    In retrospect these results are not surprising, since a unit root model produces series with increasing variance over time. It seems to me that this model is not able to describe the stochastic variations, and either the period 1880-1935 is contaminated by the external forcing or the model is simply not adequate.

    [Reply: Any chance you could put up a figure of your results somewhere? BV]
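
    A minimal sketch of eduardo’s experiment, assuming the ARIMA(3,1,0) difference coefficients quoted later in the thread (-0.44, -0.37, -0.31) and an innovation SD of 0.1; eduardo’s actual parameter estimates may differ:

        # Simulate long I(1) series, smooth with a 31-year running mean,
        # and tally the range (max minus min) of the smoothed series.
        import numpy as np

        rng = np.random.default_rng(42)
        ar = np.array([-0.44, -0.37, -0.31])   # assumed AR terms on the differences
        n_years, n_series, window = 1000, 1000, 31
        kernel = np.ones(window) / window

        ranges = np.empty(n_series)
        for i in range(n_series):
            d = np.zeros(n_years)              # first differences
            shocks = rng.normal(0, 0.1, n_years)
            for t in range(3, n_years):
                d[t] = ar @ d[t - 3:t][::-1] + shocks[t]
            temp = np.cumsum(d)                # integrate once: the I(1) level series
            smooth = np.convolve(temp, kernel, mode="valid")
            ranges[i] = smooth.max() - smooth.min()

        print("fraction with range > 1 deg:", np.mean(ranges > 1.0))
        print("fraction with range > 2 deg:", np.mean(ranges > 2.0))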

  816. Marco Says:

    So, Tim, are you claiming Africa is warming faster than the rest of the world? After all, we’re discussing temperature *anomalies*, not absolute temperature. You did know that, right?

  817. Alex Heyworth Says:

    # TINSTAAFL Says:
    March 26, 2010 at 09:44

    I wonder why Lucia is in such a defensive mode?

    Could be that she has spent a good deal of time and energy over recent times doing OLS regression analysis of temperature time series, now VS has come along and said it’s all a waste of time.

  818. Paul Carter Says:

    A very illuminating thread – hats off to both sides of the debate – some very good commentary here.

    Throughout the thread and occasionally in refutation to VS there have been claims that Climate Models are Physics based. This is not true. At least 2 Climate Models cited by IPCC AR4 wg1 use Neural Networks – these are about as far as you can get from real Physics in a computer model.

    From IPCC AR4 wg1 Chapter 9, P 690:
    “Consistently, a neural network model is unable to reconstruct the observed global temperature record from 1860 to 2000 if anthropogenic forcings are not taken into account (Pasini et al., 2006).” !!

    For further examples search on ‘neural’ throughout AR4 (available at http://www.ipcc.ch/ipccreports/ar4-wg1.htm).

    Neural networks are a kind of efficient “emulation function” which work under some circumstances but not always correctly. They’re supposed to have some kind of “intelligence” as they mimic neurons in a crude fashion – but their performance and even the name ‘neural network’ is all hype. They’re certainly not to be trusted for anything as complex as climate. There may be some justification for using them in very minor roles in Climate Models, but not to the extent they’re currently abused.

  819. Richard Holle Says:

    I am a critical thinker, so before making a comment I read the whole thread to this point, not a light task. My main concern would be that the Natural Variability components are not being considered, along with the CO2 ppm contribution, which is leading to all of the misunderstanding about the relative contribution of the AGW parts of the puzzle.

    Many times in most threads, and several times in this one, I have run across comments suggesting that “If we knew the background variability better, we could reduce the noise component of the compound signal that is the climate.” The natural cyclic drivers that are not being considered have been removed from the equation since the early 1900s.

    The Sun-Earth-Moon system is an interactive composite that cannot be separated from the weather/climate puzzle and still yield valid answers. I have been studying weather data since 1990, with regard to the patterns generated in the atmosphere by lunar declinational tides that are driving the Rossby wave patterns and jet stream production. The QBO has the same average period as the combination of the lunar phase / declinational beat period. The other decadal-long periods found in the global circulation are also combinations of driving periods and periodic oscillations of air masses in ocean basins.

    The link below is the latest in my efforts to explain the interactions that need to be understood in order to solve for the unknown natural periodicities that explain the trends in tornado production and hurricane generation and strength maintenance, to the point that they are predictable well in advance.

    http://research.aerology.com/aerology-analog-weather-forecasting-method/

    On the main page http://www.aerology.com/national.aspx

    you will find maps generated back in 2007 for the following 6 years until 2014, showing the past three repetitions of the cyclic patterns in the weather, based on a repeat of the Saros cycle beat frequencies with the 18.6-year Mn signal, outlined and explained in the first linked text sections. These global atmospheric circulation patterns repeat cyclically to the point that they are usable as a daily forecast. (With adjustment for changes in solar output, it would be better.)

    I hope that using these cyclic patterns, to better understand the background “Noise” will allow you to get a reduced formula for the better consideration of the Solar output, and CO2 components of the climate driving puzzle.

    The lunar declinational tides are the main driver of the meridional flow surges into the mid-latitudes that produce almost all of the severe weather, driving the variations in the El Nino / La Nina oscillations when combined with the outer-planet synod conjunctions, resulting in the compounded signal that is the background climate noise not attributable to CO2 forcing, but still interacting with the solar forcing, both magnetically and by TSI output.

    I am not saying that CO2 does not play a part, I am just showing you how to solve for the remaining unknowns in the equation.

  820. Bart Says:

    VS,

    Me: “So you are indeed implying that the (rate of) change in temperature is better estimated by just taking the difference between two years, than by fitting a linear trend?”

    You: “Yes, fitting a linear trend is misspecification.”

    How could taking the difference between two arbitrarily taken points say anything useful about what happened, in the presence of large amounts of variability? If you take 2009 or 2007 as your end year instead of 2008 the estimate would suddenly be more than 0.1 degrees (i.e. more than 15%) larger. Clearly, that is *not* a good way to estimate what the increase in global avg temp has been. Even though OLS is invalid to use for these dataseries, claiming that merely taking the difference between two arbitrary points is better seems a far stretch.
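
    To make the endpoint sensitivity concrete, here is a small sketch with synthetic trend-plus-noise data standing in for the GISS series (the 0.006 deg/yr trend and 0.1 deg noise SD are assumptions):

        # Compare the two-point difference with the OLS-implied change
        # as the end year shifts between 2007, 2008 and 2009.
        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(1881, 2010)
        temp = 0.006 * (years - 1881) + rng.normal(0, 0.1, years.size)

        for end in (2007, 2008, 2009):
            sel = years <= end
            endpoint_est = temp[sel][-1] - temp[0]          # two-point difference
            slope = np.polyfit(years[sel], temp[sel], 1)[0]
            ols_est = slope * (end - 1881)                  # OLS-implied change
            print(f"to {end}: endpoint {endpoint_est:+.2f}, OLS-implied {ols_est:+.2f}")

    The two-point estimate jumps around with the noise in the endpoint years, while the regression-implied change barely moves.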

    You write:

    “theory suggests that we shouldn’t find a positive warming trend”

    and then you link to an article about Milankovitch cycles, which take tens of thousands of years? What about radiative forcing of GHG? It very strongly suggests that the globe should warm in response to more GHG. Which it has.

    You claim there is no *statistically significant* warming trend.
    Philc likewise says:

    “you can’t predict with any confidence whether or not the next data point will be larger or smaller than the current one”

    Wanna bet? (not about the next one datapoint of course, but about the average of the next 10 years being larger / not larger than the average of the past 10 years, for example?)

    You seem to go back to the thesis that the increase in temp is only stochastic (i.e. random) and not the (partly deterministic) consequence of radiative forcings acting on the system to push it in a certain direction (which, over the course of the past 130 years, has been a net positive radiative forcing, i.e. causing warming). Could you explain how this is possible in light of the physical constraints on the system, as I describe in my other post?

    You write:

    “Considering the stochastic trend (with no drift parameter, so no ‘tendency’) present in the series 1881-1935, the temperature levels reached in 2008, are not surprising (or ‘anomalous’).”

    But that model (lower panel) basically says “anything goes” (as long as the change from directly preceding values is not too large). It has hardly any predictive skill. It’s a bit like predicting my weight next year to be in between 60 and 100 kg. Well, yeah, that’s extremely likely indeed; hardly need a crystal ball or a complex statistical method for such a prediction. If you could tell me something about the chance of my weight being higher or lower than my current weight, then we’d be on to something. And my hypothesis is that that chance depends on the ratio of my caloric input and output (i.e. my personal ‘energy balance’). That last statement holds irrespective of whether a unit root is present in the timeseries of my weight against time (because there’s physics (and in this example biology) involved).

    I still haven’t seen a refutation of a deterministic trend based on the expected effects of the net forcing plus internal variation. Then you’d be on to something.

    Ideally an assessment of the significance of global change would be made on the basis of all the changes that are occurring: E.g. based on a whole series of observations or a combined timeseries that takes into account the observed changes in different parts of the climate system (which by and large all point to a warming climate): Sea level rise, ocean heat content, ice sheet changes, sea ice changes, glacier changes, ecosystem changes, radiation budget changes, etc. Something along the lines of the IGBP climate change index, though with more complete input (e.g. ocean heat content should of course be included).

  821. Steve Fitzpatrick Says:

    Eduardo,

    Thank you for your comment.

    I agree that a completely stochastic model for the temperature trend, disregarding that temperature must ultimately be controlled by an overall energy balance, will of course lead to unrealistic temperature trends over long periods. No surprise there.

    But I think the point is that over shorter periods (like the instrument temperature record), the stochastic model that VS generated really is compatible with the temperature history. The implication (for me at least) is that the certainty one can realistically assign to GHG forcing as the causative agent for the recent (last 60 years) temperature rise is not nearly so high as many have claimed. It is true that VS’s treatment ignores the physical requirements of a real system, and so can’t truly be accurate. But I think it is equally true that VS’s treatment shows that the way data has been treated in climate science often indicates a level of confidence in causation that is not really supported by the data. The result is (I think) an overestimate of certainty in the estimates of climate sensitivity and in future warming, based mainly on the results of climate models.

    Many people (like me!) would prefer that large scale public efforts to control CO2 emissions be put off until much better data (like ARGO ocean heat content and Glory mission aerosol concentrations) can better constrain climate sensitivity.

    [Reply: Of course an “anything goes” model is compatible with everything. No surprise there, and a conclusion that the uncertainty in climate sensitivity is therefore dramatically increased is quite a few jumps too far. BV]

  822. Tim Curtin Says:

    Marco: I have long been puzzled by [edit] the role of “anomalies” vis-à-vis absolute temperatures. Is it your view that there is no relationship?

  823. Søren Jensen Says:

    Question to VS (or anybody else who can answer it):

    Is trend stationarity also violated in the GISS data if you only look at the years 1880-1935?

    If not, then it is NOT wrong to calculate a deterministic trend for that period, forecast it and then compare with the actual data since 1935.
    Which is what I did in the graphs linked in this post:

    Global average temperature increase GISS HadCRU and NCDC compared

    I still can’t understand why my prediction intervals without accounting for autocorrelation look so similar to VS’s OLS result which is corrected for autocorrelation.

    In Open Thread 18 at Tamino’s blog I have posted a description of my method.

  824. lucia Says:

    I wonder why Lucia is in such a defensive mode?
    I’m not. I was trying to find out whether VS has computed the statistical power. I now know he hasn’t, and for some reason he doesn’t think it matters.

    The second part is odd because if you are going to make conclusions that a “fail to reject the null” actually confirms the null, you ought to know the statistical power of the test. If the statistical power relative to prevailing theories of the magnitude of the trend is low, then the logical inference is “I have no idea if the trend is m=0 or not. The only thing I can say is people who think m=0 have not been proven wrong.” That’s not the same as making confident statements that m=0. In contrast, if the statistical power is high relative to someone else’s theory, you have some grounds for making confident statements.

    This is why I am asking that question. It’s a straightforward, fairly standard question. People often want to have a notion of the likely rate of type II error. Given that people often want to know this, I didn’t understand why VS’s answer was so odd.

    I don’t know why VS’s answer suggested that I download the code, run it, modify it and share it. I’m only interested in asking a few questions because my readers have asked me what I thought he’d found and what it means. I’m not interested in embarking on his research topic. Why should I be? I wanted to let him know that in my answer.

    I also don’t know why my question about statistical power is causing VS to lecture me about Milankovitch cycles. Presumably he thought that was relevant to the statistical power question? I don’t have any notion how or why it’s related, so I asked.

  825. VS Says:

    I’ll try to get to everybody’s questions later, but now quickly to Bart:

    [Reply: Of course an “anything goes” model is compatible with everything. No surprise there, and a conclusion that the uncertainty in climate sensitivity is therefore dramatically increased is quite a few jumps too far. BV]

    Bart, for the 100th time, it’s not an ‘anything goes’ specification, it’s a trend estimate.

    Look at the procedure, download an econometrics book and hold it next to my posts, and you’ll understand. The math you need for that is minimal.

    You might want a tighter confidence interval, but that’s impossible if we consider time only and those observations. It’s what the data tell us. Nothing more and nothing less.

    Now, we employed formal procedures, and arrived at this result. We then evaluate our trend (Figure 2) and find that it does a much better job at capturing the projected variance than the misspecified deterministic OLS trend specification (Figure 1).

    To quote whbabcock’s post here: It is what it is!, or put differently, it’s an analytical fact.

    As for the forcings, we’ll get to them later once we start cointegrating (i.e. start employing covariates). Leave this out for now: here we’re dealing with trend estimation (101st time now).

    I also clearly explained here (and a bit down again) why OLS trend estimation is invalid in this case, even for what you want to do (i.e. ‘estimate’ the rate of change).

    Your ‘accusation’, which is more of a strawman actually, is beginning to annoy.

    Finally, I suggest you take a look at this nice post by Michael (thank you!) again. That should put a lid on the ‘rate of change’ issue.

    Again, now for the 102nd time, we are talking about trends.

    VS

    PS. I’ll try to give some more illustrations later today, I’m pretty busy right now.

  826. Bart Verheggen Says:

    VS,

    You keep referring back to your stochastic vs deterministic comparison to somehow prove it’s stochastic. I think that conclusion is unwarranted, because nobody expects your deterministic trend to have any real-world predictive value, and while your stochastic model may not be *specified* as anything goes, in a normal undisturbed climate it’s very, very close to an anything-goes situation and offers no predictive skill. If you stop using it to make far-reaching conclusions about there not being a deterministic trend, then I’ll stop pointing this out.

    I realize that OLS is invalid, thank you very much. Keep your smug attitude in check. What I’m saying is that taking the difference between two individual points is an abysmal way to estimate the change over a time period when the quantity in question exhibits a lot of variability, since it so strongly depends on the endpoints chosen. You have not provided any arguments against this.

  827. lucia Says:

    SteveF
    But I think it is equally true that VS’s treatment shows that the way data has been treated in climate science often indicates a level of confidence in causation that is not really supported by the data.
    Even though I think climate science tends toward indicating a level of confidence in causation not supported by a particular analysis, I think the scope of what VS shows is narrower. It only really addresses surface temperature time series which is not the whole of climate science.

    I think — assuming what VS shows is accurate (and no, I’m not re-running code) — he is showing that the conclusion that recent temperature levels are outside the range one would have expected (using econometric methods to analyze the data from earlier years and assuming the deterministic trend is linear) does not hold. So, it specifically addresses the conclusions in the Zorita, Von Storch and Stocker paper. (So, I’m not surprised to see eduardo here discussing, as I’m sure eduardo is interested.)

    I think the result is pertinent to the question, “If we attempt to detect warming using a purely empirical method based on surface temperature measurements only, is the theorized warming sufficient to distinguish it from noise?”

    The result is (I think) an overestimate of certainty in the estimates of climate sensitivity and in future warming, based mainly on the results of climate models.

    Of course we still have physics, and I think many of the questions people are asking VS are coming from people who are used to applying statistics in fields where we do use understanding of physics to drive some of the tests we do.

    I think VS’s posts are valuable — but the questions he is being asked are good. At my blog, DeWitt Payne posted a comment that suggests he is fixin’ to drive a ‘two-lump’ model with forcings and see how that does with the various tests VS ran. If he does so, it will be interesting to see the result, since that could bracket what we think about the noise. (It wouldn’t show VS in any way right or wrong – merely answer the sorts of questions engineers and physicists often ask. This supplements the type VS asks.)

  828. eduardo Says:

    @ Steve Fitzpatrick

    Steve,

    I would mostly agree with you. Each field has something to learn from the other: climatologists from the statisticians (for sure, and that’s what I am trying here), but we should not forget the other direction of flow. By the same token that statisticians are astonished by the naivety of climatologists in handling data (we only know how to calculate correlations..), I can only stress how often I am astonished at the naivety of statisticians writing about climate, who sometimes haven’t bothered to read a basic introduction to climate physics or climate models (of course, we all know that models are rubbish..). Both climate and statistics are difficult topics, and nobody should expect to master a new field in one afternoon. Skepticism is sound, but applied to *all* hypotheses, not just to those of the opponent.

    [Reply: Wholeheartedly agreed. BV]

  829. VS Says:

    Hi Bart,

    “You keep referring back to your stochastic vs deterministic comparison to somehow prove it’s stochastic. ” (bold added)

    (1) I posted over 10,000 words worth of formal test results to prove my point. This includes testing from both sides, and allowing for endogenously determined structural breaks in the level series. A total of five different unit root tests have been performed, each employing different methods (or different information criteria).

    (2) I furthermore investigated the performance of the ADF and PP tests via Monte Carlo simulations, and concluded that under an ARIMA(3,1,0) specification of our stochastic trend (which we arrived at through formal testing), the ADF test is exact and the PP test (the only one that ‘rejects’) is ridiculously biased (a nominal significance level of 5% results, in our case, in an actual significance level of over 80%).

    (3) I also performed elaborate diagnostics on all my test equations and specifications.

    (4) Finally, I referred you to some 10 published papers that find a unit root or non-stationarity in the GISS record (which directly implies a ‘stochastic trend’).

    I think your ‘somehow’ is a bit of an understatement. Excuse the ‘smugness’.

    Cheers, VS

    PS. Lucia, Eduardo, others, thank you for your feedback, I’ll try to get to your comments ASAP :)
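
    For readers who want the flavour of point (1), here is a minimal sketch of “testing from both sides” with statsmodels; this is not VS’s code, it omits his structural-break and Monte Carlo steps, and the placeholder series stands in for the GISS record:

        # ADF takes a unit root as the null; KPSS takes (trend-)stationarity
        # as the null. Consistent evidence for a unit root is: ADF fails to
        # reject AND KPSS rejects.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller, kpss

        rng = np.random.default_rng(3)
        temp = np.cumsum(rng.normal(0, 0.1, 128))   # placeholder I(1) series

        adf_p = adfuller(temp, regression="ct", autolag="AIC")[1]
        kpss_p = kpss(temp, regression="ct", nlags="auto")[1]

        print(f"ADF p-value (H0: unit root):   {adf_p:.3f}")
        print(f"KPSS p-value (H0: stationary): {kpss_p:.3f}")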

  830. TINSTAAFL Says:

    “Keep your smug attitude in check.” I think that’s unfair to VS, looking at the way he has behaved in this thread despite the unmoderated attacks of the usual suspects.

    I’ve been visiting “Open Mind” and, mind you, VS’s attitude is like a chorus boy’s compared with Tamino’s ;)

  831. Bart Says:

    VS,

    In your earlier reply to me you hinted that your examples of stochastic vs deterministic trends were only for illustrative purposes. But you then proceeded to draw far-reaching conclusions based on them. That leaves my original question unanswered: What are you attempting to show with these? It now appears your goal is to prove the trend is stochastic, and you (very smugly) claim superior certainty. Yet what have you proven? That a deterministic model that nobody believes to be realistic fails (duh!). And that the data remain within the bounds of a stochastic model that possesses hardly any predictive skill (i.e. almost anything within physically reasonable bounds goes). No surprises there. I think the conclusion that temperatures have indeed been stochastic is unwarranted, especially in light of the physical constraints I’ve repeatedly pointed out, and which you have repeatedly ignored.

    Time for you to take Eduardo’s advice to heart, and time to quit the smugness.

  832. AndreasW Says:

    Bart

    Thanks again for this brilliant venue. I’ve seen you repeating this endpoint business. If you take the temperature difference between 1880 and 2008 you get a number. What does that tell you? It tells you the change in temperature between 1880 and 2008. Nothing more and nothing less. If you change starting points you get different results, but who cares really? The only thing we’re interested in here is the trend. And the trend is (drumbeat!) ZERO! NOTHING! NADA! Based on the data we’ve got.

    VS

    I think you should summarize all this and write a post. It’s been great to follow this battle, but to reach a bigger audience you should start from scratch in a bigger arena like CA, WUWT or the Blackboard (if Lucia would let you in, that is).

  833. Climate Change Is Just A Natural Thing « Spurious Missives Says:

    […] is a climate science blog by a Dutch scientist, Bart Verheggen, where one of his posts has over 760 comments, covering a couple of weeks of some pretty deep, […]

  834. sod Says:

    Now, we employed formal procedures, and arrived at this result. We then evaluate our trend (Figure 2) and find that it does a much better job at capturing the projected variance than the misspecified deterministic OLS trend specification (Figure 1).

    because, as people have been telling you quite constantly, it is using an incredibly wide range at the end.

    the average global temperature in 100 years will be between -50 and 100°C. see, my model is 100% right! all the time. actually this covers a seriously long period of temperature on earth!

    What I’m saying is that taking the difference between two individual points is an abysmal way to estimate the change over a time period when the quantity in question exhibits a lot of variability, since it so strongly depends on the endpoints chosen. You have not provided any arguments against this.

    he will not provide arguments. he was simply wrong, as he was with the original “random walk” claim.

    this is a new trend among “sceptics”. i call it “the smith method” (and Loehle has a paper on this as well): using a lot of math and an approach that has absolutely no value to make wild claims about errors in climate science.

    i definitely think that discussing such wrong ideas at length is harmful at this moment in time.

  835. eduardo Says:

    @ Steve Fitzpatrick

    Steve wrote
    ‘I agree that a completely stochastic model for the temperature trend, disregarding that temperature must ultimately be controlled by an overall energy balance, will of course lead to unrealistic temperature trends over long periods. No surprise there.’

    Dear Steve
    One clarification: energy imbalances can also be ‘stochastic’ in the sense you are using the term, I think. For instance, fluctuations in cloud cover can be internally generated, and they lead to imbalances in the energy flux.

    I think that in this thread statisticians are aiming at the same thing as climatologists: to set up a model for the unperturbed system and test if the recent warming is unusual or not. Climatologists do it with climate models, by performing simulations with only natural forcings. Statisticians try to do it with stochastic models, for instance with a model containing a unit root. But a critical point is that both types of models should be able to replicate well the observed characteristics. Both approaches are difficult, because one can never be a priori sure that the observations represent the natural unperturbed system. If they don’t, i.e. if they are contaminated, then the recent observations themselves would appear as ‘normal’, obviously.

    Climatologists then try to test their climate models with past climates, sometimes with success, sometimes with not so much success (uncertainties in past climates and so on). My suggestion was that the stochastic models should also replicate the past observations. VS calibrated a model with 1880-1935 data, but when applied to a situation in the past where the model should also be correct (why not?) it apparently does not perform well (unless I made a mistake in the calculation). You argue that ultimately there must be other feedbacks that put a lid on the temperature variance. Sure, but then I ask: why aren’t those same processes also acting at shorter timescales? Should they also be included in the model? Isn’t this explanation an ad-hoc one? What are those processes?

  836. sod Says:

    Thanks again for this brilliant venue. I’ve seen you repeating this endpoint business. If you take the temperature difference between 1880 and 2008 you get a number. What does that tell you? It tells you the change in temperature between 1880 and 2008. Nothing more and nothing less. If you change starting points you get different results, but who cares really? The only thing we’re interested in here is the trend. And the trend is (drumbeat!) ZERO! NOTHING! NADA! Based on the data we’ve got.

    no. ignoring all the datapoints between 1880 and 2008 is NOT a good idea.

    and the trend is NOT zero.

    you are a perfect example of how this is spreading confusion.

  837. Pofarmer Says:

    (not about the next one datapoint of course, but about the average of the next 10 years being larger / not larger than the average of the past 10 years, for example?)

    Think of a stock chart. The chances of any 10 years being close to any other 10 years are pretty good, but you still see movements over time.

  838. Shub Niggurath Says:

    Sod
    “…this is a new trend among “sceptics”. i call it “the smith method”. (and Loehle has a paper on this as well) using a lot of math and an approach that has absolutely no value to make wild claims …”

    Oh, are we on to “a lot of math is a bad thing” now?

    Waving a lot of math around has been perfected by the consensus as much as a few ‘skeptics’.

    Regards

  839. GDY Says:

    VS has not made any projections or given any ‘attribution’ analysis for causation of variance in the surface temperature record, merely pointing out that recent surface temperature observations are still consistent with natural variance in the autocorrelated GISS data. Using the appropriate tests for correlation (polynomial cointegration), the hypothesis that CO2 concentrations are causing the realized variance is rejected at the 95% confidence level. Again, this result is not too surprising, given what appears to be unanimity here that surface temperatures are a poor proxy for “global warming” and likely ‘heavily influenced’ by other factors (many suggest the obvious ocean temps; someone else above suggested lunar cycles; etc.), which would confound any correlation…
    An observation on culture that maybe will help the two camps understand each other better: In America, we were told by the mainstream media and politicians that it was absolutely 100% determined by ‘scientists’ that AGW is happening, that more taxes needed to be paid as a result, and that anyone questioning this was a heretic and a fraud. Obviously, as some new information came to light revealing uncertainty in the analysis, that really helped the credibility of the sceptics, and it has made people like me want to dig into the actual science and not rely on sweeping declarations from politicians with a terrible track record in ‘social management’ and very likely an ulterior motive. Really, to understand life in America, please read Chomsky’s “Manufacturing Consent”.
    Not sure how it is in Europe and elsewhere. Thanks again to everyone for the vibrant, robust exchange of data, theories, analysis and speculation – this can only help get to the right result!

    [Reply: Nobody expects temps to be only influenced by CO2. If net forcings, or better yet, GCM output is used in the test, then we’d be on to something. BV]

  840. AndreasW Says:

    Sod

    Who said anything about ignoring datapoints. We’re talking about estimating a trend using all datapoints. One specific datapoint has no interest for the trend.

    If the trendline in terms of temperature/time is a flat line i would say the trend is zero. It looked very flat to me in the figure above. If i’m wrong tell me what the trend is.

    [Reply: If in reply to the question “how much has the climate warmed from preindustrial to the present” the answer is given as temp(2008) minus temp(1881), then all other data are indeed ignored and the estimate is very poor. BV]

  841. Marco Says:

    @Tim Curtin:
    Absolute temperature and temperature anomalies are in many ways related, and in many ways they are not. The anomalies indicate an increase in average temperature, but not what that average temperature is.

    [edit]

  842. Steve Fitzpatrick Says:

    Eduardo,

    Thank you for your thoughtful comments.

    Of course energy imbalances can also be ‘stochastic’ (weather, and well-known pseudo-cyclical processes like ENSO, AMO, PDO, etc.).

    I suspect that at least some of the statisticians who write about climate understand that climate must be in large part constrained (in a causative sense) by physics, and can’t possibly be 100% stochastic. For sure, it is unrealistic to expect that a purely stochastic model could make very accurate predictions over very long periods, and I would never argue that they could. However, the data do appear to be marginally consistent (just inside the 95% upper limit) with the null hypothesis (that is, stochastic, not causative) over the instrument record period. Of course someone might argue that the variation in the pre-1935 period was contaminated by the small radiative contribution of GHG’s in that period. But the GHG forcing in the pre-1935 period was really quite small compared to recent forcing, so I am a bit skeptical that this “contamination” in the pre-1935 data effectively refutes VS’s analysis.

    I do hope that I have not given you the wrong impression. I most certainly do not believe that the climate models are rubbish, and I do believe that climate models represent the best efforts of a lot of smart and dedicated people. But I also believe that confidence in projections of future warming, based mainly on climate models, is too often much overstated, if not by the modelers themselves, then for certain by many people who use model projections to justify draconian (and fantastically expensive) public efforts to reduce CO2 emissions.

    I think that VS’s approach to evaluating uncertainty sheds some useful light on the level of certainty that actually exists in the relationship between the temperature history and radiative forcing by GHG’s. I have absolutely no doubt that additional CO2 in the atmosphere will cause warming (it must), and I agree that at least some of the warming measured over the past 60+ years is most likely due to GHG forcing. But VS’s analysis does suggest to me that a substantial fraction of that warming could very well be stochastic rather than deterministic in nature. The “best estimate” of warming from GHG forcing is (of course) the measured increase in the temperature, but that best estimate is extremely uncertain.

    Lucia,
    “It only really addresses surface temperature time series which is not the whole of climate science.”

    Agreed. Some areas of climate science are no doubt quite robust in their analysis of uncertainty.

    “Of course we still have physics, and I think many of the questions people are asking VS are coming from people who are used to applying statistics in fields where we do use understanding of physics to drive some of the tests we do.”

    Yes, a purely stochastic model of climate is (of course) nonsense, and while I would not presume to speak for him, I suspect VS would probably agree. There are good reasons to question anybody who suggests a purely stochastic model is a good physical representation of the Earth’s climate, but I do not think this really is VS’s position. VS’s seemingly reasonable analysis does show that the data are in fact (just barely) consistent with a purely stochastic model, assuming that the pre-1935 data is a fair representation of the climate “unperturbed” by GHG’s. The real (and crucial) question is how to honestly evaluate the contributions of stochastic and causative changes in temperature. I think VS’s analysis is a helpful contribution to answering this question.

  843. ge0050 Says:

    I haven’t had time to read this completely. However, what I have read is extremely valuable and important. VS stands out especially for his patient explanation.

  844. DLM Says:

    Andreas asked: “Who said anything about ignoring datapoints. We’re talking about estimating a trend using all datapoints. One specific datapoint has no interest for the trend.

    If the trendline in terms of temperature/time is a flat line i would say the trend is zero. It looked very flat to me in the figure above. If i’m wrong tell me what the trend is.”

    BV replied: “[Reply: If in reply to the question “how much has the climate warmed from preindustrial to the present” the answer is given as temp(2008) minus temp(1881), then all other data are indeed ignored and the estimate is very poor. BV]”

    BV,

    You did not reply to “tell me what the trend is”. What is the trend? If it is zero, isn’t the difference between the temperature in 1881 and 2008, temp(2008) minus temp(1881)?

    [Reply: All I said is that temp(2008) minus temp(1881) is a very poor measure. BV]

  845. Pofarmer Says:

    or better yet, GCM output is used in the test, then we’d be on to something.

    Why?

    [Reply: Because that’s what the trend is expected to be. The trend over 1935-2008 is most definitely not expected to be an extrapolation of what it has been during 1880-1935, nor is it expected to only depend on CO2 forcing. See also my other post. BV]

  846. steven mosher Says:

    bart,

    I suggested over at Tammy’s [snipped of course] that it would be interesting to run these tests on the output of GCMs and the forcings… all forcings.

    [Reply: If I understand you correctly, somebody did: “the annual global mean temps simulated by GISS ModelE contain a unit root, according to the same tests that VS has been using.” BV]

  847. steven mosher Says:

    Sod,

    “the average global temperature in 100 years will be between -50 and 100°C . see, my model is 100% right! all the time. actually this covers a seriously long period of temperature on earth!”

    Not to go OT here, but I’ve had a similar complaint when people argue that GCMs are “consistent with” observations, and that “consistent with” is largely the result of pretty wide spreads in the model output. In fact you will find two distinct procedures.

    A. For attribution studies, a sub-species of models is used: select models, fewer than the 22 or so used for the “forecast”. These models are also tuned using an inverse calibration to the instrument record, where forcings such as aerosols are varied to improve the hindcast (ch. 09, AR4).

    B. For forecast studies, any model submitted by a member country appears to be allowed into the forecast game (democracy of models). This results in some rather wide ranges for output.

    Nothing nefarious here, but just an interesting thing to look at. Lucia might, if I can convince her to.

  848. AndreasW Says:

    About this trendbusiness:

    Take a look at the top of this post. Bart has calculated the trend over 1975 to 2009 and found a trend of 0.17 C/decade. No matter if this is the “right way” to do it or not, this is what he got. What does this trend tell us? It’s the slope of the trendline. So if someone would ask Bart how much warming we have seen since 1975, I think his answer would be about 0.6 degrees (0.17*3.4). If we would ask Bart again about the trend over the last 100 years, he might say the trend is 0.1 C/decade, which leads to a warming of 1 degree over the last 100 years.

    If we now ask VS about the trend over the last 100 years, I think he would say there is no trend. It sure looks that way if you look at his graphs above. That means that the trendline has zero slope. So the warming over the last 100 years is 0*100=0. No warming! Correct me if I’m wrong VS, but I can’t read this any other way. This doesn’t mean that the temperature is the same in 1880 as in 2008, but that the change isn’t statistically significant at the 95% confidence level.

  849. mikep Says:

    Some of the comments on this thread puzzle me. What VS said was that it made no sense to fit a deterministic trend to the observed global average temperature data, because that observed data behaved as though it were produced by a stochastic trend, and he gave formal evidence that that was indeed the case.

    People objected on the grounds that this was unphysical, because temperature could not increase without limit. But doesn’t exactly the same argument apply to the deterministic trend – that too implies that temperature could increase without limit? You might object that what is producing the trend is carbon emissions, and that that forcing can’t increase without limit. But equally the stochastic trend is produced by all the physical factors that impact global temperature, and they can’t drive temperature upwards without limit; it can still be true that, in the period observed, temperature shows a stochastic trend. The existence of a stochastic trend in the observed output does not mean that global temperature is random, just that all the various forces affecting temperature combined, in the observed time period, lead to a stochastic trend. This is just a property of the observed time series of temperatures.
    Then coming on to GCMs. My reaction would be that if the output of GCMs – a time series of predicted temperatures – is not integrated of order one, then there is a problem. Either there is something wrong with the observations, or something wrong with the models, because any attempt to fit model outputs to observed temperatures will give residuals which are not white noise, implying that something systematic has been left out of the models.

    Note that this fascinating thread has so far got only as far as testing a single time series for orders of integration. The big questions of how one variable can be explained by some combination of other variables – the whole co-integration story – have yet to get properly started.

    [Reply: I address some of the physical aspects in this post. BV]

  850. ge0050 Says:

    I especially like the coin tossing example to explain the random walk. Here is another example. Before the introduction of GPS, autopilots were typically compass or inertially guided. They would make small errors left and right in their steering.

    Most people would think that over time the errors left and right would average out, and the errors long term would tend to zero. Sort of like the assumption that long term climate forecasts are more accurate than short term forecasts, because the errors plus and minus should average out like heads and tails.

    What this assumption ignores is that in a coin toss each toss is independent of the other. In the case of an autopilot or other time series forecast, your errors are not independent. They are cumulative over time. As a result, you drift in a path resembling a random walk.

    Over time this makes autopilots and time series forecasts such as stock markets, exchange rates, weather and climate unreliable to the point of dangerous, depending on how much you rely on them.

    Otherwise there would be no statisticians. They would long ago have retired, having successfully forecast the weather, climate, stock market and exchange rates out to infinity and bet on a sure winner.
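
    A quick numerical version of the autopilot point (the scale parameters are arbitrary): independent errors average out, while cumulated errors drift like a random walk.

        # Averaging n independent errors shrinks the spread like 1/sqrt(n);
        # summing them grows the spread like sqrt(n).
        import numpy as np

        rng = np.random.default_rng(7)
        n_steps, n_paths = 1000, 5000
        errors = rng.normal(0, 1.0, (n_paths, n_steps))

        iid_mean = errors.mean(axis=1)            # average of independent errors
        walk_end = errors.cumsum(axis=1)[:, -1]   # endpoint of the cumulative drift

        print("SD of averaged independent errors:", iid_mean.std())
        print("SD of random-walk endpoint:       ", walk_end.std())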

  851. Alan Wilkinson Says:

    lucia’s query about the discriminatory power of VS’s tests is valid and unexceptionable.

    It is equally valid to ask for the discriminatory power of the tests that validate CO2 forcing and sensitivity – the probability of getting an answer being lower, though Eduardo may have one?

    I continue to suspect that the error bounds shown in VS’s fig 2 (March 25, 2010 at 12:22) have at best an incorrect origin, being a projection from the 1880s rather than from 1935.

    [Reply: Regarding sensitivity, this is a good post (plus associated paper), taking a Bayesian approach to constraining its likely value. BV]

  852. Alan Wilkinson Says:

    To add to my last comment, the projection in fig 1 of the link given is effectively from the centroid of the 1880-1935 period.

  853. Alan Wilkinson Says:

    As I understand the continuing dispute over “trend” issues it amounts to this:

    We can measure a straight line “trend” over any period we like. It tells us something but not everything about what happened in that period.

    The question is whether it tells us anything about what happened or will happen in other periods?

    The longer the sample period the more information that is lost (averaged out) from the sample period, but equally the more confidence it provides that the “trend” can be projected to other periods.

    However, if the information in that sample period is statistically consistent only with a stochastic I(1) process then there is no valid reason provided by the data alone to have any confidence that “trend” can be projected to other periods. That does not exclude the possibility that there may be physical causal reasons that support doing so.

    Correct me if I’m wrong.

    [Reply: I think you’re basically right, but I would point out that nobody expects the trend from 1880-1935 to just continue into the time thereafter, since the forcings changed quite strongly after that. So a hypothesis that nobody is advocating has been refuted. BV]

  854. HAS Says:

    A few comments on what all this is telling us. VS will tell me off for simplifying again but …

    First there is some confusion over the term “trend”. There is no doubt the GISS temperature record shows an increase over time. What is being discussed here is whether this can be attributed to a systematic ongoing increase in the data, or just to random fluctuations.

    The first thing being said here is that this time series lacks the fundamental characteristics that let you use the common method (linear regression) of sticking a trend line through it, testing the correlation coefficient and saying “the trend isn’t random”. I should say that in my view this is the important takeaway from this thread (along with the corresponding comments about the CO2 series), because it does mean that many of the statements in the popular press (at the very least) need to be revisited, and the issues around cointegration need to be reviewed more widely.

    The second thing is that when you do look in detail at the data and use the appropriate approach you find you need to be looking not at the raw data, but at the time series made up of the difference between each pair of consecutive temperatures i.e. the change in temperature.

    Note this is done simply so the series can be properly analysed, and it is quite possible to find a trend in the temperature series – this will be shown by the change in temperature being a constant plus a random error.

    But this differencing is not enough, another adjustment needs to be made to this data because it turns out the difference in one period is correlated with differences in previous periods (autocorrelation).

    As it happens, on this data series, when we adjust for all these factors we find that the series is well behaved and the constant term in the difference equation is not significant, i.e. it might look like there is a trend, but when you test it properly you find you can’t reject the hypothesis that it’s random.

    Because of the confusion around this I do wonder if a plot of DIFF(GISS_all), its estimation and the confidence limits might be of help.

    The third thing is that this model is just telling us what this particular dataset is telling us. If we have another time series of temperatures, it is something empirical to be tested to see if it exhibits the same behaviour. It does however look robust over subsets of GISS. In addition it might just be something about GISS that is producing this model, hence the need to test back into the data sources to see how the individual temperature series and grid temperatures behave.

    Fourthly VS’s analysis doesn’t necessarily tell us anything about the physical processes leading to the temperature series. However one would expect consistency in two senses – first that the model developed by VS should not be inconsistent with the physical process, and second that any model that purports to produce this temperature time series should produce a series that has this characteristic (and this is a potential problem because most models produce a temperature series with a trend in it).

    I should just for completeness reiterate that the analysis in producing the models also shouldn’t fall foul of what this body of work is saying about acceptable approaches to statistical inference.

    I’ve previously drawn attention to this need to ensure validity of statistical inference as a reason why climate modelling is not immune from these findings, but some here continue to seek refuge in the first point above i.e. this can’t be right because it is inconsistent with the science.

    It is worthwhile stopping and thinking about what VS’s model says (and here VS will get deeply exercised about the simplifications).

    The model is saying:

    DIFF(t) = – 0.44 * DIFF(t-1) – 0.37 * DIFF(t-2) – 0.31 * DIFF(t-3) + error (t)

    Where error (t) has SD of 0.1

    What this is saying at a gross level of simplification is that the next change in the earth’s surface temperature will be in the opposite direction of the approximate weighted average of the last three changes (with some bias towards the most recent changes) plus a random bit, with an approximately 10% overshoot.

    If we are measuring temperature across a surface stuck between two large sinks and sources of energy does this really seem inconsistent with the physics?
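
    HAS’s difference equation, written out as runnable code (a sketch using the coefficients quoted above):

        # Given the three most recent changes (most recent first), the expected
        # next change is their negatively weighted sum; actual changes add
        # noise with SD 0.1.
        import numpy as np

        ar = np.array([-0.44, -0.37, -0.31])   # weights on DIFF(t-1..t-3)

        def expected_next_diff(d1, d2, d3):
            """Expected DIFF(t) from the last three changes, most recent first."""
            return float(ar @ np.array([d1, d2, d3]))

        # Three recent warm jumps imply an expected pullback:
        print(expected_next_diff(0.1, 0.05, 0.02))   # about -0.069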

  855. MartinM Says:

    Marvellous. I finally have time to contribute, and there are three hundred new comments.

  856. Pofarmer Says:

    The big questions of how one variable can be explained by some combination of other variables – the whole co-integration story – have yet to get properly started.

    Exactly.

    I’m stocking up on popcorn.

  857. Alan Says:

    Ahhh, Lucia …

    I think the result is pertinent to the question, “If we attempt to detect warming using a purely empirical method based on surface temperature measurements only, is the theorized warming sufficient to distinguish it from noise?”

    Thank you … not an I(0), Q(52), AR(eleventy-leven) in sight!

  858. MartinM Says:

    Right, here goes, one point at a time:

    1) VS’s claimed 95% confidence interval is not, in fact, a 95% confidence interval at all. Rather, for each year, he’s plotted out a range into which 95% of the realisations of that year’s temperature fall. This would only give a meaningful 95% CI if the joint probability distribution for all years was the product of the distribution for each individual year, which of course it isn’t; that would require the years to be independent.

    Quick check: were that really a 95% CI, the probability of any given realisation lying entirely within the envelope (as does GISTEMP) would be .95^125 = 0.0016. In fact, if you run the simulation VS coded, you’ll get around 50% of all realisations lying entirely within the envelope, which makes it closer to a 99.5% CI. Oops.

    Computing a proper 95% CI is fairly involved, but let’s wing it by assuming that VS got the shape right, and rescaling by 1.96 / 2.8 = 0.7, and see what happens. Still nearly 10% of all realisations lie within the envelope. In fact, to get down to the proper 0.16%, we have to reduce the size of the envelope by a factor of just over two. This leaves about a third of GISTEMP outside our (badly!) estimated 95% CI.
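
    A minimal check of this coverage point, with random-walk paths standing in for the ARIMA realisations (so the exact fraction will differ from the one quoted above):

        # Build a pointwise 95% envelope, then count how many *entire* paths
        # stay inside it; autocorrelation makes this far exceed 0.95**125.
        import numpy as np

        rng = np.random.default_rng(11)
        n_years, n_paths = 125, 5000
        paths = rng.normal(0, 0.1, (n_paths, n_years)).cumsum(axis=1)

        lo = np.percentile(paths, 2.5, axis=0)
        hi = np.percentile(paths, 97.5, axis=0)

        inside = np.all((paths >= lo) & (paths <= hi), axis=1)
        print("whole paths inside pointwise envelope:", inside.mean())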

  859. eduardo Says:

    @ HAS,

    HAS wrote
    ‘What is being discussed here is whether this can be attributed to a systematic ongoing increase in the data, or just to random fluctuations.’

    Fair enough, but an additional important question is how these random fluctuations can be described. VS is arguing that these fluctuations must (or should) be an I(1) process. However, an I(1) process cannot be physically justified, since it is not stationary, and its variance would grow unbounded over time. This is not observed. Independently of the period of Earth history one chooses to look at (the last thousand years, the last ten thousand, the last one million), it fluctuates, but within bounds. The variance of the temperature is always bounded. I would tend to give much more weight to these observations than to any statistical test performed on a limited noisy sample. I would agree that the structure of those random fluctuations could be complex, much more than is thought, but it should be a stationary process. Other people have suggested fractionally differenced processes. But then the recent trend becomes unusual, as far as I understood. So we have a conundrum here.

    ‘What this is saying at a gross level of simplification is that the next change in the earth’s surface temperature will be in the opposite direction of the approximate weighted average of the last three changes (with some bias towards the most recent changes) plus a random bit, with an approximately 10% overshoot.’

    I think this explanation applies also to a stationary process, even to an AR(n) process.

  860. eduardo Says:

    @ Steve,

    I think many climatologists would agree with most of what you say. The uncertainties are undeniably large. The differences lie in the next step, related to your sentence:

    ‘and I agree that at least some of the warming measured over the past 60+ years is most likely due to GHG forcing. But VS’s analysis does suggest to me that a substantial fraction of that warming could very well be stochastic rather than deterministic in nature’

    Some opine as you do: they could be, and then the response to CO2 would be smaller.
    Others think: what if they are not? There are indications that perhaps they aren’t.

    The real difference lies in the attitude to risk, not in the perception of uncertainties (of course, filtering out all other political and ideological noise which in many cases muddles all this discussion).

  861. Willis Eschenbach Says:

    eduardo Says:
    March 27, 2010 at 01:14

    @ HAS,

    HAS wrote
    ‘What is being discussed here is whether this can be attributed to a systematic ongoing increase in the data, or just to random fluctuations.’

    Fair enough, but an additional important question is how these random fluctuations can be described. VS is arguing that these fluctuations must (or should) be an I(1) process. However, an I(1) process cannot be physically justified, since it is not stationary, and its variance would grow unbounded over time. This is not observed. Independently of the period of Earth history one chooses to look at (the last thousand years, the last ten thousand, the last million), temperature fluctuates, but within bounds. The variance of the temperature is always bounded. I would tend to give much more weight to this observation than to any statistical test performed on a limited noisy sample.

    Eduardo, despite that, you seem to give it no weight at all.

    What your insightful observation means is that there is a thermostat of some kind keeping the temperature within bounds. I have posted what I think part of the mechanism of that thermostat is here … what do you think the mechanism of the thermostat is, and how is the existence and mechanism of that thermostat not one of the most important unanswered questions in climate science?

    Finally, I don’t see any reason that an I(1) process should necessarily grow unbounded over time, if it is a part of a larger process which puts bounds on the I(1) mechanism.

  862. DLM Says:

    BV,

    You did not reply to “tell me what the trend is”. What is the trend? If it is zero, isn’t the difference between the temperature in 1881 and 2008, temp(2008) minus temp(1881)?

    [Reply: All I said is that temp(2008) minus temp(1881) is a very poor measure. BV]

    What would be a better measure? The trend? Which is what?

  863. John Whitman Says:

    “Pofarmer Says: March 27, 2010 at 00:29

    ‘The big questions of how one variable can be explained by some combination of other variables – the whole co-integration story – has yet to get properly started.’ from a previous commenter up stream.

    Exactly.

    I’m stocking up on popcorn.”

    ======

    Pofarmer,

    Ahhh, you need to revise your popcorn supply estimates sharply up.

    Below is an outline of my logic on what the discussion agenda would look like from this point forward. Actually, my purpose in doing this is not solely to assist you with popcorn estimates. My ulterior motive is to help myself plan to do some homework in advance so that I do not struggle on the steep curve I’ve been on so far. : )

    Forward Looking:

    a) will be some tidy-up activities on a few points with the current GISS time series analysis

    b) then start on the CO2 forcing / CO2 concentration time series analysis, which would follow a similar process to the analysis just done for the GISS time series

    c) spend a little time on solar forcings [hope Leif S. is around for that] time series analysis. Perhaps also aerosols, etc. etc. . . . the other ‘forcings’. Basically, look at all the forcings that Beenstock and Reingewertz did.

    d) then the big-ticket event is ‘cointegrating’ (i.e. start employing covariates) the temperature and forcing time series. Perhaps for the other forcings too. Is it proper to call this covariate analysis?

    e) comparison of VS’ result to Beenstock and Reingewertz

    f) wrap up . . . . should be pretty lively

    VS, your energy level and patience in explaining things to us semi-literates in statistics are deeply appreciated. We are in your debt, intellectually that is : ) . . . or for brews.

    Bart, again thanks for hosting this intellectual venue. But I wonder if you are thinking of sticking with the whole program agenda at your venue? Hint hint.

    John

  864. HAS Says:

    eduardo

    “an I(1) process cannot be physically justified, since it is not stationary, and its variance would grow unbounded over time”

    While simplifying I am still trying to be careful.

    I had intended the later bit of my comment “this model is just telling us what this particular dataset is telling us. If we have another time series of temperatures it is something empirical to be tested to see if it exhibits the same behaviour” to cover this point.

    In the end if we had more information, we would know more :)

    What I didn’t address is the question of whether there are better competing models for this dataset. You mention using a fractionally-differenced process.

    First I hope I understood your own work correctly: my take from this thread and from reading the paper was that you didn’t test the hypothesis that GISS was fractionally-differenced; rather you tested that if it was, then the recent temperature history was surprising.

    Second I’d obviously be interested in competing models and how they stack up (while not wanting to put aside the important point – on which I don’t think there is disagreement – namely what the GISS series is not).

    I’m also interested in the selection of competing models. However I should say a model that fits experience well within constraints, even though we know it might break down under more extreme conditions (e.g. Newtonian mechanics), is not necessarily to be put aside for one that fits local experience poorly but does fit with extreme conditions. Model choice is about utility as well as explanatory power, and modellers of complex systems may use empirical models that ignore confounding variables because their impact is not significant under the conditions being modelled. Not to say we don’t want to find out more about them though.

    In your final point about a large number of potential time series models being consistent with physics, I agree. I was simply observing that at least VS’s passed this test too, notwithstanding some suggestions here to the contrary.

  865. Nica in Houston Says:

    WOW!

    I just read this whole stream. What an education for all of us. And the ratio of substance to dreck is astronomical. I am a statistical dilettante, but I have always wondered what signals processing/time series analysis mavens would have to say about all this climate stuff. What a great start.

    Kudos to Bart, thank you, thank you, thank you for hosting this

  866. Bart Says:

    ALL: Re-submit off topic comments to the open thread please! I will remove them here.

  867. Gary Moran Says:

    It seems to me that VS has made two crucial points:

    That GMST contains a unit root. If true, then any analysis using linear regression is invalid. It also appears to me that it invalidates the simple signal-and-noise concept that we often see applied to the subject?

    Modeling 1880-1935 GMST data based on it containing a unit root, and running such models X times, demonstrates that GMST in the 1990s and 2000s was consistent with the earlier data, therefore not exceptional and not unprecedented.

    If we can satisfy ourselves that VS’s initial point is correct, and that the initial statements of the B&R paper are correct, then regardless of the wider conclusions that B&R attempt to make, climate science and its conclusions will be irrevocably changed.

  868. sod Says:

    It seems to me that VS has made two crucial points:

    no. VS has not made any crucial points.

    That GMST contains a unit root. If true, then any analysis using linear regression is invalid. It also appears to me that it invalidates the simple signal-and-noise concept that we often see applied to the subject?

    this is bogus. we constantly use a linear trend on processes that we know are not linear. we do so because it is the simplest way to check for a trend and direction.

    people will still be using the linear trend, when VS has already been forgotten again.

    Modeling 1880-1935 GMST data based on it containing a unit root, and running such models X times, demonstrates that GMST in the 1990s and 2000s was consistent with the earlier data, therefore not exceptional and not unprecedented.

    yes, he does a useless statistical test, and arrives at a useless result. his “trend” has no prognostic value at all.

    and tamino has cast serious doubts on his works as well.

    If we can satisfy ourselves that VS’s initial point is correct, and that the initial statements of the B&R paper are correct, then regardless of the wider conclusions that B&R attempt to make, climate science and its conclusions will be irrevocably changed.

    ever wondered how likely it is that VS breaks down climate science in a single blog post?

    isn’t it more probable that he is simply wrong again? like he was with his random walk claim?

    like he is with his two years difference claim?

  869. Pofarmer Says:

    I think you’re basically right, but I would point out that nobody expects the trend from 1880-1935 to just continue into the time thereafter, since the forcings changed quite strongly after that.

    I realize that you don’t accept what VS has done here. But it would certainly seem that what VS’s work has shown is that, regardless of what the “forcings” have done, the temps after 1935 don’t show anything exceptional.

    ever wondered how likely it is that VS breaks down climate science in a single blog post?

    Probably more likely than that we’re all gonna burn up and die due to a trace gas that just happens to have the vast majority of its absorptive spectrum overlain by a much more powerful actor.

    He gives good references to other work that backs up the methods. What will remain to be seen, is what happens from here on out, no?

  870. sod Says:

    I realize that you don’t accept what VS has done here. But it would certainly seem that what VS’s work has shown is that, regardless of what the “forcings” have done, the temps after 1935 don’t show anything exceptional.

    well, they actually do, by as much as his “trend” allows.

    recent years are pretty close to the edge of that interval.

    Probably more likely than that we’re all gonna burn up and die due to a trace gas that just happens to have the vast majority of its absorptive spectrum overlain by a much more powerful actor.

    no. absolutely nothing will come out of the work of VS. nothing at all.

    He gives good references to other work that backs up the methods. What will remain to be seen, is what happens from here on out, no?

    yes, we will see. but he has already achieved his main goal: 100s of people who know neither what a “unit” nor what a “root” is, have been convinced that the existence of a unit root means that climate science will collapse soon.

  871. Morph Says:

    This is most OT and should be on the “open thread” or “The value of ‘open debate’”, but somehow I have to say it here:

    I have been following this thread with great interest (although most of the hard statistics stuff has just been flying over my head. A few drops have fallen down and percolated). It is great to see normal people and scientists from the whole spectrum of opinions talking, discussing and really “making science” in public. This is the way it should be with science these days, taking advantage of the web to share ideas and hear the opinions of others.

    This looks like the beginning for science of what in the IT world is known as open-source development, where software is built in front of everyone, with hundreds or thousands of developers contributing, some with ideas or new concepts, some with source code. And that code is quickly dissected by many pairs of eyes, perfected, bugs squashed, etc. Contrast this with the “traditional way”, where software was developed behind closed doors, reviewed by colleagues, tested internally and still released full of errors and not very usable at all.

    Socially speaking, this is a brave new world for Science, and foreign to scientists in general. It will take time for most to see the benefit, but as software developers realized it, so I believe most scientists will see it one day. And this thread can really be the seed.

    A big thanks to Bart, and everyone else participating here!!!

    Well, almost everyone. Some appear not to like this open discussion that much, and say so openly, even saying things like “i definitely think that discussing such wrong ideas at length is harmful at this moment in time” (see sod’s entry, March 26, 2010 at 15:50). And most of his other posts are along the same line, with ad-hominem attacks on others, name calling, and tagging ideas as wrong without any real scientific arguments. Have to say it: sod, you think that discussing such wrong ideas is harmful at this stage? [edit] Bart, I can understand why you are going soft on sod from the moderation perspective, but maybe it is time you do something about it…

    There, I said it… Bart, prune it from here if you wish it.

  872. Kweenie Says:

    VS: Illegitimi non carborundum ;)

  873. VS Says:

    Hi everybody,

    This post is structured as follows

    (1) What are ‘trends’ and what are they not
    (1a) Realized average rate of increase
    (1b) Trends in temperature data

    (2) Overview of test results

    (3) Answer to Eduardo (on ‘stationarity’ of the temperature record, and his results)

    (4) Answer to Lucia (on the power of my hypothesis test on the drift parameter)

    ———————

    1. What are ‘trends’ and what are they not.

    ———————

    Re-reading various comments made me realize that there is great confusion about what we mean when we say that a ‘statistically significant trend’ is observed in the data. In particular, and from what I have seen in the climate science literature, the term ‘trend’ seems to be used interchangeably with the ‘realized average rate of increase’.

    These are two completely different things. Let’s start with the latter, just to get that out of the way.

    ———————

    1a. Realized average rate of increase

    ———————

    When I wrote earlier that the best estimate of the increase in temperatures over the period 1881-2008 was simply giss(2008)-giss(1881), I wasn’t joking. This is the realized increase in temperatures over the period. I firmly stand by that.

    Bart, you said something about ‘estimating with only two data points’. There is no ‘estimation’ involved here, it’s simple arithmetic. You are simply answering the question ‘how much have temperatures risen over the period 1881-2008’.

    The only ‘confidence intervals’ that make sense in this context are the confidence intervals of the difference in these variables. These are relevant, if we are in fact estimating the actual temperature values, observe:

    Let,

    x = estimator of giss(1881)
    y = estimator of giss(2008)

    Now, let’s say that x follows a distribution, which is centered around the true parameter value of giss(1881), and y follows a distribution, which is centered around the true parameter value of giss(2008). Think of the ‘errors’ as unbiased measurement errors (and let’s assume that these measurement errors are independent of each other).

    Then, the difference, defined as (y-x) also follows a certain distribution. Now we can use that information to test the hypothesis H0: (y-x)=0, i.e. to see if there was a significant increase in temperatures. However, in all of the preceding we have implicitly assumed that the GISS series in fact represents the actual global-mean temperature, therefore, there is no uncertainty about the realized record, and we can simply state that the realized increase in temperatures is in fact equal to 0.43-(-0.2)=0.63. That’s it!
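
    As a pure illustration of that difference test (the measurement s.d. below is an assumption for the sketch, not a GISS figure):

        from scipy.stats import norm

        x, y = -0.20, 0.43        # giss(1881) and giss(2008) anomalies
        se_x = se_y = 0.05        # assumed measurement s.d., illustration only

        se_diff = (se_x**2 + se_y**2) ** 0.5
        z = (y - x) / se_diff     # test H0: (y - x) = 0
        p = 2 * (1 - norm.cdf(abs(z)))
        print(f"realized increase = {y - x:.2f}, z = {z:.2f}, p = {p:.2g}")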

    Now, for the rate of change. I have thought about this for a while last night, and I have to say that I have indeed made an error in exposition, just like in the case of the random walk / unit root confusion. Indeed, just like sod kindly noted here.

    If one wants to ‘fit’ the line of average increase, we indeed have to use the formula for the OLS estimator. However, this is not statistics but rather linear algebra (i.e. matrix algebra). So none of the resulting confidence intervals calculated by your software package have any meaning.

    You simply ‘fitted’ a line using an algebraic procedure.

    Note that the OLS ‘estimator’ is in fact defined as a so-called pseudo-inverse. In other words, it is the orthogonal projection of the [n x 1] vector y on the column space of the [n x k] matrix X, col{X}. Here’s a visual representation of what I’m talking about, if we had only three datapoints/coordinates to produce our fit (with more than 3 datapoints we disappear into hyperplanes that cannot be ‘plotted’ anymore.. ;)

    In our case y = [GISS] and X = [ 1, t ], or, a vector of 1’s (this ‘generates’ the intercept), and a ‘time’ series (i.e. t’ = [ 1 2 3 4 5 …. 128 ]’). The pseudo-inverse itself is defined as (X’X)^(-1)(X’y) = b, where the resulting [2 x 1] vector gives us the ‘best fit’ parameters b’ = [ c1, c2 ]’, corresponding to the equation y_hat(t) = c1 + c2*t. The c2 parameter is then the best fit for the realized rate of change over the period of estimation.

    For a textbook treatment of the pseudo-inverse, the reader is referred to Poole (2003) p. 575, section 7.3 Least Squares Approximation.
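
    In code the whole ‘fit’ is a couple of lines (illustrative numbers, not the GISS record):

        import numpy as np

        y = np.array([-0.20, -0.11, 0.05, 0.18, 0.43])   # made-up 'anomalies'
        n = len(y)
        t = np.arange(1, n + 1)

        X = np.column_stack([np.ones(n), t])             # X = [ 1, t ]
        b = np.linalg.inv(X.T @ X) @ (X.T @ y)           # b = (X'X)^(-1) X'y
        c1, c2 = b
        print(f"fitted line: y_hat(t) = {c1:.4f} + {c2:.4f} * t")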

    Concluding, does everybody see that there is no ‘statistics’ involved here! Just arithmetic! There is no need to talk about ‘estimating’ the ‘rate of change’, because we are talking about the realized record!

    THE RESULT OF THE CALCULATION DOES NOT REPRESENT A ‘TREND’!

    Ahem, now let’s move on to actual trends :)

    ———————

    1b. Trends in the temperature data

    ———————

    Note that google defines trend as:

    “tendency: a general direction in which something tends to move”

    So, when we say that we observe a trend in the data, we are in fact making a statement about the expected changes of the series in the future.

    Also, while most people mistake a ‘trend estimate’ for a ‘projected fitted line’ (i.e. your forecast of the future), this is only half the story. The forecasted trend consists of two components: the forecasted expected value (i.e. the line) and the forecasting error, at each future t.

    Now, given that at each future period (=t, taking the beginning of the forecast as t=0) we have

    a) the expected value of our variable of interest, E_f[y(t)]
    b) the expected accompanying forecasting error, E_f[(y(t)-E_f[y(t)])^2]
    c) and the distribution of y(t), which is defined given a and b

    So a ‘trend’ is in fact: a projection of probability distribution functions, of our variable of interest in the future, based on what we have observed so far. It took some messing around in Matlab, but I managed to take Figures 1 and 2, calculate the forecasting intervals for each significance level, and turn them into Figure 3 and Figure 4.
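
    To reproduce the flavour of those figures, a minimal sketch with statsmodels (a synthetic stand-in series; only the ARIMA(3,1,0)-with-drift order comes from the discussion here, everything else is assumed):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        y = np.cumsum(rng.normal(0.005, 0.1, size=55))   # stand-in for 1881-1935

        # ARIMA(3,1,0) with drift: trend='t' in levels differences to a constant
        res = ARIMA(y, order=(3, 1, 0), trend='t').fit()

        fc = res.get_forecast(steps=73)     # project '1936-2008'
        line = fc.predicted_mean            # (a) E_f[y(t)], the forecast 'line'
        bands = fc.conf_int(alpha=0.05)     # (b)+(c): the widening error bands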

    These two figures are the full representation of what our two (i.e. misspecified deterministic, and proper stochastic) trend forecasts actually look like!

    So, those are our actual trend forecast ‘estimates’. Some have claimed that this is an ‘anything goes’ specification. Or that the confidence intervals at the end are ‘too wide’ (whatever that means). I’m sorry to say, but that’s complete nonsense. I just presented you with what the observations 1881-1935 actually tell us about the future, and the statistically significant ‘trend’ we have observed so far. Note that when we estimate our stochastic trend model on the period 1881-2008, we find a statistically insignificant ‘drift’ parameter (i.e. the constant in the first difference series, which obviously corresponds to our ‘expected rate of increase’ of our stochastic trend, or the forecasted average trend over the period). This implies that, when our data are formally dealt with, we fail to find any significant ‘trend’ in temperatures over the period 1881-2008.

    This is what statistics/econometrics tells us about trends in the GISS record! This, together with all the diagnostics and tests results that I have posted, is an analytical fact resulting from proper/formal statistical inference!

    Now Bart, you have stated earlier that it’s ‘nothing new’ that these linear trend forecasts are ‘nonsense’. Actually, I think it is. Even you, in this very blog entry, made a non-sensical (in statistics terms) statement when you wrote:

    “The trend over 1975 to 2009 is approximately the same (0.17 +/- 0.03 degrees per decade) for all three temperature series.”

    This is not statistics! I have noticed in this thread that you guys are quite disdainful of statistics, as in ‘it cannot tell us anything about the future’ / ‘it is useless’ etc. Sure it’s useless, when you abuse it :P Seriously, it’s like me applying Newton’s laws incorrectly and then concluding that physics can’t tell us anything about gravity… it’s an insult to a whole scientific discipline, which is by the way much older than climate science.

    Readers, I kindly ask you to refrain from huffing and puffing with posts like ‘Nooo, there must be a positive trend in temperatures’ without listing formal arguments that establish that I have made some type of error somewhere (those are welcome though! :). These types of comments are (scientifically) pointless, and I won’t validate them with a response.

    I sincerely hope I have managed to explain the concept of a ‘trend’ here, and why you have to think twice (three times even!) before making a scientific statement involving that word.

    ———————

    2. Overview of test results, establishing the stochastic trend specification, and references

    ———————

    First we performed a series of unit root tests here, and found that in all but two cases, we infer that the series contains a unit root, and is therefore non-stationary, and it therefore contains a stochastic trend.

    The two cases are the Phillips-Perron test, with trend and intercept in the alternative hypothesis, and the Augmented Dickey-Fuller test, with trend and intercept in the alternative hypothesis, using 0 lags (selected via the BIC). I performed a Monte-Carlo simulation on the PP test here and concluded that it is heavily biased towards rejecting the true H0 of a unit root. I furthermore investigated the ADF with 0 lags via the same Monte-Carlo procedure here and concluded that residual autocorrelation in the test also heavily biases the test towards rejecting the true null-hypothesis of a unit root. This implies that we can disregard the only two ‘negatives’ that we found! Note that I also performed an analysis of the ADF test with 3 lags here and found it to be almost exact.

    Because some people were concerned about a possible structural break in the temperature record, that could bias our unit root tests, I performed the Zivot-Andrews unit root test that allows for an endogenously determined structural break here. We find an endogenous structural break in 1964 (just as was ‘eyeballed’ by some authors), but even if we account for it in our alternative hypothesis, we fail to reject the presence of a unit root!

    Note that if we choose to ignore the presence of a unit root, and simply test for a structural break in the temperature record, we find a significant one! (I think I have come across a couple of climate-science articles claiming just that, finding a ‘break’ in the 60’s/70’s). However, this test rests on the assumption of trend-stationarity, which our previous results, as well as the Zivot-Andrews unit root test, reject! The inference about a structural break in the level series is thus invalid!

    All of these formal results put together, lead me to formally infer that the GISS combined temperature record is indeed infested by a unit root, and therefore contains a so-called stochastic trend.. (i.e. is non-stationary :)
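
    For readers who want to rerun the basic check themselves, a minimal sketch with statsmodels (the file name is a placeholder, and the exact lag and deterministic-term settings used above may differ):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        y = np.loadtxt('giss_annual_anomalies.txt')   # placeholder data file

        # ADF with trend + intercept under the alternative ('ct'), 3 fixed lags
        result = adfuller(y, maxlag=3, regression='ct', autolag=None)
        stat, pvalue = result[0], result[1]
        print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
        # A large p-value means the unit-root null H0 cannot be rejected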

    And I’m not the only one…

    References:

    ** Woodward and Gray (1995)
    – confirm I(1), don’t test for I(2)
    ** Kaufmann and Stern (1999)
    – confirm I(1) for all series
    ** Kaufmann and Stern (2000)
    – ADF and KPSS tests indicate I(1) for NHEM, SHEM and GLOB
    – PP and SP tests indicate I(0) for NHEM, SHEM and GLOB
    ** Kaufmann and Stern (2002)
    – confirm I(1) for NHEM
    – find I(0) for SHEM (weak rejection of H0)
    ** Kaufmann et al (2006)
    – confirm I(1), (they state however that temperatures are ‘in essence’ I(0), but their variable GLOBL is confirmed to be I(1), and treated as such)
    ** Zorita et al (2008)
    – present calculations implying non-stationarity for all global-mean series
    ** Beenstock and Reingewertz (2009)
    – confirm I(1)

    (I found many more, but I’m too lazy to dig them up)

    ———————

    3. On the stationarity of the temperature record (answer to Eduardo)

    ———————

    Hi Eduardo,

    First of all, thank you for confirming my results/code :)

    Before I discuss your extrapolation results, allow me to dwell upon the assumption of stationarity of the temperature series. You claim that non-stationarity is physically impossible. I agree with you… with a caveat :)

    Let’s look at the Vostok temperature record first. Note that for the sake of exposition, I assume that the stability (not values!) of the mean and the variance displayed by the Vostok sample are in fact representative for global-mean results.

    Now observe Figure 5. Note the record of the last ‘Ice-age cycle’: we ‘eyeball’ that the temperature revolves around a stable mean (when we ‘imagine’ the other Ice age cycles). We can also agree, that if we manage to account for this cyclical movement in the entire record, and the different variance structures in Glacials/Interglacials (and a bunch of other cyclical patterns), the variance structures will be more or less stable. Without resorting to formal testing, I agree that this record indeed looks cyclically-stationary.

    Let’s look at our current Interglacial, in Figure 6. Again, we see (from the end of the last Ice-age) somewhat of a stable mean, and plausibly also a stable variance once the cycles are accounted for. Again, I think that it is plausible to assume that this record too, is in essence stationary.

    However, in both graphs I have indicated our objective sample, with a red circle. Note how our 128 year sample can never ever be stationary! This is also exactly what formal statistical testing shows us!

    In that sense, any calculations involving the probability of modern warming (including your 2008 GRL paper, Eduardo) that assume that the ‘pure’ record is ‘in essence’ stationary, have nothing to do with formal statistics! This is also the reason why we have to treat our temperature variable as non-stationary for the purpose of (any, covariate or not) statistical inference, because on the sample we’re looking at, it is indeed not stationary. This is important!

    I hope you now understand my earlier hammering on the stationarity assumption.

    With the above in mind, I will proceed to show you what you have actually simulated using my simple 4 parameter (i.e. ARIMA(3,1,0)) stochastic trend estimate. Observe Figure 7, as this is the extrapolation you simulated.

    Now, given the non-stationarity of our data, this is the confidence with which we can project our ‘trend’ 1000 years into the future. Again, it is what it is. Now, you state that, given this projection, the model is either inadequate or the data 1880-1935 must be contaminated. First off, I would like to see the formal argument for the latter claim. As for the former, it just shows us what we can expect given the uncertainty in our record; it is our ‘trend projection’.

    Sure, some realizations are completely unrealistic; however, our data, handled formally, tell us that this is the trend.

    Note that the normal distribution is also used to analyze the height of people (and the variance thereof), while its support is (-Inf, +Inf). Are we going to reject the application of the normal distribution for this purpose because we find the prediction that some people will be of height 0, and some of 10 m, unreasonable?

    I’m keen to discuss this though.

    ———————

    4. The power of the hypothesis test on the drift parameter (answer to Lucia)

    ———————

    Hi Lucia,

    First of all, allow me to apologise for my earlier dismissiveness.

    There were several reasons for this, including the fact that I don’t consider myself a ‘homework’ boy in this discussion, who’s supposed to be running every simulation anybody requests out of the blue. Your request furthermore seemed ‘out of the blue’ for two reasons:

    (1) There is no reason to assume our estimator is misbehaved (I posted all diagnostics)
    (2) You could have simply calculated the full power function of the test yourself, using the results posted

    Now, you stated somewhere on your blog that you find it ‘normal’ to ask for the power of a test. If it’s such ‘standard procedure’ (that I’m apparently unaware of), would you be so kind as to provide links to all the hypothesis tests on your blog where you in fact determined power functions of standard (well-behaved) regressions you have performed? :)

    Anyway, it isn’t at all that hard (now just watch me make an error somewhere ;), since the (first difference) system seems well behaved. From the reported p-value you can infer the standard error of that estimate: 0.004060. We can also simply take the normal distribution (which, with this many df’s, is almost identical to the t-distribution). We calculate the accompanying critical sample realization, corresponding to a significance level of 5% (one-sided, so that’s 10% two-sided), and it’s equal to 0.0067. Then, we calculate the Normal CDF (std=0.004060) for a series of mean values (i.e. Ha’s) and generate our power function.

    Here’s a plot, let’s call it Figure 8.

    The answer to your specific question = 0.4518. Everybody else, that’s the probability of rejecting the untrue null hypothesis that the drift parameter is equal to 0, if our actual drift parameter value is in fact equal to 0.00619 (under the given assumptions).
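
    That number is easy to verify; a sketch using only the figures quoted above:

        from scipy.stats import norm

        se = 0.004060                  # reported s.e. of the drift estimate
        crit = norm.ppf(0.95) * se     # one-sided 5% critical value, ~0.0067

        # Power against the alternative Ha: drift = 0.00619
        power = 1 - norm.cdf(crit, loc=0.00619, scale=se)
        print(f"critical value = {crit:.4f}, power = {power:.4f}")   # ~0.45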

    I think that’s very reasonable and before anybody makes that error: this calculated power is not the ‘probability’ that the parameter is in fact equal to 0.00619.

    Also, you said that I was ‘lecturing’ you on Milanković cycles. I did no such thing. I merely pointed out that the result is in fact what (fundamental) physical theory suggests we should find (see point (1) above). Please assume good faith, this discussion here is tainted as is.

    Finally, I saw a comment of yours, again on your blog somewhere, where you wondered whether the OLS estimator is BLUE in the presence of a unit root. The answer to this question is a resounding NO. Estimating a specification assuming trend-stationarity, while your actual DGP is non-stationary, is full-fledged misspecification of your model. As such it violates the very first Gauss-Markov assumption, namely that of a correct specification. Your estimator is thus NOT BLUE, because the very first postulate of the Gauss-Markov theorem is violated.

    ———————

    All the best everybody, VS

    PS. sod, take a chill-pill, man.
    PPS. Kweenie: nunquam! ;)

  874. Pofarmer Says:

    yes, we will see. but he has already achieved his main goal: 100s of people who know neither what a “unit” nor what a “root” is, have been convinced that the existence of a unit root means that climate science will collapse soon.

    Not at all. The hope is that climate science, such as it is, will advance.

  875. Nigel Brereton Says:

    Bart,

    Excellent thread; I have been following for weeks and obviously not commenting, because I am not qualified to. The discussions here now reach far across the Internet and the media world, which is due to the content but also to the host. There seems to be a lot more to come, which it would benefit all to have held in a positive manner; the negativity creeping in from certain parties could be detrimental to a more public audience.
    Many thanks, Bart. I hope that you can continue this discussion through to its conclusion here on this thread.

  876. dhogaza Says:

    If we can satisfy ourselves that VS’s initial point is correct, and that the initial statements of the B&R paper are correct, then regardless of the wider conclusions that B&R attempt to make, climate science and its conclusions will be irrevocably changed.

    The consequences of B&R being true would be that a very large part of *physics* would be “irrevocably changed”. Climate science would be collateral damage, nothing more.

    Think about that a bit before you adopt B&R as being useful, or VS’s defense of same.

  877. VS Says:

    Dhogaza,

    How about doing all of us a favor, and actually giving the formal argument for this ‘proposition’ that you’ve been repeating here for weeks like a broken record player:

    “The consequences of B&R being true would be that a very large part of *physics* would be “irrevocably changed”.”

    Now, that would actually constitute an interesting addition to the debate.

    Everybody else, I just made quite a lengthy post, some 1:30 hours ago, that should clear up a lot of misunderstandings. However, due to the links, it’s still stuck in moderation (because Bart has a life, I guess.. ;)

    Here are the contents:

    (1) What are ‘trends’ and what are they not
    (1a) Realized average rate of increase
    (1b) Trends in temperature data

    (2) Overview of test results

    (3) Answer to Eduardo (on ’stationarity’ of the temperature record, and his results)

    (4) Answer to Lucia (on the power of my hypothesis test on the drift parameter)

    Cheers, VS

  878. DLM Says:

    dhogaza:”The consequences of B&R being true would be that a very large part of *physics* would be “irrevocably changed”. Climate science would be collateral damage, nothing more.”

    Climate science would not be damaged if B&R is true. It would be climate dogma that is undermined.

  879. Steve Fitzpatrick Says:

    eduardo:
    March 27, 2010 at 01:33

    “The real difference lies in the attitude to risk, not in the perception of uncertainties (of course, filtering out all other political and ideological noise which in many cases muddles all this discussion).”

    I agree, although filtering out the ideological and political noise would seem to be a huge challenge. Some people will never accept the need for CO2 restrictions, while others will insist on major restrictions based on even modest expected consequences, in both cases due mainly to politics/ideology. Rightly or wrongly, my personal perception is that many outspoken climate scientists are firmly in the latter group, and I think my perception (right or wrong) is shared by many other people; this in part is what makes reaching agreement on public policy so difficult. There is a lot of sincerely held doubt that climate scientists are acting as honest brokers.

    On the other hand, I think the large majority of people (including me) would support efforts to reduce CO2 emissions if the consequences of continued increases could be shown to be significant with a reasonable level of certainty. This is why I think major efforts to control CO2 emissions will have to wait until climate science can better constrain the estimate of climate sensitivity. Much better ocean heat content and atmospheric aerosol data appear to be needed; these are both at present contributing large uncertainty.

    Bart,

    Sorry this comment off topic; I wanted only to respond to Eduardo.

  880. Bart Says:

    I’ve deleted (actually ‘unapproved’) a number of comments that were clearly off topic. Admittedly there are also some left, but they seem part of an interesting conversation; nevertheless: Please put comments on topics other than the interpretation of the temp record in the open thread.

    If someone knows how to simply *move* comments in wordpress, let me know. In the meantime, or if that’s not possible, then the onus is really on the commenter to put their comment in an applicable thread. (and just in case: If you want to discuss the comment policy, the thread with that title is the place to do so…)

  881. Kweenie Says:

    “It would be climate dogma that is undermined.”
    —–
    Climate Dhogma perhaps?

  882. Al Tekhasski Says:

    VS,
    Probably before performing any analysis of “global temperature”, you need to formulate the definition of what the “realized” global temperature is in pure mathematical statistical terms, and spell out all assumptions you are making about the process and instrument that measures it. Could you start over please?

  883. sod Says:

    However, in all of the preceding we have implicitly assumed that the GISS series in fact represents the actual global-mean temperature, therefore, there is no uncertainty about the realized record, and we can simply state that the realized increase in temperatures is in fact equal to 0.43-(-0.2)=0.63. That’s it!

    here VS is again doing something that is simply false. we do NOT assume that there is no error.

    If one wants to ‘fit’ the line of average increase, we indeed have to use the formula for the OLS estimator. However, this is not statistics but rather linear algebra (i.e. matrix algebra). So none of the resulting confidence intervals calculated by your software package have any meaning.

    this special part of “linear algebra” actually has a name of its own. it is called “probability theory”.

    and it is the field that all statistics is founded on.

    Concluding, does everybody see that there is no ‘statistics’ involved here! Just arithmetic! There is no need to talk about ‘estimating’ the ‘rate of change’, because we are talking about the realized record!

    THE RESULT OF THE CALCULATION DOES NOT REPRESENT A ‘TREND’!

    this is funny. because my textbook, Krengel’s “Einführung in die Wahrscheinlichkeitstheorie und Statistik” (3rd edition, 1991), does mention it on page 168, with a historic introduction about Gauss founding error theory around this.

  884. VS Says:

    Hi Al,

    You’re right, and wrong, at the same time :) I’m juxtaposing this analysis with most of the statistical analysis performed in climate science, which (implicitly) assumes (trend-)stationarity of the instrumental record.

    For this purpose, I believe all assumptions are listed.

    Cheers, VS

  885. VS Says:

    sod,

    Gauss invented OLS in order to estimate the parameters of an ellipse (the planetary orbits). His errors were measurement errors, and most importantly, his data generating process (i.e. the ellipse), was fully specified.

    Read your book again; this time don’t skim.

    VS

  886. steven mosher Says:

    Thanks VS for answering the power question. I realize that it’s hard to conduct a blog conversation at two different places.

  887. steven mosher Says:

    Hey Bart,

    you might take VS’s long comment
    and write an intro from your perspective, then his piece, then a final comment from you.

    Use that as a separate post. Just a thought. It might help to re-center the discussion.

  888. kim Says:

    VS’s answer to Lucia is posted at the Blackboard.
    ==============

  889. VS Says:

    Hi steven,

    We stay here ;)

    If you think the long post needs exposure, link to it.

    Cheers, VS

  890. Al Tekhasski Says:

    VS, I see your point now :-) You are just yanking their chains.

    Regarding the “unphysicality” of a random walk, the bounding argument is silly. Using similar reasoning, the entire mathematics of the Maxwell-Boltzmann distribution can be declared “unphysical” on the grounds that the distribution is formally unbounded, and therefore some molecules from the tail of the M-B function would move faster than the speed of light, which is physically impossible. So, one could declare the entire kinetic theory of gases non-physical bunk :-)

  891. dhogaza Says:

    Climate science would not be damaged, if B&R is true. It would be climate dogma that is undermined.

    Of course it would. If it were actually true that 1 w/m^2 of solar forcing causes 3x the amount of warming of 1 w/m^2 of CO2 radiative forcing, much of what we know of physics goes in the toilet, climate science with it.

    The notion that a watt is a watt is a watt isn’t “climate dogma”, it’s physics. Note that the unit carries no information regarding sourcing.

    Steve Fitzpatrick:

    However, I think the hard core folks like sod will never accept anything but a conclusion of high certainty of high climate sensitivity.

    Climate sensitivity of about 3C per doubling of CO2 is arrived at by a multitude of lines of evidence unrelated to the time series VS claims to be analyzing. Again, just how much of physics do you think VS has successfully flushed down the toilet?

    If it weren’t climate science being discussed, no one would pay attention, because typically one doesn’t find claims that statistical techniques applied by an economist show that, oh, Newton’s laws of physics are wrong on the macro scale and therefore that famous apple fell up into earth orbit rather than plunking Newton on the head, for instance.

    If people read that, they would laugh. B&R’s analysis, in particular, is as laughable regarding physics, yet … there seems to be a lot of eagerness to accept their analysis as valid.

  892. Alan Wilkinson Says:

    “If it were actually true that 1 w/m^2 of solar forcing causes 3x the amount of warming of 1 w/m^2 of CO2 radiative forcing, much of what we know of physics goes in the toilet, climate science with it.”

    No, it simply means the forcing analyses have been done incorrectly or inadequately. In my opinion, there is every chance that is true. I will be very surprised if it is not true.

  893. DLM Says:

    DLM:Climate science would not be damaged, if B&R is true. It would be climate dogma that is undermined.

    dhogaza: Of course it would. If it were actually true that 1 w/m^2 of solar forcing causes 3x the amount of warming of 1 w/m^2 of CO2 radiative forcing, much of what we know of physics goes in the toilet, climate science with it…newton’s laws of physics…apple…etc.

    I don’t know what Newton had to say about the radiative forcing of CO2, but I do see your point: The alleged climate science consensus is not dogma, it is gravity.

    I can’t imagine why all those smart people, who recently congregated in Copenhagen, weren’t persuaded by the physics to do something substantive about our impending doom. Were they blinded by statistics?

  894. John Says:

    dhogaza:

    That’s easy, dhogaza: they are “special watts”, just like the climate sensitivity degrees are “special degrees”.

  895. steven mosher Says:

    Yes VS, staying here is probably best, and civil to boot.

    I’ve made my suggestion that your comment be used as a head post here.
    That suggestion is based on my experience, which of course everyone is welcome to take issue with, but it is my experience that good things would result. Basically you try to clear away the excessive tangents that have clouded the issue.

    As I noted over at Lucia’s, I do get what you are saying. Thanks for stopping by to indicate that I understood you.

    Anyways, no more suggestions from me. Oh, and thanks to Bart for putting up with the increased comment traffic.

  896. Willis Eschenbach Says:

    dhogaza Says:
    March 27, 2010 at 22:04

    … If it were actually true that 1 w/m^2 of solar forcing causes 3x the amount of warming of 1 w/m^2 of CO2 radiative forcing, much of what we know of physics goes in the toilet, climate science with it.

    The notion that a watt is a watt is a watt isn’t “climate dogma”, it’s physics. Note that the unit carries no information regarding sourcing.

    You made this same claim before, upthread, and I replied:

    Say what? Since different forcings have different frequencies, why would they not have a different response? Consider a 1W/m2 change in solar vs GHG forcing on the ocean. Solar penetrates the ocean to a depth of tens of metres. Longwave is absorbed in the first mm of the oceanic skin surface. Which will cause a greater rise in the skin temperature? Which will cause a greater rise in evaporation? How will those possibly have the same climate response?

    Or you might take a look at “Efficacy of Climate Forcings”, JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 110, by Hansen et al., which says:

    We find a substantial range in the ‘‘efficacy’’ of different forcings, where the efficacy is the global temperature response per unit forcing relative to the response to CO2 forcing.

    The deal with scientific discussions is that when someone gives a cited opinion from a peer-reviewed journal that totally disagrees with what you have claimed, you need to show why it is wrong. If you just mindlessly repeat your initial claim, people just laugh at you.

    I say that one watt of LW has a visibly and provably different effect on the ocean from one watt of SW. Yes, as you point out, the unit Watts “carries no information regarding sourcing.” But to offer another example, that doesn’t mean that a Watt of SW radiation passes through the atmosphere the same way as a Watt of LW radiation.

    Hansen says that the results of one watt of forcing from different sources can have a “substantial range” in values.

    You say “A watt is a watt is a watt,” that they all have identical results.

    Your turn. As my high school science teacher used to say, “Show your work.”

  897. eduardo Says:

    Dear VS,

    I am trying to follow your reasoning and I am realizing that we are talking about different things.
    Your aim is to estimate a possible range of predictions given the data we have at hand. That’s why you compared the range of the OLS prediction and the range of the I(1)-based prediction. Well, from some point of view that could be interesting – I see your econometrics mindset, if I may say :-)

    But this is not what I was aiming for. Now try to see my point of view. My point is to find out whether the observed trend can be stochastic or forced, and, if it is externally forced, whether natural or man-made.

    Consider the following example, which actually can be very well implemented with your software. Let us take the observed daily temperatures in New York from January to May. Now you apply your I(1) tests and conclude that this series is I(1) (I haven’t done it, but let us assume so for the moment). According to your I(1) model you estimate your prediction range for July. The same thing now with a simple OLS model. You compare both results and maybe see that the I(1) range is more realistic.

    I say to myself: OK, interesting, but.. so what? The question I would like answered is whether the temperature increase from January to May is forced by the sun; I am not interested in a phenomenological range of predictions for July.
    Do you see the difference in the question we are interested in?
    So my setting would be the following: how can I model the statistical properties of a putative timeseries that is not forced by changes in solar insolation, a series that would only display random internal stochastic variations? Once I arrive at such a model, I would try to generate the range of expected values from January to May and see whether or not this range is statistically compatible with the actual observation. If it is not, I would claim that random internal variations cannot explain the rise in temperatures from January to May. For this claim to be credible I need a model that describes the essential statistical properties of internal random variations (the null hypothesis). I could choose a model that simply states: ‘random variations are cyclical with a period of 12 months’, or alternatively ‘random variations cause an increase of temperatures from January to May’, or alternatively ‘random variations grow in amplitude unbounded’. Well, I would be free to do that, but I am not going to convince anyone with those models.

    That’s why, for the analysis of population height, I can model it with a Gaussian distribution: I am not predicting the height of a single individual. Rather, I am, for instance, comparing two populations and want to know if they are significantly different, or, more similarly to our problem, I want to know the likelihood that an individual belongs to a certain population. By the same token, in statistical mechanics I am not interested in predicting the velocity of a single particle, but in estimating thermodynamical averages based on the Maxwell-Boltzmann distribution (btw, this distribution is indeed wrong when the particles are relativistic or quantum effects are important).

    So usually climatologists are not trying to predict statistically into the future. Some studies do try (I am aware of a few), and in this case I would argue that you have a point: the ‘trend’ has to be modeled realistically. But this is by far not the basic issue. The basic issue is, as I said, to find out if we are observing ‘something unnatural’, and for this we have to model ‘what is natural’. What are the statistical properties of the natural variations? This is an interesting topic, and for it we could certainly use help from statisticians, which I would be happy to accept. But you need to also make an effort and try to understand the physical point of view and the physical constraints, otherwise our populations will always be statistically different and will not overlap.
    cheers

  898. HAS Says:

    I don’t want to get between eduardo and VS here, but eduardo, I think the preoccupation with forecasting in its own right is largely a red herring. However I do think that forecasting should be part of the key stock-in-trade of climate scientists (just as it is for economics).

    About the simplest means to a testable hypothesis about whether something external is changing the natural order is to develop a model using data from when we know that all was well with the world, and then look at how well that model fits with data observed when unnatural things were occurring.

    This is what VS has done, and basically found (admittedly using a very simple model and dataset) that you can’t reject the hypothesis that the recent observations are just a product of the “natural” system.

    I’d personally go a bit further and say forecasts in time are essential to test models in climate science (as they are in economics). This is because the climate is so complex that there are very real limits on the ability to properly specify what’s happening directly from the laws of physics. This means that for example to test the impact of AGW you really need to rely on observed climate before the mid-20th century to credibly describe the “natural” world. Because of this you are inevitably comparing “what was” with “what has recently been” in a system that is dynamic in time.

    I should be clear that I’m not saying subsystems can’t be properly specified, just that under the circumstances validation over time should be expected.

    Finally I just note in passing that climate change models all seem to start with a specification that they validate against contemporary climate observations, and then use that model to predict (perhaps not statistically, more’s the pity) into the future.

  899. ge0050 Says:

    The graph at the start of this page, showing 95% confidence bands: how was that arrived at? Two standard deviations in a normal distribution gives 95% confidence. However, this requires that average global temperature be normally distributed.

    The problem in using statistics is that you need to know the distribution of the data to arrive at a meaningful result. It isn’t sufficient to assume that the distribution is normal. Most statistical methods rely on a normal distribution and when applied incorrectly will lead to misleading results.
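
    One quick way to check the normality assumption rather than assume it, sketched with scipy (synthetic fat-tailed data standing in for whatever series is under scrutiny):

        import numpy as np
        from scipy.stats import jarque_bera

        rng = np.random.default_rng(0)
        x = rng.standard_t(df=3, size=500)   # fat-tailed sample, not normal

        stat, p = jarque_bera(x)
        print(f"Jarque-Bera statistic = {stat:.2f}, p = {p:.4f}")
        # a small p-value rejects normality; normal-theory 95% bands then mislead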

    For example, the recent financial crisis was based in large part on the assumption that housing prices would continue to rise: low interest rates were forcing housing prices up, and since interest rates were low, it was a safe bet that housing prices would remain high. The world has since learned otherwise.

    What we know about the real world is that it is rarely normally distributed. However, most of us continue to act as though the world is normally distributed, and we get routinely burned and taken to the cleaners as a result.

    Every year some new huckster invents some new way to fleece us using just these techniques. They present us with some opportunity. We assume the data is normally distributed and invest (bet) accordingly. The huckster knows the data isn’t normally distributed; they bet against us and clean up when things don’t turn out as we expect.

    What I read here is that VS is explaining, in a very patient fashion, the current mathematical theory behind this: how statisticians have learned to identify data that cannot be relied upon statistically to make predictions, and how to overcome these problems.

    Further, he is explaining that “Average Global Temperature” does not appear to be a reliable measure on which to base predictions. To make a prediction based on “Average Global Temperature”, a correction factor must be applied.

    Further, he is explaining that once this correction factor has been applied, then it appears that “Average Global Temperature” is not well correlated with CO2 level. It is however well correlated with Solar Activity.

    This has far reaching implications for climate science. It doesn’t disprove climate science, nor does it lessen the value of climate science. What it does show is that if the current theories in statistics are correct, then any conclusion based upon statistical treatment of Global Average Temperature may be misleading.

    This would directly affect many of the studies before the IPCC that rely on Average Global Temperature to draw statistical conclusions. It suggests that these each need to be reviewed in light of this new result to see how the conclusions are affected.

    This result should not be a surprise. It has long been argued that Average Global Temperature was not a meaningful statistical measure, at least not when used directly. The real world often behaves in ways contrary to what “looks right”. What was lacking was a rigorous method to demonstrate this.

  900. Al Tekhasski Says:

    Eduardo:
    When people tried to evaluate statistical properties of known chaotic attractors, even in few-dimensional systems that are PRECISELY mathematically defined (in the form of differential equations), they had to use at least 10^9 or more data samples.
    (e.g. http://www.fi.isc.cnr.it/users/antonio.politi/Reprints/017.pdf )
    The weather attractor alone needs 10^6 samples to get a single data point every half hour. Any attempt to try anything on only 150 data points is total delusion and ignorance.

    Regarding the Maxwell-Boltzmann distribution, as I already said, it is always formally wrong, because regardless of the particles being non-relativistic or non-quantum, ANY Maxwell-Boltzmann distribution contains the relativistic tail, formally speaking. So, if someone wants to split hairs and try to debunk the kinetic theory on the same formal basis of “boundedness” as you do in this particular climatological case, he will be formally correct unless physicists specify a reasonable cut-off and define an approximation. Saying that “global temperature” cannot exhibit a [nearly] random walk because it is bounded by the solar constant and albedo is equivalent to saying that the Maxwell-Boltzmann distribution is bunk.

  901. JB Says:

    Hi dhogaza,

    I stopped reading your comments some time ago because you were not adding anything to the thread and were slowing down my reading. I bet many other people reading this thread (and there are many) are doing the same.

    After reading VS’s last excellent reply, however, I did read your repeating comment again. You seem to be implying that there is nothing else in climate science besides CO2 forcing. While it is true that many of the fanatical share this view, I think I have seen one or two climate scientists talk about things other than CO2. Could there be no other explanation for a lack of statistically significant warming other than the physics being broken? Does physics exclude negative feedback mechanisms? Are there no examples of stable equilibrium in nature?

  902. Al Tekhasski Says:

    Regarding solar forcing and CO2 forcing:

    The concept of radiative forcing from GH gases is based on the idea that the gas mixes fast and the effective OLR boundary rises while the temperature profile stays the same (higher = colder), and this creates a deficit of OLR, which allegedly must cause global warming. However, for people who are familiar with the specifics of diffusion in turbulent media, the ends of this forcing idea do not fit together.

    First of all, both CO2 concentration and temperature are mathematically similar entities with regard to how they propagate – they both are so-called “passive scalars”, and are governed by mathematically identical equations. It is also known that the effective coefficients of diffusion in strongly turbulent media are identical (as characterized by turbulent Prandtl and Schmidt numbers, which both are of the order of 1). Therefore, it is not possible to have the CO2 to quickly diffuse and “mix” while the average temperature profile of turbulent air stays unchanged: the concentration and temperature perturbations equilibrate at the same rate.
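
    To make “mathematically identical equations” concrete, in LaTeX notation (a sketch: θ stands for either the temperature perturbation or the CO2 concentration, u is the velocity field, κ the effective eddy diffusivity):

        \[
          \frac{\partial \theta}{\partial t} + \mathbf{u}\cdot\nabla\theta
          = \kappa\,\nabla^{2}\theta
        \]

    Turbulent Prandtl and Schmidt numbers of order 1 say precisely that the effective κ is about the same whether θ is heat or an admixture.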

    Even if the bottom boundary has substantial thermal inertia (actually, how substantial is it, if in one night the surface drops several degrees?), the radiative cooling rate in the tropopause is fast enough (1-2C/day) to eliminate any energy imbalance. Therefore, it is very physically possible that the hypothetical IPCC-Hansen “radiative forcing” may have only a fractional effect as compared with watts from real insolation. It could even be a very small fraction, which would explain the discrepancy between the barely observable warming and the alleged IPCC effect.

  903. DLM Says:

    HAS:…the climate is so complex that there are very real limits on the ability to properly specify what’s happening directly from the laws of physics…

    OK, that’s true. But it shouldn’t stop us from making convenient assumptions, to fill in the blanks. And if we are not encumbered with sophisticated knowledge of statistics, and we are not squeamish, we are certainly free to mine for spurious trends, and suspect correlations/causations to support our assumptions:)

  904. Tom Fuller Says:

    Didn’t Andrew Dessler’s AIRS study provide some indication that CO2 doesn’t mix quickly? It seemed from the satellite footage he showed (and from his commentary) that CO2 billowed out like clouds from primary sources and hung together for a lot longer than previously thought.

  906. Al Tekhasski Says:

    There is no contradiction. Cloud upwelling and thunderstorms are components of the general turbulent cascade in the atmosphere. Since molecular mixing is veeery slow, CO2 cannot mix faster than the turbulent eddies can mix, so the Dessler observation is just a confirmation of this general idea. In climatology, “fast” means at least half a year. The real contradiction is that both temperature and GH mix at the same rate, in a matter of a few days, and no imbalance occurs. That’s why CO2 does not seem to create any real forcing; it somehow dissolves in the overall complexity of atmospheric dynamics. IMO of course.

  907. Steve Fitzpatrick Says:

    dhogaza,
    “Again, just how much of physics do you think VS has successfully flushed down the toilet?”

    None.

    I think that if you actually read this thread, you will note that I (and others) clearly say that variability in the temperature data has nothing to do with the correctness of physics. I have no idea why you imagine VS or anyone else is saying this, and even less idea why you think repeating your refrain about the results refuting physics (many, many times) makes a useful contribution to the thread.

    What VS has shown clearly is that if 1) the pre-1935 temperature time series from GISS is a fair representation of the “natural temperature variability”, then 2) the structure of the data says that temperature change through 2009 is (just barely) consistent with the null hypothesis. This does not mean that radiative forcing by GHGs is not real, and it does not mean that the measured warming during the instrument period was not, in whole or in part, caused by GHGs. What it does mean is that the level of natural variability in the data is such that whatever GHG forcing effect there is (and there must be some, because basic radiative physics is of course correct), that GHG effect is not quite statistically significant at 95%. VS’s analysis addresses how to correctly handle natural variation in the temperature data. It says nothing about the correctness of physics (or chemistry, or anything else), and I find it most odd that you seem to imagine other people think so.

    If you wish to argue with the assumption that pre-1935 data is representative of the unperturbed system (as I think eduardo is suggesting, if I understand him correctly), or if you think there is some error in VS’s analysis, then please do explain.

    But I hope you will offer reasoned explanations instead of shrill and repetitive comments that are irrelevant to both the temperature data and VS’s analysis of that data.

  908. Steve Fitzpatrick Says:

    Tom Fuller,

    Could be, Tom, but compared to the effective residence time for CO2 in the atmosphere (many years), local mixing time (hours to days) would not seem to be so important from the point of view of overall forcing.

  909. Dave McK Says:

    This really looks like the cross section of the heat flow in a ‘passive’ heat pump showing evaporation, convection, condensation, precipitation and radiation at the cooling end overlaid on the temperature profile of the refrigerant.
    That’s what I see.

    Increasing the heat capacity of the working fluid of a heat pump does not raise the temperature if the rate of flow remains the same and the sink is infinite.

  910. sod Says:

    This result should not be a surprise. It has long been argued that Average Global Temperature was not a meaningful statistical measure, at least not when used directly. The real world often behaves in ways contrary to what “looks right”. What was lacking was a rigorous method to demonstrate this.

    this is how the false and useless claims made by VS will be spun by “sceptics”. (also note Tom Fuller’s completely false ideas about CO2 not being well mixed, mentioned in this context)

    this is misinformation being spread, and nothing else.

    here is one of the parts that most of the “sceptic” readers prefer to ignore:

    First we performed a series of unit root test here, and found that in all but two cases, we infer that the series contains a unit root, and is therefore non-stationary, and it therefore contains a stochastic trend.

    as Tamino has demonstrated, the case is far from as obvious as VS claims. basically he has been looking for a method that provides the result he wants to get.

    ————————

    the most important point here is, that real scientists are trying to make confidence intervals SMALLER. they want to increase our knowledge, not reduce it.

    “sceptics” have the opposite target. bigger confidence intervals allow for more absurd claims.

    so let us look at what the VS method will tell us about the future, if we take the last 50 years as a new basis. and then see VS explain how he would handle another “random walk” close to his upper boundary…

  911. Al Tekhasski Says:

    Sod, you need to calm down. Please. Real scientists cannot make any intervals smaller; the confidence interval is whatever it is, and is determined by the quantity and quality of data, in full accord with a branch of mathematics called “statistics”. If data are vastly undersampled and subjectively massaged, confidence intervals will not decrease no matter how hard you try (unless you are an “unreal scientist” and science fiction writer).

    And speaking about level of determinism in climate proxies, I am sure you are familiar with this mathematically derived result:

    “A number of records commonly described as showing control of climate change by Milankovitch insolation forcing are reexamined. The fraction of the record variance attributable to orbital changes never exceeds 20%. In no case, including a tuned core, do these forcing bands explain the overall behavior of the records. At zero order, all records are consistent with stochastic models of varying complexity with a small superimposed Milankovitch response, mainly in the obliquity band. Evidence cited to support the hypothesis that the 100 Ka glacial/interglacial cycles are controlled by the quasi-periodic insolation forcing is likely indistinguishable from chance” – C. Wunsch

    Click to access milankovitchqsr2004.pdf

  912. John Whitman Says:

    “the most important point here is, that real scientists are trying to make confidence intervals SMALLER. they want to increase our knowledge, not reduce it.”

    Based on what I conclude has been shown in this stream of >900 comments, I would modify that statement as follows:

    **** “the most important point here is, that real scientists are trying to make confidence intervals SMALLER. they want to increase our knowledge, not reduce it.” And to the extent that those scientists are doing so by carefully listening to the data to see what it actually says, then it is productive. But to the extent that any of them are not listening to the data, then they should revisit their statistical processes. ****

    Sod, thank you. Your comment helped me to realize a generic problem with some of the temperature analyses that have been performed in the past. I appreciate the insight that helped me realize it.

    John

  913. Alan Wilkinson Says:

    sod: “The problem aint what you don’t know. It’s what you know that aint so – Will Rogers”

  914. John Whitman Says:

    sod,

    Question

    In your view, is a ‘skeptic’ someone who does not agree with your above comment ‘sod Says: March 28, 2010 at 07:40’?

    Knowing this would help me to understand your comments more fully.

    John

  915. sod Says:

    First we performed a series of unit root test here, and found that in all but two cases, we infer that the series contains a unit root, and is therefore non-stationary, and it therefore contains a stochastic trend.

    you are drawing conclusions that are not based on facts.

    VS is using a method with absolutely no value. he can tell us that 50 years from now, temperatures will be 1°C higher or lower than they are this year.

    this is something that most people could tell you, without any statistics at all.

    so can you tell me what his method tells us about the 60 years to come, and how we will handle the next random walk at the upper boundary?

    or will you defend his claim that a linear trend is not a trend?

    or his idea of simply looking at two data points?

    here is what Bart wrote above:

    How could taking the difference between two arbitrarily taken points say anything useful about what happened, in the presence of large amounts of variability? If you take 2009 or 2007 as your end year instead of 2008 the estimate would suddenly be more than 0.1 degrees (i.e. more than 15%) larger. Clearly, that is *not* a good way to estimate what the increase in global avg temp has been. Even though OLS is invalid to use for these dataseries, claiming that merely taking the difference between two arbitrary points is better seems a far stretch.

    this is a simple case of the one-eyed man being king in the land of the blind. VS has only one target: to confuse the uneducated. and he is having serious success with that, as is demonstrated in the majority of replies by people who call themselves “sceptics”.

  916. sod Says:

    hm, the starting quote in the post above was the wrong one…

    i wanted to quote John Whitman from above:

    And to the extent that those scientists are doing so by carefully listening to the data to see what it actually says, then it is productive. But to the extent that any of them are not listening to the data, then they should revisit their statistical processes. ****

    ps: but i did notice that “sceptics” prefer to ignore those tests that do not infer a unit root

  917. Contrarian Says:

    eduardo,

    “But these is by far not the basic issue. the basic issue is, as I said, to find out if we are observing ’something unnatural’, and for this we have to model ‘what is natural’. What are the statistical properties of the natural variations?”

    My take on VS’s arguments, as they relate to your question, is that we cannot decide what is natural by examination of the statistical properties of the realized temperature series. We have to decide what is natural on independent grounds: fundamental theory, controlled experiments, and other observed empirical relationships. We build a model based on those assumptions, and then see whether it can replicate the observed series, given the known values of the operative variables over the course of the series. What climate modelers seem to be doing instead is tweaking the assumptions and parameters as necessary in order to reproduce the observed series, *on the assumption* that the observed trend, or the most recent segment of it, is unnatural. That is question-begging.

    In other words, we can’t use the properties of the series itself to inform construction of the model, because the observed series is compatible with many conceivable DGPs. It is possibly (for all we can tell based on its statistical properties alone) stochastic.

    Now as determinists, we have to assume that there is some physical reason for every change in a temp series, from day-to-day, from season-to-season, from year-to-year, and from century-to-century. But we cannot deduce those reasons by simply examining the series. We can only use the series to validate driving factors for which we have independent evidence or which derive from fundamental theory.

  919. HAS Says:

    sod, I can see that you are struggling to understand what is being discussed here, which has little to do with whether one is a believer, an agnostic or an atheist (whatever each of those might mean).

    It’s about how to do the science necessary to better understand this complex world we live in.

    People here are pretty tolerant of those that don’t completely understand the statistics, physics etc (at least they’ve been pretty tolerant of me), and I suspect are pleased to have contributions from within one’s areas of expertise (and I hope I help a bit around some of the meta issues involved in modelling large complex systems and drawing inferences from those models, even if my experience in climate science is limited).

    The trick to getting the best out of this thread is to read what is posted, ask specific questions if you don’t understand something (but don’t feel slighted if you don’t get a response), and if you have expertise then express a well-founded view that helps the debate progress.

    Otherwise, as I said when you first started commenting, people will just think you’re into doing graffiti and you’ll increasingly get treated accordingly.

  920. Nick Stokes Says:

    VS
    However, this is not statistics but rather linear algebra (i.e. matrix algebra). So none of the resulting confidence intervals calculated by your software package have any meaning.
    This seems bizarre to me. The calculated trend is just a weighted average of the data, and as such has a standard error, which is what the package quotes. Is this any less meaningful than, say, the standard error of the mean?
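
    To be explicit about “weighted average”, in LaTeX notation (a sketch for data y_i observed at times t_i):

        \[
          \hat{\beta} = \sum_i w_i\,y_i,
          \qquad w_i = \frac{t_i - \bar{t}}{\sum_j (t_j - \bar{t})^{2}},
        \]

    i.e. the fitted trend is a fixed linear combination of the data, and the package’s standard error is computed for that linear combination in the usual way.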

  921. HAS Says:

    Nick Stokes

    VS is referring to the confidence intervals derived from the SD, not to the SD per se. The confidence intervals depend on both the SD and the distribution, so if the distribution isn’t normal your classic confidence intervals turn to custard.

  922. VS Says:

    ———————

    On Zorita et al (2008) (reply to Eduardo)

    ———————

    Hi Eduardo,

    Thank you for your reply :)

    I think we are still talking past one another. I have no issues with your hypothetical what-if analysis. My claim is that this ‘if’ is simply not met, ergo the calculated probability is in fact very uninformative.

    This has to do with the data generating process, or DGP. Note that this does not refer to the ‘general’ process generating all of the observations (e.g. Vostok ice-core observations), but rather to the process generating our sample of observations. I have elaborated clearly in my previous post why I believe that the assumption of stationarity, on the instrumental record, is unjustified, even though long term temperatures might be stationary. I listed both visual illustrations and an overview of formal arguments, and references. Furthermore, your own calculations (i.e. the Whittle method) imply non-stationarity.

    Now, since you are relating your simulation analysis to the realized record, or put differently to the DGP behind our 128 year sample, for the purpose of describing unprecedentedness, you are simply not allowed to both ignore the properties of the DGP and claim that your result has anything to do with formal probabilities at the same time.

    Now, allow me to illustrate the DGP/sample issue with an example:

    Let’s say that somebody tells you that she has tossed ‘5 or higher’ with a fair die, five times in a row (i.e. a sample realization). You are interested in assessing the probability with which this happens, so you calculate (2/6)^5, and conclude that the probability is rather low. However, in this very instance, you have in fact assumed something about the data-generating process. Namely, that it is in fact a 6-sided die which generated those observations. On closer inspection, you notice that the individual in question actually used a 12-sided die, implying a probability of (8/12)^5.

    Don’t you agree with me that the first probability is rather uninformative? Well, that’s my take on your paper. Your probability is akin to the probability computed as if the individual had used a 6-sided die. We have shown that the die is in fact not 6-sided (i.e. the series contains a unit root).
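
    For concreteness, the arithmetic of the example as a minimal Python sketch (the numbers are exactly the ones used above):

        # Probability of throwing '5 or higher' five times in a row,
        # under the two assumed data generating processes.
        p_assumed_6_sided = (2 / 6) ** 5     # faces {5, 6} out of 6      -> ~0.0041
        p_actual_12_sided = (8 / 12) ** 5    # faces {5,...,12} out of 12 -> ~0.1317
        print(p_assumed_6_sided, p_actual_12_sided)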

    This is why I find the conclusions of Zorita, Stocker and von Storch (2008) irrelevant, statistically speaking. However, it seems that your paper has provided ample ammunition to those determined to claim ‘scientific superiority’ in the AGWH debate. Again Eduardo, you seem like a very reasonable and likeable guy (viz. your politeness in light of my admittedly over-provocative approach, my sincere apologies for that, and for what follows), but in the context of this debate, I simply cannot let the conclusions of your paper, as most readers interpret them, ‘stand’. I sincerely hope you understand that.

    ***

    Now that I have read your paper more carefully, I’ve also come across a clear technical error that would, in my opinion, in its own right be sufficient to invalidate your conclusions, in terms of formal statistics.

    I cite from page 1 (bottom):

    “The parameters of the autoregressive models were estimated from the observed records in the period up to 1960 to limit the influence of the anthropogenic forcing. For all but two regional records (South Australia and Southern South America) an autoregressive model of order one (AR-1) would be adequate. For these two regional records, the Durbin-Watson test indicated the presence of autocorrelated residuals in a AR-1 processes.”

    Disregarding the untested assumptions, which put the cart before the horse (e.g. ‘anthropogenic forcings influencing the record’), you also employed the DW test to evaluate the residuals of an AR (i.e. autoregressive, i.e. containing a lagged dependent variable as regressor) process. However, the Durbin-Watson test is invalid in the presence of an AR term. I cite from wikipedia:

    “An important note is that the Durbin–Watson statistic, while displayed by many regression analysis programs, is not relevant in many situations. For instance, if the error distribution is not normal, or if the dependent variable is in a lagged form as an independent variable, this is not an appropriate test for autocorrelation” (bold added)

    This elementary error should have been weeded out by a reviewer. I hope that you can now understand my skepticism towards any statistics/probability related claims emanating from climate-science ‘peer-review’. With all due respect, your article was published in the top journal in climate science (i.e. GRL, i.e. an A-journal) containing an error that would flunk a first year econometrics student on his mid-term.

    ***

    In your reply, you also stated:

    “I say to myself: Ok, interesting, but..so what? The question that I would like to know is whether the temperature increase form January to May is forced by the sun, I am not interested in the a phenomenological range of predictions for July.”

    Now this is where (polynomial) cointegration kicks in. We’re not there yet, as we’re simply discussing trends now ;)

    ***

    Finally, you say that I ‘think like an econometrician’, and I thank you for this generous compliment :) In light of uncertainty, which follows from limited observations, the only valid approach is the formal scientific one. It is this approach which provides us with certainty about the uncertainty (e.g. significance levels, confidence intervals). In fact, if such a formal approach is not adhered to, all ‘statistics’ related claims (viz. most if not all IPCC-consensus-generated ‘probability’ claims) are better classified as astrology, rather than science.

    All the best, VS

    PS.

    Hi Nick Stokes,

    I can imagine that it seems ‘bizarre’, since your interpretation seems to be the going one in climate science, but the statement holds :)

    Note the discussion about the DGP above. Those standard errors refer to our ‘estimate’ of the beta parameter (i.e. the slope) governing the trend-stationary DGP. Since trend-stationarity is violated in the presence of a unit root, the confidence intervals/standard errors are meaningless (because formal testing has indicated that this particular beta parameter ‘doesn’t exist’). Your question is however not trivial, cheers.
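
    To illustrate why those intervals mislead, a minimal simulation sketch (Python; the sample size of 128 matches our record, everything else, including the seed, is arbitrary). A driftless random walk has no trend parameter at all, yet conventional OLS confidence intervals keep ‘finding’ one:

        import numpy as np

        rng = np.random.default_rng(0)
        n, reps, rejections = 128, 2000, 0
        t = np.arange(n, dtype=float)
        tc = t - t.mean()
        for _ in range(reps):
            y = np.cumsum(rng.standard_normal(n))   # driftless random walk
            yc = y - y.mean()
            beta = (tc @ yc) / (tc @ tc)             # OLS slope (intercept absorbed)
            resid = yc - beta * tc
            se = np.sqrt((resid @ resid) / (n - 2) / (tc @ tc))
            rejections += abs(beta / se) > 1.96      # nominal 5% two-sided test
        print(rejections / reps)                     # far above 0.05: spurious trends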

    PPS.

    sod,

    This is the very last time I reply to one of your ‘comments’. I suggest that other readers follow my lead.

    You state:

    “ps: but i did notice that “sceptics” prefer to ignore those tests that do not infer a unit root….”

    Ignore?

    Have you even read my formal argument referenced in the post you are referring to? I performed Monte-Carlo simulations on the only two instances, out of over 15, that fail to spot the unit root. Namely, the ADF test with intercept and trend employing 0 lags, and the PP test with intercept and trend. Respectively, here and here.

    In both instances, the actual rejection rate of the true null hypothesis is around 85%, while we use a nominal 5% significance level (i.e. we believe that we are rejecting in only 5% of the realizations, while the true rate is over 80%; this is extremely heavy bias). The results are presented here.

    I performed the same Monte-Carlo analysis on the ADF test with 3 lags, the one that gives the consistent result (i.e. the presence of a unit root), and found that the rejection rate is almost exactly 5% (i.e. equal to our nominal significance level).
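
    For anyone who wants to reproduce the flavour of that experiment, a minimal sketch (Python/statsmodels; the AR coefficients below are illustrative stand-ins, not my estimates from GISS):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(1)
        n, reps = 128, 500
        ar = np.array([-0.4, -0.3, -0.2])  # hypothetical AR(3) terms for the differences

        def simulate():
            d = np.zeros(n)
            eps = rng.standard_normal(n)
            for i in range(3, n):
                d[i] = ar @ d[i-3:i][::-1] + eps[i]
            return np.cumsum(d)            # integrated, so a unit root by construction

        for lags in (0, 3):
            rej = sum(adfuller(simulate(), maxlag=lags, regression="ct",
                               autolag=None)[1] < 0.05
                      for _ in range(reps))
            print(lags, rej / reps)  # expect heavy over-rejection with 0 lags,
                                     # roughly the nominal 5% with 3 lags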

    As far as I can infer from your comments however, this information is wasted on you. Your contributions are not only unintelligent, they are very disruptive. I sincerely hope that Bart does something about your pollution of this otherwise stimulating discussion.

    Good day

  924. mikep Says:

    On statistics and climate journals, see also the Schmidt paper in the International Journal of Climatology, which appears to confuse autocorrelation in the regression residuals with autocorrelation in the dependent variable, another elementary mistake which passed peer review (one of the reviewers was Phil Jones, who actually thought the mistake was a valuable contribution). And the journal wouldn’t allow a reply…

  925. VS Says:

    Hello mikep, welcome to the jungle :)

  926. Ibrahim Says:

    VS

    Duim :-)

  927. sod Says:

    This is why I find the conclusions of Zorita, Stocker and von Storch (2008) irrelevant, statistically speaking. However, it seems that your paper has provided ample ammunition to those determined to claim ’scientific superiority’ in the AGWH debate.

    is it too much to ask you to do 5 min of research on the position of Hans von Storch in this debate?

    sorry, but this is among the best illustrations of what is going on here: “sceptics” assume that there is a lot of truth in the complicated stuff that you write, which they don’t understand. they ignore the very obvious errors in the stuff which they do understand.

    i follow a different type of “scepticism”: with the amount of obvious errors in the simple parts of your posts, the probability that there is anything of value in the complicated parts is very close to zero. (and at best the result of pure chance)

    Have you even read my formal argument referenced in the post you are referring to? I performed Monte-Carlo simulations on the only two instances, out of over 15, that fail to spot the unit root. Namely, the ADF test with intercept and trend employing 0 lags, and the PP test with intercept and lag. Respectively, here and here.

    Tamino (the guy whom you demonstrably falsely accuse of being an amateur) also did some tests and found some more rejections.

    Still Not

    Hi Nick Stokes,

    I can imagine that it seems ‘bizarre’, since your interpretation seems to be going one in climate-science, but the statement holds :)

    Note the discussion about the DGP above. Those standard errors refer to our ‘estimate’ of the beta parameter (i.e. the slope) governing the trend-stationary DGP. Since trend-stationarity is violated in the presence of a unit root, the confidence intervals/standard errors are meaningless (because formal testing has indicated that this particular beta parameter ‘doesn’t exist’). Your question is however not trivial, cheers.

    you are just making up all that stuff. the vast majority of linear trends will not show errors in confidence intervals.

    your original reply to Bart (making the “just compare two years” claim) was also not based on errors. you just claimed superiority for a method that is obviously nonsense.

  928. VS Says:

    Bart, please moderate

  929. John Whitman Says:

    VS,

    I now concur with the last portion of your ‘VS Says: March 28, 2010 at 11:15’ comment.

    I was trying a somewhat indirect path toward engaging him, unsuccessfully. I will desist in the future.

    Once again, I find this stream of comments to be very enlightening. Thank you. If all this had happened while I was an undergraduate >40 yrs ago, I would ‘probably’ have been a statistician.

    Keep up the energy. Are you the energizer bunny of statistics? : )

    Bart, my thanks again for the intellectual venue. This stream of comments rivals and perhaps exceeds those early 20th century salons & coffee shops of Paris/Vienna where the leading intellects of the time gathered to hash out essential issues.

    In my humble opinion, the issues discussed here are essential.

    John

  930. JvdLaan Says:

    VS:
    You could have expected this. Bear this as a man and please do not react the way you do. If you are so sure of what you are telling us all, comments by Sod are minor then, or does he have some grain of truth? What is annoying you the most?

  931. VS Says:

    JvdLaan,

    Answer: effective crippling of an already difficult and complicated discussion.

  932. JvdLaan Says:

    VS: as I already (friendly!) advised: take care of the wax in your wings.

  933. Shub Niggurath Says:

    Can a deterministic process – heat exchange and equilibration across all climate interfaces, temporally and spatially – produce a stochastic output i.e., surface temperatures?

    In my mind, it seems to be possible.

    VS, apologies if this has been addressed before, but… you have used the last 140 years of thermometer-based data… is it possible that you would derive any different conclusion(s) if the reconstructed paleotemps from Esper et al 2002 were used as equivalent to the instrumental record?

    I assume it is the same data series that is seen as the red dashed line in Fig 6.13, although I cannot understand why a lot of series are abruptly truncated at 1000 AD.

    I ask this because in Fig 6.10 and 6.13, this data seems to show maximum variability in the millennial timescale and does not show ‘divergence’.

  934. Tony Says:

    Bart,

    I thank you, as our host, for providing the occasion for such a brilliant debate.

    And to VS, for providing such excellent demonstrations and responses. You must now have an inkling of why Faraday, Curie and the others liked to perform public lectures/demonstrations!

    Re your analysis, it would be very interesting if you did the same thing on a continuous unadjusted temperature record from a single location. Would it show the same ‘unit-root’ characteristic response?

    At least this would eliminate claims from the anti-Climatologists that the ‘unit root’ behaviour was artefactual; i.e. it was caused by step-changes/spikes from data adjustments/splicing, etc.

  935. Cartoons by Josh – The Auditor « Climate Audit Says:

    […] And of course, VS and his unit root groupies from the now famous Bart thread: […]

  936. Cartoons by Josh « Watts Up With That? Says:

    […] And of course, VS and his unit root groupies from the now famous Bart thread: […]

  937. Steve Fitzpatrick Says:

    VS,

    I would not worry so much about comments that are just shrill noise. Some commenters at any blog are immune to reason and do not want to learn anything (visit WUWT or RealClimate for a multitude of examples). I understand your frustration, but I have found it is not a productive use of time to engage these people.

  938. eduardo Says:

    Dear VS,
    yes, unfortunately we are talking past each other, but only in the last, essential bit. I will keep my answer short and to the essentials.
    You said one has to consider the DGP. I agree. You state that for past temperatures the DGP is stationary but not so for the 20th century data. Something has changed. Actually that is the point that climatologists are defending, and so you agree completely with us!
    Now, the real question is: why has the DGP changed? We say that it is because now we have a new forcing that was not present before. What is your explanation? That would be the exciting part and the part that could contradict the AGW hypothesis. That is exactly the point why I do not see that your analysis ‘debunks’ or compromises the science. Does your analysis contradict the notion that there is a new forcing in the 20th century? Rather the opposite: it confirms that now the DGP is non-stationary and before it was stationary. Please, could you state here your explanation for the lack of stationarity in the 20th century, so we could discuss further instead of moving around in circles?

    Following your example of the die: yes, we would conclude, as you do, that the die is loaded. In other words, we conclude, like you, that the 20th century data cannot be generated by a stationary process. The real question is not that it is not stationary (we all agree); the real question is why it is not. Until you address this question, the presence of a unit root does not compromise anything.

    On the Zorita et al. paper: thank you for pointing out the error. I am sure there are many more statistical errors in my papers. I will check it and I will learn something new. You would however agree that this error is not central, because we were exploring possible stationary DGPs (autoregressive, fractional-differencing). So it is not essential what exactly the DGP was and what exactly the parameters of the natural process are. Actually, the argument is that we cannot know, because the 20th century data could (let me use this nuanced expression) be contaminated. That’s why we explore a broad range of parameters and two DGPs. The review process should have pointed this out, and we would have applied another test, or simply deleted that part, and the rest of the paper would remain unchanged. Nevertheless, thank you very much for your correction and for reading the paper.

  939. VS Says:

    Hi Eduardo,

    Thank you for your reply. I will try to get back later to you with a more complete answer, but quickly now:

    “The real question is not that it is not stationary (we all agree), the real question is why it is not”

    Because the series simply doesn’t (I would perhaps even dare say, cannot) display the stationarity property (i.e. stable mean and variance), over such a small interval.

    Take another look at Figure 5 and Figure 6. Then, consider the properties of our instrumental record in light of the definition given above (i.e. our sample).

    I see no evidence to conclude that something ‘has changed’ in our modern record.

    Cheers. VS

  940. A C Osborn Says:

    Tony Says:
    March 28, 2010 at 15:53
    Re your analysis, it would be very interesting if you did the same thing on a continous unadjusted temperature record from a single location. Would it show the same ‘unit-root’ characteristic response?

    The answer is “Yes” according to another poster, after I asked the same question. See:

    # Tim Curtin Says:
    March 23, 2010 at 13:55

    One or two commenters here have asked whether the Beenstock & Reingewertz finding that “global temperature and solar irradiance
    are stationary in 1st differences, whereas greenhouse gas forcings (CO2, CH4 and N2O) are stationary in 2nd differences” are valid at a localised level (to get away from the successive averagings and griddings of GMT per GISStemp et al). I have done the ADF tests for January average temperatures and [CO2] at Indianapolis (picked at random from NOAA-NREL data) from 1960 to 2006, and find that the B-R statement is confirmed. What conclusions are to be drawn from this may be another matter!

  941. A C Osborn Says:

    Tony Says:
    March 28, 2010 at 15:53

    Of course VS could do a more thorough examination if they had the time.

  942. Mari Warcwm Says:

    Still at it?

    I have been out to do some gardening this afternoon, and I would like to point out that my ‘February Gold’ daffodils are flowering at last, at the end of March.

    My vote is with the deniers.

    Carry on your discussions, gentlemen.

  943. Tony Says:

    AC Osborn, thanks for the response. And yes, it is interesting to speculate on the consequences of these findings.

  944. Steve Fitzpatrick Says:

    VS and Eduardo,

    I think you are very close to understanding each other. The only issue appears to be the question of stationarity in the long term (Holocene) temperature record versus lack of stationarity in the instrument record. I think the basic question is how much data would be needed to establish stationarity, starting at any arbitrary point in the Holocene.

    If the Vostok ice core estimates are even reasonably representative of the global temperature trend, then any short (125 year) period in the Holocene would probably be non-stationary, even if the Holocene taken as a whole is stationary. Of course, how to relate the Vostok ice core temperature estimates for the Holocene to the global average temperature may not be so simple, since it seems likely (based on the respective glacial/interglacial temperature changes) that the Vostok temperature changes much more (3-4 times?) than the global average temperature.

  945. mpaul Says:

    Eduardo said: “You state that for past temperatures the DGP is stationary but not so for the 20th century data. Something has changed. … Now, the real question is: why has the DGP changed? We say that it is because now we have a new forcing that was not present before. What is your explanation?”

    It would be interesting to run a Benford analysis on the adjusted GISS temperature series for the period before 1935 and for the period after 1935 and compare the results.
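
    For what it’s worth, a minimal sketch of such a comparison (Python; giss_pre1935 and giss_post1935 are hypothetical arrays of anomalies, and whether Benford’s law should hold for this kind of data at all is itself an assumption):

        import numpy as np
        from scipy.stats import chisquare

        def first_digit(x):
            """Leading significant digit of a nonzero number."""
            x = abs(x)
            while x < 1:
                x *= 10.0
            while x >= 10:
                x /= 10.0
            return int(x)

        def benford_test(values):
            vals = [v for v in values if v != 0]
            counts = np.bincount([first_digit(v) for v in vals], minlength=10)[1:]
            expected = np.log10(1 + 1 / np.arange(1, 10)) * len(vals)
            return chisquare(counts, expected)

        # e.g. compare benford_test(giss_pre1935) with benford_test(giss_post1935)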

  946. DLM Says:

    Sod says: here is what Bart wrote above:

    How could taking the difference between two arbitrarily taken points say anything useful about what happened, in the presence of large amounts of variability? If you take 2009 or 2007 as your end year instead of 2008 the estimate would suddenly be more than 0.1 degrees (i.e. more than 15%) larger. Clearly, that is *not* a good way to estimate what the increase in global avg temp has been. Even though OLS is invalid to use for these dataseries, claiming that merely taking the difference between two arbitrary points is better seems a far stretch

    VS says: Bart, you said something about ‘estimating with only two data points’. There is no ‘estimation’ involved here, it’s simple arithmetic. You are simply answering the question ‘how much have temperatures risen over the period 1881-2008’.

    The only ‘confidence intervals’ that make sense in this context are the confidence intervals of the difference in these variables. These are relevant, if we are in fact estimating the actual temperature values, observe:

    Let,

    x = estimator of giss(1881)
    y = estimator of giss(2008)

    Now, let’s say that x follows a distribution, which is centered around the true parameter value of giss(1881) and y follows a distribution, which is centered around the true parameter value of giss(2008). Think of the ‘errors’ as unbiased measurement errors (and let’s assume that these measurement errors are independent of each other).

    Then, the difference, defined as (y-x) also follows a certain distribution. Now we can use that information to test the hypothesis H0: (y-x)=0, i.e. to see if there was a significant increase in temperatures. However, in all of the preceding we have implicitly assumed that the GISS series in fact represents the actual global-mean temperature, therefore, there is no uncertainty about the realized record, and we can simply state that the realized increase in temperatures is in fact equal to 0.43-(-0.2)=0.63. That’s it!

    So you have Bart saying: “Even though OLS is invalid to use for these dataseries, claiming that merely taking the difference between two arbitrary points is better seems a far stretch.” Now do you expect VS to say: OK, let’s just use your invalid stuff? VS is a statistician, not a magician. And he really scares you. Why is that? Jones et al. have done far more damage to the dogma than VS ever could.
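
    To spell out the test in VS’s quote (a sketch in LaTeX notation, under his stated assumption of independent, unbiased measurement errors):

        \[
          \operatorname{Var}(y - x) = \operatorname{Var}(x) + \operatorname{Var}(y),
          \qquad
          z = \frac{y - x}{\sqrt{\operatorname{Var}(x) + \operatorname{Var}(y)}},
        \]

    with H0: y − x = 0 rejected at the 5% level when |z| > 1.96. And if the GISS series is taken as the actual global-mean temperature, both variances are zero and the increase is simply the number 0.63.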

  947. HAS Says:

    Eduardo

    I’m not sure you as yet understand VS correctly. You say:

    “You state that for past temperatures the DGP is stationary but not so for the 20th century data. Something has changed. Actually that is the point that climatologists are defending, and so you agree completely with us!

    “Now, the real question is: why has the DGP changed? We say that it is because now we have a new forcing that was not present before. What is your explanation?”

    Steve Fitzpatrick I think has it right. The Stationary/Non-stationary argument as put by VS is simply about the time span in which one looks at the data. If you look at a small period in our current times you can’t see (or detect) the stationary component, but you can when you step back and look long-term.

    So nothing in the DGP has necessarily changed, it is just the difference between doing say an analysis at the nano-scale and at the human scale. In the former some forces that are important at the macro scale are safely ignored because they can’t be detected or are completely swamped by others, and vice versa.

    In fact VS is not agreeing with you on the need to assume a new forcing (aka a change in the DGP) in the late 20th Century. He is saying that, on the short-term data you can do perfectly well without it. As he says “I see no evidence to conclude that something ‘has changed’ in our modern record”.

    Having said all this, I’m not sure that I’d worry too much about whether this particular result “debunks” or otherwise the AGW hypothesis. What it does say however is that we need to be much more rigorous in both our hypothesis formation and its testing.

    The paradox of all this is that I’m very sure more rigorous statistical analysis will make some of the statements being made in climate science less certain, but that those that are made will be more so.

  948. Nir Says:

    VS said:

    The only ‘confidence intervals’ that make sense in this context are the confidence intervals of the difference in these variables. These are relevant, if we are in fact estimating the actual temperature values, observe:

    Let,

    x = estimator of giss(1881)
    y = estimator of giss(2008)

    Now, let’s say that x follows a distribution, which is centered around the true parameter value of giss(1881) and y follows a distribution, which is centered around the true parameter value of giss(2008). Think of the ‘errors’ as unbiased measurement errors (and let’s assume that these measurement errors are independent of each other).

    Then, the difference, defined as (y-x) also follows a certain distribution. Now we can use that information to test the hypothesis H0: (y-x)=0, i.e. to see if there was a significant increase in temperatures. However, in all of the preceding we have implicitly assumed that the GISS series in fact represents the actual global-mean temperature, therefore, there is no uncertainty about the realized record, and we can simply state that the realized increase in temperatures is in fact equal to 0.43-(-0.2)=0.63. That’s it!

    And I think that’s where a lot of the disagreement lies. If there is a trend (as most climatologists believe, and which the physics appears to support), then clearly what should be estimated isn’t the difference in temperature between 1880 and 2008, but instead the difference in temperature between 1880 and 2008 once weather noise is removed. This difference in temperature, less weather noise, would obviously be a reflection of the trend. Now my understanding is that VS (and some others) claims we cannot detect any trend due to the lack of data (or perhaps due to the absence of a trend, I’m not sure which of these two options is currently the favourite).

    Now I’m curious if perhaps what’s at fault (given the fact that the physics for the presence of the trend seems sound) is the fact that our statistical tools are too primitive to deal with this type of data at the moment…

  949. sod Says:

    # Tim Curtin Says:
    March 23, 2010 at 13:55

    One or two commenters here have asked whether the Beenstock & Reingewertz finding that “global temperature and solar irradiance
    are stationary in 1st differences, whereas greenhouse gas forcings (CO2, CH4 and N2O) are stationary in 2nd differences” are valid at a localised level (to get away from the successive averagings and griddings of GMT per GISStemp et al). I have done the ADF tests for January average temperatures and [CO2] at Indianapolis (picked at random from NOAA-NREL data) from 1960 to 2006, and find that the B-R statement is confirmed. What conclusions are to be drawn from this may be another matter!

    this is scary stuff. if VS and his friends here cared about climate science, they would seriously rethink their position and how they present it, after reading such nonsense.

    people with no understanding of the subject, using random statistical methods on random data, is a road to disaster. this is what comes out of what VS is doing here.

  950. Tony Says:

    sod,

    Do you fear a disaster for the climate, or for climate science?

  951. Listen to the Data Says:

    sod –

    What is scary is your comment:

    “…if VS and his friends here would care about climate science…”

    That is what is wrong with the AGW crowd: they “care” about climate science, not about what the data tell us. Science is supposed to be apolitical, non-partisan, and have no agenda, policy or otherwise. Gather the data, analyze the data, and let it tell you what it does, independent of the consequences.

    Your statements are dangerous in that they are always uttered by the “believers” as a way to suppress the “non-believers”.

    I urge you to open your eyes and your mind.

  952. John Says:

    Nir
    Your logic surrounding the increased CO2 seems flawed, i.e. that it must result in an observed increase in the global mean temp. The increase in CO2 causes an increase in energy in the system equal to 1.2C per doubling or whatever. There are numerous reasons why this need not reflect directly on the global mean temp; negative feedback or already-occurring cooling etc. spring to mind.
    B&R seem to suggest the CO2 caused a blip in temp that then returned to the normal variance; they also say there is no direct long-term correlation between CO2 and the global mean.

    Sod: can I take it that GISS is indeed random?!

  953. eduardo Says:

    Dear VS,

    I have problems identifying what your main claim is:

    -is it your main claim that you cannot detect anything unusual in the current observations compared to the past?

    One explanation is that maybe your method is not sensitive enough. However, the question is whether the observations are unusual compared to what they would have been without CO2. Econometrics example: ‘the present unemployment figures are normal because we had them in the 1930s already, so the recent banking crisis has nothing to do with the present unemployment’. The really interesting and causal comparison would be between unemployment with the banking crisis and unemployment without the banking crisis. You would retort that in the 1930s there were other factors acting. In the Vostok ice record there are a lot of other external factors as well. The Vostok ice core is mostly deterministically forced, not stochastic.

    -is your main claim that the presence of a unit root in the observations invalidates AGW? That is the one I do not see, and I would like an explicit and logical explanation of it. It would invalidate AGW if the simulated temperatures did not show a unit root. This is why we asked you to make the same analysis with the modeled temperatures. That would be the really interesting stuff.

    -is your main claim that we have to be careful with statistical analysis? I agree 100%.

  954. Tom Fuller Says:

    One question that I think is relevant at this point is, Are we investigating this because we believe that climate scientists in general or some papers specifically have not availed themselves of co-integration in their examination of the temperature records and greenhouse gases?

    VS makes a lot of sense in his explanations here. But are there real world examples of assertions that need to be re-examined as a result?

  955. Alex Heyworth Says:

    VS Says:
    March 28, 2010 at 17:53

    Take another look at Figure 5 and Figure 6. Then, consider the properties of our instrumental record in light of the definition given above (i.e. our sample).

    I see no evidence to conclude that something ‘has changed’ in our modern record.

    Good point. For those who are saying, the physics tells us … blah blah blah … I say, until you can explain the (pre-modern) blue line on these charts fully using physics, you really can’t say that you understand the reason(s) for the small recent temperature increase.

  956. HAS Says:

    Nir

    “…. Now my understanding is that VS (and some others) claims we cannot detect any trend due to the lack of data (or perhaps due to the absence of a trend, I’m not sure which of these two options is currently the favourite).

    “Now I’m curious if perhaps what’s at fault (given the fact that the physics for the presence of the trend seems sound) is the fact that our statistical tools are too primitive to deal with this type of data at the moment…”

    The simple point being made is that if there is a trend it simply cannot be confidently observed in this data set. It is primarily about the primitive nature of our dataset (and observational methods), not our statistical tools. So eduardo, I’m not sure you can say “One explanation is that maybe your method is not sensitive enough”, unless by “method” you are referring to “experiment” rather than “statistical tools”.

    There is a related point: no matter how much you believe there is a trend, if you can’t observe it with confidence then you can’t assume (or act as though) it exists while still saying you are an empiricist.

    This of course is not to say that you shouldn’t hypothesise that a trend exists in order to test that hypothesis, and more sophisticated experiments/better data may well show it exists. In fact a number of comments are making the point that increasing CO2 concentrations in the lab leads to temperatures rising, while others are arguing about how this translates into what happens in our atmosphere. This is all good stuff.

    But there is a more subtle point being made. These particular time series have characteristics that mean they can’t be analysed using commonly used statistical tools, particularly when you are considering the time scales we are trying to assess climate change over (centuries rather than millennia).

    If these characteristics are manifest more widely in climate related time series, and previous studies haven’t checked for them, then it is quite possible that the statistical inferences made are incorrect (and the evidence suggests that applying the wrong statistical tools will overstate the degree of confidence you have in those inferences).

    And yes, Tom Fuller, there have been a number of papers mentioned in this thread that seem to get into this problem.

  957. Selgovae Says:

    (Some comments from an interested amateur.)

    Eduardo, you ask of VS “is it your main claim that you cannot detect anything unusual in the current observations compared to the past?”

    My own understanding of VS is that the pattern of late 20th century temperatures can’t be distinguished from a random pattern. This doesn’t mean the pattern is not “unusual”. Almost by definition, any random pattern will be unusual.

    You also ask, “is your main claim that the presence of unit root in the observations invalidates AGW?”

    While it may “invalidate” claims for AGW based on current models, I don’t think anyone is saying that it *rules out* the idea that recent warmer temperatures may be caused by an increase in greenhouse gases. But to make the case for AGW, we’d need to produce a model that can more accurately predict global temperature than a model based simply on randomness. So my “random model” may predict that the temperature in any year will be the same as the temperature of the previous year plus or minus 0.25C. (Based on a quick eyeball of the maximum annual change in the GISS data) Our AGW model, at the very least, would have to make more accurate predictions than that. (And presumably also show the greenhouse gas contribution to the prediction.)
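
    As a minimal sketch of that benchmark (Python; k = 0.25 is the eyeballed maximum annual change mentioned above, not a fitted value):

        import numpy as np

        def persistence_coverage(series, k=0.25):
            """Fraction of years whose anomaly lies within +/- k of the previous
            year's anomaly, i.e. how often the naive 'random model' is right."""
            s = np.asarray(series, dtype=float)
            return np.mean(np.abs(s[1:] - s[:-1]) <= k)

        # An AGW model earns its keep only if its predictions beat this, e.g.
        # a narrower band with at least the same coverage.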

  958. DLM Says:

    sod,

    According to the very recently reported results of a poll commissioned by Der Spiegel:

    “Germans are losing their fear of climate change, according to a survey, with just 42 percent worried about global warming.”

    There was no mention of VS, or Beenstock & Reingewertz, as being among the suspects responsible for the recent precipitous decline in the number of believers. Guess which acronyms were mentioned? Your climate science is in need of an overhaul.

  959. Nick Stokes Says:

    HAS says:
    if the distribution isn’t normal your classic confidence intervals turn to custard
    Indeed, but this is generally true for parametric methods (well, assuming some distribution), and it seems extreme to banish them from statistics. My point is just that confidence limits on a trend have the same status as limits on means etc.

  960. Steve Fitzpatrick Says:

    eduardo,

    “However, the question is whether the observations are unusual compared to what they would have been without CO2.”

    For sure this is the most important question. But we have only one instrument temperature record, and even before 1935 this is not a record where we can be certain of the absence of a significant warming trend due to GHGs; neither can we be certain of the presence of a significant GHG driven warming trend prior to 1935. If I understand correctly, you are saying that you believe global temperatures prior to the instrument temperature record were stationary, and so the upward trend in temperatures can be correctly treated as a change in an otherwise stationary temperature series. Which of course means that you can more easily attribute temperature increases to radiative forcing.

    The Vostok ice core proxy record shows that there has been substantial variability in temperature near the south pole throughout the Holocene. If even a modest fraction of this variability (like 25%) showed up in the global average temperature during the Holocene, then the short term variability of the Vostok series during the Holocene suggests (to me at least) that randomly selected 125 year periods would quite often appear non-stationary in an instrument temperature record for those periods (if one existed!).

    So my question is, how do you support the assumption that the pre-instrument temperature was stationary for relatively short periods like 125 years?

    Perhaps examination of other temperature proxies for the Holocene (O18 in ocean sediments, for example) could shed some light on the “normal” variability of temperature during the Holocene, and clarify whether or not the Holocene temperature can reasonably be considered stationary over relatively short periods.

    By the way, many thanks for your very civil contributions to this thread.

  961. John Says:

    I’m just a humble electrician who has worked on control systems for 35 years. I just can’t understand the mantra “The Physics Tells Us This”.
    The physics is no doubt correct (although the world’s scrapyards are full of things that work in theory). The physics however cannot tell you the missing data that could be provided by all the (possibly unknown) variables of the climate control system of the Earth.
    So it is not beyond the realms of possibility that the trend is simply not there, or is so small as to be not detectable.

    A simple analogy.
    Your car is fitted with a data-logger that looks at the coolant temp, the rpm, the road speed and the injector opening times. You drive it along a straight road at 50mph with coolant temp normal; you then apply the handbrake a couple of notches but increase the throttle to maintain 50mph. The coolant temp will rise till the fan cuts in, then drop back to normal and then cycle with the fan stat; you then take the handbrake off and maintain 50mph. Then you ask someone to analyze the data, which will tell them that the temp has gone up and wiggled about, and that the injectors have been open longer (more load = more fuel), so the physics tells them you have used more energy. But they could hypothesize for ever and the data will not tell them why you used more energy to go the same speed!! They do not have the correct data/inputs to solve it, but the physics is correct.

  962. dhogaza Says:

    John…

    B&R seem to suggest the CO2 caused a blip in temp that then returned to the normal variance; they also say there is no direct long-term correlation between CO2 and the global mean.

    If this were true, of course, we’d be experiencing snowball earth …

    CO2’s warming of the planet isn’t directly tied to AGW. CO2 warms the planet, and did so long before we started to pour CO2 into the atmosphere through the burning of fossil fuel. CO2 (and other GHG) forced warming and associated feedbacks are the only reason the planet is habitable by life as we know it in the first place.

    Anthropogenic CO2 simply *adds* to the CO2-forced warming that’s already there and has been since the first CO2 molecule entered the atmosphere.

    Again, it appears that many of you have absolutely no idea (or no respect for) the huge amount of known science that is “invalidated” by B&R.

  963. Bob Koss Says:

    delurking…

    Can a temperature record such as the Vostok ice core reliably be determined to be stationary or non-stationary? It isn’t like instrument records where you have accurate dating.

    According to http://en.wikipedia.org/wiki/Ice_core
    “Dating is a difficult task. Five different dating methods have been used for Vostok cores, with differences such as 300 years at 100 m depth, 600yr at 200 m, 7000yr at 400 m, 5000yr at 800 m, 6000yr at 1600 m, and 5000yr at 1934 m.”

    Deep in the core they model accumulation rate and ice flow to determine age.

    relurking…

  964. John Says:

    dhogaza

    Granted there has been a huge amount of science, but it’s still a travesty that we don’t know why the warming has slowed.
    Of course I have respect for scientists, well, ones that deserve it (not sure about tree ring reading though). It does, however, occur to me that if they are correct they cannot be “invalidated” by B&R or indeed anyone.
    At 56 I have spent most of my life listening to dire consequences from scientists, none of which have come to fruition.

    The climate system appears to me to be fairly stable considering the mass of the planet.

    Also I find it hard to believe that a temp rise of 0.6 C over a hundred years, within a span of -50 to +50 (less than 1%), is in any way a sign that the gain involved in the loop could push it out of control.

  965. Frank Says:

    dhogaza says:

    “If this were true, of course, we’d be experiencing snowball earth …”

    and

    “…it appears that many of you have absolutely no idea (or no respect for) the huge amount of known science that is “invalidated” by B&R.”

    Only if CO2 is the only possible mechanism governing long-term climate effects. Fortunately, there are other hypotheses undergoing testing, including the effect of GCRs on clouds, which can explain a number of phenomena across the entirety of geologic time (e.g., snowball earth(s), faint sun paradox, ice-/hot-house climate regimes, Bond events, etc.). The main problem I have with AGW is how often its proponents have to slide a long way down Occam’s Razor to “square” CO2’s effects with the paleo record.

  966. Shub Niggurath Says:

    dhogaza
    Would you agree that the dynamical nature of the climatic temperature is preserved at present-day CO2 concentrations given the decade-long pause in warming?

  967. sod Says:

    dhogaza
    Would you agree that the dynamical nature of the climatic temperature is preserved at present-day CO2 concentrations given the decade-long pause in warming?

    there is no decade-long pause in warming.

    using the new method invented by VS to figure out the warming over the last 10 years, we get a pretty massive 0.2°C (actually nearly 0.25°C) difference between 2009 and 1999 or 2000.

    http://www.woodfortrees.org/data/gistemp/from:1999/compress:12

  968. HAS Says:

    Apologies Nick Stokes (and VS)

    I had been assuming (as perhaps you were) that the conversation was still about what goes wrong if you incorrectly assume the series is not I(1). In fact, on a closer read, the argument is I think that because the series is I(1) there is no constant slope, and it is therefore nonsense to try and estimate one statistically, and therefore forget confidence limits et al.

    I was helped by Wikipedia’s entry on Linear Least Squares which says in part:

    “If the experimental errors are uncorrelated, have a mean of zero and a constant variance, the Gauss-Markov theorem states that the least-squares estimator has the minimum variance of all estimators that are linear combinations of the observations. In this sense it is the best, or optimal, estimator of the parameters. Note particularly that this property is independent of the statistical distribution function of the errors. In other words, the distribution function of the errors need not be a normal distribution. However, for some probability distributions, there is no guarantee that the least-squares solution is even possible given the observations; still, in such cases it is the best estimator that is both linear and unbiased.”

    So what I think VS is saying is: an estimate of the slope doesn’t exist, but if you must have one the Least Squares estimate has some value (and is adding: don’t kid yourself that you are doing something statistical here).

    For my part I think this is all a bit of a diversion. A line from the start point to the end point also has some value, and we could argue until the cows come home about which one best meets our particular needs (and it will be different for all of us).

    The key issue here is the statistical statements about the nature of the time series, and their implications.
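
    To make the “don’t kid yourself” caveat concrete, here is a minimal Python sketch (numpy and statsmodels; the trend, noise level and lag settings are invented for illustration, not estimates from the temperature record). When the residuals are strongly autocorrelated, the naive OLS standard error of the slope is far too small, even though the slope estimate itself is unbiased:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 130                                  # a record roughly the length of the instrumental series

        # Hypothetical "temperature": a modest trend plus strongly autocorrelated AR(1) noise
        e = np.zeros(n)
        for t in range(1, n):
            e[t] = 0.9 * e[t - 1] + rng.normal(scale=0.1)
        y = 0.005 * np.arange(n) + e

        X = sm.add_constant(np.arange(n, dtype=float))
        naive = sm.OLS(y, X).fit()                                           # assumes iid errors
        robust = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 10})  # autocorrelation-robust

        print(naive.bse[1])   # naive standard error of the slope: misleadingly small
        print(robust.bse[1])  # HAC standard error: typically several times larger

    The slope estimate is identical in both fits; only the claimed precision changes.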

  969. DLM Says:

    We are impressed, sod. You better tell the famous climate scientist, Phil Jones, PhD, about that massive recent warming. He is spreading the lie that there has been no statistically significant warming for the past 15 years. Has Exxon gotten to him too? Has VS gotten to him?

  970. GrantB Says:

    Sod,
    You are foundering. VS is saying an estimate of the slope is not a valid statistical procedure here. He has not “invented” a start point to end point line as a trend estimator. He is saying that it is just as good as anything else, namely no good at all.

    However, look on the bright side. You can now refute those who say there has been no warming at all since 2002 for exactly the same reasons.

  971. mikep Says:

    I think Sod does not recognise that different statistics are answers to different questions. Let me use an analogy with a traveller across Europe. You can ask how far he now is from London, which he left three weeks ago. To answer that, all you need to know is where London is and where he is now: two data points only. This is the exact analogue of the question of how much the temperature has gone up since 1880. Of course there may be some further uncertainty, because we don’t know to within a few metres where he is now, or exactly where in London he started three weeks ago, but this is a side issue for VS.

    Then there is the question of whether, given that we know the traveller’s daily itinerary, we can predict where he will be tomorrow or at the end of next week. This is the analogue of the question to which the OLS estimate of trend purports to give an answer. What VS shows is that the method gives a very bad answer, because examining the data we find that it has a stochastic trend, not a deterministic trend. In the simplest case of a stochastic trend, the random walk, our best prediction of where the traveller will be tomorrow is where he is today, plus a confidence interval around that based on his previous daily travel pattern. Temperature follows a more complex stochastic trend, but the idea is the same.

    Other questions are possible, e.g. how fast on average does he travel, which in turn depends on what we think direction is and which average we want to use. But for the simple question of how far the traveller is from some arbitrary starting point (we could have chosen Amsterdam two weeks ago), the answer VS gave is a better one than the OLS estimate, and the OLS estimate does not give a good answer to the question of where he will be tomorrow either.
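
    The traveller analogy is easy to simulate. A minimal sketch, assuming the simplest stochastic trend, a driftless random walk (generated with numpy, tested with statsmodels): an OLS “trend” fitted to a single realization frequently comes out nominally significant, while the best forecast remains the last observed value, with a band that widens like the square root of the horizon:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)
        steps = rng.normal(size=200)
        walk = np.cumsum(steps)   # today's position = yesterday's position + a random step

        # An OLS "trend" fitted to one realization of a driftless random walk
        X = sm.add_constant(np.arange(walk.size, dtype=float))
        fit = sm.OLS(walk, X).fit()
        print(fit.params[1], fit.pvalues[1])  # the slope is often nominally "significant" by pure chance

        # Best forecast for a random walk: the last observed value, with a widening band
        h = np.arange(1, 21)                            # forecast horizon in steps
        point_forecast = np.full(h.size, walk[-1])
        band95 = 1.96 * steps.std(ddof=1) * np.sqrt(h)  # uncertainty grows like sqrt(horizon)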

  972. cohenite Says:

    A great thread; VS has built on what David Stockwell has done at his site on the subject of cointegration and the statistical fallacy of the CO2/temperature correlation; pity about dhogaza and his anger, and sod, who is a spoiler from way back; even eli has had to pull up his socks, or are they pressure bandages, poor old dear that he is.

    One thing that has been overlooked is the division of global temperature into an effective component of 255K and a greenhouse component of 33K; so when dhogaza says:

    “Anthropogenic CO2 simply *adds* to the CO2-forced warming that’s already there and has been since the first CO2 molecule entered the atmosphere.”

    He is only talking about the 33K; how much of that is due to CO2? Only the first 100 ppm; every subsequent CO2 increment is subject to Beer-Lambert and exponential decline; the wing-expansion argument to counter saturation relies on pressures and temperatures which are not present and have never been present on Earth. In short CO2 is a shot goose, which is why the B&R paper deals with changes in CO2 and temperature, not final quantities; there is no further correlation between total CO2 and temperature because the effect is near asymptotic; and certainly negative to the tangent when the main greenhouse gas, water, responds to changes in CO2, which, apart from evaporation and its Miskolczian path, convectively nullifies any lingering CO2 heating.

  973. Eli Rabett Says:

    cohenite, good to see you. You probably forgot that the temperature and pressure decline with altitude in the troposphere, which means that your statement about pressure and temperature having no effect is silly

  974. cohenite Says:

    Effect to mitigate saturation, eli.

  975. Tim Curtin Says:

    Eli Rabett said March 29, 2010 at 10:59
    “cohenite, good to see you, you probably forgot that the temperature and pressure decline with altitude in the troposphere”. Is that why they are so good at raising global surface temperatures?

  976. philc Says:

    RE the inline reply to VS 3/25 12:22
    The point of VS’ presentation, which you seem to miss, is this: graph 1, taken at face value, would lead one to believe that the temperature data shows an anomalous change, since it goes outside the expected error interval. But because the graph was mis-specified, the “anomaly” is simply an error. Graph 2, using the correct specification, shows that the data is properly all within the expected range, so the projected range of future temperatures is a correct forecast. Granted, Graph 2 might be a less precise forecast than one would like (“anything goes, as long as it doesn’t deviate too much from preceding values”), but it is an accurate statistical model of the data.

    Regarding the model projections (http://www.ipcc.ch/graphics/ar4-wg1/jpg/fig-9-5.jpg): they look pretty good until you get to the details. First off, the yellow munge represents 14 models and 58 different simulations. The key question to me would be: does any SINGLE model, starting with the estimated state in 1900, produce a 5-yr running average that comes close to the 5-yr running average of the data? With a correct model you only need one, not 14 contradictory ones. Using 14 simply means that there is no agreement on what the important model parameters and design are, not that they provide useful information. The second point: using the 5-yr model average, they do appear to follow the three volcanic perturbations included. The question then arises, what about the 7-8 other similar sharp temperature drops that were not tracked? Where were those volcanoes? Or what else is going on that isn’t modeled?
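
    The contrast between the two specifications in the comment above can be sketched numerically. Assuming, purely for illustration, that the data are generated as a driftless random walk: the trend-stationary specification yields a constant-width 95% band that such data eventually wander out of, while the I(1) specification yields a band that widens with the square root of elapsed time:

        import numpy as np

        rng = np.random.default_rng(1)
        y = np.cumsum(rng.normal(size=120))   # data generated as a driftless random walk
        t = np.arange(y.size)

        # "Graph 1": trend-stationary specification, OLS line plus a constant-width 95% band
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (intercept + slope * t)
        ts_half_width = 1.96 * resid.std(ddof=2)          # same width at every date

        # "Graph 2": I(1) specification, band around the starting value, widening with sqrt(time)
        i1_half_width = 1.96 * np.diff(y).std(ddof=1) * np.sqrt(np.maximum(t, 1))

        print(ts_half_width, i1_half_width[-1])  # the I(1) band is far wider at the end of the sample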

  977. philc Says:

    On the recurring theme that there have been an abnormally high number of years with record high temperatures in the last 10 years (from the RC blog: “It is remarkable statistically that the 13 (now 14) warmest years in the modern record have all occurred since 1990.”): The global temperature estimates for the last 10 years all appear to show a divergence from the apparent trend of 1975-2000. So pointing out that the estimates in the last 10 yrs are all record highs is a way to “hide the divergence”. If the temps continue to bounce around the current 5-yr average, they will continue to be “record highs” even though they wouldn’t necessarily be increasing much at all.

    Secondly, the actual temperatures appear to have been rising steadily since 1850 or so. So I would EXPECT the last few years to have record high temps compared to 1850 or 1900, or almost any time during that period. The records wouldn’t show gradually rising temperatures if the highest values weren’t recent!! Duh!!

  978. Eli Rabett Says:

    The models are good enough for private enterprise work Tim, certainly a lot better than the ones the bankrupt banks used.

  979. ge0050 Says:

    >>
    sod Says:
    March 28, 2010 at 07:40
    this is how the false and useless claims made by VS will be spun by “sceptics”. (also note the completely false ideas about CO2 not being well mixed by Tom Fuller, mentioned in this context)
    <<

    Sod, I'm surprised by your comment. Good science is skeptical science. It is what separates science from religion. Science allows new theories to come forward as new information is discovered.

    Where is the science that shows "Global Average Temperature" is normally distributed and thus suitable for naive statistical analysis? Without first establishing the probability distribution for "Global Average Temperature" how can anyone reasonably argue how much variability is "normal"?

    For example, on average the global temperature varies about 10 degrees C between day and night, and about 20 degrees C between summer and winter. Some places the variability is less, some places it is much more.

    The total warming since the "Little Ice Age" is about 1 degree C – just a small fraction of the variability from day to day and season to season. Only when you use a "Global Average Temperature" does this look significant. When you graph the historical changes along with the daily and seasonal variation, the rise since the Little Ice Age is not obvious.

    Thus, in order to detect a trend, some scientists used the "Global Average Temperature". However, in doing so the averaging process masked the natural variability in the data. This is the crux of the problem: creating an average masked the variability.

    What VS is explaining is that in the process of creating a "Global Average Temperature", the scientists overlooked a rigorous examination of the resulting value. They assumed a probability distribution for "Global Average Temperature". They did not test to see if "Global Average Temperature" was statistically suitable for the type of statistics they were using.

    What VS is explaining is that just such a test exists. It was created for economic forecasting and has withstood rigorous examination. Further, when such a test is applied, "Global Average Temperature" turns out not to be suitable for the types of statistical analysis that were performed. Thus, conclusions based on these statistics are in doubt.

    The primary conclusion challenged by these findings is that CO2 is the primary climate driver. Rather, what VS is explaining is that the findings suggest that the sun is the primary climate driver.

    Whenever you compute an average you need to be careful when applying the result, because it can give misleading results. For example, if you have one foot in the freezer and the other in the oven, you are on average statistically comfortable.

  980. Pofarmer Says:

    The models are good enough for private enterprise work Tim,

    Such as?

  981. Trevor Says:

    I haven’t read all of the nearly 1000 comments here, but something strikes me as funny in the earlier posts. Bart (and Tamino vicariously through Bart) seem to be using the fact that “temperature is bounded” as their chief statistical argument against VS’s “random walk” theory of temperature change. I think VS deflected this counter-argument quite effectively, but if you think about it a minute, if the alarmists are now ADMITTING that “temperature is bounded”, doesn’t that mean that “runaway global warming” is IMPOSSIBLE? Furthermore, if temperature is indeed bounded, aren’t we pretty damned close to the upper bound already?

    Also, the alarmists seem to be making a lot of the fact that the stratosphere is cooling while the troposphere is warming, as is predicted by AGW theory. But AGW theory also says that the upper troposphere should be warming more than the surface. And that’s not happening. The surface is warming more. Weather balloons confirm this. One of the two satellite records confirms this. The other satellite record DID confirm this, until NASA GISS “adjusted” it. And guess who is the head of NASA GISS. James Hansen. Al Gore’s scientific advisor.

  982. Shub Niggurath Says:

    “The models are good enough for private enterprise work Tim, certainly a lot better than the ones the bankrupt banks used.”

    Many of these private enterprises would have been better served if they had listened to the proprietary models of Joe Bastardi or Piers Corbyn last winter. I wonder which models you are talking about.

    The ‘bankrupt’ bankers got hefty bonuses/packages nonetheless so maybe that is what they modelled. :)

  983. Eric Says:

    Language such as “alarmists” and “deniers” is way beneath this thread. It is not constructive, it is not interesting, and it is not new.

    Please take your agendas elsewhere – you know who you are. If you don’t have anything to move this conversation forward please exercise some self control and refrain from posting. Especially please refrain from posting over and over again.

  984. GDY Says:

    Eduardo, thank you for your contributions here! I have learned much from your dialogue with VS and others. And thank you as well for your larger endeavors to understand better the world we live in.

    VS – time to get on with the covariate analysis?

  985. Marco Says:

    @Trevor:
    You might want to get the facts right. NASA GISS has nothing (as in “nothing”) to do with the two available satellite records. Those are from UAH and from RSS, an independent organisation.

    Also, Jim Hansen (whom you tried to link to RSS and supposed adjustments in that record; please provide evidence for that, since so far UAH is the one continuously having to modify its procedures) has already indicated that a Venus-like runaway effect is highly unlikely for earth. Nothing new there. Unfortunately, even a ‘mere’ ten degrees extra is devastating for modern human society.

    Of course, that the upper troposphere is warming faster than the surface *is* being seen through proper data analysis, removing non-climatic signals.

  986. Pofarmer Says:

    “Of course, that the upper troposphere is warming faster than the surface *is* being seen through proper data analysis, removing non-climatic signals.”

    Now, hold on. If the surface temperature record isn’t being handled properly (just for argument’s sake), why should we think the other records are?

  987. ge0050 Says:

    [Reply: The rate of increase in GISS temps over the last 10-12 years are of a similar rate as that of the past 25 years. … BV]

    If we look at the graph at the start of this blog and apply OLS, it looks like the past 10-12 years are a LOT more like 1940-1980 than like the period 1980-2000.

    Based on that I would conclude that for the next 20-30 years temperatures will remain about as they have been for the past 10-12 years. Assuming you can use OLS to predict Average Global Temperature.

  988. Crusty the Clown Says:

    Bart –

    Many, many thanks for hosting this thread and moderating it in such a reasonable and judicious manner. It would appear that genuine communication is occurring and new ideas are being heard and discussed.

    I had to laugh the other day when I realized that the discussions here were reminiscent of an extended conversation I had some twenty years or so ago with a metamorphic petrologist about his electron microprobe results from a core-to-matrix traverse of a garnet. He was plotting the concentrations of minor elements along the traverse without considering the underlying statistics. I pointed out that the variation in minor element corrected counts all fell within two sigma of the mean and therefore a single traverse had no statistical significance. His counter argument was that he understood that the Poisson statistics described the worst-case scenario and that he could get better results than that, and, besides, no one published an error analysis of their microprobe results anyway. I was initially concerned that I had missed something, but after conferring with standard textbooks, a seasoned x-ray fluorescence analyst, and a statistics professor the correctness of my opinion was confirmed. However, my colleague was unable to accept that his conclusion was wrong. Hope springs eternal and he discerned a trend, so he ignored my advice and published a paper which, while perhaps not entirely wrong, would have been a much better paper had it included a proper statistical analysis of the inherent errors.

    I seem to see the same attitude in many of the responses in this thread. When VS demonstrates the stochastic nature of the instrumental temperature record there are some who wish to sweep it under the rug and simply ignore it because it is inconvenient for them and it undercuts an hypothesis they are fond of. Unfortunately, this is not the way science makes its progress towards a fuller understanding of the mysterious workings of the universe. Theories are constantly revised or even abandoned based on new data or new analytical techniques. I believe it was Huxley who said “The great tragedy of science is the slaying of a beautiful hypothesis by an ugly fact.”

    VS –

    Brown’s Law informs us that “No good deed ever goes unpunished.” Please consider that when you are confronted with trolls. In my humble opinion you have performed a great service to all of science (and not just climatology) with your careful, meticulous explanations of unit roots, stochastic processes, and the proper statistical tools with which to evaluate them. I fear that interdisciplinary discussions such as the one occurring here are the exception rather than the rule, and all of science suffers when that is the case.

    Keep up the good work, gentlemen.

  989. ge0050 Says:

    Looking at this data from Wikipedia, it certainly looks like current temperatures are well within the range of natural variation, and have been, if anything, unusually stable over the past 10 thousand years.

  990. RusQ Says:

    Eduardo,
    your reference to unemployment in 1930 and now(ish) is interesting. I was not around personally, but can fully appreciate the catastrophic future outlook that prevailed at the time (and, probably, the likes of sod’s heartfelt concern for the ‘unwashed masses’ and their potentially corrupted perceptions going forward; that may have been important to some then too).

    Looking now at time series representations of various economic proxies, ‘blips’ like those (there were several subsequent, and earlier also) reveal a mass of information regarding ‘forcings’ prevalent in the short term, and yet are exceptionally weak in predicting economic activity 100 yrs forward.

    It is an analogy. The time series may well have statistically different properties (and I do not ask VS to prove this :))

    Also, speed reading a top quality thread (my thanks to all), several references to the bumblebee’s flight problem caught my attention for their incongruity. Did not the best science (fundamental, physics based) of the day postulate grounding the poor bee? Somehow the (repeat) poster used this as evidence against what is? I must be confused.

    Go well, all

  991. eduardo Says:

    @ Steve

    Steve wrote
    ‘So my question is, how do you support the assumption that the pre-instrument temperature was stationary for relatively short periods like 125 years? ‘

    Dear Steve, a data generating process, DGP in VS terminology, is either stationary or not. It is not meaningful to say that it is non-stationary for a short period of time. The process is defined by its structure and parameters, not by one realization.

  992. eduardo Says:

    I suggest to you the following exercise:

    we have detected that the New York daily temperatures measured from January to May 2009 contain a unit root.

    Which is your favorite conclusion among this mutually exclusive and complete set?

    1- There is an external factor causing this behaviour
    2- There is no external factor causing this behaviour
    3- We cannot say anything
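
    The sting in this exercise can be reproduced numerically. A sketch, assuming a purely deterministic annual cycle plus stationary AR(1) “weather” noise (all parameter values invented for illustration): over a short January-May window, the ADF test frequently fails to reject a unit root even though the true data generating process contains none:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        days, trials, fails = 150, 200, 0      # roughly January through May
        for seed in range(trials):
            rng = np.random.default_rng(seed)
            u = np.zeros(days)
            for t in range(1, days):           # persistent "weather" noise, stationary by construction
                u[t] = 0.9 * u[t - 1] + rng.normal(scale=1.5)
            cycle = 15 * np.sin(2 * np.pi * (np.arange(days) - 105) / 365.25)  # deterministic seasonal ramp
            if adfuller(cycle + u, regression="ct")[1] > 0.05:
                fails += 1

        print(fails / trials)  # a sizeable fraction: the unit-root null frequently survives

    “The test finds a unit root” and “the process is generated by a unit root” are different statements; the three options above ask what, if anything, bridges them.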

  993. sod Says:

    If we look at the graph at the start of this blog and apply OLS, it looks like the past 10-12 years are a LOT more like 1940-1980 than they are for the period 1980-2000.

    Based on that I would conclude that for the next 20-30 years temperatures will remain about as they have been for the past 10-12 years. Assuming you can use OLS to predict Average Global Temperature.

    while VS is using an obscure claim to contradict the perfectly fine use of trend lines by real climate scientists in real scientific papers, his “sceptic” followers are making completely absurd “statistical” claims, based on the most absurd possible interpretation of what he writes.

    “sceptics” will refer to the VS claims for a couple of years. I already see the posts in front of me: “unit root proves that CO2 is not increasing”.
    “liverpool station shows sinking temperature. this is caused by the unit root”.
    “the unit root shows that the hockeystick is broken”.

    “sceptics” are not, like scientists, restricted by the need for a coherent theory. They can claim that “unit root” nullifies all linear trends on climate data, but keep posting false linear trend claims like “temperature shows cooling over the last decade”.

    This, again, is the reason why I don’t think that such a debate makes any sense. Look at all the abuses and misinterpretations of the VS claims under this topic alone. See how the sceptic “errors” basically never get corrected. Confusion is their target, nothing else.

  994. ge0050 Says:

    For those like myself that find a picture easier to understand than words, here is diagram (WP) that demonstrates unit root:

    http://en.wikipedia.org/wiki/Unit_root#Unit_root_hypothesis

    Looking at the temperature data at the start of this page, it certainly matches the visual explanation of unit root in WP. In point of fact, the graph in WP looks very much like the temperature record from 1910-2000. And if you look at the dashed projection line in WP, it is clear why linear regression may provide misleading results when there is a unit root.

    What I find most interesting about this discussion is that this problem can be both detected and overcome mathematically. If I’m understanding the math correctly, you use the difference in temperature (the discrete slope of the series) rather than the temperature itself to eliminate the unit root (motion). This new value can then be used for OLS.

    In the case of CO2 and GHG in general, I’m guessing here, but I assume this is I(2) because its effect on temperature is logarithmic. You need to take the first difference to eliminate the acceleration, and the second difference to eliminate the motion. Hopefully someone can correct me if I’m off track. Solar variance, however, remains I(1) because it directly drives temperature. In the context of unit roots, it has motion but no acceleration.

    What I understand from all this is that when you convert all the data to I(0) so that it can be used with OLS, the confidence level for solar activity driving temperature is high, while the confidence level for GHG driving temperature is low.

    I find the mathematics behind this argument compelling.
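
    The differencing mechanics described above can be checked in a few lines (whether CO2 is really best treated as I(2) is a conjecture in the comment, and nothing in this sketch settles it). Assuming synthetic series built by integrating iid shocks:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        i1 = np.cumsum(rng.normal(size=500))   # I(1): iid shocks integrated once
        i2 = np.cumsum(i1)                     # I(2): integrated a second time

        print(adfuller(i1)[1])                 # large p-value: cannot reject a unit root in the level
        print(adfuller(np.diff(i1))[1])        # tiny p-value: the first difference is I(0)
        print(adfuller(np.diff(i2, n=2))[1])   # the I(2) series must be differenced twice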

  995. DLM Says:

    sod says: “while VS is using an obscure claim, to contradict the perfectly fine use of trend lines by real climate scientists in real scientific papers…blah…blah…blah”

    Bart, you seem to have left the building. Sod is trying to uphold your side of the discussion here, but I think most observers would agree that he is doing very poorly. Would you agree with sod’s characterization of what VS is doing?

  996. sod Says:

    For those like myself that find a picture easier to understand than words, here is diagram (WP) that demonstrates unit root:

    http://en.wikipedia.org/wiki/Unit_root#Unit_root_hypothesis

    Looking at the temperature data at the start of this page, it certainly looks like the visual explanation of unit root in WP. In point of fact, the graph in WP looks very much like the temperature record from 1910-2000. And if you look at the dashed project line in the WP it is clear why linear regression may provide misleading results when there is a unit root.

    did you read the article?

    for example this part:

    Economists debate whether various economic statistics, especially output, have a unit root or are trend stationary.

    looks like some people posting here have jumped to a conclusion right after the start of this debate on temperatures?

    Bart, you seem to have left the building. Sod is trying to uphold your side of the discussion here, but I think most observers would agree that he is doing very poorly. Would you agree with sod’s characterization of what VS is doing?

    I am really sorry I don’t live up to your expectations.

    As I noticed, you did not find anything wrong with the comment I was quoting. So this claim is something that you agree with?

    Based on that I would conclude that for the next 20-30 years temperatures will remain about as they have been for the past 10-12 years.

    I will try hard to match this level of statistical understanding in my future posts!

  997. Kweenie Says:

    sod, why don’t you leave the discussion to real scientists like Eduardo and Bart? Your messages only make things worse for your side.

    “Bart, you seem to have left the building.” See this thread: https://ourchangingclimate.wordpress.com/2010/03/28/slow-moderation/

  998. Paul H Says:

    VS and Bart.

    I’m finding this discussion interesting, intriguing and quite stimulating. I’m not from a statistical background; I work in atmospheric science, but I’m finding this idea of a random walk in the temperature series quite interesting.

    I’ve read Tamino’s responses to VS’s criticisms and I’ll admit that I have no way of telling who’s right on this issue. From my reading of the references you provided, it would appear that you’re correct. There is something in what I’ve been reading about the random walk hypothesis that caught my eye, though: this idea that a perturbation to a time series with a unit root need never recover back to the linear trend. In the case of economics this could relate to GDP (see the wiki on unit roots). There is an argument from some economists that data relating to GDP contain unit roots, which means in real terms that decreased GDP experienced now will affect productivity from now until forever. You can essentially never get back to your original economic trajectory, because your present economic woes set you back irretrievably. That sounds plausible. From that more intuitive explanation, I can imagine certain instances where I might expect the Earth’s temperature to behave as a random walk. For the record, I completely accept Bart’s simple energy balance model up to a point, but I do think it needs some caveats that may lead to the temperature series behaving as a random walk.

    Let’s assume that we have two Earths: one that has increased GHG concentrations in its atmosphere in line with the real Earth (Earth 1), and one that has had increased GHG concentrations like the real Earth but has also had a prolonged period of SO2 emissions during the middle part of its industrial history (Earth 2). Let’s also assume that these fictional Earths have no volcanic activity, no solar variability, and no other forcings affecting their climate. For Earth 1, we see increases in temperature with upward trends as close to monotonic as we might expect: lots of variability, but with what visually appears to be a stationary trend. For Earth 2, we see an initial increase in temperature, then a decreasing trend in temperature during the period of SO2 emissions, and then a recovery of the original trend and an increase in temperature as SO2 emissions decrease. We might expect to see increases in temperature on Earth 2 similar to what we see on the real Earth.

    In terms of Earth 2’s energy balance and its energy budget, what just happened? And how does that relate to random walk hypotheses and the earlier analogy regarding GDP? One of the big climate drivers on our Earths, real and fictional, is the oceans, which, from my understanding, are considered to be the single biggest thing preventing the Earth from reaching radiative equilibrium on short timescales, i.e. approximately a decade if there were no oceans. In real and fictional worlds, 80-90% of the radiative forcing, positive or negative, is absorbed/emitted into/from the oceans, and in essence we are waiting for the oceans to catch up with that radiative imbalance before we reach radiative equilibrium. I think this is an important point.

    Earth 2 has experienced a decreased, and in fact negative, radiative forcing during the period of SO2 emissions. Therefore, its oceans will have begun radiating that energy back to space. Its oceans, at the end of the period of SO2 emissions, will contain less energy. Given that, AFAIK, there are constrained limits to the rate of absorption of heat by the oceans, limited by ocean mixing depth, rates of mixed-layer overturning, and deeper ocean circulation, it seems that Earth 2 would have a hard time recovering to the temperature trajectory of Earth 1. In essence, Earth 2 now has a temperature trajectory with a unit root. First off, does that make sense physically to Bart, and does that make any sense in the realms of statistics (to VS)?

    What am I saying? I think that the Earth’s temperature series do have a unit root, but there are reasonable explanations consistent with our physical understanding for why this might be the case.

    From my reading of Kaufmann and Stern (1999), my explanation isn’t too far fetched. Kaufmann and Stern only get a decent model when they account for a cooling aerosol effect in the Northern Hemisphere. Preliminarily, it appears that it is statistical studies that exclude genuine physical causes of unit root behavior in time series that reach the wrong conclusions. I would be intrigued by VS’s opinion on this matter. Perhaps analyses based on periods of time where we suspect a dominant positive or negative forcing, with no breaks to forcings of the opposite sign, could prove or disprove this idea. Perhaps VS could perform this analysis on the period 1978-1992. Short, I know, but if you go beyond 1992 you include the effects of Pinatubo.

  999. ge0050 Says:

    If we difference the trend lines used by real climate scientists in real scientific papers to arrive at I(0), we are left with the identity “real contribution = all”. From there we can calculate “sod real contribution = sod all”.

  1000. xcyle Says:

    Sod wrote: “this, again, is the reason why i don t think that such a debate makes any sense. look at all the abuses and misinterpretations of the VS claims under this topic alone. see how the sceptic “errors” basically never get corrected. confusion is their target, nothing else.”

    This is the kind of bunker mentality that has led to the complete collapse of public confidence in climate science. The thinking goes: “do not admit to even the most minor error because it will be seen as a PR victory for the ‘other side’”. It’s the mentality that leads RC and Tamino to censor all dissent. It’s the mentality that leads Mann to refuse to admit that he used a varve series upside-down. And it is this bunker mentality that might send Phil Jones to jail.

    Give it up Sod, you’ve done enough damage.

  1001. DLM Says:

    sod: “I am really sorry I don’t live up to your expectations.”

    Actually, you have lived up to my expectations. But alas, I no longer find you amusing. I will join the many others who have wisely decided to ignore you.

    Thanks for the heads-up Kweenie. I apologize to Bart. Didn’t know that he had announced his temporary absence.

  1002. Anonymous Says:

    Eli Rabett Says:
    March 29, 2010 at 16:13

    The models are good enough for private enterprise work Tim, certainly a lot better than the ones the bankrupt banks used.

    Nice one, Eli. I would suggest, however, that the difference is not necessarily in the quality of the models but more that the bankers forgot the fundamental rule of models “All models are wrong”. I don’t know about you, but I wouldn’t bet the bank on any of the climate models.

  1003. cohenite Says:

    CO2 must have a unit root relationship with temperature because of Beer-Lambert and the difference between the effective temperature and the greenhouse temperature component; CO2’s heating effect doesn’t figure in the total temperature for that reason; as ge0050 notes, solar does, because it drives effective temperature directly.

  1004. Frank Says:

    eduardo,

    What do you mean by “behavior”? A time series either has a unit root or it doesn’t, no?

  1005. Willis Eschenbach Says:

    eduardo Says:
    March 29, 2010 at 21:14

    I suggest to you the following exercise:

    we have detected that the New York daily temperatures measured from January to May 2009 contain a unit root.

    Which is your favorite conclusion among this mutually exclusive and complete set?

    1- There is an external factor causing this behaviour
    2- There is no external factor causing this behaviour
    3- We cannot say anything

    Eduardo, while you have raised an interesting question, I fear I can’t come to any conclusion until you tell us how you distinguish an “external factor” from an “internal factor”. For example, is the operation of the Constructal Law an external or an internal factor? Is El Nino external or internal? Is the “polar see-saw” external or internal?

    Finally, you have said that your list is “exhaustive”. It is not. One possibility, of course, is that we don’t have enough data. Another is that your unspecified dichotomy of “internal vs. external” is not mutually exclusive. Another is that the dichotomy is mutually exclusive, but the causation is a synergy of internal and external factors which would not occur without the operation of both.

    In any case … what is it that you are calling “external”? Is human action “external” to New York City? Is the earth’s orbit external to the annual temperature cycle?

  1006. Lance Says:

    WOW what a thread!!
    I am very happy to have found this thread, and to see a (mostly) civilized scientific discussion, most of which goes over my head, by the way. It makes me feel good that progress in knowledge is still being made on this subject by means of healthy and civilized discussion.

    However, the “contributions” of Sod make me laugh and cry at the same time. This Sod character is the embodiment of everything that is wrong with the upper echelon climate-science clique (the likes of Mann, Jones, etc.). It is VERY OBVIOUS that he/she/they do not care about the advancement of climate science as a whole; all they care about is protecting their position and their precious CAGW theory. Any criticism of their theory (however well argued and openly discussed, as is done here) is ridiculed and “defused” by personal attacks.
    Thank you Sod, for once again showing all of us what is wrong with climate science at the moment!
    Your posts here show that soooo clearly, it is funny actually.
    Sod, SOD OFF!!

  1007. John Whitman Says:

    VS,

    Saw your comments and mikep’s over at Lucia’s on 29 Mar (yesterday). Enjoyed the interaction over there.

    Will you now, here at Bart’s, start a statistical analysis of the time series for CO2 forcing and CO2 concentration? I would think it would be similar to the analysis just completed on the GISS time series.

    Glad you have called out to: “unsheathe your steel… mother academia is calling you to arms.” : ) Indeed.

    Take care.

    John

  1008. JvdLaan Says:

    Eduardo said:
    I suggest to you the following exercise:

    we have detected that the New York daily temperatures measured from January to May 2009 contain a unit root.

    Which is your favorite conclusion among this mutually exclusive and complete set?

    1- There is an external factor causing this behaviour
    2- There is no external factor causing this behaviour
    3- We cannot say anything

    I was just contemplating a similar experiment with temperatures taken every 10 minutes on a random day in May in the northern hemisphere between 3:00 am and 3:00 pm. Thanks!

  1009. Steve Fitzpatrick Says:

    eduardo
    March 29, 2010 at 21:05,
    “a data generating process, DGP in VS terminology, is either stationary or not”

    I do not agree with this. Is the temperature data collected during a single day, month, or year stationary? It does not seem to me that it is, since the basic requirement of a stationary process is lack of change in mean or variance over the period the data series represents. Average daily temperature data for the month of July in one location may appear stationary, while the data for the month of March at that same location probably would not. Any data sequence which is too short to accurately represent the true variability of the DGP does not appear to me to be treatable as a stationary process, because the information needed to determine an accurate stationary mean and variance is simply not present in a short series.

    Which is why I think the stationarity of the pre-instrument temperature needs to be evaluated based on proxy data which provide information about the variation in global average temperature during the Holocene. Ice core proxy temperature data probably is not a good representation, since it is known (or at least strongly believed) that polar temperature changes were much larger than global average changes over ice age/interglacial cycles. Variability in the ice core proxies during the Holocene is likely much higher than variation in the global average. But perhaps other proxy data (like ocean sediments from a range of locations) can provide more reliable information about the variability in Holocene temperatures.

    All that I am saying is that to assume the pre-instrument period temperatures were stationary (and so not treat the instrument temperature data as VS did), I think there needs to be some clear evidence in support of that assumption of stationarity.

  1010. ge0050 Says:

    It seems unlikely historical proxies for temperature would be stationary if the instrument records are not.

    As I understand the concept of unit root and stationary, a stationary process is something like a dam on a river. It can hold back the water temporarily, but over time it cannot alter the total volume of water flowing in the river.

    A non-stationary process is like a solar eclipse. When the moon passes between the sun and the earth, the solar radiation is lost permanently from the earth.

    So, when the earth heats and cools, if that change is due to something like heat being stored and released in the oceans, similar to water in a dam, then this could be stationary as the net temperature is unchanged in the long term.

    However, when the earth heats and cools due to a change in solar radiation, as might happen due to orbital mechanics, atmospheric transparency, etc., then that would be a permanent change similar to an eclipse.

    The problem in looking at the proxy records, then, is to separate out the stationary data from the non-stationary. Otherwise you cannot tell what you are measuring. Maybe the temperature change you are seeing is permanent, maybe it is temporary. How do you isolate cause and effect in terms of physical law if you cannot be sure whether there has been a net change in the temperature?

    It might be argued that over very long periods of time, the stationary data in the proxies will average out, and only the non-stationary data will be visible. For example, any heat temporarily stored in the oceans would eventually be released. This I could agree with if we can be sure the timescale is long enough.

    In any case, this natural averaging of the stationary data over long periods of time indicates to me that the proxy records cannot be stationary.

    This strongly suggests to me that any standard statistical treatment such as OLS that uses proxy data without first removing the motion via the I(1) -> I(0) transformation is suspect in its conclusions.
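
    The dam/eclipse distinction is exactly the textbook difference between a transient and a permanent shock, and a short simulation shows it (the AR coefficient is arbitrary): in a stationary process a one-off disturbance decays away, while in a unit-root process it shifts the level forever:

        import numpy as np

        n = 100
        shock = np.zeros(n)
        shock[10] = 1.0                   # a single disturbance at t = 10

        ar = np.zeros(n)                  # the "dam": stationary AR(1), shocks decay
        for t in range(1, n):
            ar[t] = 0.8 * ar[t - 1] + shock[t]

        rw = np.cumsum(shock)             # the "eclipse": unit root, shocks are permanent

        print(ar[-1], rw[-1])             # approximately 0.0 versus exactly 1.0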

  1011. ge0050 Says:

    >>
    CO2 must have a unit root relationship with temperature because of Beer-Lambert
    <<

    If CO2 is transformed I(2)->I(0) and is strongly correlated with temperature transformed I(1)->I(0), that would seem to me a strong indication that CO2 is driving temperature.

    If, however, CO2 is transformed I(1)->I(0) and is strongly correlated with temperature transformed I(1)->I(0), that would seem to me a strong indication that temperature is driving CO2.

    This is just a guess on my part and comments are welcome. Does this sort of approach provide a means to isolate cause and effect for CO2?

  1012. ge0050 Says:

    hmm, my previous post got corrupted in some way.

    What I was trying to say was this, is there a way to separate cause and effect when it comes to CO2 and temperature?

    if we look at historical CO2 versus temperature, such as:

    http://en.wikipedia.org/wiki/File:Co2-temperature-plot.svg

    we can see a linear trend. Historically, when temperature increases CO2 also increases. However, Beer-Lambert provides that GHG effects are log functions.

    Therefore, if we compare CO2 I(1)->I(0) to temperature I(1)->I(0), does this help isolate the linear trend, such that a strong correlation would indicate temperature drives CO2?

    And similarly, if we compare CO2 I(2)->I(0) to temperature I(1)->I(0), does this help isolate the log trend, such that a strong correlation would indicate CO2 drives temperature?
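
    One caution on this recipe: correlation of differenced series is symmetric, so it cannot by itself say which variable drives which. A standard way to ask about direction is a Granger-type test on the differenced data. A minimal sketch with placeholder series (real use would substitute, e.g., annual CO2 concentrations and a temperature anomaly record; the generated numbers below are purely illustrative):

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(3)
        co2 = np.cumsum(rng.normal(0.5, 0.3, size=60))    # placeholder: a drifting I(1) series
        temp = np.cumsum(rng.normal(0.0, 0.1, size=60))   # placeholder: another I(1) series

        d_temp, d_co2 = np.diff(temp), np.diff(co2)
        print(np.corrcoef(d_temp, d_co2)[0, 1])           # symmetric: says nothing about direction

        # Does lagged d(CO2) help predict d(temp) beyond d(temp)'s own lags?
        grangercausalitytests(np.column_stack([d_temp, d_co2]), maxlag=2)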

  1013. eduardo Says:

    @ Steve,

    Dear Steve,

    no, sorry, you are wrong. You are confusing two different, albeit related, concepts: on one side we have the true process, which is defined by its equations and parameters. For this you can calculate its true mean and its true variance, and find out if it is stationary or not. You do not need any data for this. On the other hand, we have the observed data, which may be described by a certain process (or perhaps by several different processes), and we can try to estimate from the data the true structure and values of the parameters.

    That the underlying process for temperature must be stationary follows because the variance cannot grow without limit over time; that would violate the conservation of energy, for instance.
    Please consult any book on statistics.
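
    The process-versus-realization distinction can be illustrated directly. A sketch, assuming a strictly stationary AR(1) with persistence 0.95 (an arbitrary but plausibly climate-like value): the process is stationary by construction, yet most 125-point realizations fail a unit-root test, which is why one short sample cannot settle what the underlying DGP is:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(5)
        trials, fails = 200, 0
        for _ in range(trials):
            y = np.zeros(125)                        # about the length Steve asked about
            for t in range(1, 125):
                y[t] = 0.95 * y[t - 1] + rng.normal()   # |phi| < 1: stationary by construction
            if adfuller(y)[1] > 0.05:
                fails += 1

        print(fails / trials)  # most short realizations of this stationary process "look" I(1)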

  1014. eduardo Says:

    @ Willis,

    External factors, or forcings, are those that are considered not to be influenced by the climate itself. For instance, the output of the sun would be an external factor, or a volcanic eruption, or the Earth’s orbit. They are considered to evolve independently of the Earth’s climate. All other variations are not external factors: ENSO, the PDO and the polar see-saw are phenomena internal to the climate; they form part of the global climate, they interact with each other, and they are influenced by the external factors.

    If the example of the temperature in New York is unclear for you, you can consider instead the global temperatures, which also display an annual cycle. Perhaps this is even a better example.

    If the expression external factor causes you problems, you can rephrase the question as

    1. The temperatures have a deterministic component
    2. The temperatures do not have a deterministic component
    3. We cannot say anything

    I think the alternatives are indeed mutually exclusive. If you think we do not have enough data, then your option would be 3.

    It is a clear question: we observe the temperatures from January to May, an I(1) test indicates that the series has a unit root, and we as curious observers of the world think about what we can conclude. Scientists do this all the time. I am curious to read what you would do.

  1015. Steve Fitzpatrick Says:

    Eduardo,

    I must admit I am confused by what you said. I do not see the connection between stationarity (or lack thereof) and conservation of energy, when we are looking at a short data sequence (short in a relative sense).

    The issue as I see it (trying to avoid statistical definitions) is that there is a real possibility of a level of ‘natural’ variation in global temperature during the Holocene which is not apparent in the instrument temperature record, and which could lead to substantially incorrect attribution of the measured 20th century temperature rise to GHG forcing. For example, suppose that in addition to known pseudo-cyclical processes (ENSO, ~3 years; AMO, ~60 years; PDO, ~20 years) there is a natural pseudo-cyclical process with a characteristic period of 120+ years. How could analysis of the instrument temperature record ever account for ‘contamination’ by a pseudo-cyclical process whose period is comparable to or longer than the length of the entire instrument record?

    I do not know how one can ever assign a confidence interval for warming attributed to GHG forcing if that forcing is confounded with an unknown ‘background’ variation, unless there is a way to step back and see the ‘bigger picture’ of normal background variation during the period when GHG forcing was not changing significantly. Am I missing something here?

  1016. VS Says:

    Hi Eduardo,

    Are you stating that the data generating process governing the instrumental record must exhibit mean-reversion (i.e. a necessary condition for stationarity)?

    Just checking.

    All the best, VS

  1017. Steve Fitzpatrick Says:

    eduardo,

    One other thought. You wrote:

    “on one side we have the true process which is defined by its equations and parameters. For this you can calculate its true mean , its true variance and find out if it is stationary or not.”

    This statement strikes me as “upside-down”. The ‘true process’ seems to me to be the physical process that we are talking about. Equations and parameters are only a model of the true process; that is, the equations/parameters are an approximation of the true process, and can never be exactly correct.

    So, in the absence of data, it is only for a model that you can calculate the exact mean and variance, and determine whether or not it is stationary. Any model is (of course) limited by the accuracy of the modeler’s understanding of the physical process. The model may be a very good approximation of the true process, or it may be quite bad, depending on how well we understand the process.

  1018. VS Says:

    Allow me rephrase that:

    Are you stating that the data generating process governing the instrumental record, in the absence of anthropogenic forcings, must exhibit mean-reversion (i.e. a necessary condition for stationarity)?

    Best, VS

  1019. Adrian Burd Says:

    Steve,

    You write “Equations and parameters are only a model of the true process; that is, the equations/parameters are an approximation of the true process, and can never be exactly correct.”

    To some degree this is true. However, look at it another way. Equations and parameters form a representation of our best understanding of a system (whether it be planetary climate, the mass of the Higgs boson, or the motion of the planets around the Sun). Comparing our best understanding (as codified in those equations) with observations allows us to compare quantitative predictions against data and see where our understanding is lacking.

    To say something is “only a model” is rather a strange thing to say. General Relativity is “only a model”; the equations describing the transfer orbit of a spacecraft from Earth to the Moon are “only a model”, containing parameters that we do not know precisely and neglecting certain effects (GR, the gravitational pull of Pluto, etc.), but I don’t see people saying that they are “just a model” and so we should not send spacecraft to the Moon!!

    Climate models encode our current best understanding of the planet’s climate. What is more, if we do not include anthropogenic production of greenhouse gases in those models, we get an answer that doesn’t agree at all well with observation. If we do include it, we get results that agree quite well (though not perfectly) with observed increases in global temperature. Could we be fooling ourselves? Possibly, but this is becoming increasingly unlikely. Yes, understanding climate is a difficult and messy business, but there is a great deal that we do know. I personally would be very surprised indeed if increased CO2 were not responsible for increased T, but some other, as yet to be elucidated, effect were able to mimic the effects of CO2 so precisely.

    That’s my 2d worth.

    Adrian

  1020. Frank Says:

    eduardo says:

    “It is a clear question: we observe the temperatures from January to May, a I(1) test indicates us that the series has a unit root, and we as curious observers of the world think about what we can conclude.”

    and, the choices are:

    1. The temperatures have a deterministic component
    2. The temperatures do not have a deterministic component
    3. We cannot say anything

    Clearly, if the ONLY data at hand is the temperature series, the answer to your question is number 3. However, as “curious observers of the world” we know that, in addition to the short temperature series, the earth is tilted on its axis and revolves around the sun every 12 months. In this case, the NYC temperature series has at least one deterministic component (solar insolation, in addition to others we may or may not know of) and the correct answer is number 1.

    With all due respect, your initial challenge has the appearance of a logical “straw man” intended to convey the idea that statistically useful predictions can be inferred from a data series having a unit root as long as we know that there is an underlying deterministic process.

    If this were so, we could, of course, fit a regression line to the Jan-May series and “project” temperatures above the melting point of lead in the not too distant future. Come to think of it, the idea of extrapolating historical temperatures sort of brings this thread full circle…

  1021. Al Tekhasski Says:

    Adrian,

    Could you please quantify exactly how precise the models’ mimicking of the CO2 effect on global temperature is? Could you compute the same measure with the CO2 chart replaced by the Dow Jones Industrial Average?

  1022. John Says:

    Adrian

    It would worry me more if the models did agree exactly with the observed temps. The potential error in the observed datasets is likely greater than the increase caused by CO2.

  1023. Willem Kernkamp Says:

    Adrian,

    The statistical question is really whether we should “force” these models to be so sensitive to CO2 in order to match the recent temperature record. If the record is too short and its statistical properties too weak to draw conclusions, we should not.

    My model of a casino is that over time you lose money to the bank. Then I have a winning evening. From my LS trend analysis of that evening’s earnings I momentarily get visions of great future wealth. That is, until VS sets me straight and I realize that my data set is not statistically significant and my original model may still be correct.

    In short, the climate models may have been led astray by the recent temperature trend.

    Will

  1024. DLM Says:

    Yo Adrian,

    “Could we be fooling ourselves?”

    Yes. See Climategate, Glaciergate, Sealevelgate, etc.

    Would you trust the forecasters at the British Met Office (barbecue summer?) to chart a course for a flight to the moon, on which you were a passenger?

    Computing spacecraft trajectories with a high level of confidence is a lot simpler than forecasting the weather a week in advance, or the climate a hundred years hence.

    Just how much increase in T do you believe there has been since the start of the Industrial Age, and how much of that do you believe can be attributed to anthropogenic GHG? Show your work, please.

  1025. Paul_K Says:

    Sod,
    Thank you for your contribution to this conversation, and thank god you didn’t listen to those detractors who suggested you might have the IQ of a polo mint. Your insightful assertions were sufficiently eye-opening for me that they have allowed me to generate a COMPLETELY NEW theory of global warming. I agree with you that the temperature trend is “obvious” and that one would have to be some sort of moron (or, more malevolently, a statistician trying to fool us all with some ridiculous mathematics which no-one understands) to doubt it.
    In light of your advice, I therefore noted that the number of births in the US has been increasing monotonically and REACHED RECORD levels in 2007, almost alongside average surface temperature. Using your methodology, I found that a correlation of birth rates to CO2 production (1960 to 2007) produces an R^2 of 94%, STATISTICALLY SIGNIFICANT AT THE 99.2% level. It is eminently clear to me that if we can only cut the US birth rate then we can solve global warming at a stroke. I suggest that we share the Nobel prize?

  1026. John Says:

    Weather is not climate.

    The irony about that is that the weather regulates the climate!!?

  1027. Paul_K Says:

    VS,
    Don’t go away soon!
    I believe, on re-reading Lucia’s comments, and your response, that there were some misunderstandings there.
    Lucia was, and may still be, under the impression that your a priori assumption for the alternative hypothesis against a unit root null was a trend gradient of zero (I think). I also think that she may have misunderstood your test of a “drift” as a test of an intercept term in a trend, rather than a constant in the first difference term.
    To settle this one way or the other, and maybe to help the still-puzzled trend stationary church-goers, can you maybe do the following test:-

    From the sample dataset, fit the best-fit trend line, and compute the residual variance.
    Run a MC to generate realisations of the trend, varying only the residual error term in accordance with the sample residual variance.
    Test the realisations using ADF with the actual estimated lag stats to see how many times the test “fails to reject” the unit root hypothesis.
    Hopefully, this should make it clear to lots of people besides Lucia what the Type 2 error really is against the assumption of TREND-STATIONARITY.
    Before you ream me out about asking you to do homework, I would love to test this myself, but no longer have access to a free stats pack.
    I honestly think that this would help convince the people who are (still) yelling that “There must be a trend there!”
    Don’t let the digressions get to you. There are a lot of people listening carefully.
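
    For readers who want to try the experiment Paul_K describes, here is a minimal sketch in Python/statsmodels; `y` stands for the annual anomaly series, and the lag order and significance level are illustrative placeholders rather than anyone’s actual settings:

    ```python
    # Sketch of Paul_K's Monte Carlo: fit a linear trend to the sample,
    # generate trend-stationary realisations with the same residual
    # variance, and count how often ADF "fails to reject" a unit root
    # (the Type 2 error rate against trend-stationarity).
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    def adf_type2_vs_trend(y, n_sims=1000, maxlag=3, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        t = np.arange(len(y))
        slope, intercept = np.polyfit(t, y, 1)      # best-fit trend line
        resid = y - (intercept + slope * t)
        sigma = resid.std(ddof=2)                   # sample residual std
        fails = 0
        for _ in range(n_sims):
            sim = intercept + slope * t + rng.normal(0, sigma, len(y))
            pval = adfuller(sim, maxlag=maxlag, regression="ct")[1]
            if pval > alpha:                        # fails to reject unit root
                fails += 1
        return fails / n_sims

    # Note: drawing white-noise residuals ignores any serial correlation
    # in the real residuals; a residual bootstrap would be closer to
    # Paul_K's intent.
    ```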

  1028. Willis Eschenbach Says:

    eduardo Says:
    March 30, 2010 at 18:20

    @ Willis,

    External factors, or forcings, are those that are considered not to be influenced by the climate itself. For instance, the output of the sun would be an external factor, or a volcanic eruption, or the Earth’s orbit. They are considered to evolve independently of the Earth’s climate. All other variations are not external factors: ENSO, PDO, the polar see-saw; these are phenomena internal to the climate, they form part of the global climate, they interact with each other, and they are influenced by the external factors.

    If the example of the temperature in New York is unclear for you, you can consider instead the global temperatures, which also display an annual cycle. Perhaps this is even a better example.

    If the expression external factor causes you problems, you can rephrase the question as

    1. The temperatures have a deterministic component
    2. The temperatures do not have a deterministic component
    3. We cannot say anything

    I think the alternatives are indeed mutually exclusive. If you think we do not have enough data, then your option would be 3.

    It is a clear question: we observe the temperatures from January to May, an I(1) test indicates that the series has a unit root, and we as curious observers of the world think about what we can conclude. Scientists do it all the time. I am curious to read what you would do.

    So what you are asking is, do non-climate factors (changes in earth’s orbit, solar variations, cosmic ray variations, volcanoes, etc.) affect the climate? Regardless of whether any given climate record does or doesn’t contain a unit root, the answer is clearly yes …

    At a very basic level, the earth rotates. If it didn’t rotate, the climate would be very different. The rotation leads to temperature variations, which lead to tropical cloud variations that increase low temperatures and decrease high temperatures. The external factor of the earth’s rotation, which leads to widely varying diurnal forcing, has a huge effect on all parts of the climate.

    So before I didn’t understand your dichotomy, but now I fear I don’t understand your point. The entire system is externally forced by a variety of phenomena. And? …

  1029. VS Says:

    Hi Paul_K

    I’ll try to do more simulations in the future, but I’m pretty busy right now :)

    I don’t think lucia has had the time to read my posts in detail, I believe doing so would clear up a lot of misunderstandings.

    In the meantime, I’m really eager to hear a (straight) answer to this question, from Eduardo and/or other climate scientists.

    It’s very important.

    Cheers, VS

  1030. adriaan Says:

    Adrian,

    Have you ever had the opportunity to look into the code underlying the different models? Are you aware of how many non-physical assumptions are in these models in order to have all descriptors tuned in agreement with the training set? And do you know how many parameters have to be fiddled with in order to get the models to agree with the observations? And that the choice of the descriptors used for the modelling is up to the operator of the model? Did you?

    I did, and I have not touched a model since without knowing what the model does, which parameters are important for the model’s predictions, and which descriptors are important for the model and which are not. And this was only a model dealing with the properties of a single molecule…

    Modelling is worthless without a high quality training set with observed properties of the system you are studying. It seems that this may be lacking in climate science: a high quality reference dataset.

  1031. HAS Says:

    Just a little observation on the NY/global temp question, and then some wider comments on the implications for model verification.

    eduardo, I think we need a more precise specification of the question. The structure of the time series/nature of the DGP is an empirical issue, i.e. it is something observed, just like the last temperature in the series, and just like that last point in the series it is only observed with a certain degree of certainty. So in asking the question you should be asking about the confidence we have in giving the answer – otherwise the answer is always going to be 3: we are never absolutely certain.

    Taking a step further there is a difference between asking “what does the time series as observed tell us” and “what does it tell us about phenomena not in the time series”. So continuing the analogy if asked “what is the last temperature in the series” I will be quite clear in my answer whereas in answering “what is the next temperature in the series” the answer will potentially be quite different. So it will be with the deterministic/stochastic question.

    I had been giving some thought to some of the implications flowing from the structure of these time series in terms of how this knowledge aids the understanding of climate change. Before going on I want to just reiterate that in the case of temp VS’s result has been derived from just one dataset. I think there is still work to do to demonstrate using others, and to go into the data to see what the source of this structure is (and incidentally, if this process comes through from the individual temperature records, whether the subsequent processing of the data to give the global series remains robust).

    I’m pleased to see some discussion about doing some similar work (beyond B&R) on the GHG forcings series, because as I understand it this series is a less processed one.

    There has been quite a bit of heat generated (pardon the pun) over what B&R means for the link between the GHG and temp series. I suspect that this is perhaps the more complex bit of the puzzle to deal with, because it is pretty clear from just a naïve appreciation of the physics that this relationship works both ways with various lags (aside from the normal issue of GHG concentrations increasing temp, increased temp affects chemical reactions, i.e. feedback, as mentioned by a number of commentators).

    For this reason I’d tend to start with the question of why does the GHG series show the structure it does, and more directly how does this line up with the results that come out of existing models of the processes that lead to GHG concentrations in the atmosphere and their forcing effects.

    This should help with validation of the models of this subsystem.

    Thinking about this issue raised a wider question in my mind about model validation.

    If we are saying that both GHG forcing and global temperature series have a certain structure, is it more important when validating models of those processes that they:

    1. replicate the actual observations we have seen over the last 150 years, or
    2. produce the same structure in the outputs of interest

    Counter-intuitively, I think the second answer is correct, and tuning a model to generate the actual series may well embed a structure that is inconsistent with the observed structure. The more appropriate technique would be to use the model that shows the structure and then, before forecasting, initialise it using the observed parameters.

  1032. Paul_K Says:

    VS,
    A separate question, maybe easier for you. You have focused on the time domain for the temperature series, but analysis in the frequency domain suggests a lot of cyclicity in the surface temperature trend, at periodicities ranging from geological (Milankovitch, Gleissberg) down to oceanic cycles (62 ± 10 years) and solar cycles (11 ± 3 years).
    Suppose that one could “decompose” the temperature trend by removing periodic cycles to leave a (predefined null) non-linear multivariate function plus a stationary trend. Is there any reason why one shouldn’t test the significance of the results using conventional nested-model statistics? Are there particular bear-traps that need to be accounted for?
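
    For concreteness, the nested-model comparison Paul_K asks about might look like the sketch below. The 62-year and 11-year periods are taken from his comment, and the F-test’s nominal p-value is only valid under i.i.d. errors, which is precisely where the bear-traps live:

    ```python
    # Sketch: trend-only model versus trend + two harmonic cycles,
    # compared with a standard nested-model F-test. With a unit root or
    # serially correlated errors the nominal p-value is unreliable.
    import numpy as np
    import statsmodels.api as sm

    def harmonic_f_test(y, periods=(62.0, 11.0)):
        t = np.arange(len(y), dtype=float)
        X0 = sm.add_constant(t)                     # restricted: trend only
        cols = [np.ones_like(t), t]
        for p in periods:                           # sin/cos pair per cycle
            cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
        X1 = np.column_stack(cols)
        fit0 = sm.OLS(y, X0).fit()
        fit1 = sm.OLS(y, X1).fit()
        fstat, pval, df_diff = fit1.compare_f_test(fit0)
        return fstat, pval
    ```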

  1033. Al Tekhasski Says:

    John wrote the standard AGW thesis: “Weather is not climate”.

    Climate is a finite-length global average of weather, by its very definition. There is plenty of evidence that weather is chaotic. Any simple function (like a running average) of a chaotic function is still a chaotic function; it is just a low-pass filter of otherwise chaotic weather variables. What makes you think that the weather attractor does not have long-drifting natural components that are not filtered out by the 30-year running filter called “climate”?
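
    Al’s low-pass-filter point is easy to demonstrate. A sketch, with the Lorenz-63 x-variable standing in for “weather” (an arbitrary choice of chaotic system, not a climate model):

    ```python
    # A 30-point running mean of a chaotic signal still drifts on long
    # timescales: averaging removes the fast wiggles, not the slow
    # lobe-switching of the attractor. Classic Lorenz-63 parameters.
    import numpy as np

    def lorenz_x(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = 1.0, 1.0, 1.0
        out = np.empty(n)
        for i in range(n):
            dx = sigma * (y - x)
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz  # Euler step
            out[i] = x
        return out

    weather = lorenz_x(50_000)
    climate = np.convolve(weather, np.ones(30) / 30, mode="valid")
    # "climate" still wanders between the two lobes; the running mean
    # did not remove the slow chaotic component.
    ```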

  1034. Paul_K Says:

    VS,
    Apologies. My question should have read “…non-linear multivariate function plus a stationary error term.”
    Just having a senior moment.
    Paul

  1035. VS Says:

    adriaan,

    I think Adrian has looked into those models ;) I completely understand his position. Let me just add that every model is by definition wrong; the question is which model is least wrong for the particular question we ask of it.

    That’s why we need to use formal methods to distinguish between them. These are issues concerning scientific methodology.

    I believe empirical testing (i.e. statistics/econometrics) to be a crucial part of the scientific method. This is also what we are discussing here at the moment.

    Paul_K, whoa, eh.. I (or preferably somebody who knows more about that) would have to look into it, but in statistics, it’s always best to assume that there are bear-traps… because there usually are :)

    So in short: I don’t know :)

    Again, so we don’t forget: I would like an answer to this question from a climate scientist (especially those analyzing the instrumental record).

  1036. eduardo Says:

    @ VS

    Dear VS,

    I’ll try to clarify the definitions, because Willis’s comments make me think that we may have a language problem here. Let’s assume that the temperature variations can be decomposed into those externally forced and those internally generated. Those externally forced are the results of variations in solar output, for instance. Those internally generated are the results of turbulent, stochastic dynamics, and include ENSO, PDO, all those quasi-oscillations and many more. The latter would always be present, because the climate is a chaotic open system. The former would not exist if the external factors (sun, volcanic eruptions) were exactly constant in time. You can perhaps consider the economy of the world, with an independent and unique central bank that sets interest rates blindly once a year or at random points in time. You will have some internal random variations in output and some ‘forced’ variations in output that you can ascribe to the whims of the central banker.

    Now what I claim is that the process governing the internal chaotic variations in the climate must be mean reverting. And if the external factors do show variations and these can be described by a stationary process, then the process governing all temperature variations combined must also be mean reverting.

    This is just physical reasoning. In econometrics you may have processes that are per se non-stationary (prices, GDP, wages, etc). In nature I cannot think of such an example. For instance, the energy of the system sun-earth is finite. So the mean temperature of the Earth can never be so high that its energy surpasses this limit.

    But this is not the objection that I have. I have no objection to ‘describing’ the instrumental record as an I(1) process, even if I think it is physically impossible. My question all the time has been: what do we learn from it? Does the property I(1) tell me whether the temperature variations have been caused by increasing solar output, or whether they are randomly internally generated? That is the answer that I want to find. Neither I nor anybody else wants to statistically predict the temperature in 2050, or to know whether the temperature in 2010 is compatible with some type of stochastic process. If the property I(1) cannot discriminate between both possibilities, it is not relevant in this context.

    I suspect that the solution to my question is co-integration, and not I(1) per se. If that’s the case, let us move on and stop arguing in circles.

  1037. VS Says:

    Hi Eduardo,

    I think we are still talking past each other. Language problem indeed :)

    I am not disputing that the DGP of the long term record is mean-reverting (i.e. the current Interglacial). I’m also not disputing the physical nature of this prediction.

    Note that the data generating process is not the true underlying long-term physical process. It’s simply the process that generates our observations.

    I’m simply asking you whether you believe that the process generating our instrumental temperature record (i.e. the DGP governing our instrumental record, i.e. the measured temperatures on our 128 year interval), when we account for the various exogenous forcings (volcanic, anthropogenic, you name it, one shot hits), must be mean reverting.

    Again, just checking.

    Cheers, VS

  1038. Paul_K Says:

    VS,
    Your important question:
    “Are you stating that the data generating process governing the instrumental record, in the absence of anthropogenic forcings, must exhibit mean-reversion (i.e. a necessary condition for stationarity)?”

    Mmmmmm. I believe that an honest answer from either AGW proponents or opponents would have to be “no”. However, the question comes down to timeframe.
    The instrumental record itself is very short relative to the known external forcings which control temperature on a geological timescale. Hence one could summarily dismiss the idea on the assertion that a short-chain sub-sample of such data – being significantly less than the maximum known periodicity of cyclic phenomena – should demonstrably NOT show mean reversion. Equally one can argue that, since the sun is cooling, the long-term temperature trend is a cooling Earth.

    However, both of these discussions can be dismissed on grounds of relevance – the first because of the very long wavelength of these cycles (low dT/dt in the short term) and the second because the rate of change (negligible dT over 128 years) is close to zero for all practical purposes.

    One is left then with the very basic “zero order model” of Earth, which basically says that over time, the average total input radiative flux from the sun must equal the total radiative energy emitted by the Earth. One need only think about this for a few seconds before realizing that this does give rise to “mean-reversion” in temperature, assuming no other change in external forcings, BUT ONLY BECAUSE IT IS ASSUMED IN THE FORMULATION.

    I think that it is fairly safe to say that there is no powerful theoretical reason why the instrumental record, given its short timeframe, should exhibit mean reversion characteristics.

  1039. John Says:

    Al

    Sorry, that was a failed attempt at humour directed at DLM’s comment.

    VS. Awesome question!! May I take the opportunity to thank you for the time you have put into this very interesting thread. Your more formal approach to testing seems to me more in line with modern standards than what a lot of climate scientists use; they remind me of the 1970s back-of-a-fag-packet engineers.

  1040. Paul_K Says:

    Eduardo,
    Our posts crossed. Your distinction between exogenous and endogenous forcings is valid and useful.
    But if one accepts that the overall amplitude of change in the exogenous forcing has been smallish for the last few thousand years (say), then we have still seen variations in surface temperature from the Roman Warm Period through the Dark Ages cool period, Medieval Warm Period, Little Ice Age and Modern Warm Period. (I don’t want to get into a competition about the temperature amplitudes associated with each of these periods.) If one accepts that there is SOME periodicity there, of order several hundred years, then one should argue that the ACTUAL OBSERVED instrumental record of 128 years should NOT be assumed to show mean reversion, since it is “within cycle”.

  1041. HAS Says:

    Paul_K

    In fact it should be a reasonably straightforward analysis to ask two questions about a stationary series with very large variance (is that how you’d describe a long wavelength?):

    1. What are the odds of drawing a sample of contiguous 150 data points from that series and not being able to detect the stationary process?

    2. How long a sample of contiguous data points do you need to confidently detect the stationary process?
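
    HAS’s two questions can at least be framed as a simulation. A sketch, assuming the “long wavelength” stationary process is a strongly persistent AR(1); the persistence value 0.95 is an arbitrary illustration, not an estimate from any climate series:

    ```python
    # How often does a KPSS test, applied to a short window drawn from
    # a persistent but stationary AR(1), fail to see the stationarity?
    # Varying `window` addresses HAS's second question.
    import numpy as np
    from statsmodels.tsa.stattools import kpss

    def looks_stationary_rate(phi=0.95, window=150, n_sims=500, burn=500, seed=0):
        rng = np.random.default_rng(seed)
        rejected = 0
        for _ in range(n_sims):
            e = rng.normal(size=burn + window)
            x = np.zeros(burn + window)
            for i in range(1, len(x)):
                x[i] = phi * x[i - 1] + e[i]        # stationary AR(1)
            # KPSS null = stationarity; p-values are table-interpolated
            pval = kpss(x[burn:], regression="c", nlags="auto")[1]
            if pval < 0.05:                         # rejects stationarity
                rejected += 1
        return 1 - rejected / n_sims                # fraction looking stationary
    ```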

  1042. eduardo Says:

    @ Willis,

    ‘So before I didn’t understand your dichotomy, but now I fear I don’t understand your point. The entire system is externally forced by a variety of phenomena. And? …’

    The entire system is externally forced, but not all variations are externally forced. ENSO, and PDO and so on would exist even if the sun was completely constant, and we had no volcanoes, and the rotation of the Earth were constant, etc, etc..

    My point is the following:
    Let’s assume for the moment that the 20th century trend is caused by CO2. The analysis will indicate that temperatures are not stationary. So unit root.
    Now let us alternatively assume that the observed 20th century trend is caused by random internal processes that happen to be unit root as well, and not by CO2.

    If both possibilities are compatible with the analysis, I claim that the analysis is useless. It doesn’t tell me anything. It doesn’t prove or disprove anything per se.
    It is just a description of something that could have multiple causes.
    You may retort that here there is nothing to explain, it is just ‘unit root, that’s normal’. And I would say, well, no, there are no ‘normals’. Even if the mean annual temperatures were just boring Gaussian random noise with mean 13 °C and standard deviation 0.1 °C, one should be able to explain why the mean is 13, why the standard deviation is 0.1 °C, and why it is boring Gaussian noise.

    My example with the dichotomy illustrates that even in the case when we know the true cause, the explanation ‘unit root’ doesn’t allow you to reach the right conclusion.

  1043. phinniethewoo Says:

    It’s been said somewhere that it is important that sunlight directly reaches the earth’s surface.. now we are at 60-70% with the present atmosphere?

    Adding CO2 makes that % rise, until ultimately we have 100%.
    This would mean, imho, that we have a first-order generation of H2O in the atmosphere falling away..
    This could trigger an ice age.

    Milankovitch physical explanations all you want, but it seems scientific observation (TSA) of the data indicates earth’s temperature is a variable stacking/unstacking onto itself year on year (or month on month), if that is the meaning of being I(1).

    What we do not want is a trillion dollar investment in windmills, only to find out that what is needed is a trillion dollar investment in clouds or dust to avert an ice age.

    Anyways, it is just my theory. It is not based on scientifically analysing observed data.. so I am in the same camp as any alarmist, or alarmist “convinced” institute.

    What does the Cameroun institute think about the issue? Surely they have thought about checking observations before joining the march of the convinced institutes?

  1044. eduardo Says:

    @ Paul

    ‘Assuming there is a ‘periodicity..’.

    But this argument has nothing to do with the ‘unit root’ stuff, right?

    In principle your argument could be valid, but one has to show that there was such a periodicity in the past, of the right amplitude, and find a cause for it. Then show how this periodicity also causes more warming in the northern latitudes than at the equator, and at the same time cools the lower stratosphere. In other words, explain more than CO2 can explain. That’s the correct procedure. If you find one, please tell me. I would be very happy to share the Nobel prize with you. :-)

    I think however this is another topic, and we can easily get entangled in other questions.

  1045. phinniethewoo Says:

    The DGP is governed by a mix of compressible and incompressible Navier-Stokes equations and heat equations on a sphere that’s spinning around the sun.. zack zack, now it’s cool, facing the −273 dark night; zack zack, now it’s facing 250 W/m**2.
    I think the state space for the measure defined by averaging 4000 (UHI) sites looks like a christmas tree littered with Lorenz attractors, and is anything but stationary. Also without humanity.

  1046. Steve Fitzpatrick Says:

    Adrian Burd
    March 30, 2010 at 21:06

    Every intellectual construct that purports to represent reality is only a model of that reality. Of course, some models (e.g. the ideal gas law, the Heisenberg uncertainty principle, relativity, and many more) have proven to be extremely robust and accurate.

    That does not mean that climate models are in any way comparable in terms of being robust. My argument with eduardo is quite simple: sure, climate models can be used to assign a certain level of warming to GHG forcing, but the unanswered question is: how do we know whether the current understanding (as embodied in climate models) is a reasonable representation of that physical reality? If we say that the instrumental temperature record is non-stationary specifically because of GHG forcing (which seems to me to be what eduardo is saying), then I ask how we know that is correct without being able to see the temperature record in the absence of GHG forcing.

    I honestly do not think that eduardo has addressed this question in a meaningful way.

    My position remains that only a broader perspective (earlier Holocene temperature data compared to recent data) can provide the background to know if the models are close to accurate. Alternatively, we can wait 50 years to see how the models do; this is (I suspect) what will actually happen.

  1047. Steve Fitzpatrick Says:

    Paul_K
    March 31, 2010 at 00:29

    Thanks. Better said than I could say, but exactly what I have been trying to say.

  1048. Steve Fitzpatrick Says:

    eduardo,

    The issue is not whether there is a GHG forcing (I do not think anybody commenting here doubts that there is). The issue is uncertainty. Please explain how you would assign uncertainty limits to the effect of GHG warming in light of known but poorly defined pseudo-periodic temperature variation, and in light of possible longer-term natural pseudo-periodic temperature variation. How do you do this based only on the instrumental temperature record, which is not long enough to provide information about longer-term variations?

  1049. HAS Says:

    eduardo

    You are missing the point that the data is saying something beyond the nature of the process. It is also saying that you don’t need to assume an external GHG impact to explain the late 20th century global temp series.

    I also think the discussion about getting the wrong answer if we knew the right answer is a distraction (including my comments, on second thoughts). If we know what system we are in, then we can tell whether it matters if you interpolate the DGP from a small sample, and in any event I don’t think you are worrying about the long-wavelength stuff; you’re worried about detecting external GHG impacts. So the long-wavelength stuff is a straw man.

    To clarify can I just get you to mentally check through the following points in your mind and see if you agree:

    1 The GISS data shows that we can with confidence say it is I(1) with unit root.
    2 This has an important implication – be careful about the use of statistical techniques that don’t cope with this.
    3 Using this insight into the GISS data we also find that we cannot statistically distinguish the temp series from the period of GHG increases from the series prior to that period.
    4 This is a very limited experiment with a very limited dataset so we should do more, taking into account point 2 and using 1. to help form our hypotheses.
    5 Empirical results that don’t generate data consistent with 1. should be treated with care, until 1. is modified, disproven or found to just be a limited result.

  1050. cohenite Says:

    VS says:

    “Are you stating that the data generating process governing the instrumental record must exhibit mean-reversion (i.e. a necessary condition for stationarity)?”

    This is the issue; at least it is the guts of the dispute between Foster et al and McLean et al. Foster et al say that natural factors contribute to temperature variation but not trend: the natural factors oscillate, and the cold and warm parts of the cycle neutralise each other. But ENSO asymmetry is well documented [see Monahan and Dai, Sun and Yu, Stockwell], and the heat source for such non-linearity over the 20thC is well documented too, with sufficient TSI in the early-to-middle 20thC, and the Pinker effect present in the latter part of the 20thC, where decreasing TOA SW flux and increasing surface SW flux occur despite a moderating sun, due to a well documented decrease in cloud cover.

    There is no need to resort to GHG/CO2 as a temperature driver. CO2 cannot be the driver because of Beer-Lambert: increasing CO2 has less LW to absorb, so the effect is exponentially declining; but even this delimiting parameter is further mitigated by convective processes. Heat transfer by convection exceeds metres per second; heat transfer by diffusion [i.e. CO2 radiative transfer subject to Beer-Lambert decline] does not exceed cm/s and is swamped by the convective process.

    A further point about the current temperature gradient not being exceptional; this is confirmed by Fig 9.5 in AR4; the pre and post 1940 plots are identical; Tamino disagrees but in the valley of the blind the one-eyed man is king.

  1051. Willis Eschenbach Says:

    Eduardo, I think part of the problem might be in the term “mean reverting”. Let me give an example.

    Suppose I have a car on a level stretch of road. I fasten the gas pedal at a certain point. The car accelerates until power is equal to wind resistance, and after that the car stays at that speed, say sixty miles per hour.

    Now if there is a bump in the road, a rock or whatever, when the car hits it the car will slow down. Soon, however, it will speed up until it is once more running at exactly the same speed as before it hit the rock: sixty miles per hour.

    Obviously, it is reverting to the mean of sixty per. Mean reverting behaviour.

    Now, let’s consider a car with “cruise control”. This adjusts the gas to maintain the speed of 60 miles per hour. If it hits a rock and slows down, it gives the engine a bit more gas until the car reaches 60 mph again, then cuts back on the gas to maintain the speed.

    Obviously, it is also reverting to the mean of sixty per. Mean reverting behaviour.

    However, the mechanisms in question are very, very different. In one, there is an active governor at work. In the other, it is merely staying at the point where input = resistance. The implications of these two mechanisms are also very different.

    Which type of “mean-reverting” behaviour are you referring to here? One? The other? Both?

    I ask in part because, earlier, you said that the temperature variations of the earth appear to be bounded. I asked what mechanism enforces those bounds, without receiving an answer. The first mechanism I point to above could not enforce bounds on the car’s speed, but the second mechanism could. So it is not just of theoretical interest.
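
    In time-series terms, Willis’s two cars are hard to tell apart from the speed data alone: both behave like the mean-reverting AR(1) in the toy sketch below, whereas a unit-root process never pulls back at all. All numbers are invented:

    ```python
    # Toy contrast: a mean-reverting AR(1) "car" that recovers after a
    # bump, versus a random walk that keeps whatever speed the bump
    # left it with. Both series receive the same shock at t = 100.
    import numpy as np

    rng = np.random.default_rng(1)
    n, mean_speed, pull = 300, 60.0, 0.2
    ar1 = np.full(n, mean_speed)
    walk = np.full(n, mean_speed)
    for t in range(1, n):
        shock = -5.0 if t == 100 else rng.normal(0, 0.3)
        ar1[t] = ar1[t - 1] + pull * (mean_speed - ar1[t - 1]) + shock
        walk[t] = walk[t - 1] + shock               # unit root: shock persists
    # ar1 climbs back toward 60 mph after the bump; walk stays slower.
    ```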

  1052. VS Says:

    Hi Eduardo,

    I understand that coming from your methodological point of view, you have already elaborately answered my questions, so it might seem like I’m ‘nitpicking’.

    However, coming from my methodological view, the question is still unanswered.

    You state that both are a description of the system. I agree. However, allow us to go back to the very first application of OLS trend regression analysis: the estimation of the parameters governing planetary orbits by its inventor, Gauss.

    Now one can say that the process describing the DGP of those observations (i.e. where we observe various planets at various times) can be written as:

    (1) a perfect circle + measurement error
    (2) an ellipse + measurement error

    However, these two are not equivalent, or ‘equally justified’. The assumptions are different.

    So I ask again:

    Do you believe that the process generating our instrumental temperature record (i.e. the DGP governing our instrumental record, i.e. the measured temperatures on our 128-year interval), when we account for the various exogenous forcings (volcanic, anthropogenic, you name it, one-shot hits), must be mean reverting?

    Kind regards, VS

    PS. On the ‘humour’ issue. I’ve read quite a lot of Terry Pratchett, who’s a ‘Pom’, right? Well, I think this man knows ‘humour’ and he knows how to spell it ;) As can be seen from my contributions, however, I prefer American spelling.. Cheers.

  1053. VS Says:

    A side note Eduardo: Because I don’t want to ‘trick’ you into anything (really! :), I have to point out that this is an exact and formal question.

    Cheers, VS

  1054. Tim Curtin Says:

    Hi Eduardo

    I was intrigued by your challenge re New York temperatures: “eduardo Says: March 29, 2010 at 21:14 I suggest to you the following exercise: we have detected that the New York daily temperatures measured from January to May 2009 contain a unit root. Which is your favorite conclusion among this mutually exclusive and complete set?
    1- there is an external factor causing this behaviour
    2- there is no external factor causing this behaviour
    3- We cannot say anything”
    Why January-May? Why not just January for starters, then each month in turn? For a first stab, let’s take January NY temperatures from 1960 to 2006, and take the unadjusted trends for an index (1960=100) of each of NY’s average temperature, the level of atmospheric CO2 at Mauna Loa in January 1960-2006, and the Solar Surface Radiation (aka “AVGLO”, in average daily Wh/sq.meter) in NY in January 1960-2006. The resulting graph is fascinating, and I can forward it and the data to any who ask for it by emailing me at tcurtin@bigblue.net.au. It shows as usual the virtually monotonic rise in [CO2], which is therefore non-stationary, but reveals a long-term (46 years) declining trend in NY’s average temperature in January (211.59 − 7.8687T), which visually at least closely tracks a very similar declining trend in NY’s SSR in January (102.06 − 0.02056·SSR). In short, given that SSR=SUN is an external factor, while [CO2] is ‘internal’ (anthropogenic) but has an opposite trend to that of the external factor, we can say a lot, and the reply to Eduardo is YES to (1) and NO to (2).
    But eyeballing unadjusted data can be risky. So let’s regress Ave T on [CO2] and SSR, all data are the absolute index numbers for each January. The AdjR2 is lousy, 0.07 with constant=0, and 0.08 with unfixed constant, which however indicates there is no spurious correlation in this model. Both independent variables have significant coefficients (for both t>2.3, p=0.02), the only problem is that the one on [CO2] is negative. Oh dear! But let us do the Durbin-Watson test for autocorrelation before we move on to first differencing etc., and we find it’s 1.947, so close to 2 and therefore no autocorrelation.
    Now assuming Eduardo is right that NY temperatures exhibit unit roots, let’s redo the regressions for dT/dt, but leave SSR and CO2 undifferenced for now. Both coefficients are now stat. insignificant, but that on [CO2] remains negative. Next, we 1st difference all variables: both coefficients remain insignificant but swap signs.
    Beenstock & Reingewertz find that radiative forcing needs double differencing to become stationary. Doing that with dSSR as the only external factor, finally [CO2] becomes positive and significant, with t=1.96 and p=0.0568. But this was achieved with I(1) for average temperature and I(2) for greenhouse gas forcing: “Normally, this difference would be sufficient to reject the hypothesis that global temperature is related to the radiative forcing of greenhouse gases, since I(1) and I(2) variables are asymptotically independent.” (B&R, p.3).

    So far I have used only one external variable, SSR. But there are others in my database for NY. Let’s take “precipitable water” (aka H2O, in cm.). The impact is amazing. Using the unadjusted data in each case, the R2 becomes serious, at 0.6, and the coefficient on H2O is both large (14.66) and very significant (t=8.04, p=3.54E-10), while that on [CO2] remains stubbornly (and very significantly) negative.

    Now is precipitable water an “external” factor? Much of the IPCC’s AR4 and the whole of Stern and Garnaut say more [CO2] means worse and longer droughts most everywhere. So is H2O an inverted proxy for the otherwise invisible effects of [CO2]? Eduardo, do tell us. But take care: I find only a spurious correlation between CO2 and H2O, as implied by the wonderful R2 of 0.95. Enough to make me the poster boy for AR5? Well, perhaps not, as the Durbin-Watson shows strong auto-correlation.

    Apologies for length – I could go on! – but enough for now, except to ask why not a single one of the IPCC’s thousands of Nobel prize winners ever bothered to analyse climate data place by place, and then, and only then, compile frequency distributions of the trends and correlation coefficients for temperature on SSR, [CO2], H2O, RH (=relative humidity) and other “external” factors (unrelated to [CO2]), all classified by latitude and altitude and by I(d) status. Hansen et al of GISS say (2010) that this is unnecessary, as the trends are the same for at least 1000 km between each location irrespective of altitude and latitude. Are they? Not for Hilo (sea level) and Mauna Loa (3,500 metres), not more than c. 30 km distant from each other. Enough!
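
    For anyone wanting to reproduce this kind of exercise, a sketch of the first regression (average temperature on [CO2] and SSR, with a Durbin-Watson check). The file and column names are hypothetical placeholders; none of Tim’s actual data is included:

    ```python
    # Sketch: OLS of January average temperature on CO2 and surface
    # solar radiation, plus the Durbin-Watson statistic (~2 suggests
    # no first-order autocorrelation in the residuals).
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    df = pd.read_csv("ny_january.csv")              # hypothetical file
    X = sm.add_constant(df[["co2", "ssr"]])         # hypothetical columns
    fit = sm.OLS(df["avg_temp"], X).fit()
    print(fit.summary())                            # coefficients, t, R^2
    print("Durbin-Watson:", durbin_watson(fit.resid))
    ```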

  1055. VS Says:

    ========================
    GRASSROOTS EFFORT ANNOUNCEMENT
    ========================

    Do you agree with the notion of open science, and do you like what’s being done in this thread? Do you want to help out, and don’t know how?

    Here’s an opportunity!

    I need to do a literature survey, but between posting here, and my own (real-life) responsibilities, it’s going to take forever. That’s why I ask all interested individuals, who believe that they understood my message to help me out here.

    ========================
    WHAT DO WE NEED?
    ========================

    We need a comprehensive overview of the literature where (trend-)stationarity of the instrumental record is assumed.

    This includes:

    – Any article where a ‘statistically significant trend’ is used as an empirical argument. Here’s a good example
    – Any article describing a method/estimator/statistic which (implicitly) assumes (trend)-stationarity
    – Any article that regresses climate variables (i.e. instrumental temperatures, sea level heights, solar irradiance, GHG forcings.. etc) without mentioning the phrase ‘cointegration’

    If you find a mountain of papers (likely), try to select the most cited ones and the ones appearing in A-journals!

    ========================
    AND THEN WHAT?
    ========================

    Just send the references you find to vs dot metrics at googlemail dot com. Mention the number of citations next to each reference, and please provide a direct link to the journal article!

    I thank thee in advance (and so does science :),

    all the best to everybody, VS

  1056. Alan Says:

    Hi Eduardo,

    Your little question appears to have drawn a variety of responses – in both perspective and length.

    If I analysed the daily temperatures from Jan 1 to May 31 in New York (a little over 150 data points) and computer says “unit root”, I’d go “hee!”

    From winter to summer in New York I would expect the external driver called the sun to give me a deterministic trend.

    So I would be confused!

    (more than I usually am!)

  1057. Bart Says:

    Perhaps the presence of a unit root is not inconsistent with a deterministic trend?

  1058. sod Says:

    1 The GISS data shows that we can with confidence say it is I(1) with unit root.

    double wrong.

    tamino disputes the root.

    Still Not

    De Witt disputes I(1)

    http://rankexploits.com/musings/2010/how-do-we-know-that-temperature-is-i1-and-not-i2/
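
    The I(1)-versus-I(2) question is at least mechanically checkable: difference the series until an ADF test rejects a unit root. A sketch of that standard sequential recipe (not DeWitt’s or VS’s actual procedure, and with the usual caveat that sequential testing compounds error rates):

    ```python
    # Estimate the order of integration d by differencing until ADF
    # rejects a unit root: I(1) means the first difference is
    # stationary, I(2) means you must difference twice.
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    def order_of_integration(y, alpha=0.05, max_d=3):
        y = np.asarray(y, dtype=float)
        for d in range(max_d + 1):
            if adfuller(y, regression="c")[1] < alpha:  # unit root rejected
                return d
            y = np.diff(y)
        return None                                 # still non-stationary
    ```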

  1059. sod Says:

    Perhaps the presence of a unit root is not inconsistent with a deterministic trend?

    of course it is not.

    Let us say that I have been measuring temperature in my house over 130 data points.

    VS has found the dataset to contain a unit root. He disputes a trend.

    But I know that I installed a climate system 30 data points ago, and that I have been constantly increasing its setting, to generate a linear trend.

    What VS does here is useless.

    And the case in economics is obviously not as clear as he describes it either.

  1060. VS Says:

    Hi Bart,

    Yes, a deterministic trend is inconsistent with a unit root. That’s the lesson you should take back from this whole discussion.

    For the record, I have replied to both of Tamino’s posts. And the answer to DeWitt is in the comments of his entry.

    Cheers, VS

  1061. sod Says:

    Just send the references you find to vs dot metrics at googlemail dot com. Mention the number of citations next to each reference, and please provide a direct link to the journal article!

    Bart, I would give this some serious thought.

    Do you really want to host the VS attack on science on this page?

  1062. VS Says:

    PS.

    Bart,

    just to remove any confusion, a stochastic trend can contain a drift parameter, which indeed predicts a ‘deterministic’ rise in each period. The trend can furthermore be polynomial (i.e. quadratic, or higher order).

    This is all possible, and you can test all of this, formally.

    A trend-stationary deterministic trend, however (i.e. the one you estimated up there), is ruled out.

    Cheers, VS
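
    What “test all of this, formally” can look like in practice: the ADF regression can be run with no deterministic terms, a constant (drift), or a constant plus linear trend, and the specifications compared. A sketch using statsmodels’ option names:

    ```python
    # ADF under the three standard deterministic specifications:
    # "n" = none, "c" = constant (drift), "ct" = constant + trend
    # ("n" is the spelling in recent statsmodels; older versions use "nc").
    from statsmodels.tsa.stattools import adfuller

    def adf_all_specs(y, maxlag=None):
        out = {}
        for spec in ("n", "c", "ct"):
            stat, pval = adfuller(y, maxlag=maxlag, regression=spec)[:2]
            out[spec] = (stat, pval)
        return out
    ```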

  1063. John Says:

    sod

    If Tamino is wrong, will your head explode?

  1064. Morph Says:

    “Bart, i would give this some serious thought.

    do you really want to host the VS attack on science on this page?”

    Sod,

    No attack on science that I can see…

    But, if for some reason some erroneous conclusions are found somewhere, does that make you afraid? If everything has been peer-reviewed by experts in the field and they have reached consensus, then all will be fine, no? AMOF, it will be even stronger, no?

    Still scratching my head… Why would someone be afraid?

    From what I’ve seen in this thread, I believe that Bart would be honored to have hosted a grassroots effort (according to you, driven by those damned skeptics who want to destroy science) that will vindicate science.

  1065. Bart Says:

    VS wrote:

    Yes, a deterministic trend is inconsistent with a unit root. That’s the lesson you should take back from this whole discussion.

    Sod wrote:

    Let us say that I have been measuring temperature in my house over 130 data points. VS has found the dataset to contain a unit root. He disputes a trend.
    But I know that I installed a climate system 30 data points ago, and that I have been constantly increasing its setting, to generate a linear trend.

    Let’s change “linear” into “going up, with relatively large up- and downswings along the way”. That’s a similar point I’ve been making:

    It’s a bit like asking if the bike could have moved downhill all by itself, even if you see that someone is riding the bike downhill.

    The chance of my weight increasing or decreasing depends on my caloric input and output (i.e. my personal ‘energy balance’). That statement holds irrespective of whether a unit root is present in the timeseries of my weight against time (because there’s physics, and in this example biology, involved).

    VS, is your contention then that in the presence of a clear deterministic factor (such as in these examples) it’s impossible to have a unit root in the timeseries? Even if there are *various* deterministic factors that change over time *and* a large amount of variability and oscillations?

    Just saw your latest comment: What I would predict is that the atmospheric temperature is influenced by the net forcing and by energy flows between different parts of the system (e.g. atmosphere – ocean in ENSO). Do you have plans to test that hypothesis? (no easy task, I realize, but that’s an hypothesis worth testing, as opposed to the hypothesis that temperatures will continue to rise in the same way as they did in some reference period – nobody claims the latter.)

    Finally, sod:
    You made your point that you don’t deem VS’ ideas worthy of consideration. Just because some people are eager to run away with the results and make all kinds of far-reaching (and imho unsubstantiated) claims doesn’t invalidate the discussion we’re having. I decided to have a relatively open discussion here and will stick to that.

  1066. VS Says:

    Hi Bart,

    “Just saw your latest comment: What I would predict is that the atmospheric temperature is influenced by the net forcing and by energy flows between different parts of the system (e.g. atmosphere – ocean in ENSO). Do you have plans to test that hypothesis? (no easy task, I realize, but that’s an hypothesis worth testing, as opposed to the hypothesis that temperatures will continue to rise in the same way as they did in some reference period – nobody claims the latter.)”

    I have many plans!

    But the point is, we’re still (apparently) not out of the ‘unit root’ / ‘stationarity’ woods. I’d like to close that chapter before moving on (otherwise we would get ‘dragged’ back to it, constantly, crippling the discussion).

    I’ll try to write a proper response to your questions soon, I promise, but I’m extremely busy right now (this is taking up a lot of time! And I have important obligations here, that I haven’t given due weight in the past few weeks, hence my ‘grassroots’ request :).

    Also, thank you for the last paragraph of your comment. I knew you felt that way, and if I thought that you didn’t, I wouldn’t be contributing here ;)

    Cheers, VS

  1067. Bart Verheggen Says:

    VS, “closing that chapter” may be a little too ambitious…

  1068. VS Says:

    Bart, I’m working on it behind the scenes, patience ;)

  1069. Paul_K Says:

    Hi HAS,
    Re your post of March 31st 00:30
    “In fact it should be a reasonably straightforward analysis to ask two questions about a stationary series with very large variance (is that how you’d describe a long wavelength?):

    1. What are the odds of drawing a sample of contiguous 150 data points from that series and not being able to detect the stationary process?

    2. How long a sample of contiguous data points do you need to confidently detect the stationary process?”

    Just on your point of definition/description: my reference to long wavelength referred in context to external forcings which are apparent on a geological timescale and which exhibit cyclic behaviour – for example, changes in solar insolation as a result of solar system “orbital harmonics”. I was not saying anything about whether such cycles are cyclic-stationary.
    I believe that a formal attempt at answering your two simple questions would be horrendously complicated! If we were dealing with a long-term DGP which had a single known cycle as an input signal, we could do a good job theoretically. The problem which arises here is that any analysis of temperature series in the frequency domain (whether one looks over millions of years, thousands of years or 128 years) suggests a multiplicity of cycles, some external, some endogenous, some unknown, which are superposed on each other. A frequency analysis unfortunately does not come with ready-made labels and cycle properties.
    We are left then with qualitative arguments of the sort you see above on mean-reversion AND an inspection of the data structure itself to see what it can tell us.
    It is genuinely sad that we do not have a single generally accepted temperature reconstruction over the last few thousand years. This would permit practical resolution of many of the “structural” questions raised here. If we knew for certain that the Roman Warm Period and the Medieval Warm Period were truly high amplitude global temperature events, then we could say with certainty that we CANNOT assume that our sample of 128 years of temperature should exhibit mean-reversion. Since even hockey-stick supporters now acknowledge that these were high-amplitude events in the Northern Hemisphere, there should be common agreement that mean-reversion is, to put it mildly, an unsafe assumption at best.

  1070. Trevor Says:

    Marco:

    “You might want to get the facts right. NASA GISS has nothing (as in ‘nothing’) to do with the two available satellite records. Those are from UAH and from RSS, an independent organisation.”

    Sorry about that. I made the mistake of relying on what someone else said. But now I’ve checked it out myself, and it turns out, it doesn’t matter, because BOTH satellite records (before “adjustments”) show less warming in the troposphere than at the surface.

    “Also, Jim Hansen (who you tried to link to RSS and supposed adjustments in that record (please provide evidence for that, so far UAH is the one continuously having to modify its procedures)) has already indicated that a Venus-like runaway effect is highly unlikely for earth. Nothing new, there. Unfortunately, even a ‘mere’ ten degrees extra is devastating for modern human society.

    Who said anything about 10 degrees? Looks to me like, for at least the last half-million years, the earth has NEVER been more than about 4-5 degrees warmer than it is right now. If temperature is “bounded” (as YOUR side is claiming, and I agree), then that boundary is 4-5 degrees north of here. That’s not “devastating for modern human society”. In fact, I could argue that it’s quite positive for human society. (And I’m not convinced that 10 degrees would be devastating for that matter, though it might not be quite as positive as 4-5 degrees. But for that to happen, the current ice age we’ve been in for the last 2.6 million years would have to end, and the shortest ice age we know of lasted 30 million years.) Moreover, no matter how “devastating for modern human society” global warming might be, it’s not nearly as “devastating for modern human society” as what will happen if we accept the AGW theory and take action, based on the AGW theory, to STOP global warming.

    “Of course, that the upper troposphere is warming faster than the surface *is* being seen through proper data analysis, removing non-climatic signals.”

    ONE analysis (Fu et al, 2004) of UAH and RSS tropospheric temperature records shows that ONE of the two records, when adjusted for “stratospheric cooling”, has about the expected ratio to surface temperatures in the TROPICS, but not for the entire planet. And the UAH tropospheric record, even after this adjustment, is still warming less than the surface. And the weather balloons agree with UAH. So, there’s really very little evidence of greater warming in the troposphere.

    But, regarding this Fu et al paper, one would assume that, when these really smart climatologists were coming up with their theories and models of climate change, they would have included stratospheric cooling of the troposphere, and thus when the theory says that the troposphere SHOULD warm more than the surface, they’re talking about the NET warming, including the negative effect of stratospheric cooling. And so, Fu et al are wrong to exclude it. Now, if you’re admitting that these “brilliant” scientists overlooked something this obvious in formulating their theories, I’m more than happy to go along with the notion that they are incompetent. But even then, you have exactly zero temperature records showing as much tropospheric warming, worldwide, as is predicted by AGW theory. Only one adjustment of one record even shows more warming in the troposphere than at the surface, and that difference is not as large as expected, except in the tropics.

    Oh wait, sorry. I just read Mears et al (2005). They too “adjust” the RSS tropospheric temperatures to something near what AGW theory says they should be. They “used 5 years of hourly output from a climate model” to adjust the tropospheric temperatures. So, Mears uses MADE UP data, from a model, rather than actual, real-world, empirical data, to adjust the satellite record. And the made-up data he’s using is from a model that assumes anthropogenic causes for global warming (Mears didn’t specify which model he chose, but they all assume anthropogenic forcings much larger than natural forcings). Is it really any big surprise that the assumption of anthropogenic global warming leads to a conclusion of anthropogenic global warming? That’s circular reasoning.

    Regards.
    Trevor

  1071. Kweenie Says:

    “sod

    If tamino is wrong will your head explode”

    You mean like Slim Whitman’s music?

  1072. Bart Says:

    Trevor, Marco:

    This thread is about surface temperature datasets and their (statistical) interpretation. (Adjustments to) satellite measurements and discussions about what is worse, global warming or mitigation, are for the open thread. Thanks.

  1073. JvdLaan Says:

    Bart, is it perhaps better to close this thread and continue on a part two. It takes minutes to refresh and in the meantime my boss could see me reading this thread ;-)

  1074. ge0050 Says:

    This formal question from VS to Eduardo, has it been answered?

    Do you believe that the process generating our instrumental temperature record (i.e. the DGP governing our instrumental record, i.e. the measured temperatures on our 128-year interval), when we account for the various exogenous forcings (volcanic, anthropogenic, you name it, one-shot hits), must be mean reverting?

  1075. ge0050 Says:

    Looking at the geological record going back hundreds of millions of years, it looks like the earth’s mean temperature has an upper and a lower bound, and that the mean temperature fluctuates rapidly between these bounds.

    It looks to me more like a blind drunk bouncing off the walls while trying to stagger down a hallway, as compared to a sober person trying to keep to a centerline drawn down the middle of the hallway.
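
    ge0050’s drunk-in-a-hallway picture corresponds to a reflected (bounded) random walk: locally it wanders like a unit-root process, and only the walls keep it from drifting away. A toy version, all numbers invented:

    ```python
    # A random walk clipped at the "hallway walls". On a 128-point
    # window this is hard to tell apart from an unbounded random walk,
    # even though it is globally bounded.
    import numpy as np

    rng = np.random.default_rng(42)
    lower, upper = -4.0, 4.0                        # the walls
    x = np.zeros(100_000)
    for t in range(1, len(x)):
        x[t] = np.clip(x[t - 1] + rng.normal(0, 0.1), lower, upper)
    ```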


  1076. DLM Says:

    VS says: “As can be seen from my contributions however, I prefer American spelling.”

    And that’s the only reason I find your arguments persuasive:) Now if you could just do something about the smugness. Be gentle with Bart. He is a gentleman and a scholar.

    Did I miss Eduardo’s reply?

  1077. AndreasW Says:

    Grassroot effort

    If you search for papers with “unit roots” and cointegration in climate science, you find some papers that discuss the issue, not just B&R.

    But if you look at a guy like Phil Jones you get another picture. For fun I browsed some of his papers that could be found online, and I found a lot of “statistically significant temperature trends” and very little “unit root” and “stochastic trends”.

    So either you are a “unit root groupie” or a “deterministic trend groupie”.

    It would be interesting to see if any of the big players on the team (Mann, Jones, Schmidt, Hansen, Briffa) and their disciples ever mention unit roots or cointegration in any of their research.

  1078. Pofarmer Says:

    ENSO, and PDO and so on would exist even if the sun was completely constant, and we had no volcanoes, and the rotation of the Earth were constant, etc, etc..

    Uhm, how do we know that?

    It’s these kind of assumptions that drive me crazy with the AGW crowd.

  1079. eduardo Says:

    Dear VS,

    ‘I’m simply asking you whether you believe that the process generating our instrumental temperature record (i.e. the DGP governing our instrumental record, i.e. the measured temperatures on our 128 year interval), when we account for the various exogenous forcings (volcanic, anthropogenic, you name it, one shot hits), must be mean reverting.’

    I think your question is not well posed. You said that the DGP is not the underlying process of which the data are a realization, so I am now confused: the DGP cannot be conditioned on something external to the data, e.g. forcings.

    But I will try to respond to your question. No, it must not be mean reverting. The temperature of the Earth will tend to move towards equilibrium with the forcings, so that the same amount of energy enters and leaves (forget for the moment possible multiple equilibria). If the external forcings are changing with time (for instance, if the solar output suddenly changes to a higher value in the year 1940), the temperatures will not return to their previous mean. Their mean value will tend to rise until the same energy that enters the system is radiated away.
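
    Eduardo’s description is essentially the zero-dimensional energy balance model, C dT/dt = F(t) − λT. A discretised sketch with a step in forcing at “1940”; the heat capacity and feedback values are round illustrative numbers, not estimates:

    ```python
    # Zero-dimensional energy balance: after a step in forcing, T
    # relaxes to a NEW equilibrium F/lam rather than reverting to the
    # old mean. C and lam are illustrative round numbers.
    import numpy as np

    C = 8.0          # effective heat capacity, W yr m^-2 K^-1
    lam = 1.2        # feedback parameter, W m^-2 K^-1
    years = np.arange(1900, 2000)
    F = np.where(years >= 1940, 1.0, 0.0)           # 1 W/m^2 step in 1940
    T = np.zeros(len(years))
    for i in range(1, len(years)):
        T[i] = T[i - 1] + (F[i] - lam * T[i - 1]) / C   # dt = 1 yr
    # T rises toward F/lam (about 0.83 K) and stays there.
    ```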

  1080. eduardo Says:

    Dear Steve,

    ‘If we say that the instrument temperature record is non-stationary specifically because of GHG forcing (which it seems to me to be what eduardo is saying), then I ask how we know that is correct without being able to see the temperature record in the absence of GHG forcing.’

    The argument is that CO2 is the only *known* external forcing that can explain the observed rise in temperatures. Perhaps other forcings will be known in the future, but right now it is the only one. But this is another type of debate. One could argue that solar output may be responsible for part of it, and so on. I have not addressed this question because I think it is not the question of this thread, and we do not need to muddle the discussion further. We are not discussing models; we are discussing unit roots.

    ‘The issue is not if there is a GHG forcing (I do not think anybody commenting here doubts that there is). The issue is uncertainty. Please explain how you would assign uncertainty limits to the effect of GHG warming in light of known, but poorly defined pseudo-periodic temperature variation and in light of possible longer term natural pseudo-periodic temperature variation. How do you do this based only on the instrument temperature record, which is not long enough to provide information about longer term variations?’

    The issue is whether the global temperature rise is externally driven or is the result of internal variations, and what the unit root test can tell us about these two questions. This is what I would like to know.

  1081. eduardo Says:

    Dear HAS,

    ‘You are missing the point that the data is saying something beyond the nature of the process. It is also saying that you don’t need to assume an external GHG impact to explain the late 20th century global temp series.’

    I do not think this is a logical argument. It is circular: you observe a non-stationary process, you assume a non-stationary process, and you confirm your observation. By the same token, I could define the following process: instead of unit root, my process is denoted “temperatures rise”. I apply my test and I can confirm that my process describes the observations. Therefore, I don’t need to assume anything else. What is wrong in my argument, or what makes my argument different from the ‘unit root’ argument?

  1082. eduardo Says:

    Dear Willis

    ‘Now, let’s consider a car with “cruise control”. This adjusts the gas to maintain the speed of 60 miles per hour. If it hits a rock and slows down, it gives the engine a bit more gas until the car reaches 60 mph again, then cuts back on the gas to maintain the speed.’

    This mechanism is not possible in the Earth system. The essential source of energy is the output of the sun, and the sun is not reacting to the Earth’s climate.

    I think you are referring to the sign of the total feedbacks in the climate system, which must be negative. Otherwise we would not be here.

  1083. eduardo Says:

    Tim,
    Probably I was not clear enough with my example. I meant, for instance, daily values from January 1950 to May 1950.

  1084. John Whitman Says:

    I find a unity in the application of physics to climate and the application of statistical (econometric) analysis to the surface temp and atmospheric CO2 time series data led by VS (and his aides). Likewise for solar and other time series data.

    The actual surface temp and CO2 time series (and others, such as solar, etc.) were intended by scientists to be measurements consistent with physics. The instruments were developed by scientists for the purpose of getting physical information about our atmosphere, to explain it and for diverse other uses. The data represent the output of physics. Any reference to “physical” in this context relates to physics.

    Statistical (and econometric) science studies data. Time series data is included in the scope of the science of statistics. It is a science that is widely used by virtually all other sciences; modern science cannot dispense with it. The use of professional statisticians as a collaborative resource for physicists and climate scientists seems consistent with the structure of science. Collaboration across disciplines in science has wide precedent and is common. Climate science may not have widely taken advantage of collaboration with professional econometricians, although there are several papers that have done so.

    Econometricians (statisticians focused on economics) have developed processes/methodologies that have been shown in some climate related papers to meet the needs of climate scientists (and their associated physicists).

    The results of some econometric statistical analyses applied to temp and CO2 time series, performed with established processes/methodologies, have shown some physical possibilities that may or may not be inconsistent with some previous physical understandings of our climate system. The job of the science of physics, in collaboration with statisticians (econometricians), is to look into the possibilities and advance our understanding. Parts of physics and statistics change, but physics and statistics, as sciences per se, are not refuted by potential new information. Physics and statistics, as processes for improving knowledge, must always change.

    Collaboration in the evaluation of the statistical results by VS (and others) and the evaluation of the potential physical possibilities by physicists represents an opportunity to add to our understanding of our atmospheric processes. I cannot find any reason to think it is a one-or-the-other type situation. It is two branches of science collaborating, with all of science benefiting.

    I find unity of purpose and precedent.

    VS (and aides) and Bart and you physicists, have fun. This is a good process you have going here.

    John

  1085. HAS Says:

    Hi eduardo

    I said:

    “You are missing the point that the data is saying something beyond the nature of the process. It is also saying that you don’t need to assume an external GHG impact to explain the late 20th century global temp series.”

    You replied:

    “I do not think this is a logical argument. It is circular: you observe a non-stationary process, you assume a non-stationary process and you confirm your observation.”

    Apologies for my brevity but I was referring here to the experimental evidence where VS looked at the data up to mid 20th Cent and then showed that the remaining series was not inconsistent with a model derived only from that i.e. as an empirical observation “you don’t need to assume an external GHG impact to explain the late 20th century global temp series”. (You will note however on a number of occasions I have said that what VS has done with the temp series is a pretty limited experiment and the dataset itself is limited, so it may not be supported with further investigation).

    But the heart of the issue I think sits in the balance of your comment:

    “By the same token, I could define the following process: instead of unit root, my process is denoted ‘temperatures rise’. I apply my test and I can confirm that my process describes the observations. Therefore, I dont need to assume anything else. What is wrong in my argument or what makes my argument different from the ‘unit root’ argument.”

    In other words you are asking what makes the “unit root” model superior to the “temperature rise” model. The simple answer is that it fits the observed data better.

    I would have thought that on this basis it was completely uncontroversial to accept the unit root model as superior (without denying that it is quite appropriate to have a squabble about if it really does fit the data better, or how robust this result is – but this doesn’t seem to be your point). If I look at your paper “How unusual is the recent series of warm years?” all through it you are choosing to use one model over potential alternative models, presumably on precisely this basis.

    I do think we are over complicating this to a degree. I would like to better understand the points of disagreement. Therefore I would really appreciate it if you came back to the questions I asked earlier and shared your views on them. Do you think we can agree:

    1 The GISS data shows that we can with confidence say it is I(1) with unit root.
    2 This has an important implication – be careful about the use of statistical techniques that don’t cope with this.
    3 Using this insight into the GISS data we also find that we cannot statistically distinguish the temp series from the period of GHG increases from the period prior to it.
    4 This is a very limited experiment with a very limited dataset so we should do more, taking into account point 2 and using 1. to help form our hypotheses.
    5 Empirical results that don’t generate data consistent with 1. should be treated with care, until 1. is modified, disproven or found to just be a limited result.

    VS is right we can’t move beyond 1. until the point is understood and agreed (and some are disagreeing with it so maybe an additional point about wider validation of point 1 is required).

  1086. eduardo Says:

    Dear HAS,

    I do not agree in several points:

    -‘In other words you are asking what makes the “unit root” model superior to the “temperature rise” model. The simple answer is that it fits the observed data better’

    In which sense does the unit root model fit the data better? Actually, the unit root model does not fit the data at all: it cannot ‘predict’ an increase, it just says that the increase is within some broad bounds. Going back to 1935, the time when the unit root model was fitted, the model could not have predicted that the temperatures afterwards would keep rising most of the time. The observations are compatible with the model because the model has very broad ‘compatibility intervals’.
    My model ‘temperatures rise’, or even better ‘temperatures rise when the external forcing (solar output, greenhouse gas) increases’, is always better than the unit root model. So can you explain in which sense the ‘unit root model’ is better?

    -‘Using this insight into the GISS data we also find that we cannot statistically distinguish the temp series from the period of GHG increases from the period prior to it.’
    My claim is that the unit root model cannot distinguish the period before and now (if it can’t), because apparently it cannot distinguish anything. Can the model distinguish between forced deterministic variations and stochastic unpredictable variations? This has been my question all along, and I don’t get an answer. Once again, if I give you the daily data of the global temperature from 1 January 1950 to 31 May 1950, can the unit root model predict how temperature would evolve after that? No, the model just gives a very broad channel of possible paths, some of them decreasing and some of them increasing. Is that predictive power? I don’t think so. A model taking into account the orbit of the Earth can predict the temperatures much, much better.

  1087. sod Says:

    1 The GISS data shows that we can with confidence say it is I(1) with unit root.

    this is disputed and will be disputed in the future. As I posted above, tamino disputes one part of it, De Witt another one.

    here is a very nice paper on the topic, that was posted by jr on the blackboard:

    Click to access wp495.pdf

  1088. cohenite Says:

    Very good sod; from your link:

    “Irrespective of the criterion used to judge the break point, and for all three of the data series, the most remarkable break point in the trend stationary models is at 1976. In the unit root models, for the T3GL and NCDC the break point again is 1976.”

    Are you now ready to concede David Stockwell’s point?

    Click to access 0907.1650v3.pdf

  1089. Willis Eschenbach Says:

    eduardo Says:
    March 31, 2010 at 18:03

    Dear Willis

    ‘Now, let’s consider a car with “cruise control”. This adjusts the gas to maintain the speed of 60 miles per hour. If it hits a rock and slows down, it gives the engine a bit more gas until the car reaches 60 mph again, then cuts back on the gas to maintain the speed.’

    This mechanisms is not possible in the Earth system. The essential source of energy is the output of the sun and the sun is not reacting to the Earths climate.

    I think you are referring to the sign of the total feedbacks in the climate system, which must be negative. Otherwise we would not be here.

    The essential source of energy is the sun itself, you are right about that. But the relevant energy source is the amount of solar energy actually entering the climate system.

    And while you are correct to say that the sun is not reacting to the climate, the amount of solar energy reaching the earth is definitely reacting to the climate. The amount of energy received is controlled mainly by the clouds. I have spelled out one such governing mechanism here.

    So I repeat my question.

    However, the mechanisms in question are very, very different. In one, there is an active governor at work. In the other, it is merely staying at the point where input = resistance. The implications of these two mechanisms are also very different.

    Which type of “mean-reverting” behaviour are you referring to here? One? The other? Both?

  1090. eduardo Says:

    Dear VS,

    I will try to formalize my eternal question, with the hope that this time you can understand what I mean, and perhaps I can understand what you mean.

    The model, a bit simplified as explained later, that I think represents the global mean temperatures is the following (1):

    T(t) = F(ghg(t), sun(t), aerosols(t), volcanoes(t)) + stationary_process(t)

    The function F is totally deterministic, i.e. there is no stochastic component at all in it. The variables ghg, sun, volcanoes, aerosols are also deterministic; they can be measured and do not depend on T. The measured curve F(t) in the 20th century is not a straight line: it is undulating upwards, but it is not monotonic and has some sudden spikes due to the volcanoes.

    [ The model is simplified because one should allow for some lags due to the thermal inertia and heat diffusion, so more correctly it would be a convolution of F(t) and some filter. But let us forget this for the moment ]

    You can consider periods when F(t) = F_0 is constant for a long time. Then T(t) is a stationary process with constant mean (= F_0) and constant variance. If F(t) is not constant, the mean of T is drifting with F(t) and the observed T(t) consists of a stationary process with a non-linearly drifting mean. F(t) is the climate and the stationary_process(t) is the weather. I think this is the answer to your question (?).

    A unit root model is (2):

    T(t) = T(t-1) + stationary_process(t)

    My question is: if I have one observed realization T_obs(t) of process (1), obtained with the measured F(t), and I apply a unit root test, would the test indicate to me (a) that T(t) contains a unit root, or (b) would the test 95% of the time correctly reject the hypothesis of a unit root?

    That’s what I wanted to know. I don’t know the answer. If the answer is (a), the test is useless for me in this context (perhaps useful in others). If the answer is (b), then it is indeed relevant.

    Note that the model :

    T(t) = a + b*t + stationary_process(t)
    with a, b constants, is not considered by anyone, although people insist on calculating a linear trend for certain periods and seeing if it is changing. Perhaps this is what is confusing you. (?)
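
    Eduardo’s (a)-versus-(b) question can be checked by brute force. A rough sketch (my construction in Python/statsmodels, with a made-up F(t) of the undulating kind described above): simulate process (1) as deterministic F(t) plus stationary AR(1) weather, and count how often the ADF test fails to reject a unit root on such trend-stationary data:

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(42)
    n, reps, fail_to_reject = 128, 500, 0
    time = np.linspace(0, 1, n)
    F = 0.8 * time**2 + 0.1 * np.sin(6 * np.pi * time)   # undulating upwards, non-monotonic

    for _ in range(reps):
        eps = np.zeros(n)
        for i in range(1, n):                            # stationary AR(1) "weather"
            eps[i] = 0.5 * eps[i - 1] + rng.normal(0.0, 0.1)
        T_obs = F + eps                                  # one realization of process (1)
        if adfuller(T_obs, regression="ct")[1] > 0.05:   # ADF with constant + linear trend
            fail_to_reject += 1

    print(f"ADF failed to reject a unit root in {fail_to_reject}/{reps} runs")

    If that count is large, the answer is eduardo’s case (a) for this particular F(t); if it stays near the nominal 5%, it is case (b). The point of the sketch is only that the question is answerable by simulation once F(t) is written down.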

  1091. Ludovico opina que no se puede dudar del fin del mundo climático « PlazaMoyua.org Says:

    […] Statistics and temperatures. […]

  1092. Willis Eschenbach Says:

    sod Says:
    March 31, 2010 at 21:53

    1 The GISS data shows that we can with confidence say it is I(1) with unit root.

    this is disputed and will be disputed in the future. As I posted above, tamino disputes one part of it, De Witt another one.

    sod, this is getting very boring. You have said this over and over, ad nauseam. Problem is, VS has shown exactly where tamino and De Witt are wrong.

    So please, either show where VS’s disproof of tamino’s and De Witt’s assertions is incorrect, or shut up about it. We know you think tamino is right, but so what? Until you produce some math and some arguments, VS has shown that they are wrong. I note that as far as I know, neither tamino nor De Witt has defended their claims against VS’s math.

    Repeated assertion merely makes you look foolish, and clutters up a fascinating thread. Put up some math or shut up.

  1093. eduardo Says:

    Dear Willis,

    but the clouds are not an active reactor, they are passive. I do not see the difference between your two mechanisms. In the case of the Earth both are feedbacks:

    in the first case, if for some reason the temperature of the Earth drops, it emits less radiation and the temperature tends to recover, because the energy that it is radiating is less than the energy it is absorbing. This is the black-body feedback.

    In the second case, if the temperature drops for some reason and cloud cover diminishes, more radiation is absorbed and the temperature recovers. Both are qualitatively the same type of feedback; only the physical mechanism is different.

  1094. HAS Says:

    Hi Eduardo

    Your response to me is useful. I need to think about your subsequent response to VS because that is dealing with a slightly different issue, so in what follows I’m dealing with your response to me.

    If I summarise you correctly you are saying that an important criterion for you in model selection is predictive power. Correct?

    Can I make a few observations that might help get through this.

    If for a moment we put aside predictive power, can I pose you a question? If you have a choice of two models (say, linear regression and unit root), and your observations (data) tells you that linear regression is inconsistent with the data, whereas unit root is consistent, as an experimentalist where does that take you?

    Without putting words into your mouth I’m sure you would start digging into the process to try and establish why you had got this result. Because this is an empirical result, and for that reason a call to theory doesn’t magic the result away. We need to do more experiments and observations to find out what is going on.

    Turning to predictive power, can I suggest that you can never predict more information from a set of information than is already in it (it’s a tautology but an often forgotten point). This analysis of the GISS dataset is a particularly limited experiment, but the model does produce a probability distribution and confidence limits around what might be observed in the future. I think you are mistaking a broad range of possible futures for no predictive power.

    Am I right in saying that you are disappointed with the range of these results? I’d say don’t be; the hard reality is that this is probably all this dataset alone can tell you. To do better we need to add more information. This is where I suggest it would be profitable to put effort in.

    I should add it would be quite wrong to assume a different process that didn’t fit with the observations (e.g. linear regression) simply because it gave better predictions. I’m sure you aren’t suggesting this, but this mistake was the genesis of this whole thread.

    Finally turning to your question “Can the model distinguish between forced deterministic variations and stochastic unpredictable variations?” I think there is a subtlety here. If there were statistically detectable deterministic variations in the dataset then the process VS went through in developing the unit root model should have led to a different model. If the data set had been appropriately well behaved the process VS went through might have generated a linear regression model.

  1095. sod Says:

    sod, this is getting very boring. You have said this over and over, ad nauseam. Problem is, VS has shown exactly where tamino and De Witt are wrong.

    So please, either show where VS’s disproof of tamino’s and De Witt’s assertions is incorrect, or shut up about it. We know you think tamino is right, but so what? Until you produce some math and some arguments, VS has shown that they are wrong. I note that as far as I know, neither tamino nor De Witt has defended their claims against VS’s math.

    You did not read those topics well.

    And you did not take a look at the link.

    Click to access wp495.pdf

  1096. John Says:

    I’m now in a position to state that ANY MODEL predicting the next year’s temp from the above data (Bart’s graphs) is a tad iffy! Just a quick count on the graph reveals:-

    1-following year down 38 times
    2-following year up 38 times
    3-following year cont down 25 times
    4-following year cont up 25 times
    5- includes 6/7 patterns

    Above are approx (had trouble seeing the changes) but you should get the idea

    How much would you bet on what happens next?

  1097. VS Says:

    Ah finally an actual argument! Great :)

    I cite from this working paper:

    “The question we are trying to answer though is not about a unit root in the temperature data, it is about a tendency of the data to drift upwards. Hence, the unit root tests by themselves do not answer our question. If we trust the Phillips-Perron test, then we can trust equations (1), (2) and (3), which clearly show a positive and highly significant trend in the temperature data.”

    Yes, if we trust the PP test. Here are the Monte Carlo simulations that say we cannot trust the PP test.
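
    For those who want to check that kind of simulation themselves, a rough sketch of such a Monte Carlo (my reconstruction, not VS’s EViews run; the PhillipsPerron class is from the Python arch package): generate data whose true DGP has a unit root with negative MA terms, and count false rejections at the 5% level.

    import numpy as np
    from arch.unitroot import PhillipsPerron

    rng = np.random.default_rng(7)
    reps, n, false_rejections = 200, 128, 0

    for _ in range(reps):
        e = rng.normal(0.0, 0.1, n + 2)
        d = e[2:] - 0.51 * e[1:-1] - 0.20 * e[:-2]   # MA(2) differences with negative roots
        y = np.cumsum(d)                             # integrate: the true DGP has a unit root
        if PhillipsPerron(y, trend="ct").pvalue < 0.05:
            false_rejections += 1

    print(f"PP rejected a true unit root in {false_rejections}/{reps} runs (nominal size: 5%)")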

    :)

    Cheers, VS

    PS. Eduardo, thank you for your reply, I will get back to you ASAP :)

  1098. Kweenie Says:

    “I’m now in a position to state that ANY MODEL predicting what the next years temp on the above data (Barts Graphs) is a tad iffy! ”

    All Models Are Wrong, But Some Are Useful. (George E.P. Box).

  1099. cohenite Says:

    Eduardo says clouds are “passive” and do not force; is this correct? When Monckton was in Australia he based his talks on climate sensitivity around the Pinker et al paper; Pinker clarified Monckton’s definition of “cloud forcing” thus:

    Click to access debate_australia_tim_lambert.pdf

    It is well known that:

    (surface) SW CRF at BOA ~ -0.8 – -1.0 W/m^2/% cloud

    and

    surface LW CRF at BOA ~+0.6 W/m^2/% cloud cover

    Therefore net CRF at BOA ~ -0.2 – -0.4 W/m^2/% cloud cover.

    The Pinker paper shows that cloud forcing can occur regardless of a decrease in solar activity; given that, clouds are not a passive factor.

  1100. Igor Samoylenko Says:

    VS,

    Why didn’t you provide the complete quote from the paper by Breusch and Vahid (2008)?

    “The question we are trying to answer though is not about a unit root in the temperature data, it is about a tendency of the data to drift upwards. Hence, the unit root tests by themselves do not answer our question. If we trust the Phillips-Perron test, then we can trust equations (1), (2) and (3), which clearly show a positive and highly significant trend in the temperature data. However, if we dismiss the result of the Phillips-Perron test because of its size distortion in finite samples and trust the result of the augmented Dickey-Fuller test, the presence of a unit root does not exclude the possibility that there may be a deterministic trend in the data as well. So we need to do further analysis.”

    Bart raised this point about the deterministic trend several times which you dismissed out of hand.

  1101. Steve Fitzpatrick Says:

    Eduardo,
    “The issue is whether the global temperature rise is externally driven or it is the result of internal variations, and what the unit root test can tell us about this two questions.”

    It is almost certainly a mixture of internal variation and external forcing. A unit root can only tell us the temperature is not stationary during the instrument period. As a result, the measured change in temperature during that period remains (just barely) consistent with the null hypothesis of random variation and no CO2 effect. I do not suggest that the GHG effect is really zero (it can’t be!), but a non-stationary process that is potentially contaminated with an unknown level of internal variation, and with that internal variation having an unknown temporal spectrum, does mean that any calculated effect of GHG forcing is bound to have large uncertainty, unless there is some way to quantify the expected range of contribution from internal variation during the instrument temperature record.

    If the best estimates of Holocene variation show not more than a 0.2 C “normal variation” and that variation takes place on relatively short (<<100 years) time scales, then your attribution of warming to GHG forcing may turn out to be accurate. But if estimates of Holocene temperature variation suggest larger 'natural variation', and that variation has characteristic times comparable to or longer than the length of the instrument record, then I think it is reasonable to conclude that any attribution to GHG forcing is at best highly uncertain.

  1102. VS Says:

    Hmm, missed something there.

    There is the issue of whether we should model the process as a moving average (MA) based process, rather than an autoregressive (AR) one. In this paper they use an MA based specification.

    I list, for the GISS record, both my stochastic trend specification, and theirs.

    ———————

    ARIMA(3,1,0) no constant

    so: D(GISSTEMP_all) = AR1*D(GISSTEMP_all(t-1)) + AR2*D(GISSTEMP_all(t-2)) + AR3*D(GISSTEMP_all(t-3)) + error(t)
    with: error(t) ~ White Noise

    Coef: Estimate (p-value)

    AR1: -0.438867 (0.0000)
    AR2: -0.368938 (0.0001)
    AR3: -0.308871 (0.0006)

    All three coefficients significant at the 1% sig level, R2=0.23

    ———————

    ARIMA(0,1,2) with constant

    so: D(GISSTEMP_all) = Constant + error(t) + MA1*error(t-1) + MA2*error(t-2)
    with: error(t) ~ White Noise

    Coef: Estimate (p-value)

    Constant: 0.006208 (0.0161)
    MA1: -0.510149 (0.0000)
    MA2: -0.200860 (0.0258)

    So, I replicated the specification on the GISS record, and found the Constant (i.e. drift) significant at 5%, MA1 at 1% and MA2 at 5%, with an R2=0.23 (R2 same as my specification).

    Other than this, the system at first glance seems well behaved (ran a few preliminary tests).

    ———————

    I think the ARIMA(3,1,0) ‘looks’ better, that’s why I picked it, and I prefer how the AR model is estimated, but that’s just a matter of opinion/taste (for now). Anyhow, I’ll take a closer look at it, and post the diagnostics and all others are encouraged to do the same. We can then decide which one we prefer.
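
    For anyone wanting to reproduce the two fits outside EViews, a sketch in Python/statsmodels; the placeholder series below is simulated and should be replaced by the actual annual GISS anomalies:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    giss = np.cumsum(rng.normal(0.006, 0.1, 125))   # placeholder random walk; use the real GISS record

    d_giss = np.diff(giss)                          # D(GISSTEMP_all)

    ar_spec = ARIMA(d_giss, order=(3, 0, 0), trend="n").fit()   # the ARIMA(3,1,0), no constant
    ma_spec = ARIMA(d_giss, order=(0, 0, 2), trend="c").fit()   # the ARIMA(0,1,2) with drift

    print(ar_spec.summary())
    print(ma_spec.summary())
    print("AIC:", round(ar_spec.aic, 1), "(AR) vs", round(ma_spec.aic, 1), "(MA)")

    Information criteria give one (imperfect) way to adjudicate between the two specifications, since they penalize the extra drift parameter.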

    We could use some physicists here: think about what’s more appropriate considering ‘energy balance’ and such, an AR-process or an MA-process based stochastic trend? Think about the impact of temperature ‘shocks’ on future time periods, as those are also contained in error(t). I expect physics has something to say about this.

    Note that the ARIMA(0,1,2) specification indeed implies a positive drift parameter, or putting it loosely a ‘positive trend’ or ‘expected rise’, of 0.06 [+/- 0.005, 95% conf.] degrees Centigrade per decade (per 10 years). Also note the difference with the ‘after-the-structural-break-in-the-deterministic-specification-occurred’ short-term trend usually estimated. I’ll take the calculations from Bart’s blog entry:

    “(0.17 +/- 0.03 degrees per decade)”

    So [0.055,0.065] versus [0.14,0.20] per decade. That’s quite the difference in trend, no? Well, that’s also what the authors conclude in light of their estimation results:

    “Comparing these results with those in equations (1), (2) and (3), we see that the evidence for a positive linear trend is much weaker in the presence of a unit root.”

    Note: I compared to the last-30-year deterministic trend; they compare their estimates with the trend over the whole sample.

    Naturally, we can (should) discuss all of this.

    =====================
    Note that this is a DIFFERENT discussion than the discussion about the PRESENCE OF A UNIT ROOT, as the ARIMA(0,1,2) specification also describes a stochastic trend. The ambiguity on the unit root question in the paper discussed stems from the use of the PP test, which I believe my simulations disqualify in light of my stochastic trend specification (see previous comment). I.e. I still need those references!
    =====================

    Best, VS

    PS. I don’t want to jump to conclusions, but the authors seem to have used the only test which gives ambiguous results on the unit root (PP, while the ADF is ‘default’), and they used a specific stochastic trend specification (not per se better, if you compare individual coefficient p-values over the two specifications) which contains a positive drift parameter.

    It looks at first glance, to me, as if they were trying to avoid a minefield. And when I say minefield, I’m talking about the topic we’re discussing right now.

    PPS. Sod, I’m willing to listen to your arguments, but first you need to quit acting like a jerk. You are fully entitled to be skeptical of my results (I would perhaps even say, encouraged), but all this name-calling and various strawmen are inappropriate. Also please stop neglecting all the statistical evidence I posted here, when formulating your arguments.

    PPPS. Hi Igor Samoylenko, I dismissed that part, because I performed that ‘further analysis’ the authors are talking about, in this very thread. See the references to all simulations and various test set ups in section 2 of this comment. Also, please note the difference between a deterministic trend, and a deterministic component (i.e. drift) in the stochastic trend. I answered Bart about that here.

  1103. VS se dispone a zarandear la base de la ciencia climática actual « PlazaMoyua.org Says:

    […] La discusión de estadísticas y temperaturas. […]

  1104. VS Says:

    I wrote in my previous post:

    “We could use some physicists here: think about what’s more appropriate considering ‘energy balance’ and such, an AR-process or an MA-process based stochastic trend? Think about the impact of temperature ’shocks’ on future time periods, as those are also contained in error(t). I expect physics has something to say about this.”

    Here’s some help to do that.

    I let EViews forecast the so-called impulse response function for the two estimated specifications (i.e. ARIMA(3,1,0) and ARIMA(0,1,2)).

    Here’s the scenario. A single exogenous one standard-deviation (1x s.d.(error)) shock is applied at time t, and the forecasted impact of that shock over t+1,..,t+24 is studied.

    Note that the top graph plots the effect on the error in future periods and the bottom one the accumulated effect. Also note that the effect on future errors (top panel) does not imply that a ‘shock’ in period t will ’cause’ another ‘little shock’ in a future period.

    Here are the impulse responses for the two specifications. I believe that the bottom one (i.e. accumulated effects of an exogenous shock) is more relevant for the question I posed.

    ———————-

    Impulse responses of ARIMA(3,1,0) stochastic trend specification, GISS record, Figure 9
    (my specification)

    ———————-

    Impulse responses of ARIMA(0,1,2) stochastic trend specification, GISS record, Figure 10
    (specification used in Breusch and Vahid (2008))

    ———————-

    From my ‘layman’ climate-science perspective, I would say that my naive ARIMA(3,1,0) looks more like something the global mean temperature trend would ‘do’ after a shock in one period than their ARIMA(0,1,2) specification.
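
    A sketch of how these impulse responses can be computed outside EViews, plugging in the point estimates posted above (so the output is only as good as those estimates); the cumulative sum of the differenced-series response is the permanent effect of a one-off shock on the temperature level:

    import numpy as np
    from statsmodels.tsa.arima_process import arma_impulse_response

    # ARIMA(3,1,0): differences follow an AR(3); lag polynomial is 1 - phi1*L - phi2*L^2 - phi3*L^3
    irf_ar = arma_impulse_response(ar=[1, 0.438867, 0.368938, 0.308871], ma=[1], leads=25)

    # ARIMA(0,1,2): differences follow an MA(2)
    irf_ma = arma_impulse_response(ar=[1], ma=[1, -0.510149, -0.200860], leads=25)

    print("accumulated level response after 24 periods, AR spec:", round(np.cumsum(irf_ar)[-1], 3))
    print("accumulated level response after 24 periods, MA spec:", round(np.cumsum(irf_ma)[-1], 3))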

    ?

    VS

  1105. VS Says:

    haha… make ‘layman’, layman :)

  1106. Willis Eschenbach Says:

    eduardo Says:
    March 31, 2010 at 22:36

    Dear Willis,

    but the clouds are not an active reactor, they are passive. I do not see the difference between your two mechanisms. In the case of the Earth both are feedbacks:

    in the first case, if for some reason the temperature of the Earth drops, it emits less radiation and the temperature tend to recover because the energy that is radiating is less that the energy it is absorbing. This is the black-body feedback.

    In the second case, if the temperature drops for some reason and cloud cover diminishes, more radiation is absorbed and the temperature recovers. Both are qualitatively the same type of feedback, only the physical mechanism is different

    You are still not understanding the difference. Suppose the sun gets a bit brighter. In the first case, the temperature rises until the balance is restored.

    In the second case, on the other hand, the clouds form earlier in the day, effectively turning down the incoming solar, and the temperature remains about the same.

    How on earth can those be seen as being “the same kind of feedback”? One is a simple negative feedback. The other is an active governing mechanism. If you still can’t see the difference, please ask someone who is an engineer about the difference between negative feedback and a governor. A governor uses negative feedback, it is true, but it also uses positive feedback. In addition, the governor tends to keep some variable (temperature, RPM, etc.) constant, while a negative feedback does nothing of the sort.

  1107. Pofarmer Says:

    but the clouds are not an active reactor, they are passive.

    Another one of those things that is simply just assumed that drives me crazy.

  1108. Alan Says:

    Willis, I find this comment puzzling …

    A governor uses negative feedback, it is true, but it also uses positive feedback. In addition, the governor tends to keep some variable (temperature, RPM, etc.) constant, while a negative feedback does nothing of the sort.

    Well, in a previous life I was a practising mechanical engineer designing electro-hydraulic machine tools. On one project, the control challenge was to assure that a beam, powered by a hydraulic ram at each end, approached a die at constant attitude. Which meant controlling the position of the end of each ram.

    The control solution is trivial and utilises both positive and negative feedback. We measured the position of one end relative to the other. If the controlled end ‘led’ the other (because its speed was greater) we sent a negative signal to the hydraulic actuator which reduced the fluid flow into the ram, reducing its speed … the other end ‘caught up’.

    The opposite (positive feedback) occurred when the controlled end lagged the other. Of course, when the position difference between the ends was ‘equal’ to our set point, the feedback signal to the actuator was zero.

    So this is negative feedback used to keep a variable constant.

    I don’t understand what you are talking about.

  1109. Willis Eschenbach Says:

    Alan Says:
    April 1, 2010 at 06:35

    Willis, I find this comment puzzling …

    A governor uses negative feedback, it is true, but it also uses positive feedback. In addition, the governor tends to keep some variable (temperature, RPM, etc.) constant, while a negative feedback does nothing of the sort.

    Well, in a previous life I was a practising mechanical engineer designing electro-hydraulic machine tools. On one project, the control challenge was to assure that a beam, powered by a hydraulic ram at each end, approached a die at constant attitude. Which meant controlling the position of the end of each ram.

    The control solution is trivial and utilises both positive and negative feedback. We measured the position of one end relative to the other. If the controlled end ‘led’ the other (because its speed was greater) we sent a negative signal to the hydraulic actuator which reduced the fluid flow into the ram, reducing its speed … the other end ‘caught up’.

    The opposite (positive feedback) occurred when the controlled end lagged the other. Of course, when the position difference between the ends was ‘equal’ to our set point, the feedback signal to the actuator was zero.

    So this is negative feedback used to keep a variable constant.

    I don’t understand what you are talking about.

    Sorry for the confusion, Alan. What I meant by “simple negative feedback” is like wind resistance on a car. It is proportional in some sense to the speed of the car. But all it can do is slow the car down. It is not controlled by some kind of what you call a “control solution”. It doesn’t include any positive feedback. It gets larger when speed increases, and smaller when it decreases.

    A governor, on the other hand, applies negative feedback or positive feedback as needed, to keep some variable (temperature, RPM, position of the end of a beam) within limits. Simple negative feedback can’t do that.
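
    A toy simulation of the distinction (mine, with made-up numbers, not Willis’s actual mechanism): pure drag opposes motion in one direction only, while a governor pushes the throttle up or down to hold a set point, so only the governed car recovers from a persistent headwind:

    def simulate(governor: bool, steps: int = 400) -> float:
        v, throttle = 60.0, 9.0                  # 9.0 / 0.15 = 60, so drag alone balances at 60
        for t in range(steps):
            headwind = 5.0 if t > 200 else 0.0   # persistent disturbance switched on halfway
            if governor:
                throttle += 0.05 * (60.0 - v)    # integral action: push up OR down toward 60
            v += throttle - 0.15 * v - headwind  # 0.15*v is the simple negative feedback (drag)
        return v

    print("final speed, drag only    :", round(simulate(governor=False), 1))
    print("final speed, with governor:", round(simulate(governor=True), 1))

    The drag-only car settles at a new, lower speed after the headwind arrives; the governed car returns to 60.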

    If that’s not clear, ask again.

    w.

  1110. Paul_K Says:

    VS,

    I suspect that the main cause of the difference in conclusions (about the confidence interval on forecasts) arises from the inclusion/exclusion of drift.

    You ably demonstrated yourself that your “rejection” of a constant in the difference term in your model specification was based on a test with low statistical power. If you were to overturn that test and refit your model (same specification except for the addition of a constant in the difference term) I strongly suspect that you would see a conclusion similar to Breusch.

    What would this tell us? Well, I need to think about this a bit.
    Paul

  1111. sod Says:

    PS. I don’t want to jump to conclusions, but the authors seem to have used the only test which gives amibiguous results on the unit root (PP, while the ADF is ‘default’), and they used a specific stochastic trend specification (not per se better, if you compare individual coefficient p-values over the two specifications) which contains a positive drift parameter.

    It is funny to see how blog replies posted by VS days ago destroy printed paper. VS must be the dominant force in his field of expertise. (At least as long as people do not bother to reply to the stuff he writes.)

    Apart from the cherry-picked quote he provided above, I noted that the forecast intervals look very different in that paper (page 12):

    Click to access wp495.pdf

    The intervals start around the year when the forecast starts (1958), not in 1880 as they do in the VS version. (I think someone already commented on this above, and I must have missed the explanation for it.)

    We are really scratching the upper end of the useless forecast that VS provided. And this March was really, really hot globally (a hint from a daily satellite measurement).

    http://discover.itsc.uah.edu/amsutemps/amsutemps.html

  1112. jr Says:

    So VS chose (3,1,0) because he thinks it “looks” about right? I must be mistaken, right? As far as I can recall, Breusch and Vahid (B+V) chose (0,1,2) because that is what the data told them fit. Again, I could be mistaken; I don’t have the paper in front of me right now.

    VS talking about the use of the PP test in B+V is a little misleading, I think. They don’t rely on it to dispute the presence of a unit root. They note its problems explicitly, just as they also note the difficulty that the other statistical methods have in determining the presence of a unit root in short “noisy” time series. VS ascribing motives to them is also a little bit dodgy, I think. There doesn’t seem to be any evidence for that. The paper is fairly clear and transparent as to the process they went through. The short discussion on the definition of trends and what a unit root implies is quite useful for making sense of the discussion, I would say.

    The main thing I take from B+V is that things just aren’t as certain as VS makes out. In fact, even as they avoid invoking physics throughout the paper, part of their conclusion is that, because of the nature of the data and the difficulty the statistical methods have in analysing it, you need to turn to physics to understand what is going on. I am pretty certain that that has been pointed out multiple times by various people on this thread already. As has the need to carefully define the nature of the trend before carrying out tests for unit roots. Something they also note in their discussion.

    As I mentioned at Lucia’s, a google search will bring up papers using these techniques on the global temperature series from the early ’90s. It isn’t something that hasn’t been looked at before. That people are still looking at it 20 or so years on suggests that the question isn’t quite as settled as some would have you believe.

  1113. Alan Says:

    OK Willis … I understand what you are saying … but I think it’s askew.

    “Air resistance” on a car is not a ‘feedback’ … it’s an ‘input force’. Speed is the output and a feedback is a signal, based on the output, which modifies the input.

    Look at it this way … say the car was stationary (brake on!) and there was a prevailing wind of 150 mph. If the driver gunned the car and let off the brake, the force applied by the engine to the wheels would be balanced by the wind force … the car would not move (accelerate from 0 mph).

    You can’t say the ‘negative feedback’ is proportional to speed … the speed is zero.

    ?

  1114. VS Says:

    Hi Paul_K,

    Actually, it was lucia who implied that the power was ‘low’. I think the power was fine. You are suggesting running my forecast with the drift parameter, but then I ask you: what is the whole point of testing for statistical significance, if you are going to proceed to ignore the outcomes of that test?

    Furthermore, I tested the ARIMA(3,1,0) specification on various forms of drift (hadn’t posted test results, I do that now).

    So I tested not only for a linear ‘drift’, but also for ones described by higher order polynomials. Here are the test results. In each case the two models (with and without drift) are compared, and the hypothesis test refers to the H0 that the additional terms (in this case, the various polynomial terms) are redundant.

    ———————

    Redundancy of drift parameter, in ARIMA(3,1,0) specification, Log-likelihood redundancy tests

    ———————

    Coefficients, LL-ratio, p-value, Inference

    C, 2.33, 0.13, redundant
    C + trend, 3.42, 0.18, redundant
    C + trend + trend^2, 3.49, 0.32, redundant
    C + trend + trend^2 + trend^3, 3.65, 0.46, redundant
    etc.

    ———————

    Important note: these are tests performed on the first difference equation (which is stationary), so in order to see what kind of trend this implies for the level series, multiply by ‘t’. So the constant is actually a ‘trend’, and the trend is actually a ‘quadratic trend’, etc.

    I conclude that the ARIMA(3,1,0) specification simply doesn’t require a deterministic drift component in order to fit the GISS record. I think I have provided ample evidence for that.
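
    A sketch of one such redundancy test in Python/statsmodels (again on a simulated placeholder series, so the idea rather than the numbers is the point): fit the AR(3)-in-differences model with and without a constant, and compare twice the log-likelihood gain against a chi-squared(1):

    import numpy as np
    from scipy import stats
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(2)
    d_giss = np.diff(np.cumsum(rng.normal(0.006, 0.1, 125)))   # placeholder; use the real differences

    m0 = ARIMA(d_giss, order=(3, 0, 0), trend="n").fit()       # restricted: no drift
    m1 = ARIMA(d_giss, order=(3, 0, 0), trend="c").fit()       # unrestricted: with drift

    lr = 2 * (m1.llf - m0.llf)
    pval = stats.chi2.sf(lr, df=1)
    print(f"LR = {lr:.2f}, p = {pval:.3f} -> drift redundant at 5%? {pval > 0.05}")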

    ———————

    Also, some (who are back on ignore, for obvious reasons) are claiming that I’m implying that I’m the ‘dominant force in the field’. I have done no such thing.

    I simply performed a very elaborate unit root analysis, something that the authors of this unpublished working paper, admit they are not interested in:

    “The question we are trying to answer though is not about a unit root in the temperature data, it is about a tendency of the data to drift upwards. Hence, the unit root tests by themselves do not answer our question.”

    I also cite their unit root test result summary:

    “The augmented Dickey-Fuller test indicates that a unit root cannot be rejected at any of the usual decision levels (with 3 or 4 augmentation lags). The GLS version of the Dickey-Fuller test agrees. In contrast, the Phillips-Perron test for the same situation rejects the unit root in the presence of a trend (with 3 or 4 lags). The results are the same for all three series. This apparent conflict among unit root tests may be attributed to the severe size distortion of the Phillips-Perron test in the presence of moderate negative MA roots, also discussed in Stock (1994).” (bold added)

    So, they, like me, find that the ADF and DF-GLS point to the presence of a unit root, while the PP test doesn’t. Now, I have also performed the ZA test and the KPSS, which both also point to a unit root. Eduardo’s calculations in Zorita et al (2008) also point to a unit root, and I have performed Monte-Carlo simulations showing that we can disregard the PP test, conditional on my ARIMA(3,1,0) stochastic trend specification. Finally, there is a myriad of authors who conclude that the record has a unit root (viz. references in section 2 of this post).

    The analysis I performed here on the unit root in the GISS record, is far more elaborate than what the authors have done.

    I’m not ‘rewriting history’ here or ‘burning printed paper’ or whatever. I’m just doing a very detailed reproduction of an already established finding (i.e. the presence of a unit root), together with elaborate diagnostics.

    In addition, I’m doing my very best to explain to everybody the formal implications of this particular finding.

    That’s it.

    Cheers, VS

    PS. Jr. I posted the impulse response functions of the two specifications. What do you think about those?

  1115. cohenite Says:

    Gee sod, you still haven’t addressed this from the Breusch paper:

    “Irrespective of the criterion used to judge the break point, and for all three of the data series, the most remarkable break point in the trend stationary models is at 1976. In the unit root models, for the T3GL and NCDC the break point again is 1976.”

    I note too that Breusch finds no evidence of a break in the 1990s, whereas David Stockwell does, and Tsonis and Swanson find a break in 2002. Does that mean you think Breusch is half right, or Stockwell is half right? And if there is no break in 1998-2002 (and both Stockwell and Tsonis are wrong), does that mean natural variability has ceased, or simply been swamped by the AGW trend?

  1116. sod Says:

    Gee sod, you still haven’t adressed this from the Breusch paper:

    I did not comment on it because it is off-topic, so I will keep this reply short: you never cared about the 70s break point. You were always only interested in the 1998 one, which the Breusch paper does NOT support.
    You also used that 1998 break point to support your conclusion about a flat trend since 1998. That you dare to bring this up in this topic is simply bizarre. You are flat out wrong.

    ——————————

    Again: VS is using a “forecast interval” that does not start at the moment when his forecast starts (1935). Everybody else seems to be doing this differently.

    for example this guy:

    Click to access R11_06.pdf

    As he prefers to ignore my points, perhaps some of the other “unit root” specialists hanging around here will be able to explain why his forecast interval is really far away from the data that he uses to forecast?

  1117. VS Says:

    ——————-

    Additional Unit Root Testing

    ——————-

    I bumped into yet another unit root test. This unit root test controls for (hypothetical) additive outliers which might bias our test results if unaccounted for.

    Stata module is available for download here

    Description:

    “dfao is an extension of the dfuller routine in Stata. It performs the D-F unit root test when the data have additive outliers, or temporary one-time shocks. Such outliers give rise to moving average errors with negative coefficients and these in turn result in oversized unit root tests. The method employed here follows Vogelsang (1999). dfao also serves as a replacement for dfuller. Unlike the latter routine, DFAO conducts an automatic sequential t-test to determine the lag length to use in the DF regression (this test can be suppressed). Additionally, dfao calculates response surface critical values using the equations in Cheung and Lai (1995).”

    ——————-

    ADF test for a unit root with additive outliers

    Variable tested : gisstemp_all, Time Period : 1881 to 2004
    Maxlag = 12 chosen by Schwert criterion, # obs = 124
    Deterministic terms : Constant + trend, # outliers = 0

    Sample calculated test statistic: -2.363

    Critical values:
    1%, -4.004
    5%, -3.425
    10%, -3.132

    MacKinnon approximate p-value for Z(t) = 0.4009
    Opt Lag (Ng-Perron sequential-t test) = 3 with RMSE = .0934037
    No outliers detected

    Inference: Presence of Unit Root again not rejected. Note that we fail to find any outliers in our data that might bias our results.

    ——————–

    Just adding these results to the pile of evidence pointing to the presence of a unit root ;)

    Note that we now have the following results:

    ADF: unit root
    KPSS: unit root
    ZA: unit root
    DF-GLS: unit root
    ADF-outlier: unit root

    versus.

    PP: no unit root, but these test results are disqualified via both Monte Carlo and the Stock (1994) reference (thanks for the working paper, I wasn’t aware of this publication!)
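
    Most of this battery can be re-run in Python; as far as I know, the arch package bundles ADF, DF-GLS, Phillips-Perron, KPSS and Zivot-Andrews under arch.unitroot. The placeholder series again stands in for the real GISS record:

    import numpy as np
    from arch.unitroot import ADF, DFGLS, KPSS, PhillipsPerron, ZivotAndrews

    rng = np.random.default_rng(3)
    giss = np.cumsum(rng.normal(0.006, 0.1, 125))   # placeholder; substitute the real series

    for test in (ADF(giss, trend="ct"),
                 DFGLS(giss, trend="ct"),
                 PhillipsPerron(giss, trend="ct"),
                 KPSS(giss, trend="ct"),            # note: KPSS reverses the null (H0 = stationarity)
                 ZivotAndrews(giss, trend="ct")):
        print(f"{type(test).__name__:15s} stat = {test.stat:7.3f}   p = {test.pvalue:.3f}")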

    Cheers, VS

  1118. John Says:

    Vs or anybody that knows these things.

    If you want to try to correlate the forcing etc. to the temperature over time, why would it not be better to improve the resolution and plot the temperature monthly? Also, this would reduce the yearly pseudo-random up/down/same change to a less dramatic level shift.

  1119. VS Says:

    Hi John,

    Very good question.

    The point is that monthly observations suffer from seasonality. This means that a very (very) extensive seasonality analysis has to be performed, which (if not done perfectly) has unknown impact on our estimators / tests.

    So while you are indeed adding observations, you are not necessarily improving your analysis, as far more uncertainty is added (you have to ‘get’ many more parameters ‘right’).

    Note also that the additional information contained in these monthly observations is only marginal for the analysis at hand.

    Most authors avoid this marsh-pit of seasonality, by resorting to annual observations.

    Best, VS

  1120. VS Says:

    Hahaha, by now, I completely understand who Josh was referring to with this cartoon. No? :)

  1121. eduardo Says:

    Dear HAS

    ‘If for a moment we put aside predictive power, can I pose you a question? If you have a choice of two models (say, linear regression and unit root), and your observations (data) tells you that linear regression is inconsistent with the data, whereas unit root is consistent, as an experimentalist where does that take you?’

    The question is not so straightforward. Let us leave aside for the moment that I am defending not a linear trend but an external forcing as predictor, and let us focus on your question in principle:

    A linear trend model in 1935 predicts for 2010 a temperature rise of, say, 0.7 +- 0.1 K (a confidence interval of 0.2 degrees), and the observed temperature rise is 0.85 K.

    A unit root model in 1935 predicts for 2010 a temperature rise of 0 +- 1 K (i.e. a confidence interval of 2 degrees).

    Which is the best model for prediction? I would say the first is the best model because, although the prediction was wrong, it was almost right. The model has to be improved.

    The second model did not predict anything useful.

    If I had to buy one of these two models to make a second prediction, I would buy the first and I think many people would do the same.

    If for you the second model was better, then by the same token a model predicting 0 change with 200-degree confidence intervals is always the best model. It will always be right and cannot be falsified.
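
    For what it’s worth, the order of magnitude of both intervals can be sanity-checked on the back of an envelope (assuming an annual innovation s.d. of 0.1 degrees): a driftless random walk’s 95% band fans out like 1.96*sigma*sqrt(horizon), while a trend-plus-stationary-noise band stays roughly constant:

    import numpy as np

    sigma, horizon = 0.1, 75                            # assumed annual s.d.; 1935 -> 2010

    rw_halfwidth = 1.96 * sigma * np.sqrt(horizon)      # random walk: grows with the horizon
    trend_halfwidth = 1.96 * sigma                      # trend + stationary noise: ~constant

    print(f"random-walk 95% band in 2010 : +/- {rw_halfwidth:.2f} K")
    print(f"trend-model 95% band in 2010 : +/- {trend_halfwidth:.2f} K")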

  1122. John Says:

    Vs

    Thanks, it never occurred to me that all these sophisticated statistical tools could be thwarted by a humble albeit messy sine wave. Personally I would have stuck the daily sawtooth on top as well!! How disappointing; it’s so far above my head that I thought you could do anything with statistics. All I could manage with this is to decide it’s a bit random, and from about 2000 on it looks like a PID approaching set-point.

    Regards
    John

  1123. eduardo Says:

    Dear Willis,

    ‘In the second case, on the other hand, the clouds form earlier in the day, effectively turning down the incoming solar, and the temperature remains about the same.

    How on earth can those be seen as being “the same kind of feedback”? One is a simple negative feedback. The other is an active governing mechanism. If you still can’t see the difference, please ask someone who is an engineer about the difference between negative feedback and a governor. A governor uses negative feedback, it is true, but it also uses positive feedback. In addition, the governor tends to keep some variable (temperature, RPM, etc.) constant, while a negative feedback does nothing of the sort.’

    I think again that we are using different terminology and I am trying to understand yours without appealing to authority.

    If the governor itself does not depend on the temperature, it is called in climate science a forcing, not a feedback. If you think that cloud cover can change ‘by itself’, independently of temperatures, then it is a forcing, not a feedback. While in principle that can be true (actually this is the suggestion of Roy Spencer, or the cosmic rays hypothesis), it is another theory, debatable as any other.

    Now I think you are proposing a mechanism by which clouds know in advance what the temperature will be and react accordingly (?). Are you suggesting that clouds react to the rate of temperature change instead of the temperature level? In that case, that would indeed be a different type of feedback, but I cannot imagine a mechanism for that on a global scale, and there would be lots of things to discuss. Again, another theory to be debated.

    But how is this related to the unit root thing? Does the unit root test invalidate an externally driven temperature in favor of your mechanisms?

  1124. VS Says:

    Hi Eduardo,

    I resent this comment of yours:

    “The second model did not predict anything useful.”

    First of all, I didn’t perform any multivariate analysis, I calculated a trend.

    Please (re-)read that post of mine.

    As you can see – e.g. in the impulse response functions I posted – there is a lot of information contained in my estimates. The ARIMA structure is very apt at describing a wide variety of data structures. The fact that you don’t ‘see’ the relevance of this, doesn’t do away with the analytical fact.

    Again, it predicts what it predicts. This (Figure 4) is what we see in the data.

    Knowing what you don’t know, is the most useful kind of knowledge. Or more eloquently, as Alan Wilkinson noted here:

    “The problem aint what you don’t know. It’s what you know that aint so – Will Rogers”

    ————-

    Now, you stated in your reply, the following:

    “I think your question is not well posed. You said that the DGP is not the underlying process of which the data are a realization, so I am now confused: the DGP cannot be conditioned on something external to the data, e.g. forcings.”

    Allow me to elaborate again. Our climate is a hyper-complex system. This is the true underlying process governing an infinite number of data generating processes. This process is mean reverting. However, there is plenty of evidence to suggest that our particular DGP is non-stationary. If you want to calculate any probability you must respect the DGP, otherwise your results are nonsensical.

    ————-

    Example:

    ————-

    Let’s say that I want to test the hypothesis that you were blessed by the Gods. The true underlying mechanism is ‘divine luck’. This true underlying mechanism governs a myriad of DGP’s. So: your dice throws, your job interview success rate, your lottery chances.. etc. etc.

    Now, if I want to test this hypothesis, I will collect a sample of your, say, dice throws. I will then set the significance level, calculate the probability that you threw what you threw conditional on being a ‘regular unblessed person’, and if the probability is low enough (<sig) I reject the H0 that you are a 'regular unblessed person'.

    However, this probability has no meaning if I assume that you were throwing a 6-sided die, while you were actually throwing a 10-sided one.
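
    A toy numerical version of the die example (my own numbers, purely to make the point concrete): the same observation yields a wildly different p-value depending on which DGP you condition on:

    from scipy import stats

    rolls, highs = 60, 40                                  # hypothetical: 40 of 60 rolls came up 5 or higher

    p_d6 = stats.binom.sf(highs - 1, rolls, 2 / 6)         # P(roll >= 5) = 2/6 on a 6-sided die
    p_d10 = stats.binom.sf(highs - 1, rolls, 6 / 10)       # P(roll >= 5) = 6/10 on a 10-sided die

    print(f"p-value assuming a 6-sided die : {p_d6:.2e}  (looks 'blessed')")
    print(f"p-value assuming a 10-sided die: {p_d10:.3f}  (nothing special)")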

    ————-

    Finallly, you wrote:

    “But I will try to respond to your question. No, it must not be mean reverting.” (bold added)

    That’s exactly what I was aiming at.

    Note that when I said a DGP controlling for exogenous factors, I meant the sample realization (governed by our DGP) if no exogenous shocks (or e.g. forcings) were to take place. Also note that the DGP itself (viz. impulse responses) includes estimated descriptions of how our ‘trend’ deals with these exogenous impulses.

    Can you see now why I find it so strange that in Zorita et al (2008) you assumed that this must be the case for the purpose of calculating the probability of observing the modern warming record? What you did, but not in those words, is more or less this:

    H0: ‘normal’ mean reverting DGP
    H1: ‘influenced/contaminated’ DGP

    Then you calculated the probability that modern warming will occur under the H0, and concluded that this is not very likely.

    Well, I simply don’t agree with your distribution under the H0 (i.e. you assume stationarity under the H0, I dispute that), and this is why I find your calculated probability uninformative.

    Those interested in the background behind statistical hypothesis testing, are referred to my post here on that topic.

    Best, VS

PS. John, statistical tools are in fact thwarted by misspecification; adding seasonality to the mix merely increases the chance that you will indeed mess up somewhere and misspecify. Note that this (i.e. misspecification) is also, in a nutshell, my critique of the Zorita et al (2008) paper.

  1125. cohenite Says:

sod, you don’t keep an open mind; the break concept combines a stationary process (because the mean is, or can be, constant either side of the break) with a deterministic trend at the break; that combination is not appreciated by your side of the fence in this debate. Your statement that there is no break in 1997-8 [as found by Stockwell, and many others such as Seidel, Lindzen etc., or in 2002 as found by Tsonis] means that the random walk either has unbounded variance [and therefore no restorative negative feedback] or, even worse, has drift and unbounded mean and variance and even greater runaway.

    I would have thought those options were entirely germane to this discussion if not your ideology.

  1126. eduardo Says:

    VS,

    I think we have misunderstood each other again. We have to be careful which terminology we are using. Originally you asked yesterday:

    ‘I’m simply asking you whether you believe that the process generating our instrumental temperature record (i.e. the DGP governing our instrumental record, i.e. the measured temperatures on our 128 year interval), when we account for the various exogenous forcings (volcanic, anthropogenic, you name it, one shot hits), must be mean reverting.’

    and I understood you were asking for a process *including* exogenous factors.

    Today you wrote:
‘Note that when I said a DGP controlling for exogenous factors, I meant the sample realization (governed by our DGP) if no exogenous shocks (or e.g. forcings) would take place’

Now you have written something different: ‘controlling for exogenous factors’, which apparently means excluding them.

No, if you take out the influence of the exogenous factors, the remaining process must be mean reverting. I described this clearly in my previous post.


  1127. sod Says:

A unit root model in 1935 predicts for 2010 a temperature rise of 0 +/- 1K (i.e. a confidence interval of 2 degrees).

This forecast is useless. It is a “statistical” version of this famous German “farmer rule”:

    When rooster crows from dungheap’s top
    Weather will soon change–or not.

And I still have serious doubts that the prediction interval in Figure 2 is right. It should start around 1935.

    was there ever a reply to Alan?

    # Alan Wilkinson Says:
    March 26, 2010 at 02:50

    VS, I question the reality of your 95% confidence lines of the projection given the assumption of I(1). Since you have known data to 1935 it makes no sense to me that the origin of the curves is symmetric back to 1880.

    It seems to me you are forecasting the probabilities from an 1880 starting point whereas the forecast range actually begins in 1935. So your spread in 2000 is incorrectly centred at the very least and probably incorrect in width as well.
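To make Alan’s point concrete in the simplest possible case (a pure random walk; the innovation standard deviation below is an assumed stand-in, and the ARIMA(3,1,0) intervals would be wider still):

```python
import numpy as np

sigma = 0.1                        # assumed innovation s.d. per year
horizon = np.arange(1, 76)         # forecasting 1936..2010 from a 1935 origin

# For a pure random walk the h-step forecast is the 1935 value itself, and
# its variance is h * sigma**2, so the 95% band fans out from THAT origin:
half_width = 1.96 * sigma * np.sqrt(horizon)
print(half_width[-1])              # roughly +/- 1.7 degrees by 2010
```

The band should be centred on the last observed value and widen with the square root of the horizon; it should not be symmetric back to 1880.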

  1128. VS Says:

    Hi Eduardo,

    Thank you for your reply.

I still have to respond to that post (but it might take some time, as it deals primarily with multivariate analysis, while now we are focused on univariate, or trend, analysis).

So, your proposition is that because physics dictates that the DGP governing the record over the past 8000 years must be mean reverting (here we agree), the DGP governing our instrumental record must also be mean reverting in the absence of exogenous forcings (here I believe we disagree).

    Did I get this correct?

    Please note again, that you have just made a formal proposition in terms of mathematical statistics.

    I present again, for illustration of the other readers, what intervals we are comparing: Figure 6.

    Best, VS

  1129. eduardo Says:

    Dear VS,

    ‘I resent this comment of yours:
    “The second model did not predict anything useful.”’

There is no need to be resentful. I wrote that the prediction is not useful, not that there is no theory in the model. I am trying to understand what I can learn from this type of analysis, and I would appreciate it if everyone would do the same.

You said you calculated a stochastic trend. OK. As I wrote, it is interesting that the data show unit root behavior. Now, the range of values estimated for 2010 is from -1 to 1, am I correct? The range for 2100 will be something like -2 to 2 degrees.
What can we learn from this?

I read essentially the same answer: this is what the data say. OK. It doesn’t seem very specific to me. Did I miss something?

I asked further: do the data tell me whether they are exogenously driven or internally generated? I get no answer to this, and this is what I want to know.

  1130. VS Says:

    Hi Eduardo,

Your last question cannot be answered with trend analysis. However, it can be answered with multivariate analysis.

    The unit root then implies that we need to use cointegration to determine the answer to which drivers do what. We’ll get to that.
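For readers unfamiliar with the tool, a minimal sketch of an Engle-Granger style cointegration test with statsmodels, run here on synthetic stand-ins rather than the actual temperature and forcing series:

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)

# Two I(1) series that share a common stochastic trend:
forcing = np.cumsum(rng.normal(0, 0.1, 128))       # stand-in 'forcing'
temp = 0.5 * forcing + rng.normal(0, 0.05, 128)    # stand-in 'temperature'

t_stat, p_value, crit_values = coint(temp, forcing)
print(p_value)   # small -> reject 'no cointegration': the two move together
```

If temperature and the forcings are individually I(1) but cointegrated, a levels relationship between them is meaningful; if they are I(1) and not cointegrated, a levels regression is spurious.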

    However, I have to stress that stationarity is a big ‘assumption’ and before you make it, you need some solid evidence. Currently, all the test-results point in the other direction.

    This is my main point here, not explanatory variables.

That’s also why I asked you on which basis you assumed stationarity under the H0 in Zorita et al (2008).

    Note that the whole purpose of that stochastic trend analysis (with plot) was to juxtapose your analysis (which assumes stationarity under the H0) with my analysis (which doesn’t assume stationarity under the H0).

    Like I wrote earlier, it is simply our (proper) trend estimate. Not the model of the temperature anomalies.

    Now, like I said earlier, this was a detour in our discussion, and we can continue it to see what else comes out of it. Who knows, maybe if we extend our analysis, we might finally get to very similar conclusions to what you concluded in Zorita et al (2008).

However, some individuals here are jumping on this stochastic trend plot in order to mudsling, in an attempt to cripple the debate we’re having here.

    Best, VS

  1131. Kweetal12 Says:

    SOD

    I’ve seldom seen somebody so scared.
    Why is that?
    This is science and I am waiting till Tamino comes this way.
    But I won’t hold my breath.

  1132. VS Says:

    PS. Eduardo, just for clarification.

    Do note that in your GRL paper, your AR process was also a trend estimate!

    However, this trend estimate was based on the assumption of stationarity :)

    Are we starting to understand each other a bit better?

  1133. eduardo Says:

    Dear Steve,

‘It is almost certainly a mixture of internal variation and external forcing. A unit root can only tell us the temperature is not stationary during the instrument period. As a result, the measured change in temperature during that period remains (just barely) consistent with the null hypothesis of random variation, and no CO2 effect.’

I do not agree with this conclusion. It would remain consistent only if the random (not exogenous) variations could produce a non-stationary process. I think they cannot. If they could, life would be impossible on Earth, as at some point temperatures would reach absolute zero or exceed the boiling point.

‘I do not suggest that the GHG effect is really zero (it can’t be!), but a non-stationary process that is potentially contaminated with an unknown level of internal variation, and with that internal variation having an unknown temporal spectrum, does mean that any calculated effect of GHG forcing is bound to have large uncertainty unless there is some way to quantify the expected range of contribution from internal variation during the instrument temperature record.’

Let us forget CO2 for the moment. We know that solar output has also increased in the 20th century, in particular in the early 20th century. This should produce some non-stationarity. So what is remarkable in the fact that the observed record shows a unit root?

  1134. Igor Samoylenko Says:

    I am trying to summarise what I have seen so far (I am neither a statistician nor a physicist BTW).

Eduardo presented a model based on our physical understanding of the climate system. In that simple model the F(t) term is fully deterministic and is a function of the total net forcing. (Incidentally, I think Tamino provides statistical support for this model with his CADF-based set of tests.)

If we test total net forcing for unit roots, we are likely to find that it is I(1) (not a formal test though; I have not seen a formal test yet). Then F(t) will also likely be I(1), and so will be T(t). It all seems pretty straightforward.

So, VS’s results so far that T(t) may be I(1) are not that surprising and, as Eduardo pointed out, not very informative. The presence of a non-linear but perfectly deterministic component shows up as a unit root in some (but not all) statistical tests. So what?

    A couple of other points:

1) From my limited knowledge of statistics, it seems to me that the ARIMA(0,1,2) model used by Breusch and Vahid (2008) is better aligned with our physical understanding of the climate system (see Eduardo’s model) than the ARIMA(3,1,0) used here by VS. ARIMA(3,1,0) also fits the temperature record well, but accepting it would, as far as I understand, involve revising our understanding of the underlying physics.

2) VS, your conclusions about the PP test are based on Monte Carlo tests using the ARIMA(3,1,0) model. What if you used ARIMA(0,1,2)? (I am not enough of a statistician to do it myself.) Until this is done, I don’t think you can dismiss the PP test quite so easily.

  1135. eduardo Says:

    Dear VS,

    PS. Eduardo, just for clarification.
    Do note that in your GRL paper, your AR process was also a trend estimate!
    However, this trend estimate was based on the assumption of stationarity :)
    Are we starting to understand each other a bit better’

    We are repeating the same arguments again and again.

    Yes, in the GRL paper the trend estimate was based on stationarity assumptions.
Yes, the GRL paper found that the observations do not fit a stationarity assumption.
Conclusion from the GRL paper: the temperature must be exogenously forced.

    Your argument: no, because the temperature can obey a unit root.
My argument: a unit root cannot represent unforced endogenous temperatures, from physical reasoning.

    And here we stand after 1000 posts.

    The question is now:
Can the temperature generated by endogenous unforced variations be non-stationary? I say no. Physically impossible.
I still do not know what your answer to that question is. Please be specific.

  1136. VS Says:

    Hi Igor Samoylenko,

    You make valid points, so here are the answers:

(1) You really think the ARIMA(0,1,2) is more appropriate? Why would you believe that? Have you taken a look at the (IMHO) ridiculous impulse response functions that the ARIMA(0,1,2) specification generates?

(2) I just ran the Monte Carlo you asked for, assuming an ARIMA(0,1,2) specification estimated on the same interval.

Before I present the results however, I have to note that an MA specification is called ‘invertible’ if it can be represented as an infinite order AR specification. The estimation results suggest that the MA model is indeed invertible. This has to do with the MA roots of the equation, which are equal to: 0.77 and -0.26 (i.e. both inside the complex unit circle).
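For those who want to check the invertibility claim themselves, a small numpy sketch (the MA coefficients below are chosen so that the inverse roots come out near the quoted 0.77 and -0.26; treat them as illustrative, not as the exact estimates):

```python
import numpy as np

# MA(2) lag polynomial: theta(L) = 1 + t1*L + t2*L**2
t1, t2 = -0.51, -0.20

roots = np.roots([t2, t1, 1.0])      # roots of t2*z**2 + t1*z + 1 = 0
inverse_roots = 1.0 / roots
print(np.abs(inverse_roots))         # both < 1 -> the MA part is invertible
```

Invertibility means the MA(2) can be rewritten as an infinite-order AR, which is what makes the comparison with autoregressive specifications legitimate in the first place.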

    Now, keeping this in mind, I think the Stock (1994) reference the authors name is relevant for my specification as well. I cite them again:

“This apparent conflict among unit root tests may be attributed to the severe size distortion of the Phillips-Perron test in the presence of moderate negative MA roots, also discussed in Stock (1994)” (bold added)

    Note that we in fact are dealing with a moderate negative MA root. However, here are the MC results, although I have to add that we more or less know, theoretically, what will emerge from them, due to Stock (1994).

    ————-

    MONTE CARLO RESULTS, PP TEST, ARIMA(0,1,2) SPECIFICATION

    ————-

    Using a nominal significance level of 5%, simulated actual significance levels, 50,000 iterations per simulation estimate:

    Lags – Simulated significance level

    0 – 0.4863
    1 – 0.4551
    2 – 0.3970
    3 – 0.3907
    4 – 0.3984
    5 – 0.4112

    Again, we find heavy bias towards overrejecting the true H0 of non-stationarity.
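A sketch of how such a Monte Carlo can be reproduced, using the PhillipsPerron test from the Python arch package (the MA coefficients, sample length, lag choice and replication count are stand-ins, not the exact values used above):

```python
import numpy as np
from arch.unitroot import PhillipsPerron

rng = np.random.default_rng(0)
n, reps, t1, t2 = 128, 2000, -0.51, -0.20

def draw_arima_012():
    """One sample path from an ARIMA(0,1,2) with a true unit root."""
    e = rng.normal(0, 0.1, n + 2)
    diffs = e[2:] + t1 * e[1:-1] + t2 * e[:-2]   # MA(2) first differences
    return np.cumsum(diffs)                      # integrate once

rejections = sum(
    PhillipsPerron(draw_arima_012(), lags=3).pvalue < 0.05 for _ in range(reps)
)
print(rejections / reps)   # far above the nominal 0.05, as in the table above
```

The simulated rejection rate of the true unit root H0 is exactly what the ‘simulated significance level’ column above reports.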

    ————-

As for the rest of the discussion, I need to see some proofs before I accept the statistical validity of what Eduardo is claiming w.r.t. the DGP. Again, please respect the formal nature of the discipline we are discussing right now.

    Also, nobody claims we should ‘reevaluate’ the physics. This is a strawman. I listed quite clearly in my replies to Eduardo why this is not the case. I agree that the long-term process is mean-reverting, just like physics predicts.

    We are not discussing that here.

The implication that a long term stationary process must imply a stationary process on a small interval is a formal proposition within the frame of mathematical/theoretical statistics. So how about somebody shows me some proof of this theorem?

    So far, all I’ve seen are ‘informed opinions’. Maybe that ‘works’ in other disciplines, but not in mine (and not in physics, and not in mathematics, and not in chemistry…).

    Best, VS

    PS. Eduardo, we are not repeating the same arguments over and over again. I think the last part of this post (starting with ‘The implication..’) summarizes what I mean :) In a nutshell: I’m saying you are making a lot of (formal) assumptions without delivering any proper proof for them. You can’t just ‘wing’ statistics, it’s a formal discipline.

  1137. cohenite Says:

eduardo says:

“I do not agree with this conclusion. It would remain consistent only if the random (not exogenous) variations could produce a non-stationary process. I think they cannot. If they could, life would be impossible on Earth, as at some point temperatures would reach absolute zero or exceed the boiling point.”

I find this strange; the effective temperature is 255K, the greenhouse temperature 33K, with the vast majority of that caused by water. With a water reservoir and a fairly constant sun, natural factors will be stationary and homeostatic to exogenous factors; but even if there were no greenhouse temperature, absolute zero and/or boiling point would not result.

In respect of endogenous unforced variations (as opposed to AGW): they can be non-stationary; ENSO non-linearity shows that, and Pinker et al describe how:

    http://www.sciencemag.org/cgi/content/abstract/sci;308/5723/850

  1138. Tim Curtin Says:

    Eduardo said March 31, 2010 at 18:03 Tim, Probably I was not clear enough with my example. I meant, for instance, from January 1950 to May 1950, daily values.

Well, with all due respect, your comment is nonsense: obviously the main determinant of the change in NY temperatures from January to May is the SUN, and that is true EVERY year. The question is how much of the change in average temperatures for EACH of January, February, March, April, May, from 1960 to 2006 in the NREL data base for NYC I have accessed is due to your external (e.g. solar) or internal (CO2) causation? Your reply is frankly (and disappointingly) inadequate, because all the regressions I report show that by far the main effect on January temperatures in New York from 1960 to 2006 was precipitable water (aka H2O). None of the changes in Average Temperature in NYC ever have anything to do with [CO2] but everything to do with El Nino/ENSO and related phenomena (and arrivals etc. at JFK Airport).

    Then we have Eduardo Says March 31, 2010 at 18:01 “Dear Steve,
    ‘If we say that the instrument temperature record is non-stationary specifically because of GHG forcing (which it seems to me to be what eduardo is saying),’ then I ask how we know that is correct without being able to see the temperature record in the absence of GHG forcing.”

    Eduardo replied: “The argument is because CO2 is the only *known* external [sic] forcing that can explain the observed rise in temperatures. Perhaps other forcings will be known in the future, but now it is the only one. But this is another type of debate. One could argue that the solar output may be responsible for part and so on. I have not addressed this question because I think it is not the question of this thread and we do not need to muddle the discussion further. We are not discussing models, we are discussing unit root…”

    I thought like most of the IPCC mob that CO2 was not an “external” forcing, but rather anthropogenic and therefore internal. Truly, we learn something new every day here!

    More Eduardo again: “The issue is whether the global temperature rise is externally driven or it is the result of internal variations, and what the unit root test can tell us about these two questions. This is what I would like to know.” Really? Well I gave you here above the answers for New York for January 1960 to January 2006, showing that the atmospheric concentration of CO2 (aka [CO2]) has no explanatory power whatsoever.

    Undeterred (what would?), eduardo said March 31, 2010 at 18:01
‘The issue is not if there is a GHG forcing (I do not think anybody commenting here doubts that there is [sic]). The issue is uncertainty’ [actually it is relative strength, minimal, vis-a-vis natural forcings like the sun and precipitable water, aka H2O].

    Eduardo goes on: “The issue is whether the global temperature rise is externally driven or it is the result of internal variations, and what the unit root test can tell us about these two questions. This is what I would like to know.”

    If Eduardo really would like to know, the unit root test is not that informative except as a refutation of the IPCC-GISS hypothesis, based as that is on a carefully contrived artifact of GMT.

The “global temperature rise”, if such there is (GISS: 0.7 °C from 1900 to 2000), cannot be separated from the criminal (as in Madoff) exclusion of tropical Central America, Africa and SE Asia from the 1900 base of the instrumental record.

    My analysis is fully supported by VS’ comment (April 1, 2010 at 10:10)
    “Hi John, Very good question. The point is that [eduardo type] observations suffer from seasonality. This means that a very (very) extensive seasonality analysis has to be performed, which (if not done perfectly) has unknown impact on our estimators / tests. “

  1139. eduardo Says:

    Dear VS,

    PS. Eduardo, we are not repeating the same arguments over and over again. I think the last part of this post (starting with ‘The implication..’) summarizes what I mean :)

    ‘The implication that a long term stationary process must imply a stationary process on a small interval, is a formal proposition within the frame of mathematical/theoretical statistics. So how about somebody show me some proof of this theorem? ‘

It is remarkable that after 1000 posts there exists such misunderstanding. I think nobody defends that ‘a long term stationary process must imply a stationary process on a small interval’. Actually there are many examples you can think of showing the opposite. Every day, the local temperature appears non-stationary from 5 am to 12 am, and yet it is stationary.

The question is a completely different one. Can an exogenously driven process appear non-stationary in a short interval? If I give you temperature data of a single day from 5 am to 12 am without telling you what they are, would you be able to spot that the process is exogenously driven, or would you conclude (wrongly) that it is simply non-stationary?
This is the interesting question for climate science. The question of whether the trend + confidence intervals have been calculated correctly is basically irrelevant, although it may have appeal to those wanting to spend their time debating in blogs – on both sides.
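The day-temperature example is easy to make concrete. A sketch (all numbers invented): a smooth, exogenously driven morning rise plus small noise, which an ADF test will typically fail to distinguish from a unit root process:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)

t = np.linspace(0.0, 1.0, 420)     # 5 am to noon, minute by minute
temps = 10 + 8 * np.sin(0.5 * np.pi * t) + rng.normal(0, 0.2, t.size)

p_value = adfuller(temps, regression='c')[1]   # H0: unit root
print(p_value)   # typically large: cannot reject, though the DGP has no unit root
```

A deterministic, exogenously driven rise and a stochastic trend can look identical to the test on a short window; the disagreement is over which interpretation the instrumental record warrants.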

Trends as computed by Bart at the beginning of this thread are just descriptive, and they are not used to prove or disprove anthropogenic climate change. They could not be. For that, one has to jointly analyze the temperature *and* the different forcings, among which some are anthropogenic and some natural. So a trend in itself, whether computed with OLS or I(1) or any other method, cannot prove anything. I have the impression that you think that the GRL paper is a milestone in climate science, whereas I doubt that it will even be mentioned in the next IPCC report. For instance, the GRL paper cannot say whether the record years are caused by the sun or by GHG, since those data didn’t enter the analysis.

So my suggestion, again, is that if you want to make a real contribution to climate science (of which I would be sincerely happy), you should move to cointegration or another type of covariance analysis. I am looking forward to learning from you on this. There is a lot of stuff to be looked at, but so far I am afraid that you are on the wrong track.

  1140. VS Says:

    Eduardo,

    you write:

“The question of whether the trend + confidence intervals have been calculated correctly is basically irrelevant, although it may have appeal to those wanting to spend their time debating in blogs – on both sides.” (bold added)

    and

“So my suggestion, again, is that if you want to make a real contribution to climate science (of which I would be sincerely happy), you should move to cointegration or another type of covariance analysis. I am looking forward to learning from you on this.” (bold added)

Observe the very first paragraph in the very first section of the IPCC summary report, which, as far as I know, embodies the scientific consensus™:

    “Eleven of the last twelve years (1995-2006) rank among the twelve warmest years in the instrumental record of global surface temperature (since 1850). The 100-year linear trend (1906-2005) of 0.74 [0.56 to 0.92]°C is larger than the corresponding trend of 0.6 [0.4 to 0.8]°C (1901-2000) given in the TAR (Figure 1.1). The linear warming trend over the 50 years from 1956 to 2005 (0.13 [0.10 to 0.16]°C per decade) is nearly twice that for the 100 years from 1906 to 2005. {WGI 3.2, SPM}” (bold added)

I think that pointing out that both the point estimates and the accompanying confidence intervals named here are nonsense in itself constitutes a contribution to climate science.

    Best, VS

    PS. You are still dancing around my (formal) question.
    PPS. We are not moving on to covariate analysis until the unit root / non-stationarity question is resolved.

  1141. sod Says:

I think that pointing out that both the point estimates and the accompanying confidence intervals named here are nonsense in itself constitutes a contribution to climate science.

Those parts of the IPCC report are NOT nonsense. What is nonsense is your approach, which tells us that global temperature will rise or drop by 1°C over the next 50 years.

    PPS. We are not moving on to covariate analysis until the unit root / non-stationarity question is resolved.

That question will not be resolved. It isn’t resolved in economics either. There are plenty of datasets, plenty of time spans, plenty of tests and many professional opinions.

In the end, unit roots will not change climate science.

  1142. eduardo Says:

    Dear VS,

As you like. You can use your time spotting holes in the accessory questions just because they have been prominently sold.

    all the best

  1143. VS Says:

    Eduardo,

    ‘spotting holes’?

    You do realize that if the assumption of stationarity is violated, it implies that all the statistics performed cannot in fact not to be called formal statistics?

    That’s one big ‘hole’ Eduardo.

    I repeat the statement I made in my very first post:

    “Claims of the type you made here are typical of ‘climate science’. You guys apparently believe that you need not pay attention to any already established scientific field (here, statistics).”

    Have you just proven the point I was making there?

    VS

  1144. VS Says:

    Ehm: “You do realize that if the assumption of stationarity is violated, it implies that all the performed statistical analysis, which assumes stationarity, cannot in fact be called formal statistics?”

  1145. KD Says:

As a scientist (PhD, but not in statistics or “climate science”), I have followed this thread with great interest. After these latest few posts I think it is clear that VS has done a great service and provided a significant contribution to climate science.

To see this, all one has to do is look at the posts and see which contain the following: 1) data; 2) formal analysis; 3) multiple tests of the same hypothesis; and 4) consistency in approach (i.e. using the scientific approach).

    There is little to be gained from all the “informed opinions” if their authors either refuse or are “too busy” to provide data, formal analysis, tests, etc.

Well done VS, well done. Thank you for your patience, diligence, and professionalism.

    You HAVE made a significant contribution, even though the AGW crowd is in denial, which, by the way, is quite ironic.

  1146. Shub Niggurath Says:

    Paging dhogaza, re: the earth will be a snowball without CO2

    There is an interesting paper in Nature:

    Rosing et al. No climate paradox under the faint early Sun. Nature 464, 744-747 (1 April 2010)

    The doi is 10.1038/nature08955

    “Based on the ratio of the minerals, the team reports in tomorrow’s issue of Nature that CO2 levels during the Archean could have been no higher than about 1000 parts per million—about three times the current level of 387 ppm and not high enough to compensate for the weak sun”

    Regards

  1147. jo abbess Says:

    In response to VS, writing April 1, 2010 at 11:23

    >
    > I present again, for illustration of the other readers,
    > what intervals we are comparing: Figure 6.
    >

The link for Figure 6 being here :-

    That figure does not quite correspond with the chart I recall of Vostok.

    I couldn’t find the chart, so I’ve made a new one, and it is pretty clear that during the Holocene there has been a steady cooling, until the period of the last few hundred years, where there has been a warming :-

Perhaps somebody could inform me of which research produced the Figure 6 that VS links to?

  1148. Shub Niggurath Says:

    Apologies for posting in succession

    Eduardo:
“Can an exogenously driven process appear non-stationary in a short interval?”

From my (limited) understanding:
I think the answer to your question is ‘Yes’. I asked the exact same question above too.

    The real question however is:

Is it valid to infer exogenous drivers or forcings using non-stationary data from a short interval?

I think you are on the same page with VS but fail to see what he is saying. I think you are painting VS (and his position) into an ‘unphysicality’ corner when it seems uncalled for.

    Regards
    Shub

  1149. Pofarmer Says:

    Eduardo.

    May be off base.

But you certainly seem to be trying to slam the statistics into fitting your theories and biases, rather than seeing where the MATH takes you. You make many assumptions, such as the stationarity of the temperature trends without exogenous forcings. Might be worth stepping back a minute, thinking “Hmmm, what if I’m incorrect”, and gaming through and seeing where the rest of the math leads.

    Why not try that route?

  1150. VS Says:

    Hi jo abbess,

    Oops, you caught me with my pants down :)

I have to admit that that one I took from somebody’s reproduction online, and I can’t remember anymore where. I wouldn’t be surprised if you were correct, and I was wrong (could you link me to that data please, so I can correct my own database?). Also, I think there are various reconstructions on the basis of the same Vostok data, so it is possible that we are both ‘right’ ;)

    However, the whole point illustrated on those figures (note that I didn’t use the data for any formal argument/estimation, it served as an illustration) is the mean reversion over a long period, and that the sample we study is a very small subset of that long process.

    Also, take another look at this post before saying anything about ‘trends’ ;)

The message you should take home from this entire discussion here is that *eyeballing* the data doesn’t work.

    Cheers, VS

  1151. Ibrahim Says:

    Jo Abess: anomalies and temperatures

    Here is a nice paper for you all:

    Click to access 494.pdf

  1152. jr Says:

Shub, couldn’t you just as easily say that the real question is “Is it valid to infer non-stationarity from an exogenously driven process over a short interval?” :P

B+V seem to say it isn’t (in their conclusion at least; they try to avoid invoking externalities in their analysis). VS seems to have a different opinion. Me, I don’t know, but from my background I am more inclined to side with the guy who says “there is some uncertainty as to just what our tests can actually tell us about this data because of x, y, z”, especially when there seems to be grounds for legitimate debate as to just what the correct interpretation of the various test results might mean when applied to the global temperature series. If “failure to reject presence of” actually meant “confirms the presence of” I might think differently, but again I don’t have the stats chops to know if that is really a reasonable argument to make, and I guess that is why Lucia was asking about the statistical power of the tests with this data.

    I have an inkling that this is what Eduardo is aiming at. Given the short timescale and our knowledge of the physics, is it valid to infer non-stationarity from the test results?

    VS, sorry I am not in a position to be able to tell you how the temperature series should respond to a kick. Even if I said that I thought the response of the (3,1,0) model was more “realistic” it could only be a statement from ignorance.

  1153. Igor Samoylenko Says:

    VS: “This implies, that when our data are formally dealt with, we fail to find any significant ‘trend’ in temperatures over the period 1881-2008.”

And Breusch and Vahid (2008), who also dealt formally with the temperature data, reached very different conclusions (as you know):

We conclude that there is sufficient evidence in temperature data in the past 130-160 years to reject the hypothesis of no warming trend in temperatures at the usual levels of significance. The evidence of a warming trend is present in all three of the temperature series and it is most pronounced in NASA’s GLB series. Although we have used unit roots and linear trends as a coordinate system to approximate the high persistence and the drift in the data in order to answer the questions, we do not claim that we have uncovered the nature of the trend in the temperature data. There are many mechanisms that can generate trends and linear trends are only a first order approximation (see Granger 1988). It is impossible to uncover detailed trend patterns from such temperature records without corroborating data from other sources and close knowledge of the underlying climate system.

    Anyone who is enthralled by VS’s rigor should read this paper. Breusch and Vahid are also economists and they used similar (all formal) statistical methods to those used by VS, yet they reached such different conclusions.

    Personally, I think this conclusion by Breusch and Vahid is a very fitting conclusion to this whole thread.

VS, I would suggest you find a willing climate scientist (such as Eduardo Zorita, for example), and publish a joint paper. When you do that, I will read it. Until then, have fun blogging…

  1154. VS Says:

    Hi jr.

    “I have an inkling that this is what Eduardo is aiming at. Given the short timescale and our knowledge of the physics, is it valid to infer non-stationarity from the test results?”

    I do have to point out that I’m not the one making the bold (unfounded) assertions here. Stationarity is a non-trivial assumption, and is in that sense stronger than non-stationarity.

    What I’m simply saying is: as long as nobody can formally prove their claims, I’ll go with what the data tell us. And the data do point to non-stationarity.

    Physical ‘boundedness’ really has little to do with this, as it’s the data generating process we’re discussing here. There are plenty of processes that turn out to contain (one or more) unit roots, when formally tested.

    – Sea level rises (David checked this)
    – Solar irradiance (widely reported in literature)
    – All GHG forcings (widely reported in literature)
    – etc.

Now, in light of all of this, I would say that odds are that it is not the myriad of tests but rather the unproven assumption of (trend-)stationarity of the instrumental temperature record that is in fact invalid.

    Don’t you think inferring non-stationarity in this context is the prudent thing to do?

    Also, note that some of the tests (i.e. KPSS) in fact take stationarity as the null-hypothesis. Note also that stationarity was rejected in this case.
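For readers keeping score of which test assumes what, a minimal statsmodels sketch, run here on a synthetic drifting random walk standing in for the anomaly record rather than the actual data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(7)
anomalies = np.cumsum(rng.normal(0.005, 0.1, 128))   # stand-in I(1) series

adf_p = adfuller(anomalies, regression='ct')[1]             # H0: unit root
kpss_p = kpss(anomalies, regression='ct', nlags='auto')[1]  # H0: trend-stationary

print(adf_p)    # large -> cannot reject the unit root
print(kpss_p)   # small -> reject trend-stationarity
```

The two tests place the burden of proof on opposite sides, which is why the pattern ‘ADF does not reject, KPSS rejects’ is the coherent unit root outcome rather than a contradiction.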

    Cheers, VS

PS. Yep, my inference was also from ‘ignorance’ :) However, that line with a kink seems weird, don’t you think?

PPS. Igor, too bad you display such an attitude; I actually thought you were an interested skeptic, hence my elaborate reply to your questions (which you apparently also didn’t bother to read / properly respond to, including those Monte Carlos I ran at your request).

Do note that the Breusch and Vahid working paper did not reach ‘such different’ conclusions from mine. Try reading the paper again, and then reading my comments. You are misstating their analysis.

    Most of their analysis in fact supports my assertions. They simply didn’t dig through on the unit root question since they were not interested in that issue… ah well, you can’t please everybody…

Farewell, Slavic brother! ;)

  1155. DLM Says:

    # eduardo Says:
    April 1, 2010 at 14:25

    Dear VS,

As you like. You can use your time spotting holes in the accessory questions just because they have been prominently sold.

    all the best

    That looks like a big wave of the arms, and a Bye Bye.

  1156. DLM Says:

VS, I would suggest you find a willing climate scientist (such as Eduardo Zorita, for example), and publish a joint paper. When you do that, I will read it. Until then, have fun blogging…

    And so goes Igor. Methinks this formal stuff is not to their liking. The formal stuff usually doesn’t serve the dogma well.

  1157. VS Says:

    Yeah DLM,

    Honestly, I have to admit that I got a similar impression.

    And all this time I naively thought that if I would simply provide enough evidence, and answer individual concerns/criticism…

    …ah well, you indeed cannot please everybody..

    As a side note, if this is representative as to how dissenting scientific results are ‘dealt with’ within the climate science discipline, and how much rigor is demanded of authors, I’m really curious what that peer-review looks like from the inside…

  1158. jr Says:

    Hi VS, as B+V made clear, if something might contain a unit root, then it is a good idea to proceed as if it did have a unit root for analysis purposes. So in that sense, sure, inferring the presence of a unit root is prudent. They don’t however make the claim that the underlying data generating process must therefore contain a unit root (I think). That is the major difference I see between them and you.

    (I could be completely misinterpreting you. But that is the sense I get.)

As I said before, it is your declarations of certainty that seem at odds to me with what B+V concluded. I find their lack of certainty more convincing. Sorry. :) They use a variety of formal tests to let the data tell them what it can and, well, their conclusion is posted just up there… I don’t have the chops to really go much further than that without spending time I don’t have getting myself up to speed with the techniques you are using. I could be talking completely arse-backwards here so… I really do find it all very interesting though.

  1159. Pofarmer Says:

    I’m really curious what that peer-review looks like from the inside…

    Wear a tyvek suit.

  1160. VS Says:

    Hi Jr,

    B and V didn’t study the same thing I did.

    In that sense, my analysis is not ‘competing’ with theirs, and our conclusions are not contradictory.

As a matter of fact, my Monte Carlo simulations vindicate their Stock (1994) reference on the PP distortions… and other than that, all our test results coincide.

    They even explicitly state they are not studying the unit roots in particular, and that ‘further analysis’ is necessary.

    That’s what I did in this thread.
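For anyone wanting to rerun this kind of comparison, a sketch of fitting both candidate specifications and inspecting their impulse responses with statsmodels (the data and coefficients below are stand-ins; substitute the actual anomaly series and the fitted estimates):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_impulse_response

rng = np.random.default_rng(11)
y = np.cumsum(rng.normal(0.005, 0.1, 128))      # stand-in anomaly series

for order in [(3, 1, 0), (0, 1, 2)]:
    res = ARIMA(y, order=order).fit()
    print(order, res.aic, res.bic)              # compare information criteria

# Impulse responses of the differenced processes (coefficients assumed);
# cumulate them to get the response of the temperature LEVEL, since d = 1:
phis = np.array([0.45, -0.35, 0.30])            # illustrative AR(3) values
irf_310 = np.cumsum(arma_impulse_response(ar=np.r_[1, -phis], ma=[1], leads=20))
irf_012 = np.cumsum(arma_impulse_response(ar=[1], ma=[1, -0.51, -0.20], leads=20))
```

How a one-off shock propagates into the level is exactly the ‘impulse response’ disagreement between the (3,1,0) and (0,1,2) camps above.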

    VS

  1161. Pofarmer Says:

    VS

I don’t think you’re going to get Eduardo on board here. He seems to be pretty well entrenched.

  1162. DLM Says:

    VS,

    You are just a blogger. Until you find a willing climate scientist (such as Eduardo Zorita, for example), and publish a joint paper …blah…blah…blah.

You are a very smart guy, so you should have seen this coming. And good luck on finding that willing climate scientist :)

    As for the inside scoop on the peer-review process as practiced by ‘climate scientists’: see Climategate emails.

    I don’t think we will lose sod. He doesn’t have a clue about when to quit.

    You really scare these guys. Except for Bart. He has been very patient and uncommonly gracious in hosting your heresy here.

Looking forward to more of the formal stuff. I don’t really grasp it in the intricate details, but it’s like the first time I saw a cricket match. At first the whole thing was a mystery, but after a while it became apparent which side was winning.

    Hang in there.

  1163. DeWitt Payne Says:

    VS,

    You rule out a deterministic trend based on unit root tests. Breusch and Vahid:

Most of the available unit root tests consider the null hypothesis of a unit root against a stationary or a trend stationary alternative. Given our discussions above about the possibility of observational similarity of a unit root process and a process with a deterministic trend in finite samples, it is not surprising to know that the finite sample properties of unit root tests are poor. The performance of these tests depend crucially on the type of trend and the period of cycles in the data. Stock (1994) emphasises the importance of properly specifying the deterministic trends before proceeding with unit root tests, and advises that “this is an area which one should bring economic theory to bear to the maximum extent possible.” In our context, the responsibility falls on the shoulder of climate theory rather than economic theory, an area that we know nothing about.

    [emphasis added]

    I read that to say that unit root tests cannot rule out the presence of a deterministic trend in a time series a priori. Ruling out a linear trend in the temperature series is meaningless because no one actually believes the deterministic trend is linear. The use of linear trends is descriptive, not prescriptive.

  1164. The Blackboard » Does this have a unit root? Says:

    […] may help clarify (for me) some of the answers VS is presenting to eduardo’s questions over at Bart’s. (Right now, I understand eduardo’s questions and explanations, and I don’t think VS is […]

  1165. Igor Samoylenko Says:

    VS,

    I appreciate your responses and the analysis you did as the result of responding to my questions. I think you have shown that the PP test has some issues; I am not disputing that.

VS: “Do note that the Breusch and Vahid working paper did not reach ‘such different’ conclusions from mine. Try reading the paper again, and then reading my comments. You are misstating their analysis.”

    How can I misstate their conclusions when I quoted what they said in their conclusion verbatim next to what you said in your conclusion verbatim?

    Your conclusion: “we fail to find any significant ‘trend’ in temperatures over the period 1881-2008”

    Their conclusion: “there is sufficient evidence in temperature data in the past 130-160 years to reject the hypothesis of no warming trend in temperatures at the usual levels of significance”

    My point about you publishing your results jointly with someone who understands climate science is that it is IMO the only way you can make a meaningful contribution. Write it up, explain it in terms and in ways that are relevant to climate science, get it published. Get the response. If you have something of value, they WILL listen, they WILL take notice. Engage those who have both statistical and physics training. Turn it into something constructive.

If you think you can do it here (or that you have already done it) then, like I say, good luck! Most of your supporters here don’t need much of a spring board to jump to their favourite conclusions. The few serious opponents you’ve had came and went. Do you really think this is because they have nothing to say, or that there is no one else who can productively engage you?

    It is the people like Zorita and Bart that you need to convince, not me (I am just a layman)! And so far as I can see, you are struggling to do that.

VS: “As a side note, if this is representative as to how dissenting scientific results are ‘dealt with’ within the climate science discipline, and how much rigor is demanded of authors, I’m really curious what that peer-review looks like from the inside…”

Are you serious? Do you see any scientists engaging posters at WUWT? Do you think this is a sign of rigor in what is published at WUWT? Or of “how dissenting scientific results are ‘dealt with’ within the climate science discipline, and how much rigor is demanded of authors”?

VS: “They [Breusch and Vahid] even explicitly state they are not studying the unit roots in particular, and that ‘further analysis’ is necessary.”

The aim of their paper is stated in the abstract as: “Are global temperatures on a warming trend?”, regardless of whether there is a unit root or not in the time series. Their conclusion is “yes” (regardless of whether there is a unit root or not in the time series).

Their reference to “further analysis” was to establish whether there is a warming trend, not whether there is a unit root: “the presence of a unit root does not exclude the possibility that there may be a deterministic trend in the data as well. So we need to do further analysis.”

  1166. Pofarmer Says:

From B&V:

“In our context, the responsibility falls on the shoulder of climate theory rather than economic theory, an area that we know nothing about.”

    From DeWitt Payne

    I read that to say that unit root tests cannot rule out the presence of a deterministic trend in a time series a priori.

    Aren’t they really begging the question? We don’t think there should be a unit root, so poof, we are going to ignore it? That’s the way I read their statement.

  1167. Pofarmer Says:

    It is the people like Zorita and Bart that you need to convince, not me (I am just a layman)! And so far as I can see, you are struggling to do that.

    Zorita and Bart are ALREADY convinced. They aren’t looking to become unconvinced. If they were, they would ask more questions and do less arguing.

  1168. Paul Says:

You took 3 data sets, each known to have a large warming bias, most of which is known to be introduced by siting bias and UHI corrections that go the wrong way. None of them are raw data; all are based on the same fudged data and are in no way independent. You put this together and get the same results. What a surprise. GIGO.

  1169. DLM Says:

    Igor,

    I apologize. I thought that you had left in a huff. But you are back.

“Are you serious? Do you see any scientists engaging posters at WUWT? Do you think this is a sign of rigor in what is published at WUWT? Or of “how dissenting scientific results are ‘dealt with’ within the climate science discipline, and how much rigor is demanded of authors”?’

    Are you serious? WUWT is an open forum. How would you characterize the blog run by the team of prominent climate scientists? You know the one. And whatever would make you think that Eduardo Zorita would co-author a paper with VS, after having seen the discussion between the two here?

  1170. GDY Says:

    VS and Eduardo –
    thank you both for the extremely informative, educational and fun(!) dialogue. I hope sincerely you can continue the exchange. I agree with both of you, and don’t see any contradiction in that.
VS – I’ve said it before, I say it again – the depth and breadth of your statistical knowledge and analysis is thrilling. The lack of any real rebuttal of the Stats analysis is disappointing. Your analysis shows the Realized Atmospheric Temperature record is STILL CONSISTENT with Natural Internal Variability.
    At the same time –
    Eduardo, I understand as well the way you look at the world – attempting to ‘attribute’ the realized temperature record. Ultimately, your analysis path is the path we must all go down to deepen our understanding of the world in which we live.

    As I see it, there are some parametrization problems in the GCMs in regards to: Clouds, Oceans/Atmosphere Dynamics, Biota and Cross-Sensitivities between these and the other forcings.

    There is much work to be done, and the reality is, given the failure of Copenhagen to achieve its global public policy goals, we will indeed have the time to realize more and better data.

One final aside – can we please recognize the irony of criticising Climate Science as a ‘young science’ by resorting to the analytical tools of an even ‘younger science’ (e.g., Econometrics)?

    Please find a way to continue this robust exchange.

  1171. DLM Says:

    GDY says: “Your analysis shows the Realized Atmospheric Temperature record is STILL CONSISTENT with Natural Internal Variability.”

    That would have been a good place to stop.

  1172. jo abbess Says:

    @VS

    Hope you’ve got your trousers back on now.

    I included the link for the data in my graphic (go “figure”) to show what data the chart was based on.

    But I’ll show good grace and include my links here again :-

    http://www.ncdc.noaa.gov/paleo/icecore/antarctica/vostok/vostok_isotope.html
    ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/vostok/vostok_deld.txt

I chose to use Jouzel (1996) years before present, not the other year columns. Why did I do that? Because they were the most recent published calculations – pure and simple.

    I know, I misspelled the name Jouzel in the name of the graphic file. Oops.

As for “eyeballing” the trend lines, the travel of the deep minima in the graphic clearly suggests that, and this analysis of the data is backed up by numerous other proxies.

My summary is probably more correct than trying to contend that the temperatures are the result of a stochastic process, and arguing that since they are stochastic they therefore do not reflect any real change. Stochastic does not mean “unable to change”, nor does it mean “random result”.

Interestingly, stochastic processes are behind such things as radiation poisoning and lung cancer from tobacco smoking. Exposure to repeated small forcings, such as radioactive or chemical influences, causes real change.

    There is little doubt that Global Warming is the result of small forcings. Every day, any particular part of the Earth turns into shade and light, and during the days, more of the Sun’s radiation is captured and converted into heat because of the additional radiative forcing from accumulating Greenhouse Gases in the atmosphere. This stochastic process is leading to real and undeniable warming.

    Global Temperatures are not the result of a random walk. Heat is stored in the Earth system, largely in the oceans, and it’s building up with no way of escape.

    If you turn up enough playing cards in the deck, sooner or later you get the Ace of Spades. It’s just a matter of time for the small risks to output the winner.


  1173. DeWitt Payne Says:

    Pofarmer Says:
    April 1, 2010 at 18:01

    We don’t think there should be a unit root, so poof, we are going to ignore it?

    Please show me where in the B&V paper it said they ignored the presence of a unit root in the data. Their model was ARIMA(0,1,2). The one in the middle specifies the presence of a unit root in the model just like VS’ (3,1,0) model. The difference is that the B&V model includes Moving Averages (the last 2) while VS’ has Autoregressive lags (the 3).

  1174. sod Says:

I have to admit that that one I took from somebody’s reproduction online, and I can’t remember anymore where. I wouldn’t be surprised if you were correct, and I was wrong (could you link me to that data please, so I can correct my own database?). Also, I think there are various reconstructions on the basis of the same Vostok data, so it is possible that we are both ‘right’ ;)

Your lack of knowledge about climate issues is shocking. Just a wrong choice of link? Like your false “random walk” claim that started this discussion?

The MWP was not 1° higher than current temperatures are. And GISS is a global dataset, which the Vostok core is not.

And that information from the past does not help at all when looking at the very recent AGW.

    In that sense, my analysis is not ‘competing’ with theirs, and our conclusions are not contradictory.

They come to the opposite conclusion. That you don’t understand and see that is the reason why you think that your analysis makes sense. You handwave away anything contradicting your position, and seriously overstate everything that seems to support it.

Good luck getting any of this rubbish published.

  1175. Anonymous Says:

    SOD, what is wrong with you?
    So hostile! No arguments, just yelling and foot stomping. “Nno!! U cannot play with us! Nno, this is *our* club!!”

I don’t understand why you cannot just let this play out, see where it goes. If it goes nowhere, who cares? Good exercise, next!
But you are apparently scared that it *does* go somewhere; why else would you behave this way? Your behaviour betrays you, it is so obvious.

  1176. lucia Says:

    Dewitt

    Stock (1994) emphasises the importance of properly specifying the deterministic trends before proceeding with unit root tests, and advises that “this is an area which one should bring economic theory to bear to the maximum extent possible.”

I agree with DeWitt that VS’s mis-specification of the functional form of the deterministic trend believed to be true is a serious shortcoming of VS’s analysis. I’ve mentioned this, VS has mentioned it. I think we’ve each tried to explain this in several different ways. I don’t think VS is really understanding this issue, which to my mind is a very serious flaw in his entire analysis.

    I am very worried that it is the functional form of the deterministic signal that is triggering the diagnosis of the “unit root” behavior. The diagnosis is then that because of the “unit root”, we can’t see the signal.

    VS–
I have posted some synthetic data created by driving the absolutely simplest lumped parameter model in existence. I invite you to tell us what you discover about this data. Does it have a unit root? Does it have a trend? Etc. I made the noise fairly low, and simple. But I’d like to know what you find about the unit root, and what you diagnose about this data taking it fairly blind. I’ve posted the data here: does this have a unit root?
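For those who cannot get at the posted file, a sketch of the general kind of series being described: the simplest lumped-parameter energy balance, C*dT/dt = F(t) - lam*T + noise, with every constant below invented rather than taken from the actual post:

```python
import numpy as np

rng = np.random.default_rng(5)
C, lam, years = 8.0, 1.25, 128        # heat capacity and feedback (assumed)
F = np.linspace(0.0, 2.5, years)      # smoothly ramping forcing, W/m^2

T = np.zeros(years)
for i in range(1, years):             # forward Euler, 1-year steps
    T[i] = T[i-1] + (F[i] - lam * T[i-1] + rng.normal(0, 0.5)) / C

# T is deterministic-plus-AR(1)-like by construction, yet on 128 points a
# unit root test may well fail to reject I(1); that is the worry above.
```

Running the unit root battery on such a series, blind, is a fair stress test of the methodology.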

  1177. lucia Says:

    “VS has mentioned it.”
    I meant Dewitt mentioned it.

  1178. DeWitt Payne Says:

    Willis Eschenbach Says:
    March 31, 2010 at 22:33

    I note that as far as I know, neither tamino nor De Witt have defended their claims against vs’s math.

    Have too. Just not here. See comments here. I can’t speak for Tamino.

    As far as Tamino having cherry picked, can someone point me to the a priori as opposed to data-snooped justification for VS’ 1935 date?

  1179. Bart Says:

    Lucia makes a very important point:

    I am very worried that it is the functional form of the deterministic signal that is triggering the diagnosis of the “unit root” behavior. The diagnosis is then that because of the “unit root”, we can’t see the signal.

This is something that came up way up this thread as well, and that Tamino also highlighted: in testing for a unit root you have to account for the underlying trend. If you misspecify the underlying trend (e.g. by assuming it is linear, or just following the CO2 forcing), you may be misspecifying the unit root behavior. That’s why I have repeatedly suggested doing this analysis with the (response to the) net forcing or climate model output as the forced signal (and then there are internal variations such as ENSO still to consider, but let’s leave details aside for the moment).
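A sketch of what that suggestion amounts to in practice (the ‘forced’ series below is a made-up ramp standing in for the net-forcing response or GCM output, and the usual caveat applies: with an estimated trend, the tabulated ADF critical values are only approximate):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(9)

forced = np.concatenate([np.zeros(60), np.linspace(0.0, 0.8, 68)])  # stand-in
temp = forced + rng.normal(0, 0.1, 128)     # stand-in observed anomalies

# Remove the hypothesized forced signal, then test what remains:
fit = sm.OLS(temp, sm.add_constant(forced)).fit()
p_value = adfuller(temp - fit.fittedvalues, regression='c')[1]
print(p_value)   # small -> residuals look stationary around the forced signal
```

With the forced signal removed, the unit root test asks the right question: is the leftover variability stationary noise, or a stochastic trend of its own?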

  1180. DeWitt Payne Says:

    The idea of treating the temperature series as purely stochastic rather than, say, deterministic chaotic also has to be justified.

    Trying to channel Tom Vonk:

    From the Wikipedia article on Chaos Theory:

    When a non-linear deterministic system is attended by external fluctuations, its trajectories present serious and permanent distortions. Furthermore, the noise is amplified due to the inherent non-linearity and reveals totally new dynamical properties. Statistical tests attempting to separate noise from the deterministic skeleton or inversely isolate the deterministic part risk failure. Things become worse when the deterministic component is a non-linear feedback system.[63] In presence of interactions between nonlinear deterministic components and noise, the resulting nonlinear series can display dynamics that traditional tests for nonlinearity are sometimes not able to capture.[64]

    63. Kyrtsou, C., (2008). Re-examining the sources of heteroskedasticity: the paradigm of noisy chaotic models, Physica A, 387, pp. 6785–6789.
    64. Kyrtsou, C., (2005). Evidence for neglected linearity in noisy chaotic models, International Journal of Bifurcation and Chaos, 15(10), pp. 3391–3394.

  1181. Pofarmer Says:

    That’s why I have repeatedly suggested to do this analysis with the (response to the) net forcing or climate model output as the forced signal

But all you are doing with that is begging the question. You are ASSUMING that the model output and the net forcing analysis are correct. You are automatically biasing your test. At least that’s the way I see it.

    Please show me where in the B&V paper it said they ignored the presence of a unit root in the data. Their model was ARIMA(0,1,2). The one in the middle specifies the presence of a unit root in the model just like VS’ (3,1,0) model. The difference is that the B&V model includes Moving Averages (the last 2) while VS’ has Autoregressive lags (the 3).

    Sorry.

My understanding is that they chose that particular model because it fit what they “thought” the model should be, not because it’s what the testing told them it was. Might be totally off base, of course, but that’s my impression of what B&V and Zorita et al are doing.

I’m heartened by the fact that this is the same sort of thing that Bart proposes. It’s completely circular. “Well, if we use this bias in our testing, then the results come out the way we like”, sort of thing.

  1182. Shub Niggurath Says:

I am saddened by Bart’s recent efforts, which have gone in the ‘unit root body weight’ direction.

    Look at this:
    http://wotsupwiththat.wordpress.com/

It is apparently run by Bart Verheggen, if we believe RealClimate, and this is where his recent efforts have gone, presumably in the recent past.

    VS – all the best.

  1183. Bart Says:

    Shub,

    Incorrect. I have nothing to do with that blog (not sure what the problem would be if I did, though). I don’t know why the RC article linked to that blog; my guess is that it’s a mistake (as also noted by one of the commenters, Sou, nr. 17 at http://www.realclimate.org/index.php/archives/2010/04/climate-and-network-connections/ ).

  1184. DLM Says:

    Shub says: Look at this:
    http://wotsupwiththat.wordpress.com/

    Interesting. I posted the following comment there:

    Testing.

    The limit on comments here seems to be 2. Just wanted to see if this would go through moderation.

    Just a suggestion. Did you know that you can post your comments on WUWT? You might find a bigger audience there.

  1185. DeWitt Payne Says:

    VS Says:
    March 25, 2010 at 12:22

    I’m getting a 404 error for the image that’s supposed to be at:

    http://img146.imageshack.us/img146/6674/deterministicvsstochast.gif

  1186. Pofarmer Says:

    http://wotsupwiththat.wordpress.com/

    That’s just sad.

  1187. Shub Niggurath Says:

    I think the wottsup site is just an April 1st prank. Wonder why they attributed it to Bart. It looks like it has been running since January.

    “…Bart Verheggen from wuttsupwiththat, has pointed out the site UnrealClimate’s real name may be ‘blogal cooling’…”

  1188. Pofarmer Says:

    I’m getting a 404 error for the image that’s supposed to be at:

    http://img146.imageshack.us/img146/6674/deterministicvsstochast.gif

    Works for me.

  1189. HAS Says:

    eduardo says way back on April 1, 2010 at 10:27

    “If I had to buy one of these two models to make a second prediction, I would buy the first and I think many people would do the same.”

    But would you use it if the data told you that the first model was wrong, and any apparent accuracy in the prediction was totally spurious? This is my point.

    Since I was last here, things seem to have taken a turn somewhat for the worse, unnecessarily so I fear.

    Two conversations are going on.

    One is around whether the process used by VS to get to an ARIMA(3,1,0) is statistically sound. This is as it should be, although I don’t think people just citing the differing conclusions of a paper (e.g. B&V), rather than identifying why the differences arise, is going to move the debate along. It is also important to test VS’s process for model selection against synthetic data to establish its robustness.

    However the other is I think based on a misunderstanding.

    It is quite possible that the underlying process (DGP) for the series involves a new exogenous component from the late 20th century. All that is being said here is that on the data its presence is not detectable, and therefore it is wrong to assume it exists on the basis of this data set alone.

    eduardo is this an acceptable proposition to you?

    It is also saying that the DGP that is dominating this time series is I(1) etc, and from that comes a number of consequences for how you treat the series in statistical analysis.

    If VS’s model proves robust, then it will, I suspect, place significant constraints on the form of your proposed T(t) = F(t) + stationary(t) model, even if ultimately the model as estimated proves robust only for T pre-mid-20th-century.

    As a starting point, I’d ask you whether the climate and weather could drive the change in T rather than absolute T (deterministically and randomly, respectively).

    Also, I’d ask how well your model might fit with the results from this model: namely, that in aggregate the next change in the earth’s surface temperature will be in the opposite direction of the approximate weighted average of the last three changes (with some bias towards the most recent changes), with an approximately 10% overshoot [in total, your climate impact], plus a random bit [your weather impact].
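
    For concreteness, that behaviour can be simulated in a few lines of R (the AR coefficients are purely illustrative, not VS’s estimates):

    set.seed(1)
    dT <- arima.sim(list(ar = c(-0.45, -0.35, -0.30)), n = 129, sd = 0.1)  # AR(3) in the changes
    x  <- cumsum(dT)                  # integrate once: the I(1) level series
    plot(x, type = "l", xlab = "year index", ylab = "simulated anomaly")

    With all three AR coefficients negative, each new change pushes against a weighted average of the last three changes, yet the level series still wanders like an integrated process.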

  1190. AndreasW Says:

    Lucia and dewitt

    since you think the temperature could have a deterministic trend and still have a unit root, please show us an example of a process that combines a deterministic trend with a unit root!

  1191. cohenite Says:

    VS, you say:

    “Eleven of the last twelve years (1995-2006) rank among the twelve warmest years in the instrumental record of global surface temperature (since 1850). The 100-year linear trend (1906-2005) of 0.74 [0.56 to 0.92]°C is larger than the corresponding trend of 0.6 [0.4 to 0.8]°C (1901-2000) given in the TAR (Figure 1.1). The linear warming trend over the 50 years from 1956 to 2005 (0.13 [0.10 to 0.16]°C per decade) is nearly twice that for the 100 years from 1906 to 2005. {WGI 3.2, SPM}” (bold added)

    I think that pointing out that both the point estimates and the accompanying confidence intervals named here are nonsense in itself constitutes a contribution to climate science.”

    While commendable and worth continual mention, in fact Monckton has been shopping this point for yonks: see page 8,

    Click to access markey_and_barton_letter.pdf

  1192. cohenite Says:

    AndreasW, you ask:

    “Lucia and dewitt

    since you think the temperature could have a deterministic trend and still have a unit root, please show us an example of a process that combines a deterministic trend with a unit root!”

    The break concept seems to have done that as my earlier reply to sod explains:

    “April 1, 2010 at 11:05
    sod, you don’t keep an open mind; the break concept combines a stationary process because the mean is or can be constant either side of the break with a deterministic trend in the break; that combination is not appreciated by your side of the fence in this debate. Your statement that there is no break in 1997-8 [as found by Stockwell, and many others such as Seidel, Lindzen etc or 2002 as found by Tsonis] means that the random walk is either with unbounded variance [and therefore with no restorative negative feedback] or even worse with drift and unbounded mean and variance and even greater runaway.”
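
    And of course the textbook case combining the two is a random walk with drift; a minimal sketch in R (all numbers illustrative):

    library(tseries)
    set.seed(123)
    n <- 130
    x <- cumsum(0.006 + rnorm(n, sd = 0.1))   # x(t) = x(t-1) + drift + noise
    adf.test(x)                               # typically fails to reject the unit root
    summary(lm(x ~ seq_len(n)))               # yet OLS reports a ‘significant’ linear trend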

  1193. Willis Eschenbach Says:

    Alan Says:
    April 1, 2010 at 08:45

    OK Willis … I understand what you are saying … but I think it’s askew.

    “Air resistance” on a car is not a ‘feedback’ … it’s an ‘input force’. Speed is the output and a feedback is a signal, based on the output, which modifies the input.

    Look at it this way … if the car was stationary (brake on!) and there was a prevailing wind of 150mph (say). If the driver gunned the car and let off the brake, the force applied by the engine to the wheels would be balanced by the wind force … the car would not move (accelerate from 0 mph).

    You can’t say the ‘negative feedback’ is proportional to speed … the speed is zero.

    ?

    Oh, please. Work with me here. The air is obviously still. The air resistance is a feedback that is proportional to car speed.

  1194. Willis Eschenbach Says:

    Eduardo, thank you for persevering. You say:

    If a governor itself does not depend on the temperature itself, it is called in climate science a forcing, not a feedback. If you think that cloud cover can change ‘by itself’, independent of temperatures, then it is a forcing, not a feedback. Whereas in principle it can be true – actually this is the suggestion by Roy Spencer, or the cosmic rays hypothesis. It is in principle another theory, debatable as any other.

    Now I think you are proposing a mechanism by which clouds know in advance what the temperature will be and react accordingly (?). Are you suggesting that clouds react to the rate of temperature change instead of the temperature level? In that case that would indeed be a different type of feedback, but I cannot imagine a mechanism for that on a global scale, and there will be lots of things to discuss. Again, another theory to be debated.

    Not sure why this is so hard to understand. I am not proposing any such mechanism. Did you read the full explanation of what I think regulates the earth’s temperature? It is a governor, which is very different from a simple proportional feedback. You still seem not to understand the difference. Is that the case? Because there are more things in the climate system than forcings and feedbacks; that’s a false dichotomy.

    A governor is a special kind of active system wherein a feedback controls the amount of forcing … now, is that a forcing or a feedback? Since the amount of external forcing is varying, it must be a forcing … but the amount of forcing is based on a feedback, so it must be a feedback. You see the problem. It is neither one; the simplistic forcing/feedback dichotomy doesn’t work.

    The response of a car with “cruise control” (a governor) is very different from the response of a car running at a balance between power and air resistance. Yes, the air resistance depends on the speed of the car, and the actions of the governor depend on the speed of the car. But that doesn’t make both of them equivalent “feedbacks”. One is an active system, the other is not.

    But how is this related to the unit root thing? Does the unit root test invalidate an externally driven temperature in favor of your mechanisms?

    Before we can get to that, I am trying to understand what you mean by a “mean-reverting behaviour”. You have not defined this term, and you seem to think it is self-explanatory. It is not. Which is why I had asked:

    However, the [mean-reverting] mechanisms in question are very, very different. In one, there is an active governor at work. In the other, it is merely staying at the point where input = resistance. The implications of these two mechanisms are also very different.

    Which type of “mean-reverting” behaviour are you referring to here? One? The other? Both?

    Thanks,

    w.

  1195. jr Says:

    Hi VS. B+V looked at the data using econometric techniques to determine if it could be said that there was a trend in global temperatures. Along the way they used the same sort of methodology as you have done and came to different conclusions. Basically they accept the limitations of the tests and the uncertainty that results from that.

    If you insist on just replying to people with “I have done the tests, they are conclusive,” then please don’t take it to heart when people dispute that based on the analysis of other learned people, who have done the same sorts of tests and disagree with you. That you seem to deny any uncertainty in your position is a red flag to me. Sorry, but it just is. That you haven’t really managed to defend your choices of trend specification, or start dates for your analysis, other than “it looks right”, is another. That you on several occasions appear to denigrate the efforts of people who work in fields different to yours is distasteful, but I guess shouldn’t distract from your arguments such as they are. But I still have to side with the economics Profs over “some guy on a blog.” Sorry. They just seem more considered and convincing than you do.

    If you could put together a synopsis of your analysis, without linking back to various comments here and there, it might make it easier to understand your point of view. Of course I realise that it is probably an unreasonable request, but I make it anyway. :P

  1196. phinniethewoo Says:

    I find this an absolutely fascinating thread, if only because I understand so little of it. Still, I have this uncomfortable feeling I ought to…

    It must indeed be true that merely looking at a graph visually and saying it is going up is too good to be true, too simplistic: maths can squeeze more info out of such graphs, no doubt.

    But the proposed alternative analyses seem to involve Klingon language :(

    an “intercept” is what mortals call a constant?
    or did I lose it there already??
    ah well…

  1197. phinniethewoo Says:

    http://www.statsoft.com/textbook/time-series-analysis/#systematic
    seems possible introduction in Time Series Analysis

    If VS is finished with his correlation exercise, he should try the frequency domain as well. Everything in the climate seems to be cyclical (years and seasons, ENSO, Milankovitch), so is this not a more natural choice?

    When we say the earth’s temperature is rising, we relate that to some ideal steady temperature without humanity? But what is the chance(!) that the steady state for earth’s temperature is a constant? More likely to be something cyclical, to me.

  1198. phinniethewoo Says:

    there is a clear, proven physical model in a drunk walloping around his lantern: he is obeying all the laws of thermodynamics and Newton. The randomness is in the eye of the beholder… or depends on the level you are looking at it.
    He might be trending to that marketplace where there is still a pub open late… but are we sure of any determinism? I guess a unit root should sort us out.

  1199. DeWitt Payne Says:

    cohenite Says:
    April 2, 2010 at 00:15

    since you think the temperature could have à deterministic trend and still have à Unit Root, please show us an example of a process that combines à deterministic trend With à unit root!

    In my post at The Blackboard, I created series with two unit roots and added noise. Some of the details are in the comments, in response to questions raised. When sufficient noise was added, the unit root tests rejected the presence of a unit root after one difference of the noisy series, or failed to reject stationarity in the case of the KPSS test. And yet, if I subtract the unmodified two-unit-root series from the noisy series, the difference is stationary. So a series with two unit roots can be the deterministic trend of a series that tests as having only one unit root. Tamino’s use of the CADF test with a specified non-linear trend, namely the sum of the GISS forcings, is also relevant. That test used the entire 1880 to 2003 record, so cherry picking can’t be claimed. It also rejected the presence of a unit root after correcting for the trend.

    And then there’s the problem of chaotic behavior. This paper from Koutsoyiannis & Montanari talks about long term persistence and some classical statistical tests. It would be interesting to see if data generated by Koutsoyiannis’ toy model has any unit roots.
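
    For anyone who wants to play with the idea, a minimal sketch along those lines (illustrative noise levels, not my actual Blackboard code):

    library(tseries)
    set.seed(99)
    n     <- 130
    base  <- cumsum(cumsum(rnorm(n, sd = 0.01)))   # two unit roots (doubly integrated noise)
    noisy <- base + rnorm(n, sd = 0.3)             # heavy observation noise on top
    adf.test(diff(noisy))    # may now reject a unit root in the differenced series
    kpss.test(noisy - base)  # residual after removing the noise-free base: stationary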

    [Reply: First link should probably go here. BV]

  1200. DLM Says:

    jr says: “If you could put together a synopsis of your analysis,without linking back to various comments here and there, it might make it easier to understand your point of view. Of course I realise that it is probably an unreasonable request, but I make it anyway. :P”

    That’s funny. After gratuitously insulting the man, you ask him to summarize weeks of discussion, so that you might be better able to understand his point. Why don’t you read the thread and ask him specific questions on those things that you don’t get? That’s if he is taking any more questions.

  1201. DLM Says:

    PS:

    And what’s this about jr? If you insist on just replying to people with “I have done the tests, they are conclusive,”

    I don’t recall seeing VS say that. Is that a direct quote?

    In any case, that does not accurately characterize how VS has participated in this discussion. VS has been very responsive to questions. VS has shown his work. Find something wrong with it.

  1202. cohenite Says:

    DeWitt, your Blackboard link is not working; the VS summary is at Blackboard here under the RickA comment:

    http://rankexploits.com/musings/2010/questions-for-vs-and-dave/

    LTP as described by Koutsoyiannis is of course not stationary, and would tend to confound any finding that non-GHG temperature does not have a unit root and is non-stationary. The problem for me, in respect of only ACO2 being non-stationary and trend-producing [deterministic] while natural variation is not, is that for ACO2 to impact on temperature it must be increasing exponentially to produce a linear temperature trend. This inverts the Beer-Lambert radiative effect of ACO2, which has an exponentially declining effect; it can only be, therefore, that feedbacks, primarily water, are drafted in to physically explain the shortfall. From a physical perspective, increases in CO2 are consistent in having a sharply declining effect, while totals of ACO2 have none, because their heating effect has been done [and this has been verified by David Stockwell’s cointegration series based on the B&R paper]. Apart from limiting runaway [being an inbuilt governor], this would also mean that there is no pipeline effect in the system, since the total of CO2 is irrelevant, and therefore equilibrium sensitivity is not a valid concern.

    My concern with your statistical test is the phrase “when sufficient noise was added”; isn’t this simply creating a statistical threshold for the presence of a unit root which may or may not be justified; that is, an assumption?

  1203. Pofarmer Says:

    My concern for your statistical test is, “when sufficient noise was added”; isn’t this simply creating a statistical threshold to the presence of a unit root which may or not be justified; that is, it is an assumption?

    What it is is slinging poo.

  1204. sod Says:

    I don’t recall seeing VS say that. Is that a direct quote?

    In any case, that does not accurately characterize how VS has participated in this discussion. VS has been very responsive to questions. VS has shown his work. Find something wrong with it.

    VS did exactly that. for example when he makes the false claim that he is not contradicted by the Breusch paper:

    VS Says:
    April 1, 2010 at 17:16

    Hi Jr,

    B and V didn’t study the same thing I did.

    In that sense, my analysis is not ‘competing’ with theirs, and our conclusions are not contradictory.

  1205. HAS Says:

    DeWitt Payne on April 2, 2010 at 04:13

    Hi

    Thanks for the reference to Koutsoyiannis and Montanari. From there I struggled onto http://www.itia.ntua.gr/en/documents/?authors=koutsoyiannis&tags=deterministic_vs_stochastic and in particular the two most recent papers/presentations.

    My reading of all this suggested this is a strong plug for induction from the data (aka VS’s approach) rather than what seems to be where you are coming from, in your comment at April 1, 2010 at 21:11 “The idea of treating the temperature series as purely stochastic rather than, say, deterministic chaotic also has to be justified.”

    In fact, as I read it, Koutsoyiannis, in making the point that a deterministic system can generate complex outcomes, is using it as an argument to encourage people to stick to the data rather than seek out the deterministic structure (even if it is there), particularly if you want to do long-term forecasts.

    I’m less clear what he thinks about ARMA: http://www.itia.ntua.gr/getfile/18/2/documents/2000WR900044PP.pdf seems critical, but in http://www.itia.ntua.gr/getfile/923/12/documents/hessd-6-C3040-2010.pdf a co-author of the paper you cite comments: “I have a final and very minor remark on Koutsoyiannis (2009). The author seems to imply that the separation of deterministic and random dynamics as additive components is inappropriate. I believe it would be advisable to specify that under certain assumptions such disaggregation is justified. Actually, there are numerous examples (the author correctly cites the ARMA models) where this kind of separation is properly used.”

  1206. riba Says:

    Dear SOD,
    ‘seed of doubt’, seed of annoyance would fit you better.

    Please write some new articles for your
    not very interesting blog : http://sod-iraq.blogspot.com,
    and let the big men fight their duels with real arguments.

    Stop acting pathetic like you are doing now.

    riba

  1207. Allan Kiik Says:

    VS: “I have to admit that that one I took from somebody’s reproduction online, and I can’t remember anymore where. I wouldn’t be surprised if you were correct, and I was wrong (could you link me to that data please? so I can correct my own database?). Also, I think there are various reconstructions on the basis of the same Vostok data, so it is possible that we are both ‘right’ ;)”

    This somebody has mislabeled the graph as “Vostok.” It looks to me like the GISP2 ice core from Greenland, published in this paper:
    Alley, R.B. 2000. The Younger Dryas cold interval as viewed from central Greenland. Quaternary Science Reviews 19:213-226.
    Data is available here:
    ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/greenland/summit/gisp2/isotopes/gisp2_temp_accum_alley2000.txt

    VS and all contributors to this thread, thank you for the most educating and entertaining thread I have ever read in the whole climate blogosphere!

  1208. jr Says:

    Hi DLM, sorry if I have hurt your sensibilities but your tone hasn’t been too polite in any case. So it goes.

    VS can respond to me or not. I think he can probably manage that himself without you feigning outrage at my “heinous insults” for him. (btw, just ’cos it is in quotation marks doesn’t mean I think you said “heinous insults” anywhere about my words. It would have been funny if you had, but alas not.)

    I’ve made pretty clear, I think, my level of understanding too, so probably he will realise that I don’t really have anything else of substance to add anyway. :)

  1209. VS Says:

    Hi guys,

    Wow, what activity! Great, I encourage that.

    First of all, I have to point out that we are dealing with two questions at the same time, just like HAS noted:

    (1) Is the (DGP governing the) instrumental record non-stationary or not?
    (2) If the answer to (1) is ‘yes’, what is the most appropriate description of the process: ARIMA(3,1,0) or ARIMA(0,1,2)?

    ———————-

    Now, as for (1) I present my statistical evidence:

    ADF: unit root
    KPSS: unit root
    ZA: unit root
    DF-GLS: unit root
    ADF-outlier: unit root

    versus.

    PP: no unit root, but these test results are disqualified via both Monte Carlo (on both ARIMA(3,1,0) and ARIMA(0,1,2)) and the Stock (1994) reference (the negative root of the MA polynomial). Furthermore, the ADF test turns out to be exact conditional on ARIMA(3,1,0). This last scenario I haven’t tested for ARIMA(0,1,2), but theory suggests a similar outcome, due to invertibility (both MA roots fall within the complex unit circle) vis-à-vis the PP Monte Carlo.
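
    For those who want to replicate this battery at home, a minimal sketch of the corresponding tests in R’s urca package (these are not my actual scripts; temp stands for the annual anomaly vector):

    library(urca)
    summary(ur.df(temp, type = "trend", selectlags = "AIC"))   # ADF
    summary(ur.kpss(temp, type = "tau"))                       # KPSS (null: trend-stationary)
    summary(ur.za(temp, model = "both"))                       # Zivot-Andrews
    summary(ur.ers(temp, type = "DF-GLS", model = "trend"))    # DF-GLS
    summary(ur.pp(temp, type = "Z-tau", model = "trend"))      # Phillips-Perron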

    So, this was the unit root question. Note that the results of B&V coincide perfectly with my results. I just take the unit root question, which they try to avoid, to the next level.

    That’s it.

    ———————-

    Now question (2).

    So far, we have two candidates for the stochastic trend specification. The ARIMA(3,1,0) and the ARIMA(0,1,2). We can discuss/debate this.

    However, both these specifications are in line with unit root presence. So using this ‘difference’ between my analysis and that of B&V in order to ‘disprove’ the unit root is senseless rubbish.

    ———————-

    Also, I see a lot of questions posed. That’s great. However, I do have to point out that I’ve been typing like crazy over the past month, and that most of these questions have in fact been answered (multiple times).

    I appreciate the enthusiasm, but please read the thread before diving head first into the technical discussion.

    Also, I’ve seen a lot of people performing their own types of simulation analyses. This too is great. However, again, I have to point out that econometrics has been around for over 60 years (i.e. that’s when full-fledged development commenced).

    There is by now a huge body of literature on the topic, and there is also a clearly defined methodology as to how these things are done.

    So if you truly want to do those simulations ‘right’ (instead of just propping up a ‘debunkation’), it might make sense to look at this enormous body of literature first, so that you don’t have to reinvent the wheel yourself. A lot of smart people have worked on this, and plenty of Nobel prizes have been granted for econometrics, including the very first one.

    I listed plenty of references to textbook treatments of these issues. I hint again: I know for a fact that these books are ‘out there’… so there is no reason not to consult the literature.

    In addition, I would like to stress the following (which I remarked on March 20th):

    “I would like to reiterate that the TSA body of literature is not trivial. The unit roots we have been discussing here over the past two weeks concerns the first chapter in Hamilton (1994), that stretches some 10-15 pages (those pages contain much more in fact). The book itself is almost 800 pages thick, and consists for the most part (70%+) of pure formal notation (i.e. mathematical statistics packed in matrices).”

    Please respect this. Statistics is a formal discipline.

    ———————-

    Now, I’m going to take a little break from blogging. I have a lot of things to do here (and this is very time intensive), and I believe my statistical argument has been presented fully and explained ad nauseam.

    Also, I’m really tired of typing the very same thing for the tenth time because people are ‘too busy’ to read what I already posted.

    I’ll be back in full at one point, and in the meantime I’ll drop by and check what people are posting here.

    ———————-

    Happy Easter everybody, and see you again soon!

    Best, VS

    PS. I would like to take this opportunity to thank everybody for their support, both publicly in this thread and in private emails. While I mostly shied away from responding directly, I admit that it was these messages that kept me going in light of all the smears/twists/strawmen/attacks.

    Again, thank you all for the adventure (so far)… and I assure you: this ain’t over yet… ;)

  1210. VS Says:

    PPS.

    And (of course) I would like to thank Bart for his hospitality :) We started off on the wrong foot, but I think we both, by now, respect where the other is coming from, scientifically.

    This is science!

    Bart, that beer is definitely coming ;) …I’ll email you about when it suits..

  1211. AndreasW Says:

    Dewitt

    Can I take your comment as a claim, namely that the presence of a unit root (or two) does not rule out a deterministic trend, with your example at the Blackboard as the proof backing it?

    This is interesting. Now we have two competing formal claims:

    VS

    The presence of a unit root rules out a deterministic trend.

    Dewitt

    The presence of a unit root does not rule out a deterministic trend.

    Eduardo

    It seems to me that you admit that the daily temperature can be non-stationary. Everybody agrees that long-term temperatures must be stationary, otherwise the climate would be “unphysical”. So what we all want to know is when you think the temperature goes from non-stationary to stationary. In Zorita et al you assume the temperature to be stationary, so the 128-year temperature record is long enough to assume stationarity. How short a timescale do you need to assume non-stationarity, and why?

  1212. phinniethewoo Says:

    “I would like to reiterate that the TSA body of literature is not trivial. The unit roots we have been discussing here over the past two weeks concerns the first chapter in Hamilton (1994), that stretches some 10-15 pages (those pages contain much more in fact). The book itself is almost 800 pages thick, and consists for the most part (70%+) of pure formal notation (i.e. mathematical statistics packed in matrices).”

    hmm
    this is a very expensive book again

    It is like W. Feller’s book on probability, which goes for 110 dollars for a book from the 70s… I wonder if Mr Hamilton could dump the mentioned chapter somewhere for us.

    Or, alternatively, baghwan Pachaundry could use one of the many tax billions he gets for his “irrefutable research” and buy out the copyright for all of us? cheers.
    if it is Mickey Mouse literature, that comes cheap.

    430 used copies, from 0.01 US dollar

  1213. phinniethewoo Says:

    All these calculation & simulation results come from the open source stats package and language “R”, right?
    http://www.r-project.org/

    Referring to VS’s first March 4 posting, at the top, I wonder if some university student (or a team of PhDs from Pachaundry’s stable for my part; they haven’t been doing anything useful for the last 10 years anyway, it seems) could compile for us a step-by-step instruction to reproduce VS’s first ADF test result on the CRUTEM data?
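
    Something like the following minimal sketch might be a starting point (the file name and layout are hypothetical, and this is not VS’s actual procedure):

    library(urca)
    crutem <- read.csv("crutem_annual.csv")            # two columns assumed: year, anomaly
    y <- ts(crutem$anomaly, start = min(crutem$year))
    summary(ur.df(y, type = "trend", selectlags = "AIC"))  # compare statistic to critical values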

  1214. sod Says:

    ADF: unit root
    KPSS: unit root
    ZA: unit root
    DF-GLS: unit root
    ADF-outlier: unit root

    versus.

    PP: no unit root, but these test results are disqualified via both Monte Carlo (on both ARIMA(3,1,0) and ARIMA(0,1,2)) and the Stock (1994) reference (the negative root of the MA polynomial). Furthermore, the ADF test turns out to be exact conditional on ARIMA(3,1,0). This last scenario I haven’t tested for ARIMA(0,1,2), but theory suggests a similar outcome, due to invertibility (both MA roots fall within the complex unit circle) vis-à-vis the PP Monte Carlo.

    VS is teaching a lesson in how to lie with statistics:

    accept every test result that fits your belief. dig out the most exotic tests that support your claim.

    if a test does NOT agree with you, double check that test. until you find a way to dismiss that test.

    if another person finds problems with the tests that you did apply (Tamino: the test looking only for linear trends), simply ignore it. or claim that you dealt with it. (obscure links to old comments help)

    So, this was the unit root question. Note that the results of BV coincide perfectly with my results. I just take the unit root question, that they try to avoid, and take it to the next level.

    yes, taking things to the next level. that is exactly what VS is doing here…

    they do NOT come to “perfectly” the same results. they simply do NOT dismiss the PP test.

    but they do another test, that is similar to what VS did.

    Click to access wp495.pdf

    they try to forecast from 1950 on, with the unit root model “trend”. and they find modern temperature outside the wide interval. this is in total contradiction to what VS claims!

    and VS has neither answered the question of how he cherry picked his 1935 year, nor why his FORECAST interval is huge even around the data up till 1935 that he is using for his forecast.

    Click to access wp495.pdf

    again something that Breusch is doing differently (and correctly)…
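
    the exercise can be sketched in R roughly like this (y assumed to be an annual anomaly vector starting in 1880; NOT their actual code):

    fit <- arima(y[1:71], order = c(0, 1, 2))      # fit on 1880-1950 only
    fc  <- predict(fit, n.ahead = length(y) - 71)  # forecast 1951 onwards
    upper <- fc$pred + 1.96 * fc$se                # approximate 95% interval
    lower <- fc$pred - 1.96 * fc$se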

  1215. John Says:

    Vs

    I think the main problem you have here is a failure to grasp the concept of ‘formal testing’, i.e. you followed a procedure to test which ARIMA model to use, while others believe it ‘looks better/should be something else’ without testing. I think they assume you stuck your finger in the air or threw dice to decide. Coming from an electrical background, I’m well aware of the reasoning behind and need for formal testing procedures, which can involve accountability, reliability, safety, regulatory and many other concerns. It is somewhat ironic in this case that the formulation of these test procedures relies on extensive statistical testing; indeed, should you ever question the need for or structure of any test, you will be confronted by statistics.

    Should you decide to continue, which I and many others hope you will, it may be better to publish a PDF for commenting.

    The quality control on the data may prove to be a stumbling block and probably largely invalidates any results anyway. Indeed, even the graphs at the top of this thread are iffy; 1960 and 1915 on GISS are odd to say the least.

    Anyway, thanks for your time, it’s been great!!

    Regards

    John

  1216. John Says:

    Sod

    Sod, fool that you are, you do have your uses, plus excellent timing.

  1217. Henry Crun Says:

    Finally!! A reason for living in the UK. The book’s cheaper here!

  1218. phinniethewoo Says:

    I tried to read up on unit roots on one of VSs references:
    http://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test

    However, if you check the references, e.g. ref 4, you get into paid subscription territory: a 1990 article on unit roots and cointegration is only available when you pay.

    I had the same experience when I tried to read the 3 scientific reports that sweep UHI under the rug: all safely behind paid subscription locks.
    fxxk them.

    the only thing that’s free is Al Gore crud.

    governments and political parties do a lot of posturing that they are all pro science education etc. (distributing movies where teenagers stare through goggles at burning potassium)… but when push comes to shove, they are in a frenetic drive of “how do we communicate”… and they mean by this: how do we push Al Gore crud to the masses, while further increasing copyrights.

    I am going to learn Chinese to get rid of this problem: when something useful has been written, I am sure a Chinese speaker translates it and it is free for all.

  1219. DLM Says:

    jr,

    No, I didn’t use “heinous” to describe your foolishness; I said “gratuitous” and “funny”. I could have added “disingenuous” and “ignorant”.

    jr says: “…I don’t really have anything else of substance to add anyway.”

    You got that right, other than your implication that you have added something of substance. And you are not even as amusing as sod.

  1220. phinniethewoo Says:

    I object to the patching in of undated “replies” and to changing the article and postings everywhere.
    Scientific rigour should mean keeping the chronological record of all posts intact, even if it is full of faults.

    A few months further on, I am afraid this blog will appear to have been written by whatshisname Tamino & co.

  1221. phinniethewoo Says:

    ref1
    http://www.jstor.org/pss/2286348

    the original DF article, from 1979, 31 YEARS AGO, is available for 14 USD??
    not going to win the war like that, in the west..

    do you need to pay copyright nowadays to read Moses’ 10 commandments, I wonder.

  1222. Kweenie Says:

    “Now, I’m going to take a little break from blogging.”

    A pity, a pity, a pity.

    VS, you’ve personally provided the most lucid and thought provoking discussion re climate the last year(s).

    a crate of beer for me.

  1223. phinniethewoo Says:

    someone wrote it already above, I guess, but VS corrected all GCMs in one stroke by saying that temp is I(1)

    Earth’s temp is supposed to be the measure of earth’s calorific content, so any temp(t) is going to be temp(t-1) + Delta

    it has to be autoregressive.
    Pity the climate scientists who want to refute that.

    a crate for me too

  1224. DeWitt Payne Says:

    I’ve found a problem with Tamino’s use of the CADF test. He uses xx~F as the input vector, but that doesn’t seem to be data. I’ve asked him what he meant to do, but I’m not holding my breath waiting on the reply.

    Sorry about the link not working in my earlier post. I probably forgot to close the quote on the URL in the html tag. I’ve been spoiled by CA Assistant. Having html quicktag buttons and a preview function is really nice. Speaking of which:

    Bart,

    You might want to ask Mr.Pete if he can modify CA Assistant to work here.

  1225. DeWitt Payne Says:

    phinniethewoo Says:
    April 2, 2010 at 20:44

    has to be autoregressive.
    Pity the climate scientists that want to refute that.

    Could you please cite anyone who claims that the temperature series isn’t AR(n), where n is at least 1, and/or has at least one moving average term? You’ll find lots of people who will dispute that the root of the temperature series is precisely equal to 1 rather than near 1, because the temperature clearly isn’t a pure random walk, or the planet would have frozen solid or the seas boiled long before now.

  1226. HAS Says:

    BTW DeWitt (and Lucia)

    I rather suspect that Koutsoyiannis (2009), referenced above, is also saying that the use of synthetic data of known source has limitations when evaluating the absolute (rather than perhaps the relative) performance of this kind of analysis, i.e. not being able to detect a known signal need not be a hanging offense.

  1227. DeWitt Payne Says:

    Tamino’s syntax was correct after all, but while F doesn’t appear to have a unit root, it doesn’t test as stationary by the KPSS test either. According to this paper on the CADFtest function in R ( http://www.jstatsoft.org/v32/i02/paper ), stationarity is a requirement for covariates in the CADF test.

  1228. phinniethewoo Says:

    dewitt

    “near 1” is sophistry (how near?)

    temperature is, in a first approximation, defined by the temperature it was before. The rest term is stochastic, of ever increasing uncertainty (variance -> inf).

    This model is exact because it e.g. encapsulates the economic uncertainty in future GHG discharges.
    no climate model so far does this… amateurs!

  1229. phinniethewoo Says:

    tamino is wrong!

    (do not know who he is or what he says btw… does he exist, Tamino? or is he a fictional character like “Kilroy was here!”)

  1230. phinniethewoo Says:

    dewitt

    a unit root does not imply randomness.

    how many times does this need to be said before the back of the class catches on… sloppy, sloppy

    what we need to refrain from is allowing sophisticates to construct frail diatribes that try to discredit the formalism of econometrics and statistics built up over so many years. let’s first exploit this knowledge and its tests and procedures to prove/disprove causality between temp and CO2.

    you had your chance and 100 billion to propose a GCM with temp in it up front. be quiet now. shush.

  1231. AndreasW Says:

    Dewitt Payne

    If I understand correctly, you dispute VS’s claim that the temperature series contains a unit root. You and “a lot of people” think the series contains a near unit root instead. Could you please explain why and how VS’s claim is wrong?

    Ps
    Thanks for engaging in this debate. What we’ve seen enough of in the climate blogworld is one-sided preaching by top players in their own backyards. This is like seeing Mann and McIntyre head to head in a primetime fight.

  1232. DeWitt Payne Says:

    To add more fuel to the fire, this page ( http://www.stat.pitt.edu/stoffer/tsa2/R_time_series_quick_fix.htm ) on time series analysis using R fits an ARIMA(1,1,1) model to the GISS temp series from 1880-2004 and finds a (barely) significant drift term of 0.6 C/century. It also details some problems with R when fitting ARIMA models with the R arima() function. I really don’t think this is anywhere near as cut-and-dried as VS makes it look.

    The behavior of a system with a root less than one is quite different from a system with a root exactly one. It is not sophistry at all to refer to near unit roots. Try searching on ‘near unit root’ and see how many hits to scholarly papers you get.
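
    A quick way to see the difference in R (coefficients purely illustrative):

    set.seed(7)
    e  <- rnorm(1000)
    ar <- filter(e, filter = 0.95, method = "recursive")  # near unit root: mean-reverting
    rw <- cumsum(e)                                       # exact unit root: wanders unbounded
    matplot(cbind(ar, rw), type = "l", lty = 1, ylab = "value")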

  1233. Anonymous Says:

    VS,

    Happy Easter to you, too.

    Yes, I will read and reread this whole thread, looking up each piece of statistical terminology used & pursuing every reference given.

    As to the critiques of the VS statistical analysis as being contrary to physical understanding (physics), or critiques of it having little or no physical meaning, those critiques I find very unconvincing. Physicality will be a focus of mine while you take a break from blogging.

    VS, what you are doing here is wonderful. Simply, thank you. Some things in life are really fun, but I never thought statistics would be one of them. This is like getting a Christmas present in March. : )

    Bart, thank you for hosting this pursuit of scientific enlightenment.

    Other commenters, thanks for helping to make this a detailed “down & dirty” process.

    I am eagerly awaiting VS’s next semester, starting after his Easter break.

    John

  1234. DeWitt Payne Says:

    AndreasW Says:
    April 2, 2010 at 23:32

    Could you please explain why and how VS’s claim is wrong?

    A pure unit root process is like a tank with no leaks. If you put a bucket of water into the tank, the level will go up by exactly the volume of the bucket divided by the surface area of the tank and stay there. If you randomly take out and put in buckets of water such that the average over time is zero net water, the tank level can wander arbitrarily far from the initial level. This is known as a random walk. The two dimensional example is the drunk and the light pole. If the drunk starts out at the light pole and at every time step takes a fixed distance step in a random direction, the expected value of his distance from the light pole increases as the square root of time.

    A process with a root less than one is like a tank with a leak. Borrowing from lucia, let’s say that the leak is a pipe with a porous plug such that the flow rate varies linearly with pressure. If you don’t add water, the level in the tank will decrease exponentially with time to zero. So to maintain a constant level, there must be a constant input. Do you see how this is analogous to the Earth with radiant energy being the flow rate and temperature the tank level? The exact value of the root will depend on the dimensions of the tank and the properties of the porous plug and can range from zero to one. Of course for the Earth, there are effectively more than one tank and their time constants differ, possibly over orders of magnitude between the deep ocean and land surface well away from the ocean. Also, the temperature/heat content relationship is not anywhere near linear over large ranges of heat content. But over a short range, one can always use a linear approximation.
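
    In discrete time, a minimal sketch of the tank analogy (all constants hypothetical) looks like this:

    set.seed(11)
    n      <- 500
    leak   <- 0.1                          # fraction of the level lost per step
    inflow <- 1 + rnorm(n, sd = 0.2)       # constant input plus noise
    level  <- numeric(n)
    for (t in 2:n) level[t] <- (1 - leak) * level[t - 1] + inflow[t]
    plot(level, type = "l")                # an AR(1) with root 0.9, settling near 1/0.1 = 10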

  1235. HAS Says:

    DeWitt on April 2, 2010 at 23:40

    …and others have fitted a linear regression model and found highly significant trends in the data. The proof of this pudding is in the derivation of the most appropriate model given the data, not in fitting any old model and quoting a result.

    Unfortunately, R.H. Shumway & D.S. Stoffer simply assert that the series looks like ARIMA(1,1,1) rather than demonstrate that it does (and in their defence, they are only doing this in the context of demonstrating the use of R on time series, rather than as a serious contribution to the literature). If you are going to include an MA term, I think the various references cited here would suggest ARIMA(0,1,2) would be a better fit to the data.

    I’ll have a poke around the “near unit root” stuff and revert.

  1236. Steve Fitzpatrick Says:

    “1,235 Responses” on one (small) part of climate science gives some indication of both the complexity of issues involved and the passion with which positions are held.

    I think it will take a while for things to be sorted out.

  1237. phinniethewoo Says:

    the expected value for the drunk in the long term is the pole.
    Consult with symmetry theoretical physicists.. :)

    if it is not the pole, just show me the point where he will be, and I will show you three others of equal chance.
    the variance increases with time

  1238. DeWitt Payne Says:

    HAS Says:
    April 3, 2010 at 00:37

    If you are going to include a MA term I think the various references cited here would suggest ARIMA(0,1,2) would be a better fit to the data.

    Now that I’ve found Stoffer and Shumway, I feel more confident about doing some ARIMA fits in R. Either VS is using a different temperature series than I am, I still don’t know what I’m doing (actually that’s highly likely), or VS is doing something wrong. A (3,1,0) model is really bad, even for data restricted to 1880-1935:

    > arima(diff(yy[1:56]),c(3,0,0))

    Call:
    arima(x = diff(yy[1:56]), order = c(3, 0, 0))

    Coefficients:
             ar1      ar2      ar3  intercept
         -0.0933  -0.3254  -0.1148     0.0955
    s.e.  0.1341   0.1240   0.1369     0.9086

    sigma^2 estimated as 103.9: log likelihood = -205.87, aic = 421.74

    Note that I’m using diff(yy) so that the model is really (3,1,0) because of the problems with arima() in R pointed out by S&S. The AR1 and AR3 coefficients are less than twice their standard error, so they aren’t significant and the model is misspecified.

    Compare that with (2,1,1)

    > arima(diff(yy[1:56]),c(2,0,1))

    Call:
    arima(x = diff(yy[1:56]), order = c(2, 0, 1))

    Coefficients:
            ar1      ar2      ma1  intercept
         0.6077  -0.3243  -1.0000     0.4354
    s.e. 0.1278   0.1305   0.0508     0.1031

    sigma^2 estimated as 78.24: log likelihood = -199.85, aic = 409.69

    The AR1, AR2 , MA1 and intercept are significant.

    If I use the full 1880-2003 data, then the ‘best’ model seems to be (1,1,2), and the drift term is still significant.
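
    For completeness, a compact way to compare candidate orders on the same series (a sketch; yy is assumed to be the series used above):

    orders <- list(c(3,1,0), c(0,1,2), c(1,1,1), c(1,1,2), c(0,1,1))
    sapply(orders, function(o) AIC(arima(yy, order = o)))   # smaller AIC = better trade-off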

  1239. phinniethewoo Says:

    small leaks are a small worry and big leaks are a big worry.
    gosh, I am getting Yogi Berra moments here with the back of the class.

  1240. DeWitt Payne Says:

    phinniethewoo Says:
    April 3, 2010 at 01:05

    the expected value for the drunk in the long term is the pole.
    Consult with symmetry theoretical physicists.. :)

    That’s only true for a one dimensional walk. In two dimensions with the pole at 0,0, the walk will cross the x and y axes, but almost never at the same time. See this figure for example.

  1241. phinniethewoo Says:

    Dewitt
    I agree you must be right..
    but where will he end up then, in the long run?

  1242. phinniethewoo Says:

    if the drunk has 2 coins which he tosses up all the time, what is the EV for coin1, and the EV for coin2 ?

  1243. HAS Says:

    De Witt

    If you have a look at VS March 23, 2010 at 14:13, you will see that the constant gets eliminated from the model because it is non-significant. But even taking this into account, your coefficients are quite different. I don’t know enough about R to see what’s gone wrong, but at VS March 25, 2010 at 12:22 you will see the procedure followed for the time series up to 1935.

    A couple of other points (and I hope I’m not doing “grandmothers and eggs” here).

    First, just trying models and finding one that fits better rather misses the point. The statistics that you are using to say “this fits better” depend upon the data conforming to certain rules, and you need to ensure this is the case. VS goes to some pains to work through these.

    Second, by including both AR and MA terms there is, as I understand it, the chance of over-fitting the model, i.e. you get spuriously high correlations.

    Have a read back through what VS has done, and the tests involved.

  1244. DeWitt Payne Says:

    phinniethewoo Says:
    April 3, 2010 at 01:48

    if the drunk has 2 coins which he tosses up all the time, what is the EV for coin1, and the EV for coin2 ?

    You’re restricting travel to just the x and y direction with two coins. I’m not sure what effect that has compared to being able to move in any direction. The tank is indeed one dimensional and the expected value of the level of a tank with no leak and no net addition of water is the initial value. But the max and min may become arbitrarily large. Aren’t we wandering OT?

  1245. Rich Says:

    “…because the temperature clearly isn’t a pure random walk or the planet would have frozen solid or the seas boiled long before now.”

    DeWitt, it seems to me this is a dimensionally challenged line of reasoning. Try thinking of a random walk on the surface of a sphere (or any three dimensional form) mapped to a two dimensional time series. You can have a “pure random walk” without the dependent variable running to +/- infinity. It will be bounded by the radius of the sphere and yet be a random walk.

  1246. DeWitt Payne Says:

    HAS Says:
    April 3, 2010 at 02:03

    I hope he ran the test differently from how he specified it in that post.

    D(GISS_all(t)) = constant + AR1*D(GISS_all(t-1)) + AR2*D(GISS_all) + AR3*D(GISS_all) + error(t)

    I’m pretty sure the equation should be:

    D(GISS_all(t)) = constant + AR1*D(GISS_all(t-1)) + AR2*D(GISS_all(t-2)) + AR3*D(GISS_all(t-3)) + error(t)

    arima(diff(yy),c=(3,0,0) should be doing exactly the same thing. But I don’t get anything like his result.

  1247. Anonymous Says:

    VS,

    I am not Anonymous.

    It is just that I keep switching computers and then forgetting to enter my name and email before commenting.

    John

  1248. John Whitman Says:

    I think I finally got rid of my ‘anonymous’ posting errors.

    NOTE: If I were actually anonymous, then I would be less embarrassed.

    : (

    John

  1249. HAS Says:

    de Witt

    On my reading of Issue 2 on http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm with a constant equal to zero arima(yy,c=(3,1,0)) should generate the same result, does it?

    Also I assume c=(3,1,0) is equiv to order = c(3,1,0), and lack of closed bracket is a typo? http://stat.ethz.ch/R-manual/R-patched/library/stats/html/arima.html
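
    If someone with R handy wants to check, a minimal sketch (with yy assumed to be the anomaly vector) would be:

    arima(yy, order = c(3, 1, 0))                              # d = 1, so no intercept is fitted
    arima(diff(yy), order = c(3, 0, 0), include.mean = FALSE)  # should match the AR coefficients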

  1250. cohenite Says:

    Like Rich [April 3, 03:03], I too have doubts about DeWitt’s alternative scenario for a unit root temperature, either snowball or Venus. As well as being bounded by the sphere, surely the delimiting factor is insolation; as I noted, the effective temperature of the Earth is 255K, and while we may have some drunks staggering around with the greenhouse component of temperature, 33K, I would suggest Sol is much too sober for random walks.

  1251. cohenite Says:

    I should add that DeWitt’s concern is still a genuine one: if temperature is based on a unit root, what restorative is there apart from the ones mentioned? Water comes to mind, with its variation of form and function. After reading the humidity literature you would be justified in concluding that water seems to work against any temperature movement, but is it reasonable to say that, if temperature is unit root based, a feedback such as water works against any random tendency in temperature in a consistently opposing way?

  1252. DeWitt Payne Says:

    It helps to test the correct data set. The previous fits are for the forcing data not the temperature series. *slaps forehead* D’oh!

    However, I still don’t get a good fit with (3,1,0) for the GISS data.

    Forcing a no-trend calculation for 1880-1935:
    > arima(giss[1:56],order=c(3,1,0))

    Call:
    arima(x = giss[1:56], order = c(3, 1, 0))

    Coefficients:
             ar1      ar2      ar3
         -0.5202  -0.1276  -0.2596
    s.e.  0.1292   0.1464   0.1326

    sigma^2 estimated as 362.4: log likelihood = -240.36, aic = 488.72

    AR2 is not significant.

    The best model in terms of minimizing sigma^2 with the minimum number of coefficients is:

    > arima(diff(giss[1:56]),order=c(0,0,1))

    Call:
    arima(x = diff(giss[1:56]), order = c(0, 0, 1))

    Coefficients:
           ma1  intercept
         -1.00     0.5227
    s.e.  0.08     0.1512

    sigma^2 estimated as 334.3: log likelihood = -239.88, aic = 485.76

    The p-value of the JB test of the residuals is 0.82 so it fails to reject normality.

    If I extend the calibration period to 1950 and test with a (0,1,2) model as in B&V, the MA2 coefficient isn’t significant. So again, the best fit is (0,1,1):

    > arima(diff(giss[1:71]),order=c(0,0,1))

    Call:
    arima(x = diff(giss[1:71]), order = c(0, 0, 1))

    Coefficients:
              ma1  intercept
          -0.9044     0.6534
    s.e.   0.1019     0.2725

    sigma^2 estimated as 348.0: log likelihood = -305.01, aic = 616.02

    The JB test again fails to reject normality.

    Here are the projections from the two fits. The temperature is in C*100 and I didn’t bother to put in the year for the x axis. Looks pretty good to me, but what do I know.

  1253. Tim Curtin Says:

    I have noted here before that Bart began this fascinating series with graphs showing “global” mean temperature anomalies since 1880 and his comment “Temperatures jiggle up and down, but the overall trend is up: The globe is warming”. On his latest thread Bart shows a very similar graph portraying his weight since 1980, and evidently his weight jiggles up and down but the overall trend is up, Bart is getting heavier. However both trends apparently have unit roots, so linear projection into the future may not be valid, either for global mean temperature (GMT) or for Bart’s weight. Yet that is what the IPCC (Solomon et al 2007) does throughout AR4 WG1 for both CO2 and GMT.
    Be that as it may, I am sure we believe Bart’s account of the observed trend in his weight since 1980, but we have no basis for equal belief in the trends in GMT anomalies since 1880 in his first graph. I have just begun making full use of the admirable GISStemp “game” at its Home site, http://data.giss.nasa.gov/gistemp/maps/, and great praise is due to Hansen and Sato et al for making this available in a user interactive format which provides not only the maps of choice but even the temperature data behind them by latitude and longitude within radii of either 250 km or 1,200 km (a mission impossible to this day for CRU).

    Setting the radius at 250 km, and taking the first decade of Bart’s graph, 1880-1890, the map reveals that “global” mean temperature perforce excluded the whole of Central America, virtually all of South America and Africa, all of Thailand, Vietnam, Cambodia, Malaysia, Indonesia, and almost all of China.

    The situation had not changed much by the 1900-1910 decade, with Central America still absent despite Panama, along with most of South America and Africa, all of Thailand, Vietnam, Malaysia and Indonesia, and China likewise for the most part. Just as Bart’s exclusion of his weight data before 1980 gives a more reassuring trend, so excluding the world’s hottest places up to 1910 accentuates the “warming” anomaly, to 0.6°C since 1900 in his graph here. Thus that warming is barely 0.4°C since 1940, by when global coverage was acceptable, and hardly statistically significant – even though the anomalies in the data sets one can download for each decade are given to no less than 4 decimal places.

    Breusch and Vahid (unpublished, 2008), in their “proof” of a warming trend in GMT since 1850 (in the absurd HadleyCRUT series) and since 1880 (NCDC and GISS), emphasise (p.1) that they do not consider the provenance of the data they use, thereby assuming, like Bart, that the global temperature coverage in all 3 data sets they use is the same throughout. Is this valid, given the fairly trivial trends they claim to have found?

  1254. HAS Says:

    DeWitt, the confidence limits don’t look right IMHO, but what’s going on I know not.

    Tim

    The interesting thing about the unit root discussion is where this is coming from. It could just be an artifact of how the series is put together (and this in itself would be interesting). However, if it travels back into the raw temperature as measured, then it raises a number of questions about the processes used to produce gridded temperatures in general, and the confidence limits around them in particular.

    While most of the traffic on this thread has been from people who worry about how the particular model fits their view of what the physical processes should look like, the much more important stuff will come out of a close examination of whether the methods used to aggregate data are consistent with the nature of the data. This is why VS has been keen to find papers where least squares have been used without first testing for unit roots and autocorrelation.

    For my part I’m going to spend some time on the issue that got me interested in all this (and applying what I’ve learnt here) to review the way confidence limits have been developed for the published time series.

  1255. Demetris Koutsoyiannis Says:

    Willis Eschenbach, DeWitt Payne, Cohenite, HAS,

    Thanks for discussing my works. HAS (in #comment-3210) correctly quotes an interesting point of Montanari’s critique of my paper “A random walk on water” (http://www.itia.ntua.gr/en/docinfo/923/), which I neglected to discuss in my “official” reply. So I take the opportunity to answer it here.

    The comment is related to the ARMA models. The related point in my paper is: “In this respect, stochastics should not be identified with the very common ARMA or similar types of models”. Montanari’s comment quoted by HAS is: “The author seems to imply that the separation of deterministic and random dynamics as additive components is inappropriate. I believe it would be advisable to specify that under certain assumptions such disaggregation is justified. Actually, there are numerous examples (the author correctly cites the ARMA models) where this kind of separation is properly used.”

    From the above extract from my paper, it may become clear that I am not a fan of ARMA models. I do not use them except the simplest AR(1), AR(2) and ARMA(1,1), which, under certain conditions, have some physical realism. In my 2000 paper “A generalized mathematical framework for stochastic simulation and forecast of hydrologic time series” (http://www.itia.ntua.gr/en/docinfo/18/), which is also pointed out by HAS (in #comment-3210), I have proposed a framework that totally replaces ARMA models in stochastic simulation and prediction using a more generalized and less mind-trapping formulation, I think. For, the emphasis in ARMA models is on the computational part and not on the explanatory part. As I write in that paper, “The reason for introducing several [ARMA] models and classifying them into different categories seems to be not structural but rather imposed by computational needs at the time when they were first developed. Today, the widespread use of fast personal computers allows a different approach to stochastic models. In this paper, we try to unify all the above-described models, both short memory and long memory, simultaneously modelling the process skewness explicitly.”

    By the way, I warmly endorse the following quotation from N. N. Taleb (The Black Swan, Penguin, 2008, p. 128): “Probability is a liberal art; it is a child of skepticism, not a tool for people with calculators on their belts to satisfy their desire to produce fancy calculations and certainties”.

    In my view, the main concern in stochastic modelling is how to condition the unknown future (say, the variables x_1, x_2, …, for lead times 1, 2, …) on the known present (x_0) and past (x_-1, x_-2, …). This is usually done by using linear relationships among x_i’s of stochastic type. Why linear? First, for simplicity–and one should be aware that linearity in stochastic modelling is fundamentally different from linearity in deterministic modelling. Whilst deterministic linearity is a very weak property, physically unrealistic, stochastic linearity is powerful and realistic, particularly in very complex systems. There is a justification of stochastic linearity based on the principle of maximum entropy (for more information see Fig. 1 and the discussion just below it in the paper “Medium-range flow prediction for the Nile: a comparison of stochastic and deterministic methods”, http://www.itia.ntua.gr/en/docinfo/799/).

    Now, the standard view of ARMA models, as described in Montanari’s comment quoted above, is that they disaggregate (or separate) deterministic and random components in a system’s dynamics. This view originates from the way these models are written as the sum of two parts: a part containing past observations (x_-1, x_-2, …) and a sum of weighted “random noises”. This view is misleading in my opinion. My view is that they just try to formulate a convenient manner of conditioning the future on the past and present, using linear relationships and expressing the future uncertainty in terms of some random variables, which are not “noises” but rather representations of the uncertainty.
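
    A minimal R sketch of this conditioning view (illustrative only; the AR(1) coefficient phi, the innovation SD sigma and the known present value x0 are assumptions, not fitted values): the conditional mean phi^k * x0 and conditional variance sigma^2 (1 - phi^(2k)) / (1 - phi^2) describe the future given the present directly, with the “random” terms expressing uncertainty rather than a separable noise component.

    # Sketch: AR(1) as linear conditioning of the future on the present.
    # phi, sigma and x0 are illustrative assumptions, not fitted values.
    set.seed(42)
    phi   <- 0.8          # autoregressive coefficient
    sigma <- 1            # innovation standard deviation
    x0    <- 2            # known present value
    k     <- 20           # lead time

    # Theoretical conditional moments of x_k given x_0
    cond_mean <- phi^k * x0
    cond_var  <- sigma^2 * (1 - phi^(2 * k)) / (1 - phi^2)

    # Check against simulated paths started from x0
    paths <- replicate(5000, {
      x <- x0
      for (i in 1:k) x <- phi * x + rnorm(1, 0, sigma)
      x
    })
    c(mean_theory = cond_mean, mean_sim = mean(paths))
    c(var_theory = cond_var, var_sim = var(paths))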

  1256. cohenite Says:

    Given Demetris’s comment it appears, in the context of DeWitt’s concern about the possible infinite spread of a unit-root-based temperature, that the stochastic properties of the complex system in which temperature is expressed can themselves be the restorative factor; or at least that seems to be the meaning of equation 1 in the linked paper about the Nile?

  1257. phinniethewoo Says:

    In fact (I ran R tests the whole evening now; it’s stuck) the drunk will always end up at his lantern, over and over again. He’ll see a lot of the town, but the lantern is his curse.

    Probability, the law of large numbers, and symmetry demand this of us.

    Some pundits who joined in on observing the drunk’s walk a bit later, say while he was in the leafy northeast end of town (close to a UHI site), might contend: No, he’ll always end up at the UHI site! That’s his place of return of the highest probability!

    And they are right of course.

    But the point is those pundits are a minority opinion.
    (Pundits are equally distributed over town, normally, with a slight bias to the bohemian neighbourhood, maybe.)

    the lantern is his curse.

  1258. Niche Modeling » Four Central Problems of Climate Science Says:

    […] by the interest VS has rekindled in fundamental analysis of the temperature series at Bart’s and Lucia’s blogs, below are a small set of core ‘problems’ facing statistical […]

  1259. A C Osborn Says:

    Tim Curtin Says:
    April 3, 2010 at 05:52

    Have you been to Chiefio’s site http://chiefio.wordpress.com/ to see his analysis of the world temperature series? If you haven’t, you really should – it is very, very enlightening.

  1260. eduardo Says:

    dear folks,

    to be honest, I am getting a bit tired of this discussion, which imho is not about statistics but about logical reasoning. I am starting to think that too much chocolate has boggled my mind.

    This is how I view this:

    the question is *not* whether or not the instrumental record is stationary

    the question is: ‘if the observed record contains a unit root, is this unit root behaviour caused by exogenous factors or by endogenous factors?’ (By exogenous factors I mean the sun, GHG, volcanoes, etc. By endogenous I mean ENSO, PDO, and other modes of variability.)

    -if it is caused by exogenous factors, the next question would be which ones. If GHG are among these, to predict future temperatures we have to take into account GHG. This question has not been discussed here, so no further comment.

    -if it is, or may be, caused by endogenous factors, which I think is what VS claims (?), one concludes that we do not need any GHG (or any external factors for that matter).

    So far so good.

    Now, in my view the relevant question logically is: is a unit root test able to discriminate between endogenous and exogenous factors? My answer so far is no. The reason is that one can easily produce synthetic (or real) time series with and without deterministic trends, and the unit root tests indicate in both cases the presence of a unit root. Examples can be seen at Tamino’s and Lucia’s blogs. Am I wrong here?

    My conclusion of all this is: if the unit root test cannot discriminate between the two possibilities above, this discussion is futile. We need another test or more chocolate.

    I have tried really hard to see the point but so far.. nope. Like Lucia, I do not see that these basic logical questions have been or are being addressed, so probably I will not check the discussion very often.

    Enjoy the blogging

    @ Willis,

    Forcing and feedbacks. Dear Willis, forcings and feedbacks are different concepts, and they depend on the definition of the system that is the object of your study. You state ‘Since the amount of external forcing is varying, it must be a forcing … but the amount of forcing is based on a feedback, so it must be a feedback.’ This is incorrect. The amount of forcing is not affected by any feedback, by definition of forcing. Forcing is, for instance, the amount of incoming solar radiation at the top of the atmosphere. This is determined *solely* by solar physics. On the other hand, the amount of solar energy entering the system is of course the external forcing after being modulated by the feedbacks (e.g. clouds). So the amount of solar radiation reaching the surface is not a forcing.
    You can see the formal definitions here http://journals.ametsoc.org/doi/abs/10.1175/JCLI3819.1

  1261. VS Says:

    “the question is *not* whether or not the instrumental record is stationary”

    Wow

  1262. TT Says:

    Eduardo,

    You put the question this way: “if the observed record contains a unit root, is this unit root behaviour caused by exogenous factors or by endogenous factors”?

    But who said it has to be *either* exogenous *or* endogenous factors? Did VS say that? If so I missed it. I never saw him claim that the unit root means that it’s impossible to attribute temperature variations to some combination of GHGs + ENSO, etc. But if the temperature series contains a unit root, you have to correlate all factors or forcings (exogenous or endogenous) in accordance with a statistical method that is consistent with the unit root. An OLS linear trend for temperature is not one of those. That’s the main message I’ve taken from this thread. It is, as you say, a question that has to do with logical reasoning; and it doesn’t seem to me boring or uninteresting.

  1263. phinniethewoo Says:

    Gathering Easter eggs tomorrow?

  1264. DeWitt Payne Says:

    I forgot that GISS keeps moving the goal posts. When I use the most current values through 2009 (Jan-Dec average) I get a model much like VS’s, a (3,1,0) with a constant that is not significant at the 95% level:

    > arima(diff(giss),order=c(3,0,0))

    Call:
    arima(x = diff(giss), order = c(3, 0, 0))

    Coefficients:
                  ar1      ar2      ar3  intercept
              -0.4612  -0.3896  -0.3127     0.6251
    s.e.       0.0836   0.0862   0.0834     0.3903

    The p value for the intercept is 0.108 (I think, I’m not absolutely sure I have the correct number of degrees of freedom).
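
    If it helps, that p-value can be had directly in R, sidestepping the degrees-of-freedom question by using the same normal approximation that arima()’s standard errors rest on (a sketch; ‘giss’ is the series fitted above):

    # Sketch: z-test for the intercept of the fitted model, using the
    # normal approximation underlying arima()'s standard errors.
    fit <- arima(diff(giss), order = c(3, 0, 0))
    z   <- fit$coef["intercept"] / sqrt(diag(fit$var.coef)["intercept"])
    2 * pnorm(-abs(z))    # two-sided p-value; roughly 0.11 with the values above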

    Compare that to VS’:

    Constant: 0.006186 (0.1302)
    AR1: -0.452591 (0.0000)
    AR2: -0.383512 (0.0000)
    AR3: -0.322789 (0.0003)

    The significance of the slope is where there may be a difference between a unit root and a near unit root process. A near unit root process eventually reverts to the mean so any persistent deviation from the mean will be less likely for a near unit root process. I think. Maybe.

    If the model specification is that sensitive to relatively small changes in the data, though, is all this modeling and simulation really significant or not?
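
    To picture the difference (a hedged sketch with arbitrary parameters, not a claim about the actual series): an AR(1) just inside the unit circle is pulled back to its mean, while the exact unit root wanders without bound, and the spread of simulated endpoints shows it.

    # Sketch: near unit root (phi = 0.95) vs exact unit root (phi = 1).
    set.seed(1)
    n   <- 1000
    sim <- function(phi) {
      x <- numeric(n)
      for (t in 2:n) x[t] <- phi * x[t - 1] + rnorm(1)
      x
    }
    near <- replicate(200, sim(0.95)[n])   # endpoints of mean-reverting paths
    unit <- replicate(200, sim(1.00)[n])   # endpoints of random walks

    # Endpoint spread: bounded (about 3.2) for the near unit root,
    # growing like sqrt(n) (about 31.6) for the exact unit root.
    c(sd_near = sd(near), sd_unit = sd(unit))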

  1265. AndreasW Says:

    “The Da Vinci Code” is like a manual for a freezer compared to this, and it’s just getting better.

    Demetris
    You are so welcome to put some new paint on this painting.

    Seems we’re not out of the unit root woods just yet.
    First: Does the temperature contain a unit root or not?

    Dewitt
    If I understand you right, you don’t think the temperature contains a unit root, from a logical point of view. From your calculations above it seems you couldn’t replicate VS’s work. I’m not a statistician so I’m not sure what you did. Have you done a unit root test, and what was the result? Do you think VS messed up his tests, and if he did, could you point out how?

    Second: What does the unit root test means?
    According to VS the unit root means the temp series is non-stationary and we can rule out a deterministic trend.

    As I understand it this is disputed by Lucia, DeWitt, Tamino and Eduardo.
    They claim (correct me if I’m wrong) that the series can contain a unit root and still have a deterministic trend and be stationary.

    How is this possible? I thought that if a test confirmed the presence of a unit root, that meant the series was non-stationary.

  1266. Tony Says:

    Eduardo,

    It seems that the proper conclusion to be drawn is that there is not enough data to show anything much, even when the correct analysis is done. And what data there is seems to have been hammered together.

    So anything that is claimed to be significant in this relatively short data series might even be artefactual. (In other words, you may be detecting the joins.)

    Am I right in assuming that you are a climate modeller? If so, with regard to your wish to declare an endogenous/exogenous boundary, I’d recommend you look up the phenomenon of phugoid oscillations and their relatively long timescales.

    And finally, could you tell me why climatologists seem to be concentrating on the semi-surface max-min thermometer records and don’t seem to be looking at the other weather records such as baro pressure, rainfall, humidity, cloud cover, wind-vector?

    PS: H/T VS

  1267. manacker Says:

    Bart

    As a lurker here, let me say “thanks” for hosting a very interesting thread.

    Thanks to VS for raising the very provocative questions that have kept this thread buzzing.

    Thanks to all the other bloggers for keeping the discussion on topic with almost no name-calling and ad homs

    Max

  1268. DLM Says:

    “the question is *not* whether or not the instrumental record is stationary”

    Case closed. The discipline of ‘formal’ statistics is not useful to climate scientists. The science of ‘formal’ statistics is not logical. It belongs under the bus. OLS rules.

  1269. steven mosher Says:

    Bart, getting CA Assistant to work here would be great. It’s a must for uber-long threads. If you like I can ask MrPete.

  1270. VS Says:

    An excerpt from the second chapter of Haavelmo (1944).

    Trygve Haavelmo received the Nobel prize for economics, in part for this article.

    “If we compare the historic developments of various branches of quantitative sciences, we notice a striking similarity in the paths they have followed. Their origin is Man’s craving for “explanations” of “curious happenings,” the observations of such happenings being more or less accidental or, at any rate, of a very passive character. On the basis of such-perhaps very vague-recognition of facts, people build up some primitive explanations, usually of a metaphysical type. Then, some more “cold-blooded” empiricists come along. They want to “know the facts.” They observe, measure, and classify, and, while doing so, they cannot fail to recognize the possibility of establishing a certain order, a certain system in the behavior of real phenomena. And so they try to construct systems of relationships to copy reality as they see it from the point of view of a careful, but still passive, observer. As they go on collecting better and better observations, they see that their “copy” of reality needs “repair.” And, successively, their schemes grow into labyrinths of “extra assumptions” and “special cases,” the whole apparatus becoming more and more difficult to manage. Some clearing work is needed, and the key to such clearing is found in a priori reasoning, leading to the introduction of some very general-and often very simple-principles and relationships, from which whole classes of apparently very different things may be deduced. In the natural sciences this last step has provided much more powerful tools of analysis than the purely empirical listing of cases.”

    VS

  1271. VS Says:

    ..another nice part.

    CHAPTER III

    STOCHASTICAL SCHEMES AS A BASIS FOR ECONOMETRICS

    “From experience we know that attempts to establish exact functional relationships between observable economic variables would be futile. It would indeed be strange if it were otherwise, since economists would then find themselves in a more favorable position than any other research workers, including the astronomers. Actual observations, in whatever field we consider, will deviate more or less from any exact functional relationship we might try to establish. On the other hand, as we have seen, the testing of a theory involves the identification of its variables with some “true” observable variables. If in any given case we believe, even without trying, that such an identification would not work, that is only another way of saying that the theory would be false with respect to the “true” variables considered. In order that the testing of a theory shall have any meaning we must first agree to identify the theoretical with the observable variables, and then see whether or not the observations contradict the theory.

    We can therefore, a priori, say something about a theory that we think might be true with respect to a system of observable variables, namely, that it must not exclude as impossible any value system of the “true” variables that we have already observed or that it is practically conceivable to obtain in the future. But theories describing merely the set of values of the “true” variables that we conceive of as practically possible, would hardly ever tell us anything of practical use. Such statements would be much too broad. What we want are theories that, without involving us in direct logical contradictions, state that the observations will as a rule cluster in a limited subset of the set of all conceivable observations, while it is still consistent with the theory that an observation falls outside this subset “now and then”.

    As far as is known, the scheme of probability and random variables is, at least for the time being, the only scheme suitable for formulating such theories. We may have objections to using this scheme, but among these objections there is at least one that can be safely dismissed, viz., the objection that the scheme of probability and random variables is not general enough for application to economic data. Since, however, this is apparently not commonly accepted by economists we find ourselves justified in starting our discussion in this chapter with a brief outline of the modern theory of stochastical variables, with particular emphasis on certain points that seem relevant to economics.”

    ..and a bit further

    “Since the assignment of a certain probability law to a system of observable variables is a trick of our own, invented for analytical purposes, and since the same observable results may be produced under a great variety of different probability schemes, the question arises as to which probability law should be chosen, in any given case, to represent the “true” mechanism under which the data considered are being produced. To make this a rational problem of statistical inference we have to start out by an axiom, postulating that every set of observable variables has associated with it one particular “true,” but unknown, probability law. Since the knowledge of this true probability law would permit us to answer any question that could possibly be answered in advance with respect to the values of the observable variables involved, the whole problem of quantitative inference may then in each case be considered as a problem of gathering information about some unknown probability law.”

  1272. VS Says:

    From the Nobel page:

    “Many of Haavelmo’s other studies, such as a monograph on environmental economics which appeared long before such research came into existence, have been an inspiration to other researchers.”

    ;)

  1273. VS Says:

    DeWitt :)

    Suggestion: quit using R (it’s too ‘custom’, you don’t need that for this), get Stata/Matlab/EViews.

    Best, VS

  1274. phinniethewoo Says:

    Tamino is wrong!

  1275. DLM Says:

    “In order that the testing of a theory shall have any meaning we must first agree to identify the theoretical with the observable variables, and then see whether or not the observations contradict the theory.” (Apparently, he should have been more explicit by adding: And when the observations do contradict the theory, don’t just assume that the observations must be wrong.)

    That seems logical. And that was from a guy whose 1944 work won him a Nobel Prize, back before it was devalued by inflation. Recently they have handed it out to thousands of people at a time: Al Gore, the ‘2,500’ IPCC climate scientists, and the railway engineer. They very recently gave one to a guy, just because he wasn’t George Bush.

  1276. Bart Says:

    Eduardo echoes a lot of my concerns: What consequences, if any, do these analyses have for the attribution of the 20th century warming? Probably not much.

    What TT says is true as well:

    “But if the temperature series contains a unit root, you have to correlate all factors or forcings (exogenous or endogenous) in accordance with a statistical method that is consistent with the unit root. An OLS linear trend for temperature is not one of those. That’s the main message I’ve taken from this thread. ”

    But that has no bearing on attribution (explaining what caused the warming).

    I’m still not sure if VS actually claims that the unit roots mean that there doesn’t need to be a reason for the warming; that it could have occurred by chance. Imho, energy balance considerations show that that can’t be the case.

    If the only thing to be taken from his analysis is that OLS is strictly speaking not valid (mainly that the errors of the fit will be underestimated; the fit itself is probably not affected much, though I haven’t gotten a straight answer on that), then let’s be clear about that.

  1277. VS Says:

    “If the only thing to be taken from his analysis is that OLS is strictly speaking not valid (mainly that the errors of the fit will be underestimated; the fit itself is probably not affected much, though I haven’t gotten a straight answer on that), then let’s be clear about that.”

    No Bart, OLS on the level series gives non-sensical results.

    Didn’t you show that with your blog entry? You calculate on different intervals, and you get different ‘trends’. How can you not see how pointless it is to estimate the parameters of those ‘trends’ as if you are in fact scientifically describing a real DGP, while they are in fact (as you demonstrate!) a matter of ‘opinion’. You can simply ‘pick’ the period you want to ‘trend’.

    This is not statistics, as it has nothing to do with discovering the true underlying DGP. Hence, there exists no ‘confidence interval’ for a hypothetical parameter governing that DGP.

    You can fit lines through dots, that’s perfectly fine. The Least Squares Approximation (i.e. (X’X)^(-1)(X’y) = ([1,t]’[1,t])^(-1)([1,t]’[giss]), where 1 = vector of ‘1’s, t = [t_begin, …, t_end]’, and [giss] is the temp record) will serve you well. However, this is not statistics, because you are not describing a valid DGP (in case of a unit root).
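
    To make this concrete, a minimal sketch (an illustrative simulation, not VS’s own code): apply exactly that least squares approximation to a pure random walk, and every window returns a different “trend” even though the DGP contains none.

    # Sketch: b = (X'X)^(-1) X'y applied to a simulated random walk.
    # The DGP has no deterministic trend, yet each window yields one.
    set.seed(7)
    y <- cumsum(rnorm(130))      # random walk: y_t = y_(t-1) + e_t

    ols_slope <- function(y) {
      t <- seq_along(y)
      X <- cbind(1, t)                              # X = [1, t]
      b <- solve(crossprod(X), crossprod(X, y))     # (X'X)^(-1) X'y
      b[2]
    }

    c(full  = ols_slope(y),
      early = ols_slope(y[1:60]),
      late  = ols_slope(y[71:130]))   # three windows, three "trends"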

    Those confidence intervals have no meaning. It’s not that you just have a little bit of bias in your point estimate and need a little confidence interval correction.

    It’s full fledged misspecification.

    Try to find time to read the whole article I posted today.

    Best, VS

  1278. VS Says:

    PS.

    “If the only thing to be taken from his analysis is that OLS is strictly speaking not valid.. then let’s be clear about that”

    What should be taken from this analysis is that non-stationarity (i.e. unit root infested DGP) invalidates any statistical procedure that assumes stationarity. OLS on level series is one such procedure.

    OK, now I’m really going back to painting eggs… ;)

  1279. VS Says:

    And now the real final one, and I take my ‘break’ :) Haavelmo’s conclusion, pp. 114-115. Remember, the article was published in 1944 (the Nobel prize awarded in 1989).

    Best, VS

    ———–

    CONCLUSION

    The patient reader, now at the end of our analysis, might well be left with the feeling that the approach we have outlined, although simple in point of principle, in most cases would involve a tremendous amount of work. He might remark, sarcastically, that “it would take him a lifetime to obtain one single demand elasticity.” And he might be inclined to wonder: Is it worth while? Can we not get along, for practical purposes, by the usual short-cut methods, by graphical curvefitting, or by making fair guesses combining our general experiences with the inference that appears “reasonable” from the particular data at hand?

    It would be arrogant and, indeed, unjustified to condemn all the short-cut methods and the practical guesswork which thousands of economists rely upon in their daily work as administrators or as advisers to those who run our economy. In fact, what we have attempted to show is that this kind of inference actually is based, implicitly and perhaps subconsciously, upon the same principles as those we have tried to describe with more precision in our analysis. We do, however, believe that economists might get more useful and reliable information (and also fewer spurious results) out of their data by adopting more clearly formulated probability models; and that such formulation might help in suggesting what data to look for and how to collect them. We should like to go further. We believe that, if economics is to establish itself as a reputable quantitative science, many economists will have to revise their ideas as to the level of statistical theory and technique and the amount of tedious work that will be required, even for modest projects of research. On the other side we must count the time and work that might be saved by eliminating a good deal of planless and futile juggling with figures. Also, it is hoped that expert statisticians, once they can be persuaded to take more interest in the particular statistical problems related to econometrics, will be able to work out, explicitly, many standard formulae and tables. One of the aims of the preceding analysis has been to indicate the kind of language that we believe the economist should adopt in order to make his problems clear to statisticians. No doubt the statisticians will then be able to do their job.

    In other quantitative sciences the discovery of “laws,” even in highly specialized fields, has moved from the private study into huge scientific laboratories where scores of experts are engaged, not only in carrying out actual measurements, but also in working out, with painstaking precision, the formulae to be tested and the plans for the crucial experiments to be made. Should we expect less in economic research, if its results are to be the basis for economic policy upon which might depend billions of dollars of national income and the general economic welfare of millions of people?”

  1280. DLM Says:

    What TT says is true as well:

    “But if the temperature series contains a unit root, you have to correlate all factors or forcings (exogenous or endogenous) in accordance with a statistical method that is consistent with the unit root. An OLS linear trend for temperature is not one of those. That’s the main message I’ve taken from this thread. ”

    But that has no bearing on attribution (explaining what caused the warming).

    It doesn’t appear to me that VS is too concerned, at this point, with explaining the cause of a warming that is a figment of your naive use of OLS in a case where it is useless. Maybe after he performs that formal polynomial co-integration stuff on the data, he will find some warming and then attempt to explain it. In the meantime, he has said this for about the thousandth time:

    “What should be taken from this analysis is that non-stationarity (i.e. unit root infested DGP) invalidates any statistical procedure that assumes stationarity. OLS on level series is one such procedure.”

  1281. Bob_FJ Says:

    phinniethewoo wrote:

    tamino is wrong!
    (do not know who he is or what he says btw… does he exist, Tamino? or is he a fictive character like “Kilroy was here!”)

    Tamino’s identity was deduced by Mosher some time back as being Grant Foster, and has finally been confirmed by his involvement in the “Climategate Emails”

    His Email address is: Grant Foster tamino_9@hotmail.com
    I emailed him concerning some “strange things” in his article Volcanic Lull, but have had no response after five days. Re: http://tamino.wordpress.com/2008/10/19/volcanic-lull/#comment-40285

  1282. phinniethewoo Says:

    I wonder how this ADF test works.
    The test finds out if your time series has the form

    Y(t) = 1·Y(t-1) + e

    But you can always write out your Y(t) series like that, I think? I can do that in Excel with one formula.

    The test must find out with some confidence that e is stochastic.
    It must do so by showing that it finds no meaningful signal in e, I guess?
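
    Roughly, yes – but the mechanics are worth spelling out (a sketch of the idea, not a full account). The Dickey-Fuller trick is to rewrite Y(t) = rho·Y(t-1) + e as diff(Y)(t) = (rho - 1)·Y(t-1) + e and test whether the coefficient (rho - 1) is zero, i.e. whether rho = 1. The catch is that under the null the usual t statistic does not follow a t distribution, which is why the test carries its own critical values. In R, with the tseries package:

    # Sketch: the regression behind the (augmented) Dickey-Fuller test.
    # Null hypothesis: unit root, i.e. the coefficient on the lagged
    # level is zero (rho = 1).
    set.seed(3)
    y    <- cumsum(rnorm(200))   # a pure random walk
    dy   <- diff(y)
    ylag <- y[-length(y)]
    summary(lm(dy ~ ylag))       # t on ylag needs DF critical values, not t tables

    # Packaged version, "augmented" with lagged differences:
    library(tseries)
    adf.test(y)                  # should fail to reject the unit root null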

  1283. cohenite Says:

    DeWitt, it seems that your observation about GISS adjusted data [there’s a shock], in combination with your previous distinction between a complete unit root and a partial one [April 3, 00:10 comment], does two things: firstly, it tends to vindicate VS’s concern about the statistical verification of a deterministic basis for temperature; secondly, it resolves the issue of runaway based on unbounded temperature, which is the scary stick of AGW predictions. That is, a partial unit root basis to temperature negates GHG causation, but the partial nature of the unit root means the stochastic trend [as explained by Demetris] prevents runaway.

  1284. HAS Says:

    I pushed “submit comment” on something that looked approximately like what follows some time ago, but it’s not showing nor saying “in moderation”. Not noted for my hand-eye co-ordination, I’ve posted again in case I stuffed up. If the earlier one is in moderation (or they both are there), Moderator please delete one.

    Eduardo and Bart

    We’ve eaten our Easter eggs so perhaps just a reinforcing comment from me.

    I hope you don’t walk away from all thinking “that was a big waste of time”.

    If I can just draw a couple of examples from your own fields (as published by you both and free to air).

    Bart, your paper (with others) “Chemical composition of free tropospheric aerosol for PM1 and coarse mode at the high alpine site Jungfraujoch” (2008) deals with time series and draws statistical inferences from them (section 3.2, Long-term chemical composition). While the conversation here has been about temperature, similar considerations also apply to the chemical composition measures. I’m not saying that the analysis and results aren’t robust, just that the first paper of yours I found involved time series analysis, and with that potential applications of what is being discussed here.

    I also note that when you found that “No statistically significant trends in the major ionic species could be obtained from this data set in contrast to other aerosol parameters measured within the GAW program for which clear trends were observed” you didn’t say “This can’t be right the physical evidence says otherwise” (or some such – which is my interpretation of what is being said here). Instead you concluded “… that there is a need for a longer time series to detect statistically significant trends.”

    Why not the same approach in the case of GISS temp series?

    Eduardo, we have already discussed your “How unusual is the recent series of warm years?” The next freely available publication on your list is “Relationship between global mean sea-level and global mean temperature in a climate simulation of the past millennium” (2009). I discover that this is tied into a series of papers where the issues being discussed here are absolutely germane. In particular, this paper is based on Rahmstorf’s “A Semi-Empirical Approach to Projecting Future Sea-Level Rise” (2007), which drew precisely the criticism by Schmith et al in “Comment on ‘A Semi-Empirical Approach to Projecting Future Sea-Level Rise’” (that is referenced and discussed in your paper) that you might expect from VS.

    This led then to what I regard as at best a curious conclusion from Rahmstorf in response (“Response to Comments on ‘A Semi-Empirical Approach to Projecting Future Sea-Level Rise’” (2007)):

    “Schmith et al. also raise the possibility of “nonsense correlations,” that is, real correlations that do not have a causal basis [sic]. This can of course never be ruled out; data can only falsify but never prove a hypothesis. However, the starting point of my analysis and my paper was not a correlation found in data but rather the physical reasoning that a change in global temperature should to first order be proportional to a change in the rate of sea-level rise. The analysis shows that the data of the past 120 years are indeed consistent with this expectation, and the expected connection is statistically significant [sic]. The observational data therefore strongly support the hypothesis I put forward [sic].”

    Those who have been following this thread will, I trust, find my [sic]’s obvious.

    Vermeer & Rahmstorf have subsequently gone on, in “Global sea level linked to global temperature” (2009) (which I see includes an acknowledgement to you for data), to “fit the model” rather than “interrogate the data”, if you understand what I mean. Further, on a quick look I can’t see any analysis of the type recommended in this thread being done on the dependent variable in the model.

    I am therefore surprised that you feel there is nothing for you here. Were I you, I’d be looking to apply these alternative techniques to this time series (as Schmith et al suggested over two years ago to Rahmstorf). For one thing it would remove the artificial requirement to use climate models as a substitute for reality.

    It is quite likely that based on proper statistical inference the conclusions are much less robust. But don’t we want to know that?

  1285. DeWitt Payne Says:

    VS,

    Are you going to buy me a copy of Matlab? I could afford it, but I’m too cheap to buy it when I can get R for free.

    I’ve tried to answer my own question on correlations with R^2 greater than 0.8. I can fit the modelE net forcings using the two box model originally proposed by Tamino and discussed at length at The Blackboard and elsewhere. R^2 is 0.82 for a fit with time constants of 1 and 19 years. The hypotheses that the residuals for this fit are normally distributed and stationary cannot be rejected at the 95% confidence level (and probably higher). So I created synthetic series using an ARIMA(3,1,0) model with the initial values constrained to match the first three values of GISStemp and the coefficients and standard deviation of the noise term derived from an ARIMA(3,1,0) model with no drift term fit to the 1880-2009 GISStemp data. For 10,000 trials, the probability that R^2 for the fit of the forcings to the synthetic series would exceed 0.75 was 5.2%. If I add the additional constraint that the fitted coefficients (not the intercept) had to both be positive, the probability is on the order of 1%. The criterion that the coefficients must be positive was used when optimizing the time constants back when the discussion was active. It ruled out both very small and very large constants and seems physically reasonable.
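
    In skeleton form the experiment looks roughly like this (a reconstruction, not DeWitt’s code: the AR coefficients and innovation SD are stand-ins for the fitted values, and two smooth trending columns stand in for the modelE forcings):

    # Sketch of the Monte Carlo: simulate ARIMA(3,1,0) series, regress
    # stand-in "forcings" against each, and count spuriously good fits.
    set.seed(11)
    n       <- 130                       # 1880-2009
    ar_coef <- c(-0.46, -0.39, -0.31)    # stand-ins for the fitted AR terms
    sd_e    <- 0.1                       # stand-in innovation SD
    X       <- cbind(seq_len(n) / n, (seq_len(n) / n)^2)   # trending regressors

    r2 <- replicate(10000, {
      y <- arima.sim(list(order = c(3, 1, 0), ar = ar_coef),
                     n = n - 1, sd = sd_e)   # n points after un-differencing
      summary(lm(as.numeric(y) ~ X))$r.squared
    })
    mean(r2 > 0.75)   # fraction of "good" fits arising by chance alone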

  1286. DLM Says:

    ‘A Semi-Empirical Approach to Projecting Future Sea-Level Rise’, perhaps should have been titled:

    ‘A Quasi-Empirical Approach to Projecting Future Sea-Level Rise’

    HAS says: “It is quite likely that based on proper statistical inference the conclusions are much less robust. But don’t we want to know that?”

    Good luck with that one. All one has to do to avoid answering the question is to dispute your assertion that their conclusions are based on improper statistical inferences. It has worked for them up to now.

  1287. Tim Curtin Says:

    Eduardo said (April 3): “This is how I view this:
    the question is not whether or not the instrumental record is stationary
    the question is: ‘if the observed record contains a unit root, is this unit root behaviour caused by exogenous factors or by endogenous factors?’ (By exogenous factors I mean the sun, GHG, volcanoes, etc. By endogenous I mean ENSO, PDO, and other modes of variability.)
    -if it is caused by exogenous factors, the next question would be which ones. If GHG are among these, to predict future temperatures we have to take into account GHG. This question has not been discussed here, so no further comment. [but see below for mine]

    -if it is, or may be, caused by endogenous factors, which I think is what VS claims (?), one concludes that we do not need any GHG (or any external factors for that matter).”
    1. Can “ENSO, PDO and other modes of variability” be “endogenous factors” when they are caused by exogenous factors like the sun but not by CO2? There is NO evidence, despite IPCC’s best efforts, to link changes in [CO2] to ENSO, PDO, etc. So VS is right, we do NOT need GHG to describe ENSO etc., but you are wrong to add “we do not need any external factors” to explain “endogenous factors” (sic) like ENSO etc.

    ENSO is totally explicable by external factors, including the sun and its side effects; that is why it is not an endogenous factor.

    I have tried earnestly to find any correlation between either [CO2] or changes therein and temperatures or changes therein, and have failed utterly so far, from Pt Barrow to New York and Indiana.

  1288. Willis Eschenbach Says:

    Demetris Koutsoyiannis, thank you for your comment above. In one of the referenced papers you say:

    Based on the empirical evidence from the exploration of the Nile flows and on the theoretical insights provided by the principle of maximum entropy, a concept newly employed in hydrological stochastic modelling, an advanced yet simple stochastic methodology is developed.

    This is the point that I have been trying to get across to Eduardo without success. In an active system, there is more to it than the simplistic dichotomy of forcings/feedbacks.

    In particular, you speak of the “principle of maximum entropy”. A system which is maximizing anything (entropy is one example) has very different statistics than a system which is merely comprised of forcings and feedbacks.

    Suppose, for example, that someone has been sick and they have a high temperature. Over a few days, their temperature will gradually fall until it asymptotically reaches their normal temperature.

    Now, a dataset of that falling temperature will look like there is some external forcing bringing the temperature down. But that’s not the case at all. What we are seeing is the action of a governor, a thermostatically controlled temperature.

    Eduardo holds out the consensus idea that the earth is free to adopt any temperature following the climate sensitivity formula where temperature varies linearly with forcing. It’s like a pool ball on a level table. The claim is if you push climate with a forcing X to the north, it moves 5 units north, less the negative feedback of the resistance from the air and the felt.

    I think the climate is more like a ball on a pool table with humps and pits and valleys. We may push it with a force X to the north, but the ball may go up and over a hump and end up to the east.

    And if the ball is in a pit and is struck by the cue, it may circle round and round and roll back slowly to its original position.

    My point is that the math of the ball’s position in those two situations is very different. That’s why I have been trying to get eduardo to see that there’s more than one kind of mean-reverting phenomenon, and that they need to be analyzed in very different ways.

    eduardo says above:

    @ Willis,

    forcing and feedbacks. dear Willis, forcing and feedbacks are different concepts and depend on the definition of which is the system object of your study.

    I know that, eduardo. I’m just saying that the climate system contains more than just forcings and feedbacks. Like all complex systems, the Constructal Law means that the climate will work to maximize some values. In particular, the climate works to maximize the sum of work done plus turbulent loss. See Bejan on the subject. This is why the various climate systems spend so much time on the edge of turbulence.

    Now, a system that is constantly being driven a short distance into the turbulent regime (but not too far) will show very different statistical properties than one which is not driven in that fashion.

    Consider a river. It winds and twists. The bends get wider and longer, until it goes so far that the river cuts across the neck and leaves behind an oxbow lake. It does this over and over, jumps out of its bed here and cuts a new channel there.

    But over time, the overall length of the river doesn’t change much. It gets a bit longer, it gets a bit shorter, but it oscillates around some average value.

    So if we cut across a narrow neck and shorten the river, gradually over time we will see it lengthen. If we measure the length, we’ll see that there is a trend in the length.

    eduardo wants to ask the question, is this trend due to an external factor (a forcing) or an internal factor (a feedback). But it is due to neither one. It is due to the action of the Constructal Law, with the river maximizing some aspect of its flow with the result that it oscillates above and below some average river length.

    And as I said, the math for this is very different from the usual deterministic/stochastic dualism. River length simply cannot be seen as a forced versus unforced question. You have to analyze it as an active system, not as a pool ball on a table or a drunk wandering around a lamppost.

    For example, consider a thunderstorm. It needs a certain temperature to initiate it. Once it is started, however, it creates high winds underneath it. These increase the evaporation, which makes the air less dense. Since the thunderstorm runs off low density air, this allows it to run until the land/ocean underneath it is cooler than the initiation temperature.

    It is this quality of “overshoot” which distinguishes it from a simple negative feedback. All a simple negative feedback can do is reduce the temperature rise from a forcing. But a thunderstorm can not only do that, it can “overshoot” to drive the temperature down below where it started.

    That’s why I keep pushing eduardo to look at the difference between an active system like a thunderstorm, and his forcing/feedback dichotomy. A thunderstorm breaks all the rules. Forcings warm things up, feedbacks either increase or decrease the forcings … but thunderstorms cool things down below the starting point. No simple feedback or forcing can do that.

    One thing this means is you can’t analyse thunderstorms (or the climate) using the usual math. You have to use engineering calculations that involve thermal heat engine concepts and mathematics and different kinds of statistics. That’s what I’ve been trying to get across here, without gaining a whole lot of traction …

  1289. Anonymous Says:

    Willis

    Consider a river. It winds and twists. The bends get wider and longer, until it goes so far that the river cuts across the neck and leaves behind an oxbow lake. It does this over and over, jumps out of its bed here and cuts a new channel there.

    But over time, the overall length of the river doesn’t change much. It gets a bit longer, it gets a bit shorter, but it oscillates around some average value.

    And as a matter of trivia, the average value is Pi times the length of the straight line between the source of the river and the mouth. (See e.g. Stølum, H.-H., “River Meandering as a Self-Organization Process,” Science 271, 1710-1713, 1996.)

  1290. Eli Rabett Says:

    In a very econometric statement VS says
    ———————————–
    What should be taken from this analysis is that non-stationarity (i.e. unit root infested DGP) invalidates any statistical procedure that assumes stationarity. OLS on level series is one such procedure.
    ————————————-

    However, we are NOT talking about such a case. Here there are strong theoretical grounds, based on straightforward physics, telling us that the relationship between forcing (not just greenhouse gas forcings) and global temperature is linear. Moreover, there are many other strong and convincing observational verifications of the straightforward physics.

    VS’s entire postmodernist argument collapses.

  1291. Eli Rabett Says:

    Well, let’s consider that river, relatively constant in length, until a bunch of idiots come along and canalize it, permanently altering the stream for the worse. Seems a pretty good analogy for what humans are doing to the atmosphere

  1292. VS Says:

    “VS entire postmoderist argument collapses.”

    Halpern, you are statistically illiterate, and your comments here are worthless. Sod made better points than you.

    Your contribution to the overall scientific debate is furthermore detrimental. I elaborated on all of that here.

    How about you change the blatant definition error (i.e. the definition of an integrated series) on your anti-scientific smear blog, which I pointed out a full month ago, before joining in on this discussion and telling all of us what ‘we’ are ‘talking about’.

    Here’s some help from Wikipedia.

    Off with you, agitator.

    VS

  1293. cohenite Says:

    Eli revealed: human interaction with nature is always detrimental and prosperity is a delusion; this is the ideological paradigm of AGW. As one who has enjoyed the benefits of flood mitigation by ‘canalizing’ river banks and the greater availability of food through irrigation, I regard Eli’s comments with scorn.

  1294. HAS Says:

    …and empiricism has become postmodern LOL

  1295. Don Jackson Says:

    From a naive perspective, the “take away point” still seems to be:
    The data we have doesn’t seem to support (anything based upon) a trend of anomalous warming of the earth’s climate…

  1296. Alan Says:

    cohenite …

    human interaction with nature is always detrimental and prosperity is a delusion; this is the ideological paradigm of AGW

    … keep such stupid generalisations to yourself.

    And Professor Halpern …

    I am one who considers the AGW threat to be real and present. I spend 6 months a year giving my time pro bono to help eliminate kerosene for lighting in the third world (by switching to ‘relatively’ cheap solar micro-lamps) … it’s an action group tackling energy poverty amongst those not connected to a grid.

    It saddens me to see what a joke you have become in this matter. With such distinguished scientific credentials it should have been possible for you to exercise significant influence and bring the science to the public.

    Instead it seems that anything you say is now routinely discarded by those who do not see a real and present AGW danger.

    My advice is that you retire from the public arena and work behind the scenes in matters of public education and debate. In this matter you will be unable to avoid the negative brand of Eli Rabett.

    Or you try really hard to change the way you present …

  1297. Eli Rabett Says:

    Ah VS, and just who are you? It’s really a cool feature of the internet to be called out as a pseudonym by another. Clown

  1298. cohenite Says:

    Well Alan, I don’t want this discussion of an important approach to analysis of AGW verification to be side-tracked but as a matter of interest do you consider AGW to have an ideological basis and if so could you briefly describe it?

  1299. Alan Says:

    Cohenite … amongst the hundreds of people that I have engaged on this matter it’s about ‘sustainability’.

  1300. Kweenie Says:

    That rodent river metaphor is utter bollocks, as are most of his contributions here. Send in the clowns.

  1301. cohenite Says:

    And is ‘sustainability’ determined by Malthusian principles as enunciated by Ehrlich and Holdren, or by human ingenuity as exemplified by Norman Borlaug?

  1302. Tony Says:

    Are we being had? Is this Rabett/Halpern guy really a Professor? Of what?

    VS shows the temperature series is unit-root infested and says that
    “..non-stationarity (i.e. unit root infested DGP) invalidates any statistical procedure that assumes stationarity. OLS on level series is one such procedure.”

    In other words, use the right tools or your results might be bent (i.e. not straight … or true … as we say IRL).

    But Halpern/Rabett misses the point spectacularly … and tries a classic but-but-but:

    … ” there are strong theoretical grounds, based on straightforward physics, telling us that the relationship between forcing (not just greenhouse gas forcings) and global temperature is linear. Moreover, there are many other strong and convincing observational verifications of the straightforward physics.”

    Sorry Professor Halpern, but you blew it right there, for two reasons.

    a) irrespective of your claims, appeals, special pleading etc., the fact is that the data series contains a unit root!

    b) your appeal to ‘straightforward physics’ is really odd. We are dealing with air, which is anything but straightforward. Ever looked into the movement of air? (aka aerodynamics)?

    So instead of ‘climate-physics’, have a look at some real physics:

    http://www.math.sunysb.edu/~scott/Book331/Art_Phugoid.html

  1303. Tim Curtin Says:

    Tony said April 3, 2010, 19:25 (to Eduardo): “…. could you tell me why climatologists seem to be concentrating on the semi-surface max-min thermometer records and don’t seem to be looking at the other weather records such as baro pressure, rainfall, humidity, cloud cover, wind-vector?”
    As Eduardo has not replied yet, let me try. First, one oddity is the apparent dearth of long time series local data on baro pressure, at least I haven’t found it at GISS or NOAA.
    Secondly, I have done quite a few multivariate climate regressions, but generally find RH and wind not to be of great significance. My latest is for Des Moines, Iowa, using annual data from 1960 to 2006 (when reporting there and everywhere mysteriously ceased). My model is

    dAvgT = a(dOPQ) + b(dH2O) + c([CO2]) + d(dAVDIR) + e(dAVDIF) + f(dAETRN),

    where OPQ is sky cover, H2O is precipitable water (cm), [CO2] is the annual figure for the atmospheric concentration (in ppmv) from Mauna Loa, AVDIR is average total direct surface solar radiation (Wh/sq.m), AVDIF is total diffuse SSR (Wh/sq.m), and AETRN is direct normal extraterrestrial solar radiation (Wh/m2). All variables are first differenced except for [CO2], because it is its accumulating total that gives rise to radiative forcing. Ideally one would want to run this model after using the covariate-augmented Dickey-Fuller (CADF) test proposed in Hansen (1995b) and demonstrated by Claudio Lupi (2009, h/t to Cohenite), but his procedures are beyond my capability with my existing software.

    The columns of results (R^2 = 0.94) are:

    Variable   Coefficient   Std. Error     t          p
    dOPQ        -0.66585     0.164754    -4.04149   0.000235
    dH2O         0.952935    0.02868     33.22611   9.52E-31
    [CO2]        1.13E-05    8.47E-05     0.133639  0.894358
    dAVDIR       0.000183    0.000241     0.757127  0.453411
    dAVDIF       0.00124     0.000622     1.992942  0.053121
    dAETRN      -0.00233     0.001487    -1.5633    0.12586

    Perhaps the reason climate scientists avoid this kind of stuff is because atmospheric CO2 proves to be of no consequence for determination of the average temperature trend at Des Moines from 1960 to 2006, and has the least statistical significance of the 6 variables displayed here.
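
    For anyone wanting to reproduce this kind of regression, a hedged sketch (the data frame des_moines and its column names are my assumptions; I do not have the actual data):

    # Sketch of the regression described above: all regressors first
    # differenced except [CO2], which enters in levels.
    # 'des_moines' is an assumed data frame of annual values.
    d <- data.frame(
      dAvgT  = diff(des_moines$AvgT),
      dOPQ   = diff(des_moines$OPQ),
      dH2O   = diff(des_moines$H2O),
      CO2    = des_moines$CO2[-1],    # levels, aligned with the differences
      dAVDIR = diff(des_moines$AVDIR),
      dAVDIF = diff(des_moines$AVDIF),
      dAETRN = diff(des_moines$AETRN)
    )
    fit <- lm(dAvgT ~ dOPQ + dH2O + CO2 + dAVDIR + dAVDIF + dAETRN, data = d)
    summary(fit)   # compare the coefficient table and R^2 with those quoted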

  1304. phinniethewoo Says:

    Ah VS, and just who are you? It’s really a cool feature of the internet to be called out as a pseudonym by another. Clown

    wow
    is this how apparatchiks vent their frustration when they lose the argument?

    Matlab: you need to ask for a price quote, so what you pay is à la tête du client (priced according to the customer).

  1305. phinniethewoo Says:

    Tamino is WRONG

  1306. Eli Rabett Says:

    Tony, think irony, not angry.

  1307. Eli Rabett Says:

    What the heck, since this is going on in two threads, might as well reply here too

    VS, you CAN start by making assumptions (those underlying an OLS analysis, for example) which are supported by the underlying theoretical physics and other observations. Moreover, no matter what YOU said, certainly Beenstock and Reingewertz said

    ———————————————-
    Therefore, greenhouse gas forcings, global temperature and solar irradiance are not polynomially cointegrated, and AGW is refuted.
    ———————————————

    Eli takes it you now agree that this is nonsense

  1308. phinniethewoo Says:

    The drunk’s walk is straight physics as well, but it remains a probability conundrum. The probabilities along his walk are a tale of moving goalposts though.

    When he is at the UHI site, there is no doubt the chances are much higher that he’ll spend his next 1000 steps in that part of town.

    One really has to invoke the law of large numbers to see he’ll come back to his lantern.

    Restricting to a paraxial walk just shows that the probability distribution of his possible paths is the product of the probability distributions of two coin-tossing exercises.

    Multiplying probabilities favours the establishment though:
    If you have a 0.9 chance X happens and a 0.9 chance Y happens,
    then you still have a 0.81 chance X*Y happens. Al Gore keeps jetting; just a 10% reduction in his chances.
    The smaller probabilities (the chances of the hoi polloi to get Matlab AND a cosy job next to Tamino in their lifetimes) suffer though:
    If you have a 0.4 chance X happens and a 0.4 chance Y happens, then you only end up with a 0.16 chance X*Y happens. That’s a reduction by 60%.
    Poincaré wasn’t interested in probabilities, but if he were, he surely would have agreed.
    I hope everybody is convinced now… (PS: can I get a Nobel now?)

    Gathering eggs… a pointless activity.

  1309. phinniethewoo Says:

    Tony, I need to do ironing as well I forgot that now as well with all this chocolate around..You’re not walking alone.

  1310. John Says:

    phinniethewoo

    Try http://www.freestatistics.info/stat.php

    JMulTi – interactive software designed for univariate and multivariate time series analysis.

    It does most of the things VS has covered, with easy(ish) instructions.

  1311. phinniethewoo Says:

    The drunk’s walk, or the earth’s temp, is non-stationary: his future whereabouts are determined mainly by endogenous factors, i.e. his present condition.

    The onus is on Tamino to prove that’s not the case.

  1312. phinniethewoo Says:

    Thanks John, I was checking the btjunkie store for good deals as well…
    (You want to keep me busy, don’t you? My girlfriends played this on me all the time. Tomorrow I am on a field trip, don’t worry… hoor-ray, hoor-ray, who fxxing ray!)

  1313. phinniethewoo Says:

    What Rich says is inconveniently true as well. Indeed: let topology enter the debate, after VS showed that temp and CO2 are uncorrelated (or are only possibly correlated, with galactic margins of error)…

    Many people have asked me over the last few years where the origin of the Big Bang is located… well, it is on the tip of your nose, isn’t it? These reports on capturing radiation from the first millisecond, far away, always puzzle me. We will never see t=0, but are always approaching the tips of our noses, coming in from far away.

  1314. phinniethewoo Says:

    The Financial Times, of all lefty tabloids, “reports” this weekend that there is a whole list of accomplishments at CERN, the most salient of them being the invention of the internet. The rest of the list is mysteriously edited away…

    This must be their Al Gore moment.

    With this sophistry in place we should encourage Swiss patenting: hopefully we get follow-up results on Brownian motion and general relativity.

  1315. Shub Niggurath Says:

    Alan, you might console yourself that hand-cranked and solar lamps are the one stone with which you are killing two birds – AGW and poverty. But you have to understand that third-world governments use your actions as an excuse not to bring grid electricity to the very people you think you serve.

    I agree with you about Halpern (Eli) though.

  1316. DLM Says:

    The learned rodent has confirmed that the Luddites were right! The Industrial Revolution is a tragic mistake inflicted upon the more enlightened part of humanity, by a few evil idiots. If only we could erase their stupid perfidy and turn back the clock to a far better time. The Medieval Warm Period would have been comfortable (for the Nobility, at least). But that and other eras of balminess have been erased by quasi-clever and deliberately naive use of proxy data. Well, it was done for a good cause: to increase lebensraum for the noble, and very cute when young, polar bear.

    Anyway, before our saviors turn back the hands of time to rescue us all, be sure to be very near the equator. Based on their track record, they really don’t get the paleoclimate stuff right. Odds are they will put us in the middle of an ice age. Good luck! And bundle up!

    PS
    I hope they don’t have to have another big important meeting before they can get this done. Very poor track record on that too.

  1317. DLM Says:

    Give him a break Shub. His admirable effort, whatever the motivation, is not preventing the natives from being connected to the grid. Corrupt third world despots don’t need excuses to keep their people down, they have the guns.

  1318. Richard Patton Says:

    cohenite asks:

    “Well Alan, I don’t want this discussion of an important approach to analysis of AGW verification to be side-tracked but as a matter of interest do you consider AGW to have an ideological basis and if so could you briefly describe it?”

    I would suggest that if there is a bias it is the unquestioning belief that the mechanical metaphor is an apt metaphor for understanding (and thus predicting) the climate. Notice the language – “simple physics”, it is just “forcings and feedbacks”. The mechanical metaphor is all about clear inputs and clear outputs (context insensitivity). However, when you have a high degree of context sensitivity (everything affects everything) this metaphor breaks down and with it all of our techniques for describing mechanical phenomena.

    The (to me) compelling evidence for a unit root in the temp record is just another piece of convergent evidence that climate is a complex open-system that is quite far from meeting the requirements of the mechanical metaphor.

    In fact, the more I look at it, the more it appears to behave like a Complex Adaptive System. I was struggling with how this could be until Willis pointed to the Constructal Law and its far-ranging applications. (Thank you Willis – although the books by Bejan are expensive – they are well worth it :-) )

    If climate is more like a CAS than a machine it is no wonder the econometricians seem to have so much to say.

  1319. Bart Says:

    Off topic comments belong in the open thread.

  1320. Tony Says:

    Tim Curtin, thanks for your post … your work illuminates the relative insignificance of CO2 as a climate-change agent.

    Richard Patton, and Willis; re complex systems modelling & econometrics, see my post on the open thread.

  1321. phinniethewoo Says:

    ✶ Your comment is awaiting moderation. ✶

    wtf?

  1322. Willis Eschenbach Says:

    Richard Patton Says:

    ….

    In fact, the more I look at it, the more it appears to behave like a Complex Adaptive System. I was struggling with how this could be until Willis pointed to the Constructal Law and its far-ranging applications. (Thank you Willis – although the books by Bejan are expensive – they are well worth it :-) )

    I have been surprised at the lack of interest by climate scientists in the Constructal Law. Analyzing flow systems far from equilibrium is the bread and butter of the Constructal Law, and can’t be done without it. See Bejan’s paper here, as well as the Constructal Theory Web Portal, and the Wikipedia page.

    I do not know of any good references on the statistics of self-organizing systems. Any assistance gladly accepted. My few forays into this area indicate that they appear to be neither stochastic nor deterministic, but I was born yesterday, what do I know? …

    w.

  1323. manacker Says:

    Hey c’mon phinniethewoo, as a Swiss I have to defend the work being done at CERN to recreate the “Big Bang”. This experiment apparently had its first success a few days ago, when CERN reported that thousands of “mini-Big Bangs” had been created.

    Total cost of the project is estimated to be a paltry $9 billion. (Hell, we spent many times this amount cranking out the latest IPCC report, which didn’t provide any “bangs” at all.)

    Included in the scope is the detection and understanding of “dark matter” as well as the “Higgs boson”, both pretty elusive so far.

    To achieve this, CERN scientists believe they will need to increase the collision energy of the particle accelerator from 7 to 14 tera-electron-volts (TeV).

    On the negative side, many are concerned about CERN’s large carbon footprint, and some doomsayers have feared that the project would create stable black holes, which would end up devouring not only CERN and the scientists plus technicians, but also a good part of Geneva and surrounding countryside.

    Anthropogenic black holes?

    I shudder at the thought.

    Max

  1324. manacker Says:

    Bart

    It looks to me that (despite some objections from Eli Rabett and a few others) VS has shown that the correlation between empirically observed atmospheric CO2 and “globally and annually averaged land and sea surface temperature” is not statistically robust, and that the postulation of causation is thereby invalidated.

    Did I get that right?

    Max

  1325. Bob_FJ Says:

    VS
    In your comment above concerning Tamino’s competence it seems that you are not impressed by his wisdom.
    Are you familiar with Tamino’s exchanges with statistician Ian Joliffe, that were initiated by false claims that Joliffe supported Tamino’s views on the Mann hockey stick? Do you agree with Joliffe, or would you like some more details?

    I’m no statistician, but over at Real Climate I’ve been debating one of Tamino’s articles that was severally cited to me, entitled “Volcanic Lull“. In my view, as an engineer, it is very deeply flawed in at least four different major aspects. I posted a comment on Tamino’s blog seeking clarification on the first one, but it was deleted without explanation. Then, about 6 days ago, I emailed him on another issue, [edit], but there has been no response so far; then again it is Easter, and I‘ll wait a bit longer before consolidating the concerns.
    [edit. Don’t post other people’s email address without their consent. BV]

    Here is my most recent comment # 667 of significance at RC, for anyone that might be interested, but the whole thing has become fragmented and is now spread over two different threads. Interestingly, RC seems to have stopped deleting comments, and I’ve had no disagreements with the real concerns that I’ve raised on Tamino’s article.

  1326. Bob_FJ Says:

    SORRY,
    My comment above should read starting at para 4:

    I’m no statistician, but over at Real Climate I’ve been debating a Tamino article that was severally cited to me, entitled “Volcanic Lull“.

  1327. dhogaza Says:

    Are you familiar with Tamino’s exchanges with statistician Ian Joliffe, that were initiated by false claims that Joliffe supported Tamino’s views on the Mann hockey stick? Do you agree with Joliffe, or would you like some more details?

    Tamino misunderstood Joliffe’s position, but that had nothing to do with statistical competence. It had a *lot* to do with ambiguity in how people (including Mann, as it turns out) used words to label what kind of PCA they were doing.

    A “false claim” comes close to suggesting Tamino was lying, and it’s clear it was a simple misunderstanding, and Tamino apologized for it.

    Tamino does time series analysis for a living, it is his profession. He’s not as high ranking in that profession as Ian Joliffe is, but who is? Not VS, I guarantee (who, after all, is an economist who uses statistics, not, strictly speaking, a statistician).

  1328. dhogaza Says:

    I’m no statistician, but over at Real Climate I’ve been debating a Tamino article that was severally cited to me, entitled “Volcanic Lull“. In my view, as an engineer, it is very deeply flawed in at least four different major aspects.

    Perhaps he finds your opinion as interesting as, say, the opinions of those who pester physicists with designs for perpetual motion machines?

  1329. dhogaza Says:

    VS has shown that the correlation between empirically observed atmospheric CO2 and “globally and annually averaged land and sea surface temperature” is not statistically robust, and that the postulation of causation is thereby invalidated.

    Did I get that right?

    The causation comes from physics, so really the best you can say is that there’s not enough data, not enough years, to sort the wheat from the chaff, so to speak.

    Real statisticians working in the field disagree, so I wouldn’t get too excited about VS’s claims, if I were you.

  1330. dhogaza Says:

    Eli takes it you now agree that this is nonsense

    If VS does, perhaps he’ll finally get around to showing where the specific errors were made by B&R?

  1331. dhogaza Says:

    Off with you, agitator.

    VS

    Oh, VS appears to have deluded himself into thinking that he’s the blog moderator, again.

    Tch Tch BS (as Spanish speakers would say).

  1332. Bill Hunter Says:

    “Real statisticians working in the field disagree, so I wouldn’t get too excited about VS’s claims, if I were you.”

    Likewise with the claims of those opposing VS’s claims apparently.

    Disagreement merely interjects yet another major uncertainty into the predictive capabilities of the GCMs.

  1333. cohenite Says:

    dhogaza says: “The causation comes from physics”, to which the obvious rejoinder is that so does the evidence for a lack of causation; cointegration shows that only the increase of CO2 can have a temperature effect, not the absolute amount; this is a confirmation both of Beer-Lambert and of the dominance of convective processes over diffusion, which further mitigates the exponential decline in CO2 heating from CO2 increases.

    The unit root characteristic of temperature trend is a product of stochastic climate parameters and supports break approaches to temperature trend rather than linear trends; CO2 is not capable of producing a temperature break trend either incrementally or at absolute levels.
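
    For readers who want to see the diminishing-increment point in numbers, here is a minimal Python sketch using the standard simplified logarithmic forcing expression F = 5.35 ln(C/C0) W/m2 (Myhre et al. 1998); the concentrations below are illustrative assumptions, not cohenite’s own figures:

        import numpy as np

        # Simplified CO2 radiative forcing (Myhre et al. 1998): F = 5.35 * ln(C/C0) W/m^2.
        # Each successive 50 ppm increment adds less forcing than the last one,
        # which is the "only the increase matters, with diminishing returns" point.
        C0 = 280.0  # assumed pre-industrial concentration, ppm (illustrative baseline)
        prev = 0.0
        for C in (330, 380, 430, 480):
            F = 5.35 * np.log(C / C0)
            print(f"{C} ppm: F = {F:.2f} W/m^2 (increment {F - prev:.2f})")
            prev = F
        # Doubling 280 -> 560 ppm gives 5.35 * ln(2), roughly 3.7 W/m^2.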

  1334. Tim Curtin Says:

    A nobody said (April 5, 2010 at 03:35)

    “VS has shown that the correlation between empirically observed atmospheric CO2 and “globally and annually averaged land and sea surface temperature” is not statistically robust, and that the postulation of causation is thereby invalidated…..Real statisticians working in the field disagree, so I wouldn’t get too excited about VS’s claims, if I were you.”

    Nobody, name one “real statistician” who has forestalled VS in a peer reviewed paper, other than nonentities like yourself. Koutsoyiannis? Bejan? I think not – their papers support VS.

  1335. kuhnkat Says:

    Tim Curtin,

    “Secondly I have done quite a few multi-variate climate regressions, but generally find RH and wind not to be of great significance.”

    So, you are saying that dry air not moving has approximately the same energy as air with 100% humidity moving at about 150 mph, when at the same temperature??

    HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

  1336. Tim Curtin Says:

    kuhnkat: I was too hasty; in fact at Des Moines relative humidity is highly significant, but negative, and it is average wind speed that, there at least, plays no significant role, whilst also having a negative impact on mean annual temperature.
    As before, the columns are coefficient, std. error, t, and p (adj. R2 is 0.94):

    Variable   Coefficient   Std. Error   t        p
    dRH        -0.03879      0.01108      -3.502   0.0012
    dAVWS      -0.01838      0.08504      -0.216   0.8300

    And as before the main positive and significant determinant of changes in annual mean temperature is “H2O”, precipitable water (on which [CO2] has no discernible effect). So if you are right, to reverse AGW we need to reverse precipitation and enhance RH and WS?
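
    For anyone wanting to replicate the format of such a regression, here is a minimal Python sketch on synthetic stand-ins (Tim’s actual station series are not posted here, so every number below is a made-up assumption purely for illustration):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 47  # ~1960-2006, annual observations

        # Synthetic stand-ins for the first-differenced station series
        dRH = rng.normal(0, 3, n)      # change in relative humidity
        dAVWS = rng.normal(0, 0.5, n)  # change in average wind speed
        dT = -0.04 * dRH + rng.normal(0, 0.3, n)  # change in mean annual temperature

        X = sm.add_constant(np.column_stack([dRH, dAVWS]))
        res = sm.OLS(dT, X).fit()
        # Coefficient, std. error, t and p columns, as in the table above
        print(res.summary(xname=["const", "dRH", "dAVWS"]))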

  1337. John Whitman Says:

    Paladin-like commenter,

    I think that was a hint that you may be ignored

    John

  1338. Tim Curtin Says:

    Further to the easily amused kuhnkat’s laughter, here are the results of another naïve regression, for changes in average annual temperature from 1960-2006 at Pt Barrow (Alaska) on [CO2], and changes in “global” solar surface radiation (the “global” is net of albedo), opacity OPQ, precipitable water H2O, extraterrestrial solar radiation AETRN, relative humidity RH and wind speed WS. Kuhnkat will be pleased to see that this time, unlike at Des Moines, both of the last two are positive, but it is only WS that is statistically significant.
    Alas for IPCC AR4 WG1 chapter 9 and its many false claims, the role of radiative forcing by [CO2] is actually negative, even though Barrow is where it is now being measured (but only since 1974, so here I use the Mauna Loa series from 1960; when both are available the R2 between them is virtually 1.0).

    Variable   Coefficient   Std. Error   t        p
    CO2 ML     -6.7E-05      0.000439     -0.152   0.8801
    dAvglo     0.002163      0.002384     0.908    0.3697
    dOPQ       1.22991       0.476609     2.581    0.0137
    dH2O       11.6294       3.044559     3.820    0.0005
    dAETRN     0.000885      0.003862     0.229    0.8200
    dRH        0.050933      0.049261     1.034    0.3075
    dWS        1.18916       0.461924     2.574    0.0140

    The admirable GISS maps always spread red ink around Alaska, enough to make one think that Barrow will soon be as hot as Khartoum, but the total anomaly since 1980 (on 1950-80) is only enough to take the average temperature at Barrow up by about 1°C, from minus 12.7 in 1950-80 to something over -11 according to my source (it seems, as suggested by EM Smith, that GISS is finding it too cold to source temperatures there, as it is currently showing only 9999 for Barrow’s lat 71.3 long 156.78 since 1980).

  1339. Tim Curtin Says:

    Apologies, kuhn & GISS, I was looking at lat 73; at lat 71 long 157 there are data, but they show zilch change in the Barrow anomaly since 2000 (it was .2717 in 1980-2000 on 1950-80, and still that in 1980-2009).

  1340. sod Says:

    Nobody, name one “real statistician” who has forestalled VS in a peer reviewed paper, other than nonentities like yourself. Koutsoyiannis? Bejan? I think not – their papers support VS.

    this paper comes to the completely opposite conclusion of VS.

    Click to access wp495.pdf

    Tim Curtin, you are wrong, as always..

  1341. Henry Crun Says:

    SOD = Same Old Disinformation?

  1342. cohenite Says:

    sod, the Breusch paper does have similarities with this paper:

    Click to access 0907.1650v3.pdf

    Stockwell finds a break beginning in 1976 with a duration to 1979; this has been confirmed by many other papers, including Seidel, Lindzen, Tsonis etc. Breusch and Vahid discount a unit root in temperature because they do not find another break in 1998; Stockwell finds one beginning in 1997 and Tsonis in 2002. A converse break at that time is proof of a unit root, because the stationary component must be stochastic; the subsequent downward break cannot be deterministic because CO2 cannot produce a downward trend. At the very least, then, the Breusch paper is outnumbered 2 to 1; or 3 to 1 if we include Beenstock’s paper.
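
    For anyone who wants to see what a formal break test looks like, here is a minimal Python sketch of a textbook Chow-type F-test at a known candidate date, on synthetic data (all parameters made up; note the test’s usual distribution assumes stationary errors, which is of course exactly what is in dispute when a unit root is present):

        import numpy as np
        import statsmodels.api as sm

        def chow_fstat(y, t, split):
            """Chow F-statistic for a single known-date break in a linear trend fit."""
            X = sm.add_constant(t)
            ssr_pooled = sm.OLS(y, X).fit().ssr
            ssr_split = (sm.OLS(y[:split], X[:split]).fit().ssr +
                         sm.OLS(y[split:], X[split:]).fit().ssr)
            k = X.shape[1]  # parameters per regime
            return ((ssr_pooled - ssr_split) / k) / (ssr_split / (len(y) - 2 * k))

        rng = np.random.default_rng(1)
        t = np.arange(60.0)
        # Flat until t = 30, then an upward trend, plus noise (synthetic, illustrative)
        y = np.where(t < 30, 0.0, 0.01 * (t - 30)) + rng.normal(0, 0.1, t.size)
        print(chow_fstat(y, t, 30))  # a large F suggests a break at the chosen date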

  1343. Bart Says:

    Manacker,

    You wrote:

    It looks to me that VS has shown that the correlation between empirically observed atmospheric CO2 and “globally and annually averaged land and sea surface temperature” is not statistically robust, and that the postulation of causation is thereby invalidated.

    No, I don’t think that is the correct conclusion. He showed that the global avg temperature has a unit root *if* he takes the CO2 forcing as the underlying trend. Whereas the temp is expected to respond to the net forcing and to endogenous factors such as ENSO (see Eduardo’s last comment). That expectation, which is based on physics and exemplified in the paleoclimate record, is not invalidated. That said, I’m still not clear what it is exactly that VS is claiming.

    HAS,

    You asked me about a paper that I co-authored about the chemical composition of aerosols. The presence or absence of a statistically significant trend is not necessarily inconsistent with physics. What I would argue against, though, is if people were then to argue that PM is not a health hazard, or that aerosols don’t influence climate, or other unphysical conclusions that are unwarranted based on the data and our physical understanding. It’s such unphysical conclusions that I argue against; not the application of sound statistical methods.

  1344. Bart Says:

    ALL:
    The thread is now divided into pages with a maximum of 200 comments on each page. Hopefully that speeds up the loading of the page.

  1345. Kweenie Says:

    “Tamino misunderstood Joliffe’s position, but that had nothing to do with statistical competence.”
    To me it is a matter of competence.
    Joliffe stated that “the Hockeystick paper continued to be cited as having valid conclusions well after it became clear that some of its methodology was flawed. If it had quietly disappeared, but the errors noted and never repeated, leaving other less controversial papers to be cited when discussing past climate, the errors would not have attained such prominence. But I guess that would have been much less fun for the protagonists on both sides
    …”
    (BTW, apparently M&M did understand Joliffe correctly in their ’05 background information.)

    I guess accusing Wegman of plagiarism was another “misunderstanding”? The man’s paranoia doesn’t know any boundaries.

  1346. VS Says:

    Hi Bart,

    Return the thread back to normal please.

    You just ‘disabled’ all the links in comments referring to previous posts in this thread.

    Best, VS

    PS. Well, revert to previous, or manually fix all my links :P

  1347. VS Says:

    PPS. I elaborated extensively on why the BV paper doesn’t contradict my results. This is tiring.

  1348. VS Says:

    PPPS. Bart, this is serious. There are plenty of links on other websites referring to specific posts, and all of them are now ‘dead’. People complaining about the thread ‘loading too slowly’ should enter the 21st century and get a DSL connection.

  1349. manacker Says:


    Bart

    This thread is very interesting.

    VS shows us that the temperature series is root infested and says that

    “..non-stationarity (i.e. unit root infested DGP) invalidates any statistical procedure that assumes stationarity. OLS on level series, is one such procedure.”

    In simple terms, this tells me that the presence of a unit root rules out a deterministic trend and that the correlation between GHG and temperature is, therefore, not statistically robust, thereby raising serious questions regarding the case for causation.

    I do not see that Eduardo has refuted this in the post you cite. He has simply stated that GHG are one of many either exogenous or endogenous factors, which may force temperature.

    “Now, in my view the relevant question logically is: is a unit root test able to discriminate between endogenous or exogenous factors? My answer so far is no. The reason is that one can easily produce synthetic (or real) time series with and without deterministic trends, and the unit root tests indicate in both cases the presence of a unit root. Examples can be seen at Tamino’s and Lucia’s blogs. Am I wrong here?
    My conclusion of all this is: if the unit root test cannot discriminate between the two possibilities above, this discussion is futile. We need another test or more chocolate.
    I have tried really hard to see the point but so far.. nope. Like Lucia, I do not see that these basic logical questions have been or are being addressed, so probably I will not check the discussion very often.”

    To this I would ask: which of the factors are exogenous, which are endogenous, and what difference does it make with regard to the point made by VS? Even more importantly, can we reasonably conclude that there are no significant factors which are unknown unknowns today?
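
    Eduardo’s synthetic-series point is easy to reproduce; here is a minimal Python sketch (all parameters made up for illustration) that runs the ADF test on a persistent-but-stationary series around a deterministic trend, and on a pure random walk:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(42)
        n = 130  # roughly the length of the annual record

        # Deterministic trend plus strongly autocorrelated (but stationary) noise
        e = np.zeros(n)
        for t in range(1, n):
            e[t] = 0.9 * e[t - 1] + rng.normal(0, 0.1)
        trend_stationary = 0.005 * np.arange(n) + e

        # Pure stochastic trend (unit root)
        random_walk = np.cumsum(rng.normal(0, 0.1, n))

        for name, y in (("trend + AR(1)", trend_stationary), ("random walk", random_walk)):
            stat, pval = adfuller(y, regression="ct")[:2]  # ADF with constant and trend
            print(f"{name}: ADF = {stat:.2f}, p = {pval:.3f}")
        # At this sample size the test can fail to reject for *both* series:
        # low power against persistent stationary alternatives, which is the
        # discrimination problem described above.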

    TT has responded (April 3, 18:00) to Eduardo:

    But if the temperature series contains a unit root, you have to correlate all factors or forcings (exogenous or endogenous) in accordance with a statistical method that is consistent with the unit root. An OLS linear trend for temperature is not one of those. That’s the main message I’ve taken from this thread. It is, as you say, a question that has to do with logical reasoning; and it doesn’t seem to me boring or uninteresting.

    Going through all the other very interesting posts on this thread, I really do not see that the basic point made by VS has been refuted in any way.

    If you see this differently, please point out where anyone here has demonstrated (as opposed to simply stated) that the premise by VS is not valid, and that the statistical correlation CO2/temperature is therefore robust, thereby supporting the case for causation.

    Thanks.

    Max

  1350. sod Says:

    PPS. I elaborated extensively on why the BV paper doesn’t contradict my results. This is tiring.

    so you are getting tired of making false claims?

    you did NOT “elaborate extensively” about the second part of the Breusch paper.

    Click to access wp495.pdf

    your claim is simply false. you did NOT address that they find the last decade to be outside their forecast interval, starting with 1950.
    Breusch does contradict you!

    their forecast interval still starts around 1950, while yours starts in 1880, which makes no sense.

    ——————-

    the same is actually true for Tamino. you also didn’t contradict the second part of his post.

    Still Not

    you claim that the first part is false, because your search for statistical breakpoints contradicts his 1970 choice in the post.

    you have not contradicted the second part of what he said.

  1351. DLM Says:

    Manacker has provided a very cogent summation of where this discussion stands. And that is, at a standstill.

    They are not going to concede anything, VS. Please move on with your analysis, either here or elsewhere.

  1352. Willem Kernkamp Says:

    Reply to:

    sod Says:
    April 5, 2010 at 10:01

    “this paper comes to the completely opposite conclusion of VS.”

    sod,

    I enjoyed reading that paper by Breusch and Vahid. They gave a lot of attention to clarity. On the whole, their findings do not differ from what VS has been presenting in this thread. This includes all aspects, such as 1) the likely presence of a unit root, or specifically a near-unit root, and 2) an evaluation of trend breaks.

    However, near the end of the article there is a small difference. Their confidence interval for rejecting the H0 of no temperature trend is narrower than the one calculated by VS. As a result they (barely) accept that there is a statistically significant trend whereas VS’s calculation (just) rejects it.

    With respect to their method they note in footnote 4:

    “The confidence bands in these graphs are calculated using the dynamic forecast option in Eviews and they do not incorporate estimation uncertainty.”

    Perhaps this explains the difference with VS. All in all, it does not appear to me that you have found a paper that significantly contradicts VS’s analysis.

    Bart, thanks for this interesting blog.

    Will

  1353. DLM Says:

    Willem Kernkamp Says: “Perhaps this explains the difference with VS. All in all, it does not appear to me that you have found a paper that significantly contradicts VS’s analysis.”

    My guess is that sod et al will remain unconvinced, and hostile. It is not possible that they could have made a mistake with their little OLS tool. Hey, it gave them the answer they were looking for. No need for any of that complicated co-integration stuff.

  1354. VS Says:

    Bart, thanks for reverting :)

    Best, VS

    PS. Will, the difference in forecasting intervals is that BV employed an ARIMA(0,1,2) structure with drift, and I employed an ARIMA(3,1,0) structure without drift. Like I said, this is a matter of (informed scientific) opinion. We can debate it / discuss it, and employ more finely tuned diagnostic tools to pick the one we like. Honestly speaking, I haven’t found the time to perform any (extensive) formal comparison of the two specifications.

    I stress again, that the whole forecasting interval story was a discussion between Eduardo and me about his 2008 GRL paper. It was a detour.

    Now, note that because the (estimated) roots of the MA-based specification fall within the complex unit circle, the specification actually has an ARIMA(∞, 1, 0) representation (i.e. it is called ‘invertible’). So, if the ARIMA(3,1,0) specification is a good approximation to that inverted ARIMA(0,1,2) specification, the two representations are very similar. Note that this doesn’t imply that the MA specification is ‘right’, just that the two specifications are similar.

    This is quite logical, since they are different representations of the same (stochastic) process generating our observations.

    However, and more to the point, both of these structures in fact represent stochastic trends, so I have no clue what all the huffing and puffing is about (well, I do have a clue, but it has nothing to do with science).
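
    For the curious, a minimal Python sketch of this kind of model comparison, on a synthetic stand-in series (nothing here depends on the actual record being at hand); the differencing is done by hand so that the drift is simply the constant of the differenced series:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        y = np.cumsum(rng.normal(0.005, 0.1, 129))  # synthetic I(1) "temperature" series
        dy = np.diff(y)

        # BV-style ARIMA(0,1,2) with drift vs an ARIMA(3,1,0) without drift,
        # both fitted as ARMA models on the first differences.
        m_bv = ARIMA(dy, order=(0, 0, 2), trend="c").fit()
        m_vs = ARIMA(dy, order=(3, 0, 0), trend="n").fit()

        print("MA(2) + drift : AIC", round(m_bv.aic, 1),
              " drift p-value", round(float(m_bv.pvalues[0]), 3))
        print("AR(3), no drift: AIC", round(m_vs.aic, 1))

    Which specification an information criterion prefers will of course depend on the series fed in; the sketch only shows the mechanics of the comparison.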

  1355. Willem Kernkamp Says:

    VS,

    Thanks for the clarification about how the two stochastic models relate to each other.

    Currently studying Beenstock.

    Will

  1356. Bart Says:

    Manacker,

    You seem to ignore what I wrote: Nobody expects a one to one correlation of CO2 with temp, because there are many other factors influencing temp as well. They should also be included in any analysis looking at attribution (which, btw, is not being attempted in this thread at all). A causal relation between GHG and temp has not been refuted at all. (Nevermind the fact that it would somehow be similar to refuting gravity by pointing to a flying bird; there’re more forces besides gravity and there are more climate forcings besides CO2)

    In your cite from Eduardo, he argues (rightly imho) that a unit root cannot distinguish what are the most likely causes of the increase in global avg temp.

  1357. sod Says:

    I enjoyed reading that paper by Breush and Vahid. They gave a lot of attention to clarity. On the whole, their findings do not differ from what VS has been representing in this thread. This includes all aspects such as 1) the likely presence of a unit root or specifically a near-unit root and 2) an evaluation of trend breaks.

    However, near the end of the article there is a small difference. Their confidence interval for rejecting the H0 of no temperature trend is narrower than the one calculated by VS. As a result they (barely) accept that there is a statistically significant trend whereas VS’s calculation (just) rejects it.

    you must have failed to read the abstract of their paper:

    Our analysis shows that the upward movement over the last 130-160 years is persistent and not explained by the high correlation, so it is best described as a trend. The warming trend becomes steeper after the mid-1970s, but there is no significant evidence for a break in trend in the late 1990s. Viewed from the perspective of 30 or 50 years ago, the temperatures recorded in most of the last decade lie above the confidence band of forecasts produced by a model that does not allow for a warming trend.

    it is really nice that real scientists put their most important results into a short summary of their paper.
    the most important results of the Breusch paper do contradict VS!

    As a result they (barely) accept that there is a statistically significant trend whereas VS’s calculation (just) rejects it.

    Perhaps this explains the difference with VS. All in all, it does not appear to me that you have found a paper that significantly contradicts VS’s analysis.

    if you look at figure 2, you will see that modern temperatures are scraping along the upper end of the VS forecast interval.

    a serious analysis might also factor in temperatures of this year so far, which look like they might break out of that interval.

    so i think that it is actually VS who barely gets the result that he desires (choice of 1935, weird forecast interval).

    but yes, Breusch is more careful in his choice of words. that is how science is done. VS is overconfident. his story is built on sand.

  1358. VS Says:

    Bart,

    “In your cite from Eduardo, he argues (rightly imho) that a unit root cannot distinguish what are the most likely causes of the increase in global avg temp.”

    True, but it is completely irrelevant, as the test is not meant for that purpose. For that you need cointegration analysis.

    The tests are meant to test the assumption of stationarity, which dictates the direction/methods employed in any further analysis.
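
    To make the distinction concrete, here is a minimal Python sketch of the Engle-Granger cointegration test on synthetic series (illustrative only): one pair sharing a stochastic trend, one pair of independent random walks:

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(3)
        n = 130
        x = np.cumsum(rng.normal(0, 1, n))        # an I(1) "forcing"-like series
        y_coint = 0.5 * x + rng.normal(0, 1, n)   # shares x's stochastic trend
        y_indep = np.cumsum(rng.normal(0, 1, n))  # an unrelated random walk

        for name, y in (("cointegrated pair", y_coint), ("independent walks", y_indep)):
            tstat, pval, _ = coint(y, x)  # Engle-Granger two-step test
            print(f"{name}: t = {tstat:.2f}, p = {pval:.3f}")
        # Only the first pair should reject "no cointegration" most of the time;
        # a levels-on-levels OLS between the second pair would still look "significant".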

    I answered Eduardo here.

    Also, about that 1935 date, I motivated (to Eduardo!) that particular choice in that particular context, here.

    People, please read the thread before diving head-first into the technical discussion.

    Thanks.

    Best, VS

    PS. Also, I would like to take this opportunity to point out how stimulating a debate with ‘skeptics’ is.

    So far, I have been pushed to thoroughly examine every single assumption underpinning my analysis (viz. both theoretical and simulation evidence I was ‘forced’ to provide).

    I would never dream of describing this as ‘obstructionism’. Rather, I would call it science.

    Thank you all for this, you really pushed me to my limits! :)

  1359. DLM Says:

    From the B&V ‘working paper’ (which I guess is decidedly more authoritative than VS’s comments here, because there are two of them, and we know their names):

    “Viewed from the perspective of 30 or 50 years ago, the temperatures recorded in most of the last decade lie above the confidence band of forecasts produced by a model that does not allow for a warming trend.”

    Viewed from the perspective of 30, or 50 years prior to 1970, and 1980 respectively ( I will help you it’s 1940), what would one who is convinced that increasing CO2 must have a major impact on GMT expect the 30 and 50 year GMT trends to look like? I’m talking to you sod. What would the current state-of-the-art climate models have told you to expect back in 1940? And let’s give you perfect knowledge on the future levels of CO2 to get you started.

  1360. Willem Kernkamp Says:

    In reply to:
    sod Says:
    April 5, 2010 at 21:08

    “you must have failed to read the abstract of their paper:”

    Now sod,

    Since when does reading the abstract of a paper trump reading the paper itself? When you get around to reading it, your conclusions should not fall too far from mine as described above.

    Will

  1361. DLM Says:

    OOPS! : “Viewed from the perspective of 30, or 40 years prior to 1970, and 1980 respectively ( I will help you it’s 1940), “

  1362. manacker Says:

    Bart

    You wrote

    You seem to ignore what I wrote: Nobody expects a one to one correlation of CO2 with temp, because there are many other factors influencing temp as well. They should also be included in any analysis looking at attribution (which, btw, is not being attempted in this thread at all). A causal relation between GHG and temp has not been refuted at all.

    (Nevermind the fact that it would somehow be similar to refuting gravity by pointing to a flying bird; there’re more forces besides gravity and there are more climate forcings besides CO2)

    In your cite from Eduardo, he argues (rightly imho) that a unit root cannot distinguish what are the most likely causes of the increase in global avg temp.

    No, Bart, I did not ignore what you (or anyone else here) has written.

    Sure there are “many other factors” influencing temperature. VS has just shown us that the correlation between GHG and temperature is not statistically robust. While this in itself may not “refute causation” (as you say), it does raise serious questions regarding the case for causation.

    Eduardo’s statement does not refute the logic of VS, as VS points out in his last post to you:

    In your cite from Eduardo, he argues (rightly imho) that a unit root cannot distinguish what are the most likely causes of the increase in global avg temp.

    True, but it is completely irrelevant, as the test is not meant for that purpose.

    Bart, I think my short summary of the conclusions reached on this thread still stands until someone can show that the points made by VS are false, which no one has done to date.

    Max

  1363. manacker Says:

    Bart

    I forgot to thank you and the many contributors here, especially VS, for a very interesting discussion.

    Max

  1364. sod Says:

    Since when does reading the abstract of a paper trump reading the paper itself? When you get around to reading it, your conclusions should not fall to far from mine as described above.

    abstracts and conclusions summarize the main findings of a paper. Breusch comes to a conclusion that contradicts what VS claims here!

    5 Conclusion
    We conclude that there is sufficient evidence in temperature data in the past 130-160 years to reject the hypothesis of no warming trend in temperatures at the usual levels of significance.

    Click to access wp495.pdf

    .———————

    Bart, I think my short summary of the conclusions reached on this thread still stands until someone can show that the points made by VS are false, which no one has done to date.

    you folks and VS keep ignoring the contradictions. that makes contradicting you a little hard.

  1365. Anonymous Says:

    VS,

    Hey, I just (barely, C+ level) made it to Section 9.2 ‘Models with Nonstationary Variables’ in Verbeek’s 2nd Edition ‘A Guide to Modern Econometrics’ (2004)!!

    Now for the real meat in all that gravy.

    DON’T you dare even consider giving up here after all the homework I’ve done just to get a glimpse of what you are talking about.

    I am loving it.

    VS, thank you for the enlightenment.

    John

  1366. John Whitman Says:

    VS,

    Sorry for the ‘Anonymous’ post at ‘April 5, 2010 at 23:54’

    I am pathologically Non-Anonymous. : )

    Bart, sorry for messing up your commenting protocols. With me it is something about ‘Old dogs, new tricks . . .’

    John

  1367. Bob_FJ Says:

    Dhogaza,
    In your recent flurry of comments starting at April 5, 2010 at 03:33, one thing I did find interesting was your:

    Tamino does time series analysis for a living, it is his profession.

    I’ve read statements before that he is a statistician, and have often wondered where this originates. I’m now aware of the following description of his book after a brief Google, but can’t seem to find much else about him.

    In [the book] “Noise: Lies, Damned Lies, and Denial of Global Warming” statistician Grant Foster shows how the manipulation of figures can be used to mislead the average person about global climate change. Using clear, plain language that can easily be understood by anyone, regardless of math grades, Foster arms the reader with the critical thinking skills necessary to help discern the signal of fact from the noise of misinformation.

    Can you point me to something less vague about his competence?

    Incidentally, I have illustrated over at RealClimate that Tamino, in his article “Volcanic Lull”, uses a strange “30-year smooth” that appears to be in error, or at best lacks a proper definition to explain it. In response, David B Benson has commented to the effect that decadal averaging would be a better methodology to overcome problems with smoothing. Interestingly, Gavin Schmidt and Ray Ladbury have remained silent on this matter, although they have commented elsewhere on this topic. (As has Tamino.)

    BTW, I noticed that within the “Climategate Emails” some of his colleagues suggested that Tamino might be the best guy to write something up that they had difficulty with. (Spin?)

    Oh, and one joker has suggested that Grant Foster may be a play on words; “to foster grants”.

  1368. Alex Heyworth Says:

    Those with a serious interest in taking this discussion further might like to consider the Koutsoyiannis paper, “A random walk on water”, found here http://www.itia.ntua.gr/en/docinfo/923/.

  1369. VS Says:

    “From the B&V ‘working paper’ (which I guess is decidedly more authoritative than VS’s comments here, because there are two of them, and we know their names):”

    Professor Trevor Breusch is definitely more authoritative than me on econometrics in general, and on these issues in particular. You might have noticed that he is the (co)author of the Breusch-Godfrey test for serial autocorrelation that we used to test whether various trend specifications were well behaved in that sense.

    As a matter of fact, I hope that climate scientists indeed begin to know his name (and his tests), instead of continuing to use the hopelessly obsolete Durbin-Watson test, which furthermore doesn’t work if the specification/process contains AR terms (see wiki link above). Note that most real time series, including temperature series (regardless of the stationarity assumptions), contain AR terms.
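
    The failure mode is easy to demonstrate; a minimal Python sketch (synthetic data, made-up coefficients) with AR(2) errors whose lag-1 autocorrelation is zero, so Durbin-Watson sees nothing while Breusch-Godfrey at two lags does:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import acorr_breusch_godfrey
        from statsmodels.stats.stattools import durbin_watson

        rng = np.random.default_rng(7)
        n = 130
        u = np.zeros(n)
        for t in range(2, n):
            u[t] = 0.5 * u[t - 2] + rng.normal(0, 1)  # pure lag-2 dependence
        x = np.arange(n, dtype=float)
        y = 0.01 * x + u

        res = sm.OLS(y, sm.add_constant(x)).fit()
        print("Durbin-Watson:", round(durbin_watson(res.resid), 2))  # near 2: looks "fine"
        lm, lm_pval, _, _ = acorr_breusch_godfrey(res, nlags=2)
        print("Breusch-Godfrey p-value:", round(lm_pval, 4))  # rejects no-autocorrelation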

    However, this is all beside the point, as Breusch and Vahid explicitly avoid the unit root discussion we’re having here. I cite: “The question we are trying to answer though is not about a unit root in the temperature data, it is about a tendency of the data to drift upwards.” So they take both assumptions (without delving too deeply into which one is correct, although they did provide me with a crucial reference to answer this question :) and take a look at the outcomes. Under non-stationarity they arrive at an ARIMA(0,1,2) with significant drift, and I arrive at an ARIMA(3,1,0) without significant drift. Both have an equal R2 and are well-behaved. Which one is more appropriate is still an open question.

    It seems that some people commenting here don’t understand either: the BV paper / my analysis / both.

    In any case, I sincerely doubt that Professor Breusch would disapprove of what I’m doing here.

    Best, VS

  1370. VS Says:

    PS. John, once you read that chapter, you’ll see how blindingly obvious (if one accepts the unit root) all the implications I’m making here are.

    For everybody else, I’ll reproduce the first sentence of the first paragraph of section 9.2.1 (in my version) of the (undergraduate) textbook John is referring to:

    “The assumption that the Yt and Xt variables are stationary is crucial for the properties of standard estimation and testing procedures.”

  1371. Willem Kernkamp Says:

    In reply to:
    sod Says:
    April 5, 2010 at 23:25

    “abstracts and conclusions summarize the main findings of a paper. Breusch comes to a conclusion, that contradicts what VS claims here!

    5 Conclusion
    We conclude that there is sufficient evidence in temperature data in the past 130-160 years to reject the hypothesis of no warming trend in temperatures at the usual levels of significance.”

    sod,

    In the body of the article we find:

    “While the t-statistics of the coefficient of the trend in equations (1), (2) and (3) were around 4, implying a highly significant and precisely estimated positive trend, the t-statistics for the drift terms (the intercepts) in equations (4), (5) and (6) are 1.83, 1.78 and 2.23 respectively. While these are sufficient to reject the null hypothesis of zero drift against a one-sided alternative of a positive drift at the 5% level of significance, they show that there is not as much information about the linear trend in data as implied by equations (1), (2) and (3).”

    Here the equations (1), (2) and (3) represent the “usual” stationary regression and (4), (5) and (6) a correct model for a DGP that contains a unit root. (There is a regression equation for each of three temperature records in both cases).

    Breusch and Vahid show that the unit root causes the assessment to change from a precise trend with narrow 5% and 1% confidence limits to a bare rejection of the null hypothesis of no trend at the 5% confidence level. The assessment has obviously become much weaker. It appears that the conclusion and the abstract understate the content of the article. I would say that they probably should not have said “at the usual levels of significance.”
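
    The mechanics of that weakening are easy to replicate; a minimal Python sketch on a synthetic random walk with a small drift (parameters made up; the exact t-values vary by draw, and the systematic gap between the two is the point):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 130
        y = np.cumsum(rng.normal(0.005, 0.1, n))  # random walk with small drift
        t = np.arange(n, dtype=float)

        # Equations (1)-(3) style: OLS trend on the levels; the t-statistic is
        # spuriously inflated when y contains a unit root.
        levels = sm.OLS(y, sm.add_constant(t)).fit()
        # Equations (4)-(6) style: the drift is the mean of the first differences.
        drift = sm.OLS(np.diff(y), np.ones(n - 1)).fit()

        print("trend t (levels):     ", round(levels.tvalues[1], 2))
        print("drift t (differences):", round(drift.tvalues[0], 2))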

    I don’t think it is that uncommon for scientists to tone down conclusion and abstract to ease their work into publication. In fact, I have seen it before.

    In short, Breusch does not contradict what VS says here!

    Will

    The publication can be found here:

    Click to access wp495.pdf

  1372. VS Says:

    correction: “AR terms” should read “additional AR terms”, so in excess of the AR(1) term.

  1373. VS Says:

    Thanks Willem,

    I wanted to start elaborating on that as well, but as I already noted, it’s really tiring.

    Do note that while your nice summary is wasted on the particular person you are addressing, it should clear up some misconceptions for the other readers :)

    Best, VS

  1374. DLM Says:

    VS says:”Professor Trevor Breusch is definitely more authorative than me on econometrics in general, and on these issues in particular. You might have noticed that he is the (co)author of the Breusch-Godfrey test for serial autocorrelation that we used to test whether various trend specifications were well behaved in that sense.”

    Well, I guess I am happy that I gave you an opportunity to say that. But what about the other guy? I bet Vahid is not as smart as you are. On average, you may be their better ;)

  1375. John Whitman Says:

    “VS Says: April 6, 2010 at 01:28 – I’ll reproduce the first sentence of the first paragraph of section 9.2.1 (in my version) of the (undergraduate) textbook John is referring to: ‘The assumption that the Yt and Xt variables are stationary is crucial for the properties of standard estimation and testing procedures.’”

    VS,

    Yes, in my version section 9.2.1 starts out the same.

    Also, I like the readability of Verbeek, so I also ordered the 3rd edition (2008) : ) Due at my house tomorrow.

    Philosophically, I recommend you do not lend credibility to the premises of insincere commenters; you know who I am referring to. Of course we all individually need to decide whether a commenter is sincere or not. If a fairer playing field is to prevail, it will be one more similar to the field that you started here at Bart’s.

    Thanks Bart for the playing field support.

    John

  1376. DeWitt Payne Says:

    No one has bothered to comment on my post above that shows that the total forcings do, in fact, correlate with the temperature data when a model with time constants is used with a correlation coefficient that is highly unlikely to have been obtained by chance. I take that to mean that positions have hardened so much that further discussion here is pointless. VS’ point that the IPCC confidence intervals are too small is valid. However, the argument that if one cannot find a linear relationship between temperature and CO2 forcing alone, then AGW is falsified is a straw man. Confirmation bias seems to be in action again. Given the physics of radiative heat transfer and the large heat capacity of the oceans, the surface temperature should show what looks like unit root behavior. Btw, the B&V graph strongly, not barely, rejects the hypothesis of no trend. VS’ graph, OTOH, barely fails to reject the no trend hypothesis.
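
    For concreteness, a minimal Python sketch of the kind of time-constant model being referred to — a one-box response dT/dt = (λF − T)/τ driven by a forcing history (τ, λ and the forcing ramp are all illustrative assumptions, not DeWitt’s fitted values):

        import numpy as np

        rng = np.random.default_rng(9)
        years = np.arange(1880, 2010)
        # Stand-in net forcing history, W/m^2 (made up: a ramp plus noise)
        F = 0.02 * (years - years[0]) + rng.normal(0, 0.2, years.size)

        tau, lam = 10.0, 0.5  # assumed time constant (years) and sensitivity (K per W/m^2)
        T = np.zeros(years.size)
        for i in range(1, years.size):
            T[i] = T[i - 1] + (lam * F[i] - T[i - 1]) / tau  # discrete one-box step

        # The lagged, integrated response is smooth and persistent, i.e. it can look
        # "unit-root-like" over a century even though the generating model is
        # deterministic physics plus noise.
        print("corr(F, T) =", round(np.corrcoef(F, T)[0, 1], 2))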

  1377. DLM Says:

    VS says: “However, this is all beside the point, as Breusch and Vahid explicitly avoid the unit root discussion we’re having here. I cite: “The question we are trying to answer though is not about a unit root in the temperature data, it is about a tendency of the data to drift upwards.” So they take both assumptions (without dwelling into which one is correct too deeply, although they did provide me with a crucial reference to answer this question :) and take a look at the outcomes.”

    I was moved to re-read the B&V paper (twice) in a different light by your gracious account of Prof. Breusch’s qualifications. Now I will have to admit that I can see why your assertion that you and they are in basic agreement is being questioned.

    I am not sure that it is correct that they have explicitly avoided discussion of the unit root. And if they have avoided it, I wonder why they would do so, if the issue is so important. It seems to me that they did discuss the unit root issue, and they said: “The question we are trying to answer though is not about a unit root in the temperature data, it is about a tendency of the data to drift upwards.” Aren’t you saying that determining whether there is a trend or not is about a unit root in the temperature data?

    Their conclusion: “We conclude that there is sufficient evidence in temperature data in the past 130-160 years to reject the hypothesis of no warming trend in temperatures at the usual levels of significance.”

    Isn’t your conclusion the opposite?

    I obviously don’t have much knowledge of statistics (just like sod), but I can read. And the words are not currently adding up on this specific issue – the alleged conflicts between your analysis and the B&V paper. I read Willem’s response to sod several times and it doesn’t clear it up for me.

    In a nutshell:

    1. Why would B&V explicitly avoid discussion of the unit root?

    2. Why did they find a warming trend in the data?

    Help me out VS.

  1378. hengav Says:

    OK DeWitt, I will comment.

    The fundamentals of physics continue to be the straw man in this thread, not, as you assert:

    “However, the argument that if one cannot find a linear relationship between temperature and CO2 forcing alone, then AGW is falsified is a straw man.”

    We are discussing the merits of the statistics used; I, for one, and most others are definitely NOT trying to draw conclusions.

    Cheers

  1379. Don Jackson Says:

    Again from a naive perspective: If the data you have require different statistical tools than those you’d like to use: Show that those you’d like are adequate.

    If the tools that are shown to be adequate are not to your liking, say so. And say why.

    I (think I) understand why most climate science professionals are opposed to this level of statistical rigor.

  1380. sod Says:

    We are discussing the merits of the statistics used, I for one, and most others are definitely NOT trying to draw conclusions.

    Tom Fuller didn’t get the memo, I guess. I did a Google blog search (“unit root” climate) and his article is on the first page of results:

    What does all this mean? It could mean that the theory is incorrect. Or, it could mean that the data are not “accurate” enough to exhibit the “theoretical relationship.” It certainly “raises a red flag” as VS has noted several times. And, it does mean that one can’t simply point to highly correlated time series data showing rising CO2 concentrations and rising temperatures and claim the data support the theory.”

    http://www.climatechangefraud.com/behind-the-science/6661-global-warming-bigger-than-climategate-more-important-than-copenhagen-its-statistical-analysis

    the unit root story will be spun into “there is no warming”. anyone who has spent even a few months following the discussions knows this. and VS knows it as well.

    it is what he wants, not a misperception of what he does.

    —————————-

    I (think I) understand why most climate science professionals are opposed to this level of statistical rigor.

    you think, eh?

    look at the graph again:

    the VS way of doing an analysis leaves us with zero knowledge about temperature. it could go up by 1°C or down by the same within a couple of decades.

    his statistical approach is not only false, as I and others have demonstrated above, it is also completely useless.

  1381. sod Says:

    Btw, the B&V graph strongly, not barely, rejects the hypothesis of no trend. VS’ graph, OTOH, barely fails to reject the no trend hypothesis.

    true. but somehow this gets spun the other way round in this discussion.

    VS and his followers just really, really want to get the results that they desire. in the end, statistics and numbers are only a tool to achieve this, and don’t matter if they contradict the preferred outcome.

  1382. Bart Says:

    Manacker,

    It seems to me that you’re trying very hard to draw conclusions that cannot be made on the basis of unit root presence/absence. In your cite from VS he also says that, indeed, attribution of what has caused the warming is not something that can be done on the basis of his analysis so far. So the opposite, claiming that all attribution studies to date have been refuted, is therefore unwarranted.

  1383. Tim Curtin Says:

    Bart: as long ago as March 8, 2010 at 10:59 you said: “Tim Curtin, It appears that you only tested for the response to CO2. However, there are more forcings than just CO2. Climate responds to the net forcing.” I then reported various of my regression results for other forcings (like measures of solar radiation, water vapour, and most recently both wind speed and relative humidity). All of my previously reported regressions here confirm the statement by VS (9th March): “There is no rigorous empirical proof that CO2 is (significantly) influencing temperatures”.

    Bart replied (March 9, 2010 at 09:45): “VS, Now you’re making some very dubious claims. Satellite measurements of outgoing longwave radiation find an enhanced greenhouse effect (Harries 2001, Griggs 2004, Chen 2007). This result is consistent with measurements from the Earth’s surface observing more infrared radiation returning back to the surface (Wang 2009, Philipona 2004, Evans 2006). Consequently (sic!), our planet is experiencing a build-up of heat (Murphy 2009). These findings provide ‘direct experimental evidence for a significant increase in the Earth’s greenhouse effect that is consistent with concerns over radiative forcing of climate’.”

    Bart, what you omitted to do there was show any quantitative effect on global warming from the “direct experimental evidence for a significant increase” in the “enhanced greenhouse effect”. This omission is recurrent among all too many of your friends posting here, as noted by VS when he asked (March 9, 2010 at 10:18): where is the empirical proof (i.e. regression analysis)?

    Similarly Tamino at his Open Space (16 March 2010) said: “And let’s be very clear about one thing: climate forcing will affect global temperature. Unless, of course, you’re willing to deny the laws of physics.” That is not the issue; there are lots of effects in physics that are of no practical consequence. Tamino, like Bart, never offers any empirical proof via regression analysis that the climate forcing has ever or will ever affect global temperature to any measurable degree. That the GISS series from 1900 to now shows a rise of just 0.7°C in GMT as against the increase of c. 40% in atmospheric CO2 is not evidence: there is a much better correlation between world GDP and [CO2].

    I know my regressions never attract much support here – but that is better than refutations, of which there have been none! Let me take up Bart again, and use the GISS series for global “Net Forcing” of GHG from 1900 to 2003 (when it stops, oddly enough). Here is the full set of results for the usual variables plus Net Forcing at Pt Barrow 1960-2003 (R2 = 0.55):

    Variable   Coefficient   Std. Error   t Stat   P-value

    dAvglo     0.001575      0.002692     0.585    0.5622
    dOPQ       1.091819      0.560822     1.947    0.0594
    dH2O       12.24931      3.405393     3.597    0.0010
    dAETRN     9.43E-05      0.004172     0.023    0.9821
    dRH        0.054538      0.052033     1.048    0.3016
    dWS        1.224792      0.485347     2.524    0.0162
    NF GISS    0.006202      0.160149     0.039    0.9693

    Not surprisingly at latitude 71.3N the sun has little impact, in part because the “avglo” is net of albedo.

    So let’s rerun like IPCC (WG1 Chapter 9), ignore non-anthropogenic forcings, and use just the NF of all GHGs as the putative forcing of first-differenced average temperature (to escape its unit root):

    NF   0.246065   0.235775   1.044   0.3026

    Clearly GISS’ Net Forcing does nothing on its own to explain changes in temperature at Barrow (adj. R2 = 0.001; significant to Michael Mann, Schmidt, and Tamino perhaps, but not to many others).

    Well, let’s assume like Bart that average temperature from 1960 to 2003 does not have a unit root, so we can regress the absolute values on GISS’ Net Forcing. Wonderful: R2 soars to 0.32, and with the coefficient at MINUS 6.96, t = MINUS 4.76 and p = 2.23E-05 we have superb statistical significance for the result that Net Forcing has a powerful effect on temperature change; it’s just too bad that it is actually NEGATIVE.

    What saves the day for climate “scientists” (name one of the IPCC’s Ch. 9 authors who has ever reported any multiple regressions) is that the Durbin-Watson statistic, at 0.197, shows massive spurious correlation, cause for serious alarm (Wiki). Is this why Hegerl & Zwiers (lead authors of WG1 chapter 9, “Understanding and Attributing Climate Change”) never present any data sets or regressions thereon, even though they are the source of the IPCC’s 90% certainty that more than 50% of warming since 1950 is due to Net Forcing by AGG?
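
    That Durbin-Watson reading is the classic Granger-Newbold symptom, and anyone can reproduce it; a minimal Python sketch regressing one simulated random walk on another, completely independent, one:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson

        rng = np.random.default_rng(11)
        n = 130
        x = np.cumsum(rng.normal(0, 1, n))  # two *independent* random walks
        y = np.cumsum(rng.normal(0, 1, n))

        res = sm.OLS(y, sm.add_constant(x)).fit()
        print("R2 =", round(res.rsquared, 2),
              " t =", round(res.tvalues[1], 1),
              " DW =", round(durbin_watson(res.resid), 2))
        # A high R2 and t-statistic combined with a DW far below 2 is the
        # Granger-Newbold signature of a spurious levels-on-levels regression.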

    It is not enough to say Barrow is not representative, as The Science assures us that warming will be largest at high NH latitudes, and you don’t get much higher than Barrow!

    Bart, you said (April 5, 2010 at 21:08): “Nobody expects a one to one correlation of CO2 with temp, because there are many other factors influencing temp as well. They should also be included in any analysis looking at attribution (*which, btw, is not being attempted in this thread at all*)”. Really? So what have I been doing again and again here?

    Bart added loc. cit.: “A causal relation between GHG and temp has not been refuted at all.” Well, given that Barrow is one of the main measuring sites for GHG, I think my results confirm the null: there is NO evidence to show a causal relation, at Barrow or anywhere else I have reported above.

  1384. Bart Says:

    I think DeWitt Payne made a very important point, and I’d be curious to have VS and others comment on it.

    Sod in reply to hengav makes the valid point that indeed many people seem intent on drawing conclusions along the lines of ‘no significant warming’ or ‘no causal relation between GHG and temp’, even though they are not warranted by the analyses done (the latter definitely not; some would claim the former is).

  1385. DLM Says:

    “the unit root story will be spun into “there is no warming”. anyone who has spent even a few month following the discussions knows this. and VS knows as well.”

    Why don’t you calm down sod. The wildest claims being made here are by yourself.

    Didn’t the famous climate scientist and Nobel laureate Phil Jones, PhD, get the memo? He very recently said that there has been no statistically significant warming for the last 15 years. More from the BBC interview, in which Dr. Phil spilled his guts:

    Harrabin – Do you agree that according to the global temperature record used by the IPCC, the rates of global warming from 1860-1880, 1910-1940 and 1975-1998 were identical?

    Jones – The 1860-1880 period is only 21 years in length. As for the two periods 1910-40 and 1975-1998 the warming rates are not statistically significantly different.

    Where is the unnatural warming, sod? How do you explain the lack of warming from 1940 to 1975? Isn’t that just when humans really started cranking out the CO2? Where is the correlation?

  1386. steven mosher Says:

    manacker,

    “In simple terms, this tells me that the presence of a unit root rules out a deterministic trend and that the correlation between GHG and temperature is, therefore, not statistically robust, thereby raising serious questions regarding the case for causation.”

    I’ll just repeat what DeWitt has said. If you want to make a case against the causation then you actually have to look at the sum total of the causes, the total forcings. He’s made that point several times but nobody takes notice of it.

  1387. Bart Says:

    Tim,

    Global vs. local (at the risk of repeating myself)

    DLM,

    Total forcings vs. only CO2 (at the risk of repeating myself). See also Steven Mosher’s latest comment.

  1388. Bob_FJ Says:

    Steven Mosher, DeWitt Payne & Tim Curtin et al
    Consideration of net forcings such as this [GISS net forcings graph] is obviously important, but whatever happened to internal variability such as AMO and PDO?

    DLM
    Does the following graph help with the 1940 through 1975 mystery and the sharp warming before 1940?

  1389. VS Says:

    Hi Dewitt,

    “For 10,000 trials, the probability that R^2 for the fit of the forcings to the synthetic series would exceed 0.75 was 5.2%”

    I don’t understand the point of this exercise.

    Our estimated R2 is around 0.22. What you basically simulated is the distribution of that statistic conditional on an ARIMA(3,1,0). Interesting, but irrelevant in this case, and I don’t think you proved what you wanted to prove. A ‘trend’ is not supposed to capture that large a part of the variance. Explanatory variables are.
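
    For anyone who wants to see what “the distribution of that statistic conditional on an ARIMA(3,1,0)” means in practice, a minimal Python sketch (the AR coefficients and the regressor are made-up assumptions for illustration, not the fitted values from this thread):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(13)
        n, trials = 129, 1000
        phi = np.array([0.45, -0.25, 0.12])   # assumed AR(3) coefficients for the differences
        regressor = np.linspace(0.0, 2.0, n)  # a fixed, smoothly rising "forcing" stand-in
        X = sm.add_constant(regressor)

        r2 = np.empty(trials)
        for k in range(trials):
            d = np.zeros(n)
            for t in range(3, n):
                d[t] = phi @ d[t - 3:t][::-1] + rng.normal(0, 0.1)
            y = np.cumsum(d)  # an ARIMA(3,1,0) realisation, no drift
            r2[k] = sm.OLS(y, X).fit().rsquared

        print("P(R2 > 0.75) =", (r2 > 0.75).mean())  # how often chance alone beats 0.75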

    Also, since everybody keeps mentioning these net forcings:

    First off, the series looks funny, with all the spikes shooting down in random years, and I (personally) wouldn’t use it for any econometric analysis (also because it’s a chewed-out model output, and it looks that way, so we have no idea what it is we’re actually using there). Second, it tests positive for unit roots, so you need to use cointegration in order to relate it to temperature (and not the CADF, and not regular OLS..). And finally, the trend is not supposed to ‘explain’ the temperature rise. We need cointegration for that.

    Finally, you mentioned that the ARIMA(1,1,2) specification is ‘best’:

    “If I use the 1880-2003, then the ‘best’ model seems to be (1,1,2) and the drift term is still significant.”

    It’s unclear if you mean ARIMA(1,1,2) or ARIMA(2,1,1), as you list the first, and report estimation results for the second. I estimated both:

    ARIMA(1,1,2):

    Coefficient, point estimate, p-value

    Constant, 0.006572, 0.0131 (sig at 5%)
    AR(1), 0.235074, 0.0869 (sig at 10%)
    AR(2), -0.128751, 0.2448 (insig)
    MA(1), -0.737909, 0.0000 (sig at 1%)

    ARIMA(2,1,1):

    Coefficient, point estimate, p-value

    Constant, 0.006409, 0.0118 (sig at 5%)
    AR(1), 0.093808, 0.8161 (insig)
    MA(1), -0.597617, 0.1368 (insig)
    MA(2), -0.147818, 0.5845 (insig)

    First off: “The AR1, AR2, MA1 and intercept are significant”???

    I think you need to check the backcasting algorithm that is used to fit the MA terms (hence my suggestion to use a non-custom software package); the estimate of -1 on the MA(1) coefficient is an indication that it messed up somewhere (think carefully about what that estimate implies).

    Second, how is either of these specifications ‘better’ than the ARIMA(3,1,0) or ARIMA(0,1,2)? Is it because they both indicate a significant (at 5%) drift parameter? You mentioned ‘confirmation bias’ somewhere up there…
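    (For anyone who wants to check these numbers themselves, a minimal statsmodels sketch; the placeholder random walk below must of course be replaced by the actual GISS annual series:)

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        # Placeholder for the 1880-2003 GISS annual anomalies; substitute the real data.
        temp = pd.Series(np.random.default_rng(1).normal(scale=0.1, size=124).cumsum())
        d_temp = temp.diff().dropna()    # ARIMA(p,1,q) with drift = ARMA(p,q) plus constant on differences

        for p, q in [(1, 2), (2, 1), (3, 0), (0, 2)]:
            res = ARIMA(d_temp, order=(p, 0, q), trend='c').fit()
            print(f"ARIMA({p},1,{q}): AIC = {res.aic:.2f}")
            print(res.summary().tables[1])    # coefficient estimates with p-values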

    —————

    Hi DLM,

    “In a nutshell:

    1. Why would B&V explicitly avoid discussion of the unit root?

    2. Why did they find a warming trend in the data?”

    (1) I would say they wanted to avoid the discussion we’re having right now, but that’s just speculation. Breusch knows very well what a unit root in the record implies. The whole paper is drenched in ‘hints’. Take another look at Willem Kernkamp’s post here.

    (2) They came to an ARIMA(0,1,2) specification, which contains a significant drift parameter. This last part implies a deterministic rise in each period, and different forecasting confidence intervals than the ARIMA(3,1,0) specification I found. I listed plenty of arguments why the question of which one is more appropriate is still open.

    I haven’t replicated their entire analysis (only a part), but like I said, this whole point is an irrelevant red herring in the unit root discussion, and I’m really not interested in continuing it for now.

    —————

    Hi Don Jackson,

    “Again from a naive perspective: If the data you have require different statistical tools than those you’d like to use: Show that those you’d like are adequate.

    If the tools that are shown to be adequate are not to your liking, say so. And say why.”

    Please read the thread first.

    Here’s what the Nobel committee thought about the tools that econometricians believe should be used in this instance.

    Statistics is a formal discipline, like regular mathematics. You can’t just ‘pick’ the method you like, and keep calling it statistics. Ergo, what climate science professionals ‘feel like using’ in this instance, is quite irrelevant.

    Take a look at this post. It gives the methodological context.

    —————

    Hi Bart,

    Sod made no point so far except drag out this discussion, create confusion and tire everybody in the process. While I understand that your policy prohibits you from banning him, I resent that you are now actually ‘endorsing’ his commenting methods, thereby encouraging his smears, strawmen and spins.

    I’m serious Bart, I really resent that.

    We emailed about this.

    Best, VS

  1390. Tim Curtin Says:

    Bart: at the risk of stating the obvious (to most here):

    1. The “global” is made up of the sum of the “local”. If Einstein’s E=mc^2 had been refuted at, say, Barrow, it was, as he admitted, invalid everywhere. Same with Newton. I now have some 50 sets of “local” temperature, natural forcings (e.g. solar, RH, etc.) and anthro “net forcings” per GISS across the USA. How big does the sample have to be before it becomes significant? So far, at none of the 50 (nor at the dozen or so Australian sites I have studied) does CO2 play any role, in any shape or form, in explaining temperature changes. Instead of just repeating yourself, which I also find boring, please show your own regression of all relevant data for your own home town that confirms your faith in the magic qualities of atmospheric CO2.
    2. Total forcings vs. only CO2 (at the risk of repeating myself). Which forcings have I left out? I always include surface solar radiation (in situ, not up at TOA, where it is the same everywhere at 1365 W/sq.m day and night 24/365 for the flat earth assumed by Trenberth & Kiehl, above which there is an apparently stationary sun), relative humidity, water vapour (H2O), windspeed, etc., plus the all-in net anthro forcing of GISS that I used in my last post (it would not help your cause if you had actually read my results). Please tell me which “forcings” I omitted in my last post and where I can find the data – NOAA/NCDC/NREL evidently cannot think of any in the databases of theirs that I use that I have not used here.

  1391. VS Says:

    “I estimated both:”

    Hahaha, and in my reply I named the ARIMA(1,1,2) the ARIMA(2,1,1), and vice versa… :)

    Sorry about the typo.

    Note: ARIMA(p,d,q) indicates

    – p AR terms, so AR(1)..AR(p)
    – order of integration d, so you have to difference the series d times
    – q MA terms, so MA(1)..MA(q)

    I hope that clears up some confusion.
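    A small illustration of the notation (assuming statsmodels, with a placeholder series): an ARIMA(3,1,0) fitted on the levels and an ARMA(3,0) fitted on the once-differenced series should recover essentially the same AR coefficients.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        y = np.random.default_rng(2).normal(size=200).cumsum()   # placeholder I(1) series

        levels = ARIMA(y, order=(3, 1, 0), trend='n').fit()           # d=1: differencing handled internally
        diffs  = ARIMA(np.diff(y), order=(3, 0, 0), trend='n').fit()  # same model, differenced by hand

        print(levels.params.round(3))   # AR(1)..AR(3) estimates, then the error variance
        print(diffs.params.round(3))    # nearly identical estimates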

    Best, VS

  1392. TT Says:

    Sod,

    I know someone who was failed in his doctoral examination (in a humanities field) because his examiners thought his argument–the truth of which was not disputed–would have bad social consequences. True story.

    Now compare this to your comment “the unit root story will be spun into ‘there is no warming’. anyone who has spent even a few months following the discussions knows this. and VS knows as well.”

    You seem to think VS should just keep quiet about the unit root in the temp record–even if true–because of the spin some people will put on it. That suggests you lack a disinterested, objective commitment to the scientific discussion that we’re trying to have. You’re disrupting that discussion with your fears about its *consequences*.

  1393. Bart Verheggen Says:

    VS,

    The spikes shooting down in the graph of climate forcings are from strong volcanic eruptions, which cause short-term cooling (due to them injecting aerosols into the stratosphere). The different forcing estimates are based on measurements and physical understanding, and they are what the climate is expected to respond to (as calculated using GCMs). If, as the forced component (or whatever you call it) in unit root or correlation testing, you use only one of those many forcings (namely CO2), then you’re not looking at the whole story, and the whole point of the analysis is then somehow lost.

    Sod wrote: “the unit root story will be spun into “there is no warming”.” I think that that is indeed happening and that indeed it is an entirely unwarranted conclusion. I didn’t endorse his commenting methods; I merely expressed that I agree with that specific point he made.

    Could you clarify in plain English (without stats lingo) which conclusions you feel are warranted and which are not? And hold people to account who are running away with your analyses to make all kinds of wild & unwarranted claims? That would prevent accusations of dogwhistle politics.

  1394. VS Says:

    Bart,

    (1) – I’m discussing the unit root here
    (2) – BR take the unit root findings, and perform a (polynomial) cointegration analysis

    (1) is relevant right now, (2) is for a (much more advanced) discussion.

    As for the drift: to be honest, I personally believe my ARIMA(3,1,0) no drift specification is more appropriate, on the basis of the impulse responses. Also, all my coefficients are sig at 1%, which does not hold for the ARIMA(0,1,2). Other than that, they are more or less equivalent, in terms of diagnostics.

    Based on the estimates relating to my specification, there is indeed no ‘warming trend’ (i.e. there is no significant deterministic rise in each period, there is a ‘driftless’ stochastic trend). Using their specification, BV find a drift coefficient significant at 5%.

    However, this is a different discussion, and the difference in specifications is used as a red herring in the unit root discussion.

    I’m not interested in dealing with this particular model selection issue right now, and I explained that here.

    Best, VS

    PS. Thanks for the ‘spikes in net forcings’ clarification, that makes sense. I was (obviously) already wondering about that :)

  1395. Tim Curtin Says:

    Bart: Could you clarify in plain English why according to you the Laws of Physics re CO2 and AGW hold in some places, but not in others, e.g. New York, Indianapolis, Des Moines, Salt Lake City, Houston, Sacramento, LA, San Francisco, Fresno, San Juan (PR), Hilo (Hawaii), Barrow, Tasmania, Queensland et many al.?

    all the best

    Tim

  1396. HAS Says:

    I can see the Easter chocolate has people bouncing off the walls, and the S/N ratio (for the engineers) has declined.

    What I don’t understand is why there is so much angst about what are, after all, only simple empirical observations about the nature of a time series (even if aspects of the analysis may be open to theoretical debate), and so little curiosity about what this all means for statistical inference more generally in climate science. This is where the main game lies.

    In particular, Bart, how you leapt from my drawing an analogy from your own work in measuring particles in the atmosphere to what the science says about what those particles do in your lungs shows me you just aren’t getting it.

    Relax, I don’t think anyone here is arguing about the science of what CO2 does in the lab (apart from the obvious rabid fools). We are talking about what you can deduce from what is measured in the GISS Temp time series (and hence my surprise that while you aren’t particularly exercised about the absence of some statistical relationships in your own particulate measures, you seem to have a block about accepting the analogous lack in GISS GTM).

    I would say, IMHO, that what you are seeing here is something to pay attention to if you are engaged in climate science. In my earlier post about eduardo and your work, I was struck by the way that Rahmstorf has had to pull his socks up in terms of experimental design, statistical analysis and inference over a two-year period (and I suspect they have some way to go).

    Increasingly the bar for publication is being raised, and practices like not checking the data to make sure statistical tests remain valid, and treating models as reality in attempts to give a false sense of accuracy, will be marginalized (and I blame the engineers for the latter).

    As is obvious from a number of papers turned up in this thread these are issues that are being increasingly raised. Your blog is no doubt sensitizing a wider audience to them, and most seem appreciative.

    Grab the opportunity you have created here and incorporate it into your own work, would be my advice! And encourage the analysis to continue here, because I’m sure this is just the beginning (as long as it doesn’t all get bogged down and revert to opinion without debate (having myself just done exactly that :)).

  1397. sod Says:

    Didn’t the famous climate scientist, Nobel laureate, Phil Jones Phd. get the memo? He very recently said that there has been no statistically significant warming for the last 15 years.

    Jones spoke of a linear trend. That is a strange thing to discuss under this topic.

    And he told the interviewer that the period is too short to tell.
    The most important aspect of that reply is how it got spun by “sceptics” into all kinds of stuff (mostly ignoring the time aspect), but also into “there is no warming”.

    Where is the unnatural warming sod? How do you explain the lack of warming from 1940 to 1975? Isn’t that just when humans really started cranking out the CO2? Where is the corelation?

    I will not open another discussion about aerosols, but the “unnatural part” is more difficult than you think (or than VS makes it seem).

    There could be unnatural warming without global warming (land use change, causing regional warming) and unnatural warming during global cooling (less cooling than natural factors would cause).

    Most people have been confused and misled by VS and his presentation of things.

    What he looks at is a pure statistical approach.

    Some climate scientists use a pure statistical look at temperature data to confirm an unusual rise during the last couple of decades, but most don’t.
    And some people who use only statistical tools find the rise to be unusual (like Breusch).

    Such a method can NOT distinguish between natural and unnatural causes. It can only find that current temperature changes are similar to those of the recent past, or very different. It says absolutely nothing about the cause.
    (A strong human warming influence could be dampened into “usual” warming by cooling natural factors happening at the same time; on the other hand, moderate/weak human warming could add to strong natural warming and produce unusually high temperatures.)

    To get knowledge about “natural/unnatural” warming, you need to look at natural and unnatural forcings.
    You need to establish what effect natural forcings would have; then physics helps you figure out whether unnatural forcings can explain the difference between measured and naturally forecast temperatures.

    What VS does above is completely useless for such an analysis.

  1398. sod Says:

    Bart: Could you clarify in plain English why according to you the Laws of Physics re CO2 and AGW hold in some places, but not in others, e.g. New York, Indianapolis, Des Moines, Salt Lake City, Houston, Sacramento, LA, San Francisco, Fresno, San Juan (PR), Hilo (Hawaii), Barrow, Tasmania, Queensland et many al.?

    The sun is rising, and should have the same effect on the thermometer in front of my house and the one in the back. It doesn’t.

    With Tim Curtin logic, we have just dismantled the sun as a global factor affecting temperature…

  1399. Bart Says:

    VS,

    That’s more stats lingo than I was asking for. I prefer to discuss things in plain English, to avoid misunderstandings from different directions.

    If I understand correctly, (some of) the unit root tests need specification of the underlying trend/forced component/drift/whatever you call it. You have assumed it to be proportional to the CO2 forcing only (correct me if I’m wrong). Whereas that’s incomplete: It ignores the effects of other forcings.

    There is no reason to be happy with using just CO2 forcing (derived from concentrations), while unhappy with using the other forcings (which are also derived from concentrations and other measurements).

    Moreover, if you indeed claim that the increase in global avg temp has been merely random/stochastic/not deterministic (ie not *caused* by a specific set of factors), that is inconsistent with energy balance considerations and it is a conclusion that is very likely not robust to including more indicators of a warming world (changes in ocean heat content, sea level, glaciers, ice, ecosystems, etc).

    Tim,

    Nobody expects CO2 to correlate strongly on the local scale. Take it to the open thread please.

  1400. VS Says:

    “You have assumed it to be proportional to the CO2 forcing only (correct me if I’m wrong).”

    I didn’t assume anything.

    I just performed a formal trend analysis, which doesn’t involve any covariates. BV did the same thing.

    Covariate analysis requires cointegration (this is really the 20th time I write this).

    Furthermore, I tested my own specification for both residual non-linearities and non-linear drifts (i.e. polynomial). I found no evidence to suggest any of that.

    CO2 forcings (net, gross, cleaned, whatever) have nothing to do with what I’m doing here.

    Best, VS

    PS. Please read and reply to the e-mail I just sent you. I’m very sincere.

  1401. sod Says:

    I guess that Bart is talking about some tests that assume the trend would be linear (and then dismiss a linear trend)
    (which implicitly uses pure CO2 forcings, which should cause a linear trend).

  1402. Bart Says:

    VS,

    Didn’t you use the CO2 forcings (and/or a linear trend/drift) in unit root testing?

    The net forcing is by far the best to use (preferably also accounting for known modes of internal variability such as ENSO).

    I’m much better at understanding plain English than stats lingo, so I’m going to try to ask straightforward questions in the hope of getting a straightforward answer, similar to how Lucia entered the discussion.

  1403. VS Says:

    “Didn’t you use the CO2 forcings (and/or a linear trend/drift) in unit root testing?”

    I have no idea where you got that from (i.e. me using forcings). The ‘variables’ I used are

    (1) time
    (2) lagged realizations of the temperature record

    I.e. I analyzed the trend behavior of the series.

    Tamino used forcings for his spurious regression (by abusing the CADF) with which he tried to ‘debunk’ me.

    VS

  1404. Alex Heyworth Says:

    Bart,

    I think you have still not understood even the first thing VS did. What he has done is to look at the available global average temperature data. He established that this has a unit root. As noted by him in a number of posts, this finding is confirmed in several published papers. Given this fact, the appropriate trend analysis tools (according to econometricians) are stochastic, not deterministic. A stochastic trend analysis based on data up to 1935 finds that subsequent temperature data remains consistent with the pre-1935 stochastic trend.

    Note there is no mention here of forcings, CO2 or otherwise. (Of course, some might consider that an implication of this finding is that there is nothing to explain. However, as you have pointed out, the expectation from a physics perspective is that CO2 does have an influence on temperature, so it seems worthwhile pursuing the stochastic analysis further.)

    The next step in this process is to look at the temperature series and possible influences on it, using cointegration. VS has not done this. B&R did do this and concluded that the rate of change of CO2, rather than its absolute level, is correlated with temperature. This finding is obviously open to dispute. Maybe VS will disagree with it.
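    For concreteness, the unit root test in the first step looks like this in statsmodels; the random-walk placeholder stands in for the GISS annual anomalies:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        y = np.random.default_rng(3).normal(size=130).cumsum()   # placeholder for the anomaly series

        # regression='ct' puts a constant and a linear trend under the stationary alternative
        stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, regression='ct', autolag='AIC')
        print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")
        print("Cannot reject a unit root" if pvalue > 0.05 else "Unit root rejected")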

  1405. Alex Heyworth Says:

    PS, my (very humble) opinion is that Koutsoyiannis is probably right, that all processes in the real world contain both deterministic and stochastic elements, with the time frame of the analysis determining which is dominant.

  1406. Kweenie Says:

    “VS and his followers just really really want to get the results that they desire. at the end, statistics and numbers are only a tool to achieve this, and don t matter if they contradict the preferred outcome.”

    “VS and his followers”….
    Sod sounds like the 21st century Father Firenzuola: “You should neither hold, defend, nor teach that [the Copernican, read skeptic] opinion in any way whatsoever.”

  1407. John Says:

    Bart

    “Energy balance considerations”: try to think of it as the rate of cooling having changed, rather than “the earth is warming”; warming implies more heat applied. Due to the differing properties of the materials involved, there is no one time constant against which the mythical energy balance can be calculated, although I personally, and without the use of complicated maths, can confidently predict it’s a long time. If it were nearly instantaneous, only the likes of SOD would be around.

  1408. Bart Says:

    VS,

    Tamino showed that when testing for a unit root one should account for the underlying trend/drift. IF there is a genuine trend/drift and you fail to account for it, you could mistakenly conclude that there is a unit root. In the discussion that followed here, I remember you saying that you used a linear trend, CO2 concentration, and CO2 forcing as the underlying trend/drift, and still concluded that there was a unit root. (Sorry, no time to look up the exact set of comments.) Is my recollection wrong?

    Alex,

    Thanks, that’s helpful. There’s indeed a good chance that I didn’t understand everything. Is what I wrote just above correct according to you though? That a test for unit root depends on what the underlying trend/drift is (assumed to be)?

    I am not very impressed with the “stochastic trend analysis based on data up to 1935 finds that subsequent temperature data remains consistent with the pre-1935 stochastic trend” for the following reasons:
    – The alternative hypothesis (an extrapolation of the linear trend up to 1935) is one that nobody expects to be true anyway.
    – This stochastic trend has no predictive power to speak of: Any remotely plausible value (at least up to now) would have been consistent with its specification, i.e. in essence it means ‘anything goes’ (even though it’s not specified as such). It’s like saying “my weight next year will be between 50 and 100 kg”. Yeah, indeed it will. Not sure that proves it to be stochastic though. Given that I eat more than my body needs, will my weight more likely be smaller or larger than what it currently is? That’s the question.

    Your later comment (re Koutsoyiannis) rings true indeed: There will always be unexplained variability in nature that could make it hard to decipher an underlying trend or process. Only at a large enough spatial or temporal scale will the process become apparent above the noise.

  1409. Tim Curtin Says:

    Bart, I hope I can be allowed to respond to sod when he said quoting me (April 6, 2010 at 12:12)
    “Bart: Could you clarify in plain English why according to you the Laws of Physics re CO2 and AGW hold in some places, but not in others, e.g. New York, Indianapolis, Des Moines, Salt Lake City, Houston, Sacramento, LA, San Francisco, Fresno, San Juan (PR), Hilo (Hawaii), Barrow, Tasmania, Queensland et many al.?”

    sod said: “the sun is rising, and should have the same effect on the thermometer in front of my house, and the one in the back. it doesn t.

    with Tim Curtin logic, we have just dismantled the sun as a global factor affecting temperature…”

    That is cruel, after all my efforts to include solar surface radiation (which is quite different from TOA solar irradiance), along with changes in both external factors (solar, water vapour, RH, windspeed) and internal ones (CO2, total net anthro forcing), in my regressions at all the places I listed, and then some.

    Sod, do address your incisive comments to NOAA/NCDC/NREL, the sources of ALL the data I use. However as the reviled Anthony Watts has shown, the NOAA is not averse to siting its temperature measurement sets at odd places like those you describe. Is that my fault?

    Actually the NREL data set I have used is as “pure” as can be. That is probably why it has been discontinued – Gavin Schmidt’s reach is long.

  1410. Alex Says:

    sod,

    You are right that looking at temperature with univariate models (which is what VS and BV do) isn’t that informative. You could use such models to test whether temperature behaves significantly different in two different time periods, but it will not tell you anything about what caused this difference.

    What is indeed more interesting is to try to estimate which part of the variation in temperature has a natural cause and which part has an unnatural cause. This can be done with multivariate models. The advantage of these models is that you can analyse the variation in temperature against a whole set of ‘forcings’. So you are not limited to only estimating the correlation between CO2 and temperature; at the same time you can also estimate the effect of other ‘forcings’.

    However, this is exactly where the discussion of a unit root becomes very important. If the data contain a unit root, then you cannot apply OLS in these multivariate models; this will lead to a problem known as ‘spurious regressions’. You will need to estimate the correlations with cointegration.

    I hope you now understand why it is so important to test for a unit root. The presence of a unit root simply changes the method we need to use to estimate our model. It will tell you nothing about the underlying theory being tested, unless this theory explicitly requires the presence or absence of a unit root. Only with very simple ‘underlying processes’ will it be possible to derive whether the autocovariance polynomial has a unit root. Most of the time the phenomena being studied are so complicated that it is impossible to derive the autocovariance polynomial analytically, and subsequently impossible to derive whether it has a unit root. However, it will still be possible to test whether there is one. I hope this explains why it is not very meaningful to try to explain in a theoretical way why there should or shouldn’t be a unit root.
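    The spurious-regression problem mentioned above is easy to demonstrate by simulation: regress two independent random walks on each other, and the naive OLS t-test calls the slope ‘significant’ far more often than the nominal 5%. A sketch, assuming numpy and statsmodels:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        rejections = 0
        for _ in range(1000):
            x = rng.normal(size=100).cumsum()    # independent random walk 1
            y = rng.normal(size=100).cumsum()    # independent random walk 2, unrelated by construction
            res = sm.OLS(y, sm.add_constant(x)).fit()
            rejections += res.pvalues[1] < 0.05

        print(f"Share of 'significant' slopes: {rejections / 1000:.2f}")   # typically far above 0.05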

    Kr,

    Alex

  1411. VS Says:

    Bart,

    “Tamino showed that testing for a unit root one should account for the underlying trend/drift.”

    Tamino didn’t show anything. If you feel he actually did ‘show’ something, why don’t you cite his arguments directly?

    I reported a couple of times that the CADF only serves to increase the statistical power of the unit root test (i.e. it has nothing to do with Tamino’s creative ‘interpretation’ of what it does, read the original article).

    What Tamino wrote down is complete nonsense, and I replied elaborately to all his ‘claims’.

    Please stop referring to him.

    VS

  1412. VS Says:

    Alex!!!! :)

  1413. TT Says:

    Here’s another attempt to sum up: The tests for the presence of a unit root in a temperature time series (which VS has performed and no one has successfully contradicted) are for a formal, mathematical property of a data set; it has *nothing* to do with CO2 or any other forcing. The conclusion that a unit root is present in a temperature time series does *not* entail that there are no CO2 or other forcings. That a trend is “stochastic” (or “random”) does not mean it is uncaused. The presence of a unit root simply tells you what kinds of analysis are *statistically necessary* for determining the relationship between data about temperature movements and data about any putative causes of those movements. Failure to follow the statistically necessary methods leads to an invalid, nonscientific conclusion. That doesn’t mean the hypothesis behind the conclusion (e.g. CO2 causes global warming) is thereby disproved. It just means that proof for that conclusion is lacking insofar as it relies on an incorrect method.

  1414. Alex Says:

    TT,

    That’s a very good summary! Maybe I could add that in finite samples not only a unit root, but also roots near unity have the same detrimental effects on OLS (this is also pointed out by BV in their paper). So if we get the impression that there is a unit root or a root close to unity, then we should use cointegration.

    Alex

  1415. Bart Says:

    I note that wiki gives me a different answer than VS did on the question of how OLS is affected by non-independence of the variability:

    “Thus far the data have been assumed to consist of the trend plus noise, with the noise at each data point being independent and identically-distributed random variables and to have a normal distribution. Real data (for example climate data) may not fulfill these criteria. This is important, as it makes an enormous difference to the ease with which the statistics can be analyzed so as to extract maximum information from the data-series. The use of least-squares estimation of the trend is valid, but might be improved. Statistical inferences (tests for the presence of trend, confidence intervals for the trend, etc.) are invalid unless departures from the standard assumptions are properly accounted for.”

    The last two sentences to me mean that indeed, the trend estimate (via OLS) is not strongly affected, but the errors of the trend are (they are underestimated if the errors are correlated). That makes sense to me: The central trend estimate cannot be much different, though the significance level is affected.

    VS, do you take issue with this wikipedia description?
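    (For concreteness, a small sketch of the distinction I read in that passage, under the assumption that the noise is AR(1); the series below is a synthetic placeholder, not the actual 1975-2009 anomalies. The OLS slope is identical either way, while the autocorrelation-robust (Newey-West) standard error comes out wider than the naive one.)

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        years = np.arange(1975, 2010, dtype=float)
        e = np.zeros(len(years))                       # assumed AR(1) noise structure
        for t in range(1, len(e)):
            e[t] = 0.5 * e[t-1] + rng.normal(scale=0.08)
        anom = 0.017 * (years - 1975) + e              # placeholder series with a 0.17 deg/decade slope

        X = sm.add_constant(years)
        naive  = sm.OLS(anom, X).fit()
        robust = sm.OLS(anom, X).fit(cov_type='HAC', cov_kwds={'maxlags': 3})

        print(f"slope = {naive.params[1]:.4f} deg/yr (same in both fits)")
        print(f"naive s.e. = {naive.bse[1]:.4f}, HAC s.e. = {robust.bse[1]:.4f}")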

  1416. John Whitman Says:

    “Alex Says: April 6, 2010 at 13:26 – Your summary”

    Alex,

    Great summary. I can imagine that it helps keep VS’ energy levels up. Thanks.

    Amazing, I can now actually follow what you said with some degree of understanding after reading Verbeek’s 2nd edition ‘A Guide to Modern Econometrics’.

    John

  1417. Alex Heyworth Says:

    Bart,

    You wrote

    Thanks, that’s helpful. There’s indeed a good chance that I didn’t understand everything. Is what I wrote just above correct according to you though? That a test for unit root depends on what the underlying trend/drift is (assumed to be)?

    My answer: I do not know. However, I am more inclined to take the published findings of a number of other scientists (that temp is I(1)) over the unpublished views of Tamino.

    You also wrote

    I am not very impressed with the “stochastic trend analysis based on data up to 1935 finds that subsequent temperature data remains consistent with the pre-1935 stochastic trend etc”

    Neither am I. Obviously it is lacking predictive power. However, the OLS alternative gives false reassurance.

    The more recent comments by Alex (not me) and TT are, I hope, some further clarification on what VS has and hasn’t done, and why.

  1418. AndreasW Says:

    VS, Alex, TT

    I think most people (here) understand the unit root business by now. If they don’t, they never will, or they don’t want to understand.

    Please go on with the analysis! There are loads of people who can’t wait to dig in to the cointegration part. I’m all ears!!

  1419. VS Says:

    Bart, no.. that entry says precisely the same thing I’ve been saying for over a month here:

    “Statistical inferences (tests for the presence of trend, confidence intervals for the trend, etc.) are invalid unless departures from the standard assumptions are properly accounted for”

    Trend-stationarity is one of those ‘standard’ assumptions.

    VS

  1420. VS Says:

    I side with Alex, that’s a good summary TT :)

  1421. John Says:

    I wonder if it’s the unpredictable nature of the data that makes it difficult to predict, or could it be something else?

  1422. Bart Verheggen Says:

    VS,

    So you agree that the central trend estimate (via OLS) is not strongly affected, but the errors of the trend are (they are underestimated if the errors are correlated)?

    Alex,

    I don’t see OLS as an alternative to the stochastic model at all; quite the opposite: I stated that nobody claims that the temps can just be extrapolated into the future as if they linearly increase (at least not for very long into the future).

    The only reason I put an OLS trend in the figure in the head post is to visualize that the past 10-12 years are not anomalous compared to the increase of the 25 years before, ie that there is no reason to believe that the warming has stopped or reversed.

  1423. John Whitman Says:

    “AndreasW Says: April 6, 2010 at 14:07

    I think most people (here) understand the unit root business by now. If they don’t, they never will, or they don’t want to understand.

    Please go on with the analysis! There are loads of people who can’t wait to dig in to the cointegration part. I’m all ears!!”

    AndreasW & VS,

    It was actually good that there has been a significant pause to reiterate (over and over) the unit root discussion for the GISS surface temp time series. For one thing, it has allowed time for more education on the topic for many of us. Also, more importantly, it has allowed time for many more statistically informed commenters, like yourselves, to gather here for the next step.

    Sod, Bart and others, thank you for helping to create the beneficial delay. I personally appreciated it.

    John

  1424. VS Says:

    Bart,

    Which ‘trend estimate’ are you talking about precisely? What underlying DGP?

    Is there a constant trend you assume? Maybe the one with 1 structural break? Or one with 2 structural breaks? What’s your alternative? Structurally broken stochastic trends? Regular constant stochastic trends? Higher order polynomial trends? Other types of trends?

    All of the inferences (i.e. conclusions based on statistical testing) on where (or whether) these breaks exist, and on what the ‘drift’/’trend’ looks like, that assume trend-stationarity, are invalid in the presence of a unit root. In that case (i.e. unit root), all the statistical analysis has to be done over.

    The OLS estimator is not BLUE when the first Gauss-Markov assumption, a correct specification, is violated (for the 20th time).

    VS

    PS. A stochastic trend is not more of a ‘stochastic model’ than the specification assuming trend-stationarity.

    I defined it/explained it plenty of times in this thread.

    Quit ‘loosely interpreting’ the definitions used and taking it from there. All the concepts I describe in this thread are formally defined on many different pages on the internet.

    PPS. Your statistical inference (in this blog entry) on whether the trend is ‘anomalous’ is invalid in the presence of a unit root. Your conclusion on the evidence on ‘reversal’ etc. of the ‘trend’, is invalid too.

    That was the main point I made in my very first post.

  1425. Bart Verheggen Says:

    VS,

    I calculated an OLS trend for global avg temp from 1975 to 2009 at 0.17 (+/- 0.03) deg/decade.

    How I interpret the wiki article (with which you said you agree) is that the central estimate of the trend (0.17) is valid (wiki: “The use of least-squares estimation of the trend is valid”), but that the confidence interval for the trend (i.e. the +/- 0.03) is not valid (it’s likely to be wider).

    Do you agree?

  1426. VS Says:

    Bart,

    I don’t agree with your interpretation of that entry.

    If (if!) we knew for sure that the stochastic trend with a significant drift started in the 70’s, and continued up to this date, your analysis might (might!) have some merit. Statistical inference however, would still be invalid (i.e. on the significance of the estimate).

    However, this is an untested assumption you made right there. I see absolutely no evidence to support it. Hence my first charge: that’s not statistics.

    Again, we are modeling a DGP, not picking beginning and end points and fitting arbitrary ‘trends’. If you want to do the latter, don’t call it statistics.

    VS

  1427. John Says:

    Bart

    As things stand, I think VS is saying nothing can be inferred from the temp data alone. If the correct tools are employed (cointegration?) along with the forcings, and if that reveals a correlation, and if any correlation is meaningful, and if the resolution allows it, then it may be possible to infer something; and if there is sufficient information revealed, it may be possible to make meaningful forecasts or derive information about CO2 etc.

  1428. Bart Verheggen Says:

    VS,

    I’m not giving it that name. For the purpose of my post, it was just a visualization tool. And I agree with what you say here: “Statistical inference however, would still be invalid (i.e. on the significance of the estimate).” That is also what the wiki article says.

    But there are good reasons to expect the temp trend from the mid-seventies to now to have increased approximately linearly: The climate forcing has increased approximately linearly (apart from the large but short-lived downward swings from volcanoes).

  1429. TT Says:

    Bart,

    Just to add my note of thanks for hosting and moderating the most substantive, detailed discussion I’ve seen in the climate blogosphere. It’s been a great learning experience, and I look forward to seeing where it all leads.

    I expect that eventually we’ll see a downward “trend”, but after almost 1,500 comments who knows? Maybe it will just keep going up! :) Maybe someone should run a unit root test on the rate of comments-per-day over the last month? :)

  1430. JGK Says:

    I sense the time is nigh, and I am eager to see how VS explains the proper testing for a deterministic trend when there is evidence of a unit root.

  1431. VS Says:

    Hi JGK,

    It’s very simple actually, and I already did it.

    If the first difference series is stationary (i.e. the level series has only one unit root), you simply perform regular significance tests on the constant in the first difference series (after you have established, via diagnostics, that it is well-behaved, which I did).

    In my ARIMA(3,1,0) specification, the constant came out insignificant. I also tested for structural breaks, and found none. Furthermore, I tested the specification for unaccounted-for non-linearities: none. Finally, I tested for higher order (polynomial) trends: none.
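    In statsmodels terms, that test is a one-liner once the series is differenced (the placeholder below stands in for the level series):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        y = np.random.default_rng(6).normal(size=130).cumsum()   # placeholder for the level series

        # AR(3) with a constant on the first differences = ARIMA(3,1,0) with drift on the levels;
        # the p-value on 'const' is the significance test on the drift term.
        res = ARIMA(np.diff(y), order=(3, 0, 0), trend='c').fit()
        print(res.summary().tables[1])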

    I have to run now, so I don’t have time to find the links to the specific posts, but it’s all in the thread.

    Best, VS

  1432. GDY Says:

    John Whitman – I am most impressed by your initiative!! Congratulations and thank you!!

    VS – wanted to affirm what John wrote – many of us are still here, ready to move on to the covariate analysis. Thanks for your continued efforts here to introduce formal statistical rigor. Well done. (As a personal note, I am contemplating adding a Statistical Degree to the Pure Maths I am pursuing currently, largely because of this thread and your efforts!).

    Alex, Alex Heyworth, TT and others – thank you as well for your dispassionate attempts to deepen understanding. much appreciated.

  1433. Alex Says:

    Bart,

    The paragraph on wiki is not wrong, but I do think it’s not written very carefully. It says that OLS is still valid even if the error term is not white noise. This statement is only true if you (implicitly) assume that the error terms are independent of the regressors. In your case that would mean that the error term itself should not depend on time.

    Model misspecification is notorious for causing the error terms and the regressors to be related, which causes the OLS estimates to be biased. To illustrate this, let’s assume that indeed there is a unit root and that the true DGP is:

    y(t) = C + y(t-1) + B*trend + U,   with U white noise

    Now you estimate:

    y(t) = C + B*trend + E

    E here contains both U and y(t-1), so it isn’t white noise. Moreover, notice that since y(t-1) depends on the trend, so will E; hence the assumption of independence between the error term and the regressor is violated. Also notice that if we had accounted for the unit root by using the first differences, we would have estimated the following model:

    d(y(t)) = y(t) - y(t-1) = C + B*trend + U

    which is perfectly ok, because U and the trend are independent of each other.

    There are several ways you can check whether your model is misspecified. A very simple one would be to test for autocorrelation, because autocorrelation *could* indicate misspecification. To understand why, look again at the second equation with error term E = y(t-1) + U. This error term will be autocorrelated, because y(t-1) was erroneously left out of the model.

    If there is a unit root in the temperature, then temperature has a specific autocovariance, which you put in the error term in your model. But, just like in the example above, this will make the error term dependent on the trend.

    The way to proceed is by modelling the underlying DGP (approximately) correctly and then test whether there is a linear deterministic trend.
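    This example can be simulated directly; a sketch with arbitrary illustrative parameter values, checking the residuals of the misspecified fit via the Durbin-Watson statistic:

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson

        rng = np.random.default_rng(7)
        n, C, B = 200, 0.01, 0.0005
        t = np.arange(n, dtype=float)
        y = np.zeros(n)
        for i in range(1, n):
            y[i] = C + y[i-1] + B * t[i] + rng.normal(scale=0.1)   # true DGP: unit root plus trend

        X = sm.add_constant(t)
        levels = sm.OLS(y, X).fit()                # misspecified: trend only, y(t-1) omitted
        diffs  = sm.OLS(np.diff(y), X[1:]).fit()   # correct: trend fitted on first differences

        print(f"DW, levels residuals: {durbin_watson(levels.resid):.2f}   (near 0: strong autocorrelation)")
        print(f"DW, diffs residuals:  {durbin_watson(diffs.resid):.2f}   (near 2: white noise)")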

    Kr,

    Alex

  1434. DLM Says:

    Bart says: “And I agree with what you say here: “Statistical inference however, would still be invalid (i.e. on the significance of the estimate).” That is also what the wiki article sais.”

    That would have been a good place to stop, and we could have moved on.

    Bart says: “But there are good reasons to expect the temp trend from the mid seventies to now to have increased approximately linear: The climate forcing has increased approximately linearly (apart from the large but shortlived downward swings from volcanoes).”

    But according to the Nobel laureate-famous climate scientist, Dr. Phil, there has been no statistically significant warming for the past 15 years. Please explain that, and the 1940-1975 travesty, using all the forcings that you can think of.

    VS,

    I asked: 1. Why would B&V explicitly avoid discussion of the unit root?

    Your reply: (1) I would say they wanted to avoid the discussion we’re having right now, but that’s just speculation. Breusch knows very well what a unit root in the record implies. The whole paper is drenched in ‘hints’. Take another look at Willem Kernkamp’s post here.

    Your speculation and a re-reading of Willem’s post don’t really answer the question. You said that Breusch knows more about this stuff than you do. And this seems to indicate that he does not think the unit root issue is as important as you do: “The question we are trying to answer though is not about a unit root in the temperature data, it is about a tendency of the data to drift upwards.” I still find your take on this credible and extremely interesting. This B&V issue may be a red herring, or a little red flag :) Why don’t you contact Dr. Breusch and try to interest him in this discussion, or at least get his take on it?

    Anyway, the current state of this discussion has been very well summed up by several insightful posters. Isn’t it now time to leave the willfully stubborn stragglers behind and get on with the next phase?

  1435. Igor Samoylenko Says:

    DeWitt quoted Breusch and Vahid (2008):

    Most of the available unit root tests consider the null hypothesis of a unit root against a stationary or a trend stationary alternative. Given our discussions above about the possibility of observational similarity of a unit root process and a process with a deterministic trend in finite samples, it is not surprising to know that the finite sample properties of unit root tests are poor. The performance of these tests depend crucially on the type of trend and the period of cycles in the data. Stock (1994) emphasises the importance of properly specifying the deterministic trends before proceeding with unit root tests, and advises that “this is an area which one should bring economic theory to bear to the maximum extent possible.” In our context, the responsibility falls on the shoulder of climate theory rather than economic theory, an area that we know nothing about.

    In his comment, DeWitt concluded:

    I read that to say that unit root tests cannot rule out the presence of a deterministic trend in a time series a priori. Ruling out a linear trend in the temperature series is meaningless because no one actually believes the deterministic trend is linear. The use of linear trends is descriptive, not prescriptive.

    This is a very important point, I think.

    Here is a comment by John Cochrane published in NBER Macroeconomics Annual in 1991:

    Many macroeconomists now start papers whose substantive interest is elsewhere with tables of unit root and cointegration tests. These tests are used to determine the specification (order of differencing, which ratios are stationary, nature of deterministic trends, etc.) and relevant asymptotic distribution theory for subsequent estimates and tests.

    The problem with this procedure is that, in finite samples, unit roots and stationary processes cannot be distinguished. For any unit root process, there are “arbitrarily close” stationary processes, and vice versa. Therefore, the search for tests that will sharply distinguish the two classes in finite samples is hopeless.

    Campbell and Perron discuss this point under the title “near-observational equivalence,” and I will respond in a second. However, their paper implies a much more severe version of the same problem, namely the possibility of deterministic trends.

    Here’s the problem. Low-frequency movement can be generated by unit roots (random walk components) or it can be generated by deterministic trends, including linear trends, “breaking trends,” shifts in means, sine waves, polynomials, etc. Unit root tests are based on measurements of low-frequency movement in a time series, so they are easily fooled by nonlinear trends. Therefore, Campbell and Perron’s repeated theme that “the proper handling of deterministic trends is a vital prerequisite for dealing with unit roots” is correct and sensible advice.

    Cochrane’s conclusion is very clear:

    3. Summary
    The central problem driving all the doubts I have expressed is that the pure statement that a series has a unit root (or that two series are cointegrated) is vacuous in a finite sample. Campbell and Perron (implicitly) and Sims (1989) emphasize the fact that unit roots are indistinguishable from nonlinear trends.

    (emphasis is mine)

    Based on this, as far as I can see, if tests indicate a unit root, it means one of the following two things:

    1) Either the DGP for the global temperature time series contains a unit root, such as your ARIMA(3,1,0) model or B & V’s ARIMA(0,1,2) (or generally ARIMA(p,1,q))

    OR

    2) There is a non-linear, low-frequency deterministic trend in the time series.

    Are you actually claiming to rule out 2)? On what basis? I have not seen you even try to properly specify the deterministic trends before proceeding with your unit root tests.
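    Cochrane’s point is easy to illustrate by simulation. A sketch (parameter values are arbitrary assumptions): a purely deterministic “breaking trend” plus white noise, which contains no unit root by construction, can frequently fail a standard ADF test.

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(8)
        n = 130
        t = np.arange(n)
        trend = np.where(t < 90, 0.0, 0.015 * (t - 90))   # flat, then rising: a broken trend

        fails = 0
        for _ in range(500):
            y = trend + rng.normal(scale=0.1, size=n)     # stationary noise around the broken trend
            fails += adfuller(y, regression='ct', autolag='AIC')[1] > 0.05

        print(f"Share of trials failing to reject a unit root: {fails / 500:.2f}")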

  1436. Igor Samoylenko Says:

    Just realised that it is not clear in my post above who I am addressing. It is VS of course.

  1437. DLM Says:

    Very interesting red flag Igor. I guess the discussion will not move on for a while yet.

  1438. Shub Niggurath Says:

    Moving on:

    Bart:
    Do you think/expect rising concentrations of CO2 to move the temperature anomaly in the coming decades, to enter a phase where it will display non-stochastic trends thereby allowing simpler OLS estimates to show correlations? Say in the next 150-200 years with rising CO2 resulting in actual doubling…

    In other words, make temperatures less dynamical?

    I also did not understand the casual brushing away of Tim’s suggestions. Please help here – if the heat trapped is so much that it is causing WG2 effects (ice, glaciers, floods etc etc), it *should* have some locally demonstrable effects somewhere. The gridded anomaly itself is a derivative of local temperatures.

  1439. manacker Says:

    phinniethewoo

    Regarding your Nobel dissertation April 4, 15:18

    The drunk’s walk from lantern to lantern (or earth’s temp) may be non-stationary, and it is true that “his future whereabouts are determined mainly by endogenous factors: his present condition”, but (unlike earth’s temp) the walk frequently ends up in the ditch, following the law of gravity.

    Max

  1440. mpaul Says:

    TT wrote: “That a trend is “stochastic” (or “random”) does not mean it is uncaused.”

    Stochastic and Random are not the same.

    Mandelbrot argues that stochastic behavior often arises in systems where large numbers of input variables affect the output and where small perturbations in some of the input variables can lead to large changes in the output. So when many of these hypersensitive input variables are perturbed simultaneously (and out of phase), the combination results in a chaotic output.

    In such systems, infinitesimal errors in assumptions about input variables throw deterministic models completely out of whack. This is why deterministic modeling is often poorly prescribed for many systems. And it’s why I think attempts at creating deterministic climate models are a fool’s errand.

  1441. Willem Kernkamp Says:

    In reply to:
    Igor Samoylenko Says:
    April 6, 2010 at 18:15

    “DeWitt quoted Breusch and Vahid (2008):”

    Igor,

    It appears to me that your reasoning is backwards. With a unit root test there is no claim about the exact process that generates the data. Instead, the claim is more modest. Namely, when there is complete persistence (as with a pure unit root) or high persistence, the probability intervals of Ordinary Least Squares (OLS) are compromised. So the introduction of some non-linear deterministic process does not help, because it has the same persistent behavior. It is that behavior of the process that causes the breakdown in the statistics of OLS.

    Separately, I am confused by the Cochrane statement where he lumps unit-root and cointegration together:

    “Cochrane’s conclusion is very clear:

    3. Summary
    The central problem driving all the doubts I have expressed is that the pure statement that a series has a unit root (or that two series are cointegrated) is vacuous in a finite sample. Campbell and Perron (implicitly) and Sims (1989) emphasize the fact that unit roots are indistinguishable from nonlinear trends. ”

    The reason for my confusion is that the effect of a unit root is opposite from the effect of two series being cointegrated. Namely, a unit root widens confidence intervals, but for cointegrated series, the situation can be recovered by cancellation of the stochastic effect.

    Will

  1442. dougie Says:

    OK, I’m stupid.
    Straw man – what does it mean in science and where does it come from?
    I hear it all the time in web climate comments. The Wizard of Oz maybe?

  1443. phinniethewoo Says:

    max

    That’s right: the random walk of the drunk changes its expected value (which is initially (0,0)) with every step he takes.

    This has as a result that at t=0 there is a massive chance he will end up back where he started. But the chance practically never materialises, as you can easily check with some trials. DeWitt’s walk (the picture he showed), for example, ended in the bushes, and that’s just 100 steps.
    The longer the walks one considers, the higher the chance of coming back to the starting point, but these walks never seem to come true.
    So non-stationarity is strange business.
    And I do not know if pure random walks are the weirdest.. this random walk is pretty straightforward; it is a level playing field. Imagine the drunk lived in the mountains or close to a canal: many steps would entail a long, implausible walk back.
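    (A quick simulation of the drunkard’s walk, for anyone curious; the four-direction lattice step rule is an assumed illustration:)

        import numpy as np

        rng = np.random.default_rng(9)
        steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])   # the four lattice directions

        for n in (100, 1000, 10000):
            returned = 0
            for _ in range(1000):
                path = steps[rng.integers(0, 4, size=n)].cumsum(axis=0)
                returned += bool((path == 0).all(axis=1).any())   # back at the lantern at some step?
            print(f"{n:6d} steps: share of walks that returned = {returned / 1000:.2f}")

    The share grows with the horizon, but only slowly, which is the strange business referred to above.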

    Who says, btw, that earth won’t end up with hypothermia as well?
    It is certainly not the predictive power of the alarmists that will tell us.
    Scientific rigour takes second fiddle to cheap bashing, dodging the argument, and slogans like “mweh mweh mweh physics mweh mweh mweh forcing mweh mweh.. have you not read our science? here’s a report from Jones and tamponi”.

  1444. phinniethewoo Says:

    JM Keynes, of all the geronimoes in the pseudoprogressive camp, said that if we know nothing about the correctness of a proposition, then by virtue of the “principle of indifference” we should endow it with a probability of 0.5.

  1445. phinniethewoo Says:

    Bart

    First, congratulations on this exciting thread.

    Is there a chance of a redraw of your temperature anomaly graph? In my opinion there is no reason for the corridor lines (OLS?), as the measurements are all done/calculated and available. Surely the thermometer readings were correct? This lies in the past, so what really is the meaning of these lines?

    I ask because it would be as or more interesting to see the variance in the temperature calculation which led to this graph, which should also be an easily calculable given. Earth’s temperature, and so its yearly offset as well, is calculated from the 4000 sites. Some sites are recording in a local climate that is in a warming trend, some in a cooling trend, so there will be a variance, as each site has its specific yearly offset. We only see the calculated mean offset (temperature anomaly) in your graph. What is the variance/spread related to the 4000 sites? It should be lines that jiggle along and follow the mean at some parallel distance.

    I wonder how big that variance is.

  1446. Alex Heyworth Says:

    mpaul Says:
    April 7, 2010 at 00:15

    Stochastic and Random are not the same.

    Mandelbrot argues that stochastic behavior often arises in systems where large numbers of input variables affect the output and where small perturbations in some of the input variables can lead to large changes in the output. So when many of these hypersensitive input variables are perturbed simultaneously (and out of phase), the combination results in a chaotic output.

    In such systems, infinitesimal errors in assumptions about input variables throw deterministic models completely out of whack.

    In the paper I linked to earlier (http://www.itia.ntua.gr/en/docinfo/923/) Koutsoyiannis shows that this conclusion applies even to very simple systems with small numbers of inputs.

    The lesson I draw from this is that it is time for climate science to recognize that deterministic modeling of climate was a scientific cul-de-sac and move on, however painful that may be. If Bart and his generation are unwilling, I’m sure the up and coming climate scientists will take up the challenge.

  1447. Frank Says:

    dougie says: “Straw man – what does it mean in science and where does it come from? I hear it all the time in web climate comments. The Wizard of Oz maybe?”

    A straw man is a logical fallacy used in debate. Basically it involves re-stating a modified version of your opponent’s argument that contains a flaw, and then identifying the flaw so as to leave the impression that you have successfully refuted your opponent’s argument.

  1448. cohenite Says:

    Alex Heyworth, Alex and TT have adequately distinguished what VS has done statistically from the alleged physical explanations for the temperature data; that data has a unit root, and the typical AGW statistical approaches are deficient at representing that. The B&R paper goes a step further and cointegrates CO2 and temperature to show, as Alex said, that only increments of CO2, not totals, determine temperature. This point also illustrates how the physical dimension reapplies to the statistical analysis: if it is the rate of change of CO2 which impacts on temperature, then Beer-Lambert is confirmed, but the notions of equilibrium sensitivity and the pipeline effect of accumulating CO2 are disproved; CO2 has its heating effect only when it is increasing, albeit at the declining rate of Beer-Lambert, and the effect is exhausted when no more increase occurs.

    A point of enquiry: what happens to temperature when CO2 declines?
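    For those wanting to experiment, the simplest two-variable cointegration test (Engle-Granger) is available in statsmodels; the series below are synthetic placeholders sharing a constructed common trend, and note that B&R’s actual analysis uses polynomial cointegration, which this sketch does not implement:

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(10)
        common = rng.normal(size=130).cumsum()                     # shared stochastic trend
        temp = common + rng.normal(scale=0.3, size=130)            # placeholder 'temperature'
        forcing = 0.8 * common + rng.normal(scale=0.3, size=130)   # placeholder 'forcing'

        tstat, pvalue, crit = coint(temp, forcing)
        print(f"Engle-Granger t = {tstat:.2f}, p-value = {pvalue:.3f}")
        print("Cointegrated" if pvalue < 0.05 else "No cointegration found")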

  1449. cohenite Says:

    In addition to the above, and following on from VS’s comments at 14:48 and 16:15: the unit root quality of the temperature data means that a break cannot be there statistically, because there is no deterministic trend to break; however, once the physical forcings are introduced, a break is justified because of the nature of the forcing; see:

    (PDF link: 0907.1650v3.pdf)

    This appears to answer Bart’s concerns; on the one hand AGW is misrepresented by non-unit root statistical analysis but trends can be applied to that temperature data when the nature of the physical forcings are considered.

  1450. Don Jackson Says:

    http://mpra.ub.uni-muenchen.de/9939/

  1451. Tim Curtin Says:

    phinniethewoo asked (April 7, 2010 at 01:27)
    “Is there a chance to produce a redraw of [Bart’s] temperature anomaly graph?…I ask because it would be more / as interesting to see the variance in the temperature calculation which led to this graph, which should also be an easy calculable given. Earth’’s temperature, and so its yearly offset as well, is calculated from the 4000 sites. Some sites are recording in a local climate that is in a warming trend, some in a cooling trend. so there will be a variance as each site has its specific yearly offset . We only see the calculated mean offset (temperature anomaly) in your graph. What is the variance/spread related to the 4000 sites? it should be lines that jiggle along and follow the mean at some parallel distance. i wonder how big that variance is.”.
    To get a flavour, woo, here are the GISS anomalies (on 1950-1980) for 1980-2000 for NYC (0.3805°C), Des Moines (0.5143) and Pt Barrow (0.2717), and for 1980-2009: NYC (0.8028), Des Moines (0.6036) and Barrow (still 0.2717). BTW, NYC and Des Moines are almost on the same latitude and only about 1000 km apart, so by Hansen’s rule you don’t need the Des Moines data, as it is less than 1200 km from NYC with its more convenient anomalies. Here are the anomalies for a West Coast locality, ‘Frisco: 1980-2000 (0.4348) and 1980-2009 (0.5859). And not to forget the Netherlands: Amsterdam’s grid’s anomalies appear to be 0.3837 (1980-2000), close to New York’s, and 0.5832 (1980-2009), much lower than New York’s 0.8028.
    NB: GHGs, being “well-mixed”, are the same everywhere in the NH, and although the annual SH levels show less intra-annual variability, they track the annual NH levels quite closely. This suggests that the GHGs alone cannot explain the quite wide local variations in the anomalies noted here, yet they are deemed quite sufficient by IPCC WG1 ch. 9 to account (with 90% certainty) for most if not all of the global anomaly’s trend, despite its unit root.
    So woo, you’ll have to wonder for a long time, as IPCC climate scientists do not do local data, mainly because the local trends vary wildly and are rarely if ever “parallel” with the mean, which would spoil the picture in Bart’s graphs here. So once again the truth is that correlating GMT anomalies with atmospheric CO2 can only yield spurious results.
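
    If one did have the per-station series to hand, the spread woo asks about would be short work. A minimal sketch; the CSV layout (one column per station) is hypothetical:

        # Mean anomaly and cross-station spread, per year.
        import pandas as pd

        df = pd.read_csv("station_anomalies.csv", index_col="year")  # years x stations
        summary = pd.DataFrame({"mean_anomaly": df.mean(axis=1),
                                "cross_station_sd": df.std(axis=1)})
        print(summary.tail())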

  1452. Bob_FJ Says:

    Dougie wrote in part:

    straw man – what does it mean in science & where does it come from?
    I hear it all the time in web climate comments. The Wizard of Oz, maybe?

    Dougie, I too have wondered about this, but have shrugged and translated it as probably meaning ‘rubbish arguments’, which of course is a matter of perception or belief among the partisans involved.
    Here follows a Wikipedia entry on it, for what it is worth:
    http://en.wikipedia.org/wiki/Straw_man

  1453. cohenite Says:

    Don Jackson; thanks for that paper; I have been informing myself of this particular aspect of this debate here;

    http://landshape.org/enm/cointegration-primer/

    http://landshape.org/enm/polynomial-cointegration-rebuts-agw/#more-3771

    About this aspect of Equilibrium Sensitivity VS had this to say:

    “Beenstock and Reingewertz indeed use a ‘model tweak’ that would allow for these two series to be cointegrated on the next level. But that ‘model tweak’ is not in itself something trivial. Polynomial cointegration has first been described by Yoo (1986), and since then a solid body of literature has been developed on the topic, where contributors include the likes of Johansen (!) and Granger (!). This is the established way to deal with I(2)/I(1) relationships.

    So, the difference between Kaufmann et al (2006) and Beenstock and Reingewertz (2009) is the following:

    **Kaufmann et al (2006) attempt to cointegrate temperature, which is I(1), with the sum of radiative forcings, which is I(2) (equation (3), p. 255). This is plain wrong.

    **Beenstock and Reingewertz (2009) first cointegrate the various greenhouse gases back to an I(1) variable, and then attempt to cointegrate temperature with this I(1) variable. This is the accurate approach.

    They find that solar irradiance is the most important factor determining temperature levels (what a surprise).

    “This shows that the first differences of greenhouse gases are empirically important but not their levels. The most important variable is solar irradiance. Dropping this variable, but retaining the first differences of the greenhouse gas forcings, adversely affects all three cointegration test statistics”

    They then proceed to show what happens if you ignore the different order of integration, like Kaufmann et al (2006) did.

    “Haldrup’s (1994) critical value of the cointegration test statistic when there are three I(2) variables and two I(1) variables is about -4.25. Therefore equation (4) is clearly not polynomially cointegrated, and the conclusions of these studies regarding the effect of rfCO2 on global temperature are incorrect and spurious.”

    Specifically, they argue that the following conclusion, by Kaufmann et al (2006) on p.255, is spurious:

    “The ADF statistic strongly rejects (P < 0.01) the null hypothesis that the residual contains a stochastic trend, regardless of the lag length used in Equation (2) (Table I), which indicates that the variables in (3) cointegrate.”

    Let me stress the most important point here. By incorrectly applying these procedures, Kaufmann et al. (2006) conclude that an increase in CO2 has a permanent effect on temperatures. Beenstock and Reingewertz (2009), by correctly applying the procedure, conclude that it is in fact only temporary.”

    The Lui paper appears to follow the Kaufmann methodology as they conclude:

    "whereas the I(2) trend is driven by
    a linear combination of the three greenhouse gases or
    exclusively by the radiative forcing of carbon dioxide"

    To my mind this is a crucial conclusion: if the equilibrium sensitivity is as they claim, then AGW has some legs, but again I can’t see the physical aspects supporting it.
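
    The integration-order bookkeeping that this whole argument turns on can be sketched mechanically: difference the series until an ADF test rejects a unit root. A minimal sketch, assuming statsmodels; the input file is a placeholder:

        # Apparent order of integration via repeated ADF tests.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        def order_of_integration(x, max_d=2, alpha=0.05):
            """Difference until ADF rejects a unit root; return that d."""
            for d in range(max_d + 1):
                pval = adfuller(np.diff(x, n=d) if d else x, autolag="AIC")[1]
                if pval < alpha:
                    return d
            return max_d  # still looks non-stationary at max_d differences

        series = np.loadtxt("rf_co2.txt")  # hypothetical radiative-forcing series
        print("apparent order of integration: I(%d)" % order_of_integration(series))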

  1454. TT Says:

    mpaul,

    I didn’t mean to imply that stochastic and random are the same (though perhaps I could have been clearer); my point was that when phenomena exhibit either of these properties, this does not exclude the possibility of particular physical causes for the phenomena.

  1455. Bart Says:

    Igor Samoylenko makes the same point that I’ve been making (based on e.g. Tamino’s post, and which in my recollection was initially also acknowledged by VS as important, though he has now challenged it):

    “the importance of properly specifying the deterministic trends before proceeding with unit root tests”

    See also DeWitt Payne’s earlier comment.

    Shub,

    Yet, I expect that “rising concentrations of CO2 will move the temperature anomaly” upwards indeed. How could they not? They have a well established radiative effect without which the earth would be a frozen ball of ice. I don’t expect short term variability to magically disappear or change its nature though.

    Alex Heyworth,

    There are good reasons to model climate as a partly deterministic process: Earth’s climate is governed (a.o.) by the planetary energy balance, which is by its nature deterministic. Internal variability adds a stochastic component on top of that, but I’m not quite ready to throw conservation of energy out of the window, and I’m pretty sure that ‘up and coming climate scientists’ won’t either (unless they’re interested in something other than understanding climate change)

  1456. Bart Verheggen Says:

    Alex,

    That’s a good point (that the OLS trend estimate would be invalid if the noise itself trended up- or downward). However, even with the errors being correlated, I think that they don’t trend up or down. Tamino has done analyses like that: looking at the residuals of the fit and inspecting how they behave. It is indeed not white noise; there is some autocorrelation, but the residuals don’t trend up or down over the multi-decade timescale. Hence the trend estimate is probably not far off (though the error of the trend estimate might be).
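
    A sketch of what that residual check can look like in practice (the file name is a placeholder; assumes statsmodels):

        # Fit an OLS trend, test the residuals for autocorrelation, and see
        # whether the residuals drift within a sub-window. (Over the full
        # window they are orthogonal to the trend by construction.)
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import acorr_ljungbox

        y = np.loadtxt("giss_annual.txt")                   # hypothetical anomaly series
        X = sm.add_constant(np.arange(y.size, dtype=float))
        resid = sm.OLS(y, X).fit().resid

        print(acorr_ljungbox(resid, lags=[1, 5, 10]))       # autocorrelation is likely
        half = resid[resid.size // 2:]
        Xh = sm.add_constant(np.arange(half.size, dtype=float))
        print("residual drift, later half:", sm.OLS(half, Xh).fit().params[1])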

  1457. Alex Heyworth Says:

    Bart,

    not what I meant. I (mostly) agree with you (what do you mean by a.o.?). To me the question is, what are the time frames and spatial distributions for all the components? The deterministic component of energy balance may just set the external bounds within which the internal components can vary.

    IMHO the emphasis on global average surface temperature in recent times, while understandable (if it really does rise 3+K this century, the emphasis will have been justified) distracts from other issues that are just as important for local climate (which is what we all actually experience).

  1458. VS Says:

    Hi Igor,

    That’s a very interesting reference you found there (I really like Cochrane’s work :), although I do have to note that it’s 20 years old :) The field has moved slightly in the meantime.

    I think Willem provided a proper answer above, but let me just add my 2c to it.

    I think you misinterpreted Cochrane’s point. In particular, while you did cite the right paragraphs, you (understandably) overlooked the crucial footnote. When he writes ‘”arbitrarily close” stationary processes’, there is a footnote attached (no. 2), namely:

    “Take a unit root process and change the root to 0.999. That’s a “close” stationary process. Conversely, take a stationary process and add to it a random walk with tiny innovation variance. That’s a “close” unit root process.”

    This is the crux of the matter. Cochrane is discussing the theoretical basis for (definitively) establishing the presence of a unit root. Naturally, this is impossible to do if the AR term is arbitrarily close to 1 and your sample is finite. However, allow me to quote Alex’s post from yesterday:

    “Maybe I could add that in finite samples not only a unit root, but also roots near unity have the same detrimental effects on OLS (this is also pointed out by BV in their paper). So if we get the impression that there is a unit root or a root close to unity, then we should use cointegration.”

    So, we cannot tell the difference, in finite samples, between a unit root and a near unit root, true. However, the analysis should still proceed on the basis of test results. Cochrane mentions this, and so do Breusch and Vahid. This is important.
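
    Anyone can check how blunt the tests are here with a small Monte Carlo. A toy illustration of Cochrane’s footnote, in Python with statsmodels (sample size and seed are arbitrary):

        # In ~130 observations an ADF test can rarely tell rho=0.999 from rho=1.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(1)
        n = 130

        def ar1(rho):
            x = np.zeros(n)
            for t in range(1, n):
                x[t] = rho * x[t - 1] + rng.normal()
            return x

        for rho in (1.0, 0.999):
            rejections = np.mean([adfuller(ar1(rho))[1] < 0.05 for _ in range(200)])
            print(f"rho = {rho}: unit root rejected in {rejections:.0%} of samples")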

    Now, allow me to ask you a question:

    Do you believe that the temperature record indeed resembles Cochrane’s trend-stationary near-unit-root process?

    Hint: Think very carefully before replying ;)

    Also, both you and Bart are hammering on ‘specifying the deterministic trend first’. I really don’t see the methodological point you are making here (and this is not what Cochrane meant).

    So would you, or Bart, be so kind to explain the exact methodological steps one needs to pursue, in your view, in order to analyse a time series?

    Finally, since you are so bent on the deterministic trend, here’s some output. I don’t want to sound ‘smart’, but I would really appreciate it if you would read the output, unlike last time, when I ran Monte Carlo simulations at your request and you simply failed to respond to my results (which vindicated the point of mine that you disputed).

    ————————–

    Polynomial trends, GISS record, 1880-2008

    ————————–

    We start by estimating a trend, with linear and quadratic terms, on the GISS record.

    C, -0.305938, 0.0000
    time, 0.0282
    time^2, 1.39E-05, 0.2165

    And when we clear all the residual autocorrelation (spotted by the BG test)

    C, -0.317797, 0.0000
    time, 0.1821
    time^2, 1.37E-05, 0.4853
    AR(1), 0.481573, 0.0000

    So, no quadratic trend.

    Third degree trends?

    C, -0.289374, 0.0000
    time, 0.001102, 0.7357
    time^2, 5.32E-05, 0.4357
    time^3, -2.36E-07, 0.5593

    Now we clear up the residual autocorrelation, and reestimate

    C, -0.331017, 0.0001
    time, 0.004333, 0.4845
    time^2, -1.33E-05, 0.9151
    time^3, 1.58E-07, 0.8268
    AR(1), 0.483720, 0.0000

    No third order trend.

    Now we try the fourth order trend:

    C, -0.142999, 0.0040
    time, -0.024241, 0.0001
    time^2, 0.001070, 0.0000
    time^3, -1.44E-05, 0.0000
    time^4, 6.40E-08, 0.0000

    That’s a surprisingly nice (over)fit. Look at how nicely it fits our data. Here’s Figure 11 [edit].

    Wait a second! It seems that the last 8 years or so are deviantly cold, as they fall outside of our 95% in-sample forecast!

    Perhaps this is evidence of us slipping into the next Ice-Age!

    :) Sorry, couldn’t resist. I simply wanted to illustrate how silly it is to draw such conclusions on the basis of an arbitrary trend estimate.

    Seriously now, let’s investigate the possibility of misspecification via the Ramsey RESET test for misspecification (i.e. unaccounted for non-linearities).

    Lagged fitted terms raised to the power x, p-values (H0: stable specification)

    x=1, 0.000822 (inference: reject specification)
    x=2, 0.000020 (inference: reject specification)
    x=3, 0.000012 (inference: reject specification)
    x=4, 0.000035 (inference: reject specification)
    ..etc.

    Should we continue onto the fifth order polynomial trend? ;)
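
    For those who want to replicate this kind of exercise, here is a minimal sketch of the fit-and-test loop (assuming statsmodels >= 0.12 for linear_reset; the data file is a placeholder):

        # Polynomial trends of increasing order, each followed by a RESET test.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import linear_reset

        y = np.loadtxt("giss_annual.txt")        # hypothetical GISS record
        t = np.arange(1.0, y.size + 1)

        for order in (1, 2, 3, 4):
            X = sm.add_constant(np.column_stack([t ** k for k in range(1, order + 1)]))
            res = sm.OLS(y, X).fit()
            reset_p = linear_reset(res, power=2, use_f=True).pvalue
            print(f"order {order}: R2 = {res.rsquared:.3f}, RESET p = {reset_p:.4f}")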

    —————————

    Hi DLM

    I speculate that they simply wanted to avoid the whole discussion we are having here. Why that is the case, you ask? I honestly have no idea :)

    —————————

    Hi GDY,

    That’s a (very) big compliment :) Thank you.

    Best, VS

    —————————

    PS. Bart, if you are referring to Tamino’s analysis, copy exactly those of his arguments that you apparently find convincing, as I believe I invalidated most of them in my replies. This way we can see precisely what ‘convinced’ you, and address the (likely) misconception. Thanks :)

    Otherwise, I will simply ignore the reference, and I suggest that everybody else does the same.

  1459. Bart Verheggen Says:

    VS, you wrote

    “I simply wanted to illustrate how silly it is to draw such conclusions on the basis of an arbitrary trend estimate.”

    Exactly. That’s why we use physics to try to understand what’s going on rather than just trying to do curve fitting.

  1460. VS Says:

    Bart, please change ‘Figure 10’ to ‘Figure 11’. Figure 10 is already ‘taken’ :)

    Thanks, VS

  1461. VS Says:

    Bart, I wrote down ‘arbitrary trend’. Don’t you dare ‘strawman’ me :P

    And let’s not go back to the ‘physics’ hand waving, that was very tiring. We are talking now about empirical verification.

    Are you an empiricist, Bart?

  1462. VS Says:

    PS. That particular specification, i.e. Figure 11, was rejected using formal statistical testing procedures. Hence, it is ‘arbitrary’. Again, don’t ‘strawman’ me :)

  1463. Alex Says:

    Bart,

    Notice that I wrote that the error term and the regressor (in this case the trend variable) need to be independent of each other. This requirement is more general than requiring that the error terms don’t drift up- or downward. Now if you assume that the error terms are independent of the trend variable, you are implicitly saying that there is absolutely nothing in the error term that depends on time. What your model places in the error term is everything that determines temperature except for some time trend. Are you sure that every one of these left-out determinants, be it solar radiation or CO2 or any other component, is independent of time? That seems to me like a very strong assumption (but a necessary one if you want OLS to yield unbiased estimates in your model).

    It is hard to do a lot of formal statistical tests on your model, since you tried to model a very short time period (33 observations), which makes it difficult to test for model misspecification. As a matter of fact, the number of observations is so low that I can’t even detect autocorrelation significantly. However, if you look at the correlogram you can see a pattern in which most positive error terms are followed by positive ones, and negative by negative ones. I do find that the fifth autocorrelation is (barely) significant, and if I include it in the model, so estimating:

    y(t) = c + b1*trend + b2*AR(5) + E,  E = white noise

    then all estimates are significant. You should not take this to mean that this model is the right one, but rather as evidence that your model is misspecified, which in turn makes the error terms dependent on the trend variable. This is similar to what I wrote in my previous post, where I left an AR(1) term out of the model (the same argument applies to an AR(5) term).
    Last but not least, *if* there is a unit root, then your model is mathematically incompatible with its presence. Moreover, in that case I can prove analytically that your error terms will be dependent on the time variable.
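
    A sketch of how the model above can be estimated (assuming statsmodels, whose SARIMAX accepts a list of AR lags; the 33-point series is a placeholder):

        # y(t) = c + b1*trend + AR term at lag 5 only.
        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        y = np.loadtxt("recent_anomalies.txt")   # hypothetical, ~33 observations
        trend = np.arange(y.size, dtype=float)

        fit = SARIMAX(y, exog=trend, order=([5], 0, 0), trend="c").fit(disp=False)
        print(fit.summary().tables[1])           # c, trend slope, ar.L5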

    Kr,

    Alex

  1464. Bart Verheggen Says:

    VS,

    Nobody expects the temp record from 1880 to 2008 to be either linear or polynomial. Is it possible to do a similar analysis by using GCM output as the ‘trendline’ to fit to the temp record?

  1465. VS Says:

    Bart,

    The GCM output is an explanatory variable. You need to cointegrate. That’s different.

    Again, we’re talking about trends here.

    Also, since you say that nobody ‘expects’ the temp record to be linear/polynomial, how do you explain this?

    “Eleven of the last twelve years (1995-2006) rank among the twelve warmest years in the instrumental record of global surface temperature (since 1850). The 100-year linear trend (1906-2005) of 0.74 [0.56 to 0.92]°C is larger than the corresponding trend of 0.6 [0.4 to 0.8]°C (1901-2000) given in the TAR (Figure 1.1). The linear warming trend over the 50 years from 1956 to 2005 (0.13 [0.10 to 0.16]°C per decade) is nearly twice that for the 100 years from 1906 to 2005. {WGI 3.2, SPM}”

    Source: First paragraph, first section, IPCC AR4 summary report.

    Are you claiming that everybody in fact agrees that this has nothing to do with science/statistics?

    Best, VS

  1466. VS Says:

    Nice post, Alex :)

  1467. Alex Says:

    Bart,

    You wrote that:
    “Nobody expects the temp record from 1880 to 2008 to be either linear or polynomial”.

    We know that any analytic function can be written as an infinite polynomial (its Taylor expansion). So by saying that the temp record is not a polynomial, we are also saying that it isn’t an analytic function, and so we can exclude exponential and sine functions.

    If this is true, it greatly simplifies the search for the function that does describe the temp record.

    Kr,

    Alex

  1468. phinniethewoo Says:

    For those of us playing catch-up in the discussion between the Minds on this thread:

    VS’s reference to Trygve Haavelmo’s 1944 “The probability approach in econometrics” via stevereads.com is not accessible to all. Here is a copy from Yale.

    Click to access p0004.pdf

    I think the preface alone, where Haavelmo outlines the conceptual framework of his work, is a compelling read.

    As VS pointed out: replace every word with the root “economics” in it with “climate science”, and we are in April 2010.

    TH was only a research associate when he wrote this? Those were the times when one actually had to push the cart to earn one’s bowl of rice in the evening… Nowadays research associates, at double or thrice the salary in real terms, are more the fiefdom of kiddies with a complicated genealogy and an active lifestyle in all kinds of delightful corners of social life. Statistics? Nah, I passed that post in life by wearing my most translucent blouse to the exam.

  1469. Kweenie Says:

    VS, if I understood correctly you’re an econometrician, and this is the first thread in which I’ve seen you discussing climate statistics in the 3 years I’ve been following the AGW folly on various climate blogs, ranging from Real Climate via Open Mind and Climate Audit to this blog (through Bishop Hill). My question, if you don’t mind: is this your first “love affair” with climate science, or have you engaged with its statistics c.q. econometrics before?

    Groetjes
    Kw.

  1470. Polite Indian Says:

    I have sent an email message to Professor Trevor Breusch. He replied that he will try going through the comments and perhaps comment here.

    Let’s hope he does.

  1471. phinniethewoo Says:

    Dear Bart,

    In order to give this discussion some weight on all sides, or at least not lose any gravitas anytime soon, may I suggest you contact the editorial board of The Economist?

    From their latest climate change article, it appears they have just what the doctor ordered for you in store.

    ⇒ They seem to be able to scientifically refute the alleged UHI offset trends 1940-2010 in the temperature recordings, by means of trend(!) studies that refer to wind/no-wind nights observed over a few months or years, moon/no-moon, geographical locations close to the park in the city, etc. etc. All kinds of hyper-exotic stuff that relates the short-lived to the long-lived, and nonetheless scientifically irrefutable, or so they claim (the reports they refer to are closely guarded, unfortunately).

  1472. Bart Verheggen Says:

    VS,

    It sounds like a description of this figure: http://www.ipcc.ch/graphics/ar4-wg1/jpg/faq-3-1-fig-1.jpg

    I take those linear fits to serve a similar purpose as mine in my post: A visualization tool. There is no expectation that the temp actually closely follows those linear trends (unless of course the net forcing exhibited a linear trend over the same time interval).

    Phinniethewoo,

    I’m not sure what you’re getting at?

  1473. phinniethewoo Says:

    Hi Bart,

    well, I think making a graph of an “averaged” measure obtained by cocktail-mixing 4000 other measures in some way, and then displaying only the mean value of this averaged measure, is cherry-picking information.

    Visualization has progressed considerably over the last 50 years, and the averaged measure should really be depicted together with the variation from which it was obtained.

    What I mean is that the graph of earth’s mean temperature anomaly should be accompanied by a splurge of red paint around it depicting that variation. Adobe Illustrator is very good at this: as a first approximation you could change the background colour of the anomaly graph to red? Small steps do it.

    The Economist: they must have a treasure of scientific inside information nobody else has, to justify the confidence expressed in their latest climate change article?

  1474. Igor Samoylenko Says:

    Willem Kernkamp said:

    “With a unit root test there is no claim about the exact process that generates the data. Instead, the claim is more modest. Namely, when there is complete persistence (as with a pure unit root) or high persistence, the probability intervals of Ordinary Least Squares (OLS) are compromised. So the introduction of some non-linear deterministic process does not help, because it has the same persistent behavior. It is that behavior of the process that causes the breakdown in the statistics of OLS.”

    Agreed.

    As for your response, VS: it may be me (and as I said I am neither a statistician nor am I a physicist), or it may be that you are not being particularly clear with your language but I have to admit that I am sometimes baffled by your replies.

    1) You picked on Cochrane’s reference to one problem with unit root tests on finite time series, namely that of near unit root processes that can be mistaken for unit root processes. Why? This is manifestly NOT what I was referring to as I am sure I made clear in my post. I have highlighted in bold what I was in fact referring to.

    Namely:

    “Unit root tests are based on measurements of low-frequency movement in a time series, so they are easily fooled by nonlinear trends”

    “Campbell and Perron (implicitly) and Sims (1989) emphasize the fact that unit roots are indistinguishable from nonlinear trends”

    That was my point.

    I have cited these specifically to address your claims that “unit roots are inconsistent with deterministic trends”. This statement is simply false on its own as far as I can see. Presence of unit root does NOT rule out the existence of low-frequency, non-linear deterministic trends.

    You have performed some tests for specific types of non-linear trends (polynomial up to 4th order etc). Is this really sufficient to rule out non-linear trends in the series? I doubt it but I am not a statistician (call it hand waving if you like).

    You then fitted an ARIMA(3,1,0) model to the data and proceeded to claim that on this basis there is no statistically significant warming trend in the GISS time series. This is spurious in my mind because 1) you have not paid sufficient attention to the fact that the DGP can in fact have a non-linear deterministic trend and 2) you can fit other ARIMA(p, 1, q) models to the data which will show a significant warming trend (as B & V did).

    You insist that your analysis does not contradict that by B & V but as far as I can see this is only partially true. B & V do not contradict your finding of a unit root but they do show that you can fit a different ARIMA model to data and get a statistically significant warming trend. My conclusion from this is that in this case fitting ARIMA models to data is not particularly helpful, since you can get such fundamentally different results (as to whether there is a statistically significant warming trend or not).

    That takes us back to the question of what DO we really know about the real DGP that generates global temperature time series? And does knowing that it may well have unit root help us much in understanding its structure?

    2) VS:

    So would you, or Bart, be so kind to explain the exact methodological steps one needs to pursue, in your view, in order to analyse a time series?

    I think the best way to proceed is to focus on the structure of the real DGP for the temperature series based on our knowledge of physics. Eduardo outlined the likely form of the DGP for the temperature data series, based on physics in his post:

    T(t) = F(ghg(t),sun(t),aerosols(t),volcanoes(t),…) + stationary_process(t)

    It was just an outline and I am sure it can be formally derived (but as I said, I am not a physicist, so I cannot provide it myself) based on our understanding of the physics underlying the climate response to forcings. It is likely to be spread over a mountain of papers and unlikely to be in a neat form of a couple of equations suitable for a blog post (see http://www.realclimate.org/index.php/archives/2008/09/simple-question-simple-answer-no/).

    This model IS based on physics, WILL have a non-linear, low-frequency deterministic trend (+ stationary noise). Does it have unit root? Eduardo asked that question and it will be an interesting question to answer.

    Climate models incorporate this physical model. Eduardo offered to let you run your tests against model output, an offer you did not take up. If you do NOT find a unit root in the model output, then we have a problem. If you do, however, then I am not sure we will have achieved much. We have a physical model with a deterministic trend and stationary noise, capable of generating the GISS temperature time series, and which has a unit root. OR you can fit a few stochastic ARIMA models to the data. Great! And?
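
    For what it is worth, a toy version of that experiment is easy to run (the trend shape and noise are invented; assumes statsmodels):

        # A purely deterministic, slightly nonlinear trend plus white noise,
        # handed to an ADF unit-root test.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(3)
        n = 130
        t = np.linspace(0.0, 1.0, n)
        temp = 1.0 * t ** 2 + rng.normal(0.0, 0.1, n)

        for reg in ("c", "ct"):                   # without / with a *linear* trend term
            pval = adfuller(temp, regression=reg, autolag="AIC")[1]
            print(f"ADF p-value, regression='{reg}':", round(pval, 3))
        # With "c" the slow rise mimics a random walk with drift; even with
        # "ct" the unmodelled curvature can leave the test unable to reject.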

    Alex, what is your take on this?

  1475. VS Says:

    Hi Bart,

    I’m really glad you put up that figure.

    This figure is a textbook example of how to lie with statistics.

    What that figure ‘insinuates’, while backing this ‘insinuation’ with big words like ‘scientific consensus’ and ‘peer-review’, is:

    Look, the data tell us that the warming trend is accelerating

    However, over the past month, the least I hope to have been able to show, is that this type of ‘analysis’ is not statistics, those lines do not represent ‘warming trends’, and that the found implications are simply not the result of a proper (statistically literate) scientific inquiry.

    Let’s assume for the moment that the DGP governing the instrumental record is trend-stationary (i.e. no unit root). Even in that particular case, that figure contradicts itself, as it depicts a total of four different mutually exclusive and overlapping probability laws (to use Haavelmo’s terminology).

    Now, those convinced of my argument (i.e. there is no evidence of trend-stationarity in our temperature record), should understand that my argument implies that none of those implied probability laws is correct, as the procedure used to estimate them, makes an assumption for which we find no evidence in our data, namely: trend-stationarity.

    So, Bart, let’s go back to my very first post, and put that next to your statement:

    “I take those linear fits to serve a similar purpose as mine in my post: A visualization tool.”

    Do you, after our month long discussion then finally agree to:

    (1) Not call these arbitrary straight lines ‘trends’, but least squares fits

    (2) Stop reporting nonsensical confidence intervals for these LS fits, which imply that you are actually engaging in statistical estimation here, instead of ‘visualizing’

    ?

    Best, VS

    PS. Readers who are not quite sure what I mean when I say ‘trend’, should take a look at my explanation here.

    PPS. Igor, *sigh*, I think I have shown enough effort.

    Now I have collected enough evidence to know that it’s completely pointless to discuss with you. You will not even listen to any argument which even ostensibly contradicts your dogma.

    As a matter of fact, nobody had any problems with e.g. Kaufmann’s papers, which all assume the presence of a unit root in the instrumental record (but imply that the AGWH is real and backed up by statistics), until I showed up in this thread and listed a few implications of the I(1) property that every undergraduate student of econometrics who has attended a couple of TSA classes is familiar with.

    This is tiring, especially since answering your questions elaborately (with fact-based evidence) only serves so that you can spit on the reply and ignore its contents (e.g. the exact type of ‘deterministic trend’ Cochrane was referring to… apparently you still haven’t managed to figure that out).

    Also, I really don’t like your tone, given the fact that I was arguably quite forthcoming.

    Maybe Alex and others want to continue discussing with you, but for now, I say: до свидания (goodbye), Igor!

  1476. Shub Niggurath Says:

    Bart: Why do you quote part of my question and address that?

    When I asked this question above:

    “Do you think/expect rising concentrations of CO2 to move the temperature anomaly in the coming decades, to enter a phase where it will display non-stochastic trends…”

    what I was asking was:

    You know there are several authors who believe the rising concentrations of CO2 will take us to a different ‘regime’ (IDK, is that the right word?) where CO2 will be the predominant driver of climate.

    Today, VS’s analysis and other sources suggest that the anomaly, flickering up and down, contains a unit root. I understand this as follows: within the time period of our study, the net influence of the different factors (including CO2, say) that affect climate pulls the anomaly in different directions all the time, making the temperature appear stochastic. This makes the direct inference of causation via simple(r) correlational statistical methods invalid.

    The temperature is still responding to factors that pull it down, thereby contributing to its stochastic nature on climatic timescales. Will the upcoming phase of high CO2 have such an effect on the temperature anomaly that it responds less to these other factors thereby making it appear less stochastic?

    In other words, do you expect VS’s analyses can be invalidated by a different range of concentrations of CO2, sometime in the upcoming future (100-200 years, 500 yrs, 50 yrs etc)?

  1477. Marco Says:

    @Igor (and VS in a way),
    According to MartinM over at Tamino’s blog, GISS ModelE output has a unit root. GISS ModelE is completely and totally deterministic.

  1478. VS Says:

    Hi phinniethewoo,

    Haavelmo was indeed a research associate, but he did have some pretty big guns behind him ;)

    Oh how I envy him for having Schumpeter (!) review his work…

    “The idea of undertaking this study developed during my work as an assistant to Professor Ragnar Frisch at the Oslo Institute of Economics. The reader will recognize many of Frisch’s ideas in the following, and indirectly his influence can be traced in the formulation of problems and the methods of analysis adopted. I am forever grateful for his guiding influence and constant encouragement, for his patient teaching, and for his interest in my work.

    The analysis, as presented here, was worked out in detail during a period of study in the United States, and was first issued in mimeographed form at Harvard in 1941. My most sincere thanks are due to Professor Abraham Wald of Columbia University for numerous suggestions and for help on many points in preparing the manuscript. Upon his unique knowledge of modern statistical theory and mathematics in general I have drawn very heavily. Many of the statistical sections in this study have been formulated, and others have been reformulated, after discussions with him. The reader will find it particularly useful in connection with the present analysis to study a recent article by Professor Wald and Dr. H. B. Mann, “On the Statistical Treatment of Linear Stochastic Difference Equations,” in ECONOMETRICA, Vol. 11, July-October, 1943, pp. 173-220. In that article will be found a more explicit statistical treatment of problems that in the present study have only been mentioned or dealt with in general terms. I should also like to acknowledge my indebtedness to Professor Jacob Marschak, research director of the Cowles Commission, for many stimulating conversations on the subject. I wish further to express my gratitude to Professors Joseph A. Schumpeter and Edwin B. Wilson of Harvard University for reading parts of the original manuscript, and for criticisms which have been utilized in the present formulation. Likewise, I am indebted to Mr. Leonid Hurwicz of the Cowles Commission and to Miss Edith Elbogen of the National Bureau of Economic Research for reading the manuscript and for valuable comments.”

  1479. Bart Says:

    VS,

    I am actually more surprised by your reply to Igor than by your reply to me. He brings up important points; see also Marco’s addition: GCM output of global avg temp (GISS ModelE in this case) contains a unit root (according to MartinM).

    I think I’m less interested than you in what name I give a particular beast. Least squares fit? The word ‘trend’ is in such common usage that for blog purposes I don’t see the problem though. I conceded multiple times that I agree that the confidence interval of an OLS fit is probably underestimated when the errors are autocorrelated.

  1480. Igor Samoylenko Says:

    VS,

    Fair enough. As I said, I am not a statistician and it is possible that I am completely missing the point here. I have articulated my questions the best I could based on my own understanding of the issues and my own reading of the papers I cited. I do not think personally you have adequately addressed them (and I HAVE tried to understand what you are saying). But as DeWitt said, what do I know?

  1481. VS Says:

    Hoi Bart,

    I’m sorry, but he made few to no good points. Also, a good part of them has already been answered.

    This is understandable since he (obviously) didn’t read the thread and he didn’t understand what Cochrane is saying, even though I tried to explain it.

    “Take a unit root process and change the root to 0.999. That’s a “close” stationary process.”

    So, you change a pure unit root process it to 0.999 and add a trend there. This trend-stationary near-unit root process will be indistinguishable from a unit root, and as a matter of fact, in such a case you should treat it as a unit root process.

    Cochrane was discussing the question of whether we can say with certainty that the AR term is 0.999 or 1.

    Now, I don’t see either the IPCC, or anybody else in the climate science community claiming that we have an AR term equal 0.999.

    Am I mistaken here?

    Igor simply ignored my explanation, and to be honest, if somebody uses that tone in a discussion while admitting he understands very little of the subject, I’m more or less done trying to explain over and over again.

    I’m really not obliged to talk to him.

    ——————

    Now, forget the blogs, I wasn’t pointing the finger at you in particular here (you didn’t invent this ‘standard practice’, as it is employed on both sides of the debate).

    We’re talking about the IPCC AR4 WG1 report now.

    That’s where your figure is from.

    I would like a clear reply to those two questions, while referring to the IPCC chart you linked to, since I spent over a month, and God knows how many thousands of words, making my point.

    Please, don’t dance around it. This is what started the whole discussion.

    Best, VS

    PS. “I conceded multiple times that I agree that the confidence interval of an OLS fit is probably underestimated when the errors are autocorrelated.”, this is not what I want you to ‘concede’ Bart, as this has little to do with our discussion.

  1482. VS Says:

    correction:

    this: So, you change a pure unit root process it to 0.999 and add a trend there.

    should read: So, you take a pure unit root process with drift, and change the AR term from 1 to 0.999.

  1483. sod Says:

    this is becoming a little stupid.

    (1) Not call these arbitrary straight lines ‘trends’, but least squares fits

    the IPCC is not limited in its choice of words to what VS allows. in contrast to VS, they do NOT take a purely statistical approach.

    they can use the term trend as a reference to a simple linear regression, or use the physical background to call this a real trend, or use the pure statistical approach (confirmed by Tamino and Breusch) and speak of a trend in the sense that VS uses the term.

    (2) Stop reporting nonsensical confidence intervals for these LS fits, which imply that you are actually engaging in statistical estimation here, instead of ‘visualizing’

    you might not have noticed it, but there are NO confidence intervals around those trend lines…
    that graph is perfectly fine.

    —————————-

    the usual “self-reference” that VS provided above does not lead to any useful definition of the term trend. (including a discussion of the de facto use in scientific literature)

    instead it repeats his nonsense claim about just taking the difference between two years:

    When I wrote earlier that the best estimate of the increase in temperatures over the period 1881-2008 was simply giss(2008)-giss(1881), I wasn’t joking. This is the realized increase in temperatures over the period. I firmly stand by that.

    that IPCC page would look a little different, when it was written by VS:

    let us ignore physics for a start. the temperature didn’t change by more than 1°C, which is normal for a (semi) random walk. it will behave equally randomly in the future (VS ignores recent temperature records)

    http://rankexploits.com/musings/2010/march-uah-temperature-anomaly-0-653c/

    if you want to know how temperature changed over the last 50 years, you need to subtract the two numbers. including errors, you will of course know absolutely nothing afterwards.

    enjoy the future!

  1484. phinniethewoo Says:

    hi VS,

    I liked TH’s explanation in his preface, where he describes the reluctance of (economic) scientists at the time to adopt a stochastic scheme for the analysis of their observables. Although these scientists also used LSq methods then, they did not want to use deeper statistical methods, because they felt their physical model with “exact laws” would then be under attack.

    We see this Jekyll and Hyde behaviour in this thread as well sometimes :)

  1485. Henry Crun Says:

    VS

    Thanks for all your efforts, do you know the saying ‘you can lead a horse to water…’? Please don’t give up.

    I’m on P2 of Hamilton already, should be able to hold my own by sometime in the next century 8-)

  1486. phinniethewoo Says:

    On a lighter touch, let us address physics:

    - How would “earth’s average temperature” have evolved, then, without humanity in the period 1850-2008?
    Would the temperature have gone up, down, or stayed the same?

    I think the AGW crowds have, without telling the rest of us, embroidered further on their guru JM Keynes’ “Law of Indifference”, and have replaced it with their “Law of Flatulence”: if you know nothing, just assume it will stay flat. And be 90% sure of that.

  1487. AndreasW Says:

    VS

    I don’t think Bart will agree on anything if it doesn’t lead to the conclusion that CO2 is the main driver of temperature. [edit] Cointegration please! I’m waiting :)

  1488. Polite Indian Says:

    I second AndreasW

    Yes Cointegration Please! I am waiting as well

  1489. Bart Says:

    VS,

    Your comment regarding ‘dancing around’ reminds me of the pot and the kettle.

  1490. manacker Says:

    Phinniethewoo

    OK. Back to the drunk’s walk and your impending Nobel. As you say, endogenous factors (i.e. his present condition, principally his peripheral motor reflex) will play a major role in his future whereabouts. If he is enjoying a light buzz he has a better chance of avoiding the ditch (or canal), and thereby returning to his original lamppost, than if he is totally bombed out of his gourd. Of course, it can be argued that “his present condition” has a strong exogenous component, i.e. the decrease of peripheral motor reflex response attributable to the forcing from the quantity of Genever consumed just prior to the walk (i.e. the “ginhouse effect”). However, other exogenous factors (topography, as you mentioned, for example) are certainly also important. As you point out, just as with earth’s temp, hypothermia cannot be ruled out, especially if the topography does include a nearby canal or ditch.

    BTW, reports from Jones and tamponi are losing their luster; can’t we have something a little sexier?

    Max

  1491. phinniethewoo Says:

    I second Polite Indian
    I think cointegration would make everything more lucid

    PS
    RealClimate should make available the Code, and, indeed, the Data, if they have any other suggestion (positive subjective determinism with nonlinearities?)

  1492. phinniethewoo Says:

    max

    I am a great fan of the great Soros in this respect: if you have exhausted everything else to embellish your tracks, just try quantum mechanics!

    And indeed, randomness and a deeper knowledge of the Law of Large Numbers are the missing link to explain the mysterious collapse of the wave function.
    Note that in QM, as well, we have confused observers standing around, a half-unexpected realisation, and lots of statistics…
    My magnum opus will be centered around this.

  1493. Willem Kernkamp Says:

    In reply to:
    # phinniethewoo Says:
    April 7, 2010 at 13:51

    “TH was only a research associate when he wrote this? Those were the times when one actually had to push the cart to earn one’s bowl of rice in the evening… Nowadays research associates, at double or thrice the salary in real terms, are more the fiefdom of kiddies with a complicated genealogy and an active lifestyle in all kinds of delightful corners of social life. Statistics? Nah, I passed that post in life by wearing my most translucent blouse to the exam.”

    I saw VS’s post on the mentors that helped Haavelmo along. Today, his professors would have prepended their names to his work. Do you still own the blouse?

    Will

  1494. DLM Says:

    Bart says: “It sounds like a description of this figure: http://www.ipcc.ch/graphics/ar4-wg1/jpg/faq-3-1-fig-1.jpg

    I take those linear fits to serve a similar purpose as mine in my post: A visualization tool.”

    From the post that started this discussion, Bart says: “Often, the last datapoint (representing 2009) is omitted, and only HadCRU temperatures (in blue) are shown, to create the most visually compelling picture for claiming that “global warming has stopped” or even reversed (“blogal cooling”, pun intended).”

    What is the substantive difference between your and the IPCC’s use of visualization tools, and the use of informal statistics by the “blogal cooling” crowd, to create their own visualization tools?

    Pot, kettle?

  1495. manacker Says:

    Phinniethewoo

    You have an uncanny knack for asking the questions that force original thought.

    How would “earth’s average temperature” have evolved, then, without humanity in the period 1850-2008?
    Would the temperature have gone up, down, or stayed the same?

    Evoking the AGW crowd’s “Law of Flatulence” to result in a projected flat global temperature trend is an interesting concept, but I believe the issue is even more basic than this (see IPCC for assessed likelihoods as expressed below, based on expert judgment, rather than formal attribution studies).

    Without humanity there would virtually certainly have been no “globally and annually averaged hand-picked land and sea surface temperature” construct, only local temperatures (as pointed out repeatedly by Tim Curtin).

    Some of these would likely have gone up, some would likely have stayed the same and others (a smaller number?) would more likely than not have gone down over this extended “blip” in Earth’s history, as our planet virtually certainly recovered from a period of historically recorded colder weather called the Little Ice Age, which, it should be said in fairness, would virtually certainly not have been historically recorded without human beings on the planet.

    There would very likely have been no urban centers (at least as we know them today) without human beings, so there would virtually certainly have been no urban heat islands to introduce artifacts.

    Depending on the construction of the “globally and annually averaged hand-picked land and sea surface temperature” indicator, the record could virtually certainly be made to show a trend of any desired magnitude in any chosen direction over the time period measured.

    Hope this has provided positive feedback in your deliberations on this matter.

    Max

  1496. manacker Says:

    DLM

    The “visualization tool” from AR4 WG1 Ch.3 FAQ is a beautiful piece of chartmanship (which is quoted in SPM 2007 as evidence for a purported acceleration in warming over the 20th century).

    In a 150+ year record with multi-decadal warming/cooling cycles, such as the HadCRUT temperature record, a shorter time period can easily be shown to have a steeper trend than a longer time period.

    One could get the same result by starting all the trends at the start. The first 40 years of the 20th century would then show a linear trend that is almost twice that of the entire century. Would this imply a deceleration in the rate of warming over the 20th century? Not anymore than the IPCC chart shows an acceleration.
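
    The window-dependence is easy to demonstrate. A minimal sketch (the data file is a placeholder):

        # OLS slopes over nested windows of the same series, all starting
        # at the beginning of the record.
        import numpy as np

        y = np.loadtxt("hadcrut_annual.txt")      # hypothetical, 1901 onward

        def slope_per_decade(x):
            return 10 * np.polyfit(np.arange(x.size), x, 1)[0]

        print("first 40 years :", slope_per_decade(y[:40]))
        print("full century   :", slope_per_decade(y[:100]))
        # Which window you choose to draw decides which "acceleration"
        # story the chart appears to tell.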

    Chartmanship is a wonderful thing, especially when combined with a bit of smoke and mirrors.

    Max

  1497. manacker Says:

    DLM

    PS The IPCC “visualization” chart is, as VS stated, a textbook example of how to lie with statistics. Unfortunately, there are others in AR4 WG1 and SPM 2007.

    Max

  1498. Kweenie Says:

    “- How would “earth’s average temperature” have evolved, then, without humanity in the period 1850-2008?
    Would the temperature have gone up, down, or stayed the same?”

    With 95% confidence Earth would still be in the Little Ice Age. And no UHI, of course.

  1499. sod Says:

    One could get the same result by starting all the trends at the start. The first 40 years of the 20th century would then show a linear trend that is almost twice that of the entire century. Would this imply a deceleration in the rate of warming over the 20th century? Not anymore than the IPCC chart shows an acceleration.

    no, such a cherry pick would not show anything.

    http://www.woodfortrees.org/plot/hadcrut3vgl/plot/hadcrut3vgl/to:1890/trend/plot/hadcrut3vgl/trend/plot/hadcrut3vgl/from:1900/to:1940/trend

    And all this beside the remarkable silence of VS over the constant abuse of completely false statistical methods by “sceptics”.

    For example, the fourth-order fit was proposed last year by Roy Spencer, the high guru of sceptic climate science.

  1500. John Says:

    Kweenie

    Without humanity who would have measured it or even cared?

  1501. phinniethewoo Says:

    It is worth pursuing the question of humanity-less anomalies in earth’s temp, because:

    - if it would have been hotter than now without humanity, then no worries: we are doing a good job putting in CO2
    - if it were a degree or more cooler than now, we are doing a good job as well putting in CO2
    - if it were about the same temp, with the alleged anomaly indeed included, we shouldn’t care too much either: obviously GHGs do not matter too much

    I forget now: when SHOULD we care again about earth’s temp anomalies??

  1502. Bob_FJ Says:

    Bart,
    Let us break the monotony of the various statistical arguments and return to your excellent graphs at the head of your article. It is great to see the three sources together, all with the same anomaly base-line. Now let us consider how one might analyse any trends if one were living in the year ~1945 with the same expertise (and no knowledge of the future). Is there any real difference between ~1945 and ~2009 trend analysis? (and what went wrong after 1945?)

    Please see this mark-up of your figure 2 for more detail on my query.

  1503. DLM Says:

    sod,

    Why are you so angry, and in such a panic? The imminent climate scientist, Nobel laureate, IPCC guru Dr. Phil Jones, has very recently reassured us that there has been no statistically significant warming in the last 15 years. And he has told us that recent warming rates are not statistically different from the period before evil humans started dumping huge amounts of CO2 into the atmosphere, around 1940. Try to visualize some coolness sod.

    Harrabin – Do you agree that according to the global temperature record used by the IPCC, the rates of global warming from 1860-1880, 1910-1940 and 1975-1998 were identical?

    Jones – The 1860-1880 period is only 21 years in length. As for the two periods 1910-40 and 1975-1998 the warming rates are not statistically significantly different.

  1504. DLM Says:

    manacker,

    I don’t think Bart is going to agree with you, if he chooses to reply.

  1505. cohenite Says:

    sod, indefatigable as usual, picks up the cudgels of trend and seeks to defend Fig 1 from FAQ 3:

    There has been some discussion of this here from comment 39792;

    http://rankexploits.com/musings/2010/how-do-we-know-that-temperature-is-i1-and-not-i2/#comments

    An additional rebuttal of Fig 1 is here;

    And using WFT the cherry-picking nature of Fig1 and the IPCC Clayton linear trends is well evident:

    http://www.woodfortrees.org/plot/uah/plot/uah/to:1998/trend/plot/uah/from:1998/to:2010/trend/plot/uah/from:1978/to:2010/trend

  1506. Pat Cassen Says:

    vs –
    With all due respect for your rigor and patience, Igor is not the only one who doesn’t understand your response to his question; neither do I. Please explain (yes, once again) what you think Cochrane meant when he said:

    “…For any unit root process, there are “arbitrarily close” stationary processes, and viceversa. Therefore, the search for tests [that] will sharply distinguish the two classes in finite samples is hopeless”
    but furthermore,
    “Campbell and Perron [imply] a much more severe version of the same problem, namely the possibility of deterministic trends.
    Low-frequency movement can be generated by unit roots (random walk components) or it can be generated by deterministic trends… Unit root tests are based on measurements of low-frequency movement in a time series, so they are easily fooled by nonlinear trends. Therefore, Campbell and Perron’s repeated theme that ‘the proper handling of deterministic trends is a vital prerequisite for dealing with unit roots’ is correct and sensible advice.”

    That is, he seems to be saying that near-unit roots are not the only problem; so are, separately, deterministic trends, which appear as low-frequency components.

    You say “So, we cannot tell the difference, in finite samples, between a unit root and a near unit root, true. However, the analysis should still proceed on the basis of test results.” Fine. Would you also say “So, we cannot tell the difference, in finite samples, between a unit root and a deterministic trend, true. However, the analysis should still proceed on the basis of test results.” If so, what then does Cochrane mean when he says “the proper handling of deterministic trends is a vital prerequisite for dealing with unit roots”?

    Thanks in advance for your response.

    (Alex – please butt in if you are so inclined; I seem to understand your explanations.)

  1507. cohenite Says:

    Is there an AGW forcing which can produce a low frequency deterministic trend?

  1508. sod Says:

    Why are you so angry, and in such a panic?

    this is funny. under a topic that discusses the rigorous application of statistical techniques, many people try to do armchair psychology.
    i am not angry, even though there are a lot of reasons to be.

    why don’t you simply explain to VS that his year-to-year idea is nonsense, and that there are no confidence intervals around the trend lines in the IPCC figure? (something his second question is about, which he explicitly wanted answered?)

  1509. Bart Says:

    DLM,

    No. It’s cherrypicking to only show a small subset of the data in order to arrive at a particular desired conclusion. I showed all the data.

  1510. DLM Says:

    Bart,

    Well some may cherry pick data, and others may cherry pick methods of statistical analysis that ignore all that inconvenient unit root Dickey-Fuller stuff, so that they might arrive at their desired conclusion. The resulting products of these dubious processes, are then used as visualization tools to attempt to sway political and public opinion, one way or the other. It can be very effective for a while, but it ain’t science.

    I haven’t seen that chart that you attributed to “some people” portraying just the last 12 years of data, or with 2009 left off. And I feel no need to look for it, because the Nobel laureate, imminent climate scientist, IPCC guru, Dr. Phil, has confirmed that there has been no statistically significant warming in the last 15 years. That is more than 12 years, and let’s presume that he includes 2009. Dr. Phil also has stated: “As for the two periods 1910-40 and 1975-1998 the warming rates are not statistically significantly different.” So I am not going to worry about getting burned up any time soon, as “some people” desperately want me to (worry, that is).

    sod,

    Glad to see that you are OK now. I am pretty sure that you aren’t going to burn up either. Hang in there.

  1511. plazaeme Says:

    Bob_FJ,

    Is there any real difference between ~1945 and ~2009 trend analysis? (and what went wrong after 1945?)

    This may be another way to visualize it:

    “Click”

  1512. VS Says:

    Hi Pat Cassen,

    I’ll try to do this with a short illustration.

    So Cochrane wrote that:

    “Low-frequency movement can be generated by unit roots (random walk components) or it can be generated by deterministic trends… Unit root tests are based on measurements of low-frequency movement in a time series, so they are easily fooled by nonlinear trends.”

    However, this is the series Cochrane was talking about. Now think of the term ‘deterministic nonlinear low-frequency trend’.
    In light of this series, Cochrane’s comment makes sense, no?

    Now take a look at the GISS record.

    Do you now understand that Cochrane’s comment is of very little relevance for the series we are discussing here?

    I want to leave it there (maybe Alex wants to elaborate), in part because I want to discourage this ‘culture of debunkation’, where people mindlessly run around on the internet hunting for a citation that vaguely ‘contradicts’ what I wrote in this thread and post it (with a big ‘GOTCHA, SUCKER!’) expecting me to spend a couple of days explaining/addressing it.

    All of this, without even bothering to do some background research on the topic (or even read the thread properly).

    —————

    Hi Bart,

    I’m sorry if my last post sounded overly provocative, but I really felt you were evading my question. What I wrote down in that post is quite clear and (I believe) well argued by the preceding, and after this entire discussion, I feel I deserve a straight answer.

    Here’s my comment, to which you still haven’t replied properly. Again:

    I want you to say whether, in light of this entire discussion, you feel that this WG1 figure (together with the confidence intervals listed in the legend) is deceiving/non-science, or not.

    Also, I got slightly annoyed by you writing:

    “I conceded multiple times that I agree that the confidence interval of an OLS fit is probably underestimated when the errors are autocorrelated.”

    What you basically wrote down is that an omitted AR term biases the trend estimate (slightly). Note that this is not what we are discussing here. You were in fact ‘conceding’ a (correct but irrelevant) point, which was made by Tamino a long time ago, on his blog. I furthermore know that you are very well aware of this blog entry, because it was you who brought it to this discussion, here.

    Do you now understand why I felt that you were ‘dancing around the question’?

    I wasn’t trying to escalate the discussion, honestly. So, read my post again, and please give me a clear answer.

    Best, VS

    PS. For those puzzled/dazzled by brainless repetition: Sod’s ‘question’ (about the ‘two data points’) was eloquently addressed by mikep a long while ago.

  1513. Bart Says:

    DLM,

    See the lower graph in this post for an oft-quoted example of cherrypicking.

  1514. Tim Curtin Says:

    Bart, I really hope I do not sound churlish; your blog is the best!

    But when you said (April 8, 2010 at 07:24):

    “DLM, No. It’s cherrypicking to only show a small subset of the data in order to arrive at a particular desired conclusion. I showed all (sic) the data”. BUT your graphs atop here are misleading in 2 respects at least, by failing to mention that “global” temperature data did not exist before about 1920 at the earliest, and by failing to mention that use of “anomalies” from the 1950-80 mean introduces at least 2 further anomalies.

    (1) the anomalies you show have all been multiplied by 100 from the actual values – as DLM has noted, this, however widely practised it has been and is, is designed purely to excite alarm in the media and the general public (like the nightly TV pictures of “smoke” stacks at power stations here in Australia as I type, emitting, horror of horrors, dihydrogen monoxide – water vapour; CO2, colourless and odourless, is much less photogenic). If your graphs showed simply ABSOLUTE temperatures from say 1920 to now, their “trends” would be much less compelling.

    (2) Using anomalies instead of actuals leads to subtle distortions in all statistical evaluation of said anomalies – at the least, all deterministic variables against which your anomalies might be regressed also need to be stated as anomalies from their respective 1950-1980 means. But then AR5, like AR4, will never present any regression results – much too risky with VS around!

  1515. Igor Samoylenko Says:

    VS:

    However, this is the series Cochrane was talking about. Now think of the term ‘deterministic nonlinear low-frequency trend’.
    In light of this series, Cochrane’s comment makes sense, no?

    Now take a look at the GISS record.

    Do you now understand that Cochrane’s comment is of very little relevance for the series we are discussing here?

    This looks to me like a blatant misrepresentation of what Cochrane said. His comments in section 1 and the beginning of section 2 are general, not restricted to the GDP time series. Then in section 2.1, “ECONOMIC INTERPRETATION OF UNIT ROOT TESTS”, he proceeds to look at the GNP time series as an example: “Consider the still-studied question whether GNP contains a unit root or not…”.

    Breusch and Vahid made a very similar comment in their paper:

    Most of the available unit root tests consider the null hypothesis of a unit root against a stationary or a trend stationary alternative. Given our discussions above about the possibility of observational similarity of a unit root process and a process with a deterministic trend in finite samples, it is not surprising to know that the finite sample properties of unit root tests are poor. The performance of these tests depend crucially on the type of trend and the period of cycles in the data. Stock (1994) emphasises the importance of properly specifying the deterministic trends before proceeding with unit root tests, and advises that ‘this is an area which one should bring economic theory to bear to the maximum extent possible’. In our context, the responsibility falls on the shoulder of climate theory rather than economic theory, an area that we know nothing about. Here, we proceed with simple assumptions about trends.

    Are you going to argue that they are also talking about something else and it is not relevant to GISS?

    PS: It ain’t your blog, mate, so don’t try to browbeat me into leaving, OK? Ignore me all you like, but I am not going anywhere in a hurry.

    Got it, mate? Or shall I explain it once more, this time in Russian?

  1516. VS Says:

    Hehehe Tim :)

    I agree with you 100% that taking away the 14 degrees and multiplying the anomalies by 100 indeed serves as a very potent ‘visual trick’ to make the rise ‘seem’ more extreme to the untrained eye.

    Here’s what the GISS series actually looks like, in absolute terms (Figure 12). The average global temperature increase over the past 128 years was approximately 3.8%.

    This is (definitely) not what a layman would ‘infer’ from the anomaly graphs.

    However, statistical regressions are not sensitive to this, because of automatic rescaling (i.e. you can subtract the average, or any other number, from the entire series, and multiply the series by 1000 or what have you, but your estimation results will be the same, save for the scale of the reaction coefficients and the estimate of the constant).

    Best, VS

    PS. The whole term ‘anomaly’ can also be interpreted as a rhetorical trick, as it implies that you have in fact determined some underlying process (which they haven’t), in light of which the current temperatures are ‘anomalous’.

    In fact, the series has simply been rescaled.
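    To see the rescaling point concretely, here is a toy check in Python (my own illustration, with made-up numbers): subtracting a constant and multiplying by 100 changes the estimated OLS slope by exactly that factor of 100 and nothing else.

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.arange(100)
        temp = 14 + 0.005 * t + rng.normal(0, 0.1, 100)   # an "absolute" series
        anom = (temp - 14) * 100                          # anomaly-style rescaling

        slope_abs = np.polyfit(t, temp, 1)[0]
        slope_anom = np.polyfit(t, anom, 1)[0]
        print(slope_abs, slope_anom, slope_anom / slope_abs)  # the ratio is exactly 100.0

    The fit itself is unchanged; only the units on the y-axis differ, which is why the visual impression and the statistics can come apart.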

  1517. VS Says:

    Igor,

    (1) I have addressed the issue adequately. Unless you can show me that nonlinear low-frequency trend in the dataset we are discussing (go look in the literature on the topic), I consider this case closed.

    (2) That’s indeed what BV wrote down. However, they are (explicitly) not addressing the unit root question. What I did in this thread was a fundamental, test-based unit root analysis of the trend-stationarity assumption concerning the global mean temperature record.

    I have argued/explained my approach ad nauseam.

    Hopefully, Professor Breusch will indeed comment here and set this straight. I’m growing a bit tired of the various layman interpretations/misrepresentations of what BV did in their paper.

    Best, VS

    PS. Have you bothered to pause and reflect on what a ‘low frequency non-linear trend’ in the GISS series implies for the whole ‘unprecedented warming’ argument? It helps not to get carried away in your counter-arguments when defending a dogmatic ’cause’.

  1518. VS Says:

    this: unit root analysis of the trend-stationarity…
    should read: ..fundamental analysis of the trend stationarity…

  1519. VS Says:

    eh, forget that correction

  1520. cohenite Says:

    Any takers;

    “Is there an AGW forcing which can produce a low frequency deterministic trend?”

  1521. VS Says:

    NOTE TO ALL:

    I’m waiting for a response to this question, by Bart.

    I’m not going to address any other issues until this is resolved.

    This is how the discussion started, this is why I performed this entire battery of tests and simulations, and this is why I spent a month explaining econometrics/methodology/statistics to all who are (were?) unfamiliar with these topics.

    This is the crux of the matter, and the true topic of this thread. I will not validate any (‘nonlinear low frequency deterministic trend’-esque) distractions/detours until this is settled.

    I believe I have earned a proper and straight reply, in light of all the effort I put into this thread.

    Thank you all for your understanding.

    Best, VS

  1522. John Says:

    VS
    Is a deterministic trend a trend based on determinism? If it is, how could you tell it was there in the output (temp) without reference to the inputs and knowledge of the time constants? That would seem to me to be awfully clever.

  1523. VS Says:

    Hi John,

    A trend estimate, be that a ‘deterministic’ or ‘stochastic’ trend, is simply a way of forecasting a series on the basis of:

    (1) time
    (2) previous realizations of the series

    It says nothing about either inputs or what causes the changes. It is a description of the pattern observed. In Haavelmo’s terminology, it is the derived probability law based solely on (1) and (2).

    By engaging in multivariate analysis (i.e. adding other input factors) you can tighten this probability law (i.e. tighten the forecasting intervals).

    This, however, is a different story (see e.g. Beenstock and Reingewertz’s cointegration analysis).
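    To make the distinction tangible, here is a rough sketch in Python (my own illustration, not VS’s code) of the two kinds of trend forecast described above: a deterministic trend forecasts from time alone, while a stochastic trend (a random walk with drift) forecasts from the last realization plus the average step.

        import numpy as np

        def forecast_deterministic(y, h):
            t = np.arange(len(y))
            b, a = np.polyfit(t, y, 1)        # fit y ~ a + b*t
            return a + b * (len(y) - 1 + h)   # extrapolate the fitted line h steps ahead

        def forecast_stochastic(y, h):
            drift = np.mean(np.diff(y))       # average step size
            return y[-1] + drift * h          # start from the last observed value

        y = np.cumsum(np.random.default_rng(2).normal(0.02, 0.1, 150))  # simulated drifting walk
        print(forecast_deterministic(y, 10), forecast_stochastic(y, 10))

    For a true random walk with drift the second forecast is the appropriate one, and the two can differ noticeably whenever the series has wandered away from its fitted line.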

    Bart, I want that answer, please.

    I sincerely hope that you will be both a scientist and a gentleman in this regard, and that you will prove all those charges of bunker mentality and dogmatic thinking, levied against the climate science community, wrong.

    Best, VS

  1524. John Says:

    VS
    Sorry, I didn’t mean time series in general, just the one discussed here, which you have established to be stochastic.

    It has been mentioned that the time series here contains a deterministic trend, and I could not see how this could be known from this time series alone. Thanks.

  1525. VS Says:

    John, this question has been dealt with ad nauseam. Read the thread please.

  1526. AndreasW Says:

    VS

    You surely deserve a straight answer from Bart. I would lift my hat to Bart if you got one, but I wouldn’t bet on it.

  1527. John Says:

    VS

    Oops, sorry! I didn’t mean to annoy you.

  1528. Tony Says:

    Come On, Bart!

  1529. phinniethewoo Says:

    Cherry picking, in my opinion (but OK, I admit I am not a gentleman and scientist like Bart), is displaying a mean averaged value without the variance on how it was obtained from 4000 sites.
    I will pick a sherry on that now… lunchtime.

    I find it extremely annoying that “i’s” get edited all the time instead of “I’s” (capital I). I am sure I hit the shift key on the i before “admit”, but it still turned out to be the small i. Something went on at Microsoft to give us yet another pseudoservice, and we are fxxd up with it now. I see many other people having this as well; it has already been going on for at least 6 months now. I got another one before “find” and another one before “see”… I give fxxing up on this.

  1530. Allan Kiik Says:

    After following this thread for one month now, I can see why cointegration analysis is a promising way to find out how big an impact CO2 appears to have, based on the available temperature data. This is an important question because, at least here in the EU, we have already started the mitigation of this impact, so we really need to know the extent of the „disease“ we are trying to cure with taxes for „renewables“ on our energy bills and the forced „phase-out“ of Edison light bulbs, replacing them with mercury bulbs, as described so eloquently by Joost van Kasteren and Henk Tennekes (http://pielkeclimatesci.wordpress.com/2010/03/17/the-unholy-alliance-between-philips-and-the-greens-a-guest-weblog-by-joost-van-kasteren-and-henk-tennekes/). We know from the early works of famous climate scientists, like Nobel laureates Phil Jones and Kevin Trenberth, and the more recent work of not-so-famous researchers McLean, De Freitas and Carter (http://img90.imageshack.us/img90/8486/mcleanetal2009fig7.png), that a large part of our global temperature record can be explained by the impact of SOI/ENSO and other ocean-atmosphere oscillations, but what is left must be an impact of changes in TSI and cloud cover, and of course our growing CO2 emissions.

    But obviously there are people who don’t want to know where we can get with this kind of analysis, and this is amazing – we are all for a better environment here (I don’t see any „oil-shills“ commenting here), so how can it be that some of us don’t want to know whether our mitigation effort is appropriate for the observed „disease“ or not?

  1531. Bart Says:

    VS

    I was going to respond to your question (again), and then I saw your newer comment. Don’t play games with me. Nobody (me included) is obliged to answer anybody here; we’re all here of our own free will. I happen to be the host of this blog. That does not mean that I attend to it 24/7. It doesn’t mean that I’m obliged to answer anything anyone brings up. It does mean that I set the rules, and nobody else. I try to do so in a fair, constructive and open manner. If you don’t like it, take it elsewhere. I don’t owe you anything, and vice versa.

    No, I don’t think that wg1 figure is deceiving. The trendlines help to see the temperature increase for different time intervals, that are different enough from each other so as to be climatically relevant. One could argue whether they are a necessary addition: the bare data show the increase in temp quite clearly all by themselves. If you take issue with the fact that they are linear fits, then you could just connect the beginning and end points (which you – surprisingly – seem to find a superior method of estimating the change in temperature). The lines for the longer periods would be a little steeper, but the basic message of the figure would not be different. The same goes for omitting them altogether.

    The thing is, people are interested in what the change in temp has been over different time intervals. In the absence of a better way to do it, a linear trend is often chosen, and yes, I understand the caveats of OLS better now than I did a month ago. But merely connecting the beginning and end points is imho clearly inferior to OLS, for reasons I stated before. I suggest we agree to disagree on that. I’m open to hearing about better ways to estimate both the linear trend and the temp increase over a certain time period.

    If the trend estimate is not strongly affected by the nature of the timeseries, but the error around the trend is affected (i.e. the error around the trend is slightly underestimated by an OLS method), is it then better to report the trend with or without errors/confidence intervals? That’s a difficult question. I think there is something to be said for reporting them, but mentioning the caveat (e.g. as Zeke Hausfather did on the Blackboard when he wrote: “we can take a look at the trends and trend CIs [confidence intervals] (no autocorrelation correction, so they will be a tad too small) for three periods”). Assuming that the figure in question doesn’t take into account autocorrelation or other aspects of the timeseries, it would have been better to add a clarification to that effect. Calling the whole figure deceiving is way too strong imho. Your mileage may vary.

    On another note, this figure does report the absolute temp on the right-hand side, as Tim seemed to ask for. It really doesn’t make any difference. If it makes you happy to add 14 to the temp anomaly graph, be my guest. In his reply to Tim, VS showed a graph (fig 12) extending down to 0 deg C (still a rather arbitrary choice for the end of the scale). But why not the Kelvin scale? Then you could even claim that the increase isn’t 3.8%, but an order of magnitude smaller still. Great! But when I was a teaching assistant, I would mark a graph like that down: if you start the y-scale at zero degrees, the values only cover a fraction of the graph. In such a case the graphical representation becomes much clearer when you adapt the scale so that the signal is spread over the entire graph.

    VS, you’re starting to show your true colors when you write:

    “The whole term ‘anomaly’ can also be interpreted as a rhetorical trick, as it implies that you have in fact determined some underlying process (which they haven’t), in light of which the current temperatures are ‘anomalous’.”

    The word anomaly refers only to the fact that it is a temp difference from a reference period. But regardless of the word, radiative physics is quite well understood and is indeed an important underlying process that drives the increase in temps. In fact, the warming was predicted before it was observed to happen:

    “Climatic change is being brought about by human induced increases in the concentration of atmospheric carbon dioxide, primarily through the processes of combustion of fossil fuels.”

    From “The Artificial Production of Carbon Dioxide and Its Influence on Temperature”,
    Guy Callendar, 1938. This would be a great read to get up to speed with how science came to understand what it does. I suggest reading it (and other scientific literature) before making all too bold statements about climate science.

  1532. Frank Says:

    Bart says:

    “No, I don’t think that wg1 figure is deceiving. The trendlines help to see the temperature increase for different time intervals, that are different enough from each other so as to be climatically relevant. ”

    As opposed to saying that, if measured over equal-length intervals, many temperature increases in the past are very similar to more recent temperature increases??

  1533. Tony Says:

    Bart,

    Having followed this thread all the way through it seems that this blog is getting a bit much for you to handle. Why?

    It has not escaped your notice that climate science’s CO2/AGW claim has been catapulted into international prominence, as it is being used as the rationale for a ‘world-government/multi-billion trading scheme’.

    And the hits you are getting reflect the concern of ordinary people to know if these claims are as definitive and serious as they are being told, and that the consequences of the already planned actions are indeed necessary.

    In a nutshell: is CO2/AGW climate science rock solid, or is it to be the new Marxism that ended up killing millions?

    Now, if you for one sincerely believe that the climatologists’ claims are serious and well-founded enough to justify the life-and-death consequences (e.g. vide the impact of biofuels on the world’s food supply; the price increases of kerosene, the current source of artificial light for about a billion people; and so on), and are willing to take that responsibility, then carry on with the new demands that this blog requires.

    Otherwise, do the other right thing, and shut it down.

  1534. DLM Says:

    Bart says: “A very popular graph that purportedly falsifies the whole “AGW dogma” is the following, showing unrelated trends of temperature and CO2 for a recent 11 year period. It’s been carefully crafted to create a certain impression:”

    That graph from the mind of the world famous Joe Deleo may be famous, but I don’t recall seeing it. It wasn’t in any famous intergovernmental report, wasn’t reported on ad nauseam by the BBC, nor was it in any famous Academy Award-winning movie narrated by a world famous failed U.S. Presidential candidate. And little Joe Deleo ain’t tryin to de-industrialize the world economy. Why, of all the crap out there by people who should know better, do you pick on little Joe Deleo? Is there nothing in the use of visualization tools by the AGW crowd that will elicit a hint of disapproval from you, Bart?

    In case you haven’t noticed, Bart, the AGW case has been substantially discredited by the behavior of leading climate scientists and the IPCC. Phil Jones is your big problem, not little Joe Deleo.

  1535. plazaeme Says:

    Otherwise, do the other right thing, and shut it down.

    In such a case, please tell us where you guys go. We are waiting for the cointegration stuff. But make it in an open place, like this one. We need the cointegration deniers; it’s easier to follow with the endless explanations. No joke.

    I’d better not comment on the “not deceiving graph”. There could be too much blood. Note that for shorter recent periods, the slope is greater, indicating accelerated warming. Wow!

  1536. phinniethewoo Says:

    Many references are paywalled for the general public (who pay for it all, though), although universities often keep copies for their own people on their servers.

    This is the many times quoted Beenstock and Reingewertz report, found at the blackboard post.

    Click to access Nature_Paper091209.pdf

    Blackboard
    http://rankexploits.com/musings/
    http://rankexploits.com/musings/2010/questions-for-vs-and-dave/

    As a member of the public, I have no doubt that cointegration should be tried out as suggested, and that TSA in general should be used as a formal tool to come to better scientific conclusions.
    Articles about melting glaciers, photos of swimming polar bears, and chartists cluelessly drawing visually appealing lines around in plots: of that we have had more than enough for our money.

  1537. Igor Samoylenko Says:

    VS:

    I want you to say whether, in light of this entire discussion, you [Bart] feel that this WG1 figure (together with the confidence intervals listed in the legend) is deceiving/non-science, or not.

    …and whilst you are at it, why not ask Bart to explain when he is going to stop beating his wife?

    I second what Bart said above. Here is a quote from “Appendix 3.A: Low-Pass Filters and Linear Trend” explaining the way linear trends are calculated:

    The linear trends are estimated by Restricted Maximum Likelihood regression (REML, Diggle et al., 1999), and the estimates of statistical significance assume that the terms have serially uncorrelated errors and that the residuals have an AR1 structure. Brohan et al. (2006) and Rayner et al. (2006) provide annual uncertainties, incorporating effects of measurement and sampling error and uncertainties regarding biases due to urbanisation and earlier methods of measuring SST. These are taken into account, although ignoring their serial correlation. The error bars on the trends, shown as 5 to 95% ranges, are wider and more realistic than those provided by the standard ordinary least squares technique. If, for example, a century-long series has multi-decadal variability as well as a trend, the deviations from the fitted linear trend will be autocorrelated. This will cause the REML technique to widen the error bars, reflecting the greater difficulty in distinguishing a trend when it is superimposed on other long-term variations and the sensitivity of estimated trends to the period of analysis in such circumstances. Clearly, however, even the REML technique cannot widen its error estimates to take account of variations outside the sample period of record. Robust methods for the estimation of linear and nonlinear trends in the presence of episodic components became available recently (Grieser et al., 2002).

    As some components of the climate system respond slowly to change, the climate system naturally contains persistence. Hence, the statistical significances of REML AR1-based linear trends could be overestimated (Zheng and Basher, 1999; Cohn and Lins, 2005). Nevertheless, the results depend on the statistical model used, and more complex models are not as transparent and often lack physical realism. Indeed, long-term persistence models (Cohn and Lins, 2005) have not been shown to provide a better fit to the data than simpler models.

    Here is the cited paper by Cohn and Lins (2005), which looked at modelling long-term persistence in hydrological data employing FARIMA models. As the comment above explains, these “models have not been shown to provide a better fit to the data than simpler models”.
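    For readers who want to see what an AR1-style allowance for autocorrelation does to a trend’s error bar, here is a back-of-the-envelope sketch in Python (my own illustration of the common effective-sample-size correction, not the REML procedure the appendix describes):

        import numpy as np

        def trend_with_ar1_adjustment(y):
            n = len(y)
            t = np.arange(n)
            b, a = np.polyfit(t, y, 1)                       # OLS trend
            resid = y - (a + b * t)
            s2 = np.sum(resid**2) / (n - 2)                  # residual variance
            se = np.sqrt(s2 / np.sum((t - t.mean())**2))     # naive OLS standard error
            r = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # lag-1 autocorrelation of residuals
            n_eff = n * (1 - r) / (1 + r)                    # effective sample size
            se_adj = se * np.sqrt((n - 2) / max(n_eff - 2, 1.0))
            return b, se, se_adj

    With positively autocorrelated residuals (r > 0), n_eff < n and the adjusted standard error is wider, which is the qualitative behaviour the quoted appendix attributes to REML with AR1 errors.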

    So, what is deceiving/non-scientific about this? You disagree with this? Fine, why don’t you write and publish a paper and, if it is deemed to have any substance by those who know about both statistics and climate, have it cited in the next IPCC report?

    If you think you know more about statistics than everybody working in climate science, then you are seriously delusional. I suggest you go and carefully re-read Marty’s comment, then the B and V paper, and then look up the meaning of the word “humility”.

  1538. DLM Says:

    Igor says: “…and whilst you are at it, why not ask Bart to explain when he is going to stop beating his wife?”

    You present a false characterization of the question. It more resembles: do you beat your wife? It was a legitimate yes/no question, and it has elicited the predictable response. Our faith in the IPCC’s competence and objectivity has been restored.

    I am not convinced that VS is the ultimate authority on the use of statistics, but not having a paper published in a climate science journal, or not being cited by the BS IPCC, sheds no light whatsoever on his qualifications. In fact, it may amount to a badge of honor.

  1539. phinniethewoo Says:

    Igor
    you keep on waffling about models with volcanoes in them etc, and suddenly detect deterministic non-linear trends in AGW-alarmist pseudoscience to cover up for the work not done by Baghwan Pachaundry and his march of the dunces. Oh, and many many reports from the AGW clergy, all irrefutable:

    Strawman!

  1540. manacker Says:

    Bart and DLM

    Sure, the IPCC “trend” chart (AR4 WG1 Ch.3 FAQ) is misleading.

    In fact, it is just as misleading and dishonest as the chart below.

    The IPCC statement derived from this comparison (SPM 2007, p.5) is also misleading, in that it implies an acceleration of warming (as does the bogus chart):

    The linear warming trend over the last 50 years (0.13°C [0.10°C to 0.16°C] per decade) is nearly twice that for the last 100 years.

    From the chart I posted one could conclude (just as doubtfully):

    The linear warming trend over the first 40 years (0.14°C per decade) is twice that for the entire 100 years (implying an equally false deceleration in the trend).

    Both statements are technically correct, but they are intended to convey a message that is false; therefore they are “hidden lies”.

    I believe that is the point made by several posters on this site.

    Max

  1541. phinniethewoo Says:

    Plazaeme
    Note that for shorter recent periods, the slope is greater, indicating accelerated warming. Wow!

    maybe we should call our friends the AGW alarmists from now on the slope-ists :)

    sloppistry with a keen eye for visually appealing graphs

  1542. John Says:

    DLM

    It is doubtful a known “climate criminal” would be allowed to publish in a climate science journal anyway.

  1543. Ibrahim Says:

    Bart,

    Could you please explain what was the cause of the rise in temperatures from about 1900 until 1945 and the cooling from about 1945 until 1960?
    And would you read the following book from page 444 onward; maybe then you will become a bit more sceptical about AGW.

    http://www.archive.org/stream/arcticice00zubo#page/444/mode/2up

    Have fun!

  1544. DLM Says:

    John says:”It is doubtful a known “climate criminal” would be allowed to publish in a climate science journal anyway.”

    That may be true, but it would not necessarily keep VS from being cited by the IPCC. He could get something published in a hiking magazine, or maybe even the Journal of Gardening.

  1545. KD Says:

    Bart,

    Because you believe WG1 is not deceiving, I hope you won’t mind adding a few trend lines and confidence intervals as follows. Note that all have the same number of years as the original 1975-2009 trend line.

    1915-1949
    1916-1950
    1917-1951
    1918-1952
    1919-1953

    Thank you. I look forward to seeing the new chart with the new trend lines. I’m betting these will be quite illuminating.
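    For anyone who wants to run KD’s request themselves, a small sketch in Python (it assumes `years` and `anom` are numpy arrays holding the annual record; loading the actual GISS/HadCRU/NCDC data is left out):

        import numpy as np

        def window_trend(years, anom, start, end):
            # OLS slope over the years [start, end], returned in degrees per decade
            mask = (years >= start) & (years <= end)
            return 10 * np.polyfit(years[mask], anom[mask], 1)[0]

        # for start in range(1915, 1920):
        #     print(start, window_trend(years, anom, start, start + 34))

    Each window is 35 years long, matching the 1975-2009 line in the figure under discussion.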

  1546. Bart Says:

    Tony,

    You truly sound alarmist/ridiculous in your assertion about a ‘world government’ (but a discussion about that clearly belongs in the open thread). The hits I’m getting probably reflect that there are many people who strongly dislike the perceived policy consequences of AGW and will jump at anything that can prevent them from having to face reality.

    DLM,

    That graph is one of the most popular ones on the internet re climate change, and I’ve seen it multiple times, even on the evening news. It has been very influential. There is a reason that it didn’t make it into any scientific assessment: the graph is entirely misleading. News media and politicians are using stuff found on the internet to try to prevent even the most minimal policies, so the influential stuff that’s clearly bogus should be exposed as such.

    Igor,

    Thanks for the addition.

    Ibrahim,

    There are different climate forcings at work simultaneously; some natural, some man-made. In the beginning of the 20th century, increased solar irradiance, a lack of strong volcanism and GHG were the primary positive (i.e. warming) climate forcings. In the middle of the 20th century, the strong increase in aerosol precursors (aerosol exerts a negative (i.e. cooling) climate forcing) caused the temperature to remain more or less stable. Since the 70s, aerosol burden didn’t increase as much anymore while GHG did; GHG became the dominant climate forcing.

  1547. Ibrahim Says:

    Bart,

    Look:

  1548. Ibrahim Says:

    I forgot:

    http://tamino.wordpress.com/2008/04/05/stalking-the-elusive-solar-cycletemperature-connection/

  1549. DLM Says:

    Bart,

    I have been seriously interested in the issue of ‘climate change’ for a couple of years, and I do not recall seeing that graph, or a discussion of that graph, on the ‘popular’ skeptic sites. I am sure it has been properly ‘debunked’ on RC (which was the first site I frequented), but I haven’t looked at that propaganda dispenser for some time. In any case, I think that your characterization of that chart as being “very influential” is ludicrous.

  1550. JvdLaan Says:

    DLM and Phinnthwewoo
    Have you had anything substantial to add so far, besides your denialist Watts-and-co crap-so-easy-to-debunk? Or are you of the kind that sees so many conspiracies in the CRU mails, while there is almost anything that disproves AGW (remember, the fudge factor was behind ;; which means: not to be used in the code).

  1551. JvdLaan Says:

    … almost anything …
    should read: … almost nothing…

  1552. DLM Says:

    PS

    “Is there nothing in the use of visualization tools by the AGW crowd that will elicit a hint of disapproval from you, Bart?”

    Start with An Inconvenient Truth, and proceed through the Hockey Stick, hide the decline, and the various IPCC-gates, including the latest – Cowgate! Do you approve of all that stuff, Bart? Is little Joe Deleo really that big of a problem?

  1553. DLM Says:

    Jvd,

    Well, I find phinnie’s posts at least interesting and amusing, and I am sure phinnie will reciprocate, but I don’t recall seeing any post of yours up to now. I hope that answers your question.

  1554. Kweenie Says:

    I’m not sure if VS (and other Dutch readers) have taken the time to read the Dutch section of this blog. It’s FULL of hardcore AGW [edit]. Having taken note of its contents and of Bart’s recent attitude, I’m pretty sure he will never, ever come even halfway toward an honest and truly scientific debate free of the Codex Vaticanus dogmas.

    It’s fighting a losing battle…

  1555. John Says:

    DLM

    It’s rumored that, since the recent IPCC mistakes, they have distributed some new definitive reference books to climate scientists; they are:

    “The Penguin Book Of Climate Science” and
    “The Penguin Book Of Statistics”

  1556. Kweenie Says:

    PS
    One reason to praise Bart is the fact that he does not censor any (I believe) of the messages. So in a way that’s positive, because VS et al may not have convinced Bart and the Tamino [edit], but I’m pretty sure multiple anonymous readers have been positively inspired.

    For that, thank you Bart; maybe it’s your Calvinist genes? ;)

  1557. phinniethewoo Says:

    jvdlaan
    could you please de-clog what you wrote there? Thanks.

    Dr Bart should acknowledge that he has drawn cherry-picked lines on his charts without any consideration for Time Series Analysis.
    He should promise us that he will never do that again… or he gets a spanking.

  1558. JvdLaan Says:

    @DLM
    Well, since you mention hide the decline, it shows you simply do not understand. Meanwhile, the bumblebee still flies.

    @Kweenie: just go off to GeenStijl, Klimatosoof or DDS if you don’t like it here. You know absolutely nothing about science.

  1559. Bart Says:

    Kweenie,

    Tone down your language and keep your accusations at the door.

    I’m all for an honest discussion of the science. Problem is, many aren’t. If you have an issue with what I wrote elsewhere, take it up over there and come with some real arguments rather than mere mud slinging.

  1560. phinniethewoo Says:

    Kweenie

    Haven’t you noticed the innocuous [edit] markers everywhere, also in your posting? Or was this some irony of yours? :)

  1561. DLM Says:

    John,

    and “Climate Science For Dummies”

    Kweenie,

    Bart does deserve praise. At least he has the guts to confront criticism. And he must be getting scolded by the team for allowing this to go on.

  1562. DLM Says:

    JVD,

    What is it that I don’t understand? Other than your way of communicating, which is very angry and almost incomprehensible.

  1563. John Says:

    DLM

    “Bart does deserve praise. At least he has the guts to confront criticism. And he must be getting scolded by the team for allowing this to go on.”

    I’ll second that.

  1564. JvdLaan Says:

    @phinniethewoo
    Have you ever heard of punctuation? If not, look it up. After you have learned the concept, it might be a good idea to use it.

  1565. JvdLaan Says:

    @DLM
    Start with An Inconvenient Truth, and proceed through the Hockey Stick, hide the decline, and the various IPCC-gates, including the latest – Cowgate! Do you approve of all that stuff, Bart? Is little Joe Deleo really that big of a problem?

    By mentioning these you showed your true colours. And yes, my previous reaction was made in anger. Not caused by your comment – I can handle that – but by the continuous flood of off-topic comments and scandalous remarks toward our host.

    And besides, it is D’Aleo. And he is not little, but quite influential in the blogosphere.

  1566. mikep Says:

    Bart, you say
    “But merely connecting the beginning and endpoint is imho clearly inferior to OLS, for reasons I stated before. I suggest we agree to disagree on that.”
    For reasons given in my earlier post, I think you are wrong. To what question is the OLS trend supposed to give the answer? It does not tell you how much temperature has actually increased – the two data points do give the answer to that – nor is it the best guess about where temperature will go next – OLS estimates in this context are not Best Linear Unbiased Estimates, and the stochastic trend gives a better answer.
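    A toy comparison of the two notions of “how much it increased” (my own Python sketch, on simulated data rather than the temperature record):

        import numpy as np

        rng = np.random.default_rng(3)
        y = np.cumsum(rng.normal(0, 0.1, 150))        # a pure random walk, no drift

        actual_change = y[-1] - y[0]                  # what the two endpoints give you
        slope = np.polyfit(np.arange(len(y)), y, 1)[0]
        ols_implied_change = slope * (len(y) - 1)     # what the fitted OLS line gives you
        print(actual_change, ols_implied_change)      # generally two different numbers

    On a series like this the endpoint difference is, by construction, the realized change, while the OLS line answers a different question, which is the distinction being argued over here.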

  1567. Bart Says:

    (Hmm, ironically this ended up in the wrong thread initially…)

    Off-topic discussions (on a world government, An Inconvenient Truth and other stuff) belong in the open thread.

    Swearing or derogatory language is not allowed. I’ll [edit] it if it’s only a sporadic word, and will delete the entire comment if the whole gist is just derogatory. Repeat offenders of the comment policy will be put on moderation. If you don’t like it, take it elsewhere.

  1568. phinniethewoo Says:

    For a drunk’s random walk, every single step he takes is a step into a new origin, which is from then on the expected value of his position.

    So if the 150-year “earth temperature anomaly” snippet we are looking at were, for example, a random walk, then the “distance covered” over 1850-2009 is simply the difference between the endpoints: nothing more, nothing less.
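    A quick Monte Carlo check of that “new origin” property (my own sketch; illustrative only): the mean of many simulated walks stays at the starting point, while the spread of positions grows roughly with the square root of the number of steps.

        import numpy as np

        rng = np.random.default_rng(4)
        paths = np.cumsum(rng.normal(0, 1, (10000, 100)), axis=1)  # 10,000 walks of 100 steps

        print(paths[:, -1].mean())                     # close to 0: the origin stays the expected value
        print(paths[:, 24].std(), paths[:, 99].std())  # roughly 5 vs 10, i.e. sqrt(25) vs sqrt(100)

    This is also why short windows of such a series can show impressive-looking “trends” in either direction.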

  1569. Eli Rabett Says:

    You know, the black helicopter crowd should really be advocating immediate and strong action to limit greenhouse gas emissions. The bottom line is that geoengineering requires fleets of black helicopters to get done. If it ever gets to the point where we have to do things like pump sulfates into the atmosphere, THAT will be the time when the guys in black show up on your lawn, not now.

    Repent Tick Tock man

  1570. DLM Says:

    mikep,

    “To what question is the OLS trend supposed to give the answer? It does not tell you how much temperature has actually increased – the two data points do give the answer to that – nor is it the best guess about where temperature will go next – OLS estimates in this context are not Best Linear Unbiased Estimates, and the stochastic trend gives a better answer.”

    That seems to me to be fairly persuasive, but then what do I know. This comment from the Breusch paper seems to lend some support to Bart’s position: “The question we are trying to answer though is not about a unit root in the temperature data, it is about a tendency of the data to drift upwards.” As does the conclusion of the paper. And testy Igor has pointed to some sources that also seem to lend support to Bart’s position, which to my untrained eye have not been explained away.

    I don’t expect Bart to concede the point. So why not move on to the cointegration stuff?

    Bart,

    Will you reply to questions that are asked on the open thread? Do you read the open thread?

    It seems to me that if you think a criticism of little Joe Deleo’s chart is appropriate here, it might also be appropriate to give your opinion of the scientific honesty of An Inconvenient Truth, which has very likely had a larger impact on the public’s perception of this issue than little Joe’s graph has.

  1571. Eli Rabett Says:

    The two data points DO NOT tell you how much the temperature has actually increased, because they are subject to measurement error, if nothing else.

  1572. DLM Says:

    This is important:

    The two data points DO NOT tell you how much the temperature has actually increased, because they are subject to measurement error, if nothing else. And all the other data points are not subject to measurement error, or deliberate massaging.

  1573. Allan Kiik Says:

    Last sentence from Cohn & Lins 2005 paper (thanks for this link, Igor):

    “…For example, with respect to temperature data there is overwhelming evidence that the planet has warmed during the past century. But could this warming be due to natural dynamics? Given what we know about the complexity, long-term persistence, and non-linearity of the climate system, it seems the answer might be yes. Finally, that reported trends are real yet insignificant indicates a worrisome possibility: natural climatic excursions may be much larger than we imagine. So large, perhaps, that they render insignificant the changes, human-induced or otherwise, observed during the past century.”

  1574. phinniethewoo Says:

    wabbit

    The two data points DO NOT tell you how much the temperature has actually increased, because they are subject to measurement error, if nothing else.

    – No, there is no measurement error: the thermometers are supposed to be correct.

    – There is variation amongst the 4000 UHI sites, but Dr Bart has a tin ear on that matter. He should give the background of his plot a red colour.

    – There is a “discussion” possible on whether the measure, defined by averaging 4000 UHI sites, is indicative of the calorific system called Earth, but for now we keep that discussion amongst adults who are NOT as thick as a parquet floor.

  1575. John Says:

    phinniethewoo

    Actually, it’s 100% certain that there is an error, as it is not possible to measure anything without error.

  1576. mikep Says:

    Eli, measurement error was covered in my original post and is NOT the issue here.

  1577. HAS Says:

    Hi

    The ratio of name calling to content still seems high, so perhaps something to help us think about VS’s question to Bart, so we can move along.

    First, to recall VS’s question:
    “I want you to say whether, in light of this entire discussion, you feel that this WG1 figure (together with the confidence intervals listed in the legend) is deceiving/non-science, or not.”

    Bart’s response so far:
    “No …. The trendlines help to see the temperature increase for different time intervals, that are different enough from each other so as to be climatically relevant. One could argue whether they are a necessary addition: …. If you take issue with the fact that they are linear fits, then you could just connect the beginning and end points ….

    “The thing is, people are interested in what the change in temp has been over different time intervals. .. In the absence of a better way to do it, a linear trend is often chosen.

    “… is it then better to report the trend with or without errors/confidence intervals? That’s a difficult question….. it would have been better to add a clarification to that effect. Calling the whole figure deceiving is way too strong imho.”

    Hope the abbreviation still leaves the gist of the response.

    Bart, I think the context is important. The IPCC report is ostensibly a summary of the science. The science knows that a linear trend is inappropriate for both the time series and the confidence level estimation, and it knows that there are underlying processes that fit the series better.

    Going backwards from the easier end: the confidence limits on the linear trends are just plain wrong (and have no meaning). End of story. This is not a case for adding a clarification (except perhaps “The confidence limits have been estimated on assumptions that are inappropriate and incorrect, and the correct ones would probably show these trends are insignificant”).

    Calculating the correct confidence limits is easy, and that is what should be shown. The application of TSA to this series was in the literature well before VS came along.

    Surely we can agree on this, or else we have no respect for science whatsoever?

    Now let’s go to the more difficult question of adding in the linear trend lines. Your argument for doing this is about meeting people’s interest in the change of temp, and an assertion that the intervals “are different enough from each other so as to be climatically relevant”. The point of much of this thread is that, in the absence of other evidence, the latter point isn’t so.

    However, let’s go back to the context. The caption to this graphic in the IPCC Report says (inter alia) “Note that for shorter recent periods, the slope is greater, indicating accelerated warming.” Now, since the linear trends being referenced for the shorter periods are probably insignificant when done properly, isn’t the very conclusion that the IPCC is seeking to draw from this graphic just plain wrong?

    So, Bart, I think that if you agree the confidence limits are incorrect, then you have to say that the use of the various trend lines, taken in context, is non-scientific, is misleading, and fails your own test of “meeting people’s interest in change of temp”.

    Fair enough?

  1578. phinniethewoo Says:

    john

    Would you agree that buying higher-resolution thermometers for the 4000 UHI sites would not change the statistics, Dr Bart’s graphs, the reports, or any of the numbers one jot?

    Not a jot.

    So explain why that is.

  1579. manacker Says:

    Bart

    Short-term “blips” in the longer-term “blip” we call the HadCRUT record from 1850 to today are interesting, but maybe they do not tell us much.

    Looking at the record, we see several apparent multi-decadal warming oscillations of 30± years each, to wit:

    1857-1882 (26 years): y = 0.0142x – 0.506, 0.37C linear warming
    1910-1944 (35 years): y = 0.0152x – 0.524, 0.53C linear warming
    1976-2000 (25 years): y = 0.0159x – 0.073, 0.40C linear warming

    In between there were multi-decadal trends of slight cooling:

    1883-1909 (27 years): y = -0.0062x – 0.251, -0.17C linear cooling
    1945-1975 (31 years): y = -0.0006x – 0.118, -0.02 linear cooling

    And at the beginning (1850-1856) and end (2001-2009) there were short periods of cooling, which may or may not be part of a longer trend.

    The overall record showed warming:
    1850-2009 (160 years): y = 0.0041x – 0.465, 0.65C linear warming over period.

    I know that linear trend lines do not tell us much, but they are the standard used by IPCC, even in its “trick chart” from AR4 WG1 Ch.3 FAQ.

    Do these apparent multi-decadal warming oscillations tell us anything? Phil Jones told Roger Harrabin of the BBC in an interview that they are statistically indistinguishable, and the first two occurred before human GHGs could have caused much warming, so there must have been something else at play.

    One thing seems clear. It does not seem reasonable to “extrapolate” the latest warming cycle into the future based on one set of forcing parameters alone, without knowing a) what caused the previous warming cycles and b) what caused them both to reverse into a slight cooling cycle.

    Do you have any thoughts on what could have caused these earlier oscillations?

    Max

  1580. Bob_FJ Says:

    Plazaeme, you responded to mine above :
    Is there any real difference between ~1945 and ~2009 trend analysis? (and what went wrong after 1945?)
    With:
    This may be another way to visualize it: http://cryp.dontexist.org/imags/bart.jpg
    Thanks for that. I’m impressed, and I guess you have some software better than my ‘MS Paint’. It would be good to see it using Bart’s 5-year smoothing, because the 1998 spike is distracting without considering the sharp “corrections” of 1999 and 2000.

  1581. John Says:

    phinniethewoo

    It may make a slight difference to the measured raw values, though not much. I’m guessing here, but I think it likely that most actual measurements are fairly reliable. There are a lot of adjustments etc. done after measurement, some of which are larger than the stated tolerances.

  1582. cohenite Says:

    Bart says this about Fig 1 of FAQ 3 [the wg 1 figure]:

    “No, I don’t think that wg1 figure is deceiving. The trendlines help to see the temperature increase for different time intervals, that are different enough from each other so as to be climatically relevant”

    How are arbitrary 150, 100, 50 and 25 year periods climatically relevant; what physical forcings are they consistent with? Wouldn’t climatically relevant periods look like this:

    http://www.woodfortrees.org/plot/hadcrut3vgl/plot/hadcrut3vgl/from:1880/to:1910/trend/plot/hadcrut3vgl/from:1940/to:1976/trend/plot/hadcrut3vgl/from:1976/to:1998/trend/plot/hadcrut3vgl/from:1998/to:2010/trend/plot/hadcrut3vgl/from:1910/to:1940/trend/plot/hadcrut3vgl/to:2010/trend/plot/hadcrut3vgl/from:1850/to:1880/trend

    That is, the periods are consistent with PDO phase shift.

    And isn’t this question basic to this issue:

    “Is there an AGW forcing which can produce a low frequency deterministic trend?”

    That is, if there is an accelerating temperature trend in the period of AGW, what physical attribute of the AGW factors is producing it?

    Come on, eli, have a shot; restore my faith in you as the black knight of AGW.

  1583. dougie Says:

    Frank Says:
    Bob_FJ Says:
    thanks for your replies, & anyone else after this reply.
    They help a bit, but I am still a bit nervous about how the straw man argument has been used/abused re climate change/warming; but that’s my problem.

    this thread is worth more than my wanderings, so delete as req’d.

  1584. manacker Says:

    Bart

    Further to my query on the multi-decadal oscillations in the HadCRUT temperature record, maybe this chart makes it easier to visualize them.

    But the real question is, what caused them?

    Max

  1585. manacker Says:

    cohenite

    Your temp chart with PDO oscillations is interesting.

    It seems to fit generally with the chart showing the observed multi-decadal warming and cooling cycles in the HadCRUT record, which I just posted.

    But there is still something missing.

    Max

  1586. KD Says:

    manacker:

    Thank you for doing the “trend” plots (and more) that I had requested from Bart. I didn’t really expect him to do them. I agree, a key question is: if CO2 is the key cause, then why is the trend from 1910-1944 the same as 1976-2000? And why didn’t we hear about AGW in 1920? 1930? 1940?

  1587. cohenite Says:

    But there is still something missing.

    Max

    Well, what do you think it is, Max?

  1588. Bob_FJ Says:

    Ibrahim, you asked in part:

    Bart, Could you please explain what was the cause of the rise in temperatures from about 1900 until 1945 and the cooling from about 1945 until 1960.

    Bart, you responded with:

    There are different climate forcings at work simultaneously; some natural, some man-made. In the beginning of the 20th century, increased solar irradiance, a lack of strong volcanism and GHG were the primary positive (i.e. warming) climate forcings. In the middle of the 20th century, the strong increase in aerosol precursors (aerosol exerts a negative (i.e. cooling) climate forcing) caused the temperature to remain more or less stable. Since the 70s, aerosol burden didn’t increase as much anymore while GHG did; GHG became the dominant climate forcing.

    However, if you look at the GISS net forcings compared with HADCRUT, there is no explanation for the cooling from around 1945 through to 1960. Furthermore, it is markedly uncorrelated with the sharp warming, then plateau, and then sharp cooling on each side of 1940. The period of slighter cooling from 1960 to 1975 might be explained by the Agung eruption, but then how do you explain the next 20 years of sharp warming together with higher volcanic activity? However, GISS (Sato et al) admit that prior to 1990 (and Pinatubo), the volcanic forcing estimates are subject to large uncertainty (25% to 50%). And Pinatubo, El Chichon, Krakatoa… well known, but Agung?

  1589. DLM Says:

    HAS,

    Very well supported and reasonable presentation. I don’t see any holes in your case, but what do I know.

    “So, Bart, I think that if you agree the confidence limits are incorrect, then you have to say that the use of the various trend lines, taken in context, is non-scientific, is misleading, and fails your own test of “meeting people’s interest in change of temp”.”

    Maybe he meant to say “meeting some people’s interest in change of temp.” But I don’t want to pretend to speak for Bart. I am sure he will answer for himself.

    I really think you are a good guy, and a good sport, Bart. Work with us here.

  1590. cohenite Says:

    Perhaps, while I am waiting for Max to suggest what is missing from the PDO correlation with temperature trends over the last 160 years, I may take the liberty of suggesting that, in respect of the overall upward trend in temperature despite the PDO oscillations [sic], ENSO asymmetry adequately explains the [slight] temperature increase over that period; see the David Stockwell comment on the McLean et al paper for an overview of such non-linear asymmetry:

    Click to access 0908.1828v1.pdf

    If such asymmetry is not sufficient to explain the temperature trend [without recourse to AGW], then the underwriting of such asymmetry by the sun provides a deeper explanation of the trend; see Glassman:

    http://www.rocketscientistsjournal.com/2010/03/sgw.html

  1591. Bob_FJ Says:

    Cohenite Reur April 9, 2010 at 00:03

    You wrote in part, giving a link :

    “…That is, the periods are consistent with PDO phase shift…”

    However, it does not show PDO for me, whereas this link using the same facility does, I hope:

    It looks to me like the PDO is not in phase with HADCRUT after ~1975, but it does look good before then. Over at RealClimate, David B Benson sees AMO + volcanic forcing as significant, and I’ll give you a link after a comment I’m about to post there. I doubt that taking one oscillation alone is very meaningful, and how can they be weighted? (calibrated to global average surface T).
    My initial glance-read reaction to the paper you cite on ENSO is that I don’t get the idea of cumulative ENSO warming. For instance, look at 1998: there appears to be a “correction” following in 1999 + 2000. So why is there not also a cumulative cooling?

  1592. cohenite Says:

    Bob; the post 1976 temperature and PDO phase shift situation is summed up in this paper;

    Click to access 0907.1650v3.pdf

    If I understand you about “cumulative cooling”: the asymmetry in favour of the warming cycle would prevent an overall cooling over the post-1850 cycle, and this really is the beginning of the modern warming, not some arbitrarily selected, pro-AGW point in the 20thC. If you mean cumulative cooling post 1997 [the date of Stockwell’s down break] / 2002 [the date of the down break postulated by Tsonis], doesn’t every temperature index except GISS show cooling within this period? The point about ENSO asymmetry is that the internal cooling within the -ve PDO does not equal the warming during the +ve PDO; this is the argument against stationary natural factors. In respect of the trend from AGW:

    “Is there an AGW forcing which can produce a low frequency deterministic trend?”

    That is, some AGW factor which can explain the upward trend since 1850, i.e. the yellow trend line here;

    http://www.woodfortrees.org/plot/hadcrut3vgl/plot/hadcrut3vgl/from:1880/to:1910/trend/plot/hadcrut3vgl/from:1940/to:1976/trend/plot/hadcrut3vgl/from:1976/to:1998/trend/plot/hadcrut3vgl/from:1998/to:2010/trend/plot/hadcrut3vgl/from:1910/to:1940/trend/plot/hadcrut3vgl/to:2010/trend/plot/hadcrut3vgl/from:1850/to:1880/trend

    I can only see PDO [with ENSO asymmetry] powered by the sun as being up to the job; post 1983, with a declining or flat sun, the Pinker et al paper and cloud cover variation seem sufficient to explain any warming.

  1593. manacker Says:

    cohenite

    Sorry for the delay in responding.

    From what I can read, it appears that the PDO swings occur in 20 to 30 year cycles, similar to the observed multi-decadal climate oscillations, so it makes sense to assume that there may be a connection. It is unclear to me what drives the PDO cycles, although it is obvious to me that it has nothing to do with atmospheric GHG levels.

    Solar activity was at an all-time high (for several thousand years) in the 20th century, but has dropped off sharply as solar cycle 24 has had a hard time getting started. The peak Wolf numbers for solar cycles 19 through 23 (1955-2007) were 68% higher than those for solar cycles 10 through 14 (1858-1902). Has this impacted climate and, if so, how?

    Are the PDO cycles somehow connected with ENSO?

    Is the sun driving this all?

    And then there is the underlying warming trend (0.041C per decade), which some attribute to a gradual recovery from the LIA. What is driving this?

    Maybe GHGs also play a part, although it appears from the observed data that this may not be a major factor, despite the claims by IPCC (and Bart’s belief).

    Anyway, I think the data show that there is much more about our planet’s climate that we do not know than there is that we do know, and that is why I find this thread so fascinating.

    I recall Massim Taleb’s statement in his book The Black Swan that in making forecasts for the future, what we know is not nearly as important as what we do not know.

    And I believe that this is the fundamental weakness of the science supporting the IPCC AR4 WG1 and SPM 2007 reports, which myopically fixate on anthropogenic factors while essentially ignoring all the rest.

    Max

  1594. manacker Says:

    Sorry for typo: it’s Nassim Taleb, of course. A good read. To be recommended to all “climatologists” (and others interested in “predictions”).

    Max

  1595. cohenite Says:

    Max, as well as solar variation through TSI, the sun’s role in shaping cloud variation seems to be the other dominant factor in recent warming; that variation in the context of ENSO is discussed in a recent Meehl and Arblaster paper;

    Click to access meehl_solar_science_2009.pdf

  1596. plazaeme Says:

    Bob_FJ:

    … I guess you have some software better than my ‘MS Paint’.

    “The Gimp”, sort of a free (and open source) “Photoshop”.

    It would be good to see it using Bart’s 5-year smoothing because the 1998 spike is distracting without considering the sharp “corrections” of 1999 & 2000.

    Here you have (click)

    For joining the lines I used HadCRUT3 and NCDC, since the GISS slope is different. And I took the unsmoothed lines out of the transparency, just to clarify the image.

  1597. cohenite Says:

    Is there a contest to see who can present a graph showing the temperature equivalence between the 1st and 2nd halves of the 20thC? There’s a lot to choose from, but even GISS does the trick:

  1598. VS Says:

    Hi guys,

    I’m taking a (long) break, and this time for real. I think the reasons for this should be glaringly obvious.

    Thank you all for your (constructive) comments, support (both private and public) and time :)

    If anybody needs to contact me for some reason, you can do so at vs [dot] metrics [at] googlemail [dot] com.

    All the best,

    VS

  1599. phinniethewoo Says:

    Dr Bart
    Now you broke it… Now what?
    Producing graphs, crisp & clear, I suppose…

    Has,
    You keep on talking about graphs in the 3000 pages of the IPCC:
    We learnt that (visually appealing) graphs are NOT the way forward.
    If you want to enjoy the radio, do you look at the graphs it produces on an oscilloscope?

  1600. Nigel Brereton Says:

    VS,

    You’ve earned a well-deserved break, but don’t forget that there is unfinished business here. Hope that you can appreciate that some of us are learning so much, not just from your explanations but also from the responses (or not).

    Enjoy
    NB

  1601. AndreasW Says:

    “we gotta get rid of the medieval warm period”

    “hide the decline”

    “the question is *not* whether the temperature series is stationary or not”

    “No, I don’t think the WG1 graph is deceiving”

    I don’t know which one of those one-liners is my favorite. It’s a close call. One thing is for sure: those climate guys never cease to amaze me!

    VS

    If you are leaving now, I will thank you for making this thread the most exciting in the climate blogworld for years. Scientifically you wiped the floor with the warmers, and did so with a smile. Pure entertainment!

    I hope you show up some place in climate blogworld. Your contribution would really make a difference.

    cheers:)

  1602. Paul Carter Says:

    Dang, I was hoping VS would be around for longer. I think econometric analysis will improve the accuracy of the neural networks that are used in some of the IPCC-cited climate models.

  1603. Bob_FJ Says:

    Plazaeme,
    Wow!
    Reur merging of ~1940 versus the recent plateau using Bart’s 11-year smoothing.
    Wow!
    I don’t have time now but will comment more tomorrow morning my time in Oz, (Melbourne)
    Wow!

  1604. DLM Says:

    VS,

    Ah, so you are going to pull an Eduardo on us. Did you really expect Bart to accept your revelation, and kiss your feet? This looks a lot more like petulance than frustration.

    Some of us thought you had something important to say here. Apparently it isn’t that important to you.

    Take your ball and go home. Leave the field to Bart, and sod et al.

    Thanks for your hospitality Bart. Please try to do something to clean up climate science. It’s not too late.

  1605. Marco Says:

    @AndreasW:

    Please provide a direct link to what you claim to be a quote by a ‘warmist’:
    “we gotta get rid of the medeival warm period”

    I know you are not able to do so. All you have is a claim by David Deming, who, when challenged to provide proof, never provided that evidence.

  1606. IanH Says:

    @DLM

    I can well understand VS getting frustrated. The settled state in this thread is that the temp series is I(1), and we still have people trying to draw a trend line, or justify one for some subset of the data. You need to get over the straight-line gig and we can all then move on.

  1607. KD Says:

    Marco

    You will find that the quote “we have to get rid of the Medieval Warm Period” is attributed to an email sent to Dr. David Deming by a member of the IPCC. Here is one link:

    http://www.canadafreepress.com/index.php/article/16163

  1608. KD Says:

    Marco

    ps – even if that quote isn’t accurate, does it really change AndreasW’s argument that much?

  1609. phinniethewoo Says:

    It cannot be denied there is a propensity for trying to deal “mortal blows” to the MWP concept, in alarmed circles..

    The science in that? wow.

  1610. DLM Says:

    IanH,

    And what planet did you just arrive from? There is no settled state on this thread. And I am not among those who are happy drawing spurious trend lines to use as visualization tools.

    Bart’s side has dredged up enough support from plausibly authoritative sources to convince themselves that they are perfectly justified in continuing to do their thing. I think they are delusional.

    OK, VS is frustrated. So he takes his ball and goes home. Case closed.

  1611. JvdLaan Says:

    Marco

    ps – even if that quote isn’t accurate, does it really change AndreasW’s argument that much?

    Well, when seeing that link…

    It has Loehle 2007 in it! And we all know that… well… never mind (the bollocks) :-)

  1612. manacker Says:

    cohenite

    Thanks for link to Meehl et al. study.

    Max

  1613. manacker Says:

    Can’t really blame VS for leaving us (for now, at least). He has given us all (even those who are not expert in statistical analysis) something to think about, but it appears that “thinking about something” may not be everyone’s cup of tea.

    For me he has raised questions regarding the statistical robustness of the CO2 temp correlation. These questions have not been answered here as yet.

    It would appear to me that it is up to those who support the AGW premise to demonstrate that the correlation is statistically robust, not up to those skeptical of this premise to demonstrate that it is not.

    So, with or without VS, the discussion should continue.

    And I am sure he will be back once it makes sense to do so.

  1614. AndreasW Says:

    Marco

    Who cares who said “we gotta get rid of the medieval warm period”? They certainly did. At least until Mr McIntyre came along and showed them that statistics isn’t a PR agency but a tool for scientists. In my eyes VS did the same thing with GISStemp, but now he’s gone and the thread is dead. Science is gone and all that is left is the political mudslinging. Well, it was a fun ride.

    Cheers

  1615. Marco Says:

    @IanH:
    Yes, I know, I already pointed to the *claim* by David Deming. He’s *claimed* this was sent to him in 1995, by a member of the IPCC. When asked who… silence. When asked to provide evidence… silence. Jonathan Overpeck has been mentioned as the source, but in the hacked e-mails he not only proclaims innocence, he appears not even to know Deming. So why send him an e-mail? Worse still: Overpeck has several articles past the 1995 date on the MWP.

    And while it does not change AndreasW’s argument, it does change his credibility: he clearly is incapable of objectively looking at any statement, and looks for any possibility to interpret such statements as negatively as possible.

  1616. phinniethewoo Says:

    Alternatively, ALL the emails of these fancy institutes could be opened up to us, if there were any integrity left in these red ratholes. The only thing we would recognise in these emails anyway would be hard-working people vetting for “pure science”, right?
    Right.

    VS, (Alex and others) thanks for these amazing posts!
    Hope to find you back soon enough in the blogosphere.

  1617. manacker Says:

    Marco

    In assuming (without any evidence to support your assumption) that Deming is lying, you are missing the real point here.

    Mann’s hockey stick has been comprehensively discredited.

    The historically well-documented MWP has been confirmed to be both a bit warmer than today’s climate and global in scope, by many studies from all over the world, using many different methodologies.

    I can cite you at least 20, if you are interested.

    Mann tried to kill the MWP, but was unsuccessful.

    IPCC made the foolhardy and later embarrassing decision to embrace the hockey stick without first checking its authenticity, and to plaster it all over their TAR, most notably in a masterful piece of chartmanship (Figure SPM-10b: Variations of the Earth’s surface temperature: years 1000 to 2100), where Mann’s phony reconstruction is spliced onto the 20th century surface record (hiding the decline?) and then onto various model scenarios for the 21st century (shooting up to the ceiling like the scary chart in Al Gore’s Oscar-winning sci-fi show).

    Only a few die-hards still give this bit of “climate change denial” any credence today.

    Are you one of these dinosaurs?

    (I hope not.)

    It’s dead and buried. Let it R.I.P.

    Max

  1618. Bart Says:

    Hockeystick discussion is for the open thread. Take it over there.

  1619. manacker Says:

    Bart

    Got it. No more talk of hockey stick.

    Max

  1620. Silent Bob Says:

    Can’t blame VS; pearls before swine.

    Soon this little blog will fall back into oblivion where AGW rules.

    It was nice while it lasted, and it gave this little blog its 20 minutes of fame.

  1621. Bob_FJ Says:

    Plazaeme
    Thanks again for your graphical merging of ~1940 versus the recent plateau using Bart’s 11-year smoothing.

    It’s also interesting that the GISS values comparatively minimise up and down peaks, thus producing a more relentless temperature rise. It’s especially remarkable that their 1998 El Nino is significantly depressed compared with the others (including satellite versions), and that instead of 1998 being the hottest year, they make it 2005. (and the current plateau is less obvious)

    I’d like to consolidate my earlier comment by appending your graph and presenting it again here and over at RealClimate and Tamino. It would be nice to have some input from statisticians. If that is OK with you, how would you like to be cited?

    Thanks also for the tip on the free “photoshop”-like software.

  1622. plazaeme Says:

    Bob_FJ:

    Thanks a lot, but I don’t need to be cited. It was your asking (your idea), and merging the images is a child’s task. But it’s really nothing new. It must have been done a lot of times, and with quite a bit more care than my ten minutes.

    Thanks again.

  1623. HAS Says:

    phinniethewoo says on April 9, 2010 at 11:36

    “You keep on talking about graphs in the 3000 pages of the IPCC:
    We learnt that (visually appealing) graphs are NOT the way forward.”

    Pay attention phinnie – it’s the equations not the pictures (although Google tells me the Fig is referenced about 13 times in the report).

  1624. dhogaza Says:

    You’ve earned a well deserved break but don’t forget that there is unfinished business here.

    Yes, there is. VS left without ever telling us where B&R went wrong in their “proof” that their statistical analysis, which VS endorses, overturns so much of the last 100+ years of physics.

    I don’t think he’s smart enough to do it, personally.

  1625. Dave McK Says:

    Kudos to Bart for presiding with such restraint and politesse over this thread.
    Likewise, respect to VS for the same.

    Now it’s time for me to go home, too. Circus was interesting, but Bohr killed Arrhenius long ago and the egress awaits.

    Best regards to all. I hope you all figure it out.

  1626. Ray Donahue Says:

    Thank you Bart and thank you VS. This has been a very enlightening discussion. Regards, Ray

  1627. Peredur Says:

    VS inserted a fundamental key into a strong lock. It is still there …

  1628. Marco Says:

    @Dave McK:
    Are you paying attention at scienceofdoom at all??

    @Max (sorry Bart, had to do it here):
    I already know the graphs that show the MWP globally was a heterogeneous event, with some places being colder and others warmer. Why not make a synthesis of all the graphs? Ah, that’s right, it defeats the purpose of confusing lay people.

    (and yes, I am calling Deming a liar)

  1629. Marco Says:

    @AndreasW:

    Again you just claim some said this. Once again I challenge you to provide proof. Try it, just for once.

    (Oh, and what exactly did VS do to GISStemp, other than show it contains a unit root? Notably, the same goes for the absolutely deterministic output of GISS ModelE. Conclusion?)

  1630. Kweenie Says:

    I see the trolls are crawling out from under their stones. This blog is back where it was before.

    Marco, I challenge you, just for once, to provide your rebuttal of VS’s messages. Until now you have completely failed to do so.

  1631. Shub Niggurath Says:

    Marco: I know you guys are happy that ModelE ‘contains’ a unit root.

    But all that tells us is that ModelE is able to replicate that aspect of the temperature in the natural climate system.

  1632. Allan Kiik Says:

    VS did only one thing until now: he has provided some real evidence that the GISS temperature record contains a unit root, and that if we want to relate some other variables to it, the proper formal method is cointegration analysis. Can we agree on this?

    Looks like some people don’t want to see what kind of results this method can provide us. But why? If we know with 90% probability that CO2 is the main driver of climate change, then correctly applied cointegration analysis must show this, no? So there’s nothing to lose, right?

    May I offer a simple engineer’s view on this: some 10-20 years ago I was busy designing and building some special measurement instruments for calibration labs, mainly high-res temperature (with platinum resistance thermometers) and water flow (electromagnetic method, using Faraday’s law) measurements. Temperature is easy: it’s always signal + white noise, and most simple filtering algorithms work just fine. But water flow rate is a different beast; sometimes it behaves well, just like the platinum thermometers, but on some occasions Gaussian filtering does not work very well and we get some spurious signals on the output of the filter. If I understood VS correctly (don’t be too harsh on me if not), there’s a way to detect such behaviour in data by applying some time series analysis procedures. Looks really useful for some instrumentation engineering tasks.
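
    For anyone who wants to give it a try, here is a minimal sketch of such a unit-root check (my own illustration, not VS’s actual code), assuming Python with numpy and statsmodels; the simulated series merely stand in for real measurements:

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(42)

    # A random walk (has a unit root) versus white noise around a level (does not).
    random_walk = np.cumsum(rng.normal(size=500))
    stationary = rng.normal(size=500)

    for name, series in [("random walk", random_walk), ("stationary", stationary)]:
        stat, pvalue, *_ = adfuller(series, autolag="AIC")
        verdict = "cannot reject a unit root" if pvalue > 0.05 else "rejects a unit root"
        print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f} -> {verdict}")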

    How is it possible that some climate scientists are afraid to give it a try?

  1633. phinniethewoo Says:

    Marco

    Could you elaborate on the deterministic character of ModelE?

    Let me clarify my question by skipping a century of scientific wisdom and pointing to Poincaré’s exposition of the three-body problem, and the effect of non-linearities. Where he invented topology for us?

    ModelE is all linear, right?
    Right-oh.

  1634. phinniethewoo Says:

    A good introduction for you: NN Taleb, The Black Swan, chapter 11.
    Chapter 11: that’s where the IPCC’s climate “science” belongs, btw.

  1635. phinniethewoo Says:

    Allan,

    Quite agree.
    I would even insist that there should be a strong correlation between CO2 and true temperature.

    If none is found, something has to give:
    = the temperature record was compromised.

    [edit]

    TSA, from what I understand so far, is by nature a much more sophisticated method than “filters”, as TSA is an offline formalism: it allows for many options to be used, and for careful speculation and reiteration, the many tests you saw VS deploying.

    Filter theory comes from radio and analog electronics, where some noise needs to be cleared out cheaply with a few components.

    In this respect I do not understand why, for example, I read in the Hockeystick saga that “scientists” are using Butterworth filters to manipulate graphs. Surely TSA and VS’s method is far better.

    With all the computer power at hand nowadays, I think it is a great idea to start to use this “online” as well: rewrite all patents on measurement and throw in TSA! I’ll carry your suitcase to Tahiti :)
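
    For reference, this is roughly what such a Butterworth smooth looks like in practice; a minimal sketch assuming Python with numpy and scipy, with an invented trend-plus-noise series and an arbitrary decadal-ish cutoff (nothing here is taken from the hockey-stick papers):

    import numpy as np
    from scipy.signal import butter, filtfilt

    rng = np.random.default_rng(0)
    years = np.arange(1900, 2010)
    series = 0.005 * (years - 1900) + 0.15 * rng.normal(size=years.size)  # trend + noise

    # 4th-order low-pass Butterworth; Wn is the cutoff as a fraction of the Nyquist
    # frequency, so 0.1 with annual data passes variations slower than ~20 years.
    b, a = butter(N=4, Wn=0.1)
    smoothed = filtfilt(b, a, series)  # forward-backward filtering avoids phase lag

    print(smoothed[:5].round(3))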

  1636. DLM Says:

    Allan Kiik says: “VS did only one thing until now: he has provided some real evidence that the GISS temperature record contains a unit root, and that if we want to relate some other variables to it, the proper formal method is cointegration analysis. Can we agree on this?”

    No!

    The alleged climate science consensus is dogma. It’s a crusade that they have to win, to save the world and their paychecks. Have you ever seen them give an inch? On anything?

    Bart appears to be among the most open-minded and tolerant of their soldier-clergy, but he ain’t budging.

    There is no point in continuing this discussion here. If it took this long to get to the impasse where the discussion/dispute currently resides, then it would take about four years to get to a similar Mexican standoff, in a fight over a formal cointegration analysis.

    If VS feels he has the chops, he should write a paper and get it published, somewhere. If he wants to continue his very interesting and potentially important exposition in the blogosphere, he should probably do so on a forum that is better suited to his temperament. WUWT, or ClimateAudit would definitely be venues where he would attract a crowd that contained more adoring fans, and fewer harsh mindless critics.

    Anyway, in the grand scheme of things, this is a sideshow. The dogma is crumbling on its own. Too many ‘Gates’. See Der Spiegel. See recent public opinion polls. See Copenhagen.

  1637. manacker Says:

    phinniethewoo

    Yeah. Taleb’s The Black Swan is a great read. It should be compulsory reading for all IPCC authors and reviewers.

    I particularly like the Yogi Berra quotes (end of Ch. 9) and what Taleb calls the

    “Berra-Hadamard-Poincaré-Hayek-Popper conjecture, which puts structural, built-in limits to the enterprise of predicting”

    Good stuff.

    Max

  1638. manacker Says:

    DLM

    You may be right that a discussion of “climate science” based on its merits is almost impossible to carry on because of the dogmatic belief in the “overwhelming consensus” and the inability of true “believers” in this consensus paradigm to see anything that lies outside the paradigm.

    VS raised the questions regarding the statistical robustness of the CO2 temp correlation and showed why these questions were valid.

    Despite a lot of back and forth, the basic questions were not answered.

    Yet it is critical for the validity of the CO2 temp causation premise that there be a robust CO2 temp correlation.

    The AGW faithful here have been unable to show that the correlation is statistically robust, so the questions raised by VS remain unanswered and the case for causation remains weak.

    They have also been unable to explain earlier warming cycles, which are statistically indistinguishable from the late 20th century warming period, but occurred before there could have been any significant CO2 impact.

    Yet they attribute the late 20th century warming principally to human forcing, because the models cannot explain it any other way.

    Without even going into detailed statistical analysis, this logic is flawed.

    But you are correct in predicting that it will remain a “Mexican standoff” between the AGW faithful and those who are rationally skeptical of the AGW paradigm.

    And, as you say, the debate will be settled elsewhere as the dogma crumbles on its own.

    Max

  1639. Bart Says:

    Manacker,

    Did it ever occur to you to apply skepticism in more than one direction?

    AGW does not hinge on a correlation between CO2 and temp. There is more than just CO2 that can affect temp. You’re making a claim analogous to claiming to have falsified gravity by pointing at a bird in the sky.

  1640. DLM Says:

    Bart says: “AGW does not hinge on a correlation between CO2 and Temp. ”

    Now that’s bizarre. If you cannot show that CO2 causes temperature to at least noticeably increase, then your settled dogma is nothing more than a quaint theory. We know that there are other things that affect temperature. So what? If you cannot ‘robustly’ establish that CO2 will make it get warmer by some worrisome amount, despite the other influences on temperature, then what is it that you are whining about?

    Skepticism should indeed go in more than one direction. It is the hallmark of the AGW crowd that the use of visualization tools by skeptics is loudly condemned as disingenuous and bordering on genocide, while blatant and willful misrepresentation of science by the AGW doomsayers is just fine, if it advances the agenda.

    Will you answer the question Bart? What about that very influential Academy Award winning film, by the Nobel laureate? You know the one I am talking about. Did you see that joker use an enormous bogus chart and construction equipment to insinuate correlation/causation between CO2 and temperature? Is that unimportant Bart?

    Your side is losing because people can see the dishonesty and lack of proof, period.

  1641. DLM Says:

    PS

    What about that Indian ‘climate scientist’ shyster, who recently admitted that the Himalayan-glaciers-melting-by-2035 BS was deliberately left in the IPCC report, despite the fact that it was known to be BS?

    Don’t bother to answer. I am out of here.

  1642. John Says:

    Bart

    Can you explain why a trace GHG was chosen as the cause well before there was even a problem?!

  1643. Tony Says:

    Bart, did you really mean to say:

    “AGW does not hinge on a correlation between CO2 and Temp”?

    Given that AGW is an acronym for Anthropogenic Global Warming, are you saying that the whole carbon thing is irrelevant?

    So perhaps you could tell us what Humans are doing to cause Global Warming, other than emitting carbon dioxide into the atmosphere?

  1644. sod Says:

    So perhaps you could tell us what Humans are doing to cause Global Warming, other than emitting carbon dioxide into the atmosphere?

    emitting methane is the obvious answer.

    there are also albedo changes…

  1645. Tony Says:

    Sod,

    Oh dear! So even you are telling us that the AGW=CO2 idea is wrong?

    Shouldn’t you then inform the IPCC that Carbon Trading ought to stop, that Carbon taxes are wrong, and that Carbon footprinting is pointless? And that they should advise governments to institute albedo trading and methane taxes instead?

  1646. Willem Kernkamp Says:

    In reply to:
    Allan Kiik Says:
    April 10, 2010 at 14:35

    “VS did only one thing, until now, he has provided some real evidence that GISS temperature record contains unit root, and if we want to relate some other variables to this, proper formal method is cointegration analysis. Can we agree on this?”

    I think we do agree on this. I would like to move on to discussing the Beenstock & Reingewertz paper. Presumably, they did do the cointegration right. In addition, they have two surprising findings:

    “Importantly, however, the long-run effect of rfCO2 in levels is zero. If instead of a permanent increase in its level, the change in rfCO2 were to increase permanently by 1 w/m2, global temperature would eventually increase by 0.54 C.”

    I have been looking for an explanation of the second surprise. Why on earth would a permanent increase in the rate of CO2 release cause a change in temperature? Almost certainly this parameter stands in for something else. At first, I thought it might be ocean temperature. A warmer ocean can hold less CO2; hence the uptick in CO2 AFTER an ice age ends. However, the release of CO2 may not be enough (perhaps just 10 ppm per deg C). Otherwise, the ocean has a lot of CO2 to release, so a one-degree step in temperature would cause a steady release of CO2 for a long time. No surprise there from the point of view of physics. And, obviously, ocean surface temperature is very closely bound up with climate. Something to smile about for those who don’t like statisticians. Once again, they would have reversed cause and effect!
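
    To make that second finding concrete, here is a toy numerical sketch (my own illustration of the quoted claim, not B&R’s code), assuming Python with numpy; the long-run gain of 0.54 comes from the quote above, while the step year and relaxation time constant are arbitrary:

    import numpy as np

    years = 200
    rf = np.zeros(years)
    rf[50:] = np.arange(1, 151) * 1.0  # after year 50, rfCO2 grows by 1 W/m2 every year

    d_rf = np.diff(rf, prepend=0.0)    # the *change* in forcing steps up permanently
    temp = np.zeros(years)
    for t in range(1, years):
        # first-order relaxation toward the long-run response of 0.54 * d_rf
        temp[t] = temp[t - 1] + 0.2 * (0.54 * d_rf[t] - temp[t - 1])

    print(temp[[49, 55, 70, 199]].round(3))  # ~0 before the step, approaching 0.54 after

    In such a toy model a one-off permanent rise in the level of rfCO2 (a single spike in d_rf) produces only a transient blip in temperature, matching the first finding that the long-run effect of the level is zero.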

    Bart, perhaps this is a good breakpoint for starting a new chapter. With a shorter and fresher discussion, we may induce experts like professor Beusch to participate?

    Will

  1647. Bob_FJ Says:

    Cohenite
    Thanks for your comment. You’ve given me plenty to think on and read.

    Meanwhile, you may be interested in an older discussion on AMO/PDO/ENSO, with a recent comment from Patrick 027 over at RC, and my nearby response below it at 825.

  1648. sod Says:

    Oh dear! So even you are telling us that the AGW=CO2 idea is wrong?

    Shouldn’t you then inform the IPCC that Carbon Trading ought to stop, that Carbon taxes are wrong, and that Carbon footprinting is pointless? And that they should advise governments to institute albedo trading and methane taxes instead?

    i am pretty shocked by your lack of knowledge, especially on this topic, with a detailed discussion of statistical finesse.

    we are concerned about “CO2 equivalents”.

    http://en.wikipedia.org/wiki/Carbon_dioxide_equivalent

    land use changes are included in basically all carbon trading programs.

  1649. Tim Curtin Says:

    Sod: I am shocked by your own lack of knowledge. “CO2 equivalents” are a very dubious concept, much like adding apples and oranges. There is no scientific basis for the claim that CH4 is 22 or whatever times (Singer et al 2008 claim 75) a more potent greenhouse gas than CO2, especially as CH4 soon dissipates in the atmosphere. Even AR4 WG1 shows a declining rate of growth of atmospheric CH4 (Fig. 2.4), despite world cattle stocks having increased by 40% since 1961 (FAO Prodstat 2010).

    But then sod is evidently one of those who believe that livestock in general create, by way of CO2 and CH4, more than was contained in or derivable from what they have eaten. The truth is that neither CO2 nor CH4 can be shown to have any statistically significant impact on climate.

  1650. Bart Says:

    DLM, Tony,

    I’ll repeat: There are more factors than just CO2 that can affect temp. The logical consequence is that if multiple of those factors are changing at the same time, no close correlation between temp and one of those factors is expected.

    Food intake affects your weight. But if I’m on a backpacking trip I eat a helluva lot more than usual without gaining weight. Does that mean that food intake does not affect weight? This is not rocket science, but just basic logic.

    John,
    The reason why GHG were expected (not ‘chosen’) to lead to a temp increase before it was observed to indeed be the case is radiative physics.

  1651. WAM Says:

    Bart,

    All the discussion here with VS was about the modelling of the temperature record only, as measured: is a polynomial model justified for modelling the temperature time series (so you could do a linear fit), or is some other (e.g. ARIMA) model required (so an OLS fit to a linear model is spurious)?

    But I have seen in the literature (Vostok cores) that CO2 LAGS temperature. Could you show us references that CO2 has LED temperature during the Holocene optimum or the Medieval optimum, or when getting out of the LIA? Or were there other factors at work?

    Your favourite bird-gravity example: you know that it was once “proved” that flight is physically impossible – using a bad model of potential, irrotational fluid flow. Only after observing the real physics did Kutta and Joukowski introduce a model with vorticity, mimicking the trailing-edge flow separation.

    So the question: how well can the GCM models replicate observed global meteorological phenomena? Namely, the meridional exchange of cold polar air and hot tropical air?

    You might be taken aback, but scientists are not afraid to look into heresies :). Therefore it might be educational to look into the physical description (a hypothesis supported by observations – satellite photos, synoptic maps, etc.) of the global circulation presented by the late prof. Marcel Leroux (his book “Meteorology – Global Warming-Myth or Reality-The Erring Ways of Climatology” and some more technical textbooks).
    You might consider a different view on quite interesting subjects:
    – entering into glacial periods
    – entering interglacials
    – how 2 km+ glaciers could be created during glacials (and melt)
    – what happened to CO2 during glacials and interglacials
    – mechanisms of ENSO and other oscillations
    – why it is warm and rains in Vancouver and Seattle
    and others

    Maybe you know that the polar regions are considered the drivers of weather (at least). During WW2 the Germans and the Allies sent ships to monitor the weather there to predict the weather for air operations. The same happened during the Cold War.
    Look at satellite pictures of the Earth and you will see big cyclonic structures moving from the poles toward the equator. These meridional exchanges transfer heat from the tropics toward the poles. These (dynamical) patterns result in something called a climate.

    To finish, consider models of circulation backed by real observations of heat (vapour) transfer, not some averaged temperature. And think about why 2-3 day weather prediction is OK in most cases (if you know the boundary and initial conditions from measurements) – and, more importantly, when it fails (because you have not accounted for some cold polar air transport not captured & measured in time for the prediction).

  1652. Bart Says:

    WAM,

    CO2 and temperature affect each other in both directions, like chicken and egg. See the fourth point in this older post. But unlike at the end of an ice age, we know that currently the excess CO2 is coming from human emissions rather than acting as a feedback on temp (which can also be seen from the magnitude of the current CO2 peak, which is way higher than expected based on a feedback on temp). Also note that CO2 did affect the eventual temp change when getting out of an ice age; without a substantial effect of CO2 you’d have a hard time explaining the large temp change from glacial to interglacial.

  1653. Shub Niggurath Says:

    Sod:
    “we are concerned about “CO2 equivalents”….

    CO2e was a clever thing pushed on us by the propagandist Tom Bowman and his buddies including Michael Mann. Let us not bring that into this argument.

  1654. DLM Says:

    Bart says: “I’ll repeat: There are more factors than just CO2 that can affect temp. The logical consequence is that if multiple of those factors are changing at the same time, no close correlation between temp and one of those factors is expected”

    Maybe a little polynomial cointegration would help you with that Bart.

    Look, you have the entire history of the earth to work with. Find a period, or better two, where you can prove a close correlation between temperature and CO2. You can find periods where other factors closely correlate with temperature, can’t you Bart? Why not CO2? Is CO2 such a subtle and mysterious driver of climate change that you can’t show some noticeable effect on temperature, AT SOME #$@%^&* TIME BART?

    Climate science is just too complicated for the climate scientists. And they can’t hide that fact anymore.

  1655. Pat Cassen Says:

    DLM, you can look this stuff up, you know. Here are some places to start; there’s lots more.

    The Application of Size-Robust Trend Statistics to Global-Warming Temperature Series
    Thomas B. Fomby and Timothy J. Vogelsang
    Journal of Climate 2002; 15: 117-123
    http://journals.ametsoc.org/doi/abs/10.1175/1520-0442%282002%29015%3C0117%3ATAOSRT%3E2.0.CO%3B2
    …recent studies have pointed out that strong serial correlation (or a unit root) in global temperature data could, in theory, generate spurious evidence of a significant positive trend…A serial-correlation–robust trend test recently was proposed that controls for the possibility of spurious evidence due to strong serial correlation….The test…provides strong evidence that global temperature series have positive trends that are statistically significant even when controlling for the possibility of strong serial correlation…

    Human activities and global warming: a cointegration analysis
    Hui Liu, Gabriel Rodríguez
    Environmental Modelling & Software 20 (2005) 761–773
    http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VHC-4CRYB10-4&_user=5977216&_coverDate=06%2F30%2F2005&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1289671075&_rerunOrigin=scholar.google&_acct=C000064007&_version=1&_urlVersion=0&_userid=5977216&md5=04b9f450385694040aa6b4c1702c7818
    Using econometric tools for selecting I(1) and I(2) trends, we found the existence of static long-run steady-state and dynamic long-run steady-state relations between temperature and radiative forcing of solar irradiance and a set of three greenhouse gases series…The estimates of the I(1) and I(2) trends indicate that they are driven by linear combinations of the three greenhouse gases and their loadings indicate strong impact on the temperature series…

    Emissions, concentrations, & temperature: a time series analysis
    Robert K. Kaufmann, Heikki Kauppi and James H. Stock
    Climatic Change (2006) 77: 249–278
    http://www.springerlink.com/content/5v41542449430162/
    We use recent advances in time series econometrics to estimate the relation among emissions of CO2 and CH4 , the concentration of these gases, and global surface temperature…Regression results provide direct evidence for a statistically meaningful relation between radiative forcing and global surface temperature.

    How robust is the long-run relationship between temperature and radiative forcing?
    Terence C. Mills
    Climatic Change (2009) 94:351–361 DOI 10.1007/s10584-008-9525-7
    http://www.springerlink.com/content/4l71047t3175826h/
    This paper examines the robustness of the long-run, cointegrating, relationship between global temperatures and radiative forcing….[The result provides] further confirmation of the quantitative impact of radiative forcing and, in particular, CO2 forcing, on temperatures.

    Also, you might want to check out Richard Alley’s AGU talk on CO2, if you haven’t done so already:
    http://www.agu.org/meetings/fm09/lectures/lecture_videos/A23A.shtml

  1656. WAM Says:

    Bart,

    I have read your older post.
    You assume that the CO2 has contributed significantly to the warming the climate when entering interglacials. However, the vapour content also increases when the temperature goes up.
    And you have no references on the MWP, LIA or Holocene optimum and their correlationm with CO2 level.

    Your first assumption is that the present temperatures temperature levels exceed natural temperature variability. Therefore you seek an explanation in CO2 – because yoy reject other possibilities. Like variations in clouds coverage, for example.

    You know that climate is a pattern of weather, periodically repeating itself (seasons). With possible longer-period variations (maybe quite drastic). These patterns must be repliacable by GCM models. The patterns that are observed and which were observed in real world. If the GCM model cannot replicate (at least qualitatively) observed patterns from the recent past – what is the predictive power of a such model? And climate or weather modellers might have a look at the world outside – compare for example with real atmospheric flows, as shown on sattelite pictures. At least it is what we do when we model by CFD flows around ships, cars or airplanes. But for this we try to understand the real phenomena and their physics, and then we build and check our models.

    Therefore I pointed you to prof. Marcel Leroux – he does explain basic of atmospheric circulations, baed on observations and physics. Just the circulation part is interesting probably for you, because it gives a bit different viewpoint on climate mechanisms, focusing on energy transfers within the atmosphere. He gives a chance to understand meteorology and its links to climate. Again, climate is not an average temperature trend in 30 years+, after 10-20 year cut-off low-pass filtering. Once the mechanism is circulation is understood then the modeller may create his model. If not, he will create something that has no relation to reality (like this model of a wing based on a potential non-rotational flow) – but for sure it will be something he CAN model.
    And Leroux writes quite lively. At least he poses questions.

  1657. DLM Says:

    Pat Cassen,

    Don’t be silly.

    From the abstract of the first paper that you have wasted a lot of time reading, if you have read it:

    “This test also has the attractive feature that it does not require estimates of serial correlation nuisance parameters. The test is applied to six annual global temperature series, and it provides strong evidence that global temperature series have positive trends that are statistically significant even when controlling for the possibility of strong serial correlation. The point estimates of the rate of increase in the trend suggest that temperatures have risen about 0.5°C (1.0°F) per 100 yr. If the analysis is restricted to twentieth-century data, many of the point estimates are closer to 0.6°C.”

    A temperature rise of about .5C, in about a century. Let’s assume that the measurements, bogus ‘adjustments’, and calculations are correct. And then let’s just assume that 100% of that warming was caused by the addition of 100 ppm of CO2 to the atmosphere, by evil humans. I am really scared.

    Why don’t you let Bart answer the questions. He is a lot smarter than you are.

  1658. Bart Says:

    DLM,

    Watch the talk by Richard Alley that Pat Cassen pointed you to. CO2 has been a dominant driver of climate changes throughout earth’s history. Don’t insult people.

    WAM,

    GCMs can reproduce past climate changes quite well, including the glacial cycle and 20th century warming, and they predicted the approximate magnitude of volcanic cooling quite well before it was observed.

  1659. Pat Cassen Says:

    DLM – Waste of time reading? Hmm.

    I’m not here answering questions, just looking for info. And yes, I expect that Bart is a lot smarter than both of us.

  1660. DLM Says:

    Bart,

    I have seen that talk by Richard Alley. His explanation of the faint sun paradox by use of that rock thermostat invention plugged into a dubious model is not even faintly convincing. See recent research that blows that nonsense out of the water.

    Don’t you insult people by dodging their questions, and then pretending that you have shown them to be ignorant? Do you want to have a discussion here? Please respond to this with something other than a reference to what somebody said in a lecture, somewhere:

    Look, you have the entire history of the earth to work with. Find a period, or better two, where you can prove a close correlation between temperature and CO2. You can find periods where other factors closely correlate with temperature, can’t you Bart? Why not CO2? Is CO2 such a subtle and mysterious driver of climate change that you can’t show some noticeable effect on temperature, AT SOME #$@%^&* TIME BART?

    If you don’t want to answer my questions, just say so and I will stop wasting my time and yours.

  1661. WAM Says:

    Bart,

    So what physical process is responsible for accumulation of ice during glacials?

    And what physical process causes deglaciation? You know, the ice on Antarctica is about 60 million years old :)

    And when it gets warm – like 6-7k years ago – do you think we will have more or less rain (when the polar regions get warmer)? Look at the Sahara during the Holocene optimum – it was green. It was dry during the end of the last glacial. And the circulation – more or less violent (all these storms of my grandchildren).

    I saw already somewhere your comments about the skill of GCM models.

    My question is: can GCMs reproduce climatic patterns, not just something averaged?

  1662. Bart Says:

    WAM,

    Regional climate is not well reproduced by global models. Some patterns of the changes are; others are not so well reproduced. GCM’s main objective is to simulate the global climate, and that’s what they’re optimized to do, and do quite well.

    DLM,
    I’ve got more things to do than to answer every single comment in this thread. It seems rather strange to take that as an insult. If you’re not even seriously considering what people point out to you (including sources), then why would I bother spending more time looking for sources and/or paraphrasing them for you?

  1663. DLM Says:

    I will explain it to you Bart. You have apparently put this website up to invite the public to come here and discuss your view on climate change. Am I right, so far? I am a guest here. I find it insulting, when I am invited to take part in a discussion and the host ignores my questions. Maybe it’s just me.

    Your excuse that you would have to look for sources is lame Bart. I asked you a core question. If you don’t have a pretty good answer for it in your head by now, you never will. If your position is that Alley’s foolishness about rock thermostats etc. answers the question, then that is all I need to know.

    And the climate dogma continues to crumble.

  1664. manacker Says:

    DLM

    I believe Bart recommended that you watch the lecture by Richard B. Alley, an ardent AGW believer, given at an AGU meeting, entitled:
    “Biggest Control Knob – Carbon Dioxide in Earth’s Climate History”

    You have apparently gone through it and found some weak spots. I’ve gone through it, as well, and found some others.

    It’s basically a sales pitch (by an outspoken and very convinced sounding individual) for CO2 being the principal driver of climate (based on paleo-climate studies) and for the premise that a doubling of CO2 would result in 3°C temperature increase (or more).

    It started out with the Vostok curves of CO2 and temperature going back 450,000 years (made famous by Al Gore), where someone asked Alley why the CO2 changes followed the temperature changes by several centuries, if CO2 was supposed to be the driver. Alley skirted around this question without addressing it directly, but made several other claims, of which the principal ones are covered below.

    Alley claimed Cretaceous temperature average was 37°C (I have seen other estimates that put this at 20°-25°C). At that time atmospheric CO2 is estimated to have been over 1,500 ppmv, due to breakup of Pangea and volcanic mid-ocean ridges emitting massive amounts of CO2 (and SO2).

    Alley claimed we could reach this level if all fossil fuels were consumed (this is incorrect, as there are not enough optimistically estimated fossil fuels on Earth to reach even 1,000 ppmv, or 615 ppmv over today’s level, let alone 1,500 ppmv, or 1,115 ppmv over today’s level).

    Alley did not attempt to explain the long-term temperature decline, which began at the end of the Cretaceous despite very high starting CO2 level, and which played a significant role in ensuing mass extinctions due to extreme cold.

    Alley used the Paleo-Eocene Thermal Maximum interval as proof of CO2 as cause for rapid temperature increase estimated at around 6°C, during which period an estimated 6,800 Gigatons of carbon (as CO2 equivalent) were released into the ocean and atmosphere (roughly five times the amount contained in all fossil fuels on Earth today), but he did not attempt to explain why temperatures began to drop again as atmospheric CO2 levels had reached their highest levels.

    By the way, the PETM does not provide very convincing support for Alley’s claim of a 2xCO2 climate sensitivity of 3°C upon closer examination. Atmospheric CO2 rose by an estimated factor of around 9 (by 2,400 ppmv), assuming all of the carbon released was CO2, while temperature rose by an estimated 6°C. This would translate into a 2xCO2 climate sensitivity of below 2°C, all other things being equal. However, the carbon release is believed to have occurred primarily in the form of methane (from clathrates) rather than CO2.
    http://www.theresilientearth.com/?q=content/could-human-co2-emissions-cause-another-petm
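
    (To spell out the arithmetic behind “below 2°C”: a roughly ninefold rise in CO2 amounts to log2(9) ≈ 3.2 doublings, and 6°C / 3.2 ≈ 1.9°C per doubling; the rounding here is mine.)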

    Before being oxidized to CO2, methane has a much higher GH impact than CO2 (around seven times on an equivalent mass basis, or 2.5 times on an equivalent carbon basis), so the estimated 2xCO2 climate sensitivity based on PETM is probably even lower.

    At any rate, it appears we have seen 2,500+ ppmv CO2 in the atmosphere only quite rarely, and then mostly in the very early life of our planet.

    So, unless we have another series of massive submarine volcanic eruptions caused by breakup of continents with massive methane release from clathrates, we’ll never get to 1,000 let alone 3,000 ppmv CO2.

    Alley makes up for the holes in his science with his enthusiasm and conviction, but I did not come away convinced.

    Max

  1665. sod Says:

    I will explain it to you Bart. You have apparently put this website up to invite the public to come here and discuss your view on climate change. Am I right, so far? I am a guest here. I find it insulting, when I am invited to take part in a discussion and the host ignores my questions. Maybe it’s just me.

    yes, it is just you. most people write a blog to relax from their work, not to have an extra shift.

    so ask an interesting question. or at least pretend to be interested in the answers given. or search for an answer elsewhere, and report what you found.

    Your excuse that you would have to look for sources is lame Bart. I asked you a core question.

    you did not understand his reply. you don’t seem to take a careful look at the stuff that is presented to you. (basically you are doing nothing but repeating denialist talking points)

    it simply makes no sense to research good answers to your questions. it is just a waste of time.

    If you don’t have a pretty good answer for it in your head by now, you never will.

    cars are causing all kinds of damage to nature (start with the nature covered under roads). you have the whole history of the world at your disposal. please show me some evidence of cars causing damage to nature BEFORE 1850.

    if you can’t, will this show that cars don’t do damage to nature now?

    the CO2 discussion is over. if you disagree with it, you disagree with basically all scientists and with reality.

  1666. WAM Says:

    Bart,

    I can hardly see a global climate. I see a tropical one, a continental one, a polar one, or the one we have in western Europe. These seem to be rather local characteristics.

    From global models some people predict flash flooding, aridity in some regions, and local events like hurricanes. Due to GW. Don’t you find it interesting? Local events from global models.

    You have not answered how the ice could accumulate around Northern Europe and America during glacials. It was for sure a rather local event – or a sequence of such events.

    Check Marcel Leroux, you may find some to ponder upon. Qualitative models. To check against preconceptions.

    ML says – and you can check it on the internet – that during warm periods the Sahara was wet. During cold periods it became dry. And it was quite windy during cold periods (dust was transported from the Sahara to Greenland :) – Whiteland for sure at that time).

    When you have cooling at the North Pole (as during winter, or during periods of lower insolation), a lot of cold air goes from north to the tropics, in a cyclonic way. Therefore we have storms during winter. This sucks warm and humid air northwards – which causes snow. And the circulation is a bit more complex than the Hadley or Ferrel or Norwegian models. Look at satellite pictures: big vortices moving from the poles toward the equator. They drive all variations in weather and climate (using solar energy of course – the imbalance between polar and tropical regions).

    And WHY did we get into the LIA, and how could we get out of it? What about CO2 at those times? What about the Dryas? Mechanisms? But something physical – for yourself.

  1667. Bob_FJ Says:

    ALL
    Please help me with this conundrum. First, please study this composite graphical mark-up of Bart’s 2nd figure in his lead article. My mark-up is updated from an earlier version with addition of Plazaeme’s input:

    Now let us consider how one might analyse any trends if you were living in the year ~1945 whilst having the same expertise and data (and no knowledge of the future). Is there any real difference between ~1945 and ~2009 trend analysis? (And what went wrong after 1945?)

  1668. Bob_FJ Says:

    Sod:

    the CO2 discussion is over. if you disagree with it, you disagree with basically all scientists and with reality.

    No, I don’t think anyone here would contest that CO2 is a greenhouse gas, which probably results in some warming. The problem is that the planet is a very complex and marvellous system, with what can be allegorised as an exceedingly complicated thermostat. As you may know, various feedback mechanisms (including, famously, clouds) are not fully understood. For instance, if net feedbacks are close to zero, then there is really nothing to worry about. Then of course there are various internal variabilities that are very tricky to analyse.

    BTW, the net rise in global average surface temperature over the last 150 years appears to be less than one degree C. In absolute terms that is the same as one degree K. Have you ever worked out what that would be as a percentage change in kelvin? Don’t be confused by our biological comfort zone. Think absolute temperature and HEAT. What a marvellous global thermostat we have had for millennia!
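
    (Spelling that out, and taking a global mean surface temperature of roughly 288 K as the reference value, which is my assumption: 1 K / 288 K ≈ 0.35%, i.e. about a third of one percent.)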

  1669. HAS Says:

    Bob_FJ

    VS has deduced two relevant ARIMA models from the GISS data.

    Using all the data he gets (T= temp)
    DIFF(T) = – 0.44 * DIFF(T-1) – 0.37 * DIFF(T-2) – 0.31 * DIFF(T-3) + error (T)

    and VS on March 25, 2010 at 12:22 tested 1880-1935 data and derived
    DIFF(T) = – 0.37 * DIFF(T-1) – 0.39 * DIFF(T-2) – 0.34 * DIFF(T-3) + error (T)

    and found that using this latter model all subsequent temperatures were within the 95% confidence limits, i.e. using what you knew back in 1935 (and note: no trend) you’d still be unable to reject the hypothesis at 95% confidence that the model was persisting.
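
    For anyone wanting to reproduce this kind of fit, a minimal sketch (assuming Python with numpy and statsmodels; the simulated series below merely stands in for the GISS data, and the coefficients above are VS’s, not output from this code):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    temps = np.cumsum(0.1 * rng.normal(size=130))  # placeholder for annual anomalies

    # AR(3) on first differences, i.e. ARIMA(3,1,0), matching the DIFF equations above.
    fit = ARIMA(temps, order=(3, 1, 0), trend="n").fit()
    print(fit.params.round(2))  # three AR coefficients plus the innovation variance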

    This thread has been about why looking for “time trends” in the data is mistaken, so why persist?

  1670. DLM Says:

    Hey sod,

    You don’t have to go all the way back to 1850, sod; I can’t even show you any damage caused to nature by cars before 1900. I never claimed I could. And I have never claimed that cars, and the infrastructure they need to keep them going, are not causing harm to nature. It is really nice to know that you care enough about mother earth not to own a car, or to ride in one. I have to say, though, it’s a very good thing that you weren’t born in a time when people had to kill animals for food and burn trees to stay warm. Well, actually, you could still get a feel for that by living in the Kalahari, or someplace like that. But I guess you would prefer to freeze and starve to death, rather than mess with nature.

    I am not asking any more questions sod. That rock thermostat foolishness is all I need to know.

    Face it sod, if Bart had an answer to the core question I asked, he would have happily provided it, because he started this website for the same reason that the team started RC. But it’s not working. You are losing the battle for public opinion. We are all going to burn up, just as soon as all the factors that are masking catastrophic anthropogenic global warming get out of the way, and let our nasty CO2 do its job.

    Now try to calm down sod. I am getting worried about you again.

  1671. DLM Says:

    Max,

    Can you believe that Bart uses that guy to answer for him?

  1672. HAS Says:

    “wasn’t persisting”

  1673. David Says:

    I was directed here by comments on the RealClimate blog.

    Please see our work on applying cointegration analysis and other time series econometrics to the climate issue, which has largely (but not totally – the Nature paper got a decent number of citations, but most of the others rather few) been ignored by the climate science community. I see a couple of references to our work above. Here are all the relevant papers.

    http://www.sterndavidi.com/topics.html#cli

    Yes, there are probably unit roots in the temperature time series, but they are there because temperature is driven by the greenhouse gas series, which almost certainly have unit roots in them.

  1674. DLM Says:

    David,

    Yes, that has been discussed here. Read through the thread and you will find out why your analysis is wrong. Look particularly for comments by VS. Sorry.

  1675. Pat Cassen Says:

    David –

    Glad you came by.

    Do read comments by vs.

    Don’t mind DLM. Harmless.

  1676. cohenite Says:

    Rob; thanks for the link to RC; I don’t go there much so tend to miss the odd worthwhile ‘outlier’ comment :-). Patrick says this:

    “This perspective shows the net flux is from a higher to lower temperature for LTE conditions”

    LTEs are the main reason why CO2 heating effects are grossly exaggerated, because they are the vehicles of convective movement in the atmosphere; radiative transfer does not occur within a parcel of air under LTE conditions, because it is thermally consistent, with no temperature gradient; so there is no “higher to lower temperature” for an LTE, either internally or between the LTE and the surrounding atmosphere.

  1677. Bob_FJ Says:

    HAS, your comment to me concluded with:

    This thread has been about why looking for “time trends” in the data is mistaken, so why persist?

    Sorry, but this thread is about Bart’s graphs in his lead article, and in particular I point you to his fourth figure here, and his associated text:

    “…If however we look at the trend through the average of the three datasets over the period 1975-2009 (during which greenhouse gas forcing was the dominant driver of climate change), we see the following:
    The trend over 1975 to 2009 is approximately the same (0.17 +/- 0.03 degrees per decade) for all three temperature series…”

    However, despite Bart’s straight lines, it is possible to see a plateau in the time series between 1998 and 2009 which is remarkably similar to that of around 1940. If a younger Bart had done the same thing back around 1945, AOTBE (but without any later data), he would have got an almost identical result. However, later history would then prove him to be wrong, because there was indeed a distinct plateau. The question I’ve asked is: what is different in the statistical argument between 2009 and what it would have been back in 1945?

    Now let us consider the possibility that the temperature record is accurate and thus reflects the global warming since 1850 (HadCRUT), although this is by no means proven. This would arguably be the consequence of all the various forcings, plus various feedbacks, plus various internal variabilities such as oceanic oscillations, plus external effects such as, possibly, solar magnetism and GCRs.

    This is NOT just about VS’s hypotheses, which have been controversial here.

  1678. cohenite Says:

    Pat Cassen Says:
    April 11, 2010 at 18:42

    Pat, you list a number of cointegration papers which supposedly verify the correlation between CO2 and temperature; the defects in those papers has been noted here:

    cohenite Says:
    April 7, 2010 at 07:53

  1679. cohenite Says:

    Bob_FJ; sorry my 03:13 comment above is to you; I’m having some difficulty posting.

  1680. HAS Says:

    Bob_FJ

    Have seen anyone saying the series isn’t auto correlated. Only real controversy has been whether it’s integrated or moving average.

    Under any circumstance linear regression breaks down. So a younger (but wiser) Bart wouldn’t try and do what you are trying to get him to do.

  1681. HAS Says:

    It is a bad day

    “Haven’t seen anyone saying the series isn’t auto correlated.”

  1682. DLM Says:

    Pat,
    Thanks for your kind words. I am happy to know that you don’t have me pegged as an evil paid stooge of BIG OIL, or as a mad heretic hell bent on genocide. Please tell sod. He is really angry with me.

    Bob,
    Bart would have been in a different line of work in 1945.

    cohenite,
    Pat doesn’t know that those papers are defective. So it’s OK, if he cites them. He means well.

  1683. Bart Says:

    WAM,

    As I mention in my new post, changes in the sun, as well as other factors, were important in causing the LIA. For a plot of CO2 and temp over the past 1000 years see e.g. this RC post (second figure) and the associated article. Note that before the human emissions of CO2 took off, CO2 was mostly a feedback on temperature. That feedback factor (gamma) is the focus of that article.

  1684. Bart Says:

    David,

    Thanks for dropping by and for the link to your articles.

  1685. Bob_FJ Says:

    HAS, Reur April 12 at 04:12 & 05:45
    OK, with reference to this graphical composite, let’s take my issues in a series of steps:

    STEP 1:
    (a) Do you agree that there is a plateau* in published global average temperatures between ~1937 & ~1945?
    (b) If you were situated in time in 1937 with no future data, would you be able to forecast that plateau?
    (c) If you were situated in 1945 with no future data, would you be able to show a trend that included or split-out that plateau?
    (d) If you were situated in say 1950 or later would you be able to agree that there was indeed a plateau around 1940?

    * If you prefer, you may translate ‘plateau’ as a broad peak, which to a degree is affected by the weighting or lack thereof in Bart’s 11-year smoothing.

    Bart, what sort of weighting did you use in your 11-year smoothing, if any?

  1686. Bart Verheggen Says:

    Bob,

    No weighting factors were applied in my smoothing.
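    For concreteness, an unweighted centred 11-year mean can be sketched in a few lines (an illustration only, assuming annual anomalies in a 1-D NumPy array; this is not the code actually used for the figures):

    ```python
    import numpy as np

    def running_mean(anomalies, window=11):
        # Equal weights of 1/window; "valid" mode drops (window - 1) // 2 = 5
        # years at each end, which is why the smoothed line stops short there.
        return np.convolve(anomalies, np.ones(window) / window, mode="valid")
    ```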

  1687. Bob_FJ Says:

    SORRY, html error in my post above; it should be (I hope):

    DLM, you wrote in part, and it made me chuckle:

    Bob,
    Bart would have been in a different line of work in 1945.

    Yes, that’s almost certainly true, but I was fantasising ‘all other things being equal’ (AOTBE) in my proposition.
    Or, even funnier, if it were instead back in the ’70s one might think of James Hansen, quoting in part from here, per 1971:

    “…NASA scientist James E. Hansen, who has publicly criticized the Bush administration for dragging its feet on climate change and labeled skeptics of man-made global warming as distracting “court jesters,” appears in a 1971 Washington Post article that warns of an impending ice age within 50 years…”

    No mention of the poor polar bears though!

    [Duplicate deleted. BV]

  1688. manacker Says:

    the CO2 discussion is over. if you disagree with it, you disagree with basically all scientists and with reality.

    No, sod, the CO2 discussion is definitely not over (see this blog and many others).

    The discussion may be “over” regarding the GH theory, that CO2 is a GH gas or that humans emit CO2 from fossil fuel combustion, etc. (with more affluent nations emitting more than impoverished ones).

    But the discussion of the theoretical GH impact of a doubling of atmospheric CO2 is very much an open and ongoing discussion.

    It would be absolute rubbish to claim that the IPCC 2xCO2 estimate of 3.2C is supported by “basically all scientists”.

    It would be even sillier to say that this estimate represents “reality”.

    You may not realize it, sod, but the discussion is just getting started.

    Max

  1689. HAS Says:

    Bob

    “(a) Do you agree that there is a plateau* in published global average temperatures between ~1937 & ~1945? ”

    No, not in the sense that there was a time dependent process creating the plateau. The plateau is somewhat analogous to a run of heads when flipping a coin: you see it, but the process creating it is basically random.

    “(b) If you were situated in time in 1937 with no future data, would you be able to forecast that plateau? ”

    No, but if I were asked whether it was unlikely, I’d say “No” – just like that run of heads.

    “(c) If you were situated in 1945 with no future data, would you be able to show a trend that included or split-out that plateau?”

    Not a statistically valid one. On the basis of what has been observed in this series, the next change in temp would roughly be the weighted average of the last three changes plus ~10%, but in the opposite direction, plus a random bit. Looked at that way, the “plateau” isn’t the least bit surprising.

    “(d) If you were situated in say 1950 or later would you be able to agree that there was indeed a plateau around 1940?”

    No, being technically precise – just that we’d had a run of heads, to carry on the analogy.
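    To make the coin-flipping analogy concrete, here is a crude simulation (a sketch only; the 8-year window and 0.05-degree threshold are arbitrary choices of mine) showing how often a drift-free random walk throws up a “plateau” by chance:

    ```python
    # Sketch: how often does a purely random (drift-free) walk of 130 yearly
    # steps contain an 8-year stretch with almost no net change? The window
    # length and threshold are arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(42)
    n_years, n_runs = 130, 1000
    runs_with_plateau = 0

    for _ in range(n_runs):
        walk = np.cumsum(rng.normal(0.0, 0.1, n_years))  # yearly steps, sd = 0.1
        net_change = np.abs(walk[8:] - walk[:-8])        # change over 8-year windows
        if (net_change < 0.05).any():
            runs_with_plateau += 1

    print(f"{100 * runs_with_plateau / n_runs:.0f}% of runs contain a 'plateau'")
    ```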

  1690. Bart Says:

    DLM,

    You may have missed what I previously wrote to VS:

    Nobody (me included) is obliged to answer to anybody here; we’re all here out of free will. I happen to be the host of this blog. That does not mean that I attend to it 24-7. It doesn’t mean that I’m obliged to answer to anything anyone brings up. It does mean that I set the rules, and nobody else. I try to do so in a fair, constructive and open manner. If you don’t like it, take it elsewhere. I don’t owe you anything, and vice versa.

    I don’t remember having sent you a personal invitation. Of course, you are welcome to contribute to the discussion as long as you remain reasonably polite and on topic, and preferably with something constructive to say.

    Steve Easterbrook has detailed notes of Richard Alley’s talk on his blog.

    In the IPCC report there are a number of figures of temp and CO2 over different time periods from Earth’s history, showing that, like a drunken man and his dog, they don’t wander away from each other all that far. For the legends and explanations, see IPCC chapter 6. (delta 18O is a proxy for temperature).

    Last 400 million years

    PETM

    “The rapid decrease in carbon isotope ratios in the top panel is indicative of a large increase in atmospheric greenhouse gases CO2 and CH4 that was coincident with an approximately 5°C global warming (centre panel).”

    Incidentally, it can also be seen that the large slug of carbon emitted into the atmosphere had a recovery time of hundreds of thousands of years (despite the much, much shorter lifetime of an individual molecule of CO2).

    Past 650,000 years

    Same approximate time period (ice ages), CO2 correlated with Temp.

    Last 1000 years discussed on RC.

    For the past century, see my newer post.

    CO2 versus Temp for the past century. Take note of the caveat:

    There are many reasons climatologists don’t approach climate this way, and they’re good ones. I only did it because I’ve been encountering sources that say that the correlation is zero (nonexistent). Those places are wrong.

  1691. Marco Says:

    @Bob_FJ:
    The Hansen ice age myth is a persistent one: his name appears because Rasool and Schneider used, amongst *many* other things, a computer programme Hansen had written. This by no means implies Hansen supported the conclusions of Rasool & Schneider, but it’s oh so nice to try and link the two together. Guilt by association, and all that!

    Oh, and Rasool and Schneider did not predict a new ice-age either. They discussed various scenarios, of which one (quadrupling aerosols) *could* lead to an ice age. Their estimate of aerosol influence was too high, though, and there was a lack of understanding of several other greenhouse gases.

  1692. cohenite Says:

    PETM:

    http://www.nature.com/ngeo/journal/v2/n8/abs/ngeo578.html

  1693. Bart Says:

    Interesting.

    “At accepted values for the climate sensitivity to a doubling of the atmospheric CO2 concentration1, this rise in CO2 can explain only between 1 and 3.5 °C of the warming inferred from proxy records.” [of 5 to 9 degrees.]

    That would mean that either other processes also played an important role or that the climate sensitivity is (a lot) higher (ie due to more and/or stronger positive feedbacks).

  1694. cohenite Says:

    Yes, the range is from 1C out of 9C to 3.5C out of 5C; that’s 11% to 70%. The thing that interests me about the PETM is that after it temperatures continued to rise;

    CO2 didn’t cause that, because levels of CO2 dropped after the PETM.

  1695. sod Says:

    so many false claims. i will only answer a selected few:

    It would be absolute rubbish to claim that the IPCC 2xCO2 estimate of 3.2C is supported by “basically all scientists”.

    the IPCC gives a RANGE. that range is supported by basically all scientists. my statement is a fact.

    (c) If you were situated in 1945 with no future data, would you be able to show a trend that included or split-out that plateau?

    look at individual model runs. they constantly show “plateaus”.

    You don’t have to go all the way back to 1850 sod, I can’t even show you any damage caused to nature by cars before 1900. I never claimed I could. And I have never claimed that cars and the infrastructure they need to keep them going, are not causing harm to nature.

    i think you did not get my point. that something never happened in the past does not mean that it is not happening now. and lack of data might keep us from showing that something happened in the past, while we have good data to show that it is happening now.

  1696. DLM Says:

    Thanks Bart,

    But you really didn’t answer the question:
    “Is CO2 such a subtle and mysterious driver of climate change that you can’t show some noticeable effect on temperature, AT SOME #$@%^&* TIME BART?”

    I wasn’t looking for a drunken man and a drunken dog type of correlation Bart. I was looking for a noticeable effect on temperature, which could be perceived by a human as being a dangerous change in climate. And it would take significantly more than .7C, or 1.2C to scare me.

    I am really surprised that you keep mentioning Richard Alley. That rock thermostat theory looks like a Rube Goldberg concoction. It’s hard to believe that it was taken seriously. I guess it was the modeling that impressed some.

    http://www.nature.com/nature/journal/v464/n7289/full/nature08955.html

    sod,

    You don’t have a point sod. We do have data from the past that is as good as the data we have from today. Your man Phil Jones says that there has been no statistically significant warming for the past 15 years. Despite the humongous amount of CO2 added to the atmosphere by evil humans, the temperature ain’t going up. And we have a La Nina coming, so don’t worry sod. By the way sod, it’s OK to eat cows now. See Cowgate.

  1697. manacker Says:

    sod

    the IPCC gives a RANGE. that range is supported by basically all scientists. my statement is a fact.

    How can you substantiate your claim that the 2xCO2 sensitivity RANGE given by IPCC “is supported by basically all scientists”?

    Had you said “many climate scientists” and “some point within the range” your statement may well have been correct, but that “basically all scientists” support the RANGE suggested by IPCC is false, unless it is substantiated.

    So the statement is not a substantiated fact, but an unsubstantiated claim instead.

    Language is important, sod, and you can’t just make stuff up, just because it sounds good to you.

    Max

  1698. manacker Says:

    Bart

    Sorry to cut in to your exchange with DLM, but the CO2 temp correlation for 800,000 years you cited states clearly (bold letters by me):

    Here we have each value of CO2 plotted against the temperature deviation from reference values — with the temperature being for 1000 years before the corresponding CO2 value.

    Temperature driving CO2??

    The rest of the write-up attempts to rationalize why we think one thing happened for 798,000 years while we think something else happened over the last quarter of the 20th century.

    Could be true, but it’s not very convincing, Bart.

    I’d leave that write-up out of your “historical evidence that CO2 drives temp” dossier.

    Max

  1699. sod Says:

    You don’t have a point sod. We do have data from the past that is a good as the data we have from today. Your man Phil Jones says that there has been no statistically significant warming for the past 15 years. Despite the humongous amount of CO2 added to the atmosphere by evil humans, the temperature ain’t going up. And we have a La Nina coming, so don’t worry sod. By the way sod, it’s OK to eat cows now. See Cowgate.

    ouch. such a short paragraph, so many errors.

    data from the past is NOT as good as modern instrumental data. we are accurately measuring CO2 at many places around the world. this is something completely different than proxy records.

    Jones said “yes, but only just” to that 15 years claim. he then went on to explain that it is very close to significance. please read the interview, and not denialist misrepresentations of what he said.

    http://news.bbc.co.uk/2/hi/8511670.stm

    we are still running an el-nino. a rather mild one, that is still causing record temperatures in the satellite datasets. get your facts right.

    http://www.esrl.noaa.gov/psd/enso/enso.mei_index.html

    “cowgate” is, like all those “gate” stories, just a denialist invention. again, check the facts.

    http://tinyurl.com/y8hu7c4

    Frank Mitloehner explains how cows are less important in the US and california than they are globally, and that the comparison was a little unfair, as it factored in all (CO2) costs of livestock, while transportation did not include all factors. you have not understood his comment. (because, again, you were not reading what he says, but misrepresentations)

    ———————————

    Had you said “many climate scientists” and “some point within the range” your statement may well have been correct,

    i am fine with that statement, and it is nice that you are as well. so let us stick to that one.

  1700. manacker Says:

    sod

    thanks for modifying your sentence, but let me comment on another:

    “cowgate” is, like all those “gate” stories, just a denialist invention

    That’s what Nixon and pals said about “Watergate”, too (except the “inventors” were not “denialists” but “the liberal press”).

    Max

  1701. John Says:

    Sod

    Arthur Scargill is likely to have had as much of a role in CO2’s part in AGW as CO2 has.

  1702. DLM Says:

    I don’t think Phil was talking about proxy data, when he played into the hands of the denialists by admitting that there has been no statistically significant warming in the last 15 years. It’s a travesty! When are we going to see a sustained period of statistically significant warming that will be sufficiently scary to get the feckless politicians to do something about the earth burning up, sod? (That’s a rhetorical question.) But hey, I hope you are right, and the current El Nino continues for another 30 years or so. Maybe that would help focus their attention.

    I don’t know sod, I live in California and cows are very important to me. But damn those denialists and their unfair comparisons. I wish they had never found out about that Himalayan glacier melting thing, and unfairly compared the truth to a blatant deliberate lie. And those STOLEN INNOCENT emails, OH PULEEEEASE! I can’t believe that parishioners are deserting the AGW church in droves, just because of all this silly stuff.

    Keep up the good work sod. And stay out of cars. They are evil. Cows too.

  1703. Marco Says:

    Actually, DLM, we’ve had statistically significant (P<0.05) warming since before 1994!

    And those poor deniers; they didn't even notice the Himalayas error. It was found by an IPCC contributor.

  1704. NokTang Says:

    Didn’t know that Prof Graham Cogley was an IPCC contributor? I believe he was the one who found the Kotlyakov article which stated the year 2350 instead of 2035. And then there was Dr.Raina, did he contribute to IPCC as well?

  1705. Bart Says:

    Manacker,

    You probably missed my reply on the same temp lagging CO2 issue here. I thought more of you than being confused by a chicken-‘n-eggs issue.

    DLM,

    My aim is not scaring people. A small temp change can however have quite drastic effects (if it persists long enough). Eg in the last interglacial, global avg temps were about 1-2 degrees higher than now, whereas sea level was about 6 metres higher.

    On the temp effect of CO2: If you can point me to a physically based model that can reproduce the glacial to interglacial temp difference without invoking GHG, then you would have a good point. In the meantime, perhaps consider this.

  1706. Pat Cassen Says:

    cohenite (April 12, 2010 at 03:22) – Yes, I had noticed your critique of Kaufmann et al. (2006); I apologise for not mentioning it in my comment citing those papers.

    So here’s my problem: There are many papers claiming statistically significant deterministic trends, and/or correlation between temperature and radiative forcing (use ‘cited by’ to find them, in addition to the above). These are written by econometricians – not those corrupt advocates of world government, the climate guys. Several of these papers are cited dozens of times, and I don’t see anybody screaming “It’s wrong!” Terence C. Mills, author of the 2009 paper claiming “confirmation of the quantitative impact of radiative forcing and, in particular, CO2 forcing, on temperatures”, seems to have quite an impressive resume; I would like to think he knows what he is talking about.

    Then there’s B&R (as yet unpublished?). And we await vs’ cointegration analysis.

    I don’t particularly like arguments from authority, but I’m not qualified to evaluate any of these papers, and I’m probably too old and stupid to learn enough new stuff. So it would be nice if, say, David Stern (who has published with Kaufmann) would weigh in on these matters. In the meantime, I have to conclude that econometric analyses are not in serious conflict with the physics-based arguments (which I find compelling, and which I am qualified to evaluate) and the paleo data in validating the importance of CO2 for the current warming.

    I’ll stay tuned.

  1707. Frank Says:

    Bart says:

    “Also note that CO2 did affect the eventual temp change when getting out of an ice age; without substantial effect of CO2 you’d have a hard time explaining the large temp change from glacial to interglacial.”

    Bart, with all due respect I have a difficult time with this statement – we have good evidence of many cycles over the past 3 million years, including a marked step change in periodicity between earlier and later occurrences. Why isn’t it just a matter of orbital cycles, with GHGs (CO2 AND H2O) going along for the ride, so to speak?

    PS – Looks like WUWT has latched on to the afore-mentioned IPCC temp graph (i.e., “fig 4.”, the one with the various end-points); apparently the effort to accentuate the apparent trend was inserted after the scientific reviews….

  1708. Willem Kernkamp Says:

    Bart,

    I agree with Frank. We can’t prove a role for CO2 by saying that there is no other explanation. Obviously, the world came out of the ice age just fine without the help of CO2. This means that there are other factors.

    The days when climate scientists could make claims about the climate and hand-wave them through are over.

    Will

  1709. HAS Says:

    Marco said on April 12, 2010 at 19:40

    “Actually, DLM, we’ve had statistically significant (P<0.05) warming since before 1994!”

    Marco, you seem to be missing the point of this thread, despite having been posting on it almost since it began.

    The GISS temperature series does not meet the theoretical requirements that would allow you to make the significance tests you are quoting.

    If you feel confident enough in your statistical ability to quote “statistically significant (P<0.05) warming”, you surely by now should have got this simple point.

  1710. sod Says:

    The GISS temperature series does not meet the theoretical requirements that would allow you to make the significance tests you are quoting.

    If you feel confident enough in your statistical ability to quote “statistically significant (P<0.05) warming”, you surely by now should have got this simple point.

    a rather funny reply. marco was replying to this post:

    I don’t think Phil was talking about proxy data, when he played into the hands of the denialists by admitting that there has been no statistically significant warming in the last 15 years.

    so things work like this:

    significance test used to show that there is no significant warming over the past 15 years: FINE.

    same test used to show that there is significant warming FOR EVERY YEAR BEFORE 15 years ago: ILLEGAL

    funny.

  1711. HAS Says:

    I didn’t really want to get into what Jones said, but one of the implications of this thread is that Jones made the same mistake as Marco by using regression to make statements about the significance or otherwise of recent warming. Any conclusion (warming/not warming) is unsupported on this basis.

    However, this thread has also suggested that when you use the right tests on GISS you can’t reject the hypothesis (at 95% confidence) that the post-1935 observations came from the process derived from the pre-1935 observations.
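    One concrete form such a test can take is sketched below (an illustration of the general idea only, not the exact procedure used above; “giss.csv” is a hypothetical file of year/anomaly pairs, and the ARIMA(0,1,3) specification is borrowed from VS):

    ```python
    # Sketch: fit an ARIMA to the pre-1935 anomalies, then ask whether the
    # post-1935 values stay inside the 95% forecast band. The file name and
    # model order are assumptions for illustration.
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    data = pd.read_csv("giss.csv", index_col="year")["anomaly"]
    pre, post = data.loc[:1935], data.loc[1936:]

    fit = ARIMA(pre, order=(0, 1, 3)).fit()
    band = fit.get_forecast(steps=len(post)).conf_int(alpha=0.05)

    lower, upper = band.iloc[:, 0].to_numpy(), band.iloc[:, 1].to_numpy()
    outside = ((post.to_numpy() < lower) | (post.to_numpy() > upper)).mean()
    print(f"Fraction of post-1935 observations outside the 95% band: {outside:.2f}")
    ```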

  1712. Bob_FJ Says:

    HAS, Reur April 12 at 10:13:

    “(a) Do you [HAS] agree that there is a plateau* in published global average temperatures between ~1937 & ~1945? ”
    No, [Bob_FJ] not in the sense that there was a time dependent process creating the plateau. The plateau is somewhat analogous to a run of heads when flipping a coin, you see it but the process that is creating it is basically random.

    I dispute that the process that is creating it is basically random.

    Please examine this graphical composite, about which I make SOME basic observations:

    * The HADCRUT NH (Northern Hemisphere) is a different time-series to the HADCRUT SH.
    * The unsmoothed data are annual and are derived from many thousands of data taken twice daily.
    * The black smoothing line is Hadley’s 21-year simplified Gaussian weighting (except for the first and last ten years, which employ “extrapolated numbers”).
    * Although these are two different data time-series, they both have the same characteristic shape over ~160 years, including an arguable 60-year cycle, maybe from internal natural variations.
    * The general trend of both time-series is generally upwards roughly in line with the GISS net forcings.
    * However, there are also “internal natural variations” (not shown), most notably oceanic oscillations such as the PDO in the SH. (however, it is very complicated and not well understood)
    * Some of the forcings have dominated in one hemisphere, such as volcanic in the SH, and industrial pollution (aerosols) in the NH, and thus some differences between NH & SH can be expected from these drivers alone.
    * Some of the annual data may appear to be outliers or noise, but they can be traced to real events.

    Just a few questions please HAS:
    [a] Coming back to the visible plateau (or peak) between ~1937 and ~1945 in HADCRUT T’s, could you please explain how it is randomly very similar in two different but related time-series?
    [b] Between ~1910 and ~1937, there was a sharp warming in both time-series. Is this random too?
    [c] Between ~1945 and ~1975, there was slight cooling, initially sharp until ~1960. Is this random too?
    [d] Between ~1910 and ~1960, there is a sharp characteristic rise, then plateau, then fall; a “mountain” of some 50 years in two different time-series. So are those two somewhat similar results just from tossing a coin 50 times?

  1713. John Whitman Says:

    VS & Bart,

    I have had a 4 day break. : )

    Saw that VS wishes a very long break.

    VS, thanks for all you have done here at Bart’s place. I will surely contact you at your email address. Please let us know where you might continue the journey on to cointegration. I admire your honesty and straightforward presentation skills, your courage and intellect. I also thank the many statistical/econometrician professionals you rallied to discuss the application of econometrics to climate science. The glimpse I saw of your premises makes me want to see more of you someplace.

    Bart, although it was very apparently stressful for you, you did attempt (at least) to provide a discussion place for VS. For that I thank you. Over time it became quite clear to me that your premises (and those of many of your original/normal commenters) were inconsistent with those of many of us who came here just to see the discussion revolving around VS’s knowledge.

    Hey, I am still pounding away on Verbeek’s third edition, ‘A Guide to Modern Econometrics’ : )

    John

    PS – Hey, VS, I predict that 4 to 6 years from now you will see a huge new crop of freshly graduated econometricians/statisticians. Hoorah! Thanks to you.

  1714. Igor Samoylenko Says:

    Willem Kernkamp said: “I think we do agree on this [that GISS temperature record contains unit root]”

    I don’t think so.

    Here are a couple of quotes from chapter 15.4 “The meaning of tests for Unit Roots” in Time Series Analysis by J. Hamilton (1994) (p. 446):

    Unit root and stationary processes differ in their implications at infinite time horizons, but for any given finite number of observations on the time series, there is a representative from either class of models that could account for all the observed features of the data. We therefore need to be careful with our choice of wording – testing whether a particular time series “contains a unit root”, or testing whether innovations “have a permanent effect on the level of the series”, however interesting, is simply impossible to do.

    And he goes on (p. 447):

    There may be good reasons to restrict ourselves to consider only low-order autoregressive representations. Parsimonious models often perform best, and autoregressions are much easier to estimate and forecast than moving average processes, particularly moving average processes with a root near unity.

    If we are indeed committed to describing the data with a low-order autoregression, knowing whether the further restriction of a unit root should be imposed can clearly be important for two reasons. The first involves a familiar trade-off between efficiency and consistency. If the restriction (in this case, a unit root) is true, more efficient estimates result from imposing it. Estimates of the other coefficients and dynamic multipliers will be more accurate, and forecasts will be better. If the restriction is false, the estimates are unreliable no matter how large is the sample. Researchers differ in their advice on how to deal with this trade-off. One practical guide is to estimate the model both with and without the unit root imposed. If the inferences are similar, so much the better. If the inferences differ, some attempt at explaining the conflicting findings (as in Christiano and Ljungqvist, 1998, or Stock and Watson, 1989) may be desirable.

    (emphasis is mine)

    So, by just looking at the data alone and ignoring what it physically represents, if tests find unit root, it gives us a choice of looking at modelling the time series as unit root process; it does NOT rule out using (trend) stationary models. Several commenters in this thread argued that we should still model temperature as a unit root process on the basis that it does not matter if there really is a unit root in the time series; even if there is not, it is “close enough”.

    An obvious question is how close is close enough? In fact, there are several reasons to believe that it may not be close in any obvious sense:

    1) The number of data points is very low – ~130.
    2) The temperature contains a non-linear deterministic trend, which is the response of temperature to slow, non-linear changes in net forcing. This is a known problem for unit root tests (see, for example: Cochrane (1991)).

    DeWitt has looked at Monte Carlo tests of near unit root processes on very short time series (130 data points, similar to GISS). He looked at a simple near unit root process:

    A1(t) = E(t) + alpha*A1(t-1), where E(t) is white noise

    He found that “over 90% of the time, a series with an alpha of 0.95 will test as having a unit root” and almost 80% of the time, alpha of 0.9 will test as having a unit root. As DeWitt concludes: “All unit root tests have low power against near unit roots when the time series is short. An alpha of 0.95 is equivalent to a time constant of 20 years. You really don’t need a very thick layer of the ocean to get a time constant of that magnitude”

    In another example, DeWitt ran unit root tests on a leaky filter with a near unit root and a 100 year time scale and found that the power of VS’ test to reject the non-existing unit root based on 124 years was just 5%.
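    DeWitt’s first experiment is straightforward to reproduce in outline (a sketch, not his actual code):

    ```python
    # Sketch: Monte Carlo estimate of how often an ADF test fails to reject a
    # unit root for a stationary AR(1) with alpha = 0.95 on 130 observations.
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(0)
    n_obs, n_runs, alpha = 130, 1000, 0.95

    false_unit_roots = 0
    for _ in range(n_runs):
        e = rng.standard_normal(n_obs)
        x = np.empty(n_obs)
        x[0] = e[0]
        for t in range(1, n_obs):
            x[t] = alpha * x[t - 1] + e[t]        # A1(t) = E(t) + alpha*A1(t-1)
        if adfuller(x, autolag="AIC")[1] > 0.05:  # fail to reject H0: unit root
            false_unit_roots += 1

    print(f"Tested as having a unit root: {100 * false_unit_roots / n_runs:.0f}% of runs")
    ```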

    So, it is not at all obvious that the temperature time series is “near” unit root in any obvious sense to justify modelling it as a unit root process, despite unit root tests finding unit root.

    My personal conclusion is that it is far from clear why we should look at modelling the temperature time series as a unit root process. Also, even if an argument is put forward for doing so, it does not automatically invalidate any existing results based on modelling it as a (trend) stationary process (see the quote from Hamilton above). This is an important point.

    This leads on to the main topic of this thread: linear trends, their statistical significance and their confidence intervals. These are calculated based on modelling the temperature time series as (trend) stationary, with adjustments for autocorrelation (see Appendix 3.A of the IPCC AR4). This is an entirely reasonable thing to do (see the quote from Hamilton above). Tests failing to reject a unit root in the temperature time series do not suddenly change this.
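    For reference, the core of that trend calculation looks roughly like this (my paraphrase of the Appendix 3.A recipe, an OLS slope with its standard error inflated via an effective sample size; not the IPCC’s own code):

    ```python
    # Sketch of a linear trend with an AR(1) adjustment to the confidence
    # interval, in the spirit of IPCC AR4 Appendix 3.A (paraphrased).
    import numpy as np

    def trend_with_ar1_ci(t, y):
        # t, y: 1-D NumPy arrays of equal length (years, anomalies)
        n = len(y)
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
        n_eff = n * (1 - r1) / (1 + r1)                 # effective sample size
        se = np.sqrt(np.sum(resid**2) / (n_eff - 2) / np.sum((t - t.mean())**2))
        return slope, 1.96 * se                         # slope and ~95% half-width
    ```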

    Also, as is clear from Appendix 3.A, other, more complex models have been considered, including FARIMA models by Cohn and Lins (2005), and rejected because (quoting again from Appendix 3.A):

    …the results depend on the statistical model used, and more complex models are not as transparent and often lack physical realism. Indeed, long-term persistence models (Cohn and Lins, 2005) have not been shown to provide a better fit to the data than simpler models.

    The dependency of the results on the model has been neatly demonstrated in this very thread with VS using ARIMA(0,1,3) and B & V using ARIMA(2,1,0) and both arriving at fundamentally different results as to the statistical significance of the warming trend over the last 100 years.

    Again my personal conclusion is that the way trends, their statistical significance and the confidence intervals are currently estimated in climate science (and the IPCC report) is reasonable. Are confidence intervals for the linear trends realistic, given that the forcings are non-linear, there are many uncertainties etc etc? They are arguably still too low and there is room for improvement (as ever). And it is not surprising to learn that the work is on-going to improve these (for example, from the same AR4 appendix): “Robust methods for the estimation of linear and nonlinear trends in the presence of episodic components became available recently (Grieser et al., 2002)”.

    What I do NOT see for the life of me is any evidence of fraud, deception, anti-science and all other similar accusations levelled at climate science.

  1715. HAS Says:

    Bob

    There are statistical tests (mentioned throughout this thread) you should use to test the independence of these series and to tell if they are the result of deterministic or stochastic processes, and ultimately how to draw valid statistical inferences from them. These techniques will give better information in the detail than just looking at the graphs, although the graphs will obviously help you form hypotheses to test.
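    As an example of what running such tests involves (my choice of tests and settings, not a prescription: the ADF test takes a unit root as its null, while KPSS takes trend-stationarity as its null, so the two complement each other):

    ```python
    # Sketch: complementary ADF and KPSS tests on a single series.
    from statsmodels.tsa.stattools import adfuller, kpss

    def classify(series):
        adf_p = adfuller(series, regression="ct", autolag="AIC")[1]   # H0: unit root
        kpss_p = kpss(series, regression="ct", nlags="auto")[1]       # H0: trend-stationary
        return {
            "unit root not rejected": adf_p > 0.05,
            "trend-stationarity not rejected": kpss_p > 0.05,
        }
    ```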

  1716. cohenite Says:

    BobF_J; that graph of the southern and northern hemispheres clearly shows greater cooling in the sth hemisphere from 1940; supposedly the cooling of the 40’s was due to aerosols, but aerosols could not have had that effect in the sth hemisphere because they were not present there; the cooling must have been due to natural factors, while the cooling in the nth hemisphere must have been reduced for some reason; UHI perhaps?

  1717. John Whitman Says:

    ””””Igor Samoylenko Says: April 12, 2010 at 23:49 – . . . . Again my personal conclusion is that the way trends, their statistical significance and the confidence intervals are currently estimated in climate science (and the IPCC report) is reasonable . . . . ””””

    Igor,

    Honesty, it is good in scientific discussion. I personally disagree with your personal conclusion.

    I suggest to you that we need to enlist the collaboration of professional econometricians in the climate science area. It is apparent that there is some lack of functional econometric collaboration with some climate scientists. See VS’s attempt here at collaboration as evidence of that lack.

    Our non-professional personal opinions matter a little, but only to a very limited extent. How to validate our non-professional opinions without professionals? No way, but we must choose professionals . . . . wisely.

    I suggest we foster here (or at other places, if Bart is unwilling and if VS is willing) a collaboration of non-agenda (objective) climate scientists with non-agenda (objective) econometricians. DO YOU AGREE? Shall we go there without bias?

    Other bloggers will be willing to host, I think. (Yoda speak)

    NOTE: I personally find the arguments of some that new statistical analyses (like VS’) have an anti-physics or non-physics basis to be only a knee-jerk reaction of limited view physicists. Let’s let the far view physicists engage. Hey, this is collaboration, n’est-ce pas?

    John

    PS – VS, hope you are still monitoring this stuff.

  1718. dhogaza Says:

    I personally find the arguments of some that new statistical analyses (like VS’) have an anti-physics or non-physics basis as being only a knee-jerk reaction of limited view physicists. Let’s let the far view physicists engage

    You go first. B&R claim that 1 w/m^2 from the sun will cause 3x the warming of 1 w/m^2 radiation from CO2. You propose a physical explanation for how this can be true, and you go convince physicists that your explanation is correct.

    Apparently you think that “far view” physicists will accept this result of B&R and that the only reason “limited view” physicists won’t is because, umm, why, exactly? Physics works. Your typing on a computer pretty much proves that such far-reaching errors in physics as B&R propose are not possible.

  1719. Bob_FJ Says:

    Sod, you wrote concerning a question I asked of HAS:

    (c) If you were situated in 1945 with no future data, would you be able to show a trend that included or split-out that plateau?
    look at individual model runs. they constantly show “plateaus”.
    http://www.layscience.net/files/ipcc.jpg

    You should always consider the context in which a question is asked! Your comment does not appear to have any connection with that context. Neither does the graph you cited show the plateau that is evident in the published T data at around 1940.

  1720. John Whitman Says:

    ”””dhogaza Says: April 13, 2010 at 01:14 – . . . . seem to think that “far view” physicists will accept this result of B&R and that the only reason “limited view” physicists won’t is because . . . . ””””

    dhogaza,

    I suggest that ‘far view’ in my context means physicists with no baggage to defend wrt climate science interests . . . . pro or con.

    Shall we first define what ‘objective’ and ‘independent’ mean? That would be fruitful for common understanding. Shall we go there first before pursuing ‘far view physicists’?

    I am sure of where the ‘objective’ and ‘independent’ discussions will lead us philosophically. I love those areas. Shall we go there?

    John

  1721. Tim Curtin Says:

    Here’s a special for Bart and other Netherlanders:
    “We propose tests for hypotheses on the parameters for deterministic trends. The model framework assumes a multivariate structure for trend-stationary time series variables…. We apply our tests to examine if
    monthly temperatures in The Netherlands, measured from 1706 onwards, have a trend and if these trends are the same across months. We find that the January and March temperatures have the same upward trend, that the September temperature has decreased and that the temperatures in the other months do not have a trend. Hence, only winters in The Netherlands seem to get warmer”.*

    How beastly this must be – and dangerous to boot. I propose a special fund to help protect Bart & fellow citizens from the horrors of their warmer winters. Apologies for joking, but the authors confirm the point I have been making that we need to disaggregate from the “global” mean confections of CRU & GISS, and then do a frequency distribution of the individual local data sets by month (as I have been doing) to obtain a better impression of what is really happening, and how dangerous it is on the ground, at eg Rotterdam.

    *Testing for Common Deterministic Trend Slopes
    Timothy J. Vogelsang (Cornell) and Philip Hans Franses
    Econometric Institute Erasmus University Rotterdam

    Click to access commontrends.pdf

  1722. cohenite Says:

    dhogaza; a somewhat simplistic misinterpretation of B&R; they say this:

    “If instead of a permanent increase in its level, the change in rfCO2 were to increase permanently by 1 w/m2, global temperature would eventually increase by 0.54 C. If the level of solar irradiance were to rise permanently by 1 w/m2, global temperature would increase by 1.47 C.”

    CO2 heats indirectly, solar directly; the indirect CO2 heating is based on a continued increase in CO2 levels. When solar is reemitted from the surface as RE, CO2 absorbs and isotropically reemits RE; with a sustained solar heating of the surface, RE/2 is returned to further heat the surface; but that RE/2 is reduced by Beer-Lambert and boosted by the slight Stefan-Boltzmann effect at the surface. Now this is the important point: with a constant solar, for CO2 to continue heating, CO2 levels must increase; if CO2 doesn’t increase then, as B&R note:

    “This means that when the temperature deviates from its equilibrium as determined in equation (2), about half of the deviation is corrected within a year.”

    This lag, in effect a homeostatic process of less than a year, is consistent with the findings on the lag effect of temperature by Trenberth:

    Click to access 2000JD000298.pdf

    So there is some physical reality to what B&R find. The 1 w/m2 higher temperature effect from the sun is a reversal of the CO2 heating effect, because the solar directly impacts the SB RE from the surface; CO2 doesn’t.

  1723. DLM Says:

    Marco says: “Actually, DLM, we’ve had statistically significant (P<0.05) warming since before 1994!

    And those poor deniers; they didn't even notice the Himalayas error. It was found by an IPCC contributor."

    Your first bit of gibberish has been adequately addressed by various sensible comments above.

    The Himalayan glacier lie was not an 'error'. It was deliberate. The lack of substance in the ludicrous claim was brought to the attention of the powers that be by IPCC reviewers, before it was mendaciously included in the report. See the words straight from the mouth of one of the main perpetrators:

    http://www.dailymail.co.uk/news/article-1245636/Glacier-scientists-says-knew-data-verified.html

    Oh, but that part about those dumb deniers not noticing the error is funny, even though you can't prove it. I am sure the IPCC, the BBC, the New York Times et al, would have jumped all over it, if the lie had been brought to their attention by some denialist practitioner of voodoo science. What’s really funny is that you don’t realize that the climate dogma is crumbling, due to internal bumbling. Hey Marco, is that alliteration?

  1724. DLM Says:

    Bart,

    I was able to find that website on my own, some time ago. I really don’t need a grammar school primer on climate science Bart. And if I did, a dumbed-down version of canned RealClimate alarmist propaganda would not be helpful.

    I get that CO2 is a GHG. I get the anthropogenic angle. What I am looking for is some solid proof for the theory that burning fossil fuels is going to result in catastrophic global warming. It ain’t on that website Bart. And if I wanted to insult your intelligence, as you have mine, I could provide you with a half-dozen links to ‘denialist’ blogs that plausibly refute everything that clown says.

    Now you claim to know that the avg global temp in the last interglacial was 1 to 2 degrees higher than now. That is pretty much what climate geniuses used to say about the MWP. But that became inconvenient in the 1990s, and now there isn’t enough proxy data from the SH to make that call. But OH MY GOODNESS!, as luck would have it, we do have precise and reliable global proxy data, from all the way back to the last interglacial. It’s ludicrous Bart. And to raise sea levels by 6 meters, it would take melting the equivalent of all the ice in Greenland, down to the point where you wouldn’t have enough left to chill a #@$%^&g Martini. Ludicrous! You are in the intellectual and moral equivalent of Al Gore territory there Bart.

    I don’t care about your models Bart. They spring from the minds of the same people who dreamed up that rock thermostat foolishness. You all can concoct assumptions, and fabricate dubious tweaks to your heart’s delight. It’s just not working for you.

    Let’s just skip to the bottom line. Kyoto, big failure. Copenhagen, colossal failure. The public isn’t scared. The credibility of climate science is in the toilet, for very obvious reasons. I will help you:

    If you people want to attempt an improbable comeback, stop that idiotic nonsense about the science being settled. Instead of keeping your heads up that part of your anatomy where the sun don’t shine, seek out and engage the ‘deniers’ in debate. And clean up your act.

  1725. Bart Says:

    Willem, Frank,

    GHG were an amplifying feedback which substantially (being responsible for approximately half of the total forcing) contributed to the final temp change between glacial and interglacial periods. I’m not the one trying to handwave basic physics away.

  1726. Bob_FJ Says:

    Cohenite Reur April 13 at 00:13:

    BobF_J; that graph of the southern and northern hemispheres clearly shows greater cooling in the sth hemisphere from 1940… …the [SH] cooling must have been due to natural factors while the lesser cooling in the nth hemisphere must have been reduced for some reason; UHI perhaps?”

    Yes, that seems to be one of those “inconvenient truths”. (As far as I’m aware, there does not seem to be a satisfactory explanation for the cooling after 1940.) A couple (?) of years ago, there was excitement that it was partly explained by different methods of sea-going SST measurement, varying from canvas bucket dips (giving lower readings due to evaporation) through to solid buckets and engine cooling water inlet measurements. However, that seems to have gone quiet, maybe with the realization that the cooling was also observed on land. See this composite graph I made back in 2008 comparing SST’s with the global average, wherein there is similar cooling on land in that period. The proposed divergence line suggests a possible UHI effect of about 0.1 C in recent times.
    Another strangely implied contribution to the lessened cooling in the NH is the Agung volcanic eruption in 1963. Even though this occurred in Indonesia, it is alleged that its influence was felt more in the NH than the SH; from memory, in Sato et al. 2008.
    The AMO (Atlantic Multi-decadal Oscillation) does seem to support a lesser cooling in the NH, although other phases of this create strong contradictions elsewhere, so its true significance (calibration) is hard to assess.
    This version of the PDO shows cooling in the period ~1950 through ~1975, but unfortunately it is hard to conceive that it has more effect in the NH than the SH.
    Finally, I doubt if UHI was so important in this period either, so it seems to me that the cooling after 1940 remains an “Inconvenient Truth”.

  1727. Bart Says:

    HAS,

    The hypotheses that the temps after 1935 are either just a linear extrapolation of, or just a stochastic process stemming from the pre-1935 temps are both meaningless, for the reason that the climate forcings have changed in the meantime. The stochastic model is like claiming that my weight could have been anywhere between 50 and 105 kg. Well, thanks for that information. That’s a prediction without any skill to speak of.

    John Whitman,

    I appreciate attempts at collaboration, but don’t expect a serious scientist to throw physical principles like conservation of energy out of the window. Perhaps your ‘far view physicists’ (?) would. In my view however, people who are so eager to arrive at a certain conclusion that they’re willing to compromise such basic physical principles seem to be agenda driven. I have made numerous attempts to have a constructive discussion and collaboration, e.g. by suggesting applying the kinds of tests VS is proposing within a physically realistic framework.

    You ask an interesting (rhetorical?) question: “How to validate our non-professional opinions without professionals?” I wrote a post about how non-professionals can filter information about complex topics. I wonder what filter you’re applying in deciding who to trust?

    Igor Samoylenko,

    Thanks for your thoughtful remarks. The links to DeWitt’s analysis are very relevant indeed. Clearly, the level of certainty claimed by some about this whole unit root story is not warranted in the least.

    Funny how people lament climate scientists for appearing too certain, and in the same breath are ready to accept as true any straw they can cling to if it can be used to argue against the scientific consensus (and by extension against emission reduction policies).

  1728. Bob_FJ Says:

    CORRECTION:
    Instead of my penultimate Para;
    This version of the PDO shows cooling in the period ~1950 through ~1975, but unfortunately it is hard to conceive that it has more effect in the NH than the SH.
    PLEASE READ;
    This version of the PDO shows cooling in the period ~1950 through ~1975, which may help to explain the greater cooling in the SH, although there are some discussions around on its validity and calibration

  1729. Bob_FJ Says:

    Marco, Reur April 12 at 11:33:

    Geez, one can’t trust the media any more; I don’t know what’s happening; it‘s a travesty!
    On the other hand it can be quite amusing.

    For instance, I chuckled over one report that tourists to Alaska were horrified to see a male polar bear kill and eat a cub that was likely fathered by a different daddy bear, supposedly as a consequence of global warming. Never mind that this behaviour is typical of many patriarchal carnivores and omnivores, such as lions and some alpha males in primate clans.
    Then there are many reports (and videos) of calving of icebergs from ice-shelves as proof positive of AGW. Never mind that calving is a mechanical failure arising mainly from tidal and wave action. Remember the Titanic? Remember the history of sightings of icebergs from the shore in New Zealand about 100 years ago?

    And then there’s….. oh will that do?

  1730. HAS Says:

    Bart

    The hypotheses that “the [GISS] temps after 1935 are either just a linear extrapolation of, or just a stochastic process stemming from the pre-1935 temps” are both testable, and the tests show that the former fails and the latter cannot be rejected at 95% confidence levels.

    Can I go way back in this thread and remind you that having established this, the interesting question for science is how we now add in additional information in a way that is statistically robust.

    My last couple of posts have been largely about helping people who are having difficulty understanding that statistical inference (on which most of the science involved in climate change is based) has its own laws that shouldn’t be violated. I have not been attempting to promote any particular world view. I should add, if it gives any comfort, that I probably understand and respect physics more than I do statistics, but needs must (and I hasten to add my ability in both is equally limited).

    If you could acknowledge the validity of statistical theory this thread could probably move on to quite a productive discussion about where to now as John Whitman has suggested, and thereby leave some of the cheer leaders on both sides behind.

  1731. David Stern Says:

    Obviously VS hasn’t read all our papers where we/I address these issues of different series being I(2) vs. I(1), I(0), etc. In particular Stern and Kaufmann, published in Climatic Change (2000), where I use structural time series methods to model the temperature series as stationary and non-stationary components. I prefer, though, my multi-cointegration papers that include ocean heat content data. This is the CSDA 2006 paper and the unpublished 2005 Working Paper, which has a two-layer ocean.

  1732. HAS Says:

    David

    Just one comment from you would be useful, given some of the more basic debates that have been occurring here. Putting aside the particular detail on the appropriate time series methods to model the temperature series (and perhaps other climate-related series), is it fair to say that cointegration/multi-cointegration techniques are required when dealing with these time series, rather than regression techniques?

  1733. Bart Says:

    HAS,

    The former hypothesis (linear extrapolation of pre-1935 trends) is one that nobody believes in, so its refutation is meaningless.

    The latter hypothesis has such wide bounds (to both increasing and decreasing temperatures!) that it cannot be said to possess any ‘skill’. That’s why I referred to it as in practice being an ‘anything goes’ prediction (though I’m aware that it wasn’t specified as such, and that there still are bounds, however wide).

    The implicit assumption of this hypothesis is that from 1880 to 1935 there was no deterministic trend, i.e. no climate forcing acting on the system. But in fact there was, namely a combination of changes in solar, GHG and volcanic forcing. So the temp change over the test period is implicitly assumed to be stochastic/random, whereas in fact it was partly deterministic/caused by a change in radiative forcing. You’d have to either account for that and try to isolate the true stochastic part, or find a time period where the net forcing is close enough to zero (that’s hardly possible though).

    To claim that the global avg temp might as well have decreased 0.7 degrees as increased 0.7 degrees since preindustrial times flies in the face of basic physics, namely that the planetary temperature is governed (a.o.) by the planetary energy balance, and that this balance has substantially changed over the past 100 or so years due in large part to anthropogenic climate forcings, with a bit of help from natural climate forcings.

    It’s like claiming that although I’ve eaten much more than my body needed over the past twenty years, my chances of having gained or lost weight are equal nevertheless, and it’s just a coincidence that I’ve gained weight. No, it’s not a coincidence. It’s physics (or biology).

    I’ll repeat from Ramanathan and Feng (though it’s textbook stuff really):

    So the process of the net incoming (downward solar energy minus the reflected) solar energy warming the system and the outgoing heat radiation from the warmer planet escaping to space goes on, until the two components of the energy are in balance. On an average sense, it is this radiation energy balance that provides a powerful constraint for the global average temperature of the planet.

    If you want to tackle the question whether the hypothesis of AGW is correct, then a good start would be to look at e.g. the model predictions (rather than some made-up linear extrapolation) and how they stack up against the observations. Or use the net climate forcing together with the climate response function (see e.g. p17 of Hansen’s presentation) and compare it to the temp record, preferably corrected for known causes of internal variability such as ENSO and PDO. That’s a hypothesis worth testing.
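    As a sketch of that second approach (all numbers here, the forcing path, response time and sensitivity, are made-up placeholders rather than Hansen’s actual values):

    ```python
    # Sketch: convolve increments of a net-forcing history with a climate
    # response function to get a modelled temperature path for comparison
    # with observations. All numbers are illustrative placeholders.
    import numpy as np

    years = np.arange(1880, 2010)
    forcing = np.linspace(0.0, 2.5, len(years))      # placeholder net forcing, W/m^2

    lag = np.arange(len(years))
    response = 1.0 - np.exp(-lag / 15.0)             # assumed 15-year e-folding
    sensitivity = 0.75                               # assumed K per (W/m^2)

    dF = np.diff(forcing, prepend=forcing[0])        # yearly forcing increments
    temp = sensitivity * np.convolve(dF, response)[: len(years)]
    ```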

    To add to David Stern’s comment, the abstract of the CSDA paper he’s referring to is here.

  1734. Marco Says:

    @HAS and DLM:
    I made my comment in full recognition of the discussion here, but it seemed to me DLM was desperately clinging to the false interpretation of what Jones said.

    Oh, and DLM: Jonathan Leake is known as the guy who is apparently incapable of understanding English if it is not clearly written out for him. As a result, he claimed Mojib Latif predicted a 30-year cooling (no, he didn’t), that the Keenlyside et al article predicted the same (errr… not even close), and that one Google search used x amount of CO2 (overestimated by a large factor); he misrepresented what Dawkins said about astrology, misrepresented a study about exams and in the process falsely accused Facebook of causing failing exams, misrepresented a study about magnetism, misrepresented what Simon Lewis said about the IPCC report (resulting in a very lengthy complaint to the PCC), and misquoted Pielke Jr and was told as much by Pielke himself (and two months later claimed he had never ever been accused of misquoting, before Lal claimed Leake misquoted him in the article you cite).

    In short: you might want to start being a little more skeptical about the sources you use. Jonathan Leake’s history of misrepresenting science and scientists is now well known.

  1735. Marco Says:

    @Bob_FJ:
    You may want to read stories in the Daily Mail with some more skepticism (even though the Polar Bear story wasn’t even Jonathan Leake’s).

    Nonetheless, while cannibalism in many bear species is common, there are situations which are likely to increase its incidence. Global warming resulting in lower fat reserves is one credible hypothesis.

  1736. manacker Says:

    Bart

    Not “confused” about “chicken and eggs” (a silly analogy BTW).

    Just “confused” about the fact that temp changed first and much later CO2, and that CO2 could not possibly have been the cause for temp change, despite the long-winded rationalization that tries to turn this around.

    This is not to say that I do not accept that CO2 is a GHG, that humans (in the more industrially developed and affluent countries, at least) emit CO2.

    The 800,000-year correlation is simply a poor example for you to parade out to “prove” that CO2 drives temp. Use another one that makes sense, Bart. This one does not.

    Max

  1737. cohenite Says:

    Here’s an interesting history of the WG1, FAQ 3.1 Fig 1 graph:

    The new math – IPCC version

    I also understand that Lindzen and Choi part 2 is soon to be available; this will, of course, confirm a low climate sensitivity and rebut the radiative [im]balance AGW argument.

  1738. HAS Says:

    Bart

    “The former hypothesis (linear extrapolation of pre-1935 trends) is one that nobody believes in, so its refutation is meaningless.”

    We agree, but the use of linear regression with this time series is endemic – if you were to regard anyone who made this assumption as OT and deleted their posts, I’d stop occasionally reminding commentators that I (and you) regarded them as misguided. OK?

    [snip a bit]

    “The implicit assumption of this hypothesis is that from 1880 to 1935 there was no deterministic trend, ie no climate forcing acting on the system.”

    No detectable climate forcing, and not just a hypothesis – one that can’t be rejected at 95% confidence. In your studies, if you can’t see it, what do you say? And if you reject a hypothesis at 95% confidence, what do you do next?

    “So the temp change over the test period is implicitly assumed to be stochastic/random, whereas in fact it was partly deterministic/caused for a change in radiative forcing. You’d have to either account for that and try to isolate the true stochastic part or find a time period where the net forcing is near close enough to zero (that’s hardly possible though).”

    If you can’t observe the effect that the physics suggests should be there, what do you do? Tell everyone that the observations were wrong, or go on and try to find an explanation? Why are you so stubborn about this?

    [snip another bit]

    “It’s like claiming that allthough I’ve eaten much more than my body needed over the past twenty years, my chances of having gained or lost weight are equal nevertheless, and it’s just a coincidence that I’ve gained weight. No, it’s not a coincidence. It’s physics (or biology). “

    No, your analogy is wrong. You haven’t established the nature of your weight time series to find out what its underlying process is, and from that the appropriate statistical techniques to use to determine what might have been correlated with/caused your weight change. Trust me, you can accept statistical theory and not reject physics – in fact, to be credible you need to accept both.

    [snip some obvious stuff]

    “If you want to tackle the question whether the hypothesis of AGW is correct, then a good start would be to look at eg the model predictions (rather than some made-up linear extrapolation) and how they stack up to the observations. Or use the net climate forcing together with the climate response fucntion (see e.g. p17 of Hansen’s presentation) and compare it to the temp record, preferably corrected for known causes of internal variability such as ENSO and PDO. That’s a hypothesis worth testing.”

    I can’t see why, if I’m interested in the professional pursuit of science, I need to question any hypothesis about AGW. I want to know what’s going on in the climate. As I’ve noted, this depends upon respecting statistics (and physics). Methodologically speaking, climate models are problematic, and as I’ve noted before, their validation falls foul of some of the statistical issues raised in this thread.

    If you want to support Hansen in his particular campaign to convince people to take action, that’s OK, and you may both regard this as a just cause, but it’s not science. Science (and Nature) may end up somewhere else, and that has to be OK. I’m not saying this because I think there is no problem and no risks here; I say it to encourage dispassionate enquiry on your part.

  1739. Bart Says:

    HAS,

    No detectable climate forcing, and not just a hypothesis – one that can’t be rejected at 95% confidence. In your studies, if you can’t see it, what do you say? And if you reject a hypothesis at 95% confidence, what do you do next?

    There has been a detectable climate forcing. And the stochastic hypothesis is A) based on false assumptions and B) makes an unfalsifiable and thus meaningless prediction (of course an ‘anything goes’ hypothesis is not rejected, but it’s not of any use either). No *credible* hypothesis has been rejected.

    “If you can’t observe the effect that the physics suggests should be there what do you do?”

    The effect that the physics suggests hasn’t been tested in this thread at all. I’ve been pointing out repeatedly how one could go about that, and it has been repeatedly ignored. What the physics suggests is embodied in e.g. GCMs, and they seem to simulate the observed temp change rather well. But surely that would be worth testing.

    “No your analogy is wrong. You haven’t established the nature of your weight time series to find out what its underlying process is, and from that the appropriate statistical techniques to use to determine what might have been correlated with/caused your weight change.”

    My analogy is correct. It is well established that my weight depends on the balance between my energy intake and my energy expenditure. Likewise for climate; see the quote I provided. It’s a direct consequence of conservation of energy. Why is it so difficult to accept that?

    “Trust me you can accept statistical theory and not reject physics – in fact to be credible you need to accept both.”

    I am glad we agree. I look forward to you proposing to use sound statistical techniques which are based on assumptions that are physically plausible and that don’t contradict basic physics.

    “I can’t see why if I’m interested in the professional pursuit of science I need to question any hypothesis about AGW.”

    ??? That seems to be your raison d’etre here? You claim that it’s been falsified that temps went up due in part to GHG emissions, though your proof doesn’t stand up to scrutiny. I point out an avenue that could in principle lead to both scientific insight (rather than wishful thinking) and refutation of an important part of established science and then you say you’re not interested?

    “I want to know what’s going on in the climate. As I’ve noted this depends upon respecting statistics (and physics). Methodologically speaking climate models are problematic, and as I’ve noted before their validation falls foul of some of the statistical issues raised in this thread.”

    I’d suggest removing the brackets around “(and physics)”; then I agree. Climate models represent what we think we know physically about the climate system, in a parameterized form. Handwaving that you don’t trust them doesn’t convince me of anything, except perhaps as a sign of bias on your part. Their validation hasn’t even been touched upon by VS (notwithstanding my repeated attempts to move in that direction). And in your previous paragraph you said that you weren’t interested in it in the first place. So much for coherence.

    “If you want to support Hansen in his particular campaign to convince people to take action, that’s OK, and you may both regard this as a just cause, but it’s not science. Science (and Nature) may end up somewhere else, and that has to be OK. I’m not saying this because I think there is no problem and no risks here; I say it to encourage dispassionate enquiry on your part.”

    ??? I pointed to a scientific point made by Hansen, and you start invoking a ‘particular campaign to convince people to take action’? What does that have to do with anything we’re discussing? I’m all for dispassionate inquiry. Preferably without having a predetermined preference for what the outcome should be.

  1740. Bart Says:

    Manacker,

    If you accept both that CO2 is a GHG and that CO2 responds to temp change as a positive feedback, then the circle is round. The comparison with chickens and eggs is quite apropos, I think. The *initial* temp rise (before CO2 went up) was not caused by CO2, but from the time CO2 went up, it (by its nature) contributed to the further warming (lasting a few thousand years).

    See also http://www.realclimate.org/index.php/archives/2004/12/co2-in-ice-cores/

  1741. Tim Curtin Says:

    Bart et al. Let me try again. You remain convinced that we are experiencing runaway increases in global temperature, even if unit roots are demonstrated, and that drastic reductions in world economic growth are thereby mandated, even when Vogelsang & Franses show, using a multivariate structure for trend-stationary time series variables: “We find that the January and March temperatures have the same upward trend [in the Netherlands from 1706], that the September temperature has decreased and that the temperatures in the other months do not have a trend. Hence, only winters in The Netherlands seem to get warmer”.* Yet GISS uses such data to claim “dangerous” warming by ignoring the 9 months of none.

    Please explain why warmer winters in your homeland, and at most other, even lower, latitudes in both NH and SH (including my own Canberra), would be “dangerous”. I spent years working in very hot places like Sudan and Egypt; believe me, their annual yields for cane sugar etc. make yours in Holland look pathetic (unless grown in greenhouses). Have you ever grown cane sugar in your winter? Good luck! Egypt now has 2.5 harvests a year for many key crops, despite its c. 2X average Dutch temperatures. How many crops p.a. does your family manage in their open fields?

    This debate has until now been all too academic. What does it mean for real people in real places outside white men’s academies? Famine, that’s what. Reducing atmospheric CO2 to Hansen’s 350 ppm would reduce world food production to the level of about 1988, with c. 1 billion extra people to feed since then. Congratulations.

    *Testing for Common Deterministic Trend Slopes, Timothy J. Vogelsang (Cornell) and Philip Hans Franses, Econometric Institute, Erasmus University Rotterdam:

    Click to access commontrends.pdf
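
    A minimal sketch of the per-month trend idea (Python; synthetic data for illustration only – this is not the Vogelsang & Franses multivariate test itself, which is more robust):

        # Fit an OLS slope to each calendar month separately and note which
        # slopes differ from zero. `monthly` is a hypothetical (n_years, 12)
        # array of monthly mean temperatures.
        import numpy as np
        from scipy import stats

        def monthly_trends(monthly):
            years = np.arange(monthly.shape[0])
            results = {}
            for m in range(12):
                fit = stats.linregress(years, monthly[:, m])
                # two-sided p-value for H0: slope == 0
                results[m + 1] = (fit.slope, fit.pvalue)
            return results

        # Synthetic example: only January gets a deterministic trend.
        rng = np.random.default_rng(0)
        fake = rng.normal(size=(150, 12))
        fake[:, 0] += 0.01 * np.arange(150)
        for month, (slope, p) in monthly_trends(fake).items():
            print(f"month {month:2d}: slope {slope:+.4f}/yr, p={p:.3f}")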

  1742. manacker Says:

    Bart

    Not to belabor a point, but what you are saying is based on theory, not on physical observations.

    You cited an article showing the CO2 temp correlation over the past 800,000 years as observational evidence for the premise that CO2 drives temp.

    Yet the article presents charts based on ice-core data, which show a 1000-year lag, with CO2 lagging temperature. These data do not provide observational evidence (i.e. empirical data) to support the premise that CO2 drives temperature, so it is a bad example for you to use.

    You then interject the theory, “something else may have started the warming, but later (after over 1,000 years) CO2 took over as the driver”.

    This is not empirical evidence for your premise that CO2 drives temp, but theory.

    So it is a poor example for you to cite to show that “CO2 drives temperature” (which is the point you were trying to make).

    Why is the “chicken/egg” analogy weak? Chickens create eggs; eggs become chickens. Ergo chickens create chickens (and eggs are only an intermediate step in the process, in fact they are embryonic chickens).

    The above is all about logic – not about GH theory. I hope you can see the difference.

    Max

  1743. manacker Says:

    Bart

    Changing subject slightly to something else you posted.

    The LW GH contribution to earth’s energy (im)balance is all very interesting, but only part of the story. The big unknown appears to be the effect of clouds.

    In another paper Ramanathan et al. lamented that we only have model simulations to estimate the net impact of clouds with warming, but no empirical data, so we do not even know whether the net feedback from clouds is positive or negative.

    IPCC conceded that “cloud feedbacks remain the largest source of uncertainty”.

    Spencer et al. did provide us some empirical data based on actual physical observations from CERES, which showed that the net overall feedback from clouds, i.e. SW + LW, is strongly negative over the tropics. Lindzen + Choi found similar results from ERBE observations (which Trenberth et al. attempted to refute in a blog, which, in turn, has again been refuted by another blogger).

    Norris found a similar result over the mid-latitudes, using a different set of long-term data.

    Recently Trenberth stated in an interview that the “missing energy” of the past several years appears to be disappearing into outer space, with clouds possibly acting as a “natural thermostat”.

    So it appears that reflected incoming SW radiation from increased low altitude clouds is playing an important role in the total energy balance, and that IPCC’s “largest source of uncertainty” may be getting cleared up.

    Max

  1744. Bart Says:

    Max,

    More longwave radiation is reflected both to the surface and to outer space (as measured by satellites) under cloud-free conditions, at wavelengths where the various GHGs (which are increasing in concentration) are absorbing. The clouds ain’t doing it (though they might be doing something else).

  1745. dhogaza Says:

    I suggest that ‘far view’ in my context means physicists with no baggage to defend wrt to climate science interests . . . . pro or con.

    The physics that B&R have “overturned” in essence lies outside climate science. Climate science is a user of the underlying physics, in much the same way that the underlying physics rests to some degree on higher mathematics.

    Physics at this level has no baggage to defend wrt climate science interests. They might be wondering why their CO2 lasers and the like continue to work, though …

  1746. DLM Says:

    Marco,

    Jonathan who? What was that little tirade about?

    The article I provided a link to was written by somebody named Rose, and directly quoted the so-called ‘scientist’ who helped the IPCC perpetrate HimalayaGate. He did a very bad thing, and you cannot bring yourself to admit it. Your side wails plaintively about the evil dishonest denialist machine that is picking apart your science, but you are loath to criticize your own. There is a name for that, and despite all attempts at coverup, the public gets it. The credibility of the climate dogma is circling the drain.

  1747. Willem Kernkamp Says:

    Bart,

    “Funny how people lament climate scientists for appearing too certain, and in the same breath being ready to accept as true any straw that they can cling on if it can be used to argue against the scientific consensus (and by extension against emission reduction policies).”

    There is no consensus. Global temperature is currently well below the lower bound of expected temperatures as calculated by the models. Quite possibly there is negative feedback from the water cycle. Let’s find out before we rush into nuclear power.

    Will

  1748. DLM Says:

    HAS says: “If you can’t observe the effect that the physics suggests should be there what do you do?”

    In ‘private’ emails to fellow climate scientists, they call that a travesty. In public, they can explain anything.

  1749. Frank Says:

    Bart says:

    (To Frank and Willem) – “GHG were an amplifying feedback which substantially (being responsible for approximately half of the total forcing) contributed to the final temp change between glacial and interglacial periods. I’m not the one trying to handwave basic physics away.”

    (To Manacker) – “If you accept both that CO2 is a GHG and that CO2 responds to temp change as a positive feedback, then the circle is round. The comparison with chicken and eggs is quite apropos I think. The *initial* temp rise (before CO2 went up) was not caused by CO2, but from the time CO2 went up, it (by its nature) contributed to the further warming (lasting a few thousands of years).”

    Neither Manacker, Willem nor I are trying to “handwave” away the basic physics. None of us, including our gracious host, knows the extent of solar insolation, relative humidity or cloud cover during past glacial periods, but we do know that CO2 significantly lagged temperature. We’re also pretty sure that Henry’s Law was in effect then, as well. What we don’t know (then or now) are the feedbacks. To date, attempts to model these a priori in a way that makes CO2 the major (only?) determinant of climate over all time scales require that scientists like Alley slide up and down Occam’s razor “round the circle”.

  1750. sod Says:

    “Bart,

    I was able to find that website on my own, some time ago. I really don’t need a grammar school primer on climate science Bart. And if I did, a dumbed-down version of canned RealClimate alarmist propaganda would not be helpful.

    I get that CO2 is a GHG. I get the anthropogenic angle. What I am looking for is some solid proof for the theory that burning fossil fuels is going to result in catastrophic global warming. It ain’t on that website Bart. And if I wanted to insult your intelligence, as you have mine, I could provide you with a half-dozen links to ‘denialist’ blogs that plausibly refute everything that clown says.”

    the difference between the clowns on our side and the ones on your side is knowledge.

    http://www.skepticalscience.com/about.shtml

    most of our “clowns” have a scientific background. most of your “clowns” are professional clowns.

  1751. DLM Says:

    Frank,

    Are you Frank Lasner, who authored this very interesting post on WUWT?

    CO2, Temperatures, and Ice Ages

    “One thing is for sure:

    “Other factors than CO2 easily overrule any forcing from CO2. Only this way can the B-situations with high CO2 lead to falling temperatures.”

    This is essential, because the whole idea of placing CO2 in a central role for driving temperatures was: “We cannot explain the big changes in temperature with anything else than CO2″.

    But the simple fact is: “No matter what rules temperature, CO2 is easily overruled by other effects, and this CO2-argument falls”. So we are left with graphs showing that CO2 follows temperatures, and no arguments that CO2 even so could be the main driver of temperatures.

    – Another thing: When examining the graph fig 1, I have not found a single situation where a significant rise of CO2 is accompanied by significant temperature rise – WHEN NOT PRECEDED BY TEMPERATURE RISE. If the CO2 had any effect, it should certainly also work without a preceding temperature rise?! (To check out the graph on fig 1 it is very helpful to magnify.)

    Does this prove that CO2 does not have any temperature effect at all?

    No. For some reason the temperature falls are not as fast as the temperature rises. So although CO2 certainly does not dominate temperature trends: Could it be that the higher CO2 concentrations actually are lowering the pace of the temperature falls?”

    Makes a lot more sense than the chicken and egg story.

    And many of the comments were also very interesting. For example see: George E. Smith (11:09:07)

    Maybe sod will give us his insightful rebuttal. Or Bart.

  1752. DLM Says:

    sod,

    I always find you amusing. Thank you for that.

  1753. DLM Says:

    PS

    sod,

    Is Dr Murari Lal one of yours, or one of ours?

  1754. Frank Says:

    DLM,

    No, I am not Frank Lasner, just an engineering grad who has enough background in math, stats, physics, thermo and (fluid) mechanics to respect the science and be skeptical of folks like “sod”. Philosophically, I liken the whole AGW debate to the one over the age of the earth, which, you may recall, raged on for many years. This debate, of course, has far more importance with respect to our existence, hence my skeptical precondition that a “heroic” hypothesis (AGW) be validated by substantial evidence in advance of societal action – F

  1755. manacker Says:

    Bart

    You keep talking about LW radiation, which is fine. That’s what the GHE is all about.

    But ALL of the energy supplied to earth comes from incoming SW radiation.

    A portion of this energy is reflected back into space by low altitude clouds (and other surface albedo).

    The studies, which I cited, show that this reflected SW energy from clouds increases with higher SST, i.e. that low altitude cloud cover increases.

    The studies also show that the COMBINED LW + SW radiation leaving our planet increases with higher SST, effectively resulting in a net negative feedback from clouds.

    As Norris puts it:

    Results show that upper-level cloud cover over low and midlatitude oceans decreased between 1952 and 1997, causing a corresponding increase in reconstructed OLR [outgoing long wave radiation]. At middle latitudes, low-level cloud cover increased more than upper-level cloud cover decreased, producing an overall rise in reconstructed RSW [reflected short wave radiation] and net upward radiation since 1952.

    This does not negate the GH effect of CO2 at all, Bart.

    It simply tells us that the observed net feedback from clouds is negative, rather than positive as estimated by all the model simulations cited by IPCC, and that the estimated 2xCO2 climate sensitivity per IPCC of 3.2±1.1C is too high. That’s all.

    Max

  1756. Marco Says:

    @DLM: I forgot, it was Rosegate (same newspaper family as Leakegate, though).
    According to Lal, he never said what Rose attributed to him:
    http://dotearth.blogs.nytimes.com/2010/01/19/heat-over-faulty-un-view-of-asian-ice/
    Of course, Rose claims he did. So we have to take a journalist at a daily rag at his word that a lead IPCC author admitted they put something in they knew was wrong. It would be hilarious, if it weren’t such a serious matter.

  1757. HAS Says:

    Bart

    “There has been a detectable climate forcing.”

    GHG levels as measured have increased, but in this particular time series are you saying you can detect that impact, or are you referring to additional information?

    “And the stochastic hypothesis is A) based on false assumptions and B) makes an unfalsifiable … prediction”

    The false assumption is what? The stochastic hypothesis is falsifiable, and given a few more years we might see it rejected in the data if you are right about AGW. But the salutary thing about this analysis is that it reminds us just how much variation there is in the data. You can’t claim greater precision just because you want it – I commented earlier on attempts to do this in a paper by treating GCMs as reality in a spurious attempt to make things more accurate.

    “The effect that the physics suggests hasn’t been tested in this thread at all. I’ve been pointing out repeatedly how one could go about that and it has been repeatedly ignored. What the physics suggest is embodied in e.g. GCM’s and they seem to simulate the observed temp change rather well. But surely that would be worth testing.”

    I agree with your first statement, but my reading of this thread is that the reason the next step hasn’t occurred here (although a number of papers attempting this have been referenced) is that the implications of using statistics properly are not accepted. In this regard GCMs are not the answer (they are a bit like using Quantum Mechanics to model phenomena on a global scale). Also, I think you missed the point I made some time ago. You don’t choose a model that produces the observed temperature change, you use a model that produces the same underlying process; forcing GCMs to reproduce the temperature just embeds bias into them.

    “My analogy is correct. It is well established that my weight depends on the balance between my energy intake and my energy expenditure. Likewise for climate; see the quote I provided. It’s a direct consequence of conservation of energy. Why is it so difficult to accept that?”

    Very easy to accept it as a hypothesis; the next step is to do the statistics to demonstrate that the two systems are similar in some way. It’s like looking at graphs: useful for developing hypotheses, but not a convincing method to use as a proof.

    “You claim that it’s been falsified that temps went up due in part to GHG emissions, though your proof doesn’t stand up to scrutiny.”

    Ehem, please reread carefully what I have actually been saying.

    “I point out an avenue that could in principle lead to both scientific insight (rather than wishful thinking) and refutation of an important part of established science and then you say you’re not interested?”

    I used the word “interested” in relationship to “science”. In my view the preoccupation with the AGW hypothesis is pushing climate science well beyond its foundations, and it is probably time to pour some concrete.

    “??? I pointed to a scientific point made by Hansen, and you start invoking a ‘particular campaign to convince people to take action’? What does that have to do with anything we’re discussing? I’m all for dispassionate inquiry. Preferably without having a predetermined preference for what the outcome should be.”

    My comment was a bit snarky. I enjoy being pointed to scientific papers to help my understanding, I get a bit bored being pointed at presentation designed to advocate a particular point of view (which in my judgement was what Hansen was doing).

  1758. manacker Says:

    Bart

    Let’s take the cloud feedback one step further.

    IPCC tells us that all the models estimate a net positive cloud feedback.

    Based on this IPCC estimates a 2xCO2 climate sensitivity of 3.2C. [On one page of AR4 WG1 Ch.8 IPCC says 3.2±1.1C, on another 3.2±0.7C, but let’s just concentrate on the range mid-point for now.]

    Of this 3.2C, IPCC tells us on p.633 that 1.3C comes from a strong positive feedback from clouds as derived from current GCMs (i.e. ignoring cloud feedback, these same models estimate a climate sensitivity of 1.9C).

    We now have physical observations, which show us that the net cloud feedback is strongly negative, rather than strongly positive.

    This tells me that the 2xCO2 climate sensitivity is, by definition, well below 1.9C and most likely at or below 1.0C.

    In this case, the net impact of all feedbacks is that they cancel one another out, and the 2xCO2 CS equals the theoretical GH warming to be expected from a doubling of CO2 without any additional enhancement due to feedbacks.

    The GH theory and all that it entails is still valid. The only thing that has been invalidated by actual physical observations is the strongly positive net feedback from clouds.

    Max
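
    The arithmetic behind this, taking the quoted IPCC numbers at face value and (simplistically) treating feedback contributions as additive – real feedbacks combine nonlinearly, so this is illustrative only:

        s_total = 3.2   # 2xCO2 sensitivity incl. cloud feedback (deg C)
        s_cloud = 1.3   # contribution attributed to positive cloud feedback
        print(s_total - s_cloud)  # 1.9 deg C, the no-cloud-feedback figure
        # If the cloud term were negative rather than positive, the implied
        # sensitivity would sit below 1.9 C; "at or below 1.0 C" additionally
        # assumes a negative cloud term of roughly comparable magnitude.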

  1759. DLM Says:

    Frank,

    So you are not the Lasner Frank. You have made some sage comments that caused me to recall the post by Lasner, on WUWT. I wonder if any of the learned ones here will attempt to pooh-pooh that thorough presentation.

    Like yourself, I also would like to see some proof before I give up cows, cars, and most modern conveniences. And I don’t believe for a second that downgrading my standard of living is going to increase anyone else’s. These political schemes to level the economic playing field invariably result not in spreading the wealth, but in spreading poverty.

    Instead of their case getting stronger, new holes are being shot through the catastrophic AGW theory, at an accelerating rate. Some of the damaging fire is coming from their own trenches. Like the good Doctor Jones’ admissions in the recent interview with the BBC chap. And manacker just pointed this out:

    “Recently Trenberth stated in an interview that the “missing energy” of the past several years appears to be disappearing into outer space, with clouds possibly acting as a “natural thermostat”.”

    Trenberth’s guess makes a lot more sense than Alley’s rock thermostat theory.

  1760. DLM Says:

    Marco,

    I can’t believe that you are serious. The credibility of the IPCC is shot. Even journalists are more trusted.

  1761. Pat Cassen Says:

    manacker –
    My reading of Norris’ recent papers
    (http://scholar.google.com/scholar?as_q=clouds+feedback&num=10&btnG=Search+Scholar&as_epq=&as_oq=&as_eq=&as_occt=any&as_sauthors=Norris&as_publication=&as_ylo=1995&as_yhi=&as_sdt=1.&as_sdtp=on&as_sdts=5&hl=en)
    suggests that you are substantially overstating the case for negative cloud feedback.

  1762. Bob_FJ Says:

    HAS, you wrote in full:

    Bob, There are statistical tests (mentioned throughout this thread) you should use to test the independence of these [NH versus SH] series and to tell if they are the result of deterministic or stochastic processes, and ultimately how to draw valid statistical inferences from them. These techniques will give better information in the detail than just looking at the graphs, although the graphs will obviously help you form hypotheses to test.

    Thank you, but this does not answer the questions [a] through [d] in my comment here.

    It is a fact that the HADCRUT records for the NH and the SH are two entirely separate time series, of ~160 annual data points each, derived from thousands of twice-daily measurements. Whilst it is expected that there should be differences between the NH and SH because of some* different drivers, the curve shapes, with distinct peaks at ~60 year intervals, are characteristically the same. Furthermore, comparisons of the differences in magnitude seem reasonable given those known driver differences, although there are some questions concerning accuracy etc., as touched on in my comment here. Maybe the unexplained significant cooling between ~1940 and 1960, reducing through to ~1975, could be specifically studied statistically, but I can’t see that any dependable quality results would arise.

    Furthermore, your assertions on statistical tests seem to be controversial, as for example demonstrated in the interesting comment by Igor Samoylenko here.

    *(for instance more ocean area in the SH, and more land and industrialization in the NH, to name but a few),

  1763. cohenite Says:

    Mmm, so we’re onto clouds; Steve Short had some interesting things to say about those little things:

    “According to Pinker (2005), surface solar irradiance increased by 0.16 W/m^2/year over the 18 year period 1983 – 2001 or 2.88 W/m^2 over the entire period. This was a period of claimed significant anthropogenic global warming.

    This change in surface solar irradiance over 1983 – 2001 is almost exactly 1.2% of the mean total surface solar irradiance of recent decades of 238.9 W/m^2 (K, T & F, 2009).

    According to NASA, mean global cloud cover declined from about 0.677 (67.7%) in 1983 to about 0.644 (64.4%) in 2001 or a decline of 0.033 (3.3%). The 27 year mean global cloud cover 1983 – 2008 is about 0.664 (66.4%) (all NASA data)

    The average Bond Albedo (A) of recent decades has been almost exactly 0.300, hence 1 – A = 0.700

    It is possible to estimate the relationship between albedo and total cloud cover about the average global cloud cover and it is described by the simple relationship:

    Albedo (A) = 0.250C + 0.134 where C = cloud cover. The 0.134 term presumably represents the surface SW reflection.

    For example; A = 0.300 = 0.25 x 0.664 + 0.134

    This means that in 1983; A = 0.25 x 0.677 + 0.134 = 0.303

    and

    in 2001; A = 0.25 x 0.644 + 0.134 = 0.295

    Thus in 1983; 1 – A = 1 – 0.303 = 0.697

    and in 2001; 1 – A = 1 – 0.295 = 0.705

    Therefore, between 1983 and 2001, the known reduction in the Earth’s albedo A as measured by NASA would have increased solar irradiance by 200 x [(0.705 – 0.697)/(0.705 + 0.697)]% = 200 x (0.008/1.402)% = 1.1%

    This estimate of 1.1% increase in solar irradiance from cloud cover reduction over the 18 year period 1983 – 2001 is very close to the 1.2% increase in solar irradiance measured by Pinker for the same period.

    Within the precision of the available data and this exercise, it may therefore be concluded that it is highly likely that Pinker’s finding was due to an almost exactly functionally equivalent decrease in Earth’s Bond albedo over the same period resulting from global cloud cover reduction.

    Hence surface warming over that period may be reasonably attributed to that effect.”

    And Pat, anyone who wants to underestimate the negative effect of clouds should read this paper which shows what happens when there are no clouds:

    http://www.scienceonline.org/cgi/content/abstract/320/5873/195
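
    Steve Short’s arithmetic, as quoted above, is easy to reproduce (Python; the data values and the fitted albedo relation are his, not independently verified here):

        def albedo(cloud_cover):
            # quoted empirical fit: A = 0.250*C + 0.134
            return 0.250 * cloud_cover + 0.134

        # fraction of solar irradiance absorbed, rounded to 3 decimals
        # as in the quote
        absorbed_1983 = round(1 - albedo(0.677), 3)   # 0.697
        absorbed_2001 = round(1 - albedo(0.644), 3)   # 0.705

        change_pct = 200 * (absorbed_2001 - absorbed_1983) \
                         / (absorbed_2001 + absorbed_1983)
        print(round(change_pct, 2))  # 1.14 -> the quote's "1.1%";
                                     # Pinker measured ~1.2%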

  1764. manacker Says:

    Pat Cassen

    I only quoted what Norris wrote in:
    ftp://eos.atmos.washington.edu/pub/breth/CPT/norris_jcl04.pdf

    According to the study the overall impact seems to have been one of increased total outgoing LW+SW radiation since 1952 (as temperature increased slightly). This appears to have been bit more pronounced at mid-latitudes than at low latitudes.

    Max

  1765. Bob_FJ Says:

    Cohenite,
    Thanks for your interesting comment here

    Sod and Marco especially:
    Would you please not immediately dismiss the article by virtue of its source as is your usual wont. Instead, please carefully examine its references and quotes and graphs which originate from AR4, WG1.
    Any comments?

  1766. cohenite Says:

    This business of cloud forcing is a crucial aspect of the AGW debate. My understanding is that surface SW CRF at BOA is ~ -0.8 to -1.0 W/m^2/% cloud cover and surface LW CRF at BOA is ~ +0.6 W/m^2/% cloud cover. Therefore the net CRF at BOA is ~ -0.2 to -0.4 W/m^2/% cloud cover.

    Slide 2 is informative:

    Click to access 7_Dupont.pdf

    As is the Dong paper;

    http://cat.inist.fr/?aModele=afficheN&cpsidt=17842824

    Clouds are negative feedbacks/forcings. And as the Pinker study shows cloud reduction is sufficient, through increased BOA SW receipt, to explain recent warming.

    By way of balance the Clement paper;

    Click to access observational-and-model-evidence-for-positive-low-level-cloud-feedback.pdf

    Which concludes that clouds are a +ve feedback does so on the basis of the following logic:

    “This observational analysis further indicated that clouds act as a positive feedback in this region on decadal time scales. The observed relationships between cloud cover and regional meteorological conditions provide a more complete way of testing the realism of the cloud simulation in current-generation climate models. The only model that passed this test simulated a reduction in cloud cover over much of the Pacific when greenhouse gases were increased, providing modeling evidence for a positive low-level cloud feedback.”

    That is, assume increases in ghg’s cause a REDUCTION in cloud cover!
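
    Making the arithmetic at the top of this comment explicit (the values are the commenter’s understanding, shown only for clarity):

        sw_crf = (-1.0, -0.8)   # W/m^2 per % cloud at BOA (shortwave, cooling)
        lw_crf = 0.6            # W/m^2 per % cloud at BOA (longwave, warming)
        net = tuple(sw + lw_crf for sw in sw_crf)
        print(net)              # (-0.4, -0.2): net cooling per % cloud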

  1767. cohenite Says:

    Getting back to WG1, FAQ 3.1 fig 1 and the issue of linearity generally, this version of Fig 1 is interesting:

    About the IPCC views on WG1 and linearity Professor Glassman says this:

    “IPCC says of the trend method,

    Another low-pass filter, widely used and easily understood, is to fit a linear trend to the time series although there is generally no physical reason why trends should be linear, especially over long periods. The overall change in the time series is often inferred from the linear trend over the given time period, but can be quite misleading. Such measures are typically not stable and are sensitive to beginning and end points, so that adding or subtracting a few points can result in marked differences in the estimated trend. Furthermore, as the climate system exhibits highly nonlinear behaviour, alternative perspectives of overall change are provided by comparing low-pass-filtered values (see above) near the beginning and end of the major series.

    As some components of the climate system respond slowly to change, the climate system naturally contains persistence. AR4, Appendix 3.A Low-Pass Filters and Linear Trends, p. 336

    IPCC is correct to look for physical reasons for its modeling, but seems to confuse the real world with its models. The real world has no coordinate systems, parameters, or values. It has neither infinities nor infinitesimals. It cannot have the properties of scale or linearity. These are all manmade concepts that lead to valid models, that is, models with the ultimate scientific property of predictive power. These are all properties of models of the real world.

    Mathematical models have poles, meaning singularities at which a dependent parameter becomes infinite or undergoes perpetual oscillation. These are instabilities, and a stable system or a stable state is always finite, and any oscillations are damped. The most violent of natural phenomena, supernova in astronomy, and volcano eruptions in geology, are the largest witnessed events in their fields, but in the end are finite in energy, in time, and in space. Man has observed nothing infinite or infinitesimal. Things become infinite in models that employ rates or densities in which the denominators vanish. Nature doesn’t give a fig about man’s models.

    IPCC is not particular enough about definitions, as discussed above or in the Journal for equilibrium, residence time, cloud albedo, and now for stable or linearity. It defines nonlinear as the absence of a “simple proportional relation between cause and effect.” AR4, Glossary, p. 949. The word simple qualifies and blunts a promising definition. But the existence ever of cause and effect is an axiom in science, notwithstanding some painfully obvious counterexamples. Linearity has a precise definition in mathematics and system theory. A system is linear if the response to a linear combination of inputs is that same linear combination of the individual responses. What might be linear in, say, cylindrical coordinates, becomes nonlinear in Cartesian coordinates. The Beer-Lambert Law states that absorbance by a gas is linear in the product of concentration and the distance traveled (from the probability of a collision), but it also expresses gas radiative forcing as the non-linear complement of an exponential in gas concentration. A linear relationship in the macroparameters of thermodynamics is likely nonlinear on smaller scales, that is, in mesoparameter or microparameter spaces. Linearity is a state of mathematical being, and is not continuously measurable. It exists or not. A system cannot be “highly nonlinear”. That “the climate system exhibits highly nonlinear behavior” (AR4, Appendix 3A, p. 336) is doubly meaningless.

    Similarly, although the climate system is highly nonlinear, the quasi-linear response of many models to present and predicted levels of external radiative forcing suggests that the large-scale aspects of human-induced climate change may be predictable, although as discussed in Section 1.3.2 below, unpredictable behaviour of non-linear systems can never be ruled out. TAR, ¶1.2.2 Natural Variability of Climate, p. 91.

    Nothing can be highly nonlinear, and nothing in the real world can be nonlinear. Models, on the other hand, will always be linear or not. Furthermore, linearity is not a prerequisite for predictability as IPCC suggests. Radiation transmission through a gas is nonlinear in concentration or distance as predicted by the Beer-Lambert Law. Outgassing of CO2 from the ocean to the atmosphere is nonlinear in atmospheric partial pressure according to Henry’s Law.”
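
    The linearity definition Glassman invokes can be made concrete (Python; a sketch using his own Beer-Lambert example – absorbance is linear in concentration times path length, transmitted intensity is not):

        import numpy as np

        k = 0.7                                     # absorption coefficient
        absorbance = lambda cl: k * cl              # linear in c*l
        transmittance = lambda cl: np.exp(-k * cl)  # exponential complement

        # f is linear iff f(a*x + b*y) == a*f(x) + b*f(y)
        x, y, a, b = 1.0, 2.5, 0.3, 1.4
        print(np.isclose(absorbance(a*x + b*y),
                         a*absorbance(x) + b*absorbance(y)))        # True
        print(np.isclose(transmittance(a*x + b*y),
                         a*transmittance(x) + b*transmittance(y)))  # False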

  1768. Bob_FJ Says:

    Marco you wrote:

    You may want to read stories in the Daily Mail with some more skepticism (even though the Polar Bear story wasn’t even Jonathan Leake’s).
    Nonetheless, while cannibalism in many bear species is common, there are situations which are likely to increase its incidence. Global warming resulting in lower fat reserves is one credible hypothesis.

    You rather missed the point I was making: that the media has been wildly posting stuff in great volume that attributes just about anything to AGW such as reducing perfume in flowers, or fruit bats dropping dead out of trees on hot days around Melbourne.

    BTW, the Daily Mail is only visited by me if I’m referred to it, and via Google, if you meant this, I had not seen that before. I think the first part is regurgitated from something I read that is much older than December 2009.

    You could try this ABC (Oz radio) version that quotes in part:

    However, an Inuit leader has told Canadian broadcaster the CBC the incidents are not that unusual and should not be associated with starvation.
    Kivalliq Inuit Association president Jose Kusugak described the connection with climate change as “absurd”.
    He said a male polar bear eating a cub was a normal occurrence.

    Or, there is this.

    In addition to your credible hypothesis there are also:
    * If it gets warmer, there is less need for large fat reserves, for several reasons.
    * If it gets warmer, other food sources (than seals) become more available
    * If seals are forced to give birth on rocks, they become easier prey.

    You should really try to be more sceptical of the media.

  1769. David Stern Says:

    “David

    Just one comment from you that would be useful given some of the more basic debates that have been occuring here. Putting aside the particular detail on the appropriate time series methods to model the temperature series (and perhaps other climate related series), is it fair to say that co-integration/mulit-co-intergration techniques are required when dealing with these time series rather than regression techniques?”

    Certainly the time series properties of all these variables need to be taken into account when testing for trends, modeling etc. I’ve used methods that seem to me to be plausible approaches. There may be other plausible approaches and arguments. Simple linear regression methods that don’t take the potential problems into account are certainly hazardous. But I wouldn’t be too dogmatic about what the best approach is.

  1770. Marco Says:

    @Bob_FJ:

    Will you please not claim I just dismiss articles which have graphs from AR4, etc.? I dismissed articles from highly dubious sources that have a long history of making stuff up (as in the Daily Mail and The Times, and Jonathan Leake in particular). I also dismiss your simplistic interpretation of some of the newspaper articles.

    @DLM:

    I think it is you who has no credibility. Lal contradicted what Rose ascribed to him. The scientist has a credibility that FAR outweighs the journalist’s, especially when the journalist makes a claim that would involve fraudulent behavior by a large group of scientists, and in the face of open review of those same claims.

  1771. Marco Says:

    @Bob_FJ:

    I prefer not to base myself on newspaper articles, but on what scientists actually say. Newspaper articles are pretty good at going after the strawman, as in this case: claim someone said the cannibalism is due to global warming, and you’ll find *every* polar bear scientist contradicting that (no need for special interest groups).

    Regarding your other claims:
    1. the fat deposits are not just to keep warm, they are to survive the long period in which polar bears cannot hunt. As in “food storage”.
    2. Please tell us which sources of food those would be? You’d have to increase the temperature by a LOT to get other suitable food sources in sufficient amounts to enter the arctic.
    3. You’d first have to get much more warming before that becomes a necessity

  1772. Bart Says:

    HAS,

    I thought I was pretty clear last time. The false assumption is that the period 1880 – 1935 is purely governed by stochastic processes, whereas in reality there was already a change in climate forcing in that period (from solar, volcanic and GHG). And regarding B), the prediction bounds are so wide that it’s no wonder that the data fall within them (until now at least). Failure to reject such a hypothesis is meaningless.

    If GCMs are not the answer, then why use an extrapolated linear trend that nobody believes in? If you want to test whether temps go up according to theory, then presumably you’d want to test that theory? Or else, don’t make far-reaching conclusions about the theory that you haven’t even tested. You can’t have it both ways. That is the point I’ve been repeatedly making, and that’s been repeatedly lost on those whom it most concerns.

    Did you check Hansen’s presentation I linked to? Over 90% of it is science; only the last few slides are about his personal opinion re policy, where he’s always careful to explicitly point out that it’s his personal opinion (as opposed to science). He is a very good scientist, despite him having a different personal opinion on policy than you do.

  1773. Bart Says:

    All: Submit off topic comments to the open thread (or an applicable thread if there is one, eg on CRU). That includes rosegate, leakegate, climategate, McIntyre, cloud feedback, etc. This thread is about the observed temperature record, its statistical interpretation, and conclusions that can (or cannot) be drawn from that.

  1774. Bob_FJ Says:

    Bart,
    Sorry,
    My 8:09 crossed yours of 7:59.
    Marco and Sod,
    Do you want to take it to the open thread? Or?

  1775. Don Jackson Says:

    I (think I) understand why most climate science professionals are opposed to this level of statistical rigor.

    you think, eh?

    look at the graph again:…error bars…

    the VS way of doing an analysis leaves us with zero knowledge about temperature. it could go up by 1°C or down by the same, within a couple of decades.

    his statistical approach is not only false, as i and others have demonstrated above, it is also completely useless.

    Well, sod, I’ve seen indications that the “final analysis” might support your pre-determined conclusion; you’d have done yourself a favor, waiting for it.

    What I said (and VS chastised me for…) was this… (I think he misunderstood my gist. You’d never have got it.) But that’s alright: It wasn’t a crucial point; just one that would be needed, eventually: If you don’t like the statistics that have been shown to be appropriate and adequate, make your case. Why would you prefer inadequate and inappropriate means to reach your conclusions?

    Go ahead, sod, answer that question…

  1776. JGK Says:

    HAS,

    Could it be that Bart does not understand the role of the Null Hypothesis in statistical analysis? (See “And the stochastic hypothesis is A) based on false assumptions and B) …”.) It would explain a lot, and in particular why VS could make no headway.

  1777. HAS Says:

    Bart

    I think we are missing one another here.

    “The false assumption is that the period 1880 – 1935 is purely governed by stochastic processes, whereas in reality there was already a change in climate forcing in that period (from solar, volcanic and GHG).”

    I don’t understand this statement. JGK has suggested that you might not understand the role of the null hypothesis. He may be right, and we need to get past this point, so pardon the rather pedantic deconstruction of your statement.

    An assumption is something you make for the sake of argument. You assume something and then investigate the consequences. Assumptions are important tools for deduction, and in particular can be used to disprove the assumption by use of reductio ad absurdum.

    Perhaps that is what you are doing here – we assume it’s stochastic, but we know that climate forcing has been occurring which should show up as a non-stochastic element, therefore the assumption is wrong.

    The problem is that this reductio ad absurdum argument fails on two counts.

    The first is that we have empirical evidence that the time series is stochastic. We have tested the hypothesis that it is stochastic, and find we can’t reject it (in fact I think one of the tests was stronger – namely that we can reject the hypothesis that it is not stochastic).

    The second is that just because climate forcings have increased this need not be inconsistent with an empirical observation that this series is stochastic. It could be, for example, that the impact of the forcings has not yet been sufficient to be observed in the particular time series.

    I should also note that just because forcings have been observed to have an impact on temperature in controlled circumstances, you cannot then claim as empirical fact that this will be observed at the global level with GISS. One needs to go out and observe this relationship (and when we do it with this time series we can’t see it).

    I’d like to clarify this issue first before moving on to the other issues you raise.
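
    For readers wanting to see the two-sided testing pattern described here, a minimal sketch using standard routines (the series is synthetic; ADF takes a unit root as its null, KPSS takes stationarity as its null):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller, kpss

        rng = np.random.default_rng(42)
        series = np.cumsum(rng.normal(scale=0.1, size=120))  # pure random walk

        adf_p = adfuller(series)[1]
        kpss_p = kpss(series, regression="c", nlags="auto")[1]
        print(f"ADF  p={adf_p:.3f}  (large p: cannot reject a unit root)")
        print(f"KPSS p={kpss_p:.3f}  (small p: reject stationarity)")
        # Failing to reject ADF *and* rejecting KPSS is the stronger pattern
        # alluded to above: evidence for, not mere non-rejection of, a
        # stochastic trend.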

  1778. Bart Says:

    HAS,

    The stochastic model of VS used as the test period 1880-1935. That to me means that implicitly it was assumed that no deterministic changes occurred during that test period. But those did in fact occur, hence the assumption is invalid. The assumption was not the null hypothesis being tested (it was merely ‘assumed’); the assumption was used in formulating the hypothesis. Or in other words, the hypothesis only makes sense when that (implicit) assumption is met. And it’s not.

    The non-stochastic hypothesis that was rejected was one that nobody believes in in the first place, so it’s meaningless. Please stop bringing it up as if it’s somehow meaningful (at least not without trying to substantiate why it is). The hypothesis that it’s not stochastic hasn’t been rejected at all. How could it? Energy balance, remember? I eat more than my body needs -> I’ll gain weight. Earth receives more radiative energy than it emits -> temp will increase.

    One needs to go out and observe this relationship

    Exactly. That’s what I’ve been saying all along. Somehow people are very eager to make profound statements about this relationship (between forcings and temp) without properly testing for it.

  1779. DLM Says:

    Marco,

    Dream on.

  1780. DLM Says:

    “Energy balance, remember? I eat more than my body needs -> I’ll gain weight.”

    It’s far more complicated than that:

    There is no energy balance in the human body. The need for food varies. You can eat more and gain weight, you can eat less and lose weight, you can eat more and lose weight, you can eat less and gain weight. You can eat more, or less, and stay the same. You can hold your caloric intake stable, and lose weight…gain…lose…gain…and so on. In fact your body weight could very well fluctuate in a random way, if food intake was kept stable, and the same is true with food intake fluctuating up and down.

    I wonder if the earth’s climate is any less complicated than human metabolism.

  1781. Adrian Burd Says:

    DLM,

    You say

    “There is no energy balance in the human body. The need for food varies. You can eat more and gain weight, you can eat less and lose weight, you can eat more and lose weight, you can eat less and gain weight. You can eat more, or less, and stay the same. You can hold your caloric intake stable, and lose weight…gain…lose…gain…and so on. In fact your body weight could very well fluctuate in a random way, if food intake was kept stable, and the same is true with food intake fluctuating up and down.”

    You do not mention anything about energy OUTPUTS!!!! It’s called energy BALANCE for a reason – the difference between inputs and outputs.

    Of course, I’m really glad you’re not my financial advisor since you seem to think that there is no relation between the size of my bank account, how much I earn in a month and how much I spend in a month.

    Adrian

  1782. DLM Says:

    Adrian, Adrian

    No I did not mention OUTPUTS!!!!. I naively thought that everyone here is intelligent enough, so that it wasn’t necessary to mention OUTPUTS!!!!!!!!! Did you notice that Bart didn’t mention OUTPUTS!!!!! either? Why didn’t you HOLLER!!!!! at him?

    Adrian, maybe you would care to tell us how many hours there are in the lifetime of the typical human, when a condition of ‘energy balance’ could be measured, or even assumed.

    I never said that there is no relation between the size of your bank account and how much you earn. Although it could very well be the case that there is no relationship at all. Not knowing anything about you other than what is in your recent post, informs me that you would benefit greatly from my financial advice.

  1783. HAS Says:

    Bart

    First point, VS tested the hypothesis that the series up to 1935 was stochastic and couldn’t reject this as a null hypothesis (as I noted earlier, I think one of the tests rejected that it was deterministic).

    Second point, having been unable to reject the hypothesis (i.e. there is good empirical evidence for this hypothesis), he then assumed that the process up to 1935 continued. So you are right that at this point he made an assumption, but it isn’t exactly the one that you suggest. He then showed that he couldn’t reject the hypothesis that the series post 1935 had been generated by the process up to 1935.

    I’m going to persist with this, by using a simplifying analogy away from the heat of climate change.

    First though, one important point. The ability or otherwise to detect a deterministic trend in this time series is an empirical matter (albeit probabilistic). Either you can observe it or you can’t. It is just as much an observation about the time series as the observations themselves (although less certain).

    Turning now to my analogy.

    You and I are going to start tossing a coin. For simplicity I’ll assume that we start with a fair coin (i.e. the result of tossing is stochastic), but if you want to assume there is already a deterministic process, that doesn’t ruin the story.

    At a certain point you get bored and start to add some chewing gum (forcings) to the tail side. We continue to toss while you continue to add and we then pause to reflect on what we have seen.

    Now assume on the best statistical tests we get the surprising result that the series post-forcings continues to be stochastic.

    Your initial reaction (just as it has been here) is quite understandable.

    “It can’t be stochastic because I was forcing it – physics demands it!”

    But say we have a lot of money riding on the outcome. We are faced with the inconvenient empirical observation that the series continued to be stochastic (I had to resist the urge to say “inconvenient truth”).

    At some point in order to make sure I don’t walk away with the proceeds of the bet you need to park that initial emotional response and get down and dirty into thinking about why this happened and what it tells us. To use an analogy I think you have used; we need to stop believing that the bird can’t fly and work out why it does.

    The answer may be quite trivial, and I want to stress that this isn’t the important reason for wanting to understand the DGP of these time series – that has to do with how you manipulate them.

    Here’s a list of possible issues that need to be explored by the science of coin tossing:

    1. We weren’t observing the results of the flips particularly well;
    2. We didn’t do the tests right, or we didn’t do the right tests;
    3. We had a type 1 error;
    4. You used bubble gum and it was lighter than air;
    5. The coin was very large relative to the gum you put on so the effect was muted;
    6. I was scraping it off as you were putting it on;
    7. I was sticking stuff on the other side to counteract what you were doing
    8. ………..

    About the only issue that can be dealt with without more information is 2., and that has been quite widely debated here. But to move on into the other issues there does need to be agreement on two points:

    1. The bird can fly, we can see it (i.e. this series is stochastic, as far as we can see)
    2. These kinds of statistical tests are the appropriate ones to use when looking at these kinds of time series

    No one is going to want to go further in a situation where inconvenient observations are ignored because they don’t fit people’s beliefs about what the physics should be saying, and statistical techniques are treated with suspicion because they don’t produce the results people want.

    To solve this stuff we are going to need to respect both statistics and physics.
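
    The coin analogy can even be simulated, to show how long a gradually applied “forcing” can hide below the noise (Python; all numbers are arbitrary illustrations):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def detected(n_tosses, final_bias, alpha=0.05):
            # bias ramps linearly from 0 to final_bias over the series
            p_heads = 0.5 + np.linspace(0.0, final_bias, n_tosses)
            heads = int((rng.random(n_tosses) < p_heads).sum())
            # binomial test against the fair-coin null
            return stats.binomtest(heads, n_tosses, 0.5).pvalue < alpha

        for bias in (0.02, 0.05, 0.10):
            power = np.mean([detected(200, bias) for _ in range(500)])
            print(f"final bias {bias:.2f}: detected in {100*power:.0f}% of runs")
        # Small forcings can sit below the noise for a long time: "stochastic,
        # as far as we can see" is a statement about statistical power, not
        # about the physics.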

  1784. phinniethewoo Says:

    You can be fattening up a tapeworm while you eat more and more?
    Quite relevant in the analogy, thinking of COP15.

  1785. Adrian Burd Says:

    DLM,

    It seems that you are completely misunderstanding the scientific usage of the phrase “energy balance”. So rather than insulting others, perhaps you should go away and read a book – and I happen to know that Bart fully understands the concept of energy balance.

    The whole concept revolves around another concept, conservation. Think of a tank of water that has a leak in it. If the rate of flow into the tank equals the rate of flow out of the tank because of the leak, the level of water in the tank does not change. This is called a condition of steady state. If the rate of inflow exceeds the rate of outflow, the level of the water in the tank increases. If the rate of inflow is less than the rate of outflow, the level in the water tank decreases.

    The same principle applies to eating. If, as Bart said, one eats more than one’s body requires (or to satisfy others, one’s body and its parasites), one gains weight. This is pure conservation principle. I have come across failing undergraduates that understand this, so please tell me that you understand it, because if you don’t, then you do not understand the fundamental principles upon which the universe works.

    Adrian
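
    The tank picture in a few lines of code, for the avoidance of doubt – the level only ever changes through the difference between inflow and outflow:

        def tank_level(level, inflow, outflow, dt=1.0):
            # conservation: change in level = (in - out) * time step
            return level + (inflow - outflow) * dt

        level = 100.0
        level = tank_level(level, inflow=5.0, outflow=5.0)  # steady state: 100.0
        level = tank_level(level, inflow=6.0, outflow=5.0)  # rises to 101.0
        level = tank_level(level, inflow=4.0, outflow=5.0)  # falls back to 100.0
        print(level)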

  1786. Bob_FJ Says:

    Adrian Burd, you wrote in part:

    The same principle applies to eating. If, as Bart said, one eats more than one’s body requires (or to satisfy others, one’s body and its parasites), one gains weight. This is pure conservation principle.

    But nature is complicated, as DLM commented.
    Various diseases, genetics, or a change in activity etc. can wreck the general majority law of “eat more, get fatter”. Furthermore, a statistical analysis might indicate a divergence from that law, but it does not tell you why.

    DLM, I hope you don’t mind me interfering.

  1787. DLM Says:

    Bob,

    No I don’t mind. What you have correctly put your finger on is that Adrian and others here are not operating in the same very complicated world that we are. To Adrian et al, it is down to a perfect understanding of the “pure conservation principle”, or whatever pure principle seems to suit their purpose. They know all the physics, and all the angles. And what they don’t know, they can assume, because they are smart. Of course they conveniently ignore the fact that nature is a game of combination shots, bank shots, combination bank shots with top-spin and/or side-spin, and a dazzling array of shots that involve a multitude of permutations of all of the above. Knowledge like theirs would enable them to precisely predict where all the balls are going to end up, after a billiards break shot. Just ask them. They will model it for you, and prove it. Chaos does not faze these guys.

    You and I know that if you eat too much, you will get fat. They call it ‘energy balance’, and proclaim it as a revelation. But it is a dumb analogy in a discussion of the greenhouse effect. If I weren’t deep into a bottle of fine single malt scotch, I would explain it for them. More interesting would be a comparison of the discussion on this thread with the different schools of thought on investment analysis, brought to mind by Adrian’s goofy reference to his bank account, spending, investment … whatever. I’ll get back to you on that, as I think it would shed some light on the irreconcilable differences between Bart and VS.

  1788. DLM Says:

    Correction:

    Make that ‘before a billiards break shot’. And if a ‘pipeline’ is involved, make it thirty years after the break shot.

  1789. DLM Says:

    From the latest whitewash:

    “It is very surprising that research in an area that depends so heavily on statistical methods has not been carried out in close collaboration with professional statisticians,”

    Sorry Bart.

  1790. Igor Samoylenko Says:

    DLM,

    Great cherry pick of a quote. Here is a more appropriate quote from the report:

    Although inappropriate statistical tools with the potential for producing misleading results have been used by some other groups, presumably by accident rather than design, in the CRU papers that we examined we did not come across any inappropriate usage although the methods they used may not have been the best for the purpose. It is not clear, however, that better methods would have produced significantly different results. The published work also contains many cautions about the limitations of the data and their interpretation.

    Here. I highlighted the relevant bits.

  1791. Igor Samoylenko Says:

    DLM: “From the latest whitewash [the Oxburgh report]…”

    So, if you don’t like the conclusions it becomes a whitewash, does it? I see. This global conspiracy to hide the “truth” only keeps growing, doesn’t it? No one can see it apart from you and the rest of your tin-foil-hat crowd…

    Is it any wonder why it is so hard to take you guys seriously?

  1792. phinniethewoo Says:

    the OUTPUT!!! thing is: what happens when Bart goes to the john, or when he goes to the workout??

    The answer will again be “both, of course, depending on your point of view”, blah blah.. What happens when he goes to the john in the workout centre? When he eats on the spinning machines, have we covered that possibility? What if he hangs on a rope, swinging back and forth through the window of the fitness centre, and every time he swings back to the tree he bites off a cherry, etc.?

    I never get delightfully simple, straightforward, clear-cut answers on AGW blogs..

    Yogi Berra:
    “When you come to a fork in the road, take it!”
    Clear cut, no nonsense, everybody gets it.

    TSA: it is really challenging. Like OLS, and VS’s short explanation of it, the meaning of all of it lies in stomach-churning linear algebra.
    Nobody should be allowed to speak on AGW from now on before passing an exam on J. Rotman’s Advanced Modern Algebra.

  1793. Bart Says:

    Please continue the discussion pertaining to the CRU inquiries about alleged wrongdoings on the CRU thread (which, btw, I updated with two small quotes from the recent Oxburgh report).

  1794. Bart Says:

    Regarding the energy balance, Adrian Burd made the important point: it’s the balance between energy input/income and energy output/expenditure that matters. Genetic make-up, metabolism, sickness, sports: of course these can all influence the *energy balance* and thus your weight. Still, the energy *balance* is important, regardless of handwaving about factors that, indeed, influence said energy balance. All other things being equal, if I eat more, my weight will increase more in comparison to the situation where I ate less. If all other things are not equal, then of course the outcome may also be different. As with climate: many factors can influence the energy balance, and hence climate/weight. Conservation of energy, folks. What’s so scary about that?

  1795. Bart Says:

    HAS,

    I just re-read the comment where VS first presented his results pertaining to the stochastic vs deterministic trend, and indeed he first investigates what the timeseries look like pre-1935. Whether that amounts to good empirical evidence for it to be stochastic is something I doubt. I submit that the stats lingo is often hard for me to follow, and I may well be misunderstanding things. I note though that the R2 of his stochastic trend equation to the observations is very low. I don’t actually know how meaningful it is at all to be able to fit a stochastic trend to a relatively short timeseries. But not being able to reject a hypothesis that allows for a very broad variety of outcomes is hardly meaningful. That is clearly the case for the hypothesis of how post-1935 temps behave, but it may (to a lesser extent) also be true for pre-1935 temps. I.e. the skill of his model equations is quite poor.

    Not to mention that it ignores the effect of forcings, which we physically expect to be there. Thus, what needs to be tested is whether a model based on known physics is better or worse than a stochastic model. The stochastic model, allowing such a broad variety of possibilities, may be hard, if not impossible, to reject, perhaps for another few decades, but there probably are tests to compare its skill to a competing model (and this time one that has meaning; not an extrapolated linear trend). Were VS or somebody to engage in such an exercise, I would be very interested indeed. What matters is, e.g., can the model say anything about the chance of temps going up or down?
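
    As a rough sketch of the kind of comparison I mean (entirely synthetic data; statsmodels assumed available; an illustration of the test, not anyone’s actual analysis):

        # Hold out the last 30 "years" and compare forecast errors of a
        # stochastic ARIMA fit against a simple forcing-driven regression.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        n = 130
        forcing = np.linspace(0.0, 2.5, n)            # toy forcing series
        temp = 0.4 * forcing + rng.normal(0, 0.1, n)  # forcing-driven "truth"
        train, test = temp[:100], temp[100:]

        arima_fc = ARIMA(train, order=(0, 1, 3)).fit().forecast(30)

        slope, intercept = np.polyfit(forcing[:100], train, 1)
        phys_fc = slope * forcing[100:] + intercept

        for name, fc in [("ARIMA(0,1,3)", arima_fc), ("forcing model", phys_fc)]:
            rmse = float(np.sqrt(np.mean((fc - test) ** 2)))
            print(f"{name}: out-of-sample RMSE = {rmse:.3f}")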

    Your dig about things being inconvenient or that I’d ‘want’ certain conclusions is misplaced. The only thing I have a problem with is when unphysical conclusions are drawn and not recognized as such.

    I agree with your final sentence, as I’ve repeatedly said. However, I do recognize that you (and VS) feel that I don’t pay statistics the respect it deserves, and I feel the converse.

  1796. manacker Says:

    Conservation of energy, folks. What’s so scary about that?

    Nothing scary at all. It is a wonderful idea, which incidentally has nothing to do with our planet’s putative “energy balance”, global warming, etc., but just plain common sense. Wasting resources of any kind is stupid. Wasting resources we do not have and must purchase at high cost from folks who are not all that friendly toward us is absurd.

    It does not need to be justified by a somewhat questionable (and scary) postulation of dangerous AGW.

    Max

  1797. manacker Says:

    Bart

    As I understand it, those who are “experts” in statistical analysis are not convinced that the observed data show a robust statistical correlation between CO2 and temperature, while those who are “experts” in climatology and the greenhouse theory are.

    Since we are talking not about theoretical deliberations but rather about a statistical analysis of the observed data, I would think that the “experts” in statistical analysis are probably more qualified to express an opinion on the statistical robustness of the CO2-temperature correlation than those who are “experts” in greenhouse theory or climatology.

    In other words, the question is one of statistical analysis of observed data, rather than of greenhouse theory.

    What do you think?

    Max

  1798. phinniethewoo Says:

    Bart,

    energy balance: I agree that if the sun shone harder it would get hotter, all other things being equal. Same with your weight and food.

    How the earth’s energy balances relate to a rise in CO2 is an entirely different discussion.
    It has also been said in this blog that CO2 behaves in strange ways, not as straightforward as the consensus professors’ calculations would have us believe. I refer in this respect to the Al Thekasski posting of March 28, 03:33.

    Anyway, I agree: an energy-balance approach will probably lead to better observations over the next decades, and better knowledge of climate.
    This is not such an unusual approach; compare, for example, materials stress analysis, which was a really big-ticket problem for 100 years while everybody tried to work out how the atoms behave with each other via unsolvable ODEs/PDEs; nowadays finite-element stress-energy analysis is used and the theorising has rather petered out.

    Be careful with your observations, though, when you go for your new approach! Know what to do this time when you use time series :)

  1799. phinniethewoo Says:

    Bart,

    What if the satellites you shoot up to sort out the energy imbalances have the annoying habit of transmitting back not fancy graphs, crisp and clear, but instead.. time series data….
    What will your approach be towards the TSA then?
    What will you do when you encounter forks in the road while analysing the TS? E.g. when unit roots pop up: will you difference?
    Or leave that to Tammy to decide?

    Let’s be prepared this time before we jump to overhasty conclusions on the energy imbalances. Let’s read up on statistics.

  1800. Igor Samoylenko Says:

    HAS,

    Like Bart, I fully admit that I lack statistical expertise to evaluate VS’ claims independently, which means I have to resort to and rely on my interpretation of other experts’ opinions and I may be getting things wrong (and my language can be a bit sloppy). But somehow, I feel I understand Bart (and Eduardo and Lucia) much better than VS (and you).

    As far as I can see, what VS has shown is that the GISS temperature time series could be generated by a unit root process. I fully accept this. He went on to choose ARIMA(0,1,3) as an example of such a unit root process and then on the basis of that he showed that there is no statistically significant warming trend in the data over the last 100 years or so.

    As you know, B & V used a different unit root model ARIMA(2,1,0) and showed there was a statistically significant warming trend in the data.

    We also know that there is a trend-stationary process in the general form of:

    T(t) = F(ghg(t), solar(t),…) + U(t), where F() is a deterministic function and U(t) is stationary noise

    which is also capable of generating the said time series (I hope you are not disputing this or are you?).

    This trend-stationary process is modelled in the GCM computer models and is based on known physics (the fact that some statistical approximations are used in the models does not change this). We know that at least the output of one of them (ModelE) tests positive for unit root using the same tests as VS used.

    So, we have two alternative processes capable of generating the GISS time series:

    1) Pure unit root process (ARIMA(0,1,3), ARIMA(2,1,0), any other?)
    2) A trend-stationary process T(t) as implemented in climate computer models.

    As far as we know, both 1) and 2) will test as having a unit root. And as far as we know, unit root tests are incapable of distinguishing between the two if 2) is “close” in some way to a unit root (see my comment above explaining that it is not necessarily “close” in any obvious sense of the word, given the very short sample size and the presence of non-linear deterministic trends).
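
    A rough simulation sketch of this point (toy series; statsmodels’ adfuller assumed; this is not VS’s analysis, just an illustration of why ~130 points are too few):

        # Generate (1) a pure unit-root process and (2) a trend-stationary
        # process with persistent noise, then run an ADF test on each.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        n = 130

        walk = np.cumsum(rng.normal(0, 0.1, n))    # 1) random walk

        noise = np.zeros(n)                        # 2) trend + AR(1) noise
        for t in range(1, n):
            noise[t] = 0.9 * noise[t - 1] + rng.normal(0, 0.1)
        trendy = 0.005 * np.arange(n) + noise

        for name, series in [("unit root", walk), ("trend-stationary", trendy)]:
            p_value = adfuller(series, regression="ct")[1]
            print(f"{name}: ADF p-value = {p_value:.2f}")
        # In a sample this short, both will often fail to reject a unit
        # root, so the test alone cannot separate 1) from 2).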

    We also know that pure unit root in the temperature time series is unphysical. This means 1) is an approximation. And with any approximation one has to be careful not to get carried away by taking the approximation too far. VS’ attempt to show that there is no statistically significant warming in the last 100 years is taking it too far. This conclusion is spurious at the very least. This is clear from the fact that a different ARIMA model DOES show a statistically significant warming trend (before you even get to the physics).

    Are you saying VS has ruled out 2)? How? Just about everything I read on the interpretation of unit root test results applied to finite samples cautions against reading too much into them (see for example Hamilton (1994), Cochrane (1991)). I would also like to see a physical explanation of a pure unit root in the temperature time series which will not invalidate energy conservation laws.

    So, if you want to respect statistics you can go ahead and model the time series as a pure unit root process as an approximation. VS is not the first or only one to do it. But if you also want to respect physics, one has to be careful about taking this approximation too far and ensure that this exercise does not violate basic physics.

    BTW: did you see David’s reply to your question?

  1801. DLM Says:

    “All other things being equal, if I eat more, my weight will increase more in comparison to the situation where I ate less.”

    That is really smart, and physical. Now can you tell us within a pound or two how much you will weigh in 30 years, if you increase your current daily calorie intake by 2.104%? You do know what your current calorie intake per day is, right? You can model this thing, right? Let me help you out on one important point: you cannot count on all other things being equal.

  1802. DLM Says:

    Igor, Igor

    “It is very surprising that research in an area that depends so heavily on statistical methods has not been carried out in close collaboration with professional statisticians,”

    That happens to be the part of the whitewash that was very germane to the discussion that has been going on here, ad infinitum.

    Remember: Bart set up a straw man with a 12-year ‘blogal cooling’ chart that he cherry-picked from the denialsphere (probably little Joe Deleos’s work). And he presented his own concocted charts, which he claimed show a warming trend with a 95% confidence interval. He then boldly concluded:

    “There is no sign that the warming trend of the past 35 years has recently stopped or reversed.”

    Then VS , who does seem to be a professional statistician, says:

    “Actually, statistically speaking, there is no clear ‘trend’ here, and the Ordinary Least Squares (OLS) trend you estimated up there is simply non-sensical, and has nothing to do with statistics.

    In other words, global temperature contains a stochastic rather than deterministic trend, and is statistically speaking, a random walk. Simply calculating OLS trends and claiming that there is a ‘clear increase’ is non-sense (non-science). According to what we observe therefore, temperatures might either increase or decrease in the following year (so no ‘trend’).”

    Now I am sure that you are intelligent and honest enough to man up and admit that your whining about me cherry-picking is just knee-jerk foolishness.

  1803. DLM Says:

    Max,

    manacker Says:
    April 15, 2010 at 15:06

    They don’t get it. And they don’t want to get it.

  1804. Pat Cassen Says:

    manacker -(April 15, 2010 at 15:06) “…those who are “experts” in statistical analysis are not convinced that the observed data show a robust statistical correlation between CO2 and temperature…”

    And yet there are “experts” in statistical analysis who claim that there is a robust correlation between CO2 (or GHG forcing) and temperature. See my comments at here and here.

  1805. Igor Samoylenko Says:

    Pat,

    Well, those experts you cited are not the real experts for DLM, manacker et al, because they published results that conform to the consensus and as such are immediately disqualified. VS, on the other hand, who has not published anything on the topic yet (as far as I can see), IS the real expert, since his claims in this blog can easily be spun to their liking.

  1806. phinniethewoo Says:

    I think everybody agrees that interpreting an observed time series of 140 points leaves a lot open to speculation.
    However, VS’s formal way of interpreting it beats Al Gore’s, I’d say.
    The unit root poses a fork in the road, and you should make a calculated decision there to transform your data. The reasonable suggestion is to difference, then check, re-estimate and test.
    I think that’s a better program than drawing little lined corridors in charts wherever you like. The latter is what we can now safely call pathetic disinformation to herd the masses, from grist.org.

    Let’s agree on one thing: future temperatures will -NEVER- be accurately predicted from a climate model.

    The reason is that the forecast GHG output, amongst other things, on which these climate models depend, is completely UNCERTAIN:

    GHG output is UNCERTAIN because:
    – we see the West outputs 10% less after a financial crisis; how many such crises are in the pipeline?
    – the rest of humanity is doing the same as the West: China is moving its people into cities=towerblocks, where they consume LESS, not more. The carbon footprint of a metrosexual taking the tube is smaller than that of a farmer.
    – we do not know how many Armageddon wars there will be in the 21st century.
    – we do not know what that volcano in Iceland is going to do tomorrow. For the moment it is blackening the sky and is an artefact that can be seen from satellites with more ease than all human constructs together.
    – nuclear fusion could be humming along in 20 years’ time, at least if we make these Zigoteaus smell the coffee sometime.
    – plant life can expand in multifold ways: let’s remember 90% of the oceans are sterile for lack of iron dust. 70% of the earth is arid and could be irrigated or greenhoused. It might be, post obamania, when reason returns to planet earth, that priorities are set right: not to endlessly sink billions into dictators’ coffers, help-help-help, but to start doing some fun terra farming. How will all that work out for GHG output?

    If we can NEVER predict future temperatures, why should we now pay more taxes to mitigate 3 extra degrees of warming in 2100?
    “Because it will be 6 degrees!” shout sod & tammy.. right.

  1807. DLM Says:

    Igor, Igor

    You haven’t apologized for that BS accusation of cherry picking. Are you able to engage in honest discussion?

    I have not chosen VS as an expert, to the exclusion of competing experts. I guess you don’t recall that when you and others cited conflicting viewpoints from other statisticians, I challenged VS to defend his position. I specifically remember commenting that Igor had raised an interesting red flag with regards to Cochrane, or whoever. My memory is obviously better than yours. Here is one refresher:

    # DLM Says:
    April 6, 2010 at 04:41

    VS says: “However, this is all beside the point, as Breusch and Vahid explicitly avoid the unit root discussion we’re having here. I cite: “The question we are trying to answer though is not about a unit root in the temperature data, it is about a tendency of the data to drift upwards.” So they take both assumptions (without dwelling into which one is correct too deeply, although they did provide me with a crucial reference to answer this question :) and take a look at the outcomes.”

    I was moved to re-read the B&V paper (twice) in a different light, by your gracious account of Prof. Breusch’s qualifications. Now I will have to admit, that I can see why your assertion that you and they are in basic agreement is being questioned.

    I am not sure that it is correct that they have explicity avoided discussion of the unit root. And if they have avoided it, I wonder why they would do so, if the issue is so important. It seems to me that they did discuss the unit root issue, and they said: “The question we are trying to answer though is not about a unit root in the temperature data, it is about a tendency of the data to drift upwards.” Aren’t you saying that determining whether there is a trend or not, is about a unit root in the temperature data?

    Their conclusion: “We conclude that there is sufficient evidence in temperature data in the past 130-160 years to reject the hypothesis of no warming trend in temperatures at the usual levels of significance.”

    Isn’t your conclusion the opposite?

    I obviously don’t have much knowledge of statistics (just like sod), but I can read. And the words are not currently adding up on this specific issue-the alleged conflicts, between your analysis and the B+V paper. I read Willem’s response to sod several times and it doesn’t clear it up for me.

    In a nutshell:

    1. Why would B&V explicitly avoid discussion of the unit root?

    2. Why did they find a warming trend in the data?

    Help me out VS.

  1808. manacker Says:

    Pat and Igor

    You will both have to admit that the “climatology experts” are more inclined to accept the CO2 temp correlation as “statistically robust” and, therefore, the case for “causation” as plausible than the “statistical experts”, who, based on this thread, appear to be decidedly more divided on this issue.

    Phinniethewoo has just summarized the dilemma of “prediction” pretty well here (as Nassim Taleb has also done in the cited book).

    We are not even sure that CO2 is really the principal driver of climate, as has become more evident recently, when both atmospheric (surface + troposphere) and upper ocean temperatures have dropped despite record increases in atmospheric CO2 (in other words, the energy “imbalance” has been one of less energy input than output, while CO2 has increased, defying the postulation that CO2 is the principal driver of our climate).

    Yet we are trying to predict what our climate will be in 100 years based on a myopic fixation on only this one forcing factor, while essentially ignoring all the rest.

    Please explain to me why this is not absurd.

    Max

  1809. Bart Says:

    Igor makes a very good point as usual.

    I would add that the stochastic model presented by VS in this thread has very poor skill compared to a physics-based model.

    And indeed, the ability to find a stochastic model that fits a relatively short dataset is perhaps not at all surprising. The wider it gets in parameter space (allowing a large variety in the values of the parameters to be fitted), the less meaningful/skillful it is.

    Manacker and others: I have absolutely nothing against bringing good statistics and statisticians on board in climate science. Quit the strawman arguments.

  1810. Bart Says:

    Manacker,

    Explain to me why 130 years is too short to arrive at solid conclusions yet 10 years is somehow enough?

  1811. DLM Says:

    “Manacker,

    Explain to me why 130 years is too short to arrive at solid conclusions yet 10 years is somehow enough?”

    strawman argument

  1812. manacker Says:

    Bart

    You raised a (strawman) question regarding 130 years (slight warming of 0.041C per decade) versus 10 years (slight cooling of 0.064C per decade).

    Neither period tells us anything definite about CO2 as the driver for the observed change.

    There are too many unknown unknowns, Bart, to be able to draw such a conclusion, and besides, as the exchange here has shown, the correlation between CO2 and temp has not been shown to be statistically robust. The case for causation is, therefore, weak.

    That is the issue here, in a nutshell, Bart.

    Max

  1813. DLM Says:

    Max,

    Bart thought that he read where you said you had drawn a solid conclusion based on 10 years, so his strawman was inadvertent. I am sure he will apologize for scolding you and others for allegedly using strawman arguments, and then using one himself.

  1814. HAS Says:

    Just some brief comments. So much to do so little time.

    Bart

    The skill is poor because that is all we know using this series alone. We need to add more information to improve the skill. How we take the next steps to add this information is what is important.

    My dig about things being inconvenient was intended more for the humour than anything else, but I note you still refer to “unphysical conclusions” rather than “unphysical observations”, i.e. the stochastic nature of this trend.

    There have only been two conclusions drawn from this thread as I see it.

    The first is that the GISS series is autocorrelated and stochastic and therefore if you want to use it in any more complex models you need to transform the relevant time series so you don’t get spurious results that bias statistical inferences.

    The second is that if you use the DGP derived from the series up to 1935, you can’t reject the hypothesis that the balance of the series was produced by the same process. As we both note, this observation is calling out for more information to help find out what is going on.

    Neither of these is “unphysical”.

    Igor

    The processes looked at were ARIMA (3,1,0) and ARIMA (0,1,2) (see VS April 1 2010 at 00:49).

  1815. Bob_FJ Says:

    HAS,
    I’d like to draw your attention again to this comparison between HADCRUT3 Northern & Southern Hemispheres:
    (together with GISS net global forcings):

    It is a fact that these are two entirely separate time-series, of ~160 annual data points, each derived from thousands of bi-daily measurements. Whilst it is expected that there should be some differences between the NH and SH because of some* different drivers, the curve shapes, with distinct peaks at ~60-year intervals, are characteristically the same. Furthermore, the differences in magnitude seem reasonable given those known driver differences.
    *(for instance more ocean area in the SH, and more land and industrialization in the NH),

    WRT your tossing of coins analogy, and VS’s claim that there may have been just as much chance that there was a cooling over the last 130 years, could you please explain how the two different time-series have similar complex shapes?
    Note that in general the indicated overall T rise follows the net forcings although there appear to be some internal variabilities (cyclic or random oscillations) including those unknowns or poorly understood. (and UHI/accuracy questions etc)

  1816. Igor Samoylenko Says:

    HAS,

    Yes, you are right of course. My mistake; quoted the two ARIMA models from memory without checking and got them the wrong way around.

    Just a quick note on what you said: “The second is that if you use the DGP derived from the series up to 1935, you can’t reject the hypothesis that the balance of the series was produced by the same process.”

    Yes but this is based on fitting ARIMA(3,1,0) to data and then drawing conclusions using this model. If you fit ARIMA(0,1,2) to data you arrive at a different conclusion. Is this not the case?

    Also, what about David’s reply? What he is saying seems to match what I have read elsewhere about the interpretation of the unit root test results and the use of cointegration: yes, these methods can be used but this does not exclude other (which I presume also means traditional) methods and one should not be too dogmatic about it.

  1817. phinniethewoo Says:

    The two camps are not going to be reconciled soon, I am afraid, based on scientific arguments.
    Discussions like the ones we saw on this thread will improve the science, though.. I think we saw some serious scientific reports being commented on and improved/corrected, and some new reports in the making at high speed.

    I have been reading for several hours now, and I think better blog technology is going to do more for science than 10 new universities; a lot could be done to make threads more readable and to manage spammers. You could think of suppressing spammers like me without blocking them (e.g. shadow them out.. work with recommendation levels etc); if posters addressed each other by referring to comment numbers we could quickly reveal subthreads, etc..
    Wonder how the patenting around blogs looks; maybe I should shut up :)
    The analogy with the village under the leaking dam is interesting: we can discuss dam technology until the end of time; one camp will keep thinking the dam will burst tomorrow and everyone should start working on it now, while the other camp thinks it is well worth the risk not to mind.

    A compromise between Keynesian centralists and Schumpeterians should be based on limiting damage for both sides :) I would be happy to see big, dynamic, moving projects on nuclear fusion.. Build a big base on the moon with the Chinese and put 1000 men there in case we are all doomed here. Obama should cut the ribbon; no, the whole Congress..
    For my part we could create 10 more projects like that.

    Cap and trade and the COP15 ideas, that is the Rubicon. No pasarán.
    Rationing is never going to work; it is only going to result in more CO2 molecules being unleashed than without cap and trade.

    Rationing and control: we have some history of that in the EU. One of the first projects of the EU was agriculture.. Since then we have seen mountains of butter, lakes of wine, straight bananas (with a spec and provided with safety goggles), oh, and only apples from South Africa and mangoes from gosh knows where in the supermarket; apple trees have become too dangerous, I think, in Europe.. The butter coordination between Belgium and Holland in the embryonic stage of the EU was also good fun: hundreds of smugglers raging with convoys of huge trucks along small forest roads at all times of the day and night :) Most of the wealthy people in both border regions are old butter smugglers (the rest are tax dodgers from both sides).
    People are still fat there from the butter glut, from when the butter trade was “rationed”.

  1818. Bart Says:

    Manacker wrote

    …as has become more evident recently

    What time period did you have in mind with “recently”? And what were you saying about statistical interpretations from 1880 onwards?
    Nuff said.

  1819. Bart Says:

    Phinnie,

    There’s an open thread devoted to random thoughts.

  1820. Bart Says:

    HAS,

    A model that predicts an equal chance of temp/body weight going up or down in the face of an increasing energy imbalance (i.e. more energy going into the planet/my body than going out), however hard to reject because it allows for a very broad array of possibilities, has an unphysical tendency to it: energy conservation tells us that the temp/body weight should increase. As indeed it does. That one could specify a stochastic model that cannot be rejected at the 95% level is perhaps neither surprising nor meaningful. One could *not* conclude from that that the forcings/energy imbalance did not have any effect on the temp/body weight, because the hypothesis that they *do* affect temp/body weight hasn’t even been tested.

    It is my understanding that in testing for unit root one has to include a trend term, to avoid spuriously concluding the presence of a unit root. This point was made by Tamino and VS has also stated that

    almost all test equations include a trend term.

    The choice of trend term influences the test result for unit root (at least that’s my understanding; correct me if I’m wrong). VS has tested the GISS record using a linear trend, CO2 concentrations and CO2 forcings as the trend term (again, correct me if I’m wrong) and concluded the likely presence of a (near) unit root. He has not tested the GISS record using the lagged net forcing or similar as the trend term, whereas that is the physically expected underlying deterministic trend.
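
    A hedged illustration of this point on synthetic data rather than the GISS record (assuming statsmodels’ adfuller):

        # The deterministic terms included in the ADF test equation can
        # change the verdict: omit the trend term and a trend-stationary
        # series can spuriously look like it has a unit root.
        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(1)
        n = 130
        e = np.zeros(n)
        for t in range(1, n):
            e[t] = 0.6 * e[t - 1] + rng.normal(0, 0.1)
        y = 0.006 * np.arange(n) + e               # trend-stationary series

        print(adfuller(y, regression="c")[1])      # constant only
        print(adfuller(y, regression="ct")[1])     # constant + linear trend
        # To use a forcing series as the "trend term" instead, one could
        # regress y on the forcing first and ADF-test the residuals.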

  1821. manacker Says:

    Bart

    You asked what I meant with “recently”.

    I was referring to the period after 2000 for atmospheric temperature (surface and troposphere) and the period since Argo measurements gave us improved temperature data for the upper ocean around 2003. During this brief period both have cooled.

    This is much too short a period to draw any conclusions on long-term trends, but it should be possible to make a quickie qualitative energy balance. The data tell us that our planet has cooled overall, since the other possible heat sinks, such as latent heat from melting ice or evaporating water during the period are not large enough to make much difference. During this brief period, however, atmospheric CO2 levels continued to rise, so something has not gone according to the theory.

    Trenberth has privately referred to this dilemma as a “travesty”, and in an interview has opined that the missing energy may have been radiated into outer space, with clouds somehow acting as a “natural thermostat”.

    This sounds plausible to me, since it cannot be found on Earth (unless it disappeared forever into the deep ocean).

    Bear in mind, Bart, that I am not opining that this is a “long-term trend”, it is simply a short-term anomaly that raises some questions about how our planet’s climate works and seems to invalidate the “hidden in the pipeline” postulation of Hansen et al.

    Max

  1822. manacker Says:

    Bart

    To your second question concerning my interpretation of the long-term temperature “blip”(since 1880), I am not sure exactly what you are asking, but let me try to respond anyway.

    The HadCRUT record since 1850 shows three notable multi-decadal periods of warming (at rates between 0.14 and 0.16C per decade) with multi-decadal periods of slight cooling in between. The length of each half cycle has been around 30 years. Underlying all this has been a gradual warming trend of around 0.04C per decade over the entire record.

    The first two warming cycles occurred before CO2 could have played a major role, while the third (late 20th century) occurred when atmospheric CO2 was rising rapidly. This is the period, which has received most of the attention by IPCC.

    After 2000 the record shows us that the atmospheric warming has stopped, and there has been a slight cooling. This period is too short to draw any conclusions. Is it (as Bob_FJ has postulated here) the beginning of a longer-term cooling trend, such as the one which occurred after 1945? Or is it just “background noise” within a longer period of continued warming? These are unanswered questions, Bart. Theory may point us into one or the other direction, but we will only know for sure several years from now.

    Hope this answered your question.

    Max

  1823. Bart Says:

    Manacker,

    Ocean heat storage (0-2000 metres) has increased since 2002, sea level has kept on rising over the past decade, ice sheets on both poles are losing mass (i.e. ice).

    The “travesty” Trenberth was talking about referred to our incomplete ability to track all components of the climate system to the degree necessary to accurately follow the energy balance on a (sub-)decadal timescale. If you agree, then support requests for more funding for long-term monitoring.

    In science, waiting until we’re sure is like waiting for Godot. Science doesn’t deliver certainty; it delivers the probability of something being right or wrong. Requesting absolute certainty seems a popular cop-out for looking in the other direction and continuing to do what you’re doing.

  1824. manacker Says:

    Bart

    Ocean heat storage (0-2000 metres) has increased since 2002

    Sorry, Bart. Check the Argo results since 2003. They show a cooling of the upper ocean.

    Max

  1825. Bart Says:

    You’re probably referring to the OHC in the upper 700 metres. Even then, to refer to that apparent plateau of a few years (in what looks like a long-term increase) as a cooling is ironic in light of your own arguments in this thread.

    I was referring to the OHC in the upper 2000 metres. Check fig 2 here: http://www.skepticalscience.com/global-cooling.htm

  1826. manacker Says:

    Bart

    Here is the quote from Trenberth.

    The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.

    Did you get that: “lack of warming”? This is “code” for “observed cooling”.

    Max

  1827. Nir Says:

    Manacker:

    Sorry, Bart. Check the Argo results since 2003. They show a cooling of the upper ocean.

    I was wondering where you got this data from; all I could find was the graph, at http://www.argo.ucsd.edu/global_change_analysis.html#temp
    which does not appear to show any cooling of the upper ocean.

  1828. manacker Says:

    Bart

    I am referring to the Argo measurements, which cover the top 2000 meters of the ocean and have shown cooling since 2003.

    http://www.argo.ucsd.edu/

    Argo is a global array of 3,000 free-drifting profiling floats that measures the temperature and salinity of the upper 2000 m of the ocean.

    Max

  1829. Bart Says:

    Max, this back and forth on what is being said in the emails is not interesting, nor is it the topic of this post. For some context, see e.g.
    http://www.skepticalscience.com/Kevin-Trenberth-travesty-cant-account-for-the-lack-of-warming.htm
    but further correspondence on this belongs on the open thread.

  1830. manacker Says:

    Bart

    You asked for references regarding the upper ocean cooling measured by Argo.

    Click to access heat_2006.pdf

    We observe a net loss of 3.2 (± 1.1) × 10^22 J of heat from the upper ocean between 2003 and 2005.

    Then there is the study by Loehle, which carries the Argo data series to 2008 and shows continued upper ocean cooling.

    The Global Warming Hypothesis and Ocean Heat

    Max

  1831. manacker Says:

    Bart

    OK. We can end the discussion on the planet’s energy balance since 2000 (or 2003) and upper ocean temperature as being non-germane to the topic of long-term global atmospheric temperature trends, and therefore OT here.

    Point taken. You’re the boss.

    Max

  1832. phinniethewoo Says:

    OK, I leisurely stride over to the open thread then.

    structured thoughts on randommality here
    random thoughts on the structure of all there

    The HadCRUT record should be renamed the HadCRUD record btw: the Hadley CRUD record.

  1833. IanH Says:

    Bart – not wishing to complain on your site, but this excellent thread has degenerated to pointlessness. Perhaps you could move all this useless going over old ground to another thread – invite VS back and moderate it better.

    Just my 2c; I haven’t been back here since VS said he was taking a break, and hundreds of off-topic posts have gone by.

  1834. Bart Says:

    IanH,

    You have a point. I strive to keep it on topic as much as possible, but have not been as strict as I could have been at times. I’ll try to do better again…

    I’m not going to plough back through the thread to delete off-topic comments though. I think the signal-to-noise ratio is still fairly good; e.g., HAS, Igor, myself and a few others keep exploring these issues.

    VS is welcome to re-join the discussion if or when he wishes to do so. I didn’t send him or anybody else away.

  1835. DLM Says:

    manacker says: Sorry, Bart. Check the Argo results since 2003. They show a cooling of the upper ocean.

    Bart says: I was referring to the OHC in the upper 2000 metres. Check fig 2 here: http://www.skepticalscience.com/global-cooling.htm

    Bart was referring to the upper 2000 meters of the other oceans that skepticalscience knows about. And skepticalscience has their own buoys.

    manacker says: OK. We can end the discussion on the planet’s energy balance since 2000 (or 2003) and upper ocean temperature as being non-germane to the topic of long-term global atmospheric temperature trends, and therefore OT here.

    Bart couldn’t mean that, could he? He specifically brought up the alleged period of cooling in the post that started this thread. Wasn’t it the purpose of his post to put the old kibosh on any claim that there has been recent cooling? Maybe I read it wrong. Anyway, Bart wants to talk about the 1975-2009 warming trend, and that does include 2000-2009, so I guess he is right.

    I think they call this political science.

  1836. DLM Says:

    I hope this helps refocus the discussion. May I suggest that we continue along the line of the original controversy (but avoid mentioning anything about temp data from 2000-2009 shhhh).

    Bart says:The trend over 1975 to 2009 is approximately the same (0.17 +/- 0.03 degrees per decade) for all three temperature series.

    The error represents the 95% confidence interval for the trend, i.e. if you were to repeat the trend analysis a hundred times on the real underlying data, 95 times you would find that the trend is within the range 0.14 to 0.20 degrees per decade. …

    The observed yearly variability in global temperatures (sometimes exceeding 0.2 degrees) is such that 10 years is too short to discern the underlying long term trend (0.17 degrees per decade). There is no sign that the warming trend of the past 35 years has recently stopped or reversed.

    VS says:Actually, statistically speaking, there is no clear ‘trend’ here, and the Ordinary Least Squares (OLS) trend you estimated up there is simply non-sensical, and has nothing to do with statistics. …

    In other words, global temperature contains a stochastic rather than deterministic trend, and is statistically speaking, a random walk. Simply calculating OLS trends and claiming that there is a ‘clear increase’ is non-sense (non-science). According to what we observe therefore, temperatures might either increase or decrease in the following year (so no ‘trend’).
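
    For reference, the kind of calculation behind Bart’s quoted numbers, an OLS slope with a 95% confidence interval, looks roughly like this (synthetic numbers; VS’s point is precisely that this is invalid if the series contains a unit root):

        # OLS trend and 95% confidence interval for the slope.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        years = np.arange(1975, 2010)
        temps = 0.017 * (years - 1975) + rng.normal(0, 0.1, years.size)

        res = stats.linregress(years, temps)
        half_width = stats.t.ppf(0.975, years.size - 2) * res.stderr
        print(f"trend = {10 * res.slope:.2f} +/- {10 * half_width:.2f} C/decade")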

  1837. phinniethewoo Says:

    Bart

    While we await cointegration, I think the heteroskedasticity of GISStemp could be confirmed (or rejected!) via an alternative track.

    Ideally it is a simple average, but Hansen (2001) describes how it is subject to other processing.

    I do not know whether arithmetically manipulating heteroskedastic time series leads to a better, the same, or a worse time series..
    Worth investigating?

  1838. manacker Says:

    DLM

    Bart has started this discussion off with a comparison of the GISS, NCDC, HadCRUT, UAH and RSS temperature records. The latter two measure something different than the first three, plus they only started in 1979, so let’s ignore them for now.

    But how accurate are the surface temperature records and can we really trust them to tell us what is going on?

    Let’s ignore the classical UHI effects, which studies from all over the world tell us could account for most of the 20th century warming, but which IPCC tells us (based on Parker’s “calm night / windy night” reconstructions rather than actual measurements at rural and urban locations) are “real but local, and have a negligible influence (less than 0.006°C per decade over land and zero over the oceans) on these values.”

    The first “link in the chain” is the temperature measurement device, itself. Keeping in mind that the entire HadCRUT time period from 1850 to today has shown a net overall linear warming of around 0.7°C, it is important that the sensors can accurately measure the temperature to a few hundredths of a degree to start off with.

    The problem with the “historical global temperature record” appears to be that the sensors are only accurate to somewhere between 0.2 and 0.33°C. This inaccuracy is significant when we consider that we are looking at peak decadal warming rates of around 0.16°C per decade.

    Accuracy of climate station electronic sensors – not the best


    Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks
    X. LIN AND K. G. HUBBARD

    The root-sum-of-squares (RSS) error for the HMP35C sensor with CR10X datalogger was above 0.2°C, and rapidly increases for both lower and higher temperatures (the latter above 30°C). Likewise, the largest errors for the maximum–minimum temperature system (MMTS) were at low temperatures (<-40°C). The temperature linearization error in the HO-1088 hygrothermometer produced the largest errors when the temperature was lower than -20°C. For the temperature sensor in the U.S. Climate Reference Networks (USCRN), the error was found to be 0.2° to 0.33°C over the range -25° to 50°C. The results presented here are applicable when data from these sensors are applied to climate studies and should be considered in determining air temperature data continuity and climate data adjustment models.

    A second problem has been pointed out by Anthony Watts. This involves poor siting of the measurement stations, resulting in spurious warming signals.

    Click to access surfacestationsreport_spring09.pdf

    These are caused by proximity to AC exhausts, asphalt parking lots, heated buildings, etc. The example is given for two nearby stations near Sacramento, California. Over the 70-year period 1937-2006 the poorly sited station showed a spurious warming signal of 0.2°C per decade or 1.4°C.

    A third problem was caused by the shutting down of two-thirds of the weather stations, many located in sub-Arctic and Arctic locations in Siberia. The total number of stations was reduced from around 15,000 to 5,000 in just a few years. Most of these shutdowns occurred around 1990, at exactly the same time as an apparent 1.2°C warming of the global temperature. How much of this apparent rise was caused by the station shutdowns?
    http://www.uoguelph.ca/~rmckitri/research/nvst.html

    A fourth problem was caused by the data manipulation by GISS and NOAA (and probably also by HadCRUT, although raw data are not available). In the study by Watts cited above it is pointed out that NOAA has added in an upward adjustment of 0.5°F (roughly 0.3°C) over the 20th century. This represents almost half of the reported linear warming over this period. GISS has “homogenized” the data by making older temperatures appear cooler than the raw data, so that the warming trend is made to look steeper.

    In summary, we have four “weak links” in the chain, the latter three of which go into the direction of making warming look more significant than it really has been, and the first simply giving inaccurate data to start off with.

    So comparing GISS, NCDC and HadCRUT figures does not really tell us much. They all suffer from the same distortions. They are all “managed” by individuals who strongly believe in the premise of dangerous AGW (Hansen, Karl, Jones). The actual warming trend may well have been much less than shown by these records.

    But, unfortunately, these records, with all their warts and blemishes, are all that we have, so we have to live with them. I would just not take them too seriously.

    Max

  1839. HAS Says:

    Some comments and observations:

    Bob at April 16, 2010 at 00:11

    “I’d like to draw your attention again to this comparison between HADCRUT3 Northern & Southern Hemispheres: (together with GISS net global forcings)”

    Just looking at graphs allows you to develop hypotheses about what is going on, but to test these robustly you need to turn to statistics, first to identify the nature of the time series being dealt with (the subject of this thread) and then how to properly undertake statistical inference about the relationship between them (not yet looked at in this thread). Hence my earlier comment that there are statistical techniques for looking at the time series you are interested in.

    Igor at April 16, 2010 at 00:13

    Yes, you are right that the ARIMA models overlap, and there is reasonable debate about which to use. What they do have in common is the fact that the data need to be transformed before estimating parameters (i.e. simple regression without transformation will give spurious results).

    In particular, David’s comment was that “Simple linear regression methods that don’t take into account the potential problems are certainly hazardous”, so if these are what you are referring to as “traditional” methods (and they seem to be the tradition in a lot of papers in climate science) then they are problematic. Where the debate lies is around which of the more complex models to use, and the techniques to use when analyzing potential relationships between these time series (again, to reinforce: this has not yet been addressed directly on this thread, only by reference to a number of papers, including David’s, that have done so).

    Just on the particular issue of which model is best ARIMA(3,1,0) or ARIMA(0,1,2) this is discussed by VS at April 1, 2010 at 00:49. Just some comments of my own for what it is worth, and this in part addresses Bart at April 16, 2010 at 08:46 on whether to use a constant or not.

    A significant structural issue between the two models discussed by VS is whether or not there is drift (NB both agree the time series needs differencing to make it well behaved so the constant represents the trend in temperature, or the drift).

    In the ARIMA (3,1,0) model VS first fits with a constant, but finds that the constant is not significantly different from 0 at the 5% level, so he quite correctly eliminates it from the model. He ends up with the remaining parameters significant at the 1% level. So VS had included the trend/drift in the test equation and eliminated it on the basis of the observations.

    The ARIMA (0,1,2) model has the constant significant at the 5% level, but not significant at the 1% level. If you had been demanding a higher level of confidence you would have eliminated the constant and re-estimated the parameters.

    I think in general one can interpret this as saying you are starting to see some evidence of drift, but you can’t with high confidence reject the hypothesis that it is zero. The feature of VS’s testing on the ARIMA (3,1,0) model is that it has addressed the possibility that the DGP changed significantly over the period (quite apart from the implications for forcings, if it had, you would have to take this into account in any statistical inference about the relationship between this series and others). The answer is that no significant change in the DGP can be observed.
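
    A minimal sketch of this drift check (synthetic data, not VS’s estimation; it assumes statsmodels’ ARIMA, where trend="t" supplies the drift term for a once-differenced model and that term comes first in the fitted parameter vector):

        # Fit both candidate specifications with a drift term and inspect
        # its p-value; if it is not significant, re-estimate without it.
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(3)
        y = np.cumsum(rng.normal(0.005, 0.1, 130))  # random walk, small drift

        for order in [(3, 1, 0), (0, 1, 2)]:
            fit = ARIMA(y, order=order, trend="t").fit()
            print(order, "drift p-value:", round(float(fit.pvalues[0]), 3))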

    There are two other issues in Bart’s April 16, 2010 at 08:46.

    I’ll deal with the second (about trend) first, because it follows from the above discussion and helps with the first (body weight).

    You say “VS has tested the GISS record using a linear trend, CO2 concentrations and CO2 forcings as the trend term (again, correct me if I’m wrong) and concluded the likely presence of a (near) unit root. He has not tested the GISS record using the lagged net forcing or similar as the trend term, whereas that is the physically expected underlying deterministic trend.”

    You do have this wrong. VS has only looked at the GISS temp record as a time series (looking back, he also did some preliminary work on the other temp series), and has not attempted to relate it to anything else, lagged or otherwise. As I recall, I don’t think he has even directly analysed the CO2 series himself to show it is I(2); instead he just relied on quoting Beenstock and Reingewertz (2009) and others.

    Hence my rather persistent comment that to see what might be behind this temperature series we need additional information but that what has been done here tell us we can’t use simple least squares to analyze those more complex relationships. The time series need to be transformed so the assumptions used in statistical inference remain valid. These are the techniques referred to as co-integration and the like.

    There is one consequence though of Temp being I(1) and CO2 being I(2) – it indicates that if these two series are related then changes in Temp should be related to the rate of change in CO2 concentrations. I reiterate that nothing so far on this thread has attempted to address this issue; however it is the subject of B&R’s paper. I should say that, along with a number of commentators here, my view is that B&R’s conclusions about the physical implications are seriously over-egged.
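
    The order-of-integration bookkeeping in that last paragraph, as a toy sketch (illustrative series only, nothing estimated from real data):

        # Difference each toy series down to stationarity before relating
        # them: "temp" is I(1) and "co2" is I(2) by construction here.
        import numpy as np

        rng = np.random.default_rng(4)
        temp = np.cumsum(rng.normal(0, 0.1, 130))             # I(1)
        co2 = np.cumsum(np.cumsum(rng.normal(0, 0.05, 130)))  # I(2)

        d_temp = np.diff(temp)       # I(0): changes in temperature
        d_co2 = np.diff(co2)         # I(1): the rate of change of CO2
        dd_co2 = np.diff(co2, n=2)   # I(0)

        # Orders now match for, e.g., regressing d_temp[1:] on dd_co2,
        # or asking whether temp and d_co2 (both I(1)) cointegrate.
        print(len(d_temp[1:]), len(dd_co2))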

    Turning to the problem of your weight what does this thread teach us? The first thing is that this thread doesn’t try and relate weight to what you eat – it just looks at your observed weight over time. This doesn’t mean this is all irrelevant. Statistics demands that you know the nature of this series before you do anything else.

    So you should still first establish the nature of the DGP for your personal weight time series. I would think it almost certain that there is autocorrelation in it, and there will be other artefacts that need to be compensated for depending on the frequency of sampling. If you consistently overeat you are likely to see a trend in the series; if you randomly eat and diet you probably won’t.

    Only once you understand the DGP can you confidently apply techniques for comparing your weight gain to what you eat.

    I’d also add that you will get a lot clearer picture from your weight time series than you do from the global temperature series because you are much more directly measuring something related to energy balance. The better analogy might be to think of the complications when you can only measure your own weight, but you are concerned about the energy balance across your whole extended family.

  1840. phinniethewoo Says:

    As a member of the public (aren’t we all?), I find Max’s summary disconcerting, to say the least.

    Thanks Max for that.

    But OK I am sure the “consensus” is that there is nothing to be seen here..
    Nothing to see, all’s fine and dealt with, move along.

    What I wonder now is whether it is justified to average the temperature of one site over a year, and if so, with which confidence intervals?
    The temperature fluctuates a lot. Of course there is seasonality in it, which makes it all more difficult to interpret, but I think econometrics and the statistics it has used since 1944 are very well geared up towards dealing with seasonality. If we look at, say, a “winter” at a site, we are still left with 90 measurements (which are themselves also an average of a couple of measurements, with all the variability there.. a stochastic).

    What I mean to say, maybe indirectly, is: instead of the GISS temperature anomalies tout court, shouldn’t we rather look at the GISS temperature anomalies, differenced??

  1841. phinniethewoo Says:

    Meant to say: where do we start with differencing?
    Maybe we should start differencing more upstream..

  1842. Bob_FJ Says:

    Bart,
    Returning to the topic of this thread; that is; your graphs, particularly Fig. 4.

    I agree with others here that your linear trend does not mean much, other than as a visualization of what might be happening as an underlying trend. For instance, one concern with it is that you have seemingly subjectively eyeballed the unsmoothed data and made a judgement on where the line should start and end (or, more likely, from memory, you started it based on your CMA 11-year unweighted smoothing curve in Fig. 2?). However, when I eyeball the unsmoothed data, it seems to me that you could just as well have started it about ten years earlier, with much the same result. Yet that would contradict your 11-year smoothing.

    I also submit that CMA smoothing, as in your Fig. 2, is preferable to linear trends in noisy data. However, it too is imperfect in this case, partly because it does not distinguish between noise and real events, but also because of the arbitrary selection of interval and weighting, if any (which again can be manipulative).

    I also believe that the GISS 5-year smoothing is the best option available because it means that the end of the smoothing line is only two years short of the end of the series, and arguably it does not smooth excessively.

    I guess you have archived all your data, and that it might be possible to insert 5 instead of 11 years into the plot, which would be interesting.
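
    For anyone wanting to try it, a small sketch of unweighted CMA smoothing with 11- versus 5-year windows (synthetic numbers standing in for the archived data):

        # A centered moving average with an odd window of w years loses
        # (w - 1) / 2 years at each end of the series.
        import numpy as np

        def centered_mean(x, window):
            return np.convolve(x, np.ones(window) / window, mode="valid")

        rng = np.random.default_rng(5)
        temps = 0.0017 * np.arange(130) + rng.normal(0, 0.1, 130)

        print(130 - len(centered_mean(temps, 11)))  # 10 years lost, 5 each end
        print(130 - len(centered_mean(temps, 5)))   # 4 years lost, 2 each end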

    I also argue that the GISS data trend differently from the others, the most significant differences being a depressed 1998 El Nino and a raised 2005 (thus 2005, not 1998, becomes the hottest year, unlike in the satellite data). This has the effect of showing less of a plateau than all the others.

    It would be very interesting to see your Fig. 2 modified with GISS type smoothing, and preferably with the GISS data removed.

  1843. Hank Roberts Says:

    Did anyone from the American Statistical Association show up here since VS invited them (in their topic on climate change) a couple weeks ago?

    That’s: http://magazine.amstat.org/2010/03/climatemar10/

    If so it might be worth inviting them to identify themselves (to Bart who’s hosting the thread, at least) to help sort out the topical from the other stuff.

  1844. phinniethewoo Says:

    Bart’s temperature charts carry another contentious tag, which is the name “temperature” on the vertical axis.

    Referring to Paul_K’s posting of March 21, 2010 at 12:16, I think that when we want to give a measure (one number) the qualification of being the earth’s temperature, it should be indicative of the calorific energy content of our climate (= atmosphere + oceans + sands). That’s what we would intuitively agree to be the earth’s temperature.
    This energy content is proportional to surface radiation.
    Surface radiation (a W/m^2 measure) however is proportional to T^4, (T in Kelvin).

    So I think the recording sites’ temperatures should be raised to the fourth power and then multiplied by the surface areas they have been demarcated for.
    These radiation chunks should be summed together, for a global historical radiation record.

    That sum, surface radiation, is what we want to TS-analyse and then cointegrate with GHGs and TSIs, to do anything physically relevant here, I think.
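
    A toy sketch of what I mean (made-up site temperatures and areas; not a standard product):

        # Sum area-weighted T^4 "radiation chunks" and back out the
        # effective radiative temperature they imply.
        import numpy as np

        SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        temps_k = np.array([288.0, 300.0, 250.0])  # site temperatures, Kelvin
        areas = np.array([2e12, 1e12, 3e12])       # demarcated areas, m^2

        total_radiation = np.sum(SIGMA * temps_k**4 * areas)  # watts
        t_eff = (total_radiation / (SIGMA * areas.sum())) ** 0.25
        print(f"effective T = {t_eff:.1f} K, plain mean = {temps_k.mean():.1f} K")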

  1845. DLM Says:

    Contained in yet another very cogent and useful summation of this long-running discussion that should enable us to move on, is the following:

    HAS says:”Turning to the problem of your weight what does this thread teach us? The first thing is that this thread doesn’t try and relate weight to what you eat – it just looks at your observed weight over time. This doesn’t mean this is all irrelevant. Statistics demands that you know the nature of this series before you do anything else.”

    HAS, you are looking at this from a much different perspective than is Bart. Bart is not in the statistics business, he is in the climate change/AGW business.

    His choice of the weight gain analogy is perhaps revealing. Everybody knows that if you eat more, you will gain weight, right? Which to Bart is a good analogy for the ‘increased CO2 causes warming’ consensus. The deal for the keepers of the consensus is: when you start out knowing that food intake has increased, or CO2 has increased, then you already know the ultimate effect. It’s physical. Even if you don’t really see a significant weight gain, or it doesn’t really get noticeably hotter for, say, fifteen years or so, you know it is happening. Just wait for it. Don’t despair. Something is masking the effect. You will get very fat and really hot, any day now.

    So when someone comes along, who does seem to be in the statistics business, and he claims that the statistical analysis you have presented to support your settled science is nonsense, you are really not concerned. You already know the nature of the data series. And you have already found the answer you were looking for, using the old reliable OLS stuff.

    Thus, the science remains settled.

  1846. Bob_FJ Says:

    HAS, you wrote in part:

    Just looking at graphs allows you to develop hypotheses about what is going on, but to test these robustly you need to turn to statistics, first to identify the nature of the time series being dealt with (the subject of this thread) and then how to properly undertake statistical inference about the relationship between them (not yet looked at in this thread). Hence my earlier comment that there are statistical techniques for looking at the time series you are interested in.

    Thanks for that HAS, but let me try and shorten/simplify my original question:***

    The two HADCRUT northern and southern hemisphere time series graphed here are distinctly different, both in the complex derivation of their 160 data points and in having different drivers*. Both have the same characteristic smoothed curve shape, strongly suggesting a significant ~60-year cycle, with both in phase and rising. Also, the differences in magnitude are of the order expected**.

    What, do you think, is the probability of this complex match arising from 2 x 160 “coin tosses”?

    *Even what may seem to be nominally the same drivers in both the NH & SH are probably modified by other parameters such as cloud cover and air/water circulation etc.
    **There are credible hypotheses for some of the divergences in magnitudes.
    ***my original question is most expansive here

  1847. Marco Says:

    Max,

    Why don’t you know that the claims from Watts’ surfacestation project have been found….errr…outright wrong? (In fact, the ‘poorly sited’ stations introduce a COOLING bias)
    See Menne et al 2010.

    And why don’t you know that no stations were removed? The facts are that around 1992 a huge effort was made to collect data from a large number of stations that never or poorly reported to GHCN. These were then *added*.

    Oh, and the claim that removing the stations at high latitude and/or altitude introduces a warming bias…has been shown wrong, too. Hilariously, Watts even put up a post from Spencer who noted the same thing. After all, the satellites showed the same warming, while not being affected by any station addition or drop-out. Of course, several others have shown the claim to be wrong by analysing the data, and posted those results on sites like Lucia’s Blackboard (Zeke Hausfather), and Tamino (Grant Foster himself). But also the guys as Clearclimatecode did the analysis.

    And while Max tries to jump up and down and point to people with a supposed agenda, Jeff Id made his own reconstruction (with some help from others)…and got a higher warming trend than HADCRU!

    Thermal Hammer


    Bob Tisdale? Same story!

    And then there is this little story, claiming a *cooling bias* due to instrument exchange.
    http://rankexploits.com/musings/2010/a-cooling-bias-due-to-mmts/

  1848. manacker Says:

    Marco

    You can’t be serious! AC exhausts, asphalt parking lots, heated buildings, etc that introduce a “cooling bias”? ROTFL.

    Deleting two-thirds of the stations (many in the Arctic and sub-Arctic), which coincides with an apparent substantial rise in temperature, yet “had no influence on the record” (or even introduced a “cooling bias”)? C’mon, Marco.

    “Ex post facto corrections”, “variance adjustments” and other manipulations to the record which have no real impact? Get serious.

    Thermometers that do not even have the accuracy to measure the minute decadal variations in temperature?

    Records managed by an overt AGW-activist (Hansen) or a Met Office (Jones) that issues dire warnings of BBQ summers, record hot years, unusually mild winters, etc. (all of which turn out to be BS)?

    What we want are unbiased and impartial weather data, not AGW propaganda.

    But, having said all that, these are the only records we have, as suspect as they might be, so we have to live with them. As I wrote earlier, we just shouldn’t take them too seriously.

    There is just too much room for error, and too much “incentive” for the error to go into one direction.

    Max

  1849. Marco Says:

    Max, handwaving doesn’t cut it here. The data analysis has been done, and it shows what it shows. You can then try to claim it can’t be true, but then you’d have to show the data analysis is wrong. Try it. Several ‘skeptics’ have tried, and only confirmed the result.

    Your claim on the data stations ‘drop-out’ is laughable. For starters, the satellites show the same warming. How exactly are those affected by fewer and more stations in the land-based reconstruction? Right, they are not. Another tiny little problem: it is known that high altitudes and high latitudes are warming faster. Gee, what does ‘removing’ the stations that warm faster do to the trend?

    You also completely fail to acknowledge that several ‘skeptics’ have made their own temperature reconstruction, yielding a trend that exceeds that of HADCRU. That is, according to these ‘skeptics’ HADCRU and GISTEMP have an error that points downwards…oh, those dastardly alarmist, underestimating the warming trend in their data to….eh…eh….eh….yeah, Max, the incentive for the error to go into one direction has resulted in an UNDERESTIMATION by the supposed alarmist, if the reconstruction of the ‘skeptics’ is to be believed.

    And it is hilarious that you link the Met Office weather predictions and Phil Jones. Hint: the Met Office and CRU are not the same organisation. They are not in the same place. And Jones is only employed at one of those two places. It isn’t the Met Office…

  1850. manacker Says:

    Marco

    Sorry, you have not brought any real data to show that the siting distortions listed by Watts are unreal.

    The claims of UNDERESTIMATION of GISS and HadCRUT are not credible.

    Sure the Met Office and Hadley are totally separate and Hansen and Jones have no influence on the GISS and HadCRUT records.

    The “globally and annually averaged hand-picked land and sea surface temperature” record is a mess. But it’s all we have, Marco.

    The satellite record only started in 1979. It shows a slightly lower rate of warming than the surface records (although greenhouse warming should occur more rapidly in the troposphere than at the surface).

    I am not arguing whether or not it has warmed over the past 150 years.

    I am not disputing that this occurred in three statistically indistinguishable multi-decadal warming spurts, with multi-decadal slight cooling spurts in between, and the length of each half-cycle of about 30 years.

    I am also not disputing that the latest of these warming spurts occurred in the late 20th century, when there was also a measured increase in atmospheric CO2, which may well have been related to the warming.

    I am simply telling you that several studies show that the temperature record is a construct with several open questions as to its reliability.

    Max

  1851. phinniethewoo Says:

    I think VS’s break before a cointegrating exposition is justified.
    From this thread it is apparent that a better examination of the observed data is due. This is not just about statistics! It is about physics and arithmetic.

    We now have a misspecified “temperature” record, not just for statistical reasons: Paul_K’s posting (I am sure there are a few reports to be found under the rug around the idea) completely undercuts the present method of getting at the GISTemp anomalies.

    Jensen’s inequality on fourth-power summations demands we reprocess all UHI site data for time averaging (how we arrive at yearly averages) AND spatial averaging (site juggling).

    In Bart’s graphs we see some juggling in 1850-1935, and can eyeball some rise in 1970-2009, in this comically misspecified GISTemp record, but how would a record recalculated for radiation look? We don’t know.

    There is a massive amount of (admittedly adulterated, but OK) site data out there, and it should be used differently! Homework!! Something to fill a rainy Sunday afternoon with. With the speed of fancy institutes nowadays: see y’all back in 20 years. Put on your yellow jackets and safety goggles, and ON YER BIKES!!!

  1852. manacker Says:

    Marco

    You wrote:

    Your claim on the data stations ‘drop-out’ is laughable. For starters, the satellites show the same warming.

    No, Marco. The satellites do not show the same warming.

    Comparing the average temperature for 1986-1990 with that for 1991-1996, the surface record (average of GISS, NCDC and HadCRUT) shows a warming of 0.04°C, while the satellite record (average of RSS and UAH) shows a cooling of –0.02°C. So the net difference over the two 5-year periods before and after 1990 is 0.06°C. Is this a spurious warming signal that should be subtracted from the surface records? Who knows?
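
    (The comparison is easy to reproduce from the published annual series; a sketch, with placeholder array names:)

    import numpy as np

    def period_mean(years, anoms, start, end):
        # Mean anomaly over the inclusive year range [start, end].
        years, anoms = np.asarray(years), np.asarray(anoms)
        return anoms[(years >= start) & (years <= end)].mean()

    # surface / satellite are annual anomaly arrays aligned with `years`:
    # step = (period_mean(years, surface, 1991, 1996)
    #         - period_mean(years, surface, 1986, 1990))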

    Max

  1853. Marco Says:

    @Max:

    1. I pointed you to Menne et al 2010. They crunched the numbers. Watts tried the handwave approach, Pielke Sr tried to move the goalposts (“boohoo, they did not offer Watts to co-author”, as if Watts could ever co-author a paper that demolishes his one claim to fame). Watts also deletes those messages that ask him when he will finally do the analysis he promised to do when surfacestations would be 75% complete. It most likely has been since early last year. Gee, I wonder why. You also tried the handwave approach, but don’t even pay attention to the scientific literature. Why don’t you just admit you were wrong?

    2. You then attack your ‘fellow’ ‘skeptics’ who found a faster warming (and yes, their analysis may well be wrong), while failing to realise that this contradicts your own claim that having people like Hansen, Jones and Karl is likely to have introduced a warming bias in their reconstruction. Why don’t you just admit you were wrong?

    3. You linked the Met Office’s failed prediction to Jones. This is hilariously stupid. Why don’t you just admit you were wrong?

    Oh, and Hadley is actually part of the Met Office. Sigh.

    4. You did not just point to potential issues with the temperature record. You *strongly* suggested that the temperature record is affected by various aspects that you claim will introduce a warming bias, even going as far as suggesting bias from the researchers involved (Jones and Hansen in particular).

    Regarding your post on the trends: hilarious, looking at a 5-year period. The loss of data stations is something that should affect the record from 1990 onwards!

  1854. Bart Says:

    On both this and the CRU thread I removed a few off topic comments. Keep it polite, on topic and substantial please.

  1855. manacker Says:

    Marco

    You made a false claim that the satellite and surface records both showed the same warming immediately after 1990 (when many stations were shut down), and I provided the data to show that you were wrong.

    Indeed, the data show that the satellite record has shown less warming overall than the surface record (despite the fact that GH warming should show just the opposite, and IPCC even claims the opposite!)

    Admittedly, the satellite record only started in 1979, so there is no “long-term trend” here.

    Watts showed some of the “adjustment” and “correction” problems with GISS and NCDC (where raw data are available). HadCRUT raw data are not available, so it is difficult to say whether the same problems exist there.

    All three records have been “corrected” ex post facto, which I have personally observed. In most cases these “adjustments” make older temperatures look colder, so that trends show greater apparent warming.

    Marco, we are rehashing the same old stuff over and over again. I have shown you various sources that indicate that the surface record has a lot of “warts and blemishes”, and you argue that this is not so.

    I also showed you a study which pointed out the inaccuracy of the thermometers to start off with (greater than the decadal rate of temperature change). You have not taken a stand on this point.

    All of this simply points to the fact that the temperature record we have (the “globally and annually averaged hand-picked land and sea surface temperature”) is a somewhat questionable construct.

    On top of all this comes the fact that the guys in charge of all three records are firm believers in the dangerous AGW paradigm, so they are biased to start off with.

    If you, personally, believe the temperature record is an absolutely pure and true value, that’s fine. Just don’t try to sell me this story. There are just too much data out there showing the problems.

    Max

    PS What does this all mean? Not too much, really, since it obviously has been warming since 1850. It’s just that we are making long-term projections based on some theoretical deliberations and a dicey past record. That’s all.

  1856. Bob_FJ Says:

    Marco, you wrote a lot of things here which seem quite daunting in total, but let’s take some of them, one at a time:

    [1] And while Max tries to jump up and down and point to people with a supposed agenda, Jeff Id made his own reconstruction (with some help from others)…and got a higher warming trend than HADCRU!
    http://noconsensus.wordpress.com/2010/03/24/thermal-hammer/#more-8539

    Item [1]: If you read “Thermal Hammer” with more care, you should see that Jeff Id uses certain code to analyse the GHCN inventory file (which is a different thing from HADCRUT & GISS).
    You should also note that there is some jocularity, and what could be taken as caveats.

    Here is part of what Jeff says towards the end:

    Several skeptics will dislike this post. They are wrong, in my humble opinion. While winning the public “policy” battle outright places pressure for a simple unified message, the data is the data and the math is the math. We’re stuck with it, and this result. In my opinion, it is a better method. Remember though, nothing in this post discusses the quality of the raw data. I’ve got a lot of information on data quality, for the coming days. In the meantime, consider what would cause such a huge difference in trend between the northern and southern hemispheres.

    That is like a pregnant question, with a suggestion of more to come.
    Here is a graphical composite where that difference can be seen, and also that the resultant curves are VERY different from all the other presentations, including from satellites.
    In all except GISS, the Super El Nino of 1998 is tops by far, but NOT in Jeff Id’s presentation of GHCN. The GISS hottest year of 2005 also disappears. It looks to me as if something is not right between GHCN and the published curves. Perhaps it is a coding error(?), but whatever it is, it seems to be a case of “watch this space”.

  1857. Marco Says:

    @Max:
    If removing stations (but remember, they were not actually removed, contrary to Watts’ false claims) has any influence, it should last from 1990 onwards. They are not used from 1990 onwards, and thus any influence should show up for the WHOLE range 1990-2010. That’s the first line of evidence that the use of fewer stations has no effect. The second line of evidence comes from all those people who have done the number crunching on the GHCN network, looking at the trend with and without the added stations. Result? If different at all, the ‘removal’ of stations introduced a cooling trend.
    You make much of Watts’ data analysis, but this is at the very least naive, considering his inability (or unwillingness?) to do the analysis others had done, thus resulting in a libelous false claim (deliberate removal of stations to introduce a warming trend). He’s also been caught many times in making even the most basic mistakes (see e.g. his posts on the anomaly and different baselines used). Your reference to him has zero credibility.

    And I would appreciate it if you also refrain from making false claims. I did *not* claim the surface station record is free from warts and blemishes. I *do* claim that there is no evidence that those warts and blemishes go mostly in one direction, and that the influence of people like Jones and Hansen will make that one direction towards a warming bias (as you claimed). There’s no evidence for that. None. Zip. Zilch. Au contraire, if ‘skeptics’ like Jeff Id are right, it is the opposite! You are thus making a libelous attack on people like Jones and Hansen without any evidence, similar to Anthony Watts. While I disagree with Jeff Id on several aspects of his argumentation, at the very least he is honest enough to follow the data where it goes, instead of handwaving and fingerpointing without any evidence.

    Regarding the thermometer issue: look up the difference between accuracy and precision, AND try to make a coherent argument as to how these would affect a trend in one direction or the other. Note, you’d have to take into account that there are a few thousand thermometers.

  1858. Marco Says:

    @Bob_FJ:
    My reference to Jeff Id’s reconstruction was not to discuss the correctness of any of the reconstructions (yes, it is markedly different from the others, for whatever reason), but to rebut his unsubstantiated claim that the involvement of Hansen and Jones (and Karl) in the temperature reconstructions is likely to have led to an overestimation of the trend. Apparently, one can use a procedure that actually yields a much faster trend, and that procedure comes from a ‘skeptic’! Since Bob Tisdale comes with a similar result, we have TWO ‘skeptics’ that claim the procedures for GISTEMP, HADCRUT, and NCDC are underestimating the trend. Exactly the opposite to Max’ claims.

    Of course, it may well be that both Jeff Id and Bob Tisdale are wrong, but at the very least they’ve done the number crunching, unlike one ex-weather forecaster, whose work Max so gladly references.

  1859. manacker Says:

    Marco

    You wrote that “removing stations” around 1990 “should show up for the WHOLE range 1990-2010” rather than just over the period 1991-1996.

    The most significant “jump” in temperature should obviously be just after the stations were removed. And, as the records show, the satellite record showed net cooling (1991-1996 average compared to 1986-1990 average) while the surface records showed warming.

    Marco, the whole discussion is beginning to get repetitive, and we are talking about a long-term temperature record that shows a very small decadal increase of 0.04C per decade, measured by thermometers that have an accuracy of 0.2C to 0.33C.

    The HadCRUT record (preferred by IPCC) shows three statistically indistinguishable multi-decadal warming “blips” (of about 30 years each and warming rates between 0.14 and 0.16C per decade), the first two of which occurred prior to any significant human CO2 emissions.

    The “globally and annually averaged (hand-picked) land and sea surface temperature” used to measure all this is a construct, which is not transparent and leaves considerable doubt as to its real absolute value.

    As of 1979 we also have a tropospheric (satellite) record. This shows a slightly slower rate of warming than the surface record over the same short-term period.

    So it is fair to say that we can safely conclude that it has been warming ever so slightly since 1850 (when the modern HadCRUT record started), and that this has occurred in three multi-decadal warming cycles of around 30 years each, with slight cooling cycles of about the same time length in between.

    I do not believe that you would argue the above conclusions, but who knows?

    Max


  1860. manacker Says:

    Marco

    Coming to your final point.

    You write that there is no “evidence” that Hansen (or Jones) have influenced the GISS and HadCRUT temperature records “toward a warming bias”.

    I agree. You have chosen your words well. Perhaps a completely open independent audit would reveal this, but such an audit has not yet occurred, so there is no “evidence”.

    The “Climategate” revelations have not impacted Hansen directly, and the Jones “indiscretions” which have been exposed do not include any direct evidence that the HadCRUT record was “fudged” (only that some pretty sloppy work was done).

    But let’s try a bit of logic.

    If a car salesman tells you all about the virtues of a model he is trying to sell you, you might suspect (even though there may not be any “evidence”) that he may be exaggerating his story in order to “make the sale”.

    If an avowed AGW-activist like Hansen, who testifies before the US Congress about “dangerous CO2 levels”, irreversible “tipping points” leading to extinction of species, etc., calling for ending coal-fired power plants and a carbon tax, and who compares coal trains to the “death trains of WWII” in another statement is in charge of a temperature record, it is reasonable to assume that he will, like the car salesman, exaggerate his story in order to “make the sale”.

    But, as you wrote, this is not “evidence”.

    You wrote that there is no connection between the HadCRUT record, Phil Jones and the Met Office.

    Researchers at the Met Office Hadley Centre produce and maintain a range of gridded datasets of meteorological variables for use in climate monitoring and climate modelling. This includes the HadCRUT3 temperature record.
    http://hadobs.metoffice.com/

    This same Met Office has issued numerous “warnings” of “record hot years”, “BBQ summers”, “unusually mild winters”, etc. (caused by AGW), none of which have actually occurred.

    Phil Jones has devoted a considerable part of his career to the construction of the HadCRUT dataset. He has also written numerous papers on AGW, including several cited by IPCC in AR4.

    The HadCRUT record has been ”corrected” and “adjusted” several times “ex post facto”. The net result of these changes has been to make recent warming look more significant.

    There is no “evidence” that this all hangs together, as you wrote.

    Max

  1861. Tim Curtin Says:

    I tend to agree with Max rather than Marco, as the latter has yet to explain why GISS constantly changes the historic temperature record. I am now occasionally recording the GISS rankings of GMT, and it really is amazing how Jim H. keeps rewriting history. For example, back in 2006 GISS stated the “anomaly” in 1880 from the mean in 1950-80 was -11, while today in GISS the anomaly for 1880 from 1950-80 is -24. How to rewrite History (or invent AGW) in one easy step, copyright James Hansen.

  1862. Adrian Burd Says:

    manacker says:

    “But let’s try a bit of logic.”

    “logic”….hmmmm, ok I’ll fly with this “logic”.

    “If a car salesman tells you all about the virtues of a model he is trying to sell you, you might suspect (even though there may not be any “evidence”) that he may be exaggerating his story in order to “make the sale”.”

    It would be ever so nice if this “logic” were applied to both sides, but as many in this thread have shown, it seems to be applied in only one direction, i.e. to make libelous statements about climate scientists.

    Adrian

  1863. A C Osborn Says:

    Adrian Burd Says:
    April 18, 2010 at 18:23
    i.e. to make libelous statements about climate scientists.

    Bart I know this is Off topic, but I can’t let that statement go without an answer.

    Adrian, you did read the Climategate emails and what the “climate scientists” said and called anyone who questioned their findings, even saying that they were glad one of their critics was dead?
    You do know that anybody who does not agree with the “climate scientists” is called a “denier”, don’t you, regardless of their education or profession.

  1864. Marco Says:

    Max,

    You still don’t get it, do you? You simply cannot use a very short term to look at trends; they are too much affected by noise (and just averaging records that use different methods introduces another source of error). You can more reliably determine a trend pre-1990 (many stations) and post-1990 (much fewer stations) and compare those two. Try that one. And I don’t see you providing *any* evidence that the other line of evidence (actual number crunching of the station data with and without the added stations) is somehow wrong.

    Second, an accuracy of 0.2-0.33 is more than enough to see a trend. Accuracy means that the *absolute* value may show an *absolute* error (in either direction).

    Unless you can show that the accuracy drifts over time, mainly in one direction, it simply has no effect on the trend!
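
    (A toy simulation makes the point; made-up numbers: every instrument gets a fixed calibration error of up to +/-0.3 degrees, yet the recovered trend is unbiased.)

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1950, 2010)
    true_trend = 0.015  # degrees per year, made up
    n_stations = 1000

    offsets = rng.uniform(-0.3, 0.3, n_stations)    # fixed per-instrument bias
    readings = (true_trend * (years - years[0])     # shared warming signal
                + offsets[:, None]                  # accuracy error
                + rng.normal(0.0, 0.2, (n_stations, len(years))))  # weather noise

    slope = np.polyfit(years, readings.mean(axis=0), 1)[0]
    print(slope)  # ~0.015: constant offsets drop out of the slope entirely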

    Third, you once again try to link Phil Jones to the Met Office and its occasional errors in the seasonal forecasts. However, Phil Jones is in no way involved in the seasonal forecasting. In fact, I’d be very surprised if *any* of the people involved in HADCRUT are also involved in the weather forecasting. The two are completely different entities. Note also that the Met Office puts probabilities in its forecasts. The few instances they got it wrong fit quite nicely with those probabilities. Making more of it is a failure to understand probabilities.

    Fourth, in relation to this issue of linking weather forecasts of the Met Office to Phil Jones, you once again try to move the goalposts by putting words in my mouth. That’s the second time you tried that. This appears to be a pathological condition for you; you’ve done that to others also (including here to VS).

    Fifth, your analogy with the car salesman is a nice try, but you don’t get a cigar. Scientists love to show other scientists wrong. HADCRUT is a direct result of scientists believing they could do a better job than those at GISS. Moreover, they are not selling anything. Their comments on the dangers of AGW come from an analysis of the data, rather than the confirmation bias you try to invoke using the car salesman analogy. Of course, GISTEMP is openly available, data and code, so why don’t you do your open, “independent” audit? Not that I have much trust in your abilities to do so, considering your use of Watts as a credible source of information and analysis.

    Sixth, improved methods are continuously developed. That this supposedly increases the recent warming is not backed by any evidence. Tim Curtin’s little example is a funny one, since higher temperatures pre-1940 actually make the case for CO2-induced warming even bigger. Climate scientists currently link the warming pre-1940 mainly to solar influence. If there was not as much warming, as would be the case if Tim’s suggestion of data fiddling is taken as correct, there would be less solar influence on warming. This would make the case for the sun driving the recent global warming, so often put forward by ‘skeptics’, even less likely!

    @Tim:
    New data is constantly added to GHCN, including older data. Moreover, every now and then errors are identified. This means there is a constant update of the record. That this happens to lead to results you do not like is not the fault of the data.

  1865. Marco Says:

    @A C Osborn:

    You are aware that the one ‘skeptic’ whose death Jones did not exactly find something to cry about was a ‘skeptic’ who repeatedly and constantly accused climate scientists of fraud?

    And whereas Jones made his remark in a ‘private’ e-mail, this ‘skeptic’ did so on a blog that anyone could read, out in the open, with the explicit goal that people would be able to see this.

    Moreover, please provide evidence that climate scientists call everyone who does not agree with their results a “denier”. Go ahead, cite the UEA e-mails. There’s a whopping two examples of the word “denier” in there. Or try realclimate, ‘loads’ of examples, too.

  1866. manacker Says:

    Marco

    You opined:

    You still don’t get it, do you? You simply cannot use a very short term to look at trends; they are too much affected by noise (and just averaging records that use different methods introduces another source of error).

    I can only agree, Marco. The only record that makes any sense at all is the long-term record since 1850, which shows three multi-decadal warming cycles of around 30 years each, with slight cooling cycles of the same length in between, with an underlying warming trend of 0.04C per decade (as we have been recovering from a colder period, called the Little Ice Age).

    Short-term “blips” (like the one from 1976 to 2000) are meaningless, in themselves, and should not be taken too seriously (as IPCC has unfortunately done).

    Marco, your analysis of the Met Office “hit rate” on predictions is flawed. They predicted hot summers, hot years, overly warm winters, all of which turned out to be total BS. They should get out of the “prediction” business, and stick with the reporting of actual data, instead.

    Contrary to your suggestion, it appears that it is you who “still don’t get it” (not me), Marco. I don’t have to “put words in your mouth” (as you surmise). You do that well enough all on your own.

    But let’s get down to the basics.

    Do you agree that the Hadley record shows an underlying warming trend of 0.04C per decade since 1850? (Yes or no. If “no”, please specify).

    Do you agree that there have been three statistically indistinguishable multi-decadal warming “blips” within this record, each lasting around 30 years, and each showing warming of between 0.14 and 0.16C per decade? (Yes or no. If “no”, please specify)

    Do you agree that these warming “blips” were followed by slight cooling “blips”, each of the same approximate length of 30 years? (Yes or no. If “no”, please specify.)

    Do you agree that the first two of these warming “blips” occurred before human CO2 emissions could have made a significant contribution to the warming? (Yes or no. If “no”, please specify.)

    Once we clear up the above points, we will probably not have too much to disagree about, Marco.

    Awaiting your specific answers to these simple and straightforward questions.

    Max

  1867. Bob_FJ Says:

    Marco, you wrote, re item [1] (Roman’s Thermal Hammer)

    “…Apparently, one can use a procedure that actually yields a much faster trend, and that procedure comes from a ’skeptic’!…”
    “…Of course, it may well be that both Jeff Id and Bob Tisdale are wrong…”

    I’ve checked out Jeff Id’s recent update “Thermal Hammer Part Deux“, including the comments, and it tells me that the code is still not properly developed. Furthermore, I feel that there is nerd group behaviour that overlooks an apparent contradiction to what they have achieved so far. A simple test is to look at the satellite data. Here is a comparison with UAH and RSS according to Wikipedia, which is unlikely to be corrupted by sceptics (though it could still have a William Connolley-type influence?).
    Can you see what I’ve been saying about the 1998 “Super El Nino”, which unaccountably disappears in the “hammer job”?
    As I said before; “watch this space!”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Now we come to item [2] for the first time:

    [2] “…Since Bob Tisdale comes with a similar result, we have TWO ’skeptics’ that claim the procedures for GISTEMP, HADCRUT, and NCDC are underestimating the trend…”
    “…Bob Tisdale? Same story!
    http://i40.tinypic.com/11hxq87.png…”

    Your link only gives a tinypic photo-share graphic that involves CRUTEM3, which is only for land air temperature anomalies. Do you have a link to where it originated, so that I can review the text and see if there are other graphics that might be compared with those shown by Jeff Id?

  1868. Bob_FJ Says:

    Tim Curtin, you wrote:

    For example, back in 2006 GISS stated the “anomaly” in 1880 from the mean in 1950-80 was -11, while today in GISS the anomaly for 1880 from 1950-80 is -24. How to rewrite History (or invent AGW) in one easy step, copyright James Hansen.

    I seem to remember seeing somewhere an archival thing showing GISSTEMP with a high 1998 El Nino, just like everyone else. I can’t find it now though. Are you aware of it at all?

    The GISS low 1998 and high 2005 numbers etc show a more relentless warming trend than everyone else of course.

    Anyone, if Tim can’t help?

  1869. Willis Eschenbach Says:

    Marco Says:
    April 18, 2010 at 19:07

    @A C Osborn:

    You are aware that the one ’skeptic’ whose death Jones did not exactly find something to cry about was a ’skeptic’ who repeatedly and constantly accused climate scientists of fraud?

    And whereas Jones made his remark in a ‘private’ e-mail, this ’skeptic’ did so on a blog that anyone could read, out in the open, with the explicit goal that people would be able to see this.

    That was my friend John Daly. A google search of his web site reveals that exactly three pages of his site contain the word “fraud”.

    Only one of these three is by John, the other two are by guest authors. None of them refer to individual scientists, they are all talking about the IPCC.

    John said:

    IPCC 1994 and IPCC 1995 both imply that the agreement of the modified models with mean global temperature provides confidence in the outputs of those models. Such a claim is scientific fraud.

    I can’t disagree with that …

    In other words, totally contrary to your libellous claim, I do not find a single instance of John or anyone on his site calling any climate scientist a “fraud”.

    A simple apology to the memory of someone who was both a good man, and more of a gentleman than either you or I, will suffice.

  1870. Bob_FJ Says:

    Marco,
    I’ve responded to your April 18, 2010 at 19:07, over at the open thread, because it has nothing to do with Bart’s graphs.

  1871. Tim Curtin Says:

    Bob_FJ @ April 19, 2010 at 00:35

    said replying to my post “I seem to remember seeing somewhere an archival thing showing GISSTEMP with a high 1998 El Nino, just like everyone else. I can’t find it now though. Are you aware of it at all?”

    Well, the standard GISTemp site http://data.nasa.giss.gov gives the anomalies as now perceived, and that for 1998 does not seem to have changed recently: 69 in 2002, 71 in 2008, and 70 in 2010.

    “The GISS low 1998 and high 2005 numbers etc show a more relentless warming trend than everyone else of course.” I am not sure about that, as 1998 has not really changed, but I agree that the “hotter” years of 2005 (75), 2007 (74) and 2009 (72) are very dubious, as there is no evidence of really stronger El Ninos in those years than in 1998. My analysis of randomly selected GISS data by gridded latitudes and longitudes shows how capricious its data sets are: places come and go, but the Global magically emerges higher every two years. The claim on 12 April that March 2010 was the 4th hottest ever is garbage when, as usual, half the globe is marked in gray (no data), including, amazingly, most of Canada (see the map for March at Climate Audit, whose Steve Mc noted the amazing claimed heatwave in Finland, now corrected by GISTemp).

  1872. Tim Curtin Says:

    Marco @April 18, 2010 at 18:53 said in response to mine of April 18, 2010 at 15:24
    “Tim Curtin’s little example is a funny one, since higher temperatures pre-1940 actually make the case for CO2-induced warming even bigger. Climate scientists currently link the warming pre-1940 mainly to solar influence. If there was not as much warming, as would be the case if Tim’s suggestion of data fiddling is taken as correct, there would be less solar influence on warming.”

    Marco: I fear you completely misunderstand the variations in the GISTemp anomalies I cited for 1880: “for example, back in 2006 GISS stated the ‘anomaly’ in 1880 from the mean in 1950-80 was -11, while today in GISS the anomaly for 1880 from 1950-80 is -24”. That made 1880 seem colder relative to 1950-80 than the original. Converting the anomalies to actuals, GISTemp in 2002 had the global mean temperature in 1880 at 13.89°C, and by March 2010 this had fallen to 13.76°C, i.e. colder, not warmer, as your comment claims, thereby exaggerating the apparent warming since 1880.
    Then Marco added “@Tim: New data is constantly added to GHCN, including older data. Moreover, every now and then errors are identified. This means there is a constant update of the record. ”

    So what new data does GIStemp have for 1880? Do tell! It is certainly not any data for the 99% of Africa, Central America, and SE Asia that are absent from all GIstemp data sets for 1880-1900, all of them hot places. So any revisions/new data would be more likely to raise GMT for 1880-1900 than reduce it.

    Your final comment, “That this happens to lead to results you do not like is not the fault of the data”, is untrue: all I would like to see is data that have NOT been subjectively tampered with by Hansen, Ruedy and Sato to tell the story their boss likes to see. ALL statements by both HadCRUT and GISS about unprecedented warming since 1850-1900 (CRU) or since 1880-1900 (GISS) are untrue, as there is no basis for such a claim.

  1873. manacker Says:

    Hey, guys, let’s cobble together a “globally and annually averaged hand-picked land and sea surface temperature” construct, so we can figure out when we’ll reach our “irreversible tipping point”..

    Got no data for Africa or most of Latin America for the early years? No problem (we’ve got lots of Arctic / sub-Arctic stuff).

    For later years we can add in lots of good Africa and Latin America stuff and toss out a bunch of those ghastly Arctic sites (hard to find station managers anyway, since the USSR collapsed and the nearby Gulags have all been shut down).

    While we’re at it, let’s just move a bunch of stations around and throw out two-thirds of them (strategically selected, of course). Let’s keep the ones where AC exhausts and asphalt parking lots have been put in.

    Duh! It’s warming! A robust record, if I ever saw one!

    Max

  1874. Marco Says:

    @Max,

    I’m not interested in your attempts to move the goalposts. I reacted to your unsubstantiated claims that GISTEMP, HADCRUT and NCDC are unreliable because of the people involved in their reconstruction. I also reacted to your support of claims by Anthony Watts that have been proven false. When you finally admit that you made completely unsubstantiated claims, I may be interested in taking the discussion further.

  1875. Kweenie Says:

    “The few instances they got it wrong fit quite nicely with those probabilities.”

    The few instances? The Met Office was so pathologically wrong (and always on the warm side; ever wondered why??) that they stopped giving seasonal predictions from this year…

    “And whereas Jones made his remark in a ‘private’ e-mail”

    If Jones used his UEA email address, then this is considered not to be private and is subject to FoI requests.

  1876. Marco Says:

    @Tim:
    Still don’t get it? The colder one ‘makes’ the 1880-1970 period, the more warming needs to be explained through factors other than CO2 increase. In other words, the *less* warming in 1880-1970 period, the *less* room for natural factors being any explanation for the current warming.

    Your disingenuous attempts to paint the changes in GISTEMP as deliberate changes to exaggerate warming are biting you right back in the behind.

  1877. Marco Says:

    @Willis:
    Dog whistle. Look it up. You don’t even need to say “fraud”, you can write stuff so that *others* will make the claim.

    Jones has had what he considers really nasty experiences with John Daly. He’s got every right, in a personal e-mail, to indicate he’s not going to cry over that event. Whatever nice person you and many others may think John Daly was, you cannot demand that others feel the same way and publicly express something that they do not feel. Hypocrisy is not the way to go.

  1878. manacker Says:

    Marco

    You wrote:

    I’m not interested in your attempts to move the goalposts. I reacted to your unsubstantiated claims that GISTEMP, HADCRUT and NCDC are unreliable because of the people involved in their reconstruction. I also reacted to your support of claims by Anthony Watts that have been proven false. When you finally admit that you made completely unsubstantiated claims, I may be interested in taking the discussion further.

    You are just making yourself look ridiculous with your statements, as is becoming apparent here.

    You have been unable to show me any studies confirming that thermometers next to AC exhausts (in summer), heated buildings (in winter) and asphalt parking lots (all year round) will not show a spurious warming signal, as Watts has reported.

    You have tried to claim that the failed Met Office predictions (BBQ summers, record hot years, etc.) have actually been good forecasts, which “fit quite nicely with those probabilities” (whatever that is supposed to mean).

    You have not been able to demonstrate that the “homogenizing” of data, as well as the “ex post facto corrections” as practiced by GISS, NCDC and HadCRUT, is not the same as “cooking the books”.

    Sorry, Marco, you are losing credibility here.

    Max

  1879. manacker Says:

    Bart

    Believe this has been pointed out here by several bloggers, but it is clear that the global surface temperature record needs an in-depth, independent and transparent re-work.

    As Judith Curry stated recently:
    http://bishophill.squarespace.com/blog/2010/4/18/judith-curry-on-oxburgh.html

    In my opinion, there needs to be a new independent effort to produce a global historical surface temperature dataset that is transparent and that includes expertise in statistics and computational science.

    The record as it now stands just has too many “warts and blemishes” to be taken too seriously as a data set for determining long-range policies related to climate.

    Max

  1880. A C Osborn Says:

    manacker Says:
    April 19, 2010 at 11:17 Sorry, Marco, you are losing credibility here.

    Max, he is not losing credibility; he has lost all credibility by refusing to answer your and Tim’s questions.
    As for his response to me and Willis using the “Hypocrisy is not the way to go” line, that is quite staggering under the circumstances.

  1881. Bart Says:

    Enough of the ‘he said, she said’ type of discussion. Take it to the open thread or have your posts removed.

  1882. manacker Says:

    Bart

    I am sure you did not consider my quotation of Judith Curry’s statement that the surface data set needs a complete rework as “he said, she said” type of discussion.

    Right?

    Max

  1883. Bart Says:

    I think Curry is overstating the case that there may be something terribly wrong with temp reconstructions, though perhaps her concern is more to restore science’s credibility. In principle, there’s nothing wrong with trying to emulate previous work though, and surely statistics expertise would be a welcome addition. I replied to Curry both at RC and BH, btw.

    See for several temp reconstructions (incl ‘citizen science’ projects) here: http://rankexploits.com/musings/2010/comparing-global-land-temperature-reconstructions/

  1884. phinniethewoo Says:

    Yes, but it is boooring at the open thread, Bart??
    I find it touching that the establishment starts to frame itself as victims now, whining about libel… for when can we expect cynical Pelosi moves…

    My suggestion for reconsidering the areally averaged earth temperature fell on deaf ears. A case of “dure de comprenure” again, with our friends the consensus climate scientists.

    Te = Σ Ti * Si
    Si is the site’s portion of the earth’s surface.
    I do not really know what the physical meaning of this Te is.

    Te is very much unequal to a radiation-based measure
    Tr = Σ Ti^4 * Si
    Tr (or rather its fourth root), note, is the one we should associate with earthly warmth, cosiness and temperature indeed. Not Te.

    Note that for the same Te rolling off the IPCC’s press, we can have infinitely many Tr, depending on how the Ti lie around.

    For example, for the same Te, if you have somewhat warmer poles, as warmists continue to nag us about, you need somewhat colder equators in the construct of Te.
    Now this would have a big impact, bringing down Tr:
    Tequator^4 is a bigger chunk of the Tr equation than Tequator is of the Te equation, so variations at the equator have a bigger impact.

    It is not the ≤ that is of concern; Tr and Te are different animals altogether, which makes a joke of climate concerns based on Bart’s graphs above and Te.

    We could do some Monte Carlo races to alleviate the concerns?
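
    Here is what such a race could look like (a sketch, made-up numbers, equal area shares): pin the arithmetic Te at 288 K and watch Tr climb with the spatial spread, per Jensen’s inequality.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sites = 1000
    w = np.full(n_sites, 1.0 / n_sites)   # equal area shares

    for spread in (5.0, 15.0, 30.0):
        T = rng.normal(288.0, spread, n_sites)
        T += 288.0 - np.sum(w * T)        # force identical Te = 288 K
        Tr = np.sum(w * T**4) ** 0.25     # fourth-root radiation measure
        print(f"spread {spread:4.1f} K: Te = {np.sum(w*T):6.2f}, Tr = {Tr:6.2f}")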

  1885. Tim Curtin Says:

    Marco @ April 19, 2010 at 10:52
    “@Tim:
    Still don’t get it? The colder one ‘makes’ the 1880-1970 period, the more warming needs to be explained through factors other than CO2 increase. In other words, the *less* warming in 1880-1970 period, the *less* room for natural factors being any explanation for the current warming.”

    That’s a non sequitur, if you know what that means. (1) I never referred to the 1880-1970 period. GISTemp has made 1880-1900 look progressively colder, which makes 1900-2009 look a lot warmer. (2) Even if I had referred to a base 1880-1970 period, the GISTemp resort to exaggerating the warming from 1880-1970 in no way reduces the room for natural factors etc. Why would it?

    I did not make 1880 colder between the 2002 and 2010 readings by GISTemp; they did. And since, like IPCC 2007, they consider almost ALL warming to be AGW, the more warming since 1880-1900 the better for Hansen’s world view. Remember IPCC AR4, WG1, p. 671: >90% certainty that humans have exerted a substantial (>50%) warming influence on climate.

    Marco added: “Your disingenuous attempts to paint the changes in GISTEMP as deliberate changes to exaggerate warming are biting you right back in the behind”.

    How so? Whether deliberate or not, the implausibility of Hansen having any basis in 2010 for finding 1880-1900 colder than he did in 2002 is manifest, especially as GISTemp has offered no new evidence for finding 1880-1900 colder than it previously stated (in 2002), in the face of its continuing inability to find any temperature records for 1880-1900 in Panama, Khartoum, Kampala, Kinshasa, Kuala Lumpur, Bangkok, Shanghai, et al., ad lib. Magically, ALL of those places contribute to the 1950-1980 base period for the Great Anomalies; NONE of them contribute to the huge negative anomalies in 1880-1900 so beloved of CRU, GISS & IPCC, and you, ad infinitum.

    Marco, until you acknowledge the importance of base periods like 1950-80 having the same met station coverage as both the ante- (1880-1900) and post-1980 periods, you, like Bart in his graphs atop here, have no credibility.

    So why not have GISS and CRU prepare maps etc for just those areas of the Globe that have the SAME coverage for ALL periods of interest, namely 1880-1900 (or 1910), 1910-1940, 1940-1970, 1970-2000? Actually it is quite easy, and shows zilch GLOBAL warming since 1900. Try it using the excellent Sato-Ruedy map and data sets (I exclude Hansen from this attribution since he clearly did not realise how subversive Sato & Ruedy are).

  1886. manacker Says:

    Bart

    I believe Curry’s statement made two separate but interrelated points:

    The temperature record, as it now stands, has been massaged and manipulated in a non-transparent fashion (even after-the-fact), to the point that:
    a) it is essentially a meaningless construct, and
    b) it has damaged the credibility of “climate science”, in general

    The call for a complete, independent and transparent rework makes sense for both reasons, as I am sure you will agree.

    If a clean and transparent record still shows the kind of multi-decadal cyclical warming trends we see in the current surface records, then they are likely to have been meaningful, and we should try to understand them, rather than simply concentrating essentially all of our efforts on the last warming cycle.

    If this clean record shows overall warming over the entire period that demonstrates a robust statistical correlation with atmospheric CO2, then the case for causation is strengthened.

    And, in the process, climate science again recovers from the “black eye” it has given itself.

    Max

    PS As a second step, Curry also calls for some meaningful paleo-climate studies to replace what is out there today (again with the same two reasons in mind), but that is another point.

  1887. Marco Says:

    @Max:

    I repeatedly referred to Menne et al 2010! But since this apparently is not clear enough, here’s a direct link to the paper:

    Click to access menne-etal2010.pdf

    I also never said the predictions of the Met Office were good forecasts. The facts are that some failed forecasts have resulted in some people making rather large claims about the Met Office always being wrong. The real facts are that the predictions of the Met Office on occasion have failed, but this is something that probability theory predicts. If I throw a coin 100 times, probability theory tells me there is a better-than-even chance (roughly 54%) that I will get 7 heads or 7 tails *in a row*. The Met Office isn’t doing all that badly:
    http://www.metoffice.gov.uk/corporate/verification/city.html
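
    (The run-of-seven figure is easy to check by simulation; a sketch:)

    import numpy as np

    rng = np.random.default_rng(42)

    def longest_run(flips):
        # Length of the longest run of identical outcomes.
        best = run = 1
        for a, b in zip(flips, flips[1:]):
            run = run + 1 if a == b else 1
            best = max(best, run)
        return best

    trials = 100_000
    hits = sum(longest_run(rng.integers(0, 2, 100)) >= 7 for _ in range(trials))
    print(hits / trials)  # ~0.54: a run of seven is more likely than not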

    Your last remark is just plain stupidity. You make the claim that something is wrong, you prove it. Not just loads of handwaving and saying “I do not trust it”.

  1888. Marco Says:

    @Tim:

    I won’t get into the anomaly discussion with you again. That has been explained to you so many times already that I don’t think I can offer an alternative explanation that finally gets through.

    Sadly, for the benefit of others, I will need to point out once again that every colder temperature in the early part of the record means more temperature increase must be explained without being able to invoke (much) CO2 forcing. That is, natural forcings that increase temperature.

    Regarding your claims on Hansen, Sato and Ruedy: wow, talk about ad hominems. And then Willis Eschenbach complains about Phil Jones’ remark in a personal e-mail. Moreover, you’ll need to show the data and analysis, not just claim “zero warming”. Actually, I’d like to see it from credible sources. Jeff Id, Zeke Hausfather, Lucia Liljegren are fine with me. You, not so much.

  1889. Bart Says:

    Manacker,

    “A robust statistical correlation with atmospheric CO2” is not the holy grail of AGW. It is the net forcing that the science expects to drive the temp, and not even in a 1-to-1 way, because of inertia in the system. The CO2 and temp correlation is a huge canard.

    Check out the links in my CRU post on temp reconstructions and adjustments. There ain’t anything there.

  1890. phinniethewoo Says:

    “The CO2 and temp correlation is a huge canard.”

    Interesting.
    If there is no correlation, wot’s the big worry then?
    Al can stop jetting around then to convert us.

    Just for the record: is the -observation- of CO2 and pseudotemp still of any interest in the consensus camp?

  1891. phinniethewoo Says:

    Jeff Id, Zeke Hausfather, Lucia Liljegren are fine with me. You, not so much.

    Here we go again: the “scientist” clown clerus insisting on formal brainrinsing accreditation.

    This is like the FSA carefully pruning away any outside intruders from their protected bank-system biotope, for many years, all the while putting many multiculti nincompoop pictures and “formal education” canards on their site.

  1892. manacker Says:

    Bart

    No one doubts the theory, Bart, but a lack of a robust statistical correlation would make the observed empirical case for causation weak, no matter what the theory says. That’s why I had hoped this thread would come up with something definitive.

    This is no “canard”. Nor is it a “holy grail”. It is simply the way that science works.

    Max

  1893. DLM Says:

    Bart says: “The CO2 and temp correlation is a huge canard.”

    Thank you for finally responding to my question regarding your opinion of Nobel laureate Al ‘Carbon Neutral’ Gore’s use of that huge CO2/temperature chart as a visualization tool in his Academy Award-winning movie.

    Get real Bart. If you could show a robust statistical CO2 and temp correlation, you wouldn’t be calling it a huge canard.

  1894. DLM Says:

    Tim says:”So why not have GISS and CRU prepare maps etc for just those areas of the Globe that have the SAME coverage for ALL periods of interest, namely 1880-1900 (or 1910), 1910-1940, 1940-1970, 1970-2000? Actually it is quite easy, and shows zilch GLOBAL warming since 1900. Try it using the excellent Sato-Ruedy map and data sets (I exclude Hansen from this attribution since he clearly did not realise how subversive Sato & Ruedy are).”

    Over here in the colonies, we call that an apples-to-apples comparison. I wonder why anyone would have a problem with that.

  1895. Willis Eschenbach Says:

    Marco Says:
    April 19, 2010 at 11:02

    @Willis:
    Dog whistle. Look it up. You don’t even need to say “fraud”, you can write stuff so that *others* will make the claim.

    Marco, what you wrote was flat out wrong. You falsely and spitefully claimed that John Daly

    repeatedly and constantly accused climate scientists of fraud?

    When I pointed out that NEITHER JOHN NOR ANYONE AT HIS WEBSITE accused any climate scientist of fraud, you ignored the fact that you had just libelled a good man. Now you accuse him of writing “stuff” so that others will accuse scientists of fraud.

    Words fail me. You make a libellous accusation, and when you are asked to put up or shut up, you invent some new, unspecified, uncited crime of writing “stuff” … and in best AGW form, you don’t provide a single fact to back it up.

    I give up. You wonder why AGW supporters are in ill repute? Regarding Jones and John Daly, see here. Regarding rejoicing at someone’s death, your moral compass needs degaussing …

  1896. NokTang Says:

    Willis, don’t waste your brain cells on Marco. In fact, if I were an AGW proponent I would ask him to stop posting, as he’s doing quite a good job destroying its case. Now of course, he’s doing quite a good job ;)

  1897. Marco Says:

    @Willis:
    I find it quite funny that you go after me for supposedly making a libelous false statement. When will you ask your buddy Watts to apologise for his libelous false claims about NOAA deliberately removing high-latitude and high-altitude stations to introduce a spurious warming trend? When will YOU apologise for falsely claiming fraud using Darwin as an example?

    And accusing the IPCC of fraud IS accusing climate scientists of fraud. They are the authors of the IPCC report. Moreover, you do not need to use the word fraud. Want a few examples of John Daly claiming fraud without using the actual word? How about his bio:
    “All these scares have advanced the interests of what was a small academic discipline 30 years ago to become a mammoth global industry today. It is my view that this industry has, through the `politics of fear’ which it has promoted, acted against the interests of the public.”
    His last article? Guess what, same accusation of “climate industry”. The dog whistle, loud and clear.

  1898. JvdLaan Says:

    Well said Marco, but it is all a waste of time; the Watts crowd are not to be convinced, like the ID and creationist protagonists.

  1899. manacker Says:

    Tim Curtin

    Your idea of running the whole temp data series for those locations that have a continuous record makes sense.

    One could combine these into some sort of a composite record of all these locations.

    This could provide new insight into whether or not the cobbled-together “globally and annually averaged (hand-picked) land and sea surface temperature” record really makes sense, or whether the observed trends are primarily the result of the many station changes, “homogenization”, “ex post facto corrections”, etc.

    Should not be too hard to do.

    I would think that as few as 200 good data series would tell the story, especially if series can be found that cover several regions of the globe.

    What do you think?

    Max

  1900. manacker Says:

    Bart

    I thought you were going to cut “he said, she said” stuff like Marco is posting here. It’s getting repetitive and boring, and it adds no value.

    Max

  1901. manacker Says:

    JvdLaan

    You wrote:

    Well said Marco, but it is all a waste of time; the Watts-crowd are not to be convinced, like the ID- and creationist-protagonists.

    I’d agree with you about the ID- and creationist crowd. The key weakness of their argument is not the lack of a well thought out hypothesis, but the lack of supporting empirical data derived from actual physical observations.

    This is exactly the same weakness of the premise that AGW, caused principally by human CO2 emissions, has been the primary cause of past warming, and that this represents a serious potential threat.

    The theory is there, as are the computer model simulations based upon this theory, but the empirical data are lacking, as with ID- and creationist hypotheses.

    A statistically robust correlation between temp and CO2 would have been a good first step, but (as can be seen from this thread) it has been elusive. Without this statistically robust correlation of the empirically observed CO2 and temperature data, the case for causation is weakened.

    That’s the dilemma.

    Max

  1902. Pat Cassen Says:

    manacker Says:
    April 19, 2010 at 22:38

    “A statistically robust correlation between temp and CO2 would have been a good first step, but (as can be seen from this thread) it has been elusive.”

    Once again:
    From Terence C. Mills
    Climatic Change (2009) 94:351–361 DOI 10.1007/s10584-008-9525-7

    “This paper examines the robustness of the long-run, cointegrating, relationship between global temperatures and radiative forcing….[The result provides] further confirmation of the quantitative impact of radiative forcing and, in particular, CO2 forcing, on temperatures.”

    And there are others, as you know…

  1903. Bob_FJ Says:

    Marco, returning to the topic of Bart’s graphs above;
    Cohenite made an interesting comment concerning the IPCC’s use of linear trending for different parts of the temperature record. I expressed interest in hearing what you might have to say about it, and urged you and Sod, as follows.

    Would you please not immediately dismiss the article by virtue of its source [WUWT] as is your usual wont. Instead, please carefully examine its references and quotes and graphs which originate from AR4, WG1. Any comments?

    Sod ignored it, but the relevant part of your response was:

    “Will you [Bob_FJ] please not claim I just dismiss articles which have graphs from AR4, etc.?…”

    Will you please comment on this issue concerning linear trending, per the IPCC.

  1904. Tim Curtin Says:

    Phinnie and jvdL, re “defending the present pathetic measure for “earth’s temperature”, do read Pielke snr et al, in
    JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 112, D24S08, doi:10.1029/2006JD008229, 2007.

    Marco, please explain in what respect it is incorrect to state as I did that the GISS base period for its anomalies of 1950-80 includes locations that are absent from both the 1880-1910 and the 1980-2010 periods, and why, that being the case, it does not matter.

  1905. Bob_FJ Says:

    Tim Curtin, thanks for your comments concerning El Nino’s etc according to GISS.

    What I meant by:
    “The GISS low 1998 and high 2005 numbers etc show a more relentless warming trend than everyone else of course.”
    is illustrated in Bart’s Fig.2 top of this page. Because 1998 is lower than all the other sources, and 2005 onwards are higher, Bart’s 11-year smoothing shows a steeper and straighter curve. However, I’ve also argued (here) that Bart’s unweighted 11-year smoothing interval is excessive and also ends 5 years short in a period of crucial interest. When GISSTEMP is seen here, with their usual 5-year smoothing, it is only a 2-year problem, but the significant plateau in the unsmoothed data as seen elsewhere is markedly diminished. Oh and BTW, 1999 & 2000 are lower in GISSTEMP, which adds to that trend effect.

  1906. Bob_FJ Says:

    Willis, Marco, & JvdLaan:
    I’ve commented here on your exchanges WRT Phil Jones and John Daly, because Bart may rule that it is off-topic on this ‘ere thread.

  1907. Marco Says:

    @Tim Curtin:
    Strawman. I did not say that “it is incorrect to state as [you] did that the GISS base period for its anomalies of 1950-80 includes locations that are absent from both the 1880-1910 and the 1980-2010 periods, and […] that being the case, it does not matter”.

    However, you’d have to SHOW that it matters. Not handwaving that stations are in warmer or cooler regions, because *that* does not matter. As I understand it the anomaly is calculated station specific, with the period 1950-1980 taken as the base period, i.e. the average anomaly being zero over that period. That’s regardless of the *trend*.
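
    For concreteness, a minimal sketch of that station-specific anomaly calculation (the dict layout is hypothetical, not the actual GISTEMP code):

        # temps: hypothetical {year: annual mean temperature} record for one station
        def station_anomalies(temps, base=(1950, 1980)):
            base_vals = [t for y, t in temps.items() if base[0] <= y <= base[1]]
            base_mean = sum(base_vals) / len(base_vals)  # anomalies average to zero over the base period
            return {y: t - base_mean for y, t in temps.items()}

    A station with a warming trend still gets a zero-mean anomaly over 1950-1980; the base period fixes the reference level, not the trend.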

  1908. Marco Says:

    @Bob_FJ:
    Comparison with a sine wave…hmmm…shoddy comparison.

  1909. manacker Says:

    Pat Cassen

    Yes, I have seen that, but it looks like VS has already responded.

    Model studies are nice, but the case for an observed robust statistical relationship is still weak, as VS has pointed out.

    You can dance around it all you want to, but the observed multi-decadal warming/cooling cycles are hard to explain. There is something more powerful than CO2, which is driving these oscillations. Could it be the same natural variability, which has caused cooling after 2000 despite record increase in CO2, according to Met Office?

    Max

  1910. Tim Curtin Says:

    @ Marco April 20, 2010 at 10:35
    who said “As I understand it the anomaly is calculated station specific, with the period 1950-1980 taken as the base period, i.e. the average anomaly being zero over that period. That’s regardless of the *trend*.” Well, not for the first time, you understand wrong. If you go to the GIStemp site and use their station specific interactive tool, the anomalies for say Africa in 1880-1910 relative to 1950-80 simply do not exist, in the absence of data (gray on the maps, 9999 at the Lat-Long data). But, lo and behold, we still have a cold and ever colder “Global” anomaly for 1880-1910 vis a vis 1950-1980.

    Don’t bother to reply until you have checked what I say with GIStemp interactive. In particular, you can set 1880-1900 as the base for anomalies, and ask for the resulting anomalies for 1950-80 or 1980-2000. For the latter you will find massive “global” warming for most of the NH (actually just North America and western Europe) and none at all for rest of NH and all SH except Australia, in the absence of baseline data for most of SH in 1880-1900. Are you up to it? Probably not.

    Conveniently, that makes Global 1880-1910 seem to have colder anomalies than are warranted given the absence of data for Central & South America, Africa, and most of SE Asia.

  1911. manacker Says:

    Marco

    FYI

    (Approximate) “sine curve” from Dr. Syun Akasofu per Dr. David Evans:
    http://joannenova.com.au/2009/04/global-warming-a-classic-case-of-alarmism/

    Max

  1912. Marco Says:

    @Tim:
    Actually, you will see that most warming, as notably predicted for an enhanced greenhouse effect, has taken place in the Northern hemisphere close to the arctic. Not “just North America, Western Europe and Australia”.

    I still don’t see why you believe adding stations in Africa affects the trend such that warming is exaggerated. Yes, the global anomaly of 1880-1900 is based on less than perfect data, since we’re missing stations. At the same time the smoothing procedure GISTEMP uses tries to compensate for that. And once more: the less warming pre-CO2 increase, the less warming needs to be explained by natural forces. Not something you’d want if you so desperately want the AGW theory to be wrong.

    @Max:
    On such a short record, highly questionable. And I hope you already read the Menne et al paper. I’m interested in hearing your comments…

  1913. Pat Cassen Says:

    manacker –
    “…but it looks like VS has already responded.”

    Where?

  1914. phinniethewoo Says:

    OK, I have now done the calculation in Excel, since nobody else among the 100K consensus “scientists” is going to do it.
    I carved up the earth into 10 equal-sized chunks with varying average yearly temperatures in Kelvin.

    I then get an average IPCC Te = 288.5 kelvin
    and a Tr = 289.316
    That’s OK, we’re only after differences

    T (K)  T^4
    265    4931550625
    270    5314410000
    280    6146560000
    285    6597500625
    290    7072810000
    295    7573350625
    295    7573350625
    300    8100000000
    300    8100000000
    305    8653650625

    mean T (Te) = 288.5;  mean T^4 = 7006318313
    (mean T^4)^(1/4) (Tr) = 289.3160094

    Now, in VS’s slightly more amateurish style,
    I warmed up the north and south pole chunks by 2 degrees and made one tropics chunk 3 degrees cooler. That’s still a 0.1 degree “trend” up, right?

    voila:
    T (K)  T^4
    267    5082121521
    272    5473632256
    280    6146560000
    285    6597500625
    290    7072810000
    295    7573350625
    295    7573350625
    300    8100000000
    300    8100000000
    302    8318169616

    mean T (Te) = 288.6;  mean T^4 = 7003749527
    (mean T^4)^(1/4) (Tr) = 289.2894871

    And what do we see?
    The IPCC Te went indeed up 0.1 degrees… help! give us more tax money! Al jetting around like mad, eating his chateaubriands at 20000 ft all the way to the bank, laughing at us.

    The Tr went, erm, DOWN??
    Could Bart explain why the IPCC shoves an “averaged earth temperature” at us that can show a trend of 0.1 degrees UP when in fact there might be COOLING?

    Ideas, suggestions, new crisp graphs?
    -it’s all Bush’s fault
    -But you’re not a bishop where is the scientific report
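
    A minimal Python sketch of the same calculation, for anyone wanting to check it without Excel (Te is the plain arithmetic mean; Tr is the fourth root of the mean of T^4, i.e. the Stefan-Boltzmann radiative-equivalent mean):

        def te_tr(temps_k):
            te = sum(temps_k) / len(temps_k)                            # arithmetic mean
            tr = (sum(t ** 4 for t in temps_k) / len(temps_k)) ** 0.25  # radiative-equivalent mean
            return te, tr

        before = [265, 270, 280, 285, 290, 295, 295, 300, 300, 305]
        after = [267, 272, 280, 285, 290, 295, 295, 300, 300, 302]  # poles +2 K, one tropical chunk -3 K

        print(te_tr(before))  # (288.5, ~289.316)
        print(te_tr(after))   # (288.6, ~289.289): arithmetic mean up 0.1 K, radiative mean down

    Because T^4 weights warm regions more heavily, the two averages need not move in the same direction.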

  1915. manacker Says:

    Pat Cassen

    March 4, 13:54
    March 5, 13:40
    March 5, 16:53
    March 8, 22:44
    March 13, 10:57
    etc.

    Max

  1916. manacker Says:

    Marco

    You say “on such a short record”

    I agree.

    But it’s all we have (the short HadCRUT record from 1850 to today).

    That’s the record which Bart analyzes above and which Akasofu analyzed to arrive at the “sine curve” with the slightly tilted axis.

    Quite simple, actually.

    Max

  1917. manacker Says:

    Marco

    Didn’t see anything in Menne et al. about:

    AC exhausts in summer
    Heated buildings in winter
    Asphalt parking lots
    etc.

    Can you point me to these sources of spurious warming, and how they should actually result in a net cooling signal?

    Max

  1918. Pat Cassen Says:

    manacker –
    VS’s criticism of published work is essentially that of B&R: “…the GHG forcings are I(2) and temperatures are I(1) so they cannot be cointegrated…”

    But my reading of Fomby and Vogelsang, Stern and colleagues, Mills, and even Kaufmann et al., all of which conclude that there are statistically significant positive trends in the temperature data, and/or that cointegration (and other methods of) analyses reveal connections with GHG forcing, does not support the B&R criticism.

    Read those papers yourself and tell us what you think.

  1919. Bob_FJ Says:

    ALL,
    I’ve asked HAS four times to assist on the following statistical issue with the shape of the HADCRUT3 temperature curve, without a direct response. Can anyone else assist?

    “…but let me try and shorten/simplify my original question:***

    The two HADCRUT northern and southern hemisphere time-series graphed here are distinctly different both in complex derivation of 160 data points, and also in having different drivers*. They both have the same characteristic smoothed curve shape strongly suggesting a significant ~60 year cycle, with both being in-phase and rising. Also, the differences in magnitudes are of the order expected**.

    What, do you think, is the probability of this complex match arising from 2 x 160 “coin tosses”?

    *Even what may seem to be nominally the same drivers in both the NH & SH are probably modified by other parameters such as cloud cover and air/water circulation etc.
    **There are credible hypotheses for some of the divergences in magnitudes.
    ***my original question is most expansive here”

  1920. Tim Curtin Says:

    @ Marco 20 April, who said

    “I still don’t see why you believe adding stations in Africa affects the trend such that warming is exaggerated. Yes, the global anomaly of 1880-1900 is based on less than perfect data, since we’re missing stations. At the same time the smoothing procedure [sic] GISTEMP uses tries to compensate for that.” How? By inventing temperatures for Khartoum in 1880-1900? Or by applying the NYC trend 1880-1950 backwards from Khartoum’s actuals in 1950-80? Do tell.

    Marco added: ” And once more: the less warming pre-CO2 increase, the less warming needs to be explained by natural forces. Not something you’d want if you so desperately want the AGW theory to be wrong”.

    Hardly! But that is a non sequitur anyway; I do not so desperately want the AGW theory to be wrong, as I know that it is true but trivial in the real world. I have previously challenged Bart et al to show regressions of T against CO2 and natural forcing at any location proving AGW>NF.

  1921. manacker Says:

    Pat Cassen

    Thanks for the tip. Yes. Just like VS, I have read them (although I am no statistical expert, as VS appears to be). VS has apparently come to the same conclusion as B&R, namely

    there is no evidence relating global warming in the 20th century to the level of greenhouse gases in the long run

    Max

  1922. Marco Says:

    @Max:
    Menne et al simply do an analysis of the supposedly good and bad sites. Facts show the former to give a slightly faster warming trend than the latter. This is the way to test the hypothesis that poorly sited stations yield a faster warming trend. Hypothesis rejected, Watts’ claims refuted, simple as that. I know it’s hard for him, surfacestations was his one claim to fame.

  1923. manacker Says:

    Marco

    Watts’ claim that AC exhausts, heated buildings and asphalt parking lots, etc. cause a spurious warming signal has NOT been refuted at all by Menne, contrary to what you claim.

    But we are starting to get repetitive here, Marco.

    Max

  1924. Marco Says:

    @Tim:
    Are you willfully playing stupid? There’s plenty of literature out there that explains how it works. Read that first (it’s referenced on the GISTEMP homepage), ask your questions of Reto Ruedy, and then perhaps you can come back with a well-constructed argument. For now I simply can’t figure out what exactly you object to so strongly, since you are all over the place (hence me also trying different angles and likely being less than consistent).

    Your frequent attempts to make local comparisons simply do not work. You’d have to know the values of various parameters on a local scale, which is often not available. Or perhaps you can point me to the dataset that shows, for several individual locations and for the same period as the temperature set, the:
    * CO2 concentrations (OK, we could use Mauna Loa for that)
    * Aerosols (sorry, can’t use global records for that, there can be huge differences on a local scale)
    * Absolute humidity
    * TSI with correction for local albedo, including cloud albedo, and the place on earth.

    And when you’ve done that, and all the analysis, you can explain to me what has caused the large temperature increase during interglacials. If the greenhouse effect played such a small role [edit], there should be an ENORMOUS climate sensitivity to ‘natural factors’, which in turn strongly suggests there should be a much higher variability in global temperatures on a year-to-year basis.

  1925. Marco Says:

    Yes, Max, we are indeed getting repetitive (to the extent that you posted your message twice):

    Menne et al refute the claim by Watts that the stations that he designated “poor” introduce a warming bias in the temperature record for the US. Au contraire, these stations actually show a LOWER trend than those marked “good”.

    These are the facts, and unsurprisingly you do not like them.

  1926. Tim Curtin Says:

    1. I take the GIStemp site as it comes. Use it yourself to find that the “anomalies” are a misleading construction when the base period of 1950-1980 includes sites in Africa etc that are absent willy nilly in 1880-1910, thereby resulting in larger negative anomalies for 1880-1910 from 1950-80 than are warranted. Forget the saintly Sato and Ruedy, how would YOU concoct temperatures for Khartoum etc for ALL 30 years in 1880-1910 in the absence of any data? This is actually what the fuss with Phil Jones of CRU was largely about – his refusal to show how he “homogenised” non-existent data, in his case from 1850-1910, to produce “anomalies” for Khartoum etc in the CRU “global” mean anomalies from 1850. This is way above Lord Oxburgh’s IQ, I still have hopes of you doing better!
    2. You then asked “Or perhaps you can point me to the dataset that shows, for several individual locations for the same period as the temperature set the:
    * CO2 concentrations (OK, we could use Mauna Loa for that)
    * Aerosols (sorry, can’t use global records for that, there can be huge differences on a local scale)
    * Absolute humidity
    * TSI with correction for local albedo, including cloud albedo, and the place on earth’’
    Well actually, I can and have for the USA in terms of CO2, humidity (RH, but AH also if you insist), and albedo, not to mention actual solar surface radiation and various other variables (eg windspeed), as I have previously reported here for quite a few locations, eg Pt Barrow. Not being funded by Exxon, and of course with no chance of a dollar from the Australian Government or the ANU (where however I am giving an unfunded – not even a library card! – seminar next week on related issues), I do not have the time or resources to complete such analysis of my data sets for the USA (NOAA) and Australia as soon as I would like, but am working on it. But as ever, unlike NOAA and our BoM, HadleyCru remain obstructionist, so none of those variables are available for the UK. Set up a secret email address and I will gladly mail you my USA results so far (my own address is tcurtin@bigblue.net.au).

  1927. Bill Hunter Says:

    Right off the bat there is deception.

    “3 major compilations”

    That’s untrue. FOIA requests have established that GISS starts with HadCrut and is primarily a gridding exercise of numbers from elsewhere, yet it is just accepted as a confirmation without question.

    It’s not surprising VS bailed out when he couldn’t get a reasoned response from Bart on that IPCC graph, which produces a visualization that no analytical procedure could ever show was anything more than a deception.

    It’s easy to prove: pick 100 years (1909-2008), 75 years (1934-2008), 50 years (1959-2008), 25 years (1984-2008) and 10 years (1999-2008), and guess what: the accelerating trend vaporizes into a fluctuating and decelerating one.
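
    The period-picking exercise just described is easy to script; a sketch, assuming a yearly anomaly series is available as a {year: anomaly} dict (no actual dataset values are supplied here):

        import numpy as np

        def trend_per_decade(years, anoms):
            # OLS slope of anomaly vs year, converted to degrees per decade
            return 10.0 * np.polyfit(years, anoms, 1)[0]

        def nested_trends(anoms_by_year, end=2008, lengths=(100, 75, 50, 25, 10)):
            for n in lengths:
                yrs = [y for y in range(end - n + 1, end + 1) if y in anoms_by_year]
                vals = [anoms_by_year[y] for y in yrs]
                print(f"{yrs[0]}-{end}: {trend_per_decade(yrs, vals):+.3f} C/decade")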

    Figures lie, liars figure. Somebody spent more than a few hours figuring out that they could not use recent or equal 25-year segments out to 100 years, so they cherry-picked 75 years to leave off the graph.

    And Bart cannot admit that.

  1928. manacker Says:

    Marco

    Menne states that a good part of the observed “cooling trend” comes from changes in measuring devices.

    associated instrument changes have led to an artificial negative (“cool”) bias in maximum temperatures and only a slight positive (“warm”) bias in minimum temperatures.

    The fact that AC exhausts emit heat is fairly obvious. If these are near the measuring device, they will cause a spurious warming signal; this does not take “rocket science” to figure out.

    The analysis by Menne only shows that the anomalies averaged out over one set of stations all over the USA were statistically no different from those averaged out over another set.

    The reason why station exposure does not play an obvious role in temperature trends probably warrants further investigation. It is possible that, in general, once a changeover to bad exposure has occurred, the magnitude of background trend parallels that at well exposed sites albeit with an offset. Such a phenomenon has been observed at urban stations whereby once a site has become fully urbanized, its trend is similar to those at surrounding rural sites.

    A more meaningful analysis would simply be to take two nearby stations and compare these, as Watts has done for Marysville and Orland, CA. In this comparison the poorly sited station (Marysville) showed a long-term spurious warming trend of 0.2C per decade as compared to the well-sited station (Orland).

    So Menne really hasn’t refuted anything, Marco, except that you can prove anything you want to with a large enough “averaged” (and “homogenized”) sample.

    Max

  1929. manacker Says:

    Bill Hunter

    I am also in a quandary as to why Bart defends the IPCC chart in AR4 Ch.3 FAQ. Almost everyone who has looked at it sees that it is a bit of “smoke and mirrors”. It was apparently slipped in “after the fact”.

    The global temperature curve is essentially a sine curve (with lots of annual and even monthly ups and downs) riding on a tilted axis: an underlying amplitude of around 0.23C, a multi-decadal half-cycle of around 30 years, and a long-term warming trend of around 0.04C per decade.

    Short-term trends in such an overall curve can always be shown to be steeper than longer-term trends.
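
    That claim is easy to illustrate numerically with the “tilted sine” just described; a minimal sketch, assuming a 0.23C amplitude, a 60-year full cycle and a 0.04C/decade tilt:

        import numpy as np

        years = np.arange(1850, 2010)
        x = years - 1850
        curve = 0.23 * np.sin(2 * np.pi * x / 60.0) + 0.004 * x  # tilted sine

        def slope_per_decade(yrs, vals):
            return 10.0 * np.polyfit(yrs, vals, 1)[0]

        print(slope_per_decade(years, curve))  # full record: close to the 0.04 C/decade tilt
        rising = (years >= 1955) & (years <= 1984)  # one rising half-cycle
        print(slope_per_decade(years[rising], curve[rising]))  # roughly five times steeper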

    One could have drawn a 40-year trend at the start of IPCC’s “20th century” (1906-2005) and the linear warming trend would be twice that for the entire 100-year period.

    Would this prove a deceleration of the warming trend?

    No. No more than the IPCC curve proves an acceleration of the warming trend, as it is meant to do.

    The chart is a fraud and you are right when you write, “Figures lie, liars figure”. The statement on p.5 of SPM 2007 is a verbalization of this lie, again intended to convey the false message of accelerated warming.

    Max

  1930. Marco Says:

    Great, Max, you have taken two examples, carefully chosen I am sure (you can fill in yourself what I mean by “carefully chosen”), as evidence that the Menne et al analysis does not refute Watts’ continuous claims that poorly sited stations show a higher warming trend (which is, notably, what Watts claims). You deliberately neglect the bigger picture: the bad stations give a *lower* trend than the good stations. And that’s exactly opposite to Watts’ claims.

  1931. Marco Says:

    @Tim:
    I see, you do not want to read up on the literature, don’t want to talk to the people that work on GISTEMP; all you want to do is vent your anger here about not understanding how they did what they did. Note also that ‘the fuss’ about Phil Jones was his unwillingness to provide raw data, NOT how he analysed the data. That was all freely available in the literature.

    Regarding your analysis: I already indicated I have little faith in your ability to do the analysis (I earlier looked at temperature data you claimed showed no increase, and actually found a HUGE increase, much larger than the global mean). Then again, it could be fun to see you explain a decreasing SSR in Pt Barrow from 1960 onwards, and at the same time an increasing temperature. I’ll think about your offer, but I’m not in the mood for large audit exercises.

  1932. DLM Says:

    Bill Hunter says:”And Bart cannot admit that.”

    Bart seems like a nice enough guy. However, like the rest of the climate science establishment, Bart is riding a tiger and he can’t get off. The climate science dogma is crumbling, but they must adhere to the party-line until the bitter end.

  1933. InquiringMind Says:

    Haven’t been here in about a week, so I apologize for commenting on the weight gain analogy so long after the fact.

    Here’s a riddle:

    I maintain my food intake precisely at 2000 calories per day.
    My “bathroom outputs” remain constant.
    My activities (voluntary and involuntary) are fixed such that when combined with the caloric value of the “bathroom outputs” they add to 2000 calories per day.
    I’m in “perfect” energy balance.

    Every day I weigh myself. Every day my weight goes up.

    How can this be?

  1934. Frank Says:

    InquiringMind Says:

    “How can this be?”

    Curious to hear your answer. Clearly, maintaining “perfect” energy balance is not necessarily the same as maintaining mass balance…

  1935. phinniethewoo Says:

    inquiringmind

    it could depend on the time of day you weigh yourself relative to the ins and outs?

  1936. manacker Says:

    Marco

    We can argue this until we are both blue in the face, but you (and Menne’s paper) will not convince me that AC exhausts next to thermometers do not cause a spurious warming signal, and that is exactly what Watts claims.

    Menne compares one big average of a whole bunch of stations with another. He concedes that differences in measuring devices may have introduced an error, that the distortion to the record may have occurred prior to the data series, and that one group shows more warming of the maximum temperatures and more cooling of the minimums, but more work should be done to confirm all this.

    The example of two nearby stations tells me more than Menne’s study. And the data are 100% transparent, as well.

    I’ll stick with AC exhausts near thermometers causing a spurious warming signal, Marco (I can test that one all by myself).

    But we have truly beaten this dog to death. Believe what you want to. So will I.

    Max

  1937. manacker Says:

    Marco

    Further to my last post, some interesting reading to supplement your Menne study.

    Click to access surface_temp.pdf

    Max

  1938. manacker Says:

    Marco

    For an interesting sequel to Menne et al. see:

    Rumours of my death have been greatly exaggerated

    Max

  1939. Frank Says:

    Marco Says (@Tim):

    “I see, you do not want to read up on the literature, don’t want to talk to the people that work on GISTEMP, all you want to do is vent your anger here about not understanding how they did what they did.”

    – And exactly which of the people who work on GISTEMP deign to talk with you?

    “Note also that ‘the fuzz’ about Phil Jones was his unwillingness to provide raw data, NOT how he analysed the data. That was all freely available in the literature.”

    – Good science relies upon reproducibility. Given the large number of data series in existence (and the nearly infinite number of combinations thereof), it would literally be impossible to attempt replication of Jones’ results without explicit guidance as to which data were used in the original work. Ditto with respect to methodology.

    But let’s rise above the weeds, shall we? The big picture is that you are trying to advance a linked hypothesis (AGW, i.e., that human emissions of CO2 cause harmful warming that must be avoided by reducing economic activity to quasi-subsistence levels of human existence) that ex-ante demands a super abundance of incontrovertible physical evidence.

    To date you have absolutely none. The surface temperature record? Even if it were absolutely “pristine”, it has no demonstrable link to CO2 emissions and is otherwise unremarkable relative to countless other centennial-level series clearly observable in myriad paleo-proxy records. The Hockey Stick? At best poor science and at worst outright fraud, depending on how one views dendrochronology in general or the inclusion of cherry-picked bristlecone pines in particular. Climate models (GCMs)? In general, models are not data, and therefore are not evidence. In particular, good models make strong predictions that can be verified by previously unobserved data. Despite massive effort to date, the GCMs don’t do this, and moreover have falsely “predicted” phenomena (e.g., a mid-tropospheric hotspot and constant relative humidity) that are known (by measurement) not to have occurred.

    In summary, you continue to rearrange the deck chairs on the Titanic while your less ideological peers (i.e., non-socialists?) have begun making their way to the few remaining lifeboats. My gratuitous advice? Review the facts, but don’t dawdle. – F.

  1940. manacker Says:

    InquiringMind

    I am 3 weeks old. Part of my “activities (voluntary and involuntary)” includes gaining mass by around 15% per day.

    I am on a high-salt diet, and my body is retaining (non-calorific) water.

    I am returning from a space voyage, and gravity is gradually increasing.

    My bathroom scales are located in a high-speed, multi-story elevator, and I measure each day at a different point in the upward ride, starting with the top floor and working my way down to the bottom floor.

    My daily weight records are being “homogenized” and “variance corrected” by GISS.

    Max

  1941. Bob_FJ Says:

    Tim Curtin & Max especially,
    You may well have already seen this, but talking of GISS corrections, this is a very interesting article.
    EXTRACT:
    With some work I started back in late December and through January, and with GISS putting stamp of approval on “missing minus signs” I can now demonstrate that missing minus signs aren’t just an odd event, they happen with regularity, and the effect is quite pronounced when it does happen. This goes to the very heart of data gathering integrity and is rooted in simple human error. The fault lies not with GISS (though now they need a new quality control feature) but mostly with NOAA/NCDC who manages the GHCN and who also needs better quality control.

  1942. ge0050 Says:

    I’ve uploaded a very simple Excel simulator to the public domain. All are welcome to give it a try and modify it as you wish.

    http://rapidshare.com/files/378673183/tempSim.xls

    This uses a coin toss to simulate temperature change. H=1, T=-1. The resultant temperature is the net sum of H and T.

    The results are graphed; each time you press F9, the simulation is recalculated. Ignore the scale, it is arbitrary, and look at the graph produced by each recalculation.

    I believe you will find that this simulator produces plausible temperature records. This suggests to me that temperature could well be a random walk.

    Try this simulator.

  1943. ge0050 Says:

    I’ve uploaded another version of the simulator, with a faster calculation and double the number of points.

    http://rapidshare.com/files/378678110/tempSim2.xls
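
    For readers without Excel, a minimal Python equivalent of the coin-toss simulator described above (each run corresponds to one press of F9):

        import random

        def coin_toss_walk(n=2000):
            # cumulative net sum of +1/-1 coin tosses: a simple random walk
            level, series = 0, []
            for _ in range(n):
                level += 1 if random.random() < 0.5 else -1
                series.append(level)
            return series

        print(coin_toss_walk(60))  # rerun repeatedly to see plausible-looking 'temperature records'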

  1944. Eli Rabett Says:

    Well, one time Eli did have a question about GISSTemp, so he emailed GISS and got a polite response in a day or so.

    As to the surface stations, for every one with an AC unit, there is another shaded by trees. Whatever

  1945. phinniethewoo Says:

    Whatever indeed: because whatever average earth temperature Te they are calculating from all these data Ti has no physical meaning whatsoever.
    So much is certain.

  1946. manacker Says:

    Eli Rabett

    Shaded by trees = chance is good that temp reading may be representative

    Next to AC exhaust (or not shaded from sun) = chance is good that temp reading has spurious warming signal (only in daytime, in case of sun)

    If “for every one with an AC unit, there is another shaded by trees”, then chances are good that half of the stations have a spurious warming signal and the other half may have representative readings.

    If AC units were added after the 1970s, the record will show a spurious late 20th century warming signal.

    No rocket science here.

    Max

  1947. Bob_FJ Says:

    Eli,
    This is one of my favourites at Ciampino Airport in Rome

    Lovely, isn‘t it?

  1948. manacker Says:

    Bob_FJ

    Interesting article.

    Max

  1949. Marco Says:

    @Max:
    Ah yes, Watts handwaving. He promised to do an analysis when they were up to 75% of the stations. Any and all questions on his blog as to the status of the paper, once they surpassed that level, were deleted. Also, Menne et al did the quality control themselves, so Watts can’t complain about that. His complaints about the collaboration are also funny, considering that Menne et al claim Watts declined to collaborate. And as we all know, he would never have accepted the outcome anyway. Remember, “spurious warming” is his one claim to fame.

    Well, actually, he now has a second claim to fame, and you link to that document. In it, Watts and D’Aleo claim that NOAA deliberately removed certain stations, with the express purpose to increase the warming trend. We’ve been over this before, but to show how you just keep on referring to Watts as having any type of credibility, against all evidence to the contrary, here are once again my remarks on that libelous piece of false information:

    The first claim is provably false, and something that has been known for a looooooong time already (discussed in the 1990s, notably), since it was not a removal of stations, but an addition of station information. This data gathering effort was performed in the late 1980s and early 1990s.

    The second claim is not just provably false, it is libelous. Several people have done the analysis that Watts and D’Aleo did NOT do: the ‘removal’ of the high-latitude and high-altitude stations actually introduces a COOLING bias. This analysis has been done by many people, ranging from Tamino to Zeke Hausfather, and even Roy Spencer noted on Watts’ blog(!!!!!) that he found no evidence that the station ‘removal’ had any effect.

  1950. manacker Says:

    Aw c’mon, Marco, you’re beating a dead horse.

    The surface temperature record is a mess, and you know it.

    The more you bring out silly examples (i.e. AC exhausts introduce “cooling bias”, etc.) the more it shows that the record REALLY is a mess.

    But, as I said earlier, it is the only record we have (prior to 1979), so we have to live with it. We just shouldn’t place too much faith in its accuracy.

    Max

  1951. Marco Says:

    @Frank:
    We’re discussing the global temperature series here, not all the other stuff. But if Bart allows:
    Just about anything and everything Jones did could *easily* be reproduced. Easily, in the sense that the data is available from the same sources as Jones used. Ah, but there’s the crux, isn’t it? People who want to reproduce what Jones did would actually have to do the same work that Jones did. But even that is not directly necessary: for the US one could easily take the procedures that Jones (et al, I should actually say) used and apply that to all the freely available data for the US, and compare that to the gridded data that is available from HADCRUT.

    You also come up with a strawman:
    “The big picture is that you are trying to advance a linked hypothesis: AGW, i.e., that human emissions of CO2 cause harmful warming that must be avoided by reducing economic activity to quasi-subsistence levels of human existence”.
    Tell me who tells us that we need to reduce economic activity to quasi-subsistence levels of human existence to prevent dangerous levels of CO2? Not climate scientists. Not the IPCC. Not me, either! In fact, you’ll see plenty of scientists pointing to possibilities to prevent dangerous levels of warming without a major impact on the economy. Of course, you then get alarmists like Richard Tol screaming that it will cost too much, that the economy will be ruined, and whatnot. And then we get others claiming climate scientists are oh so alarmist…

    You also make some funny claims about the GCMs. For starters, regardless of the source of warming (sun or enhanced greenhouse effect) there should be a tropospheric hotspot. Something that several scientists claim IS observed. In addition, the enhanced greenhouse effect predicts a stratospheric cooling, an observation that even very few ‘skeptics’ dare to question. Your comment on the models and relative humidity is too sweeping to even warrant a response. I’d say, RTFIPCCR.

    And please leave the ideology nonsense out. You’re not making your case any stronger. I am most definitely not a socialist, nor are most others who just happen to trust real climate scientists in their assessment.

  1952. Marco Says:

    Max, yep, beating a dead horse. You have already decided that the surface record is tainted, that it has a warming bias due to whatever you are told may introduce a warming bias (such as AC exhausts or the people involved in making the record), and any and all evidence to the contrary is to be ignored.

    Beliefs over data. Pictures over data. Assumptions over data. Libelous claims over data. You’re a true Watts acolyte.

  1953. manacker Says:

    Marco

    AC exhausts generate heat. I can check this out all by myself (that’s REAL DATA), without performing an averaging of 40% of the US weather stations with all sorts of other variables and inaccuracies involved, in order to come up with the conclusion that, on average, AC exhausts introduce a “cooling bias”.

    Get serious, Marco.

    Max

  1954. NokTang Says:

    “Of course, you then get alarmists like Richard Tol screaming that it will cost too much, that the economy will be ruined, and whatnot. ”

    Haven’t seen you around arguing on Roger Pielke Jr’s blog when Richard Tol made 4 posts about his views on the IPCC. Neither on climategate.nl, where Tol makes regular appearances about the IPCC, nor on Klimazwiebel. Don’t be a chicken and discuss your doubts about his views with Tol. But I expect it’s like your discussion with VS: you scream a lot, but don’t know much about the matter; at least you haven’t shown it, until now.

    Calling Tol an alarmist… what chutzpah.

  1955. Eli Rabett Says:

    For another thing, heat from AC units tends to be confined to the area directly in front of the fan, with not much to the side. You can measure this yourself if you care to.

  1956. Eli Rabett Says:

    Oh yeah VS got his head handed to him on the amstat thread by Francisco who pointed out that the fo. VS said that he would be back to them. Time will tell.

  1957. manacker Says:

    Marco,

    OK. To get back on topic here, let’s take the HadCRUT record “as is” and do a quickie “reality check” on IPCC predictions (pardon me, “projections”).

    The overall linear warming trend from 1850 to 2009 was 0.042C per decade (0.0042C per year), or around 0.7C over the 160 years. Over this period atmospheric CO2 increased from an estimated 285 ppmv to a measured 390 ppmv, or by a factor of 1.37.

    As pointed out earlier, there were distinct, statistically indistinguishable, multi-decadal warming and cooling cycles; these can be approximated with a “best fit” sine curve with an amplitude of around ±0.23C and a half-cycle time of around 30 years.

    If this trend were to continue to year 2100, we would see another 0.4C warming over the next 91 years.

    But IPCC tells us that we can expect between 1.3 and 6.5C warming above 2008 values by 2100. Wow! That sure is a lot!

    Let’s analyze this more closely. IPCC models use several assumed “storylines” and “scenarios” to arrive at this very high range.

    The top two scenarios (A1FI and A2) have atmospheric CO2 increasing to 1590 and 1280 ppmv, respectively. As there are not enough optimistically estimated fossil fuel reserves on our planet to even reach 1000 ppmv, we can discard these cases as unrealistic from the start.

    The next three scenarios (A1T, B2 and A1B) show CO2 increasing at a CAGR of between 0.65% and 0.86% (1.5 to 2x the actual rate seen over the past 5 or 50 years), so they can also be discarded as unrealistic.

    The lowest IPCC scenario (B1) shows CO2 increasing at a CAGR of 0.48%. This is somewhat higher than what we have seen, but is not unreasonably so. In this scenario, atmospheric CO2 increases to around twice the “pre-industrial” value by 2100.

    This case has temperature increasing by 1.1 to 2.9C above 1980-1999 average values, or 0.8 to 2.6C above 2008 value (with an average of 1.7C increase).

    Mind you, this projection is based on a 2xCO2 climate sensitivity of 3.2C, which, in turn, is based on model simulations, which have been fed various assumptions. This figure assumes strongly positive feedbacks from water (as vapor, liquid droplets and ice crystals). Empirical data from recent studies based on actual physical observations have raised serious doubt concerning these feedback assumptions and the resulting 2xCO2 climate sensitivity.

    So if the assumed climate sensitivity is high by a factor of two or even three (as it now appears likely), the projected warming by 2100 will only be an imperceptible 0.3 to 1.3C, and really nothing to worry about at all.
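
    The scenario arithmetic above can be checked in a few lines, using the standard logarithmic relation (equilibrium warming = climate sensitivity x log2 of the CO2 ratio); the 0.48% CAGR and the 3.2C and halved sensitivities are the figures quoted above, and transient-versus-equilibrium effects are ignored:

        import math

        def co2_at_2100(c_2009=390.0, cagr=0.0048):
            return c_2009 * (1.0 + cagr) ** (2100 - 2009)  # compound growth to 2100

        def equilibrium_warming(c_start, c_end, sensitivity=3.2):
            return sensitivity * math.log2(c_end / c_start)

        c_end = co2_at_2100()                                      # ~603 ppmv
        print(equilibrium_warming(390.0, c_end))                   # ~2.0 C at 3.2 C per doubling
        print(equilibrium_warming(390.0, c_end, sensitivity=1.6))  # ~1.0 C if sensitivity is halved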

    No wonder the EU politicians are having no problem “committing” to no more warming than 2C!

    (Besides, none of them will be around in 2100.)

    Max

  1958. Tim Curtin Says:

    Marco SAID (April 21, 2010 at 16:51) “I earlier looked at temperature data you claimed showed no increase, and actually found a HUGE increase, much larger than the global mean).”

    Dear Marco, when and where did you find that? Do tell.

    Then you added “again, it could be fun to see you explain a decreasing SSR in Pt Barrow from 1960 onwards, and at the same time an increasing temperature”.

    Well, I just did: regressing 1st differences in Av temps at Pt Barrow (to avoid the unit root problem) against the level of [CO2] and changes in total direct and diffused solar radiation (Wh/sq.m.) and in “H2O” (precipitable water, in cm), I find that the coefficient on [CO2] is negative but not sig., and that on SR is positive, but also not sig., whilst that on H2O is large and hugely sig. Taking account of the albedo, the net SR becomes sig. at 90%, with adj. R2 a decent .41, and [CO2], alas, remains neg. but luckily insig., with “H2O” again the largest and most sig. variable.

    You owe me at least a slivovitz.
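
    For what it’s worth, a sketch of that kind of first-difference regression (the array names are hypothetical, and the significance tests reported above would need statsmodels or similar):

        import numpy as np

        def first_diff_regression(temp, co2, ssr, h2o):
            # regress yearly changes in temperature on the CO2 level and on
            # changes in solar radiation and precipitable water
            dT = np.diff(temp)
            X = np.column_stack([np.ones(dT.size), co2[1:], np.diff(ssr), np.diff(h2o)])
            beta, *_ = np.linalg.lstsq(X, dT, rcond=None)
            return beta  # [intercept, CO2 level, dSSR, dH2O]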

  1959. manacker Says:

    Eli Rabett

    You aren’t telling me that AC units outside “the area directly in front of the fan” introduce a “cooling bias”, are you?

    Just checking.

    Max

  1960. KJ Says:

    There seems to be a reluctance on the part of climate modellers to accept that the global temp record could be described by a statistical process because the underlying physics is deterministic. However, it could be that the extensive averaging over the raw data just washes out the physics. One way to test for this would be to change the order of averaging:

    As I understand it, the record is obtained by averaging over the year for each site, then averaging this result over the whole globe. Instead, though, you could first produce a global monthly average and then average over the year. Other possibilities can be conceived. If the two results were the same, as in principle they should be, this would point to trends in the data being deterministic. On the other hand, if they did not match, this would suggest that any trend was an artefact of the data analysis rather than due to the underlying physics.

    Has such a test ever been considered?
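
    The test is simple to set up; a sketch, assuming a complete sites-by-months array (with no missing values the two orders of averaging agree exactly, so any mismatch on real data would have to come from gaps and weighting, not from the physics):

        import numpy as np

        def site_then_globe(temps):   # annual mean per site, then average over sites
            return temps.mean(axis=1).mean()

        def month_then_year(temps):   # global mean per month, then annual mean
            return temps.mean(axis=0).mean()

        temps = np.random.default_rng(0).normal(288.0, 10.0, size=(100, 12))  # sites x months
        print(np.isclose(site_then_globe(temps), month_then_year(temps)))     # True when complete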

  1961. manacker Says:

    Eli Rabett

    I’ll agree that the “other side” of an AC unit would introduce a “cooling bias”.
    (That’s why we install them.)

    Max

  1962. Marco Says:

    @Tim:
    1st differences? Do you mean you took the first derivative?

    I surely hope not, it would be McLeanian analysis. And that leads to the hilarious statement that if something explains a short term variation it logically also explains the long term variation (errrrrr……no).

  1963. DLM Says:

    Manacker says:”Eli Rabett

    I’ll agree that the “other side” of an AC unit would introduce a “cooling bias”.
    (That’s why we install them.)”

    But Eli has his installed backwards.

  1964. Frank Says:

    Marco Says (@Frank):

    “Just about anything and everything Jones did could *easily* be reproduced. Easily, in the sense that the data is available from the same sources as Jones used.”

    – Marco, please provide the source that indicates explicity which (of the many temperature) series were used and explicity what methodology was used.

    “Tell me who tells us that we need to reduce economic activity to quasi subsidence levels of human existence to prevent dangerous levels of CO2? Not climate scientists. Not the IPCC. Not me, either! In fact, you’ll see plenty of scientists pointing to possibilities to prevent dangerous levels of warming without a major impact on the economy.”

    – Marco, this is a blatant dissembling of reality. Waxman / Markey (passed in the House, but fortunately not yet in the Senate) calls for CO2 reductions (vs 2005) from 3% to 80% between 2013 and 2050. This is all part and parcel of AGW alarmism. I’m not aware of any rational person that thinks such reductions, leading to a corresponding reduction in energy production, can possibly be offset by any build-out of solar toys or piddle power we can envision. Hence, dividing the resultant lower energy output by future higher population projections actually does portend a return to subsidence levels of economic activity.

    “For starters, regardless of the source of warming (sun or enhanced greenhouse effect) there should be a tropospheric hotspot. Something that several scientists claim IS observed.”

    – Marco, again you are dissembling. By several scientists, you are of course referring to the Real Climate crowd’s feeble attempt to overcome the GCMs incorrect prediciton of increased greenhouse warming (note, upward temperature trend) of the mid-troposphere. This unique signature (endorsed by the IPCC I might add), is not confirmed by real data from satellites or radiosondes. It is bordering on pathos to watch the alarmists attempt to resurrect their predictions by continually moving the goal posts (from greenhouse only to general warming) or waving their arms (disregarding direct satellite and radiosonde date in favor of ‘wind shear’).

    I find it ironic that you want to get back “on topic” whenever anyone asks you for definitive evidence of AGW, while you seem to have no problem wandering all over the map when it suits you. Very well, then. We have (reference the 2nd post in this long thread) a historical surface temperature record with a stochastic trend, about which we can not make infeerences re. CO2. Again, what evidence do you have for AGW?

    Regards – F.

  1965. Bob_FJ Says:

    Marco,
    returning to the topic of Bart’s graphs above;
    Cohenite made an interesting comment concerning the IPCC’s use of linear trending for different parts of the temperature record. I expressed interest in hearing what you might have to say about it, and urged you and Sod, as follows.

    Would you please not immediately dismiss the article by virtue of its source [WUWT] as is your usual wont. Instead, please carefully examine its references and quotes and graphs which originate from AR4, WG1. Any comments?

    Sod ignored it, but the relevant part of your response was:

    “Will you [Bob_FJ] please not claim I just dismiss articles which have graphs from AR4, etc.?…”

    Will you now please comment on this issue concerning linear trending, per the IPCC.

  1966. Eli Rabett Says:

    No folk, Eli actually measured temperatures about 3 m directly in front of an AC unit and a few meters to the side. There was a measurable difference in front and none to the side.

  1967. DLM Says:

    Eli,

    Tell Eli to try measuring the downwind side.

    You are really smart, Eli. Would you care to explain where the MISSING HEAT is?

    Now this revealing bit of settled climate science is priceless: “Existing observing systems can measure all the required quantities, but it nevertheless remains a challenge to obtain closure of the energy budget. This inability to properly track energy—due to either inadequate measurement accuracy or inadequate data processing—has implications for understanding and predicting future climate.”

    It also has obvious implications for understanding the competency of the climate scientists.

    They claim to be able to measure all the inputs and outputs of the ‘energy balance’ down to about a tenth of a degree, but somehow half of the heat that is supposed to be here due to CO2 radiative forcing is MISSING!

    And that foolishness is the best climate scientists can do, after decades of research that has cost taxpayers 70 billion dollars.

    What if your bank told you that they had all of your deposits and withdrawals recorded correctly, but half of your money was MISSING?

  1968. DLM Says:

    By the way Eli,

    After your friend Eli gets done screwing around with the AC measurements, ask him to get out to the airport and do the same with jet exhausts. Anthony says that is also a big siting problem, and he has been looking for a volunteer for a very long time.

  1969. Tim Curtin Says:

    Marco: I took first differences, just as I said. When are you going to answer my question to you?

  1970. Anonymous Says:

    Eli Rabett

    But no “cooling bias” anywhere on the exhaust end of the AC unit, either, as Marco is trying to conjure up.

    Max

  1971. Marco Says:

    @Tim:
    somebody who does McLeanian analysis and then claims there’s no evidence for any effect of CO2 is doing things wrong. You have found that precipitable water may explain the variation around the trend. But what explains the trend…?

    Regarding your question

    Global average temperature increase GISS HadCRU and NCDC compared

    Just FYI, July temperatures in Barrow are on average already WELL above 4 degrees.

  1972. Marco Says:

    @Frank:
    http://www.cru.uea.ac.uk/cru/data/temperature/#sciref
    (this took me a whole of 5 seconds to find).
    Here’s a list of the stations used:
    http://www.cru.uea.ac.uk/cru/data/landstations/
    (that took me a whole of 10 additional seconds to find)

    Regarding reduction of CO2 I do not share you alarmism about the economy.

    Regarding the tropospheric hotspot: It’s been found by various non-realclimate-related scientists:

    Papers on tropical troposphere hotspot

    Regarding the supposed evidence of a stochastic trend: if you take a period with known natural forcings and compare that to another period in which there are no natural forcings that explain the trend, you’ll have to find an explanation. Rather surprisingly, and unwanted by some people, CO2 is a quite good explanation for the observed increase in temperatures in that latter period. The evidence is simply that there is no evidence for any other forcings that explain the temperature increase over the last 40 years.

  1973. Marco Says:

    @Max,

    Once again you are making things up. I did not claim that AC exhausts cause a cooling bias. I told you, and referred to the relevant publication, that the collection of supposedly poorly sited stations does NOT introduce a spurious warming bias, but at worst a spurious COOLING bias in the record. That you want to jump up and down over single stations where there might be a warming bias just shows you don’t want to look at the bigger picture, which is that on average poorly sited stations do NOT introduce a spurious warming bias.

  1974. Anonymous Says:

    Marco @ April 23, 2010 at 12:52
    “@Tim: somebody who does McLeanian analysis [but I did not: by your own definition McLeanian analysis involves derivatives, which I did not use] and then claims there’s no evidence for any effect of CO2 is doing things wrong. You have found that precipitable water may explain the variation around the trend. But what explains the trend…?”

    The trend is best explained by “precipitable water” (using the NOAA/ERL data)

    Then Marco: “Regarding your question

    Global average temperature increase GISS HadCRU and NCDC compared


    Just FYI, July temperatures in Barrow are on average already WELL above 4 degrees.”
    Really? I have just been to GISS, and their anomaly for Barrow July 1980-2009 against 1950-1980 is just 0.3oC, and for July 2009 alone they have 9999 (no data, as it was too hot to venture out of doors to get the readings, at a max mean T of c. 7oC).

    Marco, please give just 5 reasons why you are not deluded!

  1975. manacker Says:

    Marco

    We are continuing to beat a dead horse. The Menne study is inconclusive, in that it also includes spurious signals from changes in measuring devices, spurious shifts that may have occurred before the record was taken and slight warming versus slight cooling of minimums and maximums.

    It is a good example how you can take a large sample and show almost anything.

    On the other hand, there is absolutely no question that AC exhausts near thermometers result in a spurious warming signal, as do jet engine exhausts, heated buildings in winter, asphalt parking lots, etc., all of which have been pointed out by Watts. This is not “rocket science”, Marco.

    By definition, these will introduce a warming bias, whether Menne was able to detect it in his sample or not.

    So let’s stop this discussion now. It has gotten repetitive, with you simply saying that I “don’t want to look at the bigger picture”, whereas you obviously do (when it suits your purpose) but “don’t want to look at the basic fact that AC exhausts, etc. introduce a spurious warming signal, by definition, as stated by Watts”.

    Max

  1976. manacker Says:

    Marco

    In your latest to Frank you fell into the old sweet IPCC trap with:

    The evidence is simply that there is no evidence for any other forcings that explain the temperature increase over the last 40 years.

    The IPCC logic goes as follows:

    1. Our computer models cannot explain what caused the multi-decadal warming trends prior to the mid-20th century.
    2. We know that the (statistically indistinguishable) late 20th century warming was caused primarily by human CO2
    3. How do we know this?
    4. Because our computer models cannot explain it any other way

    This is now compounded by the Met Office logic of attributing the cooling after 2000 to natural variability (a.k.a. natural forcing), even though natural forcing is estimated by IPCC to have been essentially insignificant as a factor in the warming from 1750 to the end of the 20th century.

    “The evidence is simply that there is no evidence for any other forcings that explain the temperature increase over the last 40 years”

    really translates into:

    “we really do not know all there is to know about what makes our planet’s climate behave the way it does, and so cannot say with any certainty that the late 20th century warming can be attributed principally to human CO2 rather than some natural forcings”.

    That would have been a more honest statement, Marco.

    Max

  1977. Marco Says:

    Max, Watts claimed that badly sited stations introduced a spurious warming trend. He did NOT state solely that certain badly sited stations introduced a spurious warming trend. Remember that Menne et al included ALL stations that surfacestations at the time had marked as bad stations. Also, AC exhausts do not by definition introduce a warming trend. Exact location (and actual functioning) will be of importance, too. As Eli already noted, a few meters to the side of the exhaust, and nothing happens.

    Second, the computer models CAN explain the warming up to the mid-20th century using primarily known changes in natural forcings, but the warming after that can no longer be explained by natural forcings alone. Note that the models are based on physics. You can also read this:
    http://tamino.wordpress.com/2009/08/17/not-computer-models/

  1978. Marco Says:

    Tim,
    First difference is not exactly the same as taking the first derivative, but the effect is quite similar: you remove the trend. It’s McLeanian analysis all over.

    Funnily enough, even that little analysis you yourself did shows your prediction to be highly questionable! After all, you predicted “3.86oC in 2006 to 4.127oC” in 2100. With a supposed 0.3 degree increase from the 1950-1980 period to the 1980-2009 period, how are you explaining a mere 0.267 further rise for a 90-year period? What is the rationale for the forecasting method?

    What I myself did was simply looking at the raw data for Barrow, and of course the rural station with the longest record (1901-2010). It shows average temperatures over the last decade well above 4 degrees in the raw data. The homogenised data shows an even larger temperature, but its warming trend is much lower. Pick whatever you want, it does not correspond to your claims.

    For the raw data of Barrow
    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425700260000&data_set=0&num_neighbors=1
    Since 2002 well above 4 degrees
    Homogenised data:
    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425700260000&data_set=2&num_neighbors=1
    Since 1985 well above 4 degrees.

    And when I look at the anomaly map for July for the period 1980-2009 with 1951-1980 as the base period, I get:
    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2010&month_last=3&sat=4&sst=0&type=anoms&mean_gen=07&year1=1980&year2=2009&base1=1951&base2=1980&radius=250&pol=reg
    With Barrow color coded with a 0.5-1 degree above the base period.

    Perhaps if you start indicating the direct source of your data we can see where the discrepancy between our analysis lies.

  1979. JamesG Says:

    Marco
    Yes the tropo hotspot should be there for both solar and CO2 induced heating IF the theory of strong positive water vapour feedback is correct. That’s what is significant about it being missing. Nobody was bucking any of the physics other than the unproven strong positive water vapour feedback. Somehow that was lost in translation. As Dessler says, if it’s anywhere it’ll be in the tropi tropo. If it isn’t apparently there then ordinarily in most fields of science that would be a good refutation of that part of the theory. In climate science it means that the instruments are obviously wrong and need correction – in line with the models of course.

    Similarly, yes the strato cooling is supposed to now be the one true measure of AGW but apparently it’s been flat-lining since 1995. Now you can speculate as to why but then we can all do that. It’s called guesswork. But what does Occam’s razor suggest? That’s right – that the signature is missing.

    But now we have the missing heat, a 3rd piece of evidence that supports the refutation of significant AGW. Just how many does it take?

  1980. Anonymous Says:

    Marco Says(@Frank):

    “http://www.cru.uea.ac.uk/cru/data/temperature/#sciref
    (this took me a whole of 5 seconds to find).
    Here’s a list of the stations used:
    http://www.cru.uea.ac.uk/cru/data/landstations/
    (that took me a whole of 10 additional seconds to find)”

    – Marco, you’ve provided links to the ‘finished’ product and a station listing, neither of which is useful for purposes of validation. But why should you be any more useful than Jones et al?

    “Regarding the tropospheric hotspot: It’s been found by various non-realclimate-related scientists:
    http://agwobserver.wordpress.com/2009/09/06/papers-on-tropical-troposphere-hotspot

    – While I specifically mentioned that alarmist “handwaving” (wind shear) doesn’t resuscitate the models, I’m not surprised that you have taken that low road here (from one of your sources, which also include RealClimate offerings, by the way):

    “Surprisingly, direct temperature observations from radiosonde and satellite data have often not shown this expected trend. However, non-climatic biases have been found in such measurements. Here we apply the thermal-wind equation to wind measurements from radiosonde data, which seem to be more stable than the temperature data…”

    – And finally you reply with this gem:

    “The evidence is simply that there is no evidence for any other forcings that explain the temperature increase over the last 40 years.”

    Bingo! Outside of “post-modern science”, the above statement always constitutes a logical fallacy, i.e., an argument from ignorance. Again, what evidence do you have?

    Regards – F.

  1981. Frank Says:

    Oops! Switched browsers. The previous “Anonymous” post is mine…

    Regards – F.

  1982. JvdLaan Says:

    Marco, it is completely useless to discuss with the Watt-crowd Trolls. I admire your stamina, but they will never get it, the bigger picture.

    Maybe a video will help:
    http://jules-klimaat.blogspot.com/2009/07/video-anthony-watts-wanted-to-be.html

    And for Phinnie and DLM: http://www.youtube.com/watch?v=rrNToxbN4n8

  1983. HAS Says:

    I had stopped following this thread because it seemed to have wandered off to places unknown, but did come back for a look and was surprised how much of its initial thrust had been lost.

    Marco said on March 1, 2010 at 21:44

    “Second, the computer models CAN explain the warming up to mid-20th century using primarily known changes in natural forcings. But then the warming after that can’t be explained anymore. Note that the models are based on physics. You can also read [Tamino on not-computer-models]”

    A quick look at the reference shows modelling done in direct violation of the statistical issues raised here.

    Marco said on March 1, 2010 at 21:44

    “First difference is not exactly the same as taking the first derivative, but the effect is quite similar: you remove the trend. It’s McLeanian analysis all over.”

    Marco, I think you misunderstand the maths here: what happens is that you isolate the slope of the time series, and this can aid in identifying particular types of trends. In fact one of the issues being discussed earlier was whether the ARIMA I(1) models had a constant or not, which would indicate a trend/drift or no trend/drift.

    I should add that I find some of your discussion on this point confusing – perhaps I should just note that the first derivative of a continuous series is the limiting case of discrete differencing where the interval tends to the infinitesimal, but that we aren’t dealing with continuous series here.
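
    A minimal sketch of the drift point, with simulated stand-in data (all numbers hypothetical): an I(1) series with a constant (a random walk with drift) trends upward, and first differencing does not destroy that trend but isolates it as the mean of the differences.

        import numpy as np

        rng = np.random.default_rng(0)
        n, drift = 140, 0.017                # hypothetical drift, ~0.17 deg/decade in annual steps
        eps = rng.normal(0, 0.1, n)          # white-noise shocks

        walk = np.cumsum(eps)                # I(1) without a constant: no systematic trend
        walk_drift = np.cumsum(drift + eps)  # I(1) with a constant: drifts upward

        d = np.diff(walk_drift)              # first difference
        print(d.mean())                      # close to 0.017: the drift reappears as the mean,
                                             # up to sampling noise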

    BTW what’s a McLeanian analysis (and why are its consequences hilarious)?

  1984. Frank Says:

    JvdLaan Says:

    “Marco, it is completely useless to discuss with the Watt-crowd Trolls. I admire your stamina, but they will never get it, the bigger picture.”

    – Bingo! Another example of a logical fallacy – this time an ad hominem. I’ll extend my request to you – What evidence of “it, the bigger picture” do you have? Oh, and to save you some time, please refrain from responding with appeals to authority (also a logical fallacy) such as the IPCC or RealClimate said so.

    Regards – F.

  1985. Tim Curtin Says:

    Marco: 1. Thanks for links.

    2. Your GISS matches my NOAA, so that’s a comfort! But I cannot see where you get a rise of 4oC for July from 2000 to 2009. The trend is strong, 3.3933+0.2677x, but only because of the final 3 years (ave 4.33 for 2000-2006 and 6.1 for 2007-2009).

    3. The colour coding in your map is for Alaska as a whole, not Barrow, and at 0.5-1.0 does not match your 4.0oC claim. If you go to the raw data you find the anomaly is 0.32 for Barrow’s lat. and long. Still no 4. The GISS map is dubious anyway, as the linked txt data shows no data for Barrow in July 2009, even though GISS clearly does have the data at your link. Quality control?

    4. The database I have been using is in 2 sets, 1960-90 and 1991-2005/6. Why it ends in 2006 is unclear, perhaps because it is very non-PC?! Here is the link for 1991 onwards – be quick before it gets expunged.
    http://rredc.nrel.gov/solar/old_data/nsrdb/1991-2005/hourly/list_by_state.html

  1986. HAS Says:

    Had been lazy. Now have done a search of the thread on McLean and have read McLean et al, Foster et al and Stockwell et al, and understand perhaps what a “McLeanian analysis” might be but still don’t see the joke.

    Overall reflection is that this whole subject area would benefit from more systematic use of statistical analysis and time series analysis tools (particularly co-integration), for the reasons mentioned in Stockwell et al and as discussed on this thread.

    I also wonder, Marco, if perhaps you were confused by the word “derivative” in its common English use (as used by McLean to describe what looks something like differencing) and its technical mathematical use.

  1987. manacker Says:

    Marco

    Concerning Watts vs. Menne. I believe we have rehashed this ad nauseam. To your second point, you wrote:

    Second, the computer models CAN explain the warming up to mid-20th century using primarily known changes in natural forcings. But then the warming after that can’t be explained anymore.

    Check IPCC AR4 WG1 Ch. 9 (p. 691) for a different slant on this:

    Detection and attribution as well as modeling studies indicate more uncertainty regarding the causes of early 20th-century warming.

    Max

  1988. Robert S Says:

    JamesG
    “Yes the tropo hotspot should be there for both solar and CO2 induced heating IF the theory of strong positive water vapour feedback is correct. That’s what is significant about it being missing.”

    But as Dessler also says, even if the tropical troposphere isn’t warming faster than the surface, this just means the negative lapse rate feedback isn’t as strong, and the sum of WV+LR feedbacks remains the same.

  1989. JvdLaan Says:

    So Frank if someone points you to a scientific source it is an appeal to authority?
    But ok, for the bigger picture regarding surfacestations, did you know this: http://www.ncdc.noaa.gov/oa/about/response-v2.pdf ? It is the answer from NOAA, and that’s what I mean by the bigger picture, since a lot of the discussion here was about AC next to thermometers etc.
    But in the end, the trend was the same as stated in the answer by NOAA.
    And please, stop this Ad Hominem whining, as my BS Bingo card is almost full (again!).

  1990. manacker Says:

    Robert S

    Minschwaner + Dessler give an estimate of WV feedback based on observations, which is well below the IPCC estimate (AR4 Ch.8) based on a roughly constant RH assumption.

    The long-range NOAA observations show that both RH and SH (water vapor content) have decreased as temperature has increased.

    Another graph of the NOAA data shows the decrease in RH at various elevations.

    So what is the real WV feedback?

    Max

    PS I think we can safely conclude that, in real life, WV does not march in goose-step with Clausius-Clapeyron.

  1991. Robert S Says:

    Manacker
    Soden 2005 found a strong positive water vapor feedback and roughly constant RH. And Dessler 09 finds evidence for a strong WV feedback as well.

    Could you give a little more on the source for the q data? Who put that graph together? Why would the troposphere become drier with increasing temperatures? Decreasing RH I can see, but q?

  1992. phinniethewoo Says:

    about The more intriguing answer VS had to John 1 April 10:10 :

    I appreciate the problems with getting seasonal parameters right, but it is a terrible reduction in data, from 4000 (sites) × 365 × 2 (twice-daily recordings) to 1 sample point only.

    A yearly average is taken, I suppose, over 1 Jan–31 Dec.
    So one could repeat the exercise and produce a second sample of 140 years by taking yearly averages over 1 July–30 June. This way one gets a new sample, with different variation in it, drawn from the same larger data reservoir?

    If this new 140-year sample leads to the same conclusions, we should have better statistical significance?

    There must be other ways to get more data out of the whole temperature record than just yearly averages.

    As VS mentioned, stats is about -variation- so we should try to make full use of it instead of reducing the pool of data.
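
    A minimal sketch of the suggestion, assuming a monthly anomaly series of 140 × 12 values (simulated stand-in data here): compute annual means over a Jan–Dec window and again over a Jul–Jun window, giving two overlapping samples drawn from the same monthly pool.

        import numpy as np

        rng = np.random.default_rng(1)
        monthly = rng.normal(0, 0.2, 140 * 12)   # stand-in for 140 years of monthly anomalies

        jan_dec = monthly.reshape(140, 12).mean(axis=1)        # calendar-year means
        jul_jun = monthly[6:-6].reshape(139, 12).mean(axis=1)  # shifted (Jul-Jun) year means
        print(jan_dec[:3], jul_jun[:3])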

  1993. manacker Says:

    Robert S

    You asked about the chart on specific humidity versus change in SST (K) which I posted.
    The chart (Fig. 7) comes from Minschwaner + Dessler (2004):

    Click to access minschwaner_march04.pdf

    Figure 7 shows the correlation between interannual variations in monthly mean UT water vapor and ΔSST, where the SST variations are computed as before but over the time period 1993–99. As in Fig. 6, we also include the linear least squares fit, model results, and extremes of constant specific and relative humidities.

    The linear regression shows a positive slope of 3.0 ± 1.2 ppmv K⁻¹ (2σ) with a correlation coefficient of 0.47 (80 points). The implied positive feedback is smaller than indicated by our model (8.5–9.5 ppmv K⁻¹), but as with the case of MLS, the HALOE water vapor data show that the UT humidity–SST relationship in the present climate regime lies between the cases of constant mixing ratio and constant relative humidity.

    A closer look at Fig. 7 shows that the lowest WV feedback is the one based on the actual observations, followed by the M+D model. The highest WV feedback is the constant RH assumption.

    To make this easier to visualize, I have extended the M+D Fig. 7:

    As you can see, the actually observed range was 1.5 to 4, the M+D model showed 8, and the constant RH assumption (essentially the IPCC case) showed around 26 – around 17 times the lowest observed value.

    Max

  1994. manacker Says:

    Robert S

    The NOAA data is referenced. It shows not only a decrease in observed RH since 1948, but also a decrease in observed SH (water vapor content).

    This may seem strange at first, but cloud formation and precipitation are two variables which apparently affect RH (and SH) with warming.

    At any rate, there appears to be enough data out there based on actual physical observations, which show that the IPCC model assumption of essentially constant RH with warming is incorrect, and that the WV feedback from the model simulations is not validated by the physical observations.

    Is it high by a factor of 2? Or a factor of 4?

    If we add to this the uncertainties regarding the net cloud feedback, it raises serious doubts about a climate sensitivity of much more than around 1C.

    Max

  1995. Bob_FJ Says:

    phinniethewoo Reur April 24, 2010 at 22:08

    So [given VS’s argument] one could repeat the exercise and produce a second sample of 140 years by taking yearly averages over 1 July–30 June. This way one gets a new sample, with different variation in it, drawn from the same larger data reservoir?

    Sounds good to me, but I’m only an engineer.
    And/or, there is the HADCRUT record with 160 years, and a comparison of the Northern and Southern hemispheres, which shows a remarkable coincidence in characteristic curve shape.

  1996. DLM Says:

    Jvd, Jvd

    Yeah, you people have the AC exhaust thing under control, and you have nailed the physics of all the factors operating on this global energy budget thingy down to tenths of a Watt per square meter. Well, at least that’s true in your grant applications, peer-reviewed articles, and those authoritative IPCC reports. But in candid discussions amongst the publicly unequivocal eminent climate geniuses, the data is “lacking”:

    Kevin ‘it’s a travesty’ Trenberth says:
    Mike
    Here are some of the issues as I see them:
    Saying it is natural variability is not an explanation. What are the physical processes? Where did the heat go? We know there is a build up of ocean heat prior to El Nino, and a discharge (and sfc T warming) during late stages of El Nino, but is the observing system sufficient to track it? Quite aside from the changes in the ocean, we know there are major changes in the storm tracks and teleconnections with ENSO, and there is a LOT more rain on land during La Nina (more drought in El Nino), so how does the albedo change overall (changes in cloud)? At the very least the extra rain on land means a lot more heat goes into evaporation rather than raising temperatures, and so that keeps land temps down: and should generate cloud. But the resulting evaporative cooling means the heat goes into atmosphere and should be radiated to space: so we should be able to track it with CERES data. The CERES data are unfortunately wonting and so too are the cloud data. The ocean data are also lacking although some of that may be related to the ocean current changes and burying heat at depth where it is not picked up. If it is sequestered at depth then it comes back to haunt us later and so we should know about it.

  1997. Robert S Says:

    Manacker

    I was asking where the NOAA data came from, not the M&D graph (which shows a definite increase in q).

    “It shows not only a decrease in observed RH since 1948, but also a decrease in observed SH…”

    A decrease in q necessarily implies a decrease in RH, but the former is not supported by most measurements. Dessler+Zhang 2008 and Soden 2005 both show a clear increase in q at a large enough rate to keep globally average RH roughly constant.

    “it raises serious doubts about a climate sensitivity of much more than around 1C.”

    You still have to explain how past climatic variations were so large with such a small sensitivity.

  1998. manacker Says:

    Robert S

    The only long-range data set of RH (and SH) from actual observations is that of NOAA, which I cited. It shows a reduction of both over time.

    Minschwaner and Dessler did a shorter-term study over the tropics and found a small increase in SH, but a marked decrease in RH.

    Both of these data sets point to a much smaller WV feedback than the IPCC model simulations estimate with assumed essentially constant RH. According to Minschwaner and Dessler, this should result in a theoretical modeled warming somewhere between 0.8K and 1.2K (let’s say 1.0K) from a doubling of CO2 with WV feedback.

    One big unknown here is the change in precipitation with warming. A study by Wentz et al. states:
    http://www.scienceonline.org/cgi/content/short/317/5835/233

    Climate models and satellite observations both indicate that the total amount of water in the atmosphere will increase at a rate of 7% per kelvin of surface warming. However, the climate models predict that global precipitation will increase at a much slower rate of 1 to 3% per kelvin. A recent analysis of satellite observations does not support this prediction of a muted response of precipitation to global warming. Rather, the observations suggest that precipitation and total atmospheric water have increased at about the same rate over the past two decades.
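
    The “7% per kelvin” figure quoted above is just Clausius–Clapeyron scaling of saturation vapour pressure; a minimal check using Bolton’s empirical fit (nothing here is taken from the Wentz et al. paper itself):

        import math

        def e_sat(t_c):
            """Saturation vapour pressure in hPa, Bolton (1980) fit; t_c in deg C."""
            return 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))

        t = 15.0                               # a typical surface temperature, for illustration
        print(e_sat(t + 1) / e_sat(t) - 1)     # ~0.066, i.e. roughly 7% per kelvin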

    Held and Soden (which you cited) is based on model predictions, and expresses considerable uncertainty regarding WV feedback:

    Our uncertainty concerning climate sensitivity is disturbing. The range most often quoted for the equilibrium global mean surface temperature response to a doubling of CO2 concentrations in the atmosphere is 1.5°C to 4.5°C. If the Earth lies near the upper bound of this sensitivity range, climate changes in the twenty-first century will be profound. The range in sensitivity is primarily due to differing assumptions about how the Earth’s cloud distribution is maintained; all the models on which these estimates are based possess strong water vapor feedback. If this feedback is, in fact, substantially weaker than predicted in current models, sensitivities in the upper half of this range would be much less likely, a conclusion that would clearly have important policy implications.

    Robert, I think if you truly want to be unbiased and objective here you will have to admit that the case for a strongly positive WV feedback, based on assumed constant RH, is very weak.

    Max

  1999. manacker Says:

    Robert S

    As an adjunct to our discussion on recent observations pointing to a WV feedback, which is much lower than that assumed by the IPCC climate models (with essentially constant RH), you switched subjects and added:

    You still have to explain how past climatic variations were so large with such a small sensitivity.

    Well, Robert, I really don’t have to explain anything.

    Let’s put it this way: empirical data from current physical observations are worth 10 times the results derived from paleo-climate reconstructions (due to all the unknowns and errors involved in these reconstructions), and results from paleo studies are worth 10 times the results from climate model simulations (due to the assumptions made).

    If current data show us that a strongly positive WV feedback (based on assumed constant RH) is not supported by physical observations, as appears to be the case, that’s good enough for me, without having to try to defend the strong feedback premise with paleo studies.

    Max

  2000. Marco Says:

    @Frank:
    The references show what is done to the raw data. The station list allows you to go get the raw data yourself.

    Ah, that’s right, that’s what you do not want to do. You want OTHERS to do all the hard work, and then scream loudly whenever you find even the tiniest of possible issues that may have an effect somewhere.

    And instead of ‘handwaving’ the publications away that use wind shear as an additional measure, why don’t you try and put a publication together which shows why this is wrong?

    Of course, the Wattsian PMS that you like is to say “we don’t know enough, so we don’t know anything. Let’s throw away all the physics, because who knows, maybe something else explains it better”.

  2001. Marco Says:

    Tim,

    I pointed out that we’re already at an average July temperature well above +4, not an anomaly of +4. You predicted that temperature for 2100. Bit early for 2010 to be so far above your prediction, isn’t it?

    Second, on all of my maps Barrow is right where the +0.5 to +1 color coding is given by GISS.

    Third, McLeanian analysis is to remove the trend (which is what you also did), and then claim that you’ve done an analysis that explains the trend. What you have done is to find an ‘explanation’ for the interannual variation (actually, changes in precipitable water may very well be a consequence of the interannual variation). This type of analysis is hilarious, because John McLean defended his analysis by the immortal words “If the SOI accounts for short-term variation then logically it also accounts for long-term variation”. Wow, just wow. In the same way we can thus conclude that since the rotation of the earth explains the short-term daily variation of the temperatures, it also explains the long-term monthly or annual variation of the temperatures! Right?

  2002. Hoi Polloi Says:

    Anybody still reading this thread since VS left the building? Ever since, it’s been wrestling with pigs, and they love it.

  2003. Tim Curtin Says:

    Re Marco April 25, 2010 at 13:01
    Where on earth do you find that “we’re already at an average July temperature well above +4, not an anomaly of +4”? I have just downloaded from GISS the anomaly for March 2010 which is 1.11 from 1950-1980. For July 2009 it was 0.64oC. Where do you find 4oC? NOT at GISS. BTW, the latest [CO2] data from Mauna Loa for March 2010 shows a smaller increase from March 2009 than there was from March 2008 to March 2009. And the annual semi-log growth in [CO2] from 1959 to 2009 is 0.00295% p.a. Wow. You must be frying, turn up your air con, dear Marco, I would hate to see you become a crispy crittur.

    Marco again: “You predicted that temperature for 2100”. When and where? I never did and never would. Following Arrhenius it is unlikely to be more than 0.6 because of the logarithmic response to CO2 forcing – we’ve had 0.7 for a 40% increase in [CO2] since 1900, so an extra 60% will generate at most 0.6oC.
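
    For what it’s worth, the logarithmic arithmetic can be checked directly; a minimal sketch, assuming warming proportional to ln(C/C0) and reading “an extra 60%” as a rise from 140% to 200% of the 1900 level (both readings are my assumptions, not necessarily Tim’s):

        import math

        k = 0.7 / math.log(1.4)          # deg C per unit ln(C/C0), from 0.7 C at +40%
        print(k * math.log(2.0 / 1.4))   # ~0.74 C for the remaining rise to a doubling,
                                         # under these assumptions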

    Marco: “Second, on all of my maps Barrow is right where the +0.5 to +1 color coding is given by GISS”. That is pathetic. The GISS maps have a link, just below the map, to the data in .txt – do you think you can find it? Evidently not. But if you can, you will be able to locate the anomalies by latitude and longitude. But then I forgot: you do not know, and are incapable of finding out, the coordinates for Barrow.

    Of course the GISS mappers, in their rush to get global temps for each month before they have even received the data from most stations, routinely leave out Barrow, which according to IPCC theory should show a bigger anomaly than the rest of Alaska, but does not – which is why they deem all Alaska to be conveniently good enough for Barrow.

    Marco: “Third, McLeanian analysis is to remove the trend (which is what you also did), and then claim that you’ve done an analysis that explains the trend”. That is another false claim. VS exhaustively showed here that the global temp data series all have a unit root. That I addressed, in order to avoid the otherwise inevitable spurious correlation with [CO2], by first differencing (NOT by 1st derivative). John McLean can speak for himself. He does his thing, quite well actually, I do mine.

    Please note that I do NOT 1st difference the Mauna Loa data for [CO2], as it has not been shown as yet that this series has a unit root, and in accordance with the physics, it is the absolute value of [CO2], in ppmv (now 390), NOT the annual increments, which is what matters for “climate change”.
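
    A minimal sketch of the unit-root testing being described, with simulated stand-in data (an augmented Dickey–Fuller test on the level series and again on its first difference):

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(2)
        temp = np.cumsum(rng.normal(0.005, 0.1, 160))   # random-walk-like stand-in series

        for name, series in [("levels", temp), ("first difference", np.diff(temp))]:
            stat, pvalue = adfuller(series)[:2]
            print(name, round(pvalue, 3))   # high p on levels: cannot reject a unit root;
                                            # low p after differencing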

    Marco, I gave you the link to my primary data source for Barrow and 1200 other US locations some time ago. I await your own stellar regressions thereon with interest but zero expectation that you are up to it.

  2004. JamesG Says:

    Robert S
    Fine, but do see the progression please… The tropi tropo isn’t warming as it was supposed to, but that’s no problem for the theory because a. the lapse rate feedback, or b. the instruments are wrong, or c. the error bars for the models are big enough to drive a tram through, or d. that wasn’t the true signature anyway despite what the IPCC says.

    The strato isn’t cooling as it was supposed to, but that might be because of a. ozone, b. instrument error, or c. water vapour.

    The heat is missing from the upper ocean, but that might be because a. it is magically bypassing the upper ocean and settling farther down, b. instrument error again, or c. undefined noise in the undefined climate system.

    What is your scientific conclusion about this deluge of feeble excuses, which somehow seem to collectively avoid reaching the conclusion that what was supposed to be there just isn’t? Too many people with too much funding at stake to admit the truth, perhaps?

    And I daresay Lindzen’s new findings, ordinarily a 4th piece of evidence against significant AGW, will no doubt be found to be due to the ubiquitous instrument error too.

  2005. manacker Says:

    Marco

    You wrote to Tim:

    In the same way we can thus conclude that since the rotation of the earth explains the short-term daily variation of the temperatures, it also explains the long-term monthly or annual variation of the temperatures! Right?

    I think if you replace the word “rotation” with “motion” (i.e. “spin”, “axial tilt” and “movement relative to the sun”), you’ll get a sentence that makes more sense. Right?

    Maybe that’s the better analogy for what Tim is saying than yours.

    Max

  2006. Marco Says:

    Tim,

    (sigh) I repeat: I am NOT TALKING ABOUT ANOMALIES. The July average temperature (as in ‘absolute’ temperature) is already above 4 degrees. Something that you DID predict by 2100. Right here:

    Global average temperature increase GISS HadCRU and NCDC compared


    “BTW, the semi-log growth rate of mean temperature in July at Barrow from 1960 to 2006 was 0.071339% p.a.. Projecting to 2100 at that rate we get from 3.86oC in 2006 to 4.127oC, a rise of 0.2666, or 0.027oC per decade. Is that enough to wipe out all polar bears?”
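
    The quoted projection is plain compound growth; this sketch merely reproduces the arithmetic as quoted (0.071339% p.a. over 2006–2100):

        base, rate, years = 3.86, 0.00071339, 2100 - 2006
        print(base * (1 + rate) ** years)   # ~4.13, consistent with the quoted 4.127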

    Regarding my supposed inability to find the correct data for Barrow:
    http://tinyurl.com/36dkkv3
    This is the text file, where I have been so kind as to use 1951-1980 as the base period (standard) for you, and to look at the 1981-2008 anomaly for the region where Barrow is (as in 156 W, 71 N). Guess what? The anomaly is 0.7809.

    And yes, you used a procedure that mostly removes the trend. As I already noted it is not exactly the same as the first derivative, but the result is quite similar: you remove the trend. And then we are supposed to be surprised you cannot find a link with increasing CO2? It is simply completely irrelevant that you do not take the first difference for the CO2 data if you already remove the one part climate scientists say causes the trend: the trend itself.

  2007. Marco Says:

    Max,

    Tim is doing the same as McLean: remove the trend (first difference does that to a data set), find an explanation for the resulting variance of the data (McLean: SOI; Curtin: precipitable water), and then claim that this also explains the long-term trend. That comes down to explaining the temperature differences observed during the various seasons by the rotation of the earth.

  2008. Marco Says:

    My last sentence to Tim is a bit wrong, it should read:
    It is simply completely irrelevant that you do not take the first difference for the CO2 data if you already remove the one part climate scientists say CO2 causes: the trend itself.

  2009. Robert S Says:

    Manacker

    I’m still wondering where the NOAA data came from. Specifically, what dataset?

    “Held and Soden (which you cited)”

    Wrong paper. I was referring to Soden 2005 “The Radiative Signature of Upper Tropospheric Moistening”. And you keep going back to Dessler+Minschwaner 2004 while ignoring Dessler+Zhang 2008, which finds an increase in q and a constant RH.

    Well, Robert, I really don’t have to explain anything.

    You do have to explain how a low climate sensitivity is consistent with past climatic fluctuations if you’re going to ignore Soden 2005 and Dessler+Zhang 2008, along with numerous more recent studies that show a tropical tropospheric hotspot.

  2010. ptw Says:

  2011. Robert S Says:

    Manacker

    Ok. The NOAA data is from the NCEP reanalysis, as was used in the Paltridge 2009 study. I’m not sure why people continue to reference this dataset, as other reanalyses show opposite trends in q (Dessler has a forthcoming paper on this issue), and NCEP has known problems with spurious trends in humidity data.

  2012. DLM Says:

    JamesG,

    The answer in all cases where the observed data ain’t in synch with the settled science, is: b) the instruments are faulty

    According to doghaza, it is because climate scientists do not make the instruments. I am not kidding. He really said that. Funniest thing I have seen in a very long time.

    As soon as climate scientists figure out how to make thermometers, they will find the MISSING HEAT!

  2013. Tony Says:

    I wonder what a timeline of the global average car colour would look like. Could it be compared to the global average temperature timeline, in terms of relative uselessness?

  2014. HAS Says:

    Marco

    I guess you didn’t follow my post at April 24, 2010 at 00:34 where I tried to help you out with some basic maths.

    You repeat at March 1, 2010 at 21:44:

    “And yes, you used a procedure that mostly removes the trend. As I already noted it is not exactly the same as the first derivative, but the result is quite similar: you remove the trend.”

    And then again in a separate comment at 21:44:

    “Tim is doing the same as McLean: remove the trend (first difference does that to a data set)……”

    Perhaps you have been reading too much Charles Dodgson – he might have been a mathematician but when he wrote The Hunting of the Snark (“What I say three times is true”) the references were not meant to be taken literally. (Martin Gardner has done a great annotation if you are interested in one view on the True Meaning.)

    I note you also have a go at Frank:

    “Ah, that’s right, that’s what you do not want to do. You want OTHERS to do all the hard work, …”

    Doing a basic calculus course wouldn’t be that hard work (you might even find it fun), and it might save you some public embarrassment. Unless you understand this basic math you don’t have a hope of understanding McLean et al, Foster et al and Stockwell et al.

  2015. manacker Says:

    Robert S

    You cannot be serious!

    Now it’s the NOAA record from 1948 to today which is wrong, not the constant RH assumption made for the climate models. Duh!

    Then you come with a paper by Dessler that is “coming out”.

    He (and Minschwaner) already published a study, which I have cited, which shows that the RH drops significantly with warming, and that the “constant RH assumption” of the IPCC models gives grossly exaggerated water vapor content with warming.

    Gimme a break, Robert. You are losing touch with reality here.

    As DLM has just written:

    The answer in all cases where the observed data ain’t in synch with the settled science, is: b) the instruments are faulty

    Looks like you have fallen into this trap, as well, Robert. Too bad. It does not do too much for your credibility.

    Max

  2016. manacker Says:

    Tony

    Try mixing a whole bunch of paints of all colors.

    It comes out BS brown.

    Max

  2017. manacker Says:

    Robert S

    Are you “ignoring” Minschwaner + Dessler (because it does not agree with your personal viewpoint)?

    Contrary to the later Dessler paper (which is simply a “rehash” of the “party line”), it is original work based on actual physical observations made over several years. It shows clearly that the IPCC model assumption of constant RH is not supported by the physical observations.

    Ignoring the NOAA record (because it shows long-term data that are not compatible with the IPCC model assumptions) is also “bad science”, Robert.

    Shame on you!

    Max

  2018. manacker Says:

    Robert S

    You are still asking for the NOAA data set.

    I have given you the link earlier, but will repeat.

    http://www.esrl.noaa.gov/psd/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Specific+Humidity+(up+to+300mb+only)&level=300&lat1=90&lat2=-90&lon1=180&lon2=-180&iseas=1&mon1=0&mon2=11&iarea=1&typeout=1&Submit=Create+Timeseries

    Enjoy!

    Max

  2019. ptwoo Says:

    It is important to note that VS’s exposition so far regarding the unit roots did NOT attack the consensus per se.

    The formalism he describes for detecting and dealing with the unit root only suggests it is better to difference the time series in order to be able to use the wider stats toolbox.

    If there is a CO2-caused temperature rise, surely this will pop out of the analysis. Nothing was done to forbid that possibility.
    Surely it is good to have a -method- rather than no method?

    Surely it is worthwhile, in a blog or in a scientific report, to follow another path of analysis when the proven paths have shown themselves utterly incapable of establishing correlations or, for that matter, of predicting the weather 3 days hence, let alone 30000 days hence.

    Ironically, the warmists attacked him by suggesting it is better to look for the ghost in the machine (“Thor is somewhere, let’s look for Thor!”) – to look for a ghost with a mixture of unproven tools in a random way.
    Another suggestion was, suddenly – because it is a bad idea to tread a path where the conclusion is not set in stone in advance by pope Gore – to approach “the issue” quickly via energy balances instead,
    etc.

    Although the warmists had plenty of suggestions, maybe some worthy ones as well, to do something “else”, they do not have any formalism – except the formalism of BBC “journalist” Peter Sinclair, maybe.

    It is a standard trick in mathematics to differentiate or to integrate when analysis on the level you’re at proves difficult.
    VS showed us science at work.
    The warmists showed us, again, the BBC at “work”.

  2020. Robert S Says:

    Cut the BS, Max.

    There are problems with the NCEP reanalysis data for humidity, and it is you who is saying the satellite measurements from Soden 2005 and Dessler+Zhang 2008, which both show globally averaged RH to be roughly constant, are wrong. Dessler himself seems pretty certain that the constant RH assumption has been validated by measurements. Soden 2005 specifically says

    “Although an international network of weather balloons has carried water vapor sensors for more than half a century, changes in instrumentation and calibration issues make such sensors unsuitable for detecting trends in upper tropospheric water vapor.”

    I’m not ignoring the NCEP reanalysis because it “does not agree with my personal viewpoint”, but because there are known large biases in that particular dataset, and because other reanalyses show opposite trends in q.

    As for Dessler+Minschwaner 2004, I have two things to say
    1) Newer studies by Dessler (e.g. Dessler+Zhang 2008 uses satellite measurements) show a roughly constant RH, as I’ve said. You can call this a “rehash of the party line” or whatever, but it doesn’t make the results false.
    2) A positive trend in q, while RH is declining, as was found in D+M 2004, just means the lapse rate feedback isn’t as strongly negative, and the sum WV+LR is the same.

    The whole talk of losing credibility is pretty lame.

  2021. manacker Says:

    Robert S

    Sorry. No sale.

    The later Dessler paper is not based on any new original work; the 2004 study (which showed a significant reduction in RH with warming) was original work based on physical observations.

    The WV + LR argument is weak, Robert. M+D showed that the increase in WV was significantly less than that assumed by IPCC. This does not automatically mean that the negative LR feedback would compensate for this error in the IPCC assumption.

    Get serious, Robert.

    You are making stuff up as you go along.

    Admit it. The observed data do not support the IPCC assumption of constant RH with warming, despite all your hand-waving.

    Max

  2022. Robert S Says:

    “The later Dessler paper is not based on any new original work”

    What are you talking about? The Dessler+Zhang 2008 study used q and RH measurements from the NASA AIRS. They found globally averaged RH to be roughly constant. Soden 2005 did as well.

    “M+D showed that the increase in WV was significantly less than than assumed by IPCC.”

    You’re right, but only above 250mb in the tropics. This location and altitude represent only a small fraction of the total global WV feedback. Small trends in globally-averaged RH would largely be compensated for by the LR feedback. However, Dessler+Zhang 2008 (“Water-vapor climate feedback inferred from climate fluctuations, 2003 – 2008”) found that globally averaged RH remained constant, so it’s irrelevant.

    “The observed data do not support the IPCC assumption of constant RH with warming, despite all your hand-waving.”

    Only if you apply M+D 2004 to all layers of the atmosphere across the planet and accept NCEP as the one and only. Other reanalyses and more recent studies show RH to be roughly constant.

  2023. Frank Says:

    JvdLaan Says:

    “So Frank if someone points you to a scientific source it is an appeal to authority?
    But ok, for the bigger picture regarding surfacestations, did you know this: http://www.ncdc.noaa.gov/oa/about/response-v2.pdf ? It is the answer from NOAA and that’s what I mean with the bigger picture, since a lot of the discussion here was about AC next to thermometers etc.”

    – A general reference to a person or organization would constitute an appeal to authority. Specific references to data / observations do not constitute such a logical fallacy. Having said that, your link to NOAA’s “talking points” on how they fabricate the surface temperature record is not evidence of AGW, since the record itself is not evidence of AGW.

    Marco Says:

    “The references show what is done to the raw data. The station list allows you to go get the raw data yourself.”

    – This is nonsense. Which data? Are you saying that they used every thermometer reading from every location?

    “And instead of ‘handwaving’ the publications away that use wind shear as an additional measure, why don’t you try and put a publication together which shows why this is wrong?”

    – Tilt. Why would anyone want to use indirect measurements when direct measurements are available? Dissembling perhaps?

    “Of course, the Wattsian PMS that you like is to say “we don’t know enough, so we don’t know anything. Let’s throw away all the physics, because who knows, maybe something else explains it better”.”

    – JvdLaan, this is a good example of an ad hominem…

    Regards – F.

  2024. Bart Says:

    ptwoo/phinniethewoo,

    Don’t post under multiple pseudonyms please.

    I have no beef with people suggesting different ways of analysing the data, as long as they don’t jump to conclusions.

    VS wrote “almost all test equations include a trend term”; however, the trend terms he included are incomplete (as I’ve pointed out numerous times).

    Weather and climate have different characteristics regarding their predictability. Already by virtue of being a long term average, climate is more predictable than the instantaneous weather. The average Dutch summer temperature (i.e. climate) is known within a few degrees, but I wouldn’t have a clue what the weather will be on July 24th of this year. Weather is very dependent on initial conditions; climate is not. Climate is more strongly dependent on boundary conditions (i.e. energy balance; changes in climate forcings). Climate is thus more deterministic than weather is. Your statement regarding 3 vs 30000 days is hence meaningless.

    You’re welcome to engage in a constructive conversation, but stop your silly accusations.

  2025. Marco Says:

    @HAS:

    A simple example of what happens to a curve when you do first differencing can be found here:
    http://www.duke.edu/~rnau/411georw.htm

    We have a line that goes up, and first differencing completely removes that upward trend. If you then do an analysis including the parameter that may have caused the upward trend on the first difference, you will not find a very good correlation (if any at all), and if you include variables that are likely correlated with the short-term variability, guess what happens?

    I understand all the stuff about there possibly being a spurious trend in the temperature series, but then removing it and claiming there’s no room for CO2 to explain the remaining variability is a strawman.
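
    A minimal simulated sketch of this point (all numbers hypothetical): a series driven by a trending forcing correlates strongly with that forcing in levels, but after first differencing the trend is gone and the correlation largely disappears.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 140
        forcing = np.linspace(0, 2, n)                  # stand-in "CO2 forcing"
        temp = 0.4 * forcing + rng.normal(0, 0.1, n)    # trend plus weather noise

        print(np.corrcoef(forcing, temp)[0, 1])               # high, ~0.9
        print(np.corrcoef(forcing[1:], np.diff(temp))[0, 1])  # near zero after differencing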

    Regarding Stockwell et al: you are aware that it also rebuts MdFC09? It claims that 5-9% of the trend may be explained by SOI *under the specific assumption that there is no interaction between long-term warming through other forces and SOI*. MdFC, without any actual analysis, it should be said, just claim that the long-term trend can be explained by the correlation of variability with ENSO (the “short-term trend also ‘logically’ explains the long-term trend” fallacy).

  2026. Marco Says:

    @Frank:

    1. It might help if you actually read the papers and the information about the land stations used.

    2. What “direct” measurements would that be? Satellites? Sorry, but temperature is a derived parameter in the analysis of radiance. And that’s without the considerable problem of getting the information as a function of height and the problems involved in correcting for satellite drift.
    The radiosonde data? Known to be heavily affected by a variety of issues (see for example Sherwood 2005).
    So, some use yet another parameter as extra control, and because that makes the data closer to expectations, that parameter has to be rejected? Same challenge again: get your criticism published, rather than do loads of dismissing handwaves here on a blog.

  2027. HAS Says:

    Marco

    The point that you are missing is that the two graphs are different. You haven’t removed the trend, you have produced a new series that represents the slope (aka the trend) of the other at each point. I can see that if you are relying on the graphs at Duke you will be confused because they have been (hopefully only) lazy and not labeled the Y axis as points per month.

    The fact that the second series is flat is simply an artifact of the first series having a constant trend. If it were accelerating, the slope of the first difference would continue to go up, etc.

    Differencing (and differentiation) isolates the slope; it does not remove it.

    A basic course in maths still beckons.
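
    A minimal sketch of the counter-point, with hypothetical series: differencing moves a constant trend into the mean of the differenced series, and an accelerating series yields differences that themselves rise.

        import numpy as np

        x = np.arange(100, dtype=float)
        linear = 0.017 * x                  # constant-trend series
        quadratic = 0.0002 * x ** 2         # accelerating series

        print(np.diff(linear).mean())       # 0.017: the slope, preserved as the mean
        print(np.diff(quadratic)[:3], np.diff(quadratic)[-3:])  # differences keep rising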

  2028. manacker Says:

    Robert S

    Here’s an interesting lecture by Dr. William Gray.
    http://ams.confex.com/ams/88Annual/techprogram/paper_129136.htm

    Gray points to observations that show that (contrary to IPCC model assumptions) precipitation increases linearly as SST goes up (refer to Wentz et al. report cited earlier), that there is no observed upper level moistening to maintain constant RH as models are predicting (refer to Minschwaner + Dessler) and that there is no observed upper tropospheric enhancement of warming as assumed by models (missing “hot spot”). From all this he concludes that the WV feedback is neutral to slightly negative.

    He does cite the long-term NOAA data showing a reduction in SH and RH with warming, which he agrees may have errors, but which has not been refuted by actual long-term data, so it basically still stands as physically observed data (as opposed to model simulations, which are based on all sorts of assumptions).

    Another bit of support comes from the Aqua satellite data. This shows that water vapor does not march in goose-step with Clausius-Clapeyron (as assumed by the model simulations), but that the atmosphere tends to be self-regulating (as the NOAA observations also show).

    But Robert, I see that no matter what data are presented, you already have your mind made up on this to only accept the data that confirm your personal belief, so a further discussion is pointless.

    Next you’ll be telling me that the IPCC model simulations are right in assuming that clouds exert a strongly positive feedback with warming (despite the recent physical observations to the contrary)! I can only comment that blind faith is a great thing, but it doesn’t have much to do with “science”.

    Max

  2029. Tim Curtin Says:

    Re Marco @April 25, 2010 19:16
    I am sorry, I did not realise you were referring to my Barrow forecast for 2100; one month doth not a summer make in 2100. Barrow in July has been even hotter, over 7 as recently as 1989, so 4 in July 2009 was nothing unusual. I stand by my forecast, as there is really no significant trend at Barrow. Wanna bet?

    Sure, the map you refer to covers Barrow, but only by including it in Alaska as a whole. Why does the map not show the actual grid cell for Barrow itself, where the anomaly was only 0.3 for 1980-2009, not your 0.789?

    As you refuse to address spurious correlation by first differencing I am sure you will be glad to accept my finding using just the actual values (thereby better retaining the trends) that the changes in average daytime temperature at Port Barrow in July from 1960 to 2006 are almost completely explained by “global” solar radiation (ie net of albedo), with Adj R2 0.87, t=19.98 and p = 2.63E-24. Not only that, bringing in Net Forcing (NF) by GHG along with both “global” (i.e. net) solar radiation and “H2O”, NF proves to have a NEGATIVE coefficient, while SR and H2O are strongly positive and fully significant with p=0.02 and 0.10476E-8, and Adj R2= 0.96, while this run even passes the Durbin-Watson test (>2).
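
    A minimal sketch of the kind of regression-plus-diagnostic being described, with simulated stand-in variables (none of Tim’s actual data or results):

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson

        rng = np.random.default_rng(5)
        n = 47                                   # e.g. 1960-2006, one point per year
        solar = rng.normal(200, 10, n)           # stand-in "global" solar radiation
        temp = 0.02 * solar + rng.normal(0, 0.3, n)

        model = sm.OLS(temp, sm.add_constant(solar)).fit()
        print(model.rsquared_adj)                # adjusted R-squared of the fit
        print(durbin_watson(model.resid))        # ~2 when residuals are uncorrelated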

    So I am delighted more generally that thanks to Marco I no longer have to do first differencing to get away from the nuisance of the other tedious tests carried out by VS. Will you accept my invitation to be co-author of my upcoming paper showing the irrelevance of CO2 to temperature trends?

    Where CO2 is relevant is in explaining much of the growth in crop yields – I have just done a test using a quadratic function on CO2, temperature, and rainfall which produces an amazingly accurate hindcast of wheat yields in Moree NSW 1965-1999 using just those 3 variables. See you at my CO2 & food seminar at ANU here in Canberra Thursday 1230, volcanoes permitting?

  2030. Marco Says:

    @HAS:
    Let’s assume for a moment a linear trend. In that case the first derivative is a flat line whose y-value equals the slope of the original curve, and the first difference sits close to that value (it is usually somewhat off). If there’s a change in the slope at increasing x values, that will generally show up as curvature.

    For the Barrow data you will get a close to linear trend from 1960 onwards.

    However, if you then try to correlate the first difference data with an increasing CO2 forcing, your correlation will, of course(!), be really bad: the one thing that the increased CO2 forcing explains is the linear trend, and that has become a flat line. A flat line correlates rather poorly with an almost linear increase. You could also take the first difference of the CO2 data, but then you’d have to add a constant as well. A flat line at a few ppm does not fit well with a flat line at a different level. Plenty of issues with Tim’s fitting procedures!

  2031. manacker Says:

    Robert S

    Some more stuff on the strongly positive WV feedback, as projected by the IPCC model simulations.

    IPCC AR4 WG1 Ch. 9, p.675 (Fig. 9.1) shows graphical simulations of the warming signatures of each forcing that the IPCC models expect to see under certain conditions. Graph A is what we would expect to see from a known change in TSI, graph B is what IPCC would expect to see from a known change in volcanic activity, graph C is what the models would expect to see from a known change in well mixed GHGs and so on.

    Since we cannot measure the atmosphere for each individual warming signature, we need to combine all warming signatures into one (graph F). So if we were to measure the atmosphere we would need to compare it to graph F.

    Graph F shows a big red blotch. This big red blotch is a “fingerprint” of the simulated warming signature of well mixed GHG’s. As is generally known today, the big red blotch does not exist in real life, therefore graph C is incorrect.

    The big red blotch in graph C is based on the assumption that there will be an accumulation of WV in the troposphere. Inasmuch as there is no big red blotch there is no accumulation of WV in the troposphere as predicted by the model simulations.

    This tells us that without the accumulation of WV the IPCC’s projected temperature increase (which relies heavily on an accumulation of WV) will not be achieved.

    In addition, the real world data show slight stratospheric cooling and slight tropospheric warming. This is what we would expect to see as a result of changes in ozone (graph D); these changes have been observed and are not questioned.

    Slight tropospheric warming would also be caused by a slight increase in TSI, which has also been observed.

    Real world data also show slight surface warming predominantly in the NH, which could well be at least partially attributed to increases in GHGs (CO2 etc.), but what we do not see is the big red blotch caused by a tropospheric accumulation of WV.

    In other words, the model assumptions on tropospheric moistening are wrong and there is no observed upper tropospheric enhancement of warming, as predicted by the models.

    Santer et al. (2008), “Consistency of modelled and observed temperature trends in the tropical troposphere”, claim that the discrepancy between modeled and observed tropical tropospheric temperature trends has been resolved.

    Click to access NR-08-10-05-article.pdf

    We revisit such comparisons here using new observational estimates of surface and tropospheric temperature changes. We find that there is no longer a serious discrepancy between modelled and observed trends in tropical lapse rates.

    However, a later study by McIntyre and McKitrick, extending the data series from 1999 to the end of 2007, shows that this discrepancy between the model simulations and the actual observations still exists.
    http://arxiv.org/abs/0905.0445v1

    Click to access 0905.0445.pdf

    This is a dilemma for the IPCC projections of expected AGW by year 2100, inasmuch as the lack of WV increase (and enhanced warming) in the tropical troposphere invalidates the assumption of strongly positive feedback from WV, which represents two-thirds of the projected warming.

    So the absence of the predicted “hot spot” is a major problem for the IPCC, despite the fact that many papers are attempting to downplay it as insignificant, in order to defend the IPCC projections.

    This has actually followed a rather strange pattern.

    Santer et al. (2005) tells us:

    On multi-decadal timescales, tropospheric amplification of surface warming is a robust feature of model simulations, but occurs in only one observational dataset. Other observations show weak or even negative amplification.

    [In other words, models show the “hot spot”, but physical observations generally do not.]

    Then came the claim by Santer (2008) that the radiosonde data were inaccurate and may have simply missed the “hot spot”, despite the fact that it should be almost 1°C warmer than the surrounding area. This was followed by the suggestion that we should use the wind shear data from the very same sondes to provide a proxy for the temperature. These data, along with a bit of software, showed the hot spot may exist after all. In this later paper, Santer argued that it was possible that the “hot spot” might be present and yet went undetected.

    The latest rationalizations state that the “hot spot” is not a necessary “fingerprint” of greenhouse warming (despite IPCC AR4), so its absence does not refute greenhouse warming. In addition, its absence does not invalidate the premise that there will be a strong WV feedback.

    So we have gone full circle on the existence and relevance of the “hot spot”, as well as the observational support for enhanced tropospheric moisture and a strongly positive WV feedback.

    You’ll have to admit, Robert, that it all smells a bit fishy.

    Max

  2032. Marco Says:

    Tim,

    I really wonder where you get your data. July 2009 in Barrow was 6.7, not 4. Betting will be useless, considering neither of us is likely to be still alive in 2100, unless they find a cure for old age really soon.

    And why don’t you give a link to the grid with Barrow that you claim only shows a 0.3 anomaly for the 1981-2008 period vs 1951-1980 (standard for GISS) ? I’ve given you the map AND the text values.

    I also previously pointed out that SSR for Barrow (and I do assume you meant Barrow, Alaska, not Port Barrow) shows a downward trend, while temperatures show an upward trend. And yet again you find a correlation between the two. Amazing, you have just ‘proven’ that less sunlight warms the earth! Considering that that requires amazing explanations, I will decline being a co-author on your paper.

    Regarding your seminar: I have already seen your (in)ability to explain your various claims on this topic on Tim Lambert’s blog (the infamous Tim Curtin thread), so I have absolutely no intention to waste more time on you.

  2033. Frank Says:

    Marco Says (@Frank):

    “What “direct” measurements would that be? Satellites? Sorry, but temperature is a derived parameter in the analysis of radiance.”

    – By your reasoning we should reject thermometers as well, since these only indirectly yield temperatures by measuring the expansion of a fluid or a change in electrical resistance. Maybe Gaia informs you directly of atmospheric phenomena, but most of us rely on calibrated instruments that have a well-known hierarchy of accuracy. (Hint: radiosondes and satellites lie at the top of the scale – contrived indices like wind shear lie at the bottom.) Which brings me to this gem:

    “So, some use yet another parameter as extra control, and because that makes the data closer to expectations, that parameter has to be rejected? ”

    – Forgive me for asking this, but are you really comfortable with “some” people rejecting independently confirmed data in favor of contrived data that better agrees with “their” expectations? This thinking seems unscientific, at best, and is incredibly dangerous in the political realm.

    Again, neither the surface temperature record nor the GCMs provide evidence of AGW. What evidence do you have?

    Regards – F.

  2034. manacker Says:

    Robert S

    We have been discussing the IPCC model assumption of a strong WV feedback due to constant RH with surface warming, with both of us citing conflicting data; you have brought studies which show a strong increase in WV with warming (enough to essentially maintain constant RH), while I have shown you equally convincing data which show strongly reduced RH or even reduced atmospheric water vapor content (SH) with warming.

    Let me ask you to open up your mind a bit and put aside any firm beliefs that you have on this topic for an instant.

    The long-term NOAA radiosonde record of atmospheric water vapor content (SH) shows a reduction from 1948 to today. You may say that this record is flawed and should be ignored, but you have no real evidence to support this claim.

    This long-term record has been plotted against the HadCRUT record of global temperature, which has shown an increasing trend over the same time period.

    At the same time there have been short-term studies which show an increase in water vapor content with surface warming. Some of these show a slight increase in WV, others show a major increase (enough to maintain constant RH).

    A closer look at the long-term record also shows these short term “blips” where SH increases with surface temperature, as can be seen from the shapes of the two curves, even though the long-term trend clearly shows a decreasing SH while temperature increases.

    The question that this raises: Is there some sort of a “natural thermostat” mechanism by which atmospheric water vapor content is regulated to prevent a long-term “positive feedback” from water vapor, as is assumed by all the IPCC climate models? Does this tie in some way with the formation of clouds and/or precipitation trends?

    Think about it a bit, Robert.

    Max

  2035. manacker Says:

    Bart,

    Not to cut in to your exchange with Phinniethewoo, but you wrote something that caught my eye:

    Weather and climate have different characteristics regarding their predictability. Already by virtue of being a long term average, climate is more predictable than the instantaneous weather. The average Dutch summer temperature (i.e. climate) is known within a few degrees, but I wouldn’t have a clue what the weather will be on July 24th of this year. Weather is very dependent on initial conditions; climate is not. Climate is more strongly dependent on boundary conditions (i.e. energy balance; changes in climate forcings). Climate is thus more deterministic than weather is. Your statement regarding 3 vs 30000 days is hence meaningless.

    The fallacy of your argument lies in (a) the unpredicted outlier and (b) the small error that gets amplified by a longer prediction period.

    The “weather/climate” analogy may hold for things that are repetitive and unchanging in the long run, but this analogy does not hold at all for predicting long-term change.

    For a good treatise on why long-term projections are prone to greater uncertainties and errors than shorter term projections, I can recommend “The Black Swan”, by Nassim Taleb (which I believe Phinniethewoo has already recommended to you).

    Read it, and you will see the error in your logic.

    Max

  2036. Bart Says:

    Manacker, you wrote:

    The “weather/climate” analogy may hold for things that are repetitive and unchanging in the long run, but this analogy does not hold at all for predicting long-term change.

    That sounds like an implicit admission that the climate is changing, right? We’re making progress.

    The point is that the future climate depends on boundary conditions, whereas the future weather depends on initial conditions (and hence its predictability quickly fades away after a few days). If we have a decent handle on the (expected) change in boundary conditions, we’ll also have a half decent handle on the expected changes in climate as a result. (Uncertainty in climate sensitivity plays a role here as well, of course.)
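
    This difference is easy to demonstrate with a toy chaotic system. Below is a minimal sketch in Python using the classic Lorenz-63 equations with textbook parameter values – emphatically not a climate model, just an illustration of initial-condition versus boundary-condition dependence:

        # Toy illustration with the Lorenz-63 system (not a climate model!):
        # tiny errors in the initial conditions ruin the forecast of the state
        # ("weather"), but the long-run statistics ("climate") respond
        # systematically to a change in the forcing parameter rho.
        import numpy as np

        def lorenz_run(x0, rho, sigma=10.0, beta=8.0 / 3.0, dt=0.005, nsteps=200000):
            """Integrate Lorenz-63 with simple Euler steps; return the trajectory."""
            traj = np.empty((nsteps, 3))
            x, y, z = x0
            for i in range(nsteps):
                dx = sigma * (y - x)
                dy = x * (rho - z) - y
                dz = x * y - beta * z
                x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
                traj[i] = (x, y, z)
            return traj

        # "Weather": two runs differing by one part in a billion diverge completely.
        a = lorenz_run((1.0, 1.0, 1.0), rho=28.0)
        b = lorenz_run((1.0 + 1e-9, 1.0, 1.0), rho=28.0)
        print("state difference after 25 time units:", abs(a[5000, 0] - b[5000, 0]))

        # "Climate": the long-run mean hardly cares about the initial conditions,
        # but it shifts systematically when the boundary condition (rho) changes.
        print("mean z, rho=28, run a:", a[20000:, 2].mean())
        print("mean z, rho=28, run b:", b[20000:, 2].mean())
        print("mean z, rho=35:", lorenz_run((1.0, 1.0, 1.0), rho=35.0)[20000:, 2].mean())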

  2037. Robert S Says:

    You may say that this record is flawed and should be ignored, but you have no real evidence to support this claim.
    http://www.springerlink.com/content/v164l177374p1445/

    “Major problems are found in the means, variability and trends from 1988 to 2001 for both reanalyses from National Centers for Environmental Prediction (NCEP) and the ERA-40 reanalysis over the oceans, and for the NASA water vapor project (NVAP) dataset more generally. NCEP and ERA-40 values are reasonable over land where constrained by radiosondes. Accordingly, users of these data should take great care in accepting results as real.”

    Even Paltridge 2009 notes that the NCEP humidity data should be treated with great caution.

    On M+D 2004, what does Richard Lindzen have to say about the results?

    “…the observations do make a case that the water vapor feedback above 200 millibars [12 kilometers] is likely to be somewhat positive”…But what do these results mean for the larger picture of climate change? “The climate implications are very limited,” Lindzen says. One of his main criticisms is that the upper troposphere doesn’t have much influence over the water vapor feedback of the entire atmosphere.

    D+Z 2008 found that the overall WV feedback was large and positive, consistent with a constant RH.

    The Hotspot
    Beyond Santer et al., we also have Allen and Sherwood
    http://www.nature.com/ngeo/journal/v1/n6/abs/ngeo208.html
    Sherwood et al.
    http://journals.ametsoc.org/doi/abs/10.1175/2008JCLI2320.1
    Haimberger et al.

    Click to access i1520-0442-21-18-4587.pdf

    Whether the hotspot exists or not, it’s easy to see why it is a predicted result from all warming. It just has to do with the fact that the observed lapse rate in the tropics is close to the saturated value, and as the surface warms, latent heat release during moist adiabatic ascent increases. I’m not sure about the AR4’s discussion on the topic, but an atmospheric thermo text of mine asks readers to prove the “hotspot” from basic principles – regardless of how the surface warms. Perhaps the hotspot isn’t there, I don’t know, but I honestly don’t believe it is a fingerprint of GHG warming.
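
    For what it’s worth, the textbook exercise is easy to check numerically. Here is a rough sketch using the standard formula for the saturated adiabatic lapse rate, with textbook constants and the Bolton approximation for saturation vapor pressure (my choices, not from any particular source discussed above):

        # Back-of-envelope check of the "hot spot" argument: the saturated
        # (moist adiabatic) lapse rate decreases as the surface warms, so a
        # surface warming implies amplified warming aloft along a moist
        # adiabat, regardless of what caused the warming.
        import math

        G = 9.81       # gravity, m/s^2
        CP = 1004.0    # specific heat of dry air, J/(kg K)
        RD = 287.0     # gas constant of dry air, J/(kg K)
        LV = 2.5e6     # latent heat of vaporization, J/kg
        EPS = 0.622    # ratio of molar masses, water vapor / dry air

        def sat_vapor_pressure(T):
            """Saturation vapor pressure in Pa (Bolton 1980 fit), T in kelvin."""
            return 611.2 * math.exp(17.67 * (T - 273.15) / (T - 29.65))

        def moist_lapse_rate(T, p=100000.0):
            """Saturated adiabatic lapse rate in K/km at temperature T, pressure p (Pa)."""
            es = sat_vapor_pressure(T)
            rs = EPS * es / (p - es)  # saturation mixing ratio, kg/kg
            num = G * (1.0 + LV * rs / (RD * T))
            den = CP + LV ** 2 * rs * EPS / (RD * T ** 2)
            return 1000.0 * num / den

        for T in (290.0, 295.0, 300.0, 302.0):
            print("surface T = %.0f K -> moist lapse rate = %.2f K/km" % (T, moist_lapse_rate(T)))
        # The rate drops from roughly 4.5 to 3.5 K/km over this range, so the
        # upper troposphere warms faster than the surface along the adiabat.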

    Clouds
    You’re right that there is a great deal of uncertainty surrounding the cloud feedback. I don’t think the IPCC conclusively showed that the feedback is strongly positive. MY problem with the idea that clouds provide a strong net negative feedback isn’t that it “conflicts with my ideology” somehow, but that it does not explain the paleoclimate. Unless transitions to and from glacial periods were initiated by MUCH larger forcings than previously thought, I don’t think an insensitive climate system works.

    Frankly, I’ve grown tired of this “you can’t accept the truth because your mind is already made up” talk. If you’re going to continue down that road, I really see no point in continuing this conversation. I think we can all accept proper evidence when it’s presented.

  2038. manacker Says:

    Bart

    Yes. I do admit it.

    Climate is changing.

    It always has.

    And it always will.

    But that does not change the basic fact that the longer a prediction term is, the more likely it is that unknown outliers, or small errors between expected and actual trends, will render the prediction worthless.

    This is true for any prediction in a changing world, including climate.

    Your key word here was “uncertainty”.

    A brief (and maybe silly) example is the period after 2000, where the Met Office now tells us that “natural variability” has more than offset record CO2 increase to invalidate a 0.2C per decade warming prediction, replacing it with an almost 0.1C per decade observed cooling.

    This is not to claim that the recent cooling is part of a long-term trend, just that it was not expected and therefore not predicted, due to this “uncertainty”.

    You are aware (I am sure) of the predictions made around 1860 that Manchester would be covered in two meters of horse manure by 1920, due to the rapidly expanding number of horse carriages (similar forecasts were made for New York). The automobile was the “outlier” in this case (and Henry Ford laughed all the way to the bank).

    Are we sure that there is not such an “outlier” in the case of our planet’s future climate, which would override all the “boundary conditions” and theoretical “climate sensitivities”, of which we think we are aware?

    The longer our prediction period, the greater is the chance that this will be so.

    By definition.

    Max

  2039. manacker Says:

    Robert S

    You have avoided my question regarding the long-term versus short-term RH and SH record, instead referring me to Trenberth et al., who attempt to invalidate the long-term record by citing some short-term examples.

    Yes. I am aware of Trenberth et al. The short-term “blips” in the record seem to show increased SH with increased temperature (as I pointed out earlier), but the long-term record shows decreased SH with warming (as I also pointed out).

    And, based on this observation, my question to you was:

    Is there some sort of a “natural thermostat” mechanism by which atmospheric water vapor content is regulated to prevent a long-term “positive feedback” from water vapor, as is assumed by all the IPCC climate models? Does this tie in some way with the formation of clouds and/or precipitation trends?

    That is the question, Robert. It is not intended as a “trick question”.

    Other than addressing this basic quandary, I do not think it makes any sense for us to continue to quibble about WV feedback.

    I have shown you studies that demonstrate that the WV feedback is minor (with RH dropping with increased temperature) and you have shown me studies that say it is major (with RH remaining essentially constant). After looking at the NOAA record more closely, I suspect the problem is that the short-term effect is not the same as the long-term effect.

    And that is what my question is all about. I think it is valid.

    Otherwise we’ll move on to cloud feedbacks, where all climate models cited by IPCC assume a strongly positive feedback (0.69±0.38 W/m^2K), resulting in a 2xCO2 temperature increase of 1.3±0.55K, but observed data seem to put these assumptions into serious question.

    Max

  2040. Robert S Says:

    “Yes. I am aware of Trenberth et al. The short-term “blips” in the record seem to show increased SH with increased temperature (as I pointed out earlier), but the long-term record shows decreased SH with warming (as I also pointed out).”

    I’m not sure what you’re talking about. Trenberth et al. show that longer trends (1988-2001) in the NCEP reanalysis are wrong. Prior to 1988, the satellite data that Trenberth uses is not available, but it is known that long term records in radiosondes contain large inhomogeneities due to improving observing systems, increasing spatial resolution (but still very little ocean coverage), and the NCEP data in particular contains large model biases. Chapter 3 of AR4 states:

    The network of radiosonde measurements provides the longest record of water vapour measurements in the atmosphere, dating back to the mid-1940s. However, early radiosonde sensors suffered from significant measurement biases, particularly for the upper troposphere, and changes in instrumentation with time often lead to artificial discontinuities in the data record (e.g., see Elliott et al., 2002).

    Soden 2005 studied satellite data for the period 1982-2004, and found that q increased at a large enough rate to keep RH constant. NCEP shows the opposite – that q decreases.

    Considering that the newer generation of reanalyses (i.e. ECMWF and MERRA) and the satellites themselves do not show the same trends, I don’t know why you continue to assert that the q data from NCEP is correct. It isn’t a “short-term vs. long-term” difference; it’s that either NCEP is wrong, or every other observational dataset is. I prefer the former.

  2041. Robert S Says:

    “I have shown you studies that demonstrate that the WV feedback is minor (with RH dropping with increased temperature)”

    I don’t think I’m being ‘close-minded’ when I say that you have done no such thing. You’ve linked to M+D 2004, which shows an increase in q, but a decrease in RH at a particular altitude in the tropics. Richard Lindzen (as I quote above) notes that this location only represents a small portion of the overall WV feedback. Soden 2005 and D+Z 2008 use satellite trends (both short-term and long-term) to show that globally averaged RH remains constant and that the overall WV feedback is large and positive.

    You’ve also linked to the NCEP reanalysis data. Not only does this data have known issues that result in spurious trends, but better observational records of WV content show opposite trends.

    On the topic of clouds, I’ll admit that I’m not as read up as I’d like to be, and so I don’t really have much to say (other than what I’ve written above). I did recently read Lindzen’s paper on the Faint Young Sun paradox, and a possible resolution involving cirrus clouds. Interesting, if less than conclusive.

  2042. phntwoo Says:

    Bart

    Changing a climate boundary condition does not necessarily mean we are heading for doom in 2100 either, right?

    I think everybody knows by now that climate swings about, and more than just with the ice ages. The whole discussion on e.g. the MWP showed there is more in the spectrum. Glaciers melting since 1800 are another example of variation, globally and locally.

    I thought we are in an interglacial and the long term prediction is: getting cooler. Which humanity absolutely cannot afford.

    So that’s why we should heat more and abundantly spread CO2 around.
    It is also better for plant life (ergo animal life).
    Lindzen explained that plant life has basically been starved of CO2 since the Holocene. We are trying to fix this now, please join us in this win-win strategy!

    BANG!

  2043. phntwoo Says:

    It is a strange thing, btw, that we are not able to predict the weather more than 2 days in advance, given that we now have complete global imaging and accurate historic data for the last 3000 days. Even for a chaotic system that’s very poor.

    I would not be surprised if the whole community forgets to difference their data where it is needed, and that this is the reason it always goes wrong.

    Let’s do the sound thing, and start by not trusting these biotopes with their many PhDs anymore.

    I want the Met’s data on the web, now.
    The data, the methods and the code.
    Cannot possibly do that?
    => Time for Jack Welch policies: kick 20% out tomorrow, stamp the stonker on the cobbles, ask again. I think by midweek we might have cooperation.

    I think it is BREATHTAKING to find out there is a stinky rathole in Belfast where they keep bleating about “their” intellectual property.
    We should fetch the tree rings and research tomorrow, then close down the whole thing, and give the erudites lingering on there a one-way ticket to Kandahar.

  2044. manacker Says:

    Robert S

    OK.

    I see that you are unwilling to concede that there are data that show a weak to non-existent WV feedback and other data that show a strong WV feedback, i.e. that the data are not conclusive for either case.

    You selectively quote Prof. Lindzen with regard to the validity of the M+D findings of slight increase of SH over the tropics with warming, but you fail to mention that Lindzen does not at all support the premise of a strong WV feedback, and that this is certainly not the reason for his doubts concerning M+D.

    You apparently do not think it is very important whether or not there is an observed signal of increased tropospheric moisture and resulting warming, even though others, such as Dr. Gray, find this very important. Regarding the missing “hot spot” you “don’t believe it is a fingerprint of GHG warming”, even though the IPCC charts I cited show that it is exactly that.

    You cite a study by Soden as evidence that there is no discrepancy between modeled and observed tropical tropospheric temperature, yet a later study I cited by McIntyre and McKitrick (with data from 1999 to end 2007 added) shows that this discrepancy still exists.

    You also ignore the studies I cited which show that precipitation does not lag SH increase as assumed by the models, but increases at around the same rate, acting to reduce RH.

    I also see that you believe there are “better” long-term records of SH than those of NOAA, (where “better” apparently means they agree more closely with what you want to believe, even though they do not cover the entire time period), so the NOAA record should therefore be ignored as irrelevant.

    It appears very much that your mind is made up, Robert, so there is not much point in continuing this discussion.

    Unfortunately, your arguments were not very convincing, for the above reasons (you basically close out what you do not want to hear).

    So let’s move on to clouds, if you are game.

    I’ll get back to you.

    Max

  2045. Robert S Says:

    “You selectively quote Prof. Lindzen with regard to the validity of the M+D findings of a slight increase of SH over the tropics with warming, but you fail to mention that Lindzen does not at all support the premise of a strong WV feedback, and that this is certainly not the reason for his doubts concerning M+D.”

    Richard Lindzen, on page 19 of his new paper titled “Can thin cirrus clouds in the tropics provide a solution to the Faint Young Sun paradox?”, says the following:

    Recent studies suggest that the strong positive water vapor feedback implied by the invariance of relative humidity may be within reasonable agreement with satellite observations [Dessler et al., 2008], even though the vertical profile of relative humidity is not strictly conserved.

    By Dessler et al. 2008 he’s referring to Dessler+Zhang 2008.

    “You cite a study by Soden as evidence that there is no discrepancy between modeled and observed tropical tropospheric temperature”

    Soden 2005 measured moisture trends, not temperature. Though I do suspect that there is little discrepancy between modeled and observed tropical tropospheric trends, given the several studies I cited above (besides Santer et al., which you say is ‘refuted’ by M&M).

    “Regarding the missing ‘hot spot’ you ‘don’t believe it is a fingerprint of GHG warming’, even though the IPCC charts I cited show that it is exactly that.”

    Despite what the IPCC says, I’m fairly certain that the hotspot is not a fingerprint of GHG warming, but of all warming. I’ve already explained why above.

    “where ‘better’ apparently means they agree more closely with what you want to believe, even though they do not cover the entire time period”

    Oh, come off it. I’ve already explained why I think the NCEP humidity data is probably wrong, and it has nothing to do with NCEP not “agreeing with what I want to believe”.

  2046. Tim Curtin Says:

    1. Marco@ April 26, 2010 at 16:10
    Apologies again, I thought you were referring to July 2009 when you said “July [at Pt Barrow] was already 4”. You are right: in fact the average for July from 1981-2009 inclusive is 4.96, well up on 3.82 for 1951-1980, and the same for the annual mean, -11.32°C v. -12.78°C. However, the best fit for the trend lines is logarithmic (higher R2s relative to linear), which implies the rising trend is already levelling off.
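
    Checking the linear-versus-logarithmic comparison takes only a few lines outside Excel. A rough sketch, where the data file and time axis are placeholders for the actual Barrow July means:

        # Compare a linear and a logarithmic trend fit by R^2, as claimed above.
        # The file name is a placeholder; substitute the actual Barrow July means.
        import numpy as np

        temps = np.loadtxt("barrow_july_means.txt")  # hypothetical data file
        t = np.arange(1, len(temps) + 1)             # 1, 2, ... so log(t) is defined

        def r_squared(x, y):
            """R^2 of an ordinary least-squares straight-line fit of y on x."""
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)
            return 1.0 - resid.var() / y.var()

        print("linear R2:", r_squared(t, temps))
        print("log R2:   ", r_squared(np.log(t), temps))
        # A higher R2 for the log fit is what "levelling off" means here, though
        # with a few dozen noisy points the difference is rarely decisive.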

    As for the correlation between AVGLO (“global”, i.e. net, SR) and average temp.: Excel never lies, or does it?

    Variable     Coefficient     Std. Error     t Stat         P-value
    Intercept    0               #N/A           #N/A           #N/A
    RF           -0.903320551    0.286413562    -3.15390286    0.002901748
    AVGLO        0.000512355     0.000212204    2.414450528    0.019984733
    H2O          4.896126086     0.696870588    7.025875633    1.0476E-08

    Adj R2 = 0.94

    Everything depends on variable selection. You evidently were not paying attention when reading my last, where I stressed that “global” solar radiation (AVGLO here) means, for the ESRL Barrow data set, surface solar radiation NET of albedo.

    If we remove the albedo component and use it on its own, we find it has a positive and highly significant (99%) effect on AvDaytime Temp. (Adj R2 = 0.81). But albedo as such is really a negative term, as it REDUCES the impact of incoming solar radiation – and thereby explains why falling “AVGLO” is associated with rising AvDT in a univariate analysis.

    So let’s now regress AvDT on all the main variables (except Radiative Forcing, RF, from [CO2]):
    Variable     Coefficient    Std. Error    t Stat      P-value
    Intercept    0              #N/A          #N/A        #N/A
    DIR+DIFF     0.000102       0.000503      0.2035      0.839727
    Albedo       0.000208       0.000948      0.219448    0.827364
    H2O          4.473366       0.650322      6.878696    2.17E-08
    RH           -0.04191       0.024931      -1.68126    0.100136
    AVWS         0.059088       0.239422      0.246792    0.806271

    Only H2O is significant (Adj R2 =0.94). Note how when AVGLO is decomposed into DIR+DIFF & Albedo, both not significantly different from zero, it ceases to play a role.

    Add the RF of [CO2] to the mix:
    Variable     Coefficient    Std. Error    t Stat      P-value
    Intercept    0              #N/A          #N/A        #N/A
    DIR+DIFF     0.000143       0.00055       0.260115    0.796077
    Albedo       0.000146       0.00101       0.144913    0.88549
    H2O          4.545091       0.754404      6.024741    3.99E-07
    RH           -0.03559       0.041165      -0.86466    0.392257
    AVWS         0.081646       0.268605      0.303962    0.762694
    RF           -0.16436       0.84596       -0.19428    0.846913

    Alas for Marco, RF is still negative (like RH, as it happens), and as ever H2O alone is statistically significant – hugely so – and positive; Adj. R2 = 0.94. Anyway, Marco, you have the data sets: DIY, stop arm-waving, and explain why H2O is not decisive and how RF determined Barrow’s climate from 1960-2006 despite what these regressions tell us.
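
    If Excel is not to your taste, the same zero-intercept regressions can be re-run in a few lines. A sketch, with the file name and column labels as placeholders for the ESRL Barrow series:

        # Re-run of the zero-intercept multiple regression above, outside Excel.
        # File name and column labels are placeholders for the ESRL monthly data.
        import pandas as pd
        import statsmodels.api as sm

        df = pd.read_csv("barrow_esrl_monthly.csv")  # hypothetical file

        y = df["AvDT"]                               # average daytime temperature
        X = df[["DIR+DIFF", "Albedo", "H2O", "RH", "AVWS", "RF"]]
        # No constant is added, matching the "Intercept 0" rows in the tables.

        results = sm.OLS(y, X, missing="drop").fit()
        print(results.summary())                     # coefficients, t stats, P-values
        print("Adjusted R2:", results.rsquared_adj)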

    Finally, Marco said “why don’t you give a link to the grid with Barrow that you claim only shows a 0.3 anomaly for the 1981-2008 period vs 1951-1980 (standard for GISS)? I’ve given you the map AND the text values”. Actually I used your very own map and data source: select July 1980-2009, scroll down to the actual Lat-Long coordinates (71N and 157W) for Pt Barrow for the July 1980-2009 anomaly on 1950-80, which is 0.32, whatever the map shows by including Barrow in the whole of Alaska. That is actually quite big, and it is more than plausible that Barrow, in a maritime location in the Arctic Circle, would have a different trend from Anchorage (c. 61 and 150), with an anomaly of -0.0037°C, despite the map showing the whole of Alaska cooking in July 1980-2009 with anomalies of 1 to 4°C. Admit it, the GISS maps are misleading, produced as they are from necessarily very incomplete data for the last month as at 10 days into the current month.

    Regards, Tim

  2047. manacker Says:

    Robert S

    You are beating a dead dog.

    You cannot deny that there are data out there which show decreased RH with warming, and other data that show roughly constant RH.

    The long-term NOAA/NCEP humidity data are also there, Robert. This database is updated and published regularly. You cannot just “wish it away”. Just how accurate the data are is another question (in all likelihood much more accurate than any data derived indirectly from paleo-climate reconstructions, which are being used to make all sorts of claims).

    The fact that this long-term record shows the short-term relationship between temperature and SH behaving differently from the long-term relationship raises several questions about the long-term versus short-term WV feedback.

    The “hot spot” question is not as simple as you claim, either. You state “despite what the IPCC says, I’m fairly certain that the hotspot is not a fingerprint of GHG warming, but of all warming”, yet IPCC has specifically shown it as related to GH warming (as opposed to warming from solar irradiance, etc.). The lack of a tropical tropospheric “hot spot” is a problem for the hypothesis of tropospheric moistening and WV enhancement of the GH effect, which is difficult for you to simply arm wave away.

    The related fact (M+M) that observed tropospheric temperatures do not agree with those from the climate models is also a problem. This suggests that there is no observed upper-level moistening to maintain constant RH as the models predict.

    You can always find a climate scientist somewhere who will come out with a paper to defend the dogma if it gets attacked by another climate scientist (unfortunately, that’s the way it works in a multi-billion dollar big business like AGW). It then takes some time for the less generously funded scientists who are skeptical of some part of the dogma to come out with a “debunking” of the defending paper, etc.

    And, if you put on your blinders, you can always ignore any uncertainties or data that do not directly support the “mainstream view”.

    But that does not make these data and uncertainties go away, Robert.

    Max

  2048. phntwoo Says:

    Bart

    so let me get this straight:

    While Mr Gore jets around propagating his populist froth, when you enter a discussion with consensus scholars, all the irrefutable doom reduces to:

    “Probably” the signposts within which climate fluctuates will move, IF we double the amount of CO2 again in the next 100 years.

    Right?
    Correct me where the above statement is wrong.
    Thank you.

    Hardly anything worth increasing taxes for.

  2049. manacker Says:

    Robert S

    We have discussed the topic of WV feedback. We agree that the net feedback from WV with warming is likely to be positive, although we do not agree on the magnitude.

    We can both cite papers and studies to support our different views on this, so there is no point discussing this any further.

    You support the IPCC position that this is strongly positive, based on maintaining essentially constant RH with warming, while I am more skeptical of this claim on a long-term basis. You place a lot of credence in a recent Dessler et al. study, which, however, only tells part of the story, as pointed out here by Roy Spencer:
    http://www.drroyspencer.com/2009/02/what-about-the-clouds-andy/

    In addition to differences between “short-term” and “long-term” feedbacks, the “missing link” appears to be the net impact of clouds on both outgoing LW and SW radiation.

    Let’s look at the IPCC AR4 WG1 report regarding cloud feedbacks.

    The 2xCO2 climate sensitivity of CO2 alone (without any feedbacks) as estimated by IPCC (Myhre et al.) is somewhere around 1.0°C.

    IPCC estimates that the WV feedback constitutes the strongest positive feedback at 1.80±0.18 W/m^2K, followed by the closely related (negative) lapse rate feedback of -0.84±0.26 W/m^2K. The net sum of the two is 0.96±0.44 W/m^2K.

    Surface albedo feedback is estimated to be 0.26±0.08 W/m^2K.

    Net cloud feedback is assumed to be 0.69±0.38 W/m^2K.

    With a doubling of CO2 this brings the total temperature impact (°C) to:

    1.9±0.15°C – 2xCO2 plus all feedbacks, except clouds
    1.3±0.55°C – net cloud feedback
    3.2±0.70°C – 2xCO2 with all feedbacks

    In other words, the net feedback from clouds is assumed by all models to be strongly positive, representing 40.6% of the total 2xCO2 temperature impact, but with a fairly wide spread between models.
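
    For anyone who wants to check the arithmetic, here is a sketch of the standard linear feedback algebra, assuming the usual no-feedback (Planck) response of about 3.2 W/m^2K and a 2xCO2 forcing of 3.7 W/m^2 (textbook values, not taken from the table above). Since the IPCC central figures come from model ensembles, this only roughly reproduces them:

        # Rough check of the feedback arithmetic, using the standard relation
        #   dT = F_2x / (lambda_Planck - sum of feedback terms).
        # Assumed: Planck response ~3.2 W/m^2/K and 2xCO2 forcing ~3.7 W/m^2;
        # the feedback terms are the AR4 model-mean values quoted above.
        F_2X = 3.7      # W/m^2, forcing from doubled CO2
        PLANCK = 3.2    # W/m^2/K, no-feedback (Planck) response

        feedbacks = {   # W/m^2/K
            "water vapor": 1.80,
            "lapse rate": -0.84,
            "albedo": 0.26,
            "clouds": 0.69,
        }

        def sensitivity(active):
            """2xCO2 warming (K) with the chosen feedbacks switched on."""
            total = sum(feedbacks[name] for name in active)
            return F_2X / (PLANCK - total)

        print("no feedbacks:      %.2f K" % sensitivity([]))
        print("all except clouds: %.2f K" % sensitivity(["water vapor", "lapse rate", "albedo"]))
        print("all feedbacks:     %.2f K" % sensitivity(feedbacks))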

    As IPCC puts it:

    The large spread in cloud radiative feedbacks leads to the conclusion that differences in cloud response are the primary source of inter-model differences in climate sensitivity. However the contributions of water vapor/lapse rate and surface albedo feedbacks to sensitivity spread are non-negligible, particularly since their impact is reinforced by the mean model cloud feedback being positive and quite strong.

    But how realistic are the assumptions leading to the strongly positive modeled net cloud feedback?

    IPCC tells us (SPM 2007):

    Cloud feedbacks remain the largest source of uncertainty.

    After the publication of AR4, a study by Spencer et al., using physical observations from CERES satellites over the tropics, showed that the net feedback from clouds is strongly negative, rather than positive, as assumed by the climate models.

    Click to access Spencer_07GRL.pdf

    This report concludes:

    Our measured sensitivity of total (SW + LW) cloud radiative forcing to tropospheric temperature is -6.1 W m-2 K-1.

    Over the region measured, this represents an observed strongly negative net feedback from clouds (as compared to the strongly positive net feedback assumed by all the climate models cited by IPCC).

    What impact do these observations have on the model-based 2xCO2 climate sensitivity of 3.2°C, as assumed by IPCC?

    Replacing the impact of a strongly positive net feedback with that of a strongly negative net feedback would reduce the 2xCO2 climate sensitivity to somewhere around 1°C (or around one-third of the IPCC model-based estimate).

    In a later study using ERBE observations of net outgoing SW and LW radiation, Lindzen and Choi come up with an even lower figure (although their study has been challenged on a blog site by Trenberth et al., and an L+C sequel is on the way… “and the beat goes on”).

    The modelers have also taken issue with the IPCC claim of a strongly net positive cloud feedback. One of the main problems (as acknowledged by IPCC) is that models have a hard time dealing with cloud feedbacks. To deal with this problem in the GCMs, the first climate sensitivity tests using superparameterization embedded within a conventional GCM were made.
    ftp://eos.atmos.washington.edu/pub/breth/papers/2006/SPGRL.pdf

    The climate sensitivity of an atmospheric GCM that uses a cloud-resolving model as a convective superparameterization is analyzed by comparing simulations with specified climatological sea surface temperature (SST) and with the SST increased by 2 K.

    The global annual mean changes in shortwave cloud forcing (SWCF) and longwave cloud forcing (LWCF) and net cloud forcing for SP-CAM are -1.94 W m-2, 0.17 W m-2, and -1.77 W m-2, respectively.

    [Note: This represents a net global LW+SW forcing of -0.885 W/m^2K, as compared to the IPCC estimate of +0.69 W/m^2K.]

    The overall climate sensitivity of SP-CAM for the Cess-type perturbation is relatively weak compared to other GCMs, but fairly similar to the climate sensitivity derived from limited duration aqua-planet simulations of the NICAM global CRM.

    This weak sensitivity of SP-CAM is associated with negative net cloud forcing changes in both the tropics and the extra-tropics. In the tropics these are primarily due to increases in low cloud fraction and condensate in regions of significant mean mid-tropospheric subsidence. In the extratropics these are caused by a general increase in cloud fraction in a broad range of heights, and a strong increase of cloud liquid water path in the lower troposphere.

    SP-CAM’s major advantage over conventional GCMs is the ability to resolve cloud motions at a much finer scale, allowing deep convective processes and cloud fraction to be represented more naturally than standard GCM parameterizations allow.

    These latest model studies are in general agreement with the findings of Spencer et al.

    It appears that IPCC’s “largest source of uncertainty” back in early 2007 (i.e. “cloud feedbacks”) has since been cleared up by both enhanced model simulations and actual physical observations.

    Max

  2050. Bart Says:

    phntwoo,

    I thought we were discussing climate science, but I get the feeling that you somehow have taxes in mind when you discuss climate change? However, science is independent of your dislike of taxes.

    This doom talk (whether climate doom or economic doom) is all yours; not mine.

    More GHG will give a warmer climate, all other things being equal. The signposts within which the climate fluctuates will move into warmer directions (as they already have).

  2051. phntwoo Says:

    Dr Bart

    true true true

    when you say “warmer” you mean like: warmer as in more NYT articles and obamania and NOS froth?
    Or do you mean “warmer” as in: has been scientifically observed?

    If you claim the latter: any credible reference there? Certainly your graphs won’t “do”.

    Fluctuating in “warmer” directions is a difficult (read: laughable) scientific criterion to base trillion-dollar investments on.

  2052. Robert S Says:

    I’m sorry, but it appears you don’t get it.

    You cannot deny that there are data out there, which show decreased RH with warming

    Besides the NCEP reanalysis, specifically, what data? Not the M+D 2004 study, as even Lindzen recognizes that D+Z 2008 shows the large WV feedback under the constant RH assumption to be “within reasonable agreement with satellite observations [Dessler et al., 2008], even though the vertical profile of relative humidity is not strictly conserved.” What does Spencer have to say about the study? That Dessler didn’t discuss the clouds? In a humidity study?

    The fact that this long-term record shows that the short-term relationship between temperature and SH appears to behave differently than the long-term relationship…

    As long as we’ve had satellites to check, the NCEP reanalysis has been wrong. Soden 2005 studied 22 years of humidity data from satellites, and found the constant RH to be roughly correct. If you want to believe the NCEP trends were somehow magically correct before satellites, that’s fine, but it’s not at all logical.

    A decreasing q, as NCEP shows, would mean a negative WV feedback.

    “Lindzen and Choi come up with an even lower figure (although their study has been challenged on a blog site…”

    And in a paper

    Click to access Trenberth2010etalGRL.pdf

    It sounds to me like a major error, and in correcting it, Trenberth et al. obtain a climate sensitivity within the IPCC range. But I’ll wait to see what L+C come out with before commenting further.

    And like I said, I’m not as read up on the cloud feedback as I’d like to be. I might take a look at your links a little later, but I’m still skeptical of Spencer’s strong negative cloud feedback hypothesis because a low climate sensitivity cannot explain the paleoclimate (glacial periods, D-O events, etc.). I’ll take a look, though.

  2053. Bart Says:

    phntwoo,

    Say something substantive and on topic, or take it elsewhere. So far you’re just adding noise here, and I’m running out of patience.

  2054. manacker Says:

    Bart and Phinnie

    Is “warmer” better?

    Is “cooler” better?

    Is the “Goldilocks just right” perfect “globally and annually averaged land and sea surface temperature” that of year 1998 (a great year for the right bank Bordeaux appellations, as I recall, and the modern “record-holder”)?

    Or is it that (0.2°C) cooler “GAAALASST” of year 2008?

    Or maybe that even a smidgen (0.6°C) cooler “GAAALASST” of 1968?

    Or the even just barely (0.7°C) cooler year 1928?

    How about the not quite so much (0.5°C) cooler year 1878?

    Gosh, it’s really hard to pick the “Goldilocks just right” perfect “GAAALASST”.

    But, unless we know what the “Goldilocks just right” perfect “GAAALASST” temperature is, we are “shooting in the dark” with any advice to the politicians and policy makers, who really need to know this, so they can act accordingly.

    Maybe we had better admit we don’t have a clue, rather than giving them all false hopes that we really know what’s going on.

    Then comes the dilemma that even if we knew what increased CO2 will cause in the way of temperature change (which we unfortunately really don’t know) we don’t have a clue what natural forcing factors (or natural variability) will do for us over the next 1, 10, 50 or (even less) 100 years.

    A real dilemma!

    Phinniethewoo has got it right. Let’s not base “tax policy” on lack of knowledge (i.e. “ignorance”).

    Let’s not try to fool them that we really know what is going to happen in the future (i.e. “arrogance”).

    Let’s remember what Einstein said about the two.

    Max

  2055. manacker Says:

    Robert S

    Your last post is a rehash of a rehash (as was the first paragraph of my last post to you).

    Forget it.

    The pre-satellite NOAA record is what it is. To write it off as “nonsense” is (in itself) “nonsense”. It gets updated and published regularly.

    The other points I made stand, as you have been unable to refute them.

    But let’s move on to clouds, where you will have an even more difficult time, in view of the IPCC stated “uncertainty” and the subsequent observations.

    Max

  2056. phntwoo Says:

    Well Bart,

    I pointed out some structural, if not populist, DEFECTS in your graphs, to which you did not respond in any scholarly fashion?

    I also pointed out to you the fact that “earth’s temperature” is a Mickey Mouse Measure no scientist should really believe in without a good laugh.
    I did not get any scholarly response to that either??

    As for your last “scientific” answer to me:

    More GHG will give a warmer climate, all other things being equal. The signposts within which the climate fluctuates will move into warmer directions (as they already have).

    This is meaningless drivel if you cannot say:

    - What the signposts are within which your platonic climate is supposed to fluctuate

    - How they move along and what the “steady state” is. Warmists seem to acknowledge there were ice ages; I wonder when they are going to acknowledge that there is more than just ice ages and that we just do not know enough about it yet.

  2057. Robert S Says:

    “The pre-satellite NOAA record is what it is. To write it off as ‘nonsense’ is (in itself) ‘nonsense’.”

    I’m not saying it is “nonsense”, but I’m almost certain it’s wrong. For as long as we’ve had satellites to verify humidity trends, the NCEP reanalysis has been wrong. I’m not talking a little wrong — the sign of the trend for q data is opposite satellite measurements (i.e. satellites measure a large increase in q). That is roughly 30 years that the NCEP humidity data is wrong. To say that the pre-satellite humidity trends are correct, despite the many changes in instrumentation, despite the changes in spatial and temporal resolution (but still almost no ocean coverage), despite the known problems with NCEP model bias, and despite that it has been wrong throughout the satellite era…well, it’s ludicrous.

    Lindzen seems to be convinced of the strong positive WV feedback (consistent with constant RH) given the results of Dessler et al. 2008. Not you, though. You’re holding on to the only dataset which shows a negative trend in q (and thus, a negative WV feedback).

    You’re also holding on to the supposed rebuttal of Santer et al., despite the two Sherwood papers and Haimberger et al., which do show a hot spot. But having looked at the AR4 figures on the hotspot again, I see you’ve misunderstood them — figure 9 shows the simulated vertical temperature response to the observed changes in forcings over the past century. The observed change in solar forcing over that period, and especially over the last 50 years or so, has been small, and so the resulting hotspot from solar heating is small. BUT, chapter 9.2.2 does reference Cubasch et al. 1997, which shows that a larger change in solar forcing would result in a hotspot similar to the one predicted from GHGs (figure 5 from Cubasch).

    As for clouds, again, not as informed as I’d like to be. Though, having read Spencer 2007 and 2009, along with Lin et al. 2010, I don’t think Spencer has shown that the 6.1 Wm-2/K is correct — In deriving this value, Spencer assumes that the climate system exhibits no memory. Of course, the climate system does in fact exhibit strong memory (OHC, GMT, etc), with GMT autocorrelation remaining significant at lags up to 8 years. Lin et al. show that the linear striations in the climatic phase space from Spencer 09 may not accurately represent the true radiative feedbacks, and in a system with memory (like the climate), the actual feedback signal would be undetectable in short term observations (of TOA imbalance vs. surface temps as Spencer uses).
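
    The “memory” point is easy to illustrate with a toy series. A sketch using a synthetic AR(1) process (the persistence value of 0.9 is arbitrary, chosen only for illustration):

        # Toy illustration of "memory": an AR(1) series with strong persistence
        # stays autocorrelated at long lags, which is what makes short-window
        # regressions of radiative flux on temperature so treacherous.
        import numpy as np

        rng = np.random.default_rng(0)
        n, phi = 2000, 0.9                 # arbitrary length and persistence
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.standard_normal()

        def acf(series, lag):
            """Lag-k autocorrelation of a 1-D array."""
            s = series - series.mean()
            return np.dot(s[:-lag], s[lag:]) / np.dot(s, s)

        for lag in (1, 2, 5, 10, 20):
            print("lag %2d: autocorrelation = %+.2f" % (lag, acf(x, lag)))
        # The values decay roughly as phi**lag, i.e. slowly: the system "remembers".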

    In fact, in his most recent paper (April 2010), Spencer notes in the conclusion:

    …even if the [estimated 6.1 Wm-2/K] do[es] represent feedbacks operating on intraseasonal to interannual time scales it is not obvious how they relate to long-term climate sensitivity.

  2058. manacker Says:

    Robert S

    Yeah. An observed negative 6.1 W/m^2K net cloud feedback (over the tropics) is a lot larger than a global negative 0.885 W/m^2K from GCM superparameterization.

    Tropics only cover around 1/3 of globe to start off with.

    But, the point is, even if Spencer’s 6.1 is high by a factor of 8 or 10 globally, it still cancels out, and offsets in the other direction, the strongly positive net global feedback of 0.69 W/m^2K assumed by the IPCC climate models (without the benefit of superparameterization).

    The tropics are also where M+D found that RH decreased sharply with increased SST (but that is another topic, which we have closed off).

    Max

    BTW, if you are so happy to accept (your take on) Lindzen’s views on the long-term RH response to SST, why do you have a problem with his findings on total outgoing LW+SW radiation from ERBE observations? Is this a case of “cherry-picking”?

  2059. manacker Says:

    Robert S

    You mentioned a RealClimate blog by Fasullo et al., which was intended to debunk Lindzen and Choi (2009), so I decided to check this out more closely.

    Link to attempted debunking of Lindzen and Choi 2009
    RealClimate
    “Lindzen and Choi Unraveled”
    Guest Commentary by John Fasullo, Kevin Trenberth and Chris O’Dell

    http://www.realclimate.org/index.php/archives/2010/01/lindzen-and-choi-unraveled/comment-page-2/#comments

    “In an article in press (Trenberth et al. 2010 (sub. requ.), hereafter TFOW), we show that LC09 is gravely flawed and its results are wrong on multiple fronts.”

    The most damaging claim by Trenberth et al. is:

    The result one obtains in estimating the feedback by this method turns out to be heavily dependent on the endpoints chosen. [edit] In TFOW we show that the apparent relationship is reduced to zero if one chooses to displace the endpoints selected in LC09 by a month or less.

    So with this method the perceived feedback can be whatever one wishes it to be, and the result obtained by LC09 is actually very unlikely. This is not then really indicative of a robust cloud feedback.

    This claim is effectively refuted by a blogger named H. Tuuri:

    Chris, I have now analyzed the monthly tropical (20 S – 20 N) data at http://earth-www.larc.nasa.gov/erbeweb/Edition3_Rev1/
    I imported the data to a free statistics program called OpenStat. Since I was not able to find a numeric table of the Reynolds and Smith OISST v2 product, I ‘digitized’ by hand the graph on page 8 of http://science.larc.nasa.gov/ceres/STM/2009-11/22_Wong_1109.pdf

    I used OpenStat to compute moving averages over 12 months (I used option ANALYSES -> Autocorrelation in OpenStat). With shorter moving averages, the NET flux graph contains too much noise, though the delta-SST graph is smooth also with shorter moving averages.

    I analyzed the 1986-1990 sequence of El Nino/La Nina, as well as the 1997-1999 massive El Nino.
    1) For the 1986-1990 event, the variation in the 12 month moving average of the NET flux is from 345 W/m^2 to 347 W/m^2. The variation of the 12 month moving average of delta-SST is -0.18 K to 0.16 K. We get:
    delta-NET flux / delta-SST = 2 W/m^2 per 0.34 K = 6 W/m^2 per K.

    2) For the 1997-1999 event, the 12 month moving average of the NET flux at the start of 1997 is 344 W/m^2. It rises to a local maximum of 345 W/m^2, and later spikes at 346 W/m^2. That is, even the 12 month smoothed Net flux is not smooth at all, but has a double maximum. The 12 month moving average of delta-SST rises from -0.03 K to +0.22 K. We get:
    delta-NET flux / delta-SST = 1 W/m^2 OR 2 W/m^2 (depending on which of the double maxima we choose)
    per 0.25 K = 4 W/m^2 OR 8 W/m^2 per K.

    Conclusion: we get results that agree with Lindzen and Choi in LC09. We had to use a very long moving average of 12 months to smooth the NET flux enough for our graphical analysis.

    I used OpenStat to compute also the regression between delta-SST and the transfer of heat from the Tropics to higher latitudes. The correlation coefficient is -0.005 and the slope is -0.32. That is, there is essentially no correlation. An El Nino event in the Tropics does not cause the transfer of heat to increase. That is, the extra heat from El Nino is handled locally in the Tropics.

    I will next do further research on the NET flux from 60 S to 60 N, and its correlation to HadCRUT3 monthly global temperature anomalies.

    I will check if I can repeat the results of Forster and Gregory: http://homepages.see.leeds.ac.uk/~earpmf/papers/ForsterandGregory2006.pdf
    The data and the programs that I used are available by email from me, if someone wants to repeat my calculations.
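
    For what it’s worth, the smoothing step Tuuri describes is easy to reproduce. A minimal sketch, where the input files are placeholders for the digitized ERBE and OISST series (so treat the output as illustrative only):

        # Reproduce the smoothing step described above: a 12-month moving
        # average of the ERBE NET flux and of delta-SST, then the ratio of
        # their swings over the chosen ENSO window.
        # Both input files are placeholders for the digitized monthly series.
        import numpy as np

        def moving_average(x, window=12):
            """Moving average; the result is shorter than x by window-1 points."""
            return np.convolve(x, np.ones(window) / window, mode="valid")

        net_flux = np.loadtxt("erbe_net_tropics.txt")  # hypothetical file, W/m^2
        delta_sst = np.loadtxt("oisst_tropics.txt")    # hypothetical file, K

        net_s = moving_average(net_flux)
        sst_s = moving_average(delta_sst)

        # Feedback estimate as in the comment: swing in smoothed NET flux
        # divided by swing in smoothed delta-SST over the same window.
        ratio = (net_s.max() - net_s.min()) / (sst_s.max() - sst_s.min())
        print("delta-NET / delta-SST ~ %.1f W/m^2 per K" % ratio)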

    Since L+C are now preparing an up-date of their 2009 study, we’ll have to wait to see what the outcome is.

    At least the critique by Trenberth et al. that “the apparent relationship is reduced to zero if one chooses to displace the endpoints selected in LC09 by a month or less” appears to be wrong.

    Max

  2060. phntwoo Says:

    Bart,

    Correct me if I am wrong as a non scientific taxpaying member of the public:

    Reading through this thread I get the impression that the warmists trying to rebut VS all the time are very poorly schooled in statistics?

    You implied several times that “climate scientists” are busy with physical laws instead.
    Well no: that is one severe misallocation of our tax money then!
    Climate science as practised should be 90% statistics, buddy.

    All you (should) do is observe and collect data, interpret it, and make predictions based on it. I call that statistics.

    The physical laws are known to all, and from a bachelor’s introductory course at that. We all know Newton and have seen Navier-Stokes once.

    And we all know YOU, and your clergy, cannot possibly solve these equations.

    So you should stick to stats. Correct stats.

    Or at least go buy and try to read “Stats for Dummies”.
    I am sure the library in Gouda has a copy; it cannot all be about making cheese there.
    Then come back and restart the turgid blogs like Tamino, RC, and BV.

    Cheers.
    I hope I have expressed my scientific concerns well?

  2061. Bart Says:

    phntwoo,

    I have repeatedly said that my knowledge of statistics is not particularly deep. I don’t know about others. There were quite a few commenters who exhibited a decent understanding of the stats, from all different shades of grey related to the ‘skeptic’-‘warmist’ scale.

    I think you’re entirely wrong that climate science should be 90% statistics. Collecting and interpreting physical data involves mostly physics, and often uses statistics as a tool; physics based modeling is also used as a tool; or pen and paper; or other. Simulating the effects of physical relationships (“solving these equations”) is mostly done by climate models; *not* statistics.

    That said, there surely is room to have more statistics expertise contributing to climate science.

  2062. Pofarmer Says:

    Frankly Bart, you’re way off base. Having done research on physical and mechanical systems myself, the statistics was always the greater part. You can do all the data collecting and physics that you want, but in the end it’s the statistics that prove or disprove what you’re trying to do, so it’s important to get them correct. And furthermore, if the climate models are using the same physical relationships, based on the same faulty assumptions, then they too are nothing but GIGO. The climate science community (whatever that is) should NOT be scared of looking at the data with new statistical tools. At worst it’s a dead end; at best, you just might learn something useful.

  2063. Frank Says:

    Max,

    Your (and Robert’s) posts on this thread have been very enlightening. While I am well aware of the large gap between AGW theory and demonstrable science, it’s been helpful to see the source materials referenced (and discussed) within the confines of a single blog thread. Hats off to Bart, as well – you are a gentleman and a good host.

    Regards – F.

  2064. manacker Says:

    Frank

    Thanks for your post.

    As you can see from the exchange with Robert S there are data out there demonstrating that the strongly positive feedbacks assumed by the IPCC model simulations are grossly exaggerated and then there are studies defending these values.

    AGW is a multi-billion dollar big business, so it is easy to find funding for any studies that support the premise that AGW, principally caused by human CO2 emissions, has been the primary cause of past warming and could represent a serious potential threat.

    Yet there are studies out there, which show that the assumptions supporting this premise are false.

    These should be taken seriously, even though they may be in a minority, partially due to much more limited taxpayer funding for such studies. AGW proponents have a hard time accepting them, however, as you can see. The accuracy or relevance of measured data are questioned, the methodology used is challenged, etc. Other studies are cited, which show different results, as if these would automatically cancel out the studies putting the AGW premise in question. “My three studies are better than your single study” arguments pop up.

    The key to the above stated AGW premise is the postulation of strongly positive feedbacks, primarily from water vapor and clouds, which multiply the warming expected from a doubling of CO2 from barely 1C as specified by the greenhouse theory to over 3C.

    Without these postulated strongly positive feedbacks, which are based on computer model simulations, AGW represents no threat (and the multibillion dollar AGW business can be shut down).

    Recent physical observations on water vapor and (even more importantly) on clouds tell us that the feedbacks (and the resulting 2xCO2 temperature impact) assumed by the model simulations are very likely to be exaggerated, and that the 2xCO2 impact is likely to be around 1C (of which we have already seen about half).

    This is basically what the discussion between Robert S and myself is all about.

    Max

  2065. Robert S Says:

    “You mentioned a RealClimate blog by Fasullo et al.,”

    I never mentioned any blog post — I referenced the actual paper. You were the one who brought up RealClimate. Anyhow, there are actually many problems with LC09 highlighted in Trenberth et al., beyond the ‘claim effectively refuted’ by H. Tuuri (who?), and there are several critiques of the paper in the blogosphere, but I prefer we stay within the confines of the literature if you don’t mind (else we end up quoting blog comments and such).

    “But, the point is, even if Spencer’s 6.1 is high by a factor of 8 or 10 globally…”

    The point isn’t that it may or may not be high — it may accurately represent the short-term relationship between TOA net radiation and surface temperatures. What Lin et al. effectively show is that this relationship does not accurately represent the true feedback signal, and thus, we cannot garner any useful information about climate sensitivity from it.

    I really don’t think there is any robust observational evidence that points to a strong negative feedback, as far as I’m aware. In terms of cloud modeling, a good summary on the topic is Chapter 8.6.3.2 in AR4.

    “BTW if you are so happy to accept (your take on) Lindzen’s views on long-term RH response to SST, why do you have problem with his findings on total outgoing LW+SW radiation from ERBE observations? Is this a case of ‘cherry-picking’?”

    …No. My position on a topic isn’t dependent on Lindzen’s approval, but rather on the evidence. Evidence points to a strong positive WV feedback. For the longest time Lindzen was unconvinced of a strong WV feedback, and the fact that he accepts it now should be telling of the strength of the evidence.

  2066. Robert S Says:

    Just for reference, the Trenberth et al. (2010) paper I cite above is here

    Click to access Trenberth2010etalGRL.pdf

  2067. DLM Says:

    Max says: “AGW is a multi-billion dollar big business, so it is easy to find funding for any studies that support the premise that AGW, principally caused by human CO2 emissions, has been the primary cause of past warming and could represent a serious potential threat.”

    That is irrefutable. Let them try. And furthermore, the multitude of papers that have been produced by the AGW peer review industry (according to Nobel laureate Dr. Phil Jones) are pretty much accepted at face value, as long as the alleged physics line up with the alleged ‘settled science’. Dr Phil has said that none of the peer reviewers asked him to show his work, and that this stunning lack of curiosity is standard practice in the climate science business. But if a paper is heretical, it is in for a very rough ride. They will scrutinize, and demonize it to death.

    This blog discussion is a microcosm of the larger and incongruously labeled ‘settled debate’. They are trying to obfuscate, frustrate, and talk you to death Max. It has at long last reached the point, of only being slightly amusing. You are wasting your time and energy on this willfully dense crowd, Max. They are impervious to the inconvenient facts.

  2068. manacker Says:

    Robert S

    The Fasullo, Trenberth et al. blog refers to the Trenberth et al. paper, which I have seen. It does not shed much new light on the discussion, unfortunately, but is rather a typical “knee-jerk” reflex to “defend the paradigm”, which we see all too often out there when someone challenges it.

    The main point of contention (the unsubstantiated claim that one could move the data series of LC2009 by “one month” and get totally different results) was shot down pretty effectively, as I pointed out, so can be discarded.

    We’ll now have to wait for the sequel to LC2009. Lindzen obviously knows his business, so I would expect something substantial.

    But the pro-AGW crowd will undoubtedly scramble to try to discredit anything that raises questions concerning the paradigm, as they have consistently in the past. Yawn!

    The Spencer observations show a very strongly negative net feedback from clouds over a few-year period over the tropics.

    The superparameterization model studies showed the same on a global basis, directly contradicting the assumptions of the more primitive IPCC models (without benefit of superparameterization and with the IPCC admission that “cloud feedbacks remain the largest source of uncertainty”).

    So let’s rejoice, Robert! We actually have two reasons to do so: a) this largest source of IPCC uncertainty has been cleared up and b) we don’t have to worry about frying to death from a doubling of CO2 sometime around 2100 (compared to the pre-industrial year 1750). This is truly GOOD NEWS!

    You wrote:

    I really don’t think there is any robust observational evidence that points to a strong negative feedback, as far as I’m aware. In terms of cloud modeling, a good summary on the topic is Chapter 8.6.3.2 in AR4.

    WHAT???

    You have just seen a study showing physical observations, which point to a strongly negative feedback, as well as model studies, using an improvement called superparameterization, which show the same, yet you “don’t think there is any robust observational evidence that points to a strong negative feedback, as far as you are aware”.

    Duh! Open your eyes (and your brain, while you are at it, Robert).

    Then you bring out all the old obsolete AR4 Ch.8 model stuff (with no superparameterization). This has obviously been outdated by the model study I cited.

    Wake up, Robert! IPCC AR4 is based on 4-year-old data, which has been superseded.

    Max

  2069. Frank White Says:

    Robert Engle and Clive Granger won a Nobel prize for showing that the statistical methods used by a couple of generations of scientists in several fields (including ours) were based on models that give spurious results with the kind of data we find in nature.

    As I approach age 80, I look forward to finding new ways to learn more about the Earth. So if I am willing to spend the next 5 years retooling in order to test whether certain climate correlations are spurious or not, what is your excuse for not doing so?

    We must retool. For most of the readers that may not be a big problem. We may have to learn R, a programming language for statistical analysis. (S-Plus if funds permit.) We may have to learn new statistical concepts. [Try Engle, R. F. and Granger, C. W. J. (eds): 1991, Long-Run Economic Relationships. Readings in Cointegration, Oxford University Press, Oxford.] (The same tools apply to climate data.)

    The SOI is an interesting metric. But who has tested the raw barometric data to determine if the construction of the index is consistent with the properties of the data? Maybe the data loses or gains something in the process of calculation. I just don’t know. But experience with similar data tells me that this is a question worth exploring.
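
    For a minimal numerical illustration of the spurious-correlation problem (a toy Python sketch, not Engle and Granger’s cointegration test itself): two independent random walks, series with no relation to each other, will typically show a correlation far from zero.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = np.cumsum(rng.normal(size=n))   # a random walk
    y = np.cumsum(rng.normal(size=n))   # an independent random walk
    r = np.corrcoef(x, y)[0, 1]
    print(f"correlation of two unrelated random walks: {r:.2f}")   # often large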

  2070. Frank Says:

    DLM,

    Your points to Max re. the intransigence of well-funded / highly motivated AGW believers are correct, but the bigger picture is that the scientific debate is now taking place on the very foundations of the so-called ‘evidence’. Given what has been ‘invested’ to date, and the amount of plunder potentially available to rent-seekers and socialists (of all stripes), no one should expect anything less than an uphill fight going forward. Maybe in the end, science and reason lose to power and emotion, but based on political developments in Australia, Germany and probably the UK, there seems to be an awakening among the people at large that AGW is not the ‘consensus’ it was sold as.

    Regards – F.

  2071. Robert S Says:

    I’m fairly certain that LC09 contained major errors, but I’ll wait to see what they come out with next.

    “The Spencer observations show a very strongly negative net feedback from clouds over a few-year period over the tropics.”

    And yet, even Spencer is unsure of how these feedbacks are related to long term climate sensitivity. Lin et al. show that these measurements probably do not represent the true feedback signal.

    “The superparameterization model studies showed the same on a global basis”

    Getting the same sign, but completely different magnitudes does not constitute validation. But you seem to have much confidence in superparameterizations. Do the authors of the study you cite?

    Bony and DuFresne (2005) point out that the high degree of dependence of simulated climate feedback on low cloud response is a common feature of climate models, and conventional models diverge greatly in their low cloud responses. Clearly the representation of ubiquitous low clouds and small scale convection is still a weak point with current vertical resolution of SP-CAM, and low clouds produce a dominant part of the net global cloud forcing change predicted by the model. Thus the overall climate sensitivities produced by the model must be regarded with caution. A next step with SP-CAM is to couple it to a slab-ocean model so that cloud responses in more realistic climate change scenarios can be evaluated.

    This study was published before AR4, and there have been plenty of articles on cloud-modeling since. Are superparameterizations better? In some respects, yes. Superparameterizations are computationally intensive and expensive, and like other cloud resolving models of its kind (e.g. DARE and full-global CRMs) are still in their infancy.

    Like I said, I don’t think there is any robust observational evidence of a strong negative cloud feedback. In fact, evidence either way is pretty sparse. Clement 09 did find observational evidence for a positive cloud feedback in the NE Pacific on decadal timescales, and the only model that could reproduce this effect projects a positive cloud feedback over the entire Pacific. But the authors note that their approach is not ideal, and I’m not entirely sure how this relates to the global cloud-feedback.

    In any case, I still don’t see how an insensitive climate system can explain things like the Dansgaard-Oeschger events, and transitions to and from glacials. This doubt has nothing to do with “conflicting ideology”, or “believing what I want to believe”; it represents what I see as a very large hurdle that a low sensitivity has to overcome.

  2072. manacker Says:

    Robert S

    Superparameterization showed a net global strongly negative cloud feedback (a bit stronger with a minus sign than the positive feedback assumed by the more primitive climate models cited by IPCC).

    This points to a 2xCO2 climate sensitivity of around 1C (instead of 3+C, as assumed by IPCC).

    Spencer et al. have shown a very strong net negative cloud feedback over the tropics. Just how strong this negative feedback will be on a longer-term basis over the entire globe is open, but there is no question about the sign, regardless of the Lin “climate has a memory” rationalization. Forget that one, Robert, it is not credible.

    These observations confirm the superparameterization results in suggesting a 2xCO2 impact of less than 1C.

    LC09 does not have “major errors”, as you claim. The Trenberth paper itself contained a major error (the claim that the LC09 result would change with a change of less than a month in the time frame). As was pointed out, this claim was false. Whether or not the other claims were also false or fabricated is really hard to tell, but I would not place too much credence on this paper. It was a desperate “defensive move”.

    Lindzen will come out with some more data soon, and I’m sure that one of the “insiders” (Trenberth, Tamino, Schmidt?) will again try to discredit it, but no one will truly believe this except a few die-hards like yourself that already have their minds made up.

    The paleo-climate stuff you mention as “evidence of a high climate sensitivity” is all so dicey that one can prove almost anything one wants to with it. Just look at the Mann hockey stick fiasco. Or the PETM junk that has come out, that no one can agree on.

    Forget that kind of junk science, Robert, and stick with current physical observations. These point toward a much lower climate sensitivity than that assumed by IPCC.

    And they confirm that the warming forecasts of IPCC are totally off the wall and not credible.

    All you really have to do, Robert, is open your eyes (and your mind). Take your blinders off.

    Max

  2073. manacker Says:

    Robert S

    One further point.

    You just wrote:

    Like I said, I don’t think there is any robust observational evidence of a strong negative cloud feedback. In fact, evidence either way is pretty sparse.

    I don’t think there is any robust observational evidence of a strong positive cloud feedback, either, yet all the models cited by IPCC assumed a strong positive feedback despite this.

    Now we have (post AR4) observational evidence of a strong negative cloud feedback, which you claim is not robust.

    A rather empty claim, Robert.

    Sorry, Robert, the fact remains that IPCC assumed something that was not supported by physical observations (robust or not) and has since been refuted by physical observations.

    Max

  2074. manacker Says:

    Frank and DLM

    Thanks for your remarks.

    Yes. There is no doubt that the “tables are turning” on the AGW “scientific consensus crowd”.

    As recently as three years ago, they enjoyed great respect among the public in general. Most politicians loved them, and Nobel Peace Prizes and Oscars were being handed out, etc. AGW was “PC”, money was flowing and it was a wonderful time.

    But something bad happened.

    It was not only “Climategate”, because the public mistrust had actually started several months earlier.

    I believe that the “trigger” occurred in February 2007, when the much-ballyhooed AR4 “Summary for Policymakers” report was published.

    Even though the mainstream media jumped on it with glee, the first cracks were beginning to show. First analyses of the report showed that it had glaring errors based on sloppy science and gross exaggerations (most of these critiques remained in the blogosphere at first).

    James E. Hansen did not do the AGW movement any good shortly thereafter, when he gave his rather hysterical “tipping point” testimony to US Congress.

    Both SPM 2007 and Hansen’s proclamations were just a bit too exaggerated, too hyperbolic. Al Gore’s film also did not help, as people soon saw that it was riddled with exaggerations and outright errors.

    And the “mainstream” scientists themselves began behaving a bit too arrogantly, dismissing critique by non-climatologists as irrelevant, etc.

    The number of skeptics among both scientists and non-scientists began to grow exponentially as the warning cries of impending disaster by the AGW crowd became shriller and less believable.

    The fact that it has not warmed after 2000 despite record CO2 increase has not helped the credibility of the AGW cause. The fact that many AGW-supporters denied the observed cooling at first only made things worse.

    Following all the latest revelations of sloppy science, collusion among an elite group of insiders to block out any critique, manipulation and outright fabrication of data, gross exaggerations of risks, etc., the confidence and respect which “mainstream climate science” enjoyed as little as 3 years ago among the general public has been lost.

    And it is lost forever, even though some climatologists have not realized it yet.

    But that is the way it goes with all passing fads, and this one is no different despite the fact that it had become a multi-billion dollar big business.

    As they say: the bigger they are, the harder they fall.

    To put it into vernacular: the wheels have come off of the AGW gravy train and it has ended up in the ditch.

    And, unfortunately, respectable science has also been dealt a blow (but I am convinced it can and will recover, by distancing itself from the AGW “mainstream” crowd that are responsible for this fiasco).

    Max

  2075. Robert S Says:

    Is this bizarro world?

    “Spencer et al. have shown a very strong net negative cloud feedback over the tropics. Just how strong this negative feedback will be on a longer-term basis over the entire globe is open, but there is no question about the sign, regardless of the Lin ‘climate has a memory’ rationalization. Forget that one, Robert, it is not credible.”

    Lin et al. don’t say the ‘climate has a memory’, but rather, the climate exhibits memory, in the statistical sense. And because of this, Spencer’s assumption of a climate system with no memory leads one to erroneously conclude that these short term relationships accurately capture the true feedback signal. Hence, neither the sign nor the magnitude can necessarily be deduced.
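
    To see what “memory in the statistical sense” means, here is a toy AR(1) sketch in Python (the persistence parameter phi is chosen arbitrarily for illustration): each value partly remembers the previous one, so neighboring points are strongly correlated.

    import numpy as np

    rng = np.random.default_rng(1)
    phi, n = 0.9, 2000                        # phi: hypothetical persistence
    t = np.zeros(n)
    for i in range(1, n):
        t[i] = phi * t[i - 1] + rng.normal()  # each value remembers the last
    lag1 = np.corrcoef(t[:-1], t[1:])[0, 1]
    print(f"lag-1 autocorrelation: {lag1:.2f} (close to phi = {phi})")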

    And your faith in the results of superparameterizations is puzzling. As highlighted in the textbook ‘The Global Circulation of the Atmosphere’ (2007):

    CRM-based global simulation strategies such as superparameterization, DARE, and even full global CRMs are becoming more affordable, and already show promise for such simulation challenges as the diurnal cycle of continental convection and the MJO. Current CRM microphysical parameterizations need considerable further refinement to skillfully predict the radiative properties and feedbacks of the full global suite of cloud systems.

    Wyant himself, the author of the study you cite, cautions that superparameterizations still don’t accurately represent processes related to low cloud cover, a major factor in the cloud feedback. Superparameterizations will add to the sum of knowledge regarding how clouds change in a warming world, but for now, their study is still in its infancy.

    “LC09 does not have ‘major errors’, as you claim.”

    Yes, it does. If you want to take a random blog comment on the accuracy of one particular problem with LC09 as evidence that all other problems with the paper are inaccurate, be my guest. This won’t win you any arguments, however. Even Spencer recognizes that LC09 has problems
    http://www.drroyspencer.com/2009/11/some-comments-on-the-lindzen-and-choi-2009-feedback-study/

    It is not clear to me just what the Lindzen and Choi results mean in the context of long-term feedbacks (and thus climate sensitivity). I’ve been sitting on the above analysis for weeks since (1) I am not completely comfortable with their averaging of the satellite data, (2) I get such different results for feedback parameters than they got; and (3) it is not clear whether their analysis of AMIP model output really does relate to feedbacks in those models, especially since my analysis (as yet unpublished) of the more realistic CMIP models gives very different results.

    and

    …I predict that Lindzen and Choi will eventually be challenged by other researchers who will do their own analysis of the ERBE data, possibly like that I have outlined above, and then publish conclusions that are quite divergent from the authors’ conclusions.

    In any event, I don’t think the question of exactly what feedbacks are exhibited by the ERBE satellite is anywhere close to being settled.

    Spencer, someone who agrees with Lindzen on a low climate sensitivity, says their analysis is no good. But again, I’ll wait to see what Lindzen comes out with next before completely writing off the ideas in LC09.

    “A rather empty claim, Robert.”

    I’ve already explained exactly why I believe this. Trenberth et al. (2010) highlight several problems with LC09, and Lin et al. (2010) show problems with Spencer’s methodology. I would love to agree with you that all signs point to a low climate sensitivity due to clouds, but so far, the jury is still out, especially given papers like Clement 09 (but even without).

  2076. DLM Says:

    Max,

    Again, you are right on. Your description of the events that have led to the current state of the climate dogma’s credibility (circling the drain) is very well done. It wasn’t only Climategate, but I would suggest that Climategate was the ‘tipping point’. Thank you Phil Jones, and friends. He may someday receive another Nobel Prize for undoing what he had done in the first place.

    Robert is asking if this is bizarro world. May I suggest that you let Bart help him with that. He can’t hear you.

  2077. Robert S Says:

    “Superparameterization showed a net global strongly negative cloud feedback (a bit stronger with a minus sign than the positive feedback assumed by the more primitive climate models cited by IPCC).

    This points to a 2xCO2 climate sensitivity of around 1C (instead of 3+C, as assumed by IPCC).”

    You misunderstand the paper. Wyant et al. clearly state that their estimate for the climate sensitivity parameter is around 0.41 K/Wm-2. Given a radiative forcing for 2xCO2 around 3.7 Wm-2, this would imply a climate sensitivity of around 1.5 K. Anyway, aside from the fact that the study of superparameterizations is still in its infancy, in later papers Wyant argues that SP-CAM exaggerates the strength of the negative cloud feedback.
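
    (For concreteness, the arithmetic behind that figure as a short Python check, with the values quoted above:)

    lam = 0.41       # Wyant et al. sensitivity parameter, K per W m-2 (as quoted)
    f_2xco2 = 3.7    # canonical radiative forcing for doubled CO2, W m-2
    print(f"implied 2xCO2 sensitivity: {lam * f_2xco2:.1f} K")   # ~1.5 K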

    Interestingly enough with regards to LC09, aside from the errors discussed above, there was a paper by Forster & Gregory published in 2006 that also analyzed the ERBE data and came to the exact opposite conclusion — a positive feedback factor of around 2.3 Wm-2/K, implying a climate sensitivity higher than the IPCC. LC09 doesn’t even mention this paper (for whatever reason) to explain why their results differ so greatly.

    “The paleo-climate stuff you mention as “evidence of a high climate sensitivity” is all so dicey that one can prove almost anything one wants to with it.”

    Now the ice ages didn’t happen? Absolutely no information can be garnered from the paleoclimate?

  2078. Robert S Says:

    DLM
    “He can’t hear you.”
    And Max
    “All you really have to do, Robert, is open your eyes (and your mind). Take your blinders off.”

    This type of BS has got to stop. Max hasn’t budged an inch on a single issue either. But it’s easy to tell which one of us is close-minded — I haven’t editorialized on the motivations of the likes of Spencer or Lindzen, opined on the ‘dogma’ of so-called skeptics, and never once discussed policy implications or whether any warming is good/bad or any level of potential threat. Max has, and it’s pretty clear that he can’t accept AGW, regardless of the science, because those scientists who agree with the consensus are just in it for the research grants, or to be liked, or what have you, and they only publish rebuttals to skeptical research to uphold the status quo (couldn’t possibly be because there is actually something wrong with the study), and all this doesn’t matter anyway because no one knows the perfect temperature, but even if we did, we still don’t know how CO2 affects temps, and even if that weren’t the case, we still don’t know what the system will do down the road, so any efforts to curb emissions would be pointless. BUT, now that we skeptics got some published papers (however flawed or weak in evidence) that show we have nothing to worry about, the AGW house of cards should come crashing down at any moment now, and anyone who doesn’t accept these papers as the gospel truth is a fool that can’t come to grips with reality.

    Climate science will recover eventually, but only if scientists convert to the house of Lindzen (peace be upon him) now.

    I see there’s really no point in continuing this conversation.

  2079. manacker Says:

    Robert S

    You wrote:

    those scientists who agree with the consensus are just in it for the research grants, or to be liked, or what have you, and they only publish rebuttals to skeptical research to uphold the status quo (couldn’t possibly be because there is actually something wrong with the study), and all this doesn’t matter anyway because no one knows the perfect temperature, but even if we did, we still don’t know how CO2 affects temps, and even if that weren’t the case, we still don’t know what the system will do down the road, so any efforts to curb emissions would be pointless.

    You have exaggerated a bit here, Robert, but (other than that) it’s a pretty good summary.

    Scientists are probably motivated by more than just research grants (although these have been in the billions and have certainly been a very important incentive for young scientists to enter the “climatology” field).

    Rebuttals to skeptical reports are published very quickly after the skeptical report appears, so your presumption probably makes sense. “Defending the paradigm” is probably a better description of the motive, however.

    You are absolutely right that no one knows the “perfect temperature”, or whether this may be 1 or 2C higher than today’s temperature. History tells us that “warmer is generally better than colder”, so it is very unlikely that the “perfect temperature” is colder than today’s, but it could well be a bit warmer.

    To your statement “we still don’t know how CO2 affects temps, and even if that weren’t the case, we still don’t know what the system will do down the road”, you are partly right. We think we know, based on GH theory, how CO2 theoretically acts as a GH gas, although we cannot tie past observed temperature changes to CO2 changes, because there are too many statistical uncertainties (as VS points out here). But you are absolutely right that we do not know what the system will do down the road. The dismal failure of the models to predict the cooling since 2000 is a good example of this lack of knowledge.

    And, yes, efforts to curb CO2 emissions would, indeed, be pointless for several reasons: a) there is no indication that China, India and other developing nations are going to derail their economic growth because of a “rich man’s theoretical problem” cooked up largely by EU politicians acting on questionable “science”; b) there have been no specific actionable mitigation proposals with a direct cost/benefit analysis (only political statements of cutting CO2 to X% below the level of year Y, with no specifics on how to get there or what it will change in the climate, except for a direct or indirect carbon tax, which everyone knows can have zero impact on our climate).

    But let’s get back to the subject of cloud feedbacks (IPCC’s “largest source of uncertainty” back in early 2007), this time fortunately with some new data based on actual physical observations, confirmed by an improved modelling technique called superparameterization.

    Back in 2006, this was still assumed (by the models cited by IPCC, at least) to be strongly positive, resulting in 1.3C of the assumed 3.2C climate sensitivity (2xCO2). [Others, such as Ramanathan, had doubts even then that net cloud feedback was positive, and lamented the lack of actual physical observations to check the model assumptions.]

    Now we have actual physical observations, which show us that it is most likely strongly negative (about the same magnitude, but with the opposite sign), which suggests that the 2xCO2 climate sensitivity is somewhere around 1C (as it is with no feedbacks).

    This suggests that the negative feedbacks from lapse rate and clouds essentially cancel out the positive feedbacks from water vapor and surface albedo, which is really good news for us all, because it tells us that the IPCC temperature projections for 2100 are grossly exaggerated.

    Why do you not rejoice, Robert?

    Max

  2080. manacker Says:

    Robert S

    You imply that I misunderstood the superparameterization paper I cited by Wyant et al.

    The statements below, copied from the report, are hard to misunderstand.

    The climate sensitivity of an atmospheric GCM that uses a cloud-resolving model as a convective superparameterization is analyzed by comparing simulations with specified climatological sea surface temperature (SST) and with the SST increased by 2 K.

    And

    The global annual mean changes in shortwave cloud forcing (SWCF) and longwave cloud forcing (LWCF) and net cloud forcing for SP-CAM are -1.94 W m-2, 0.17 W m-2, and -1.77 W m-2, respectively.
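
    (Expressed per kelvin of warming, a simple division, assuming linearity over the +2 K SST perturbation quoted above:)

    d_net_cf = -1.77   # quoted net cloud forcing change, W m-2, for a +2 K SST rise
    print(f"implied net cloud feedback: {d_net_cf / 2.0:+.2f} W m-2 per K")   # ~ -0.89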

    For more explanation of what this strong net negative feedback from clouds implies see Steve McIntyre’s article:

    http://www.sott.net/articles/show/186951-Suggestions-of-strong-negative-cloud-feedbacks-in-a-warmer-climate

    Max

  2081. manacker Says:

    Robert S

    Did the (fairly recent) Ice Ages really happen? Duh!

    Do we know exactly why and how they happened? (Not really).

    Do we know for sure what caused more distant climate changes? (Probably even less).

    Paleo-climate stuff is good background info (if not manipulated or based on poor statistical analysis like Mann’s hockey stick), but it is still very dicey data because of the interpretation of sparse data and the many unknowns, in particular the “unknown unknowns”, which become more important the further back in time you go.

    Empirical data from today’s observations, particularly if they are reproducible, are worth 10 times as much as any paleo data, and these are worth 10 times as much as any model studies based on theoretical deliberations.

    [That’s my weighting formula, with which you may not agree.]

    Max

  2082. willard Says:

    Speaking of Steve’s article, here is the CA thread:

    Cloud Super-Parameterization and Low Climate Sensitivity

    Compare and contrast the tone, the depth and the editorial content of both discussions.

  2083. Bart Says:

    I removed some posts that were wildly off topic or mudslinging. The former can go to the open thread, the latter can go away.

  2084. Robert S Says:

    Manacker

    Even with the negative cloud feedback implied by Wyant 2006, the total feedback parameter f is still positive, meaning a net amplification of the initial warming due to CO2. I don’t know precisely what Wyant’s values for other feedbacks are, but he explicitly gives us his sensitivity parameter: lambda=0.41 K/Wm-2, which gives us a climate sensitivity of 1.5 K. And anyway, in later papers, Wyant argues that SPs grossly exaggerate the negative cloud feedback.

    “Now we have actual physical observations, which show us that it is most likely strongly negative (about the same magnitude, but with the opposite sign), which suggests that the 2xCO2 climate sensitivity is somewhere around 1C (as it is with no feedbacks).”

    In reaching the value of 1C, you are incorrectly calculating the climate sensitivity.

    “Why do you not rejoice, Robert?”

    Umm…because there’s no evidence of this? We have one relatively new modeling technique, where even a co-author of the Wyant 2006 study (Bretherton) says in 2007:

    Over the short run, even global CRMs will not allow us to more confidently predict climate sensitivity, just as they do not fully solve tropical rainfall bias problems. They are better regarded as a different modeling perspective on the problem, with formidable strengths but some weaknesses compared to conventional AGCMs.

    We also have LC09, which even Spencer admits has some major problems, and where a paper was published only 3 years earlier that came to the exact opposite conclusion in studying ERBE data. Lindzen doesn’t even mention this paper in LC09 to explain why their results differ so greatly.

    And finally, we have Spencer’s last few papers. First off, Lin et al. 2010 show that Spencer’s technique for discerning feedbacks from short-term observations (of TOA imbalance vs. SSTs) does not accurately capture the true feedback signal. Not that the “sign is right, but perhaps not the magnitude”, but that we can’t conclude anything concrete from this technique. Spencer himself admits as much in his most recent paper:

    even if they [the measured feedback parameters] do represent feedback operating on intraseasonal to interannual time scales it is not obvious how they relate to long-term climate sensitivity.

    I would love to accept these papers as the ‘death knell’ to AGW, but they’re really not that strong. I don’t say that because I “can’t face the facts” or because I’m “so entrenched in my beliefs that nothing could convince me”, but because it’s the truth. There are lots of things I am skeptical about with regards to AGW — claims of ‘things are worse than we thought’, or the strong positive cloud feedback as we’ve been discussing. You, on the other hand, have been so skeptical of all papers providing evidence for AGW for so long, but you apply no skepticism whatsoever to papers against AGW.

    And I don’t so much mind conversing with you, Max, but could you tell your pet DLM to get the hell out of here? He adds nothing to any discussion he’s involved in.

  2085. manacker Says:

    Robert S

    I would not refer to the latest data, which suggests a strongly negative net feedback from clouds as the “death knell” to AGW (as you put it).

    I would, however, refer to it as the “death knell” for a climate sensitivity of 3.2C±0.7C, as estimated by the climate models cited by IPCC (AR4 WG1 Ch.8, p.633).

    This is quite simply because this estimate included the impact of a strongly positive net feedback from clouds of 1.3C±0.55C, and the latest data show that this assumption was in all likelihood incorrect, and that the net feedback from clouds is most likely strongly negative, instead.

    If the net cloud feedback were neutral, we would be down to a 2xCO2 climate sensitivity of 1.9C±0.15C according to IPCC.

    Since the latest data show that it is strongly negative instead of neutral, we end up with a 2xCO2 CS well below 1.9C.

    Wyant et al. have given us an estimate of 1.5C, or a net negative impact from clouds of 0.4C.

    [With Wyant’s reported net feedback of –0.885 W/m^2K, the CS estimate by Wyant appears a bit on the high side. Inasmuch as the IPCC estimate of +1.3C was based on an assumed feedback of +0.69 W/m^2K, it would seem logical that a slightly higher negative feedback would result in a slightly higher negative warming. But let’s stay with the 1.5C figure, anyway.]

    This means that the IPCC temperature predictions for 2100 are overstated by at least a factor of 2.

    With a total 2xCO2 warming of 1.5C by year 2100, we have already seen around 40% of this or 0.6C (CO2 from 285 ppmv in 1850 to an estimated 560 ppmv by 2100, and 390 ppmv today), leaving us added warming from today to 2100 of 0.9C

    This is not much more than the 0.7C we have seen over the past 150+ years, with no deleterious effects whatsoever.

    So even if all else were to remain the same, AGW no longer presents a significant threat to humanity. This is not a “death knell” for AGW per se, even though it looks very likely that it is a “death knell” for the premise that AGW represents a serious potential threat, as some would have us believe.

    And that was actually my point.

    Max

  2086. Robert S Says:

    Let me fix that paragraph:
    I would love to accept these papers as the ‘death knell’ to ~3K CS, but they’re really not that strong. I don’t say that because I “can’t face the facts” or because I’m “so entrenched in my beliefs that nothing could convince me”, but because it’s the truth. There are lots of things I am skeptical about with regards to dangerous AGW — claims of ‘things are worse than we thought’, or the strong positive cloud feedback as we’ve been discussing. You, on the other hand, have been so skeptical of all papers providing evidence for ~3K CS for so long, but you apply no skepticism whatsoever to papers against ~3K CS.

    “it would seem logical that a slightly higher negative feedback would result in a slightly higher negative warming. But let’s stay with the 1.5C figure, anyway.”

    It only seems logical if the relationship between the net feedbacks and climate sensitivity is linear. It’s not.
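
    A short Python sketch of the point (cs0, the no-feedback 2xCO2 response, is assumed to be roughly 1.2 K here purely for illustration): equal steps in the total feedback gain factor f produce increasingly large steps in sensitivity.

    cs0 = 1.2                   # K, approximate no-feedback 2xCO2 response (assumed)
    for f in (0.3, 0.5, 0.7):   # total feedback gain factor
        print(f"f = {f:.1f}  ->  CS = {cs0 / (1 - f):.1f} K")   # 1.7, 2.4, 4.0 K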

    There still is no robust observational evidence that clouds provide a net negative feedback. I’ve already explained why papers like LC09 and Spencer 2007/2010 don’t provide robust evidence. And you can extol superparameterizations as some sort of grand triumph over conventional GCMs all you want, but as Bretherton notes, this is not appropriate.

  2087. manacker Says:

    Robert S

    In your latest post you write that you consider the IPCC estimate of a 2xCO2 CS of 3.2C to be correct “not because I ‘can’t face the facts’ or because I am ‘so entrenched in my beliefs that nothing could convince me’, but because it’s the truth”.

    “TRUTH?” Hold on there, Robert. You do not have the monopoly on “truth” here, any more than I do. Drop that line of reasoning, because it is silly.

    I think you will have to admit that superparameterization has brought a marked improvement in the ability of climate models to estimate the impact of clouds, as everyone (including both Bretherton and Wyant) agrees (without calling it “a grand triumph” – it’s just a major improvement, that’s all). Do you agree – YES or NO?

    I think you will also agree that IPCC stated in AR4 that “cloud feedbacks remain the greatest source of uncertainty”, yet that all the models cited by IPCC assumed a strong net positive feedback from clouds. Do you agree – YES or NO?

    I think you will have to agree that Ramanathan and Inamdar stated prior to AR4 that it was not possible to say whether net cloud feedbacks would really be negative or positive, lamenting that there were no good data from physical observations to support the model assumptions. Do you agree – YES or NO?

    And you will have to agree that Spencer et al. showed a strong net negative feedback from clouds over the tropics based on CERES observations. Do you agree – YES or NO?

    There was another study (using a different data set) by Norris, which also pointed to a net negative cloud feedback over a longer time period outside the tropics as well. Are you familiar with this study, and, if so, do you agree – YES or NO?

    Now all of this may not sound like much to you, but it tells me clearly that the net feedback from clouds is very likely to be strongly negative with warming, rather than strongly positive, as assumed by all the climate models cited by IPCC.

    Using the estimate of Wyant, this would point to a 2xCO2 climate sensitivity of 1.5C (rather than 3.2C, as estimated by IPCC using the strongly positive net feedback assumption for clouds). Do you agree that this is what Wyant has concluded – YES or NO?

    This makes good sense to me.

    You say Spencer et al. and LC2009 do not provide “robust observational evidence that clouds provide a net negative feedback”. This is your opinion, which you are, of course, entitled to.

    I would say that all the papers cited by IPCC as evidence for the assumed strongly positive feedback from clouds provide even less “robust observational evidence that clouds provide a net positive feedback”.

    So we are back to probabilities, and it is clear that these have changed since AR4, based on Wyant et al., Norris and Spencer et al. even if you are unable or unwilling to see this. Things change, Robert, and the IPCC data is outdated based on the latest information.

    Max

  2088. Robert S Says:

    “In your latest post you write that you consider the IPCC estimate of a 2xCO2 CS of 3.2C to be correct ‘not because I can’t face the facts, or because I am so entrenched in my beliefs that nothing could convince me, but because it’s the truth’.”

    You’re mangling my words. I said the truth was that the Lindzen, Spencer, and Wyant papers do not provide strong evidence against a CS of ~3K — I did not say that ~3K CS is the truth.

    As for your questions
    1) Yes, but there is still much work to be done on superparameterizations. As Bretherton says, we should not place more confidence in CS estimates from CRMs (like superparameterizations).
    2) Yes.
    3) I haven’t a clue if I&R said this, but I’ll take your word for it.
    4) Spencer showed a strong negative feedback. He suspects it’s clouds, but isn’t sure. But I will say YES to the question, though I disagree on the implications.
    5) I’m not aware of this study — could you post it? I have read Clement et al. 2009 (of which Norris was a co-author), which studied decadal changes in cloudiness and found a positive feedback over the Pacific. Norris also published a paper in 2006 that purported to find a positive feedback:

    Click to access JCLI3558.pdf

    6) Yes, though Wyant states in later papers that their 2006 model exaggerates the cloud feedback, meaning a CS > 1.5K.

    “This is your opinion, which you are, of course, entitled to.”

    Ok, then would you cut the “open your mind” nonsense? I’ve provided valid reasons for why I think LC09 and the Spencer papers are not robust evidence for a negative cloud feedback.

  2089. Robert S Says:

    As it turns out, there are two papers that come to opposite conclusions to that of LC09 — net positive feedbacks as deduced from ERBE data, and a best estimate CS of around 3K or so. Forster and Gregory (2006)
    http://journals.ametsoc.org/doi/abs/10.1175/JCLI3611.1
    and Tsushima et al. (2005)

    Click to access tsushima-etal-05.pdf

    It just seems odd that Lindzen mentioned neither of these to explain why their results are so different.

  2090. manacker Says:

    Robert S

    Below is link to Norris study on clouds.
    ftp://eos.atmos.washington.edu/pub/breth/CPT/norris_jcl04.pdf

    The study shows in general that both outgoing LW radiation (OLR) and reflected SW radiation (RSW) increased with warming over time; the net outgoing flux therefore increased. This appears to be the case at both low and middle latitudes.

    If we assume (as IPCC does) that the changes in clouds are feedbacks from observed higher surface temperatures caused by GH warming (rather than a separate forcing in itself, as assumed, for example, by Svensmark), then this would represent an observed net negative cloud feedback with surface warming over the long-term study period.

    Results show that upper-level cloud cover over low and midlatitude oceans decreased between 1952 and 1997, causing a corresponding increase in reconstructed OLR. At middle latitudes, low-level cloud cover increased more than upper-level cloud cover decreased, producing an overall rise in reconstructed RSW and net upward radiation since 1952. At low latitudes, the decline in RSW associated with the decrease in upper-level cloud cover compensates for the rise in OLR, yielding zero change in net upward radiation if the increasing low-level cloud cover trend is not considered. RSW reconstructed from increasing low-level cloud cover, if reliable, leads to an increase in net upward radiation at low latitudes. The decrease in reconstructed OLR since 1952 indicates that changes in upper-level cloud cover have acted to reduce the rate of tropospheric warming relative to the rate of surface warming. The increase in reconstructed net upward radiation since 1952, at least at middle latitudes, indicates that changes in cloud cover have acted to reduce the rate of tropospheric and surface warming.

    Just another set of long-term data that point to a net increase in outgoing flux (LW+SW) and a negative feedback from clouds with observed surface warming.

    Max

  2091. manacker Says:

    Robert S

    Regarding the studies you cited: Forster et al. concluded a 2xCO2 CS of 1C to 4C (a pretty wide range that overlaps on the low side with LC2009), with this somewhat contradictory statement:

    There is preliminary evidence of a neutral or even negative longwave feedback in the observations, suggesting that current climate models may not be representing some processes correctly if they give a net positive longwave feedback.

    It is generally accepted that the outgoing reflected SW radiation increases with surface temperature (due to increased low altitude clouds over the tropics), so if the outgoing LW radiation also increases (= “negative longwave feedback”), this would appear to indicate a net negative feedback from clouds (as would be the case for the low end of Forster’s range).

    The second study you cite (Tsushima et al.) implies a reduction of the IPCC model-based 2xCO2 climate sensitivity (3.2±0.7C) by around 1C due to differences in the cloud feedback. [This still implies a slightly positive net feedback from clouds.] The big question appears to be the magnitude of the albedo feedback (reflected SW radiation from increased low-altitude clouds); the authors estimate this to be rather small, so they end up with only a slight reduction to the IPCC range of climate sensitivity.

    There also appears to be some question concerning the changes in the LW and solar components of the cloud feedback:

    We have applied the same feedback analysis to the annual variation obtained from the three general circulation models (submitted to AMIP-I), in which the microphysical properties of cloud is computed explicitly. Although the gain factors of overall feedback in these models happen to be approximately similar to the gain factor, which is determined using ERBE, the longwave and solar gain factors obtained from these models are quite different from the observed. Since the difference almost disappears if the contribution from the cloud feedback is removed, we believe that a major fraction of the discrepancy is attributable to the failure of the models to satisfactorily simulate the individual contributions from the longwave and solar components of the cloud feedback.

    At any rate, the studies are interesting, but not necessarily conclusive of a strongly net positive cloud feedback, as assumed by IPCC, but rather of a negative to weakly positive feedback.

    Max

  2092. manacker Says:

    Robert S

    You asked about the Ramanathan + Inamdar study I cited.

    Click to access FCMTheRadiativeForcingDuetoCloudsandWaterVapor.pdf

    Here are the quotes I mentioned related to cloud feedback.

    Clouds reduce the absorbed solar radiation by 48 W m−2 (Cs = −48Wm−2) while enhancing the greenhouse effect by 30Wm−2 (Cl = 30Wm−2), and therefore clouds cool the global surface–atmosphere system by 18Wm−2(C = −18Wm−2) on average. The mean value of C is several times the 4Wm−2 heating expected from doubling of CO2 and thus Earth would probably be substantially warmer without clouds.

    The main message to infer from the recent studies is that we lack accurate data to answer some fundamental questions. How much solar energy reaches the surface of the planet? How do clouds regulate the surface solar insolation? Considerable work is needed to develop radiation-budget instruments for surface-based measurements.

    Cloud feedback. This is still an unresolved issue (see Chapter 8). The few results we have on the role of cloud feedback in climate change is mostly from GCMs. Their treatment of clouds is so rudimentary that we need an observational basis to check the model conclusions. We do not know how the net forcing of −18 W m−2 will change in response to global warming. Thus, the magnitude as well as the sign of the cloud feedback is uncertain.

    Max

  2093. Robert S Says:

    “It is generally accepted that the outgoing reflected SW radiation increases with surface temperature (due to increased low altitude clouds over the tropics)…”

    Forster and Gregory explicitly state that reflected SW has decreased over the ERBE period, while the OLR has increased, and they find that the net cloud feedback is neutral to slightly positive. BUT, given the strongly positive albedo feedback found by the ERBE scanner, the net feedback is found to be around 2.3 Wm-2/K — half the magnitude of, but opposite in sign to, that found in LC09. Speaking of which, another rebuttal of LC09 has been accepted to GRL; this time by Murphy. He states in the conclusion:

    The Earth as a whole can only gain and lose significant amounts of heat from sunlight and thermal emission. When considering a limited region, such as the tropics, heat transport to other regions must be considered. LC09 did not consider such heat transport even though small changes in the horizontal heat transport could swamp their signals.

    Compared to a simple regression of individual data points, the LC09 method of correlating differences obtained from various time intervals results in a less accurate estimate of the slope and misleadingly high correlation coefficients. Within their method, choosing intervals in order to obtain the highest correlation coefficient will bias the result to a low climate sensitivity. In addition, LC09 smoothed the shortwave data before their linear fit, a process that can bias the slope to low climate sensitivity. More rigorous correlations between surface temperature and satellite measurements of outgoing radiation (Forster and Gregory, 2006; Murphy et al., 2009; Chung et al., 2010) show two important results. First, the longwave feedback is smaller than the blackbody response, consistent with a positive water vapor climate feedback. Second, the shortwave feedback was positive during both the ERBE and Clouds and the Earth’s Radiant Energy System (CERES) observation periods.

    “The big question appears to be the magnitude of the albedo feedback (reflected SW radiation from increased low-altitude clouds); the authors estimate this to be rather small, so end up with only a slight reduction to the IPCC range of climate sensitivity”

    They actually found a very large positive feedback in both SW and LW components. If you’ll read section 4, Observed feedbacks, you’ll see that the authors find a feedback parameter f of roughly 0.7, implying a climate sensitivity ~4K for 2xCO2. BUT, they reduce the feedback parameter by 0.1 because of the gain factor for albedo feedback, bringing their stated best estimate of CS to ~3K. In neither FG06 nor Tsushima do they find a best estimate CS much different from the IPCC’s 3.2K.

    The largest difference between FG06/Tsushima and LC09 is that both the former found a fairly large positive net feedback (though with a relatively large uncertainty), whereas LC09 found a very large net negative feedback of roughly -4.5 Wm-2/K (and thus, a CS of 0.5K).
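
    (To illustrate what a “simple regression of individual data points” means here, a toy Python sketch on synthetic numbers, not the ERBE data; a 2.3 W m-2/K response is planted so the regression has something to recover:)

    import numpy as np

    rng = np.random.default_rng(2)
    sst = rng.normal(scale=0.2, size=120)              # synthetic monthly SST anomalies, K
    toa = 2.3 * sst + rng.normal(scale=0.5, size=120)  # flux with a planted 2.3 W m-2/K response
    slope = np.polyfit(sst, toa, 1)[0]                 # regress flux on temperature
    print(f"regression-estimated feedback: {slope:.1f} W m-2 per K")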

  2094. Robert S Says:

    The Norris 2004 study you post doesn’t appear to have been published anywhere, but it’s interesting nonetheless. Wylie et al. (2005) on the other hand found no trend in total cloud cover for the period 1979-2001 using data from NOAA HIRS, but did find a small but statistically significant increase in high cloud cover over the tropics, which goes against the Iris hypothesis.

    There are other problems with Norris (2004), such as the comparison of reconstructed cloud cover to trends from ISCCP, which according to Evan et al. (2007), are more likely due to a satellite viewing geometry artifact rather than physical changes in the atmosphere. CRFs as used in Norris (2004) are also not the best metric for assessing the net effect from clouds, as per Soden (2004).

  2095. manacker Says:

    Robert S

    You are probably correct in concluding that no studies on cloud feedbacks are perfect, starting with Soden, Ramanathan + Inamdar, etc., etc.

    But I also believe it is reasonable to conclude, based on all the recent data out there, that the net feedback from clouds (LW+SW) is very likely to be negative (rather than strongly positive, as assumed by all the models cited by IPCC), and that the 2xCO2 climate sensitivity is very likely to be below 1.5C (probably closer to 1C).

    This is based on physical observations as well as improved model studies using superparameterization, which came out after IPCC AR4.

    Trenberth has recently been quoted as saying that clouds may act as a “natural thermostat”, and it very much appears that this is the case.

    Max

  2096. A C Osborn Says:

    manacker, I admire your stamina & dedication in trying to get Bart’s bloggers to admit that the “Science” has moved on and that there IS contradictory evidence out there. Keep up the good work.

  2097. manacker Says:

    Robert S

    Posted links to seven studies that all show negative cloud feedback, primarily resulting from strong negative SW feedback due to increased low-level cloud reflectivity with warming.

    Somehow this post has gotten stuck in the incoming filter (maybe because of the many links), so will keep trying to post it.

    Max

  2098. manacker Says:

    Robert S

    Posted this earlier, but the “spam filter” apparently does not like all the reference links I cited – so have broken post down into seven separate parts and am posting links separately:

    Below are a few more papers implying a net negative cloud feedback. In addition to the studies I already cited (Wyant, Spencer, Norris), these indicate that the climate models (such as those cited by IPCC) understate the significant observed negative feedback from SW cloud reflectivity.

    Part 1:
    Low-Latitude Cloud Feedbacks CPT at Stony Brook, Wuyin Lin and Minghua Zhang
    (see reference 1)

    Two climate change experiments have been carried out with the CAM, in which perturbations of SSTs from other coupled models are used as forcing. In both experiments, the model showed negative cloud feedback due to dominance of the reduction of shortwave cloud forcing.
    All experiments show an increase of low clouds, but a decrease of high and middle clouds. The increased low cloud amount, however, is from the optically thick low clouds, while the decreased middle and high clouds are from the optically thin clouds.

    Link and Part 2 to follow

    Max

  2099. manacker Says:

    Link for Part 1
    Reference 1
    http://climate.msrc.sunysb.edu/cpt/

  2100. manacker Says:

    Robert S

    Here is Part 2

    Zhang et al. “Mechanisms of Low Cloud-Climate Feedback in Idealized Single-Column Simulations with the Community Atmospheric Model, Version 3 (CAM3)”, Journal of Climate, Sep. 15, 2008
    (see reference 2)

    The stratiform condensation is about 14% larger in the warm simulation than in the control simulation, as shown by the blue lines, since it is determined by the relative humidity rather than the moisture content. The larger condensation leads to more cloud liquid water as shown in Fig. 10c. This contributes to the increase of shortwave cloud cooling or the negative cloud feedback.

    Link plus part 3 to follow.

    Max

  2101. manacker Says:

    Link for Part 2
    Reference 2
    http://findarticles.com/p/articles/mi_7598/is_200809/ai_n32200137/pg_10/?tag=content;col1

  2102. manacker Says:

    Robert S

    Part 3:

    Another post-AR4 study (Spencer + Braswell, 2008) finds that current model diagnoses of cloud feedbacks could be significantly biased in the positive direction.

    (see reference 3)

    For those model runs producing monthly anomaly statistics similar to those measured by satellites, the diagnosed feedbacks have positive biases generally in the range of -0.6 to -1.3 W m-2 K-1. The amount of bias depends mostly upon the size of the non-feedback cloud variability relative to the surface temperature variability. These results suggest that current observational diagnoses of cloud feedback could be significantly biased in the positive direction.

    Link plus Part 4 to follow

    Max

  2103. manacker Says:

    Link for Part 3
    Reference 3
    http://journals.ametsoc.org/doi/abs/10.1175/2008JCLI2253.1

  2104. manacker Says:

    Robert S

    The spam filter had a problem with Part 4, so am sending Part 5 first:

    Liu et al. (1997)
    (see reference 5)

    Then it is found that in the diagnostic cloud case, the negative feedback of the solar short wave (SW) flux acts significantly to balance the effect of upwelling / downwelling in addition to the latent flux. In addition, the variability of the SW flux is shown to be closely related to the variability of the middle and high cloud covers. Therefore, the negative feedback of the SW surface flux may have significant contribution to the cloud feedback on the SST variability.

    Link and Part 6 to follow.

    Max

  2105. manacker Says:

    Link for Part 5
    Reference 5
    http://www.springerlink.com/content/r154kx168n535768/

  2106. manacker Says:

    Robert S

    Here is Part 6:

    Roeckner et al. Cloud optical depth feedbacks and climate modeling

    (see reference 6)

    A feedback analysis of the simulated climate change supports earlier suggestions of the importance of cloud optical depth feedbacks. The net effect of clouds is to provide a negative feedback on surface temperature, rather than the positive feedback found in earlier general circulation model studies without considering cloud optical depth feedbacks

    Link and Part 7 to follow.

    Max

  2107. manacker Says:

    Link for Part 6
    Reference 6

    Click to access 329138a0.pdf

  2108. manacker Says:

    Robert S

    Here is Part 7 = last part. (Will try sending Part 4 afterward.)

    (see reference 7)

    Although clouds contribute to the greenhouse warming of the climate system by absorbing more outgoing infrared radiation (positive feedback), they also produce a cooling through the reflection and reduction in absorption of solar radiation (negative feedback) (Cubasch & Cess, 1990). It is generally assumed that low clouds become more reflective as temperatures increase, thereby introducing a negative feedback, whilst the feedback from high clouds depends upon their height and coverage and could be of either sign (Gates et al., 1992).

    Link to follow.

    Max

  2109. manacker Says:

    Link for Part 7
    Reference 7
    http://www.global-climate-change.org.uk/6-9-2-2.php

  2110. manacker Says:

    Link for Part 4
    Reference 4
    http://cat.inist.fr/?aModele=afficheN&cpsidt=21325482

  2111. manacker Says:

    Robert S

    Based on all the studies I cited (total of 10), it is clear that the IPCC model assumption of strong net positive cloud feedback is not substantiated by the observations or the latest model studies using improved techniques.

    These studies point to a more realistic estimate for the 2xCO2 climate sensitivity (including all feedbacks) of 1.0 to 1.5C, rather than 2.5 to 3.9C (as estimated by IPCC, with the assumed positive cloud feedback alone contributing 1.3C of this total).

    Let’s move on now to what this all means.

    Will come back with some thoughts on this.

    Regards,

    Max

  2112. manacker Says:

    Robert S

    Well, let’s do a quick calculation on projected warming to year 2100, based on the data we have discussed, which show a net negative feedback from clouds with warming. Let’s assume that all the other feedbacks are as assumed by the climate models cited by IPCC (although I believe that there is a good case for the postulation that the water vapor feedback is exaggerated, as discussed earlier).

    Wyant et al. have estimated a 2xCO2 climate sensitivity of 1.5C, including this negative cloud feedback as determined by superparameterization. Let’s forget Spencer et al. for now, who also showed a negative cloud feedback based on CERES observations over the tropics, but on a shorter-term basis, and stay with the Wyant estimate. The net negative cloud feedback (primarily resulting from SW reflection by increased low altitude clouds) was also confirmed in the other studies I cited.

    Eliminating the assumed strongly positive cloud feedback from IPCC puts the 2xCO2 climate sensitivity at 3.2 – 1.3 = 1.9C. Including the impact of a net negative feedback puts this at somewhere between 1.0 and 1.5C.

    Using the IPCC “wordmanship”, let’s say that it is somewhere between “extremely likely” (>95%) and “very likely” (>90%) that the 2xCO2 climate sensitivity does not exceed 1.5C.

    So let’s stick with a 2xCO2 climate sensitivity (once corrected for cloud feedback) of 1.5C.

    In 1850, atmospheric CO2 was an estimated 285 ppmv. Today (2010) it is at 390 ppmv, as measured at Mauna Loa.

    Over the past 50 years it has increased at a compounded annual growth rate (CAGR) of 0.4% per year. The rate over the past 5 years was the same, so there has been no observed acceleration in the CAGR since the mid 20th century. [The CAGR for the entire 160-year record was only 0.2%, but that includes the beginning of industrialization at a much slower rate, with a much smaller and primarily rural population, in the late 19th and early 20th centuries.]

    IPCC Case B1 assumes it will continue at a somewhat higher CAGR of 0.48% per year, reaching a level of around 570 ppmv (or 2x the estimated 1850 value) by year 2100. This CAGR assumption seems reasonable (even if it may be a bit on the high side).

    With a 2xCO2 CS of 1.5C, we have:

    dT = 1.5C for 2xCO2
    dT = 1.5 * ln(390/285) / ln(2) ≈ 0.7C from 1850 to 2010
    dT = 1.5 * ln(570/390) / ln(2) ≈ 0.8C from 2010 to 2100
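
    (The same arithmetic as a short Python check, using the concentrations above; the last line shows the growth rate actually needed to reach 570 ppmv by 2100:)

    import math

    cs = 1.5   # assumed 2xCO2 climate sensitivity, C

    def dT(c0, c1):
        return cs * math.log(c1 / c0) / math.log(2)

    print(f"1850-2010: {dT(285, 390):.1f}C")   # ~0.7C
    print(f"2010-2100: {dT(390, 570):.1f}C")   # ~0.8C
    print(f"needed CAGR: {(570 / 390) ** (1 / 90) - 1:.2%} per year")   # ~0.42%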

    In other words, we have theoretically seen 0.7C warming from increased CO2 from 1850 to today, and will see a further 0.8C warming from today to year 2100.

    In actual fact, we did see warming of around 0.7C (from all causes) from 1850 to today. So, even without getting into the detail of attribution studies, the observations give a reality check for the theory (which should make Bart happy).

    Three other IPCC “scenarios” (A1T, B2 and A1B) assume a sharp increase in CAGR of CO2 to 0.65%, 0.80% and 0.86%, respectively. This increase does not appear reasonable, in view of ever increasing scarcity and higher prices of petroleum, world-wide efforts to increase energy efficiency and decrease the use of fossil fuels and the expected leveling off of human population in mid-century. (In fact, it is highly likely that the CAGR will decrease, due to these factors, rather than increase or remain at 0.4% CAGR).

    The final two IPCC “scenarios” (A2 and A1FI) assume atmospheric CO2 levels by 2100 that are physically impossible to reach, even if all optimistically estimated fossil fuel reserves on our planet were consumed by then (there just isn’t enough contained carbon), so they can be discarded as well.

    So we are really down to a forecasted theoretical GH temperature increase by year 2100 that is only slightly higher than what we have already seen, without any noticeable problems, since 1850.

    I do not believe that this is cause for concern, Robert.

    Do you?

    If so, why?

    Max

  2113. manacker Says:

    A C Osborn

    Thanks for your post.

    Max

  2114. Robert S Says:

    “Let’s move on now to what this all means.”

    Is this a joke? I’m sorry, but no. Your recent slew of links adds little to this discussion — four of the seven are modeling studies, three of those published at least a decade before AR4 (Parts 4 and 6 two decades before); two of the seven were just webpages, not studies, one of which was simply an explanation of the cloud feedback*; and one (S+B) adds nothing to whether or not the CF is positive or negative.

    *This was your part 7 link. The question isn’t whether increased Low Cloud Cover (LCC) provides a negative feedback, but whether LCC increases, where it increases, and what HCC does. Both Clement (2009)
    http://scienceonline.org/cgi/content/abstract/sci;325/5939/460
    and Norris 2005

    (PDF: JCLI3558.pdf)

    found that LCC decreased in limited regions. Wylie (2005) found that total global cloud cover did not change from 1979-2001, but HCC showed a small but significant increase over the tropics (not a decrease, as supposed by the Iris hypothesis).

    As for Zhang (Part 2 of your links), Kiehl (2005) found the higher the resolution, the higher the CS from the CAM3 model.
    ftp://eos.atmos.washington.edu/pub/breth/CPT/kiehl_etal_CCSM3_climsens_jcl06.pdf

    -LC09 contained large problems, as highlighted by Trenberth et al. 2010 and Murphy 2010, and came to the exact opposite conclusion of two previous studies using the same dataset (but didn’t discuss why).
    -Even Spencer is unsure of how his method for diagnosing feedbacks is related to CS, but Lin 2010 found that this method does not accurately represent the true feedback signal.
    -Superparameterization is a new technique that may be better than GCMs in some ways, but worse in others. A co-author of Wyant 2006 (Bretherton) says we should not be more confident in CS estimates from superparameterizations and other CRMs.

    You can move on if you’d like, but the evidence does not support such a step.

    “In actual fact, we did see warming of around 0.7C (from all causes) from 1850 to today. So, even without getting into the detail of attribution studies, the observations give a reality check for the theory (which should make Bart happy).”

    Unless you think the system equilibrates instantly with a new level of forcing, your calcs do not provide much of a ‘reality check’ for a CS<1.5K.

    I don't think this conversation is going anywhere. I continue to say that the evidence is pretty sparse either way (it is), and you continue to say something along the lines of "given that CS is now proven to be <1.5K…". I'm not sure I'm up to the challenge of convincing the unconvincable.

  2115. manacker Says:

    Robert S

    You are wiggling and squirming a bit here.

    I have shown you several studies, which point to a net negative cloud feedback. These are from actual physical observations as well as improved modelling techniques, which were not yet available at the time of AR4.

    On the basis of this (very likely) net negative cloud feedback, the (very likely) 2xCO2 climate sensitivity is at or below 1.5C, instead of 3.2C, as assumed by IPCC with an assumed strongly positive net cloud feedback.

    Get used to it, Robert.

    You say “the evidence is pretty sparse either way”. That may be right, but it is just a bit more sparse (or contrived) in the direction of the IPCC assumption, that’s all.

    And then we have the cooling of the atmosphere (surface and troposphere) after 2000 and of the upper ocean, since improved Argo measurements were installed in 2003, despite record increase of CO2 and model forecasts of strong warming, which also speak against a high climate sensitivity.

    In light of all the evidence I cited, you are definitely not “up to the challenge of convincing” me that the exaggerated CS according to IPCC is correct.

    And I can see from this exchange that your mind has always been made up that it is correct, regardless of any evidence to the contrary.

    So let’s let it rest there and hope any lurkers here can make up their own minds.

    It was nice blogging with you, anyway. Thanks.

    Max

  2116. manacker Says:

    Robert S

    This may help explain why the IPCC assumptions on positive feedbacks are wrong.
    http://www.drroyspencer.com/2009/05/a-layman’s-explanation-of-why-global-warming-predictions-by-climate-models-are-wrong/

    Max

  2117. DLM Says:

    Max says:”On the basis of this (very likely) net negative cloud feedback, the (very likely) 2xCO2 climate sensitivity is at or below 1.5C, instead of 3.2C, as assumed by IPCC with an assumed strongly positive net cloud feedback.”

    That pretty much accounts for all of Kevin ‘Travesty’ Trenberth’s “missing heat”. But my guess is they will continue to blame the thermometers. The dogma needs heat, whether real or imagined.

    You have a lot of patience, Max. You have been very gentle with your obstinate, obtuse pupils. Like a seasoned kindergarten teacher.

  2118. manacker Says:

    DLM

    Thanks for your post.

    Yes, it takes patience.

    You mention Kevin ‘Travesty’ Trenberth’s “missing heat”.

    It’s true, as you wrote, that most of the AGW-supporters will “blame the thermometers” for the current cooling (and the resulting “missing heat”).

    But just how big a “travesty” is this really for the AGW paradigm?

    From 2000 to 2009, CO2 has increased from 369 to 390 ppmv.

    Using the IPCC estimate of 3.2C GH warming for 2xCO2 (which Robert S believes is correct, despite all the evidence to the contrary), this increase should have caused an atmospheric warming of 0.26C.

    In actual fact we saw a cooling of 0.08C at the surface and 0.11C in the troposphere over this period, for a net discrepancy between the observed facts and the AGW theory of 0.35C (or half the amount of total warming observed from 1850 to today). Ouch!

    Let’s assume that the theoretical 0.26C atmospheric warming is correct, but that it went into the upper ocean (top 500 meters), where it is hiding to come back out again as added warming some day (as James E. Hansen has suggested with his “hidden in the pipeline” postulation).

    The upper 500 meters of ocean has 170 times the heat capacity of the entire troposphere, so this warming is equivalent to a warming of 0.26 / 170 = 0.0015C of the upper ocean.

    This infinitesimal amount of upper-ocean warming would be impossible to measure, but Argo measurements tell us that, in actual fact, the upper ocean has cooled since they started in 2003.

    So the atmosphere has cooled (both at the surface and in the troposphere), while the GH theory (according to IPCC) tells us it should have warmed.

    At the same time, the upper ocean has also cooled.

    The latent heat from ice that has melted plus water that has evaporated into the atmosphere since 2000 is too small to make any difference.

    So where is this missing energy if it cannot be found anywhere on our planet?

    Instead of blaming the thermometers, Kevin Trenberth thinks it may be going back into outer space, with clouds acting as a natural thermostat to reflect more incoming SW radiation (back to the long discussion with Robert S on cloud feedbacks).

    See “The Mystery of Global Warming’s Missing Heat”:
    http://www.npr.org/templates/story/story.php?storyId=88520025

    Another postulation has it disappearing into the lower ocean. This is so vast, with such a large heat content, that it would have warmed by only 0.0002C!
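
    For lurkers wanting to follow the arithmetic, here is a sketch; note that the 3.2C sensitivity, the 369-to-390 ppmv rise and the 170:1 heat-capacity ratio are all figures asserted in this comment, not independently derived here:

```python
import math

S = 3.2                                      # 2xCO2 sensitivity assumed above
dT = S * math.log(390 / 369) / math.log(2)   # expected warming from the 2000-2009 CO2 rise
print(f"theoretical atmospheric warming: {dT:.2f} C")   # ~0.26 C

UPPER_OCEAN_TO_TROPOSPHERE = 170             # heat-capacity ratio quoted above
print(f"equivalent upper-ocean warming: {dT / UPPER_OCEAN_TO_TROPOSPHERE:.4f} C")  # ~0.0015 C
```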

    In either case, it is not “lurking in hiding” somewhere to come out and eventually cause more atmospheric warming, as Hansen has postulated.

    And the “hidden in the pipeline” hypothesis has thereby been falsified (along with the postulation of a 2xCO2 climate sensitivity of 3.2C).

    For a more detailed discussion see:

    Have Changes In Ocean Heat Falsified The Global Warming Hypothesis? – A Guest Weblog by William DiPuccio

    Max

  2119. cohenite Says:

    A remarkable discussion between manacker and Robert; a couple of queries: manacker, you mentioned a defence of L&C’s OLR CS paper by H Tuuri; do you have a link?

    Robert, you referred to the Sherwood and Allen wind-shear paper as establishing a THS (tropospheric hot spot); personally I think this is a dreadful paper, as Lubos describes;

    http://motls.blogspot.com/search?q=sherwood+and+allen

    Secondly, you refer to the Clements paper as supporting a positive feedback from clouds; the Clements paper says this:

    “The only model that passed this test simulated a reduction in cloud cover over much of the Pacific when greenhouse gases were increased, providing modeling evidence for a positive low-level cloud feedback”

    That is, it got hotter with less clouds; this is something that Kump and Pollard had already noted;

    http://www.scienceonline.org/cgi/content/abstract/320/5873/195

    That is, less clouds hotter, more clouds cooler. This to me indicates that clouds are a moderating feedback, which is different from a +ve or -ve one, and the fact that it is warmer at night with clouds tends to support this view. Then there is Pinker, who wrote a paper on SW flux at TOA and BOA; in the context of cloud forcing, which is perhaps a better description than feedback, her comments here are helpful:

    (PDF: debate_australia_tim_lambert.pdf)

    The upshot of Pinker’s work is that clouds have a net negative forcing at BOA:

    surface SW CRF at BOA ≈ -0.8 to -1.0 W/m^2 per % cloud cover

    and

    surface LW CRF at BOA ≈ +0.6 W/m^2 per % cloud cover

    Therefore net CRF at BOA ≈ -0.2 to -0.4 W/m^2 per % cloud cover

  2120. manacker Says:

    cohenite

    For H. Tuuri analysis and validation of LC09 numbers see responses 51, 55, 56, 58 and 59 below:
    http://www.realclimate.org/index.php/archives/2010/01/lindzen-and-choi-unraveled/comment-page-2/#comments

    Thanks for the links to the Kump + Pollard paper on biological cloud feedbacks plus the Motl discussion of the Sherwood + Allen “global blowing” proxy to find the missing tropospheric hot spot GH fingerprint.

    Max

  2121. cohenite Says:

    Thank you manacker; you may be interested in my take on the Dessler papers here:

    http://jennifermarohasy.com/blog/2009/04/more-worst-agw-papers/

  2122. manacker Says:

    cohenite

    Thanks for link to your blog on jennifermarohasy.

    I’d have to agree that the Dessler et al. attempt to rewrite his co-authored 2004 observations (Minschwaner + Dessler) that RH decreased sharply with surface warming in the tropics (without presenting any new observational data) ranks among the worst AGW papers out there. Maybe it doesn’t quite reach the level of the Sherwood “global blowing” treatise, but it comes close. Another one (long forgotten by now, except by IPCC) is the since debunked “calm night / windy night” attempt to refute the UHI by Parker et al.

    BTW, I don’t know if you have seen this critique of IPCC AR4 (primarily WG1) and SPM 2007 reports, which was assembled on a now-defunct CA thread; it points out a series of errors, exaggerations and false claims in these reports.
    http://sites.google.com/site/globalwarmingquestions/ipcc

    Max

  2123. cohenite Says:

    I haven’t seen the “calm night/windy night” paper but I’m a bit of a fan of worst papers; here are 2 recent contenders; the first is predicting future temp increases of 25 F with increased humidity, the wet-bulb temperature:

    AGW to reach…"The Edge of Wetness"…

    The second one is concerned with CS but is uncertain about the levels of certainty;

    Insufficient Forcing Uncertainty

    As a matter of interest what is the worst case scenario for future temp increases based on AGW?

  2124. Robert S Says:

    “personally I think this is a dreadful paper as Lubos describes”

    That’s neat. It wasn’t the only paper I referenced.

    “That is, it got hotter with less clouds”

    Yes, that’s why it is a positive feedback — Clement found that cloud cover decreased with increasing temperatures. You can argue about cause and effect with regards to what initiated the warming, I suppose, but cloud cover continued to decrease with warming temperatures.

    And I haven’t read Pinker, so could you explain specifically how it finds evidence for a negative cloud feedback?

    “you may be interested in my take on the Dessler papers here:”

    Your ‘rebuttal’ of Dessler 2008 is pretty lame, to put it mildly (almost as lame as the “rewriting history” response). In fact, taking a look at your list of “best climate research papers of all time”, I feel my time would be better spent elsewhere. No offense meant.

  2125. cohenite Says:

    None taken; I’ve been insulted by experts everywhere from Tamino to Deltoid. As for feedback from clouds, I think I said clouds were a moderator [sic], and this can be contrary, as my night example shows; that is, clouds at night warm whereas clouds by day cool. As for Professor Pinker, my friend Steve Short summed up her findings and cloud feedback thus:

    “According to Pinker (2005), surface solar irradiance increased by 0.16 W/m^2/year over the 18 year period 1983 – 2001 or 2.88 W/m^2 over the entire period. This was a period of claimed significant anthropogenic global warming.

    This change in surface solar irradiance over 1983 – 2001 is almost exactly 1.2% of the mean total surface solar irradiance of recent decades of 238.9 W/m^2 (K, T & F, 2009).

    According to NASA, mean global cloud cover declined from about 0.677 (67.7%) in 1983 to about 0.644 (64.4%) in 2001 or a decline of 0.033 (3.3%). The 27 year mean global cloud cover 1983 – 2008 is about 0.664 (66.4%) (all NASA data)

    The average Bond Albedo (A) of recent decades has been almost exactly 0.300, hence 1 – A = 0.700

    It is possible to estimate the relationship between albedo and total cloud cover, linearized about the average global cloud cover; it is described by the simple relationship:

    Albedo (A) = 0.250C + 0.134 where C = cloud cover. The 0.134 term presumably represents the surface SW reflection.

    For example; A = 0.300 = 0.25 x 0.664 + 0.134

    This means that in 1983; A = 0.25 x 0.677 + 0.134 = 0.303

    and

    in 2001; A = 0.25 x 0.644 + 0.134 = 0.295

    Thus in 1983; 1 – A = 1 – 0.303 = 0.697

    and in 2001; 1 – A = 1 – 0.295 = 0.705

    Therefore, between 1983 and 2001, the known reduction in the Earth’s albedo A as measured by NASA would have increased solar irradiance by 200 x [(0.705 – 0.697)/(0.705 + 0.697)]% = 200 x (0.008/1.402)% = 1.1%

    This estimate of 1.1% increase in solar irradiance from cloud cover reduction over the 18 year period 1983 – 2001 is very close to the 1.2% increase in solar irradiance measured by Pinker for the same period.

    Within the precision of the available data and this exercise, it may therefore be concluded that it is highly likely that Pinker’s finding was due to an almost exactly functionally equivalent decrease in Earth’s Bond albedo over the same period resulting from global cloud cover reduction.

    Hence surface warming over that period may be reasonably attributed to that effect.”
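
    Steve Short's back-of-envelope numbers can be checked line by line. A sketch; the linear relation A = 0.25*C + 0.134 is his empirical fit (not a standard result), and the intermediate rounding follows the quote:

```python
def albedo(cloud_cover):
    # Steve Short's empirical fit about the recent-decades mean cloud cover:
    return 0.25 * cloud_cover + 0.134

a_1983 = round(albedo(0.677), 3)   # 0.303
a_2001 = round(albedo(0.644), 3)   # 0.295

# Percentage change in the absorbed fraction (1 - A), as computed in the quote:
pct = 200 * ((1 - a_2001) - (1 - a_1983)) / ((1 - a_2001) + (1 - a_1983))
print(f"implied irradiance increase, 1983-2001: {pct:.1f}%")   # ~1.1%, vs Pinker's ~1.2%
```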

  2126. Robert S Says:

    I’m trying to understand precisely what you mean by ‘moderating feedback’ — somehow different than a negative feedback?

    Neither Pinker nor Steve Short provide evidence for a negative cloud feedback. And Short’s analysis seems overly simplistic without taking into account the optical properties and location of the clouds involved in the reduction of CC.

  2127. Robert S Says:

    Considering Wylie 2005 finds no trend in total cloud cover for the period 1979-2001, it sounds like we have conflicting data.

    By “NASA data”, I assume Steve is referring to ISCCP, which Pinker also used for determining trends in S, but as Evan 2007 shows, trends from this dataset are suspect:

    (PDF: Evan_etal_GL028083.pdf)

  2128. cohenite Says:

    Well Robert, a +ve feedback causes Venus, a -ve feedback causes snowball and a moderating feedback causes homeostasis with some error bars [think woolly mammoths and alternatively GREENland].

  2129. Robert S Says:

    So not different from a “negative feedback”…

  2130. manacker Says:

    Robert S

    You wrote to cohenite:

    You can argue about cause and effect with regards to what initiated the warming, I suppose, but cloud cover continued to decrease with warming temperatures.

    Duh! I can see this every day when scattered clouds appear and disappear, without contriving to call it a “positive feedback”, somehow magically related to atmospheric CO2 levels.

    To reword your sentence:

    So warming continued to increase as cloud cover decreased.

    (Happens every day, Robert, quite independently of atmospheric CO2 levels.)

    This is supposed to be “science”?

    Max

  2131. manacker Says:

    Robert S

    A change of subject.

    IPCC has been myopically fixating on the recent “blip” in our climate (measured since 1850, but with emphasis starting around 1976), trying to tie it to another “blip” in atmospheric CO2 (measured since 1958, with some somewhat questionable ice core data before 1958), which IPCC is assuming has come from anthropogenic sources. Since the theoretical physics behind GH warming only predict a very small effect of doubling CO2, all sorts of “positive feedbacks” are contrived to make this GH warming appear less insignificant (our recent exchange).

    But how realistic is this all?

    For the largest part of the past 500 million years atmospheric CO2 has been significantly higher than today, with temperatures warming and cooling independently of CO2.

    From: R.A. Berner and Z. Kothavala—GEOCARB III: “A revised model of atmospheric CO2 over Phanerozoic time”

    (PDF: qn020100182.pdf)

    There appear to have been very high early Paleozoic levels of CO2 (7,000+ ppmv), followed by a large drop during the Devonian, and a rise to moderately high values (2,000+ ppmv) during the Mesozoic, followed by a gradual decline through both the later Mesozoic and Cenozoic, finally reaching a low point at the current level of around 300 ppmv.

    Another chart shows our planet’s temperature over this same geological history, plotted against atmospheric CO2 levels. This shows no correlation between CO2 and temperature (and certainly no evidence of causation).
    http://www.freerepublic.com/focus/f-news/1644060/posts

    Then there is the shorter-term Vostok record, which goes back to 420,000 years BP. This shows CO2 lagging temperature by several centuries, with temperature decreasing at times when CO2 levels are relatively high and increasing at other times when they are much lower, i.e. no apparent indication that CO2 has been the “driver” of temperature.

    (PDF: 1999.pdf)

    (PDF: CO2,Temperaturesandiceages-f.pdf)

    Nowhere do I see the dreaded “dangerous CO2 level of at most 450 ppmv” where “irreversible tipping points” in our planet’s climate are reached, according to James E. Hansen’s computer models.

    Maybe Hansen needs to brush up on Geology 101 and forget his computer models, the “hidden in the pipeline” postulation, the “coal death trains” and all that other nonsense.

    And IPCC should fold up and be replaced with a much smaller group of serious scientists, who do not start off with an agenda to sell AGW.

    Max

  2132. A C Osborn Says:

    Max, I think I have you sussed now, you want to be the last poster on the longest AGW blog ever. :)

  2133. MapleLeaf Says:

    Max,

    Please watch this; it is a talk given by Dr. Alley at the American Geophysical Union last year:

    http://thingsbreak.wordpress.com/2009/12/19/richard-alley-the-biggest-control-knob-carbon-dioxide-in-earths-climate-history/

    I suggest you contact Dr. Alley should you have any questions or wish to try to show him that he and his learned peers have it all wrong. Hopefully, you will learn a great deal watching his talk; I and many others sure did.

  2134. manacker Says:

    cohenite

    I’m sure you have seen the various Spencer studies on clouds, but this one gets right to the basics, pointing out a key error in the way IPCC models view clouds.

    Observations by Spencer et al. showed that on a short-term basis over the tropics clouds act as a strong negative feedback or dampening effect with surface warming. At the same time, Roy Spencer has pointed out why the postulation that clouds act as “positive feedbacks” to enhance the impact of GH forcing by CO2 is basically a false premise.
    http://www.drroyspencer.com/research-articles/satellite-and-climate-model-evidence/

    Clouds represent a natural forcing all on their own, with low level clouds reflecting incoming SW radiation and higher clouds generally acting to absorb and re-radiate outgoing LW energy radiated from the surface. The net impact of clouds is estimated to be one of strong cooling of our planet.

    In attempting to explain the missing heat over the past several years despite record CO2 increase, Kevin Trenberth has indicated that the recent cooling may have been the result of clouds acting as a “natural thermostat”. This is a very astute and unusual observation, coming from someone who usually sticks closely to the “party line paradigm”.

    Ramanathan and Inamdar estimated net cooling of 48 W/m^2 from low clouds and net warming of 30 W/m^2 from high clouds, for a net cooling impact from clouds of 18 W/m^2 (or somewhat more than 10 times the total estimated forcing from all GHGs since 1750, according to IPCC).

    The net cooling impact is thus quite large.

    We have all observed that temperature drops when low-level clouds block the sun, and that it warms up again when the clouds move away. The clouds are not “reacting” as a “feedback” to changes in the surface temperature, but are simply blocking the net SW solar radiation reaching the surface, and therefore the temperature. This is a clear example of “forcing” by clouds.

    As Willis Eschenbach has observed and reported, in the tropics clouds begin to form as the ocean temperature rises; this is an example of a dampening effect, or negative “feedback”.

    Then there is the effect of slowing down nighttime cooling at higher latitudes by absorbing outgoing LW radiation. Is this a positive “feedback” or a natural “forcing”?

    Using CERES observations from the Aqua and Terra satellites, Spencer shows that:

    strong negative feedback is observed to occur on shorter time scales in response to non-radiative forcing events (evaporation/precipitation), which are superimposed upon a more slowly varying background of radiative imbalance, probably due to natural fluctuations in cloud cover changing the rate of solar heating of the ocean mixed layer.

    From these data Spencer concludes:

    The resulting picture that emerges is of an IN-sensitive climate system, dominated by negative feedback. And it appears that the reason why most climate models are instead VERY sensitive is due to the illusion of a sensitive climate system that can arise when one is not careful about the physical interpretation of how clouds operate in terms of cause and effect (forcing and feedback).

    And

    And as I have tried to demonstrate here, the main reason for the current inadequacy of such methods of comparison between models and observations is the contaminating effect of clouds causing temperatures to change (forcing) when trying to estimate how temperatures cause clouds to change (feedback). This not a new issue, as it has been addressed by Forster and Gregory (2006, applied to satellite measurements) and Forster and Taylor (2006, applied to climate model output). I have merely demonstrated that the same contamination occurs from internal fluctuations in clouds in the climate system.

    The bottom line from the model and observational evidence presented here is that:
    Net feedbacks in the real climate system — on both short and long time scales — are probably negative. A misinterpretation of cloud behavior has led climate modelers to build models in which cloud feedbacks are instead positive, which has led the models to predict too much global warming in response to anthropogenic greenhouse gas emissions.

    As IPCC put it in SPM 2007:

    Cloud feedbacks remain the largest source of uncertainty.

    It appears that a key part of this problem is that the climate models all view clouds solely as a “feedback” to anthropogenic forcing, rather than as a natural “forcing” per se.

    Max

  2135. manacker Says:

    Maple Leaf

    Yeah. I have seen “the learned” Dr. Alley’s presentation.

    Upon closer examination, it is not too convincing.

    Alley skirts around the “what came first: CO2 or temperature” question in the Vostok record, without really addressing it head on.

    For the rest of the problems, I commented on this thread to a blogger named DLM on Alley’s talk (April 11, 2010, 22:29).

    But back to the geologic record, Alley just touches on two relatively brief periods, rather than looking at the entire record. From the entire record one can see that CO2 has not been the major driver of our planet’s climate over geological time.

    Don’t fall for a sales pitch, just because it is given by a “learned scientist”.

    Max

  2136. manacker Says:

    A C Osborn

    Being the “last poster on the longest AGW blog ever” is not really that much of a thrill, but responding to challenges by others and engaging in a positive debate with them is rewarding.

    It becomes less interesting if it slips into “ad hom” attacks or censorship of “inconvenient” observations, but Bart has kept this site pretty much clear of that, to his credit.

    Max

  2137. MapleLeaf Says:

    AC Osborn,

    I see why you are frustrated. I just watched Dr. Alley’s talk again. Now Max says above that:

    ” Alley skirts around the “what came first: CO2 or temperature” question in the Vostok record, without really addressing it head on.”

    Watch the video from minute 33:50. Alley then speaks directly to that very issue for about five minutes. And Max fails to recognise that just because CO2 acted as a feedback in the past, that does not preclude it acting as a driver at other times, such as now, if increased or decreased sufficiently. There are both surface and space-based observations that clearly demonstrate that we are currently experiencing an enhanced greenhouse effect. Murphy et al. (2009), as you probably know, show that the planet has been in a net positive energy imbalance since the fifties b/c of elevated GHGs.

    Alley and others argue that CO2 is an important player in climate change, they do not dismiss the role of other feedbacks and drivers.

    To suggest Alley’s talk was a sales pitch, as Max has done, is ludicrous and an insult to his work and that of his colleagues.

    Anyhow, A C Osborn, good luck, but I think I’ve seen plenty here to convince me that you are up against a classic D-K victim, and I’m not going to waste more of my time; I suggest that you don’t either :)

    IMHO, I think that Bart has a problem with his blog. Allowing differing opinions is great, but when multiple trolls hi-jack almost every new thread, it detracts from the science, and discourages rational and objective views from being aired. I prefer the approach used by John Cook – that way both sides are forced to stay on topic or have their comments deleted. I am OK having my posts deleted if Bart, for example, deems it to be off topic or if it breaks another of the house rules. The downside is that it makes for a lot of work for the moderator, and that can be a huge issue.

    Anyhow, in an attempt to stay on topic: I made a strong case for the robustness of the SAT record on the Curry thread (yes, I was OT), and eventually tried to move the debate over here.

  2138. manacker Says:

    Maple Leaf

    CO2 acted as a feedback in the past

    No, ML. It acted as a “follower” (with temperature as the “leader”).

    And the record shows that there were times of high CO2 that temperature went down, and times of much lower CO2 that temperature went up.

    Not a very robust indication of CO2 causation of temperature.

    And, with all his talking around this subject, Alley failed to convincingly address the basic problem for CO2 causation here.

    Alley is very convinced of his story – no doubt. He is a great salesman for his “pitch”.

    But he only addresses a few relatively minor episodes in our planet’s geological history, where he feels that the “CO2 as a driver of climate” story fits, leaving out the rest of the record, where it does not fit.

    Was it a “sales pitch” for AGW (and a 2xCO2 climate sensitivity of 3+C) or not?

    Look at it again, ML, and decide for yourself.

    As far as “deleting” any comments related to the Alley presentation as OT, Bart is the one who brought the Alley lecture up, in the first place, so it was definitely not OT in his mind.

    And you brought it up most recently. Right?

    Max

  2139. Robert S Says:

    Max, I’m trying to let this thread die (or at least my involvement in it), but I had to comment on your latest post directed at me — regarding the GEOCARB study and Vostok data. No more than a few days ago, you made the following statement on paleoclimate:

    “The paleo-climate stuff you mention as ‘evidence of a high climate sensitivity’ is all so dicey that one can prove almost anything one wants to with it.”

    Interesting…

    I think I can confidently say this will be my last post here. :)

  2140. manacker Says:

    Robert S

    Sorry to see you go, Robert, but let me tell you that your comment is spot on.

    Paleo-climate data do not tell us much (despite the opinions of some “learned” scientists like Richard Alley).

    The data are dicey to start off with.

    And then if one cherry-picks those episodes which help support the “sales pitch” for the established paradigm, they become even more dicey.

    So we are apparently in agreement.

    Max

  2141. Robert S Says:

    I’m not saying I agree with you.

    -Robert

  2142. cohenite Says:

    manacker, hold your nerve against the likes of Maple Leaf, who combines the two main tactics of pro-AGW argument in his reference to the Alley talk: appeal to authority (in this case Alley), and censorship and insult, calling you a troll [by implication] and saying you should not have the right to comment. Contrary to ML, I have found your comments, along with Robert’s, to be very informative.

    As for ML’s assertion that CO2 is a driver, this is rubbish, and indeed is contradicted by the IPCC, which invests heavily in the concept of an “enhanced greenhouse”, described here:

    IPCC "Explains" the Greenhouse Effect

    This description skirts around the fact that CO2 tends to follow increases in temperature. Certainly over the 20thC the coefficient of determination between CO2 and temperature is 0.42, basically because temperature often moves in the opposite direction to CO2, which brings us back to Beenstock, cointegration, unit roots and Breusch. What I take from that, and from the overwhelming historical evidence, is that water swamps CO2, temperature drives CO2, and at CO2 levels below 200 ppm life will basically cease.

  2143. MapleLeaf Says:

    Max,

    This thread is about the SAT record. Plain and simple, can you not comprehend that fact? Yet you and (especially) cohenite have been cheerfully pontificating about all sorts of contrarian talking points, not just Prof. Alley.

    So, cohenite, anytime someone references a scientist or authority on something, we are appealing to authority? Do not be ridiculous. I learnt what I know by accepting that my profs knew more than I did.

    Note that I said that I was (and am) completely OK also having my off-topic posts deleted. Yet you twist that, cohenite, to suggest that I am for censoring opposing views. It is that distortion that got you into deep water at Deltoid.

    Guys, you do not know more about this than the integrated knowledge amassed by thousands of scientists since they started looking into this. Thinking otherwise is simply deluding yourselves.

    http://www.realclimate.org/index.php/archives/2004/12/co2-in-ice-cores/
    http://www.realclimate.org/index.php/archives/2007/04/the-lag-between-temp-and-co2/

    Yes, and those links are from those “evil” people at RC. Feel free to email Drs. Alley and Severinghaus.

    It seems to me that you guys may be suffering from D-K, as do many contrarians who troll these fora pontificating on disciplines in which they are not qualified. You will no doubt take offense at that observation and deny it, but consider for a moment that I may in fact be correct. The challenge with that request is that D-K victims are incapable of comprehending the reality of their situation even if it is pointed out to them……

  2144. cohenite Says:

    ML, this is what Jeff Severinghaus says in your RC link:

    “So one should not claim that greenhouse gases are the major cause of the ice ages. No credible scientist has argued that position (even though Al Gore implied as much in his movie). The fundamental driver has long been thought, and continues to be thought, to be the distribution of sunshine over the Earth’s surface as it is modified by orbital variations. This hypothesis was proposed by James Croll in the 19th century, mathematically refined by Milankovitch in the 1940s, and continues to pass numerous critical tests even today.

    The greenhouse gases are best regarded as a biogeochemical feedback, initiated by the orbital variations, but then feeding back to amplify the warming once it is already underway. By the way, the lag of CO2 of about 1000 years corresponds rather closely to the expected time it takes to flush excess respiration-derived CO2 out of the deep ocean via natural ocean currents. So the lag is quite close to what would be expected, if CO2 were acting as a feedback.”

    I may be wrong but that is basically what I was saying and, as I understand manacker, what he too was saying. However, there is a major proviso to what Severinghaus says about the CO2 lag; Manacker linked to the Lansner article before and it is worth revisiting:

    (PDF: CO2,Temperaturesandiceages-f.pdf)

    What Lansner shows is that CO2 does not just follow temperature; while temperature drags CO2 up in the way Severinghaus notes, the interesting effect is what happens when temperature drops: at that point there is no correlation with CO2. This is also demonstrated in the 20thC, where the lack of any correlation between temperature, CO2 levels and emissions is apparent:

    This is vindication of Beenstock, and Milankovitch but not AGW.

  2145. Frank Says:

    Max & cohenite,

    You two “classic D-K victims” need to get your facts straight on the CO2 – temperature relationship. Everyone knows that CO2 leads temperature by at least a gazillion years, as expertly pointed out by world famous climate researcher Laurie David in her fantastic book:

    I don’t know how much longer I can put up with you denialists!!!

    Sorry, couldn’t resist…

    Regards – F.

  2146. cohenite Says:

    Cheers Frank; the David book is designed for kids and the pro-reviews typically gush that it is great for kids; one of the con reviews says this:

    “Readers should be aware that the first editions of this book contain a chart (page 18) of the past 650,000 years that shows that CO2 emissions rose BEFORE global temperatures. In fact, the chart is reversed – CO2 emissions did rise, but only hundreds (in some cases thousands) of years AFTER temperature rises. For some reason the chart lines are backwards.”

    This is OT but I lose my sense of humour at the tactic of scaring/informing kids of the doom of AGW; this is typical; note the train segment at the beginning:

    And the COP 15 opening film is particularly egregious:

  2147. MapleLeaf Says:

    Cohenite,

    You need to try and reconcile this statement you made:

    “this is also demonstrated in the 20thC as the lack of any correlation between temperature, CO2 levels and emissions is apparent”

    with this one that you made earlier,

    “certainly over the 20thC the coefficient of determination between CO2 and temperature is 0.42”

    You seem to be contradicting yourself in trying to move the goal posts.

    Also, Barton P. Levenson, after allowing for autocorrelation, found that changes in CO2 over the SAT record explained 60% of the variance in annual global SAT. That is non-trivial, especially considering the known impacts of internal climate variability on the SAT record. Tamino has been remarkably successful at estimating global SAT using simple toy models, as has BPL.
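
    BPL's actual calculation is not reproduced here, but for anyone wondering what "allowing for autocorrelation" can mean in practice, here is a minimal sketch of one common approach, a Cochrane-Orcutt style AR(1) correction to an OLS regression of temperature on ln(CO2). The data are synthetic stand-ins, since no real series is bundled with this thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins, NOT real data: an exponential CO2-like rise and a
# temperature series responding logarithmically, plus AR(1) noise.
years = np.arange(1880, 2010)
co2 = 290.0 * 1.004 ** (years - 1880)
noise = np.zeros(years.size)
for t in range(1, years.size):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0.0, 0.08)
temp = 2.0 * np.log(co2 / co2[0]) / np.log(2.0) + noise

def ols(x, y):
    """OLS of y on [1, x]; returns (coefficients, residuals)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

x = np.log(co2)
_, resid = ols(x, temp)                          # plain OLS first
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation of residuals

# Cochrane-Orcutt: quasi-difference both series by rho, then refit.
y_star = temp[1:] - rho * temp[:-1]
_, resid_co = ols(x[1:] - rho * x[:-1], y_star)
r2 = 1.0 - np.var(resid_co) / np.var(y_star)
print(f"rho = {rho:.2f}; variance explained after AR(1) correction: {r2:.0%}")
```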

    I am not sure what you are trying to debate. Your point about the caveats is taken, and we both agree that in the past CO2 acted primarily as a feedback, but not always; CO2 led T during the PETM. Anyhow, you seemed to ignore the part where Severinghaus states (after recognizing the role of other drivers) that:

    “The contribution of CO2 to the glacial-interglacial coolings and warmings amounts to about one-third of the full amplitude, about one-half if you include methane and nitrous oxide.”

    He concludes that:
    “The quantitative contribution of CO2 to the ice age cooling and warming is fully consistent with current understanding of CO2’s warming properties, as manifested in the IPCC’s projections of future warming of 3±1.5 C for a doubling of CO2 concentration. So there is no inconsistency between Milankovitch and current global warming.”

    In an earlier post on RC:

    “At least three careful ice core studies have shown that CO2 starts to rise about 800 years (600-1000 years) after Antarctic temperature during glacial terminations. These terminations are pronounced warming periods that mark the ends of the ice ages that happen every 100,000 years or so.

    Does this prove that CO2 doesn’t cause global warming? The answer is no.”

    And that is the crux of the matter, cohenite. Independent studies of surface and satellite data both show that we are experiencing an enhanced greenhouse effect on account of higher GHGs. If the mechanism for the (fairly) recent enhanced GH effect is not higher GHGs, then what is? You are not presenting a credible alternate hypothesis which explains the global nature of the SAT increases.

    I’m trying to get a handle on what your position is.
    Are you denying that the surface and satellite data are showing an enhanced greenhouse effect?
    Or are you claiming that CO2 cannot act as a driver of T, just b/c in the past it acted primarily (not always) as a feedback? In-situ data indicate that hypothesis to be wanting.

    Now I’m really trying to bring us back to the SAT record. The MSU data, OHC data, SST data and radiosonde data all corroborate the GISS, NCDC, JMA and HadCRUT chronologies. They also all show that the long-term trends in global and atmospheric and oceanic temperature are all up.

    Experts, skeptics (JeffId, RomanN) and it seems most other reasonable and rational people, while recognizing that the SAT constructions are not perfect, do recognize that they are robust. Yet, there seem to be a handful of people (some of whom are posting here) who are not being objective, or reasonable or rational.

    IMHO, the real debate, if any, concerning AGW is what the expected climate sensitivity is to doubling CO2 (although CO2 equivalent would be more realistic to account for increases in other GHGs).

    On that question, I am secretly hoping that the CS is +2 C and not +3 or +4 C. The problem with that is if it is determined to be +1.5 or +2.0 C (which is unlikely going by the exhaustive literature on the subject) there will be very little impetus to keep CO2 levels below 560 ppm. Oh joy.

    And I hope we can agree that the beautifully elaborate red-herring manufactured by VS is irrelevant to the radiative forcing of GHGs. Not to mention the fact that Tamino soundly refuted VS on the issue of the unit root and random walk. It was a rather entertaining and good try by the desperate contrarians though :) In the meantime, the planet continues to accumulate heat and warm (over the long term of course), and those in denial about AGW keep their heads firmly embedded in the sand and do whatever it takes to convince themselves that there really is nothing to worry about and continue to be obsessed with Al Gore.

    Well guys this has been “fun”, but these debates get tiresome and take up way, way too much of my time (not to mention that my wife and family are not happy with me arguing with Dunning-Krugers, they recognise that it is futile, and so should I especially as that fact has been demonstrated on this and other threads). I do thank the contrarians and D-Ks for one thing though, I have learnt an awful lot more about climate science addressing their misinformation and myths.

    Good night all, I won’t be checking this thread again.

  2148. cohenite Says:

    The PETM was at best only partially to do with CO2 forcing;

    http://www.nature.com/ngeo/journal/v2/n8/abs/ngeo578.html

    What is inexplicable in AGW terms is that after the PETM CO2 levels dropped but temperature continued to rise:

    The thing about AGW CS is that since 1900 CO2 concentration has gone up 40% but temp only 0.7C; a CS of 3C should have seen a temp increase of 40% of 3C, or 1.2C; however, of that 0.7C increase a solar effect has been either 0.4C [TAR] or 0.1C [AR4]; natural variation, even if neutral or stationary [and there is compelling evidence that this is problematic, and I am not referring to McLean et al], will have contributed something, because there have been more +ve PDOs in the period. Therefore the 0.7C has been reduced to somewhere between 0.0C and 0.3C. AGW CS therefore has to make up either 3C or 2.6C with a remaining 60% CO2 increase. The equilibrium sensitivity therefore depends on a pipeline effect; only the dubious Schuckmann paper has offered any joy to the AGW theory for that.
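
    One step in this comment deserves spelling out: it pro-rates the sensitivity linearly ("40% of 3C"), whereas the logarithmic form used elsewhere in this thread gives a somewhat larger figure. A sketch, taking the 3C sensitivity and the 40% rise from the comment and ignoring equilibrium-versus-transient caveats:

```python
import math

S = 3.0          # sensitivity assumed in the comment
ratio = 1.40     # "CO2 concentration has gone up 40% since 1900"

linear = 0.40 * S                               # the comment's pro-rating -> 1.2 C
logform = S * math.log(ratio) / math.log(2.0)   # logarithmic form -> ~1.46 C
print(f"linear pro-rating: {linear:.1f} C; logarithmic: {logform:.2f} C")
```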

    As for VS: what I took out of that is what Breusch and Vahid concluded; with a deterministic temperature the 1976 break is verified, confirming a natural phase-shift effect on temperature, but if temp has a unit root characteristic or is stochastic, then the correlation with CO2 forcing or feedback is illusory:

    http://landshape.org/enm/problem-2-of-climate-science/#disqus_thread

  2149. manacker Says:

    MapleLeaf

    The ongoing debate surrounding AGW is not only “about the SAT record”, but, more generally about the validity of the premise that AGW, caused principally by human CO2 emissions, a) has been the primary cause of observed warming and b) represents a serious potential threat.

    There are those here who buy into this notion and other who are rationally skeptical of it.

    As rational skeptics in the scientific sense (see Wiki), the second category insists that the first category show empirical data, based on actual physical observations, to support the premise.

    In response, the first category attempts to provide evidence to support the above-stated AGW premise, whether from recent climate change or from paleo-climate studies.

    These attempts fall short, as this thread has shown. It is easy to show a) that temperature has risen in several multi-decadal spurts since modern measurements started in 1850, b) that atmospheric CO2 has risen since measurements started in 1958, c) that Arctic sea ice has shrunk since satellite measurements started in 1979, d) that sea level has risen, again in multi-decadal spurts, since tide gauge records started in the 19th century, etc. However, the lack of an apparent robust statistical correlation linking observed temperature to observed atmospheric CO2 makes the case for CO2 causation for the observed warming weak, and the case for AGW as a potential threat even weaker. This remains a dilemma for supporters of the AGW premise.

    That is the scientific part of the ongoing debate. This is the basis for the rest of the discussion: if the “science” supporting the premise is flawed, faulty or skewed, the rest is meaningless. This can involve assumptions on positive feedbacks or any other theoretical deliberations related to the AGW premise.

    Then there are the political and economic aspects. Inasmuch as AGW has become a multi-billion dollar big business with major political implications, this part is significant. For some on either side of the debate, this may have become the overriding factor. And there are certainly some, who believe that they can benefit personally from AGW, and are acting in self-interest (see Lomborg’s “Climate-Industrial Complex”, fashioned after Dwight Eisenhower’s warnings in the 1950s of a “Military-Industrial Complex”, tied to Cold War defense spending).

    And, finally, there are the emotions. Many of the proponents of the dangerous AGW premise are driven by fear (a very strong emotion). Sometimes there is frustration that others (whom they see as “deniers”) do not see the perceived dangers that threaten us, and that they stand in the way of the perceived solutions (as “obstructionists”). This can lead to anger, resulting in personal attack, etc. Emotions make a rational discussion of the topic very difficult.

    So it is a far broader and deeper topic than simply a discussion “about the SAT record”, ML.

    Max

  2150. manacker Says:

    cohenite

    Your point about the lack of observed validation of IPCC’s 2xCO2 CS of 3.2C is one of the weak points of the whole AGW premise.

    Since 1850 observed temperature increased by 0.7C.

    As you point out, some of this can be attributed to natural causes (several solar studies attribute around 0.35C, on average, to the unusually high level of 20th century solar activity).

    This would leave 0.35C for human GHGs plus all other factors.

    IPCC AR4 tells us that all anthropogenic factors other than CO2 cancelled one another out, so that the CO2 forcing (1750 to 2005) was 1.66 W/m^2 and all anthropogenic forcings together were 1.6 W/m^2.

    So we have a CO2 level of 285 ppmv, as estimated by IPCC for 1850 and a measured level of 390 ppmv today, as measured at Mauna Loa, for a ratio of 1.37.

    IPCC tells us that a doubling of CO2 should cause 3.2C GH warming, so we should have seen 3.2 * ln(1.37) / ln(2) = 1.4C warming

    Yet we only saw 0.7C total warming over the entire 150+ year period, of which half was caused by natural changes in solar activity.

    That’s when some of the “defenders of the paradigm” start getting desperate.

    In order to support the premise of a 3.2C climate sensitivity, Hansen et al. dreamed up the “hidden in the pipeline” postulation, whereby a significant part of the warming is just not yet visible, but presumably hidden in added warmth of the upper ocean, from where it will some day miraculously jump out of hiding to warm our atmosphere even more, by some as yet unexplained mechanism.

    Since 2000 the atmosphere has cooled (both at the surface and in the troposphere), as the records show.

    Since 2003, when improved Argo measurements started delivering more reliable data than the old expendable buoys used prior to that, the upper ocean has also been cooling.

    Latent heat of observed melting ice or increased water vapor is too small to make a difference, so the theoretical GH energy from the increased CO2 levels is “missing”, and the “hidden in the pipeline” postulation has been invalidated, and along with it the IPCC estimate of a 3.2C climate sensitivity.

    R.I.P.

    Max

  2151. Tim Curtin Says:

    Well done, Max and cohenite. What I find really amusing is that those who find they are losing the argument like Maple Leaf and Robert S, are also those who are the first to invoke the Dunning-Kruger effect against you, when of course they themselves exemplify the D-K effect in spades.

    BTW, my own seminar at the ANU’s Crawford School’s MAP should now be up at its website, and will be on my own website tomorrow; Tim Lambert (deltoidblog@gmail.com) also has it.

    Marco, you seem to have lost your voice! But many thanks for advertising my seminar at Deltoid.

  2152. manacker Says:

    MapleLeaf

    You wrote to cohenite:

    Does this prove that CO2 doesn’t cause global warming? The answer is no.

    Your logic here is flawed for two reasons.

    First, science is not about “proof”. It is about empirical data derived from physical observations or experimentation, which provide evidence to validate or support a hypothesis or theory.

    Second, it is not up to those who rationally question the validity of the premise that AGW, caused principally by human CO2 emissions, has been a primary cause of past warming or represents a serious potential threat, to provide empirical evidence that this premise is invalid.

    You ask cohenite:

    If the mechanism for the (fairly) recent enhanced GH effect is not higher GHGs, then what is? You are not presenting a credible alternate hypothesis which explains the global nature of the SAT increases.

    You must realize, ML, that it is not up to cohenite, as a rational skeptic of the dangerous AGW premise, to present “a credible alternate hypothesis which explains the global nature of the SAT increases”, as you request.

    It is up to those who support the dangerous AGW premise to provide empirical evidence that this premise is valid.

    And so far the supporters of this premise have not been able to do so.

    And that is the basic problem.

    Max

  2153. Rowett Says:

    “Hello. This is the Environmental Protection Agency, Division of Answering Ignorant Questions from Belligerent Know-it-alls, Climate Sector. Can I help you?”

    http://epa.gov/climatechange/endangerment.html#comments

    http://rabett.blogspot.com/

  2154. Frank Says:

    Taking advantage of the lull here to make a prediction. I’m willing to bet that solid research on the feedbacks will make it impossible for the IPCC to hold the line much longer at +1.2 K for 2xCO2, let alone +3.2 K. Probably looking at +0.5 K, per Spencer’s and Gray’s respective analyses.

    Regards – F.

  2155. manacker Says:

    Frank

    Thanks for your post.

    As a “classic D-K victim” I am indeed trying to get my “facts straight on the CO2-temperature relationship”, but the observed data do not help me much.

    I can find a similar correlation between the consumption of McDonald’s “Big Macs” and the “globally and annually averaged [hand picked] land and sea surface temperature” (which vegetarian AGW supporters might endorse).

    You made the point earlier that “convicting” CO2 of the past warming “by default” (i.e. “because our models cannot explain it any other way”) is a flawed logic based on ignorance rather than knowledge.

    Unfortunately this is the backbone of the IPCC argumentation for the premise that AGW has been responsible for past warming and represents a serious potential threat.

    What is lacking are a) a statistically robust correlation between atmospheric CO2 and global temperature (original topic of this thread) and b) empirical data based on physical observations to support the AGW premise.

    Model simulations do not provide either.

    Regards,

    Max

    PS I’d second your prediction, although it will be very painful for IPCC to make this concession, as it will represent the “death knell” for the premise of alarming AGW, upon which the entire existence of IPCC is based. There may be some challenging of observed data, papers pointing out the inaccuracy of measurement devices, “proxy” data to challenge the observations, etc. first. But I am an optimist: the truth will eventually come out (and my bet follows yours on that score).

  2156. A C Osborn Says:

    Max, Cohenite, Tim, VS and others, thank you for such a splendid rebuttal of the IPCC position.
    I do find it odd that MapleLeaf thought I was frustrated with this discussion; the only frustration came from the earlier ad hom attacks and the repetition of the old IPCC science when it is confounded by newer work.

    BIG Thanks to Bart for hosting what turned out to be such an interesting Thread.

  2157. manacker Says:

    Bart

    Thanks for running a very informative thread. It attracted some interesting posters, and was rewarding, first as a lurker and then as a participant.

    Max

  2158. Girma Says:

    The global mean temperature pattern is CYCLIC.

    That both human emissions of CO2 and global temperature have been increasing at the same time does not mean that they are related. This relationship will break when the global temperature starts its cooling phase.

    IPCC projections of global mean temperature are incorrect:

  2159. Bart Says:

    Girma,

    There’s a whole body of well established physics, going back to the 19th century, according to which GHG and temperature are related. This is a good start.

  2160. Girma Says:

    Observation beats theory all the time. The global temperature pattern is not random.

    With a high correlation coefficient of 0.88, the global temperature pattern is CYCLIC:

    Predictions Of Global Mean Temperatures & IPCC Projections

    As a result, the effect of CO2 on global temperature is nil, zilch, naught.

  2161. Bart Says:

    Data on temperature, the sun and CO2

    Indeed, the global avg temp pattern is not random.

    Your WUWT post looks like not much more than curve fitting, but admittedly I’ve given it a quick glance only. It reminds me a little of claiming that my body weight is CYCLIC, if it happened to have followed a similar pattern to the global avg temp. Sorry, but basic physics cannot be thrown out of the window that easily. If I eat more than my body needs, I’ll gain weight. If more radiation comes in than goes out of the earth system, the temperature will go up.

  2162. Girma Says:

    Bart

    Why this plateau in global mean temperature for the last 12 years?

    http://www.woodfortrees.org/plot/hadcrut3vgl/from:1998/plot/hadcrut3vgl/from:1998/trend

    Human emission of CO2 has been definitely increasing, but not the temperature!

    Wait and see. If this plateau continues or dips in the coming years, it is over for AGW!

  2163. willard Says:

    > Observation beats theory all the time.

    One can easily observe from the history of science that this theory is false. The simplest example is Galileo’s theory refuting Aristotle’s observations about free falling bodies.

  2164. manacker Says:

    Girma

    Of course you are right. The 160-year HadCRUT temperature record does indeed show that temperature has warmed and cooled in a cyclical fashion, with a half cycle time of around 30 years and a slight underlying warming trend (in sort of a tilted sine curve).

    Dr. Syun-Ichi Akasofu has referred to this natural cycle in various papers.

    The only period that shows a reasonable correlation between CO2 and temperature is the late 20th century warming cycle. This appears to have stopped after 2000, at least for now. Whether this is part of a new slight cooling cycle remains to be seen.

    Bart is correct when he writes that there is a well-defined theory linking GHGs (including CO2) to “global” temperature. Then there are also more questionable model assumptions that lead to a warming impact of 3 to 4 times the theoretical value from the GH theory, resulting from model simulated “positive feedbacks”.

    The theory is fine. It is just the actual physical observations that do not really correlate that well with the theory, that’s all.

    This points to the probability that there are other factors, which may be just as important (or even more so) than GHGs for determining our planet’s climate, as the Met Office has conceded for the most recent cooling.

    Max

  2165. manacker Says:

    Girma

    BTW, I’ve plotted the HadCRUT temperature record showing the multi-decadal warming/cooling cycles and linear trend lines.

    HadCRUT 1850-2009 with Multi-decadal Cycles

    As you can see, there were three essentially indistinguishable warming cycles (late 19th century, early 20th century and late 20th century), with two multi-decadal cycles of slight cooling in between.

    Max

  2166. manacker Says:

    Willard

    “Galileo’s theory refuting Aristotle’s observations about free falling bodies.”

    You forgot to mention the “leaning tower of Pisa experiment”, which provided empirical data based on physical observations to support Galileo’s theory.

    Max

  2167. Harry Says:

    What is the use of a single parameter, averaged global temperature? Has anyone an idea about what it means, physically?

    I have sincere objections to this measure. As it is constructed using thermometers at selected locations, and afterwards extrapolating from these locations into and over areas without thermometer readings, it clearly implies filling in the global image with the data one wants.

    And for those who care, homogenization changes the physical properties of islands. It makes them look as if they were ocean. Or was it the other way?

  2168. Bart Says:

    Girma,

    Did you read this post, look at the graphs (e.g. the one with the 11 year running mean), and check some of the references I gave?

    It is entirely expected that a dataset of gradually increasing global avg temp exhibits short periods of apparent plateaus as well as short periods of massive warming. That’s why the long term trend is what matters.
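
    Bart's point is easy to demonstrate with a toy Monte Carlo. In this sketch the inputs (0.017C/year underlying trend, 0.1C year-to-year scatter, a 130-year series) are rough illustrative choices, and the noise is white; real annual anomalies are autocorrelated, which would make flat-looking decades even more common:

```python
import numpy as np

rng = np.random.default_rng(42)

trend, sigma = 0.017, 0.10        # C/year trend and annual scatter (illustrative)
n_years, n_sims, window = 130, 1000, 10

t_win = np.arange(window)
hits = 0
for _ in range(n_sims):
    series = trend * np.arange(n_years) + rng.normal(0.0, sigma, n_years)
    for start in range(n_years - window + 1):
        slope = np.polyfit(t_win, series[start:start + window], 1)[0]
        if slope <= 0:            # a decade-long apparent "plateau" or cooling
            hits += 1
            break                 # count each simulated series at most once

print(f"{hits / n_sims:.0%} of simulated steadily-warming series contain "
      f"at least one 10-year window with zero or negative trend")
```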

  2169. manacker Says:

    Bart and Girma

    No one would dispute that it is the long-term trend that matters (as Bart has written).

    This shows gradual warming at a linear rate of 0.041C per decade since the modern HadCRUT record of “globally and annually averaged land and sea surface temperature” started in 1850 (total warming over the 160 years of around 0.7C).

    Harry also makes a valid point that this indicator has several shortcomings, due to the unequal global coverage, changes in measurement locations over time, urban heat island distortions over time, “ex post facto” corrections, “homogenization”, “variance adjustments”, etc.

    The multi-decadal oscillations (with a repetitive period of around 30 years for each half cycle) are probably more than just random “short periods of apparent plateaus as well as short periods of massive warming”, as Bart has written; several hypotheses have been proposed, but none of these appear to be conclusive. One postulation ties these to changes in ocean currents (PDO, NAO, ENSO, etc.), which seems logical, in view of the strong ENSO impact of the 1990s (especially in the record year 1998, as acknowledged by everyone).

    Ocean temperature measurements might be a better indicator of any global warming of our planet, but these were very sketchy prior to the start of the Argo measurements in 2003, so there is no good long-term record available.

    But the surface record is the only long-term temperature record we have prior to 1979, when satellite measurements started, so we have to live with it, “warts and all”.

    Max

  2170. MrPete Says:

    Bart,
    I believe you can fix the incessant italics in this thread by editing the following comment, and adding a close-italics tag at the end. It appears to be the origin of italics-without-end. :)

    Feel free to delete this comment…

    MapleLeaf, May 7, 2010 at 08:06

    [Thanks. BV]

  2171. Girma Says:

    Max

    I agree with your results.

    Thank you

  2172. hempster Says:

    While reading Scafetta’s latest attempt at attributing multi-decadal changes in global temperatures (http://arxiv.org/PS_cache/arxiv/pdf/1005/1005.4639v1.pdf), I got a really simple idea: what if most of the controversy around global warming / climate change is caused by an ill-defined “climate”? By the current definition, climate is weather averaged over 30 years (or more), but if there is plenty of evidence for quasi-periodic 60-year cycles in climate, then climate should be defined as the average over 60 years or any integer multiple of that period. Only that way can we integrate out the periodic component, and what is left is climate, right? A half-period of 30 years is a really bad choice: it has the strongest possible dependence on the phase and can show either strong global cooling or strong global warming, depending on the starting point.
    What do you think?

  2173. manacker Says:

    hempster

    Your suggestion of changing the (arbitrary) “definition” of “climate” from 30 to 60 years makes sense to me, for the following reason:

    Since the HadCRUT record started in 1850 we have observed a fairly regular oscillation of warming/cooling cycles of about 30 years for each half cycle. The pattern is obviously not perfectly smooth, but resembles a sine curve on a slightly tilted axis (of +0.041C per decade warming).

    Simply looking at the latest 30-year warming “half cycle” gives us a distorted picture of what is really going on.

    Scafetta demonstrates this based on a study of celestial cycles, which “would reproduce the temperature oscillations much better than typical general circulation models such as those adopted by the IPCC”, concluding:

    The failure of the climate models, which use all known climate forcing and mechanisms, to reproduce the temperature oscillations at multiple time scales, including the large 60-year temperature modulation, indicates that the current climate models are missing fundamental climate mechanisms. The above findings indicate, with a very high statistical confidence level, that major climate forcings have an astronomical origin and that these forcings are not included in the current climate models.

    Regardless of whether Scafetta’s conclusion on the impact of celestial cycles on climate is correct, it seems reasonable, based on the actual physical observations over the past 150 years, to extend the “definition” of “climate” from 30 to 60 years in order to incorporate a full cycle of the observed oscillation.

    This should also help the climate models incorporate climate forcings with an astronomical origin, as Scafetta proposes.

    Max

  2174. tempterrain Says:

    Max,

    You seem to like to dress up your climate change denialism in respectable scientific clothing these days. Readers of this blog might like to take a look at what you were wearing a few years ago when your ‘argument’ was:

    “How foolish can we be? To seriously believe all the hype that man is causing a climate disaster that will destroy the planet is not only, basically, stupid, it is extremely arrogant. We insignificant humans do not have the power to destroy this planet. Never did.

    We also do not have the ability to change the current climate trends, or even to accurately forecast what is going to happen over the next ten let alone 100 years. Let’s hope things will get warmer, rather than colder. We don’t need another ice age.

    Forget all the junk science by so-called experts that are all in on the multi-billion dollar “climate research scam”. Forget all the disaster reports being sold by environmental activists via the sensationalist media. Forget all the self-righteous calls for action by power-hungry politicians. Use your common sense. It’s all a hoax.”

  2175. manacker Says:

    Tempterrain (PeterM)

    Do you have anything constructive to offer?

    Of course, I am even more convinced today than I was 3 years ago (after reviewing the IPCC SPM 2007), especially in light of all the recent revelations of shenanigans and “junk science” by the scientists and bureaucrats involved with the IPCC, a) that humans are not destroying our planet with CO2 emissions and b) that we do not have the ability to make changes in our climate by reducing these emissions.

    Convince me otherwise, Peter, by bringing empirical data based on actual physical observations, which confirm your hypothesis a) that AGW, caused principally by human CO2 emissions, has been the primary cause of recent warming and b) that this represents a serious potential threat.

    (I have asked you for this repeatedly on other threads, but you have been unable to deliver.)

    Max

  2176. tempterrain Says:

    Max,

    Empirical Data? Sure.

    http://www.skepticalscience.com/empirical-evidence-for-global-warming.htm

    http://climateprogress.org/2010/03/01/climate-science-video-empirical-evidence-for-human-caused-global-warming/

    Hope these help :-)

  2177. manacker Says:

    tempterrain

    Thanks for your post.

    I had requested from you (here, as well as repeatedly on other sites)

    empirical data based on actual physical observations, which confirm your hypothesis a) that AGW, caused principally by human CO2 emissions, has been the primary cause of recent warming and b) that this represents a serious potential threat.

    Instead of bringing such evidence you cite two references:
    · a blog post from “Skeptical Science” entitled “Empirical evidence that humans are causing global warming”
    · a YouTube video from “Climate Progress – Crock of the Week” entitled “What We Know About Climate Change”

    The first reference demonstrates:
    · We’re raising CO2 levels
    · CO2 traps heat
    · The Planet is accumulating heat

    There is no argument with any of the claims (except that the part on upper ocean heat content does not show the cooling that has been observed there since the new, more reliable Argo measurements were installed in 2003).

    Otherwise the claims look OK.

    However, they do not constitute “empirical data based on actual physical observations, which confirm your hypothesis a) that AGW, caused principally by human CO2 emissions, has been the primary cause of recent warming and b) that this represents a serious potential threat.”

    You have already cited the second reference as your “empirical data” on another site, to which I have already replied there.

    I will copy my reply below (taking out the specific references to the other site):

    Re the you-tube from “Climate Denial – Crock of the Week”.

    Sorry. This “sales pitch” does not provide any empirical data based on physical observations to confirm your theory of dangerous AGW, despite what the title advertises.

    It cites evidence that CO2 is a GH gas, going back to Arrhenius and Keeling (yawn!).

    It tells us humans are emitting CO2 and that evidence shows that this is at least part of the reason that atmospheric CO2 levels are rising (yawn!).

    Then Richard Alley tells us that it is straightforward that CO2 should cause warming (yawn!).

    Then we hear about the 33C natural GH effect (caused primarily by water and to a much smaller extent by CO2). (Yawn!)

    Then we see a headline in a paper that reads: “Increase in greenhouse forcing inferred from outgoing long-wave radiation”. (Yawn!)

    Then the talk is about measurements of outgoing LW radiation and conclusion that GH gases must be trapping more LW radiation (no talk about any observed changes in TOTAL incoming and outgoing radiation, though). (Still yawn!)

    Then there is much talk about proof that it is warming (yawn!).

    We are even told that 90% of 29,000 independent data sets tell us it is warming (yawn!).

    I won’t mention all the indicators cited, such as Arctic sea ice retreat, etc., since they are all known.

    There is one statement (that has since been proven false), namely that glaciers are receding at an accelerated rate. Another “groaner” is the statement on accelerated sea level rise (sea level has actually been rising at about the same rate since the mid-19th century).

    But even if these statements were true, they would not provide the empirical data which I have requested.

    This (rather silly) sales pitch (with interspersed blabber of Bolshevik plots and conspiracies) claims to prove a) that it is warming, b) that CO2 is a GH gas which causes a slowdown in radiation of LW energy into space and c) that humans are emitting CO2, thereby causing an increase in its concentration.

    That’s all, folks.

    But it does not provide empirical data based on actual physical observations, which confirm your hypothesis a) that AGW, caused principally by human CO2 emissions, has been the primary cause of recent warming and b) that this represents a serious potential threat.

    And that is what I have requested from you.

    Max

  2178. On the blogroll « the Air Vent Says:

    […] The comments are 90% of blogs, but in this case it was more like 99.9%.  Not Bart’s fault, but that’s what happens.  Tom Fuller was at fault for starting it: Tom Fuller had a link to a post which I believe you will find worth your time. https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-a… […]

  2179. Jere Krischel Says:

    Sorry to join in so late after the fact, but these three bits touch on something I actually understand and can speak to:

    @Bart
    “It’s like claiming that although I’ve eaten much more than my body needed over the past twenty years, my chances of having gained or lost weight are equal nevertheless, and it’s just a coincidence that I’ve gained weight. No, it’s not a coincidence. It’s physics (or biology).”

    @DLM
    “There is no energy balance in the human body. The need for food varies. You can eat more and gain weight, you can eat less and lose weight, you can eat more and lose weight, you can eat less and gain weight. You can eat more, or less, and stay the same. You can hold your caloric intake stable, and lose weight…gain…lose…gain…and so on. In fact your body weight could very well fluctuate in a random way, if food intake was kept stable, and the same is true with food intake fluctuating up and down.”

    “You and I know that if you eat too much, you will get fat.”

    @Adrian
    “If, as Bart said, one eats more than one’s body requires (or to satisfy others, one’s body and its parasites), one gains weight. This is pure conservation principle.”

    The problem with all of your reasoning about weight gain in this context is that it is not regulated by calories in/calories out; it is regulated by insulin and by insulin sensitivity/resistance, which vary from person to person. You simply cannot take a basic physics model, the 2nd law of thermodynamics, and assert that you understand weight gain in the human body. It does turn out that the real driver is simple (insulin), but it is *not* the simple driver you originally imagined.

    In a nutshell, carbohydrate intake regulates blood sugar levels, which regulate insulin levels. Fat tissue, if “insulin resistant”, responds to insulin by storing fat instead of releasing it back into the bloodstream for energy. When you see a 500 pound fat man eating a pizza like he’s starving, it’s because his muscle tissues are being starved for energy (all of it is being stored as fat due to his high insulin levels and high insulin resistance).

    Your body is a self-regulating system, and it is the deleterious effects of carbohydrates that mess that regulation up. Now, you might make the case that CO2 somehow adjusts the “regulation” of every other forcing or feedback, and is therefore similar to insulin in this example, but I’m not sure anyone has made the case for it being as determinative as insulin is for weight.

    Another model to illustrate how the human body cannot be considered in simple physics terms is the transfer of energy. Take a solid, heat it on the bottom, and eventually the heat travels to the top. Depending on the substance, it may quickly transfer, or it may slowly transfer. Take a human, put their feet in a bucket of hot water, and you can measure their oral temperature for as long as you want, and you won’t see that “transfer”. The messy human bits between the feet and the skull do all kinds of complex temperature regulation that defy simple physics modeling.

    Anyway, great conversation, took me almost an entire day to read through it, but it was well worth it. Props to VS for his insightful analysis!

  2180. Jere Krischel Says:

    @Bart
    “All other things being equal, if I eat more, my weight will increase more in comparison to the situation where I ate less.”

    See my previous post – your body knows how to regulate itself to rid itself of excess calories without weight gain, barring carbohydrate intake combined with insulin-resistant fat cells.

  2181. Barton Paul Levenson Says:

    Wow, I’d better keep up with the internet more diligently! I didn’t see Tim Curtin’s attempted take-down of my correlation page until today (Friday, 10 September 2010).

    Tim, I use ln CO2 because radiative forcing is proportional to ln CO2 in roughly the range 1 – 1440 ppmv, and dT is proportional to RF.

    As to autocorrelation and spurious regression, here are the results of Engle-Granger tests for cointegration. Data: Y = NASA GISS dT, X = ln CO2.

    For unit root in dT: tau-c = -1.49, p < 0.539
    For unit root in ln CO2: tau-c = 6.57, p < 1
    Cointegrating regression: OLS 1880-2008, N = 129

    b0 -16.65, t -23.46, p < 5.42 x 10^-48
    b1 2.885, t 23.43, p < 6.17 x 10^-48

    R^2 0.81 (Golly!), AIC -209.8, SIC -204.1, rho 0.48, DW 1.04.

    Unit root in residuals: tau-c = -5.45, p < 1.796 x 10^-5

    In short, unit roots are not rejected for the variables, but ARE rejected for the residuals — by gosh, there's evidence for a cointegrating regression! How about that!

  2182. Vaughan Pratt Says:

    Only just found out about this thread. VS’s point about unit root tests duly noted. I have a Monday deadline or this post would be a tad longer. :) Meanwhile I have a couple of questions for VS if he’s still around, or anyone else on VS’s wavelength.

    1. Do unit root tests generally judge cos(x) stationary on sufficiently large domains, say longer than 2 pi? (Intuitively it looks stationary to me once you see enough of it to see it wiggling.)

    2. Presumably any reasonable unit root test would judge exp(x) nonstationary on every finite domain, yes?

    3. Assuming the above are as I guessed, what about cos(x) + exp(x) on the interval [-10, d]? With what confidence does say the Dickey-Fuller test judge this stationary as a function of d? In particular what is the least d for which the DF test would be say 95% confident in judging it nonstationary?

    Sorry, gotta run, back Tuesday (Jan. 18).

  2183. Michael Tobis Says:

    Willard just reminded me of this remarkable thread.

    I do not know what this unit root test is about so my eyes glazed over instantly on first encountering it.

    On the other hand, it seems like very much the sort of thing I should at least have heard of, and I am a bit shocked at this lapse too. Perhaps I know it under another name? Anyway, I would appreciate a grad-student-level exposition if you can point to one.

  2184. Bart Says:

    mt,

    I can’t point you to a grad-student level exposition I’m afraid, but I can point you to some comments where VS wrapped up his argument:

    Here VS gives “a quick overview of the current statistical discussion”.

    His test results here.

    Regarding the Zorita and von Storch (2008) paper, VS wrote this and here.

    Here VS presents two graphs, one of a deterministic and one of a stochastic trend. The stochastic trend is so wide that ‘anything goes’, and it was the basis for my April 1st post.

    Tamino chimed in here and here, followed by a virtual fist fight between Tamino and VS.

    Sorry, I don’t have a good wrap-up of this discussion yet, partly because it went all over the place and partly because VS’s position remained very elusive.

  2185. MikeN Says:

    Max, Tim Curtin, you make some incorrect arguments, precisely because you are not thinking about the anomaly concept.

    Namely, when you say A) removing stations in cold places makes the GISS temperature warmer, B) not having Africa in the early time period makes things cooler, and C)

    You have to consider that GISTEMP averages are anomalies, not actual temperatures. Now perhaps you are thinking it makes no difference, because you just add the average back to get the actual temperature. For the GISTEMP average this is correct; however, GISTEMP is not calculated by taking the average temperature at each station. They take anomalies for each station, then average these together, with lots of other adjustments that I will ignore. So in example A, removing stations at northern latitudes won’t make GISTEMP warmer. It only makes it warmer if the removed stations are not only colder, but colder than normal FOR THOSE STATIONS. Actually, not quite that: the anomalies would have to be lower than the anomalies for the rest of the world. The actual evidence is that these stations are warming more, because they are at high latitudes, which is what we would expect to happen if you warm up a place with varying temperatures: the cold parts warm up more. Similar for B, not having African temperature stations in 1880-1910. This only matters if the missing stations were warmer than 1950-1980 than New York was, that is Nairobi 1880-Nairobi 1980 > New York 1880-New York 1980. I have no reason for thinking this is the case.
    Anomalies are correlated with each other, so losing some stations isn’t too bad. Even if one place is warm and the other is cold, they will still have similar anomalies, if the stations are nearby. The GISS coverage of the Arctic is built on the same idea, using stations that are far away to extrapolate those temperatures. Given more warming at higher latitudes, even this would be an understatement of anomalies for the Arctic.

  2186. TRC Curtin Says:

    MikeN (22nd) said “Anomalies are correlated with each other, so losing some stations isn’t too bad” – well, are they? That has yet to be demonstrated statistically.

    Earlier you said: “This only matters if the missing stations were warmer than 1950-1980 than New York was, that is Nairobi 1880-Nairobi 1980 > New York 1880-New York 1980. I have no reason for thinking this is the case”. This is poorly put: I think you are trying to say “this only matters if the missing stations in the tropics (e.g. Nairobi) in 1880-1910 were TRENDING warmer in 1950-1980 than e.g. New York”. Let’s check that…

    1. NASA GISS, with its great interactive maps, shows a zero anomaly (radius for missing stations 1200 km) for the globe for 1880-2010 relative to 1951-1980, but has central Africa and NW Africa with an anomaly of 0.2-0.5, which is rubbish; I know the history of those regions, and there were no met stations there before 1910. At a 250 km smoothing radius, which reduces the areas showing some anomaly, there is a global anomaly of 0.1, which rather proves my point, not yours!

    2. Now reset the period, and ask GISTEMP to show the global anomalies for just 1880-1910 vis-à-vis the 1951-1980 base (250 km radius), and lo, we have a nice cool “global” anomaly of MINUS 0.25, but huge swathes of grey frankly admit there was zero data for 80% of the land surface areas, except that SE Australia proudly shows virtually the only positive land anomaly (0.2-0.5, which is bad news for Garnaut, as it must reduce the anomaly post-1910). But he cannot/does not do statistics, and while Trevor Breusch can, he does not.

    I rest my case.

  2188. On trend lines and autocorrelation in time series | Ecologically Orientated Says:

    […] some real confusion on the topic this week in an amazing old climate change discussion at Bart Verheggen’s blog.  The discussion went for over 2000 comments, much of it on a discussion of random walks and the […]

  2189. stefanthedenier Says:

    localized temp increases / decreases regularly, otherwise there wouldn’t be any winds; the overall GLOBAL temperature is always the same: http://globalwarmingdenier.wordpress.com/climate/

  2190. The Contrarian Matrix | ClimateBall (tm) Says:

    […] is no such thing as a global average temperature. Global temperature is, statistically speaking, a random walk. That most of the W since 1950 is from A means little; a global MWP would undermine that […]

  2191. climatefreak Says:

    When I perform Cochrane-Orcutt estimation on the regression of Hadley CRUTEM4 on elapsed time, the trend line is still significant. VS’s claim that temperature’s being I(1) invalidates the trend line does not hold up to statistical analysis.

    As for CO2 and dT, they are indeed I(2) and I(1), respectively. But they are also cointegrated by an Engle-Granger test. So VS and the folks at WUWT are wrong on that score, too.

    I know these remarks are ten years out of date, but reading through them I am frustrated that no one seems to have done the appropriate follow-up analyses. I just did, using the open-source econometrics package Gretl.

  2192. cohenite Says:

    To climatefreak, see:

    VS Says:
    March 15, 2010 at 16:17
